Scooped by Charles Tiayon
June 3, 4:02 AM
Pym, A. (2025). Deconstructing translational trust. Translation Studies, 1–16. https://doi.org/10.1080/14781700.2025.2476487 "ABSTRACT: Although trust seems germane to the post-Renaissance translation form, it is also important in numerous other kinds of service provision. The act of trusting is pertinent because it responds to uncertainty found in translation situations. One type of uncertainty ensues from the risks of non-aligned loyalties, giving rise to distrust of the traitor. A second type ensues from the nature of language use, where decisions are undetermined, different translators give different translations, and the client or user cannot verify the optimality of a translation. This means translators’ credibility claims cannot be empirically tested, familiarity cannot provide a sufficient foundation, and there are no grounds for accepting trustworthiness as an inherent virtue of translators. The blind trust idealistically invested in professionals may be instructively contrasted with the vigilant low trust with which automated translations can be received. Such deployment of low trust may become a viable ethical alternative to essentialist presuppositions." https://www.tandfonline.com/doi/full/10.1080/14781700.2025.2476487 #metaglossia_mundus
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax but also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"...In ancient India, the most powerful god was known as “sky father,” or in the Sanskrit language, Dyaus pita. Sound it out. Can you see where this is going? In Greece, his equivalent was Zeus pater; in Rome, Jupiter. English speakers have always been used to tracing the etymologies of their words back to the classical languages of Europe, but the suggestion, in the late 18th century, that there were also clear and consistent features in common with languages spoken much farther to the east was an astonishing and exhilarating one.
If we think of languages as belonging to families, this is like finding out that an acquaintance is really a distant cousin: It points to a shared ancestor, a unity – and a divergence – somewhere in the past. What would it mean for those cultural origin stories – for Who We Are – to speak not of European but of Indo-European? “Of what ancient and fantastic encounters were these the fading echoes?” asks Laura Spinney in her latest book, “Proto: How One Ancient Language Went Global.”
There are about 7,000 languages spoken in the world today; they can be divided into about 140 families. Nevertheless, the languages most of us speak belong to just five: Indo-European, Sino-Tibetan, Niger-Congo, Afro-Asiatic and Austronesian. And of these, Indo-European and Sino-Tibetan (which includes Mandarin) are the biggest. As Spinney notes, “Almost every second person on Earth speaks Indo-European.”
Such a large family is, naturally, a diverse one. Among the Indo-European languages are English, German, Spanish, Italian, Russian, Hindi-Urdu and Persian. At the head of the family tree is the postulated language we call Proto-Indo-European, or PIE for short. It was spoken – but not written down, as it emerged in the days before writing – some five or six millennia ago, somewhere around the Black Sea. (Exactly where is an open question. We’ll come to that later.)
Although we have no documentation of what PIE sounded like, it is possible to triangulate backward using the languages we do know. Over time, the way we speak – the set of sounds we use – gradually changes. This is not a chaotic, willy-nilly drift; rather, the shifts obey a set of general rules. To give you an example, compare the group of words where Latin has a p sound but the English equivalent has f: pater/father; pisces/fish, and so on. We can posit, then, that at an early point in their development, Germanic languages went down an f path from their PIE root, while other languages stayed with p.
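The triangulation described above can be sketched as a toy computation: given cognate pairs, check that the same initial-consonant correspondence recurs across them. This is a deliberate simplification — real comparative reconstruction aligns whole cognate sets, not just first letters — and the word pairs are standard textbook examples (pater/father, pisces/fish, pes/foot).

```python
# Toy illustration of a systematic sound correspondence (Latin p : English f).
# Real comparative reconstruction aligns entire cognate sets; this only
# shows that the pattern is regular rather than coincidental.
cognates = [
    ("pater", "father"),   # Latin / English
    ("pisces", "fish"),
    ("pes", "foot"),
]

# Collect every initial-consonant pairing across the cognate list.
correspondence = {(latin[0], english[0]) for latin, english in cognates}

# A single recurring pairing is evidence of a regular sound change.
print(correspondence)  # → {('p', 'f')}
```

Because the correspondence is regular rather than word-by-word, it can be projected backward to posit a single ancestral sound — here, the PIE *p* that Germanic shifted to *f*.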
Using these sound-change rules to identify ancestral similarities among languages, we can reconstruct a corpus of about 1,600 PIE words, including (with apologies for the somewhat rebarbative notation) family relations like *méh₂tēr and *bʰréh₂tēr and numbers like *duó-, *tréy. If you were living on the eastern fringes of Europe some five millennia ago, you might stay around your *domo and drink some *h₂melǵ or maybe ride out on your *éḱwos to lie under the night sky and fix your eye on a *h₂ster.
So this is the original material. But how, to use Spinney’s subtitle, did this ancient language go global? Most of “Proto” is spent picking through the genetic and archaeological evidence to explain how PIE, or its descendants, came to be spoken from Galway to Kolkata.
Whatever its contested origins, by the third millennium B.C., PIE was being spoken in the Pontic steppe, the region of grasslands to the north of the Black Sea, by a culture known as the Yamnaya, who were nomadic and had ox-drawn wagons as well as, probably, domesticated horses. It was the Yamnaya – whose migration routes were studded with large burial mounds, still discernible, that they built into the landscape – who carried the language deep into Asia. Later branches of the tree, each with its own history, include the Baltic and Slavic languages, not to mention Tocharian, Albanian, Armenian, Greek, Celtic, Germanic and Indo-Iranian.
“Proto,” it has to be said, is a dense read. Spinney’s approach is a thorough one, so that the chapters tack back and forth in time as we follow each of these branches in turn. It is easy to lose sight of the timeline while concentrating on the differences between Hittites, Hattians and Hurrians, between Lydians and Luwians. Spinney’s background is as a science journalist, and it shows in the way she has assimilated a vast amount of up-to-date scholarly information and presented it concisely and accurately. Nevertheless, this is still a knotty story.
Spinney leavens the history lesson with sections dealing with the present: pen portraits of academics she has interviewed in the archaeological trenches of Ukraine and Russia as well as on the more rarefied battlefields of international academic conferences. As an observer rather than a participant, Spinney is comfortable laying out different sides of some of the ongoing debates, the biggest of all being where exactly the ancestral language was first spoken. Did PIE arise in Anatolia (roughly modern-day Turkey) before spreading first west into Europe along with farming and then east again with the Yamnaya? Or did it originate in the steppe, borne west by marauding nomads who replaced the populations they drove out? Or – the latest linguistic hypothesis – were these Anatolian and steppe languages both offspring of an even older ancestor spoken in the northern Caucasus (now southern Russia) some 6,000 years ago?
“Proto” is an impressive piece of work – hard but ultimately rewarding. And if the debates Spinney has delved into are still far from being resolved, time at least is an abundant resource. One Bulgarian archaeologist who spoke to Spinney addresses the ebb and flow of academic funding with an admirable lack of ego: “These people have lain in the ground for thousands of years. Who cares if it’s me or someone else who digs them up.”

Dennis Duncan teaches English literature at University College London...

By Dennis Duncan
June 5, 2025
https://www.newsindiatimes.com/our-languages-have-more-in-common-than-you-might-think/
#metaglossia_mundus
"Why smart editors are spending less time editing and more time creating
By Dimsumdaily Hong Kong – 11:30PM Wed
4th June 2025 – (Hong Kong) In today’s world, video content is everywhere. From TikTok to YouTube, people watch short and long videos every day. But what many don’t know is that editing videos takes a lot of time. Editors spend hours cutting clips, adding text, fixing sound, and more. Now, smart editors are changing the way they work. Instead of spending so much time editing, they are using smart tools to help them finish faster. This gives them more time to focus on the fun and creative parts—like making stories, adding humour, or talking to their audience. One tool that many editors now use is CapCut’s text to speech. It helps them turn written words into a real-sounding voice. So, if you don’t want to use your own voice in a video, or if you are camera shy, you can just write what you want to say and use this tool. It will read it out loud in the video for you. This saves a lot of time and also makes your videos sound more professional.
The Old Way of Editing Was Too Slow
Before all these smart tools came, editors had to do everything manually. They needed to cut videos one by one, fix the sound, remove background noise, add music, and write subtitles—all by hand. This made the process very slow. Also, if you made a mistake, you had to go back and fix everything again. Even small changes would take hours. Editors had less time to think about ideas or try new things. Most of their time went into fixing videos instead of creating something new.
New Tools Are Helping Creators
But now, things are different. With new tools like AI and automation, editing has become much easier and faster. Editors don’t need to do everything by hand anymore. They can use smart software that edits videos for them, adds subtitles automatically, or even creates a full video based on a script. One great example is CapCut’s AI video generator. It lets you create a full video just by writing some text. You can choose styles, music, and visuals, and the tool will build the video for you. This is super helpful for busy creators who want to post often but don’t have time to do heavy editing every day. By using tools like these, editors can now focus more on being creative. They can think about fun video ideas, talk to their followers, or plan their next big content piece. Editing becomes just a small step in the bigger process.
How Smart Editors Spend Their Time Now
Now that editing is easier, creators are doing more exciting things with their time. They spend more time planning, writing better scripts, and finding new trends online. Some are learning how to make videos that go viral, while others are building strong personal brands. They also test new video styles like animations, reactions, or storytelling shorts. Instead of fixing sound or editing frame-by-frame, they’re creating meaningful content. They collaborate with others and grow their communities. This shift has changed how people view video making. Today, even one person with a laptop and internet can build a huge audience if they use the right tools smartly.
How to Use Text to Speech in Your Videos (3 Easy Steps)
If you want to try it yourself, here is how you can use text to speech in your videos:
Step 1: Import your video
Go to the CapCut desktop video editor, click New Project > Import, and upload your desired file.
Step 2: Convert text to speech
Go to Text > Add text and then type what you want to say in your video. Choose a voice from the options by clicking the “Text to Speech” tab. You can pick male or female voices, different accents, or even emotions like happy or serious.
Step 3: Export video
Once you’re happy with the result, export the video. You can now share it on TikTok, YouTube, Instagram, or wherever you like.
This process takes just a few minutes, but it gives your video a big boost in quality.
Smart Editors Use Voice Tools for a Creative Edge
Besides using text to speech, many smart creators are also using tools like CapCut’s voice changers. These let you change your voice in funny or dramatic ways. For example, you can sound like a robot, a cartoon character, or even an old man. Why is this useful? Because it makes your content more fun! People like to watch videos that are creative, unexpected, or entertaining. Using a voice changer can help you stand out from the crowd. It also helps if you’re doing storytelling or gaming content where different characters have different voices. And again—it saves time. You don’t need to act or use your real voice. The software does it all.
Why Less Editing Means Better Content
Some people might think that doing less editing is bad. But actually, it’s the opposite. When you spend less time editing, you have more time to do things that really matter. Here are a few benefits:
You can post more videos every week
You have time to test new ideas
You don’t get tired or stressed from long editing hours
You can grow your audience faster
You can make better content because you are focused on the message, not the edits
Final Thoughts
Video editing is changing. Smart editors are not spending long hours fixing small things anymore. Instead, they are using tools like text to speech, AI video generator, and voice changer to finish videos faster and better. If you’re a creator, student, teacher, or business owner—this is your chance. You don’t need to learn complicated software. Just use these tools and start creating! Be smart. Spend less time editing and more time sharing your message with the world."
https://www.dimsumdaily.hk/why-smart-editors-are-spending-less-time-editing-and-more-time-creating%EF%BF%BC/
#metaglossia_mundus
"These journalism pioneers are working to keep their countries’ languages alive in the age of AI news
“These newsrooms desperately need the help these technologies provide, but they’re the ones being left out because they work in languages that are considered low-resource, so they are not a big priority for tech companies to support.”
By GRETEL KAHN
JUNE 4, 2025, 10:28 A.M.
This story was originally published by the Reuters Institute for the Study of Journalism.
Since the launch of ChatGPT in 2022, newsrooms have been grappling with both the promise and the peril posed by generative AI. But not every publisher is equally prepared to pursue these opportunities. While newsrooms in the U.S. and Europe innovate and experiment with large language models (LLMs), many newsrooms in the Global South are being left behind.
While AI models in languages like English, Spanish, and French have ample training and resources, profound linguistic and cultural biases embedded within many mainstream AI tools pose substantial challenges for newsrooms and communities operating outside of dominant Western languages and cultural contexts.
For these newsrooms, gaps in data aren’t merely technical glitches but existential threats to their ability to participate equitably in this evolving digital ecosystem. What threats does this lack of data present to newsrooms in the Global South? How can these AI gaps be narrowed? To answer these questions, I spoke to six journalists and experts from India, the Philippines, Belarus, Nigeria, Paraguay, and Mali who are aiming to level the field.
What AI can (and can’t) do for low-resource languages
AI tools do not work well (or at all) for local, regional, and indigenous languages, according to all the sources I spoke to. This is particularly evident in countries where many of these languages are spoken beyond dominant ones like English, Spanish, or French. This language gap exacerbates inequalities for newsrooms and communities that operate in non-dominant languages.
Jaemark Tordecilla is a journalist, media advisor, and technologist from the Philippines focusing on AI and newsroom innovation. Having worked as a consultant for newsrooms across Asia, Tordecilla has seen a lot of curiosity among journalists about AI, but uptake has been mixed. For transcriptions — the AI functionality he has seen used the most — cost and language have often been an issue.
“For the longest time, [transcription] worked well for English and it didn’t work at all for Filipino, so journalists in the Philippines are only very recently getting to use these tools for their own reporting, but cost is an issue,” he said.
He described instances where journalists in the Philippines were forced to share accounts for paid subscriptions to transcription tools, which can create security issues. Things are even worse for journalists who use regional languages, for which these tools are essentially useless.
“The roll-out for support for regional languages has been slow and they are being left behind,” he said. “If your interview is in a regional language, then obviously you can’t use AI to process that. You can’t use AI to translate that into another language. You can’t use AI to say monitor town hall meetings in a regional language and make [them] more accessible to the public.”
Indian journalist Sannuta Raghu, who heads Scroll.in’s AI Lab and is now a journalist fellow at the Reuters Institute, has documented what these linguistic and cultural inequities look like in practice for newsrooms like hers.
AI tools don’t work very efficiently for most of the 22 official languages in India. Raghu listed issues like inaccurate outputs, hallucinations, and incorrect translations. A big issue, she said, is that, for a long time, AI tools were unable to account for nuances in language. Unlike English, for example, many Indian languages have large differences between spoken and written discourse.
“Asian countries speak in code-mixed language — for example, using multiple Hindi words and English words in a normal conversation,” she said, “which means we need rich enough data to be able to understand that.”
If there isn’t sufficient “good data” to train models on these specific languages and contexts, Raghu said, linguistic and cultural inequities are going to happen. Raghu attributed this lack of training data to a combination of complexity and lack of interest by Big Tech. But she also said the situation is starting to improve.
“Is it really a priority for you to optimize for all those languages in India from a tech product sales perspective? As you move eastward, the complexity of how societies use language changes. It becomes far more multilingual. It becomes far more complex. It becomes far more code-mixed. Those are the complexities that we live with, and that is not reflected in any of the models,” Raghu said.
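A crude way to see the code-mixing Raghu describes is to count characters by script. The sketch below uses Unicode ranges only — real code-mix detection relies on trained models, and the sample sentence is invented for illustration.

```python
# Count Latin vs Devanagari characters as a rough code-mixing signal.
# The Unicode range U+0900-U+097F covers the Devanagari block.
def script_mix(text):
    counts = {"latin": 0, "devanagari": 0}
    for ch in text:
        if "\u0900" <= ch <= "\u097F":
            counts["devanagari"] += 1
        elif ch.isascii() and ch.isalpha():
            counts["latin"] += 1
    return counts

# An invented Hindi-English code-mixed sentence ("I have to fix a meeting").
sample = "मुझे ek meeting fix करनी है"
print(script_mix(sample))  # → {'latin': 12, 'devanagari': 10}
```

A sentence that scores heavily in both columns is code-mixed in exactly the sense described above — and a model trained mostly on monolingual English text has seen little data of that shape.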
AI ignores political biases
Beyond these inefficiencies, my sources also pointed out the cultural and political nuances that are missing from these models, which make them even more problematic for newsrooms.
For example, Raghu said that newsrooms are already noticing an American slant in everything AI generates. She described an instance where they were testing a tool to see how useful it’d be to help them write copy on cricket. For something as simple as explaining the sport, she says there were hallucinations, with players being made up, and the model simply not understanding the rules of the game.
“Up to 2.6 billion people follow cricket. It’s a huge cultural thing for us, Australia, Bangladesh, England…but the U.S. doesn’t play cricket, which is why a lot of the cultural aspects of this are not included in the models,” she said. “There is a lack of contextual training data. Cricket is very important for us, but we’re not able to do this with the LLMs because these models don’t quite understand the rules.”
Daria Minsky is a Belarusian media innovation specialist focusing on AI applications in journalism. After working with several newsrooms in exile, she has seen a lot of skepticism toward AI use — not because of factual errors, but because some of these models lack nuance in politically sensitive contexts.
Minsky used her own country as an example of how LLMs may simply repeat narratives put forth by authoritarian regimes.
“The word Belarusian is very politically loaded in terms of how you spell it, so I compared different models. ChatGPT actually spells the democratic version of it, while DeepSeek uses the old, Soviet version of it,” she says. “It’s Belorussian versus Belarusian. Belorussian is this imperialistic term used by the regime. If newsrooms use Belorussian instead of Belarusian, they risk losing their audience and their reputation.”
AI models are trained based on data available online, which is why these models are more fine-tuned to English (and its contexts) than, say, Belarusian. Since the official narrative of authoritarian regimes is what is most available online, AI is being trained to follow that narrative.
These gaps in the training system have already been exploited by bad actors. A recent study by NewsGuard revealed that a Moscow-based disinformation network is deliberately infiltrating the retrieved data of AI chatbots, publishing false claims and propaganda for the purpose of influencing the responses of AI models on topics in the news. The result is more output that propagates propaganda and disinformation.
“I’ve heard of the same problems in Burma, for example, because the opposition uses Burma instead of Myanmar. I can see this type of problem even within the United States where I’m based, with the debate around using ‘Gulf of Mexico’ or ‘Gulf of America’ because Trump started to rename things just like in other dictatorial regimes,” she says.
How to close the gap
Despite these issues, some newsrooms in the Global South are taking the development of AI tools into their own hands.
Tama Media, a francophone West African news outlet, has launched Akili, an app, currently in beta, that provides fact-checking in local African languages through vocal interaction. Moïse Mounkoro, the editorial advisor in charge of Akili, traced the origin of the idea to two things: the fact that misinformation is rampant in West Africa and the realization that many people communicate via voice messages rather than through reading and writing.
“Educated people can read articles and they can fact-check,” Mounkoro said. “But for people who are illiterate or who don’t speak French, English or Spanish, the most common way to communicate is orally. I’m originally from Mali, and most of the conversations there are through WhatsApp voice messages. If you want to touch those people, you need to use their language.”
The Akili app uses AI to fact-check information by taking the question and finding an answer through its database of sources, which range from BBC Africa to Tama Media. The answer is then given orally to the user. To include more African languages like Wolof, Bambara, and Swahili, Akili’s team is experimenting with either using Google Translate’s API or building their own through online dictionaries.
“These AI technologies came from the West so they focus on their own languages. It’s like cameras: in the beginning, they were not made to photograph Black skin,” he says. “People need to be aware that they need to integrate other languages. Google has at least tried to make an effort by integrating many African languages these last couple years.”
In Paraguay digital news outlet El Surti is developing GuaraníAI. While the project is still in development, their goal is to build a chatbot to detect whether someone is speaking in this language and provide them with a response. In order to do that, they are developing a dataset of spoken Guaraní so that LLM engines can recognize oral speech in this Indigenous language, spoken by almost 12 million people.
Sebastián Auyanet, who leads on the project, told me they wanted to explore how those who don’t speak dominant languages are shut out from accessing LLMs. Guaraní is a language still spoken widely throughout the Southern Cone, mainly in Paraguay, where it is an official language along with Spanish. Up to 90% of the non-indigenous population in Paraguay speaks Guaraní.
“Guaraní is an oral, unwritten language,” Auyanet said. “What we need is for any LLM engine to be able to recognize Guaraní speech and to be able to respond to these questions in Spanish. News consumption may be turning into ChatGPT, Perplexity, and other models. So there’s no way to get into that world if you speak in a language none of these systems can use.”
El Surti is organizing hackathons throughout Paraguay to test this dataset. Mozilla’s Common Voice is a platform designed to gather voice data for diverse languages and dialects to be used by LLMs. El Surti is using Common Voice to develop their minimum viable product, which aims to achieve 70% validation for spoken Guaraní within Mozilla’s datasets. With this degree of validation, the chatbot would be able to respond to queries in this Indigenous language.
A picture of the first hackathon of the project, which included conversations between El Surti’s journalists and Guaraní native speakers. (El Surti)
Ideally, Auyanet said, this project will allow El Surti to build an audience of Guaraní speakers who eventually will be able to interact with the outlet’s coverage by asking the chatbot questions in their own language.
“Right now we are excluding people who are only Guaraní speakers from the reporting that El Surti does,” he said. “This is an effort to bring them closer.”
In Nigeria, The Republic, a digital news outlet, is developing Minim, an AI-powered decentralized text-to-speech platform designed to support diverse languages and voices. They are actively training an AI model on specific African languages such as Nigerian Pidgin, Hausa, and Swahili, with plans to add more over time. The team is aiming to have a minimum viable product by the end of the year.
This model would allow independent creators to lend their own voices and train the AI in their unique vocal quality, including age, regional accents, and other demographics.
Editor-in-chief Wale Lawal told me that their goal is to engage audiences who speak these languages and to make their outlet more relevant to them. “We believe that global media has an indigenous language problem,” he said. “In a place like Africa you have a lot of people who are just automatically locked out of global media because of language.”
Minsky, the media consultant from Belarus, has been working with newsrooms in exile to develop an AI tool that allows for the automation of news monitoring from trusted sources. Her goal is to account for all the cultural and political nuances missing from the current models by allowing newsrooms to monitor very specific contextual sources, including local and hyper-local channels like Telegram.
This would include uploading archival and historical data to fine-tune the output, and using explicit prompts to control terminology and spelling (e.g. “don’t call Lukashenko president”).
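As a rough illustration of the kind of terminology control described above, a newsroom tool could run a final check over model output against an editorial rulebook. This is a minimal sketch, not the tool Minsky is building; the rule entries are hypothetical examples drawn from the article.

```python
# A minimal sketch of newsroom terminology enforcement on model output:
# scan generated text for banned spellings/terms and substitute the
# approved forms, returning the violations for editorial review.

TERMINOLOGY_RULES = {
    # banned form -> approved form (hypothetical examples from the article)
    "Belorussian": "Belarusian",
    "President Lukashenko": "Lukashenko",
}

def enforce_terminology(text, rules=TERMINOLOGY_RULES):
    """Apply spelling/terminology rules to `text`; return the corrected
    text plus the list of banned forms that were found."""
    violations = []
    for banned, approved in rules.items():
        if banned in text:
            violations.append(banned)
            text = text.replace(banned, approved)
    return text, violations
```

In practice such rules would sit alongside the prompt-level instructions, acting as a safety net when the model ignores them.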
No newsroom left behind
Newsrooms have been working without AI for decades before the first generative AI tools were released to the public. So what’s the problem if these tools are not available to everyone?
Tordecilla stressed that the gap between newsrooms is getting wider and pointed to what is already happening: newsrooms in Manila are doing AI-assisted investigations, while newsrooms in the rural Philippines that could benefit from the efficiencies AI provides are struggling for their survival.
“We are talking about newsrooms who have five people on their team, so every minute they spend on transcription is a minute they don’t spend reporting and editing stories,” Tordecilla said. “These newsrooms desperately need the help these technologies provide, but they’re the ones being left out because they work in languages that are considered low-resource, so they are not a big priority for tech companies to support.”
Indian journalist Sannuta Raghu said that addressing the AI digital gap is important for access, scale, and reaching a broader audience. Raghu mentioned a specific goal that her newsroom had when they started their AI lab: to build a multilingual text-to-video tool, as video content is extremely popular in India. With hundreds of millions of Indians using smartphones and having access to very cheap internet, there is a significant opportunity to deliver journalism via video in these local languages. They have created Factivo 1.0, a URL-to-MP4 tool that creates accurate videos from news articles in a matter of minutes and which is available for major languages.
“For a small English-language newsroom like us, the universe available to us right now is about 30 million Indians who understand English,” she explains. “But to tap into the larger circle that is India, which is 1.4 billion people, we would need to be able to do language at scale.”"
https://www.niemanlab.org/2025/06/these-journalism-pioneers-are-working-to-keep-their-countries-languages-alive-in-the-age-of-ai-news/
#metaglossia_mundus
Tokenizer design significantly impacts language model performance, yet evaluating tokenizer quality remains challenging.…
"Beyond Text Compression: Evaluating Tokenizers Across Scales Authors Jonas F. Lotz†‡, António Vilarinho Lopes, Stephan Peitz, Hendra Setiawan, Leonardo Emili
Tokenizer design significantly impacts language model performance, yet evaluating tokenizer quality remains challenging. While text compression has emerged as a common intrinsic metric, recent work questions its reliability as a quality indicator. We investigate whether evaluating tokenizers on smaller models (350M parameters) reliably predicts their impact at larger scales (2.7B parameters). Through experiments with established tokenizers from widely-adopted language models, we find that tokenizer choice minimally affects English tasks but yields significant, scale-consistent differences in machine translation performance. Based on these findings, we propose additional intrinsic metrics that correlate more strongly with downstream performance than text compression. We combine these metrics into an evaluation framework that enables more reliable intrinsic tokenizer comparisons.
† Work done while at Apple ‡ University of Copenhagen & ROCKWOOL Foundation Research Unit https://machinelearning.apple.com/research/beyond-text-compression #metaglossia_mundus
"Development of intercultural communicative competence. Challenges and contributions for the training of health professionals Authors Juan Beltrán-Véliz Universidad de la Frontera José Luis Gálvez-Nieto Universidad de la Frontera Maura Klenner Loebel Universidad de la Frontera Ana María Alarcón Universidad de la Frontera Nathaly Vera-Gajardo Universidad Autónoma de Chile Vol. 31 No. 1 (2025): NARRATIVAS, BIOÉTICA Y MEDICINA Abstract This paper reflects on intercultural communicative competence as a contribution to the training of professionals in health sciences. A reductionist, technocratic and culturally decontextualized health model is visualized, and the scarce development of "intercultural communicative competence" in health professionals is seen as a great barrier, which hinders communication with patients from other cultures. Therefore, it is vitally important to develop this competence in the training of these professionals. This will imply developing components that underlie this competence, that is, awareness, sensitivity and intercultural efficacy, which will allow the acquisition of cognitive, affective and behavioral skills that benefit effective and appropriate communication with patients from diverse cultural and social contexts. Likewise, they will allow them to understand in depth aspects, values and practices that underlie a culturally different health system, and will promote the advancement in the knowledge, understanding and development of innovative and relevant practices in the health system, from an intercultural and comprehensive approach. Finally, it will make it possible to overcome inequalities, misunderstandings, prejudices, exclusions and social injustices." https://actabioethica.uchile.cl/index.php/AB/article/view/78367 #metaglossia_mundus
Scientists discover how story language evolves from scene-setting to action to resolution, analyzing thousands of fictional narratives.
"Scientists decode the universal language pattern in 40,000 stories ELLSWORTH TOOHEY 8:46 AM WED JUN 4, 2025 "The Hare's Bride", a fairy tale from Buckow in Mecklenburg, by Tom Seidmann-Freud, (1924) Public Domain A study analyzing 40,000 stories reveals what Aristotle intuited centuries ago — narratives follow a common, predictable structure consisting of three key elements.
Using computer analysis, researchers Ryan Boyd, Kate Blackburn, and James Pennebaker revealed how language patterns evolve across novels, short stories, romance novels, movies and other narratives. Their findings, published in Science Advances in 2020, identified three consistent components that form narratives: At the beginning, articles and prepositions dominate as authors set the stage. The middle sees a rise in "cognitive tension words" like "think" and "realize" as characters grapple with conflicts. By the end, pronouns and auxiliary verbs increase as the action resolves.
This pattern held true regardless of genre, length, or even quality — highly-rated books and movies followed the same structure as poorly-rated ones. However, the researchers found that non-fiction formats like newspaper articles and TED talks deviated from this pattern, particularly in how they handle cognitive tension.
"We find that all different types of stories — novels, films, short stories, and even amateur writing — trend toward a universal narrative structure," the researchers write in their report. Rather than constraining creativity, they suggest this structure may provide an optimal way for humans to process and share information through stories.
They've created a website where anyone can analyze the narrative patterns of hundreds of published books and movie scripts, or upload their own texts for analysis." https://boingboing.net/2025/06/04/scientists-decode-the-universal-language-pattern-in-40000-stories.html #metaglossia_mundus
"Bible: CEP commission releases new translation of «Genesis», acknowledging «tension» between original meaning and «later interpretations» 2 June 2025, 10:25 Experts warn of «ignorance of the context in which the texts were born»
Lisbon, 02 Jun 2025 (Ecclesia) – The commission coordinating the Bible translation of the Portuguese Episcopal Conference (CEP) today launched the new version of the text of Genesis, the first book of the Bible, warning of the "ignorance of the context" in which it was born.
"The debates about the responsibility of Judeo-Christianity for the pollution of planet Earth, the relationship between science and the Bible, and the theories of evolutionism and creationism, as well as those of the big bang, all relate to the accounts of the creation of the world and of humanity in the first eleven chapters of Genesis," the experts state in the introduction to the text released this June.
The translators highlight the "importance and timeliness" of the book of Genesis, from which "several theological lines of force" spring.
"Religion, art and culture draw on the inspiration of Genesis to explore and articulate knowledge about human life," they note.
"Ignorance of the context in which the texts of Genesis were born created a tension between their original meaning and later interpretations. These kept the force of the narratives alive and fresh, but they diverge significantly from the cultural horizon that produced their original meaning."
The CEP experts underline the "narrative style" of the first book of the Bible and its "anthropological force", noting that this "cultural, literary and religious background must be taken into account in its interpretation, since what a text means also depends on what lies behind it and illuminates it".
"Literary motifs, themes, and cosmological, religious and cultural conceptions (the creation of all beings by the divinity, knowledge as what distinguishes human beings from other beings, the greatness and limits of human life, sexual union between divine beings and women, a flood, genealogies…) point to the historical context of the ancient Near East, the so-called biblical world," says the translation team.
In Judaism, the book of Genesis is known by its opening expression, bere'shit ("In the beginning").
The title Genesis comes from the Greek Septuagint translation (adopted by the Latin translation) and sums up its content: the origin of the universe, of humanity and of Israel.
"In the beginning, when God created the heavens and the earth, the earth was formless and empty, darkness hovered over the abyss and a mighty wind blew over the surface of the waters."...
OC" https://agencia.ecclesia.pt/portal/biblia-comissao-da-cep-divulga-nova-traducao-do-genesis-assumindo-tensao-entre-sentido-original-e-interpretacoes-posteriores/
Wisconsin Republicans are advancing a bill that could expand the role of virtual language interpretation in state courts. Backers say it's an effort to address a shortage of interpreters, but opponents worry the changes could erode the rights of victims.
"Wisconsin bill would allow court interpreters to work remotely during trials
Backers say it could address a shortage of interpreters but opponents worry it could open the door to miscommunication
BY SARAH LEHR
JUNE 3, 2025
Court interpreter Floralba Vivas speaks Spanish into a microphone as a man is sentenced in Milwaukee County Circuit Court on Friday, March 21, 2025, in Milwaukee, Wis. Angela Major/WPR
Wisconsin Republicans are advancing a bill that could expand the role of virtual language interpretation in state courts.
Supporters say they hope to enable flexibility in a state with an unmet demand for qualified interpreters.
But opponents worry the changes could erode the rights of victims and defendants by making it easier for miscommunication to happen.
Under Wisconsin law, people with limited English proficiency have the right to a qualified interpreter when they appear before a circuit or appellate court.
Currently, those courts can only use tele-interpretation during some types of legal proceedings. During trials, interpreters need to appear in-person.
But, under a bill that cleared Wisconsin’s GOP-controlled Senate last month, interpreters could appear by telephone or videoconference even during trials.
“I think that we’re all aware of the staffing shortages and backlogs plaguing our court system, and additionally, county budgets are also feeling the pinch,” state Sen. Van Wanggaard, R-Racine, said during a hearing on the bill this spring.
In Wisconsin, the state partially reimburses counties for the costs of court interpreters. Although those rates of pay vary county-by-county, many local courts also reimburse interpreters for their mileage, and sometimes their driving time, before traveling to court.
Court interpreters across the state are in high demand, which in some cases, has forced judges to postpone cases while they search for a qualified interpreter. That’s especially true for interpreters in less commonly spoken languages.
In 2023, courts across Wisconsin billed for more than 26,200 hours of interpretation — a 27 percent increase compared to five years prior.
“One way to help alleviate some of that pressure is to remove burdensome requirements that the state places upon our circuit courts,” said Wanggaard, who introduced the bill earlier this year.
Some say in-person interpretation is most effective during lengthy, complex trials
Under Wisconsin law, the right to an interpreter extends to witnesses, people accused of crimes, victims and their family members.
But some groups contend that right could be diminished if tele-interpretation is expanded. In written testimony opposing the bill, Elena Kruse of the Wisconsin State Public Defender’s Office, said the changes could make it more difficult for a non-English speaking client to have a private, side conversation with their attorney.
And she noted that, compared to many pre-trial proceedings, trials are most often lengthy and complex.
“Jury trials, more than most other court proceedings, carry extremely high stakes: individuals’ liberty and freedoms,” Kruse wrote. “We are not willing to risk technical and logistical difficulties hindering a person’s ability to fully participate in the legal process, especially when their liberty is at stake.”
The American Civil Liberties Union of Wisconsin and victims’ rights groups including the Wisconsin Coalition Against Sexual Assault also registered against the bill. The Wisconsin Defense Counsel, which represents civil trial lawyers, registered in favor.
Nadya Rosen, an attorney with Disability Rights Wisconsin, says the proposal could be especially harmful to people who are deaf and hard of hearing.
American Sign Language relies on someone’s hand gestures and facial expressions, and Rosen said it can be difficult to fully grasp those visuals via video.
“We had a case recently in a rural county where there was a video interpreter, and the setup in the courtroom was such that the litigants were so far away from the camera that the ASL interpreters that were in a remote location were unable to see the faces of the people that they were supposed to be interpreting for,” Rosen said, adding the case had to be postponed until an in-person ASL interpreter could be appointed.
Rosen says unimpeded communication is especially important during a trial, when jurors are expected to decide whether or not they believe in the truthfulness of someone who’s testifying.
Christina Green, a freelance court interpreter who works in Wisconsin, echoed those concerns. Green says remote interpretation can be useful during brief legal proceedings. But, during trials, Green said she believes remote interpretation could actually lead to more delays because of technical issues.
“Most courthouses in Wisconsin have inadequate equipment, so they have low-quality cameras, low-quality microphones, screens that are not clear,” said Green, who sits on the board of the American Translators Association, a professional advocacy group.
Court interpreter Reme Bashi speaks into a microphone so the court proceedings can be understood by Spanish speakers Friday, March 21, 2025, in Milwaukee, Wis. Angela Major/WPR
Amendment would require all parties to agree to remote interpretation
In response to concerns about the bill, lawmakers agreed to add an amendment stipulating that remote interpretation could only be used during a trial if all parties in a case agree.
Even with that change, Rosen, of Disability Rights Wisconsin, says her group still opposes the bill. Rosen says the amendment puts the onus to request an in-person interpreter on the person who needs that service.
“Being a litigant in court can be a really difficult process, and you don’t want to slow things down,” she said. “You don’t want to be perceived as being a problem, and so having to assert your need for interpretation and quality interpretation can be just another barrier for people who need to be able to effectively communicate.”
Effort comes as other lawmakers push separate AI interpretation bill
Under Wanggaard’s bill, Wisconsin courts would still be required to use real, live people as interpreters even if some of those interpreters appear remotely.
But a separate, more-recently introduced bill would allow Wisconsin courts to use machine-assisted interpretation enabled by artificial intelligence. That AI option could be used in addition to or instead of human interpreters, the bill says.
“By integrating AI-assisted translation tools, courts can deliver faster, more efficient services to individuals with limited English proficiency while significantly reducing the costs associated with hiring human interpreters,” state Rep. Dave Maxey, R-New Berlin, and Sen. Chris Kapenga, R-Delafield, wrote in a memo attached to a draft bill.
Green, who interprets in Spanish, French and Italian, said replacing human interpreters with AI could be disastrous. She says AI hallucinations and mistranslations could lead to lawsuits and overturned convictions.
“AI will always struggle with things like nuances in the legal language, specialized terminology, idiomatic expressions, Spanglish,” Green said. “AI may misinterpret the context, confuse the pronouns (and) introduce certain biases.”
Lawmakers formally introduced the AI interpretation bill on Friday. So far, the ACLU of Wisconsin has registered against it while the Wisconsin Counties Association has come out in favor."
https://www.wpr.org/news/wisconsin-bill-court-interpreters-work-remotely-during-trials
#metaglossia_mundus
"Preparing and handling digital documents for conference interpreters
20/06/2025
On Friday June 20, 2025, Andy Gillies, will be giving a one day training on “Efficiently handling PDF documents, navigating between documents and files and annotating in preparation for conferences”.
Here are the details:
Date: Friday, June 20, 2025 from 09:00 to 17:30.
Location: Paris, France
Trainer: Andy Gillies
Training: Efficiently handling PDF documents, navigating between documents and files and annotating in preparation for conferences (see description below)
Participants: 20 (maximum number of participants)
Registration fee: The course costs 400€ excl. VAT. For French participants with freelance status, the course is eligible to a 250€ HT refund via the FIFPL training fund. Real cost for the participant: 150€ HT
Registration link: https://www.sft-services.fr/produit/interpretation-conference-documents-numerique/
About this event
Training by SFT Services in collaboration with AIIC France
Learn how to handle multiple documents in PDF format. Rename, organise and navigate seamlessly between files to keep your eye on what’s relevant in a high-intensity environment.
We’ll explore how to create links between documents or key sections of documents to access them effortlessly during interpreting.
Also, discover a few tips on how best to prepare and annotate documents to enhance your performance in the booth. "
https://aiic.at/client/event/roster/eventRosterDetails.html?productId=747&eventRosterId=9
#metaglossia_mundus
"...In a significant step toward more inclusive technology, researchers at Florida Atlantic University (FAU) have developed a real-time American Sign Language (ASL) interpretation system that uses artificial intelligence (AI) to bridge communication gaps for individuals who are deaf or hard of hearing.
The system, built by a team from FAU’s College of Engineering and Computer Science, leverages the combined strengths of YOLOv11, an advanced object detection model, and MediaPipe, a tool for real-time hand tracking. This integration allows the system to interpret ASL alphabet letters from video input with remarkable accuracy and speed—even under inconsistent lighting or with complex backgrounds—and translate it into readable text.
“What makes this system especially notable is that the entire recognition pipeline—from capturing the gesture to classifying it—operates seamlessly in real time, regardless of varying lighting conditions or backgrounds,” Bader Alsharif, the study’s first author and a PhD candidate in FAU’s Department of Electrical Engineering and Computer Science, said in a press release. “And all of this is achieved using standard, off-the-shelf hardware. This underscores the system’s practical potential as a highly accessible and scalable assistive technology.”
A key innovation lies in the use of skeletal hand mapping. A webcam captures the ASL translator’s hand gestures, which are rendered into digital frames. MediaPipe identifies 21 key points on each hand—including fingertips, knuckles, and the wrist—creating a structural map that YOLOv11 then uses to distinguish between ASL letters, even those that look similar, such as “A” and “T” or “M” and “N.”
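The two-stage design described above (landmark extraction, then classification over the landmark layout) can be illustrated schematically. This is a toy sketch, not FAU's implementation: hard-coded points stand in for MediaPipe's 21 landmarks, and a nearest-centroid rule stands in for the YOLOv11 classifier.

```python
import math

# Schematic two-stage gesture pipeline: stage 1 yields 21 (x, y) hand
# landmarks (toy data standing in for MediaPipe output); stage 2 assigns
# a letter from the landmark layout (nearest-centroid stand-in for YOLOv11).

def normalize(landmarks):
    """Translate landmarks so the wrist (landmark 0) is the origin and
    scale to unit size, so classification is robust to hand position
    and distance from the camera."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def classify(landmarks, centroids):
    """Return the label whose stored (normalized) landmark layout is
    closest to the observed hand, by summed squared distance."""
    def dist(a, b):
        return sum((ax - bx) ** 2 + (ay - by) ** 2
                   for (ax, ay), (bx, by) in zip(a, b))
    norm = normalize(landmarks)
    return min(centroids, key=lambda label: dist(norm, centroids[label]))
```

The normalization step is what lets a structural map, rather than raw pixels, separate visually similar letters: only the relative geometry of the 21 points matters.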
This approach helped the system achieve a mean Average Precision (mAP@0.5) of 98.2%, indicating high classification accuracy. The results, published in the journal Sensors, demonstrate minimal latency, making the system ideal for applications requiring real-time communication, such as virtual meetings or interactive kiosks.
The researchers also developed a robust dataset to train and test their model. The ASL Alphabet Hand Gesture Dataset consists of 130,000 images under a wide range of conditions, including various lighting scenarios, hand orientations, and skin tones. Each image was annotated with 21 landmarks to ensure the model could generalize across diverse users and environments.
“This project is a great example of how cutting-edge AI can be applied to serve humanity,” Imad Mahgoub, PhD, co-author and Tecore Professor in FAU’s Department of Electrical Engineering and Computer Science, said in the release. “By fusing deep learning with hand landmark detection, our team created a system that not only achieves high accuracy but also remains accessible and practical for everyday use.”
The team later extended the project to explore how another object detection model—YOLOv8—performed when combined with MediaPipe. In a separate study published in Franklin Open, researchers trained the model using a new dataset of nearly 30,000 annotated images. Results from this effort were similarly promising, demonstrating 98% accuracy and recall. The model maintained strong performance across a range of hand gestures and positions, reinforcing its real-world applicability.
Beyond academic validation, the system’s practical implications are significant. According to the National Institute on Deafness and Other Communication Disorders, approximately 37.5 million adults in the U.S. report some trouble hearing, while about 11 million are considered deaf or functionally deaf.
“The significance of this research lies in its potential to transform communication for the deaf community by providing an AI-driven tool that translates American Sign Language gestures into text, enabling smoother interactions across education, workplaces, health care, and social settings,” said Mohammad Ilyas, PhD, co-author and professor at FAU.
Future development will focus on expanding the model’s capability from recognizing static letters to interpreting full ASL sentences, as well as optimizing performance on mobile and edge devices. This would allow more natural conversations and greater accessibility on widely used platforms."
BY ERIK CLIBURN JUNE 3, 2025 https://insightintoacademia.com/asl-ai-translation/
#metaglossia_mundus
"NHS England has warned that healthcare organisations need to be careful in their use of AI translation apps.
It said that they carry risks, particularly regarding their accuracy of translations and the potential impact on patient safety.
This comes in response to a number of organisations beginning to use the apps in consulting with and treating patients with limited English.
NHS England’s warning has come within a new improvement framework document on community language and translation services.
It acknowledges the potential value of apps that can translate language for people with limited English, saying they provide a convenient and timely means of translation.
Accuracy factor
But it also cites research published in the US National Journal of Medicine stating that apps, and informal interpreters, may not always ensure that key medical information is interpreted or communicated accurately, and that clinicians are unable to get assurance that their advice is being received.
The framework includes a series of recommendations on interpreting services, including that organisations note the risk and liability issues over the use of AI interpreting tools, and that they should only be used within clearly defined trust policies and risk assessments.
They should also look at the appointment of a champion for them in a primary care network, feedback systems for patients and staff to identify issues, recording language needs on the primary care electronic record, and involving patients in the development and improvement of interpreting services."
04/06/25
Mark Say
Managing Editor
@markssay
https://www.ukauthority.com/articles/nhs-england-sounds-warning-over-translation-apps
#metaglossia_mundus
"Reverso Releases First-of-Its-Kind All-in-One Default Translator in Apple iOS
By Reverso Jun 3, 2025 Updated Jun 3, 2025
Latest app provides instant translation in iOS 18.4 and up along with an innovative dictionary and upgraded learning tools
NEW YORK, June 3, 2025 /PRNewswire/ -- Reverso, the global leader in online language tools, has released a powerful new version of its mobile app, Reverso Translate and Learn 15.3, the first of its kind able to be set as a third-party default translation app on iPhones and iPads running iOS 18.4 and higher.
The new Reverso app integration provides one-tap AI translation and all-in-one learning in Apple iOS.
"Our latest Reverso app is an all-in-one translation tool right there with you whatever, wherever, whenever you are reading, writing, and communicating on your phone or tablet," said Théo Hoffenberg, founder and CEO of Reverso. "We're extremely proud of our collaboration with Apple in making this game-changing integration happen."
Providing one-tap AI translation for over 100 language combinations in texts, emails, instant messages, news articles, books, and more, the state-of-the-art app goes beyond translation to promote genuine mastery with a robust suite of learning capabilities:
Translation by text, image, voice, or real-time conversation providing relevant, real-world results in context
An autoplay mode with a pronunciation interface that reads target vocabulary aloud for studying as well as listening and speaking practice
An English dictionary with over 500,000 clear and concise meanings and examples for words, expressions, idioms, acronyms, informal language, and inflected forms
An enhanced learning dashboard that sets goals, shows streaks, and tracks progress
Quizzes, flashcards, word lists, synonyms, conjugation guides, and other study aids
For more than 20 years, Reverso and its dedicated team of engineers, data scientists, and linguistic specialists have been driving innovation in translation and language learning with its proven AI-based models.
"The more sophisticated our technology gets, the more human our mission at Reverso becomes," said Hoffenberg. "We're passionate about going beyond one-off, soon-forgotten lookups to lifelong learning so that people can understand what they read, chat with natives, work with foreigners, and meaningfully communicate with people across languages, across the globe."
Reverso's app has 5 million active monthly users and over 30 million downloads. It recently won the 2025 People's Voice Webby Award for Learning & Education Apps & Software and has been featured as the App of the Day in Apple's App Store in over 50 countries.
Download the latest version of the Reverso app for iOS: https://apps.apple.com/us/app/reverso-translate-and-learn/id919979642
The app is also available for Android. Discover more: https://context.reverso.net/translation/mobile-app/
About Reverso
Reverso is a global leader in online translation and language tools, helping millions of people and professionals read, write, learn, and communicate across the world's languages. For over 100 language combinations, Reverso provides AI-powered contextual voice, image, text, and document translation and an all-in-one learning ecosystem with a top grammar checker and user-first English dictionary. Each month, it serves over 50 million active users on the web, 5 million app users, and 5 million users in corporate environments.
John Kelly
john.kelly@boldsquare.com"
https://www.news-journal.com/reverso-releases-first-of-its-kind-all-in-one-default-translator-in-apple-ios/article_c7674e7c-d88d-5f9f-8e14-c676d2533e25.html
#metaglossia_mundus
No African writer has as many major, lasting creative achievements in such a wide range of genres as Ngũgĩ wa Thiong’o.
"Published: June 4, 2025 3.20pm SAST Charles Cantalupo, Penn State
Celebrated Kenyan writer and decolonial scholar Ngũgĩ wa Thiong'o passed away on 28 May at the age of 87. Many tributes and obituaries have appeared across the world, but we wanted to know more about Thiong'o the man and his thought processes. So we asked Charles Cantalupo, a leading scholar of his work, to tell us more.
Who was Ngũgĩ wa Thiong'o – and who was he to you? When I heard that Ngũgĩ had died, one of my first thoughts was about how far he had come in his life. No African writer has as many major, lasting creative achievements in such a wide range of genres as Ngũgĩ wa Thiong'o. His books include novels, plays, short stories, essays and scholarship, criticism, poetry, memoirs and children’s books.
Read more: Five things you should know about Ngũgĩ wa Thiong'o, one of Africa's greatest writers of all time
His fiction, nonfiction and plays from the early 1960s until today are frequently reprinted. Furthermore, Ngũgĩ’s monumental oeuvre is in two languages, English and Gĩkũyũ, and his works have been translated into many other languages.
Born into a large family in rural Kenya, a son of his father's third wife, he owed his education to his mother, who pushed for him to be schooled. This included a British high school in Kenya and Makerere University in Uganda.
When the brilliant young writer had his first big breakthrough at a 1962 meeting in Kampala, the Conference of African Writers of English Expression, he called himself “James Ngũgi”. This was also the name on the cover of his first three novels. He had achieved fame already as an African writer but, as is often said, the best was yet to come.
Not until he co-wrote the play I Will Marry When I Want with Ngũgĩ wa Mirii was the name “Ngũgĩ wa Thiong’o” on the cover of his books, including on the first modern novel written in Gĩkũyũ, Devil on the Cross (Caitaani Mũtharaba-inĩ).
I Will Marry When I Want was performed in 1977 in Gĩkũyũ in a local community centre. It was banned and Ngũgĩ was imprisoned for a year.
And still so much more was to come: exile from Kenya, professorships in the UK and US, book after book, fiction and nonfiction, myriad invited lectures and conferences all over the world, a stunning collection of literary awards (with the notable exception of the Nobel Prize for Literature), honorary degrees, and the most distinguished academic appointments in the US, from the east coast to the west.
Yet besides his mother’s influence and no doubt his own aptitude and determination, if one factor could be said to have fuelled his intellectual and literary evolution – from the red clay of Kenya into the firmament of world literary history – it was the language of his birth: Gĩkũyũ, from the stories his mother told him as a child to his own writing in Gĩkũyũ for a local, pan-African and international readership. He set out every reason for choosing this path in his books of criticism and theory.
Ngũgĩ was also my friend for over three decades – through his US professorships, to Eritrea, to South Africa, to his finally moving to the US to live with his children. We had an ongoing conversation – in person, during many literary projects, over the phone and the internet.
Our friendship started in 1993, when I first interviewed him. He was living in exile from Kenya in Orange, New Jersey, where I was born. We both felt at home at the start of our working together. We felt the same way together through the conferences, books, translations, interviews and the many more literary projects that followed.
What are his most important works? Since Ngũgĩ was such a voluminous and highly varied writer, he has many different important works. His earliest and historical novels like A Grain of Wheat and The River Between. His regime-shaking plays.
His critical and controversial novels like Devil on the Cross and Petals of Blood. His more experimental and absolutely modern novels like Matigari and Wizard of the Crow.
His epoch-making literary criticism like Decolonising the Mind. His informal and captivating three volumes of memoirs written later in life. His retelling in poetry of a Gĩkũyũ epic, The Perfect Nine, his last great book. A reader of Ngũgĩ can have many a heart’s desire.
My book, Ngũgĩ wa Thiong’o: Texts and Contexts, was based on the three-day conference of the same name that I organised in the US. At the time, it was the largest conference ever held on an African writer anywhere in the world.
What I learned back then applies now more than ever. There are no limits to the interest that Ngũgĩ’s work can generate anytime anywhere and in any form. I saw it happen in 1994 in Reading, Pennsylvania, and I see it now 30 years later in the outpouring of interest and recognition all over the world at Ngũgĩ’s death.
In 1993, he had published a book of essays titled Moving the Centre: The Struggle for Cultural Freedoms. Focusing on Ngũgĩ’s work, the conference and the book were “moving the centre” in Ngũgĩ’s words, “to real creative centres among the working people in conditions of gender, racial, and religious equality”.
What are your takeaways from your discussions with him? First, African languages are the key to African development, including African literature. Ngũgĩ comprehensively explored and advocated this fundamental premise in over 40 years of teaching, lectures, interviews, conversations and throughout his many books of literary criticism and theory. Also, he epitomised it, writing his later novels in Gĩkũyũ, including his magnum opus, Wizard of the Crow.
Moreover, he codified his declaration of African language independence in co-writing The Asmara Declaration, which has been widely translated. It advocates for the importance and recognition of African languages and literatures.
Second, literature and writing are a world and not a country. Every single place and language can be omnicentric: translation can overcome any border, boundary, or geography and make understanding universal. Be it Shakespeare’s English, Dante’s Italian, Ngũgĩ’s Gĩkũyũ, the Bible’s Hebrew and Aramaic, or anything else, big or small.
Third, on a more personal level, when I first met Ngũgĩ, I was a European American literary scholar and a poet with little knowledge of Africa and its literature and languages, much less of Ngũgĩ himself. He was its favourite son. But this didn’t stop him from giving me the idea and making me understand how African languages contained the seeds of an African Renaissance if only they were allowed to grow.
I knew that the historical European Renaissance rooted, grew, flourished and blossomed through its writers in European vernacular languages. English, French, German, Italian, Spanish and more took the place of Latin in expressing the best that was being thought and said in their countries. Yet translation between and among these languages as well as from classical Latin and Greek culture, plus biblical texts and cultures, made them ever more widely shared and understood.
Read more: Drama that shaped Ngũgĩ’s writing and activism comes home to Kenya
From Ngũgĩ discussing African languages I took away a sense that African writers, storytellers, people, arts, and cultures could create a similar paradigm and overcome colonialism, colonial languages, neocolonialism and anything else that might prevent greatness."
Jabulani Sikhakhane" https://theconversation.com/3-things-ngugi-wa-thiongo-taught-me-language-matters-stories-are-universal-africa-can-thrive-258074 #metaglossia_mundus
Businesses in Quebec are now required to follow updated regulations for French-language commercial signage and packaging
"New French signage rules now mandatory for businesses in Quebec with fines up to $90,000
It's official: Companies like Canadian Tire, Best Buy, and Second Cup must now add French descriptions to their storefronts, covering two-thirds of the text space.
Despite a request from business groups to extend the deadline, as of Sunday, June 1, 2025, several French-language requirements related to commercial signage and packaging came into force under Law 14 (formerly Bill 96).
With French now required to be the dominant language on store signs, and with stricter guidelines in place for product packaging, the key change is that any business name featuring a specific term (such as a store name) in a language other than French and visible from outside must now be accompanied by French wording—such as a generic term, a description, or a slogan—to ensure the clear predominance of French.
This also applies to recognized trademarks, whether fully or partially in another language, if they appear in signage visible from outside a premises.
"Visible from outside" includes displays seen from the exterior of a building or structure, within a shopping mall, or on terminals and standalone signage like pylons.
Photograph: Stéphan Poulin
What to know about Quebec’s new language rules?
Under the new rules, French must occupy twice the space of other languages on storefronts, meaning businesses with English names must add prominent French descriptions.
While trademarks can remain in other languages, new rules require generic terms within them—like "lavender and shea butter"—to be translated into French.
Critics warn this could limit product availability if global suppliers don't adapt, pushing customers to online retailers.
Quebec's language requirement, previously for businesses with 50+ employees, now applies to those with 25–49 staff, who must register with the language office—even if no changes are ultimately needed.
Businesses that violate the new rules face fines from $3,000 to $30,000 per day, rising to $90,000 for repeat offences—though officials say penalties may be delayed if efforts to comply are underway.
What is Bill 96 Quebec 2025? Bill 96, which amends Quebec's Charter of the French Language, introduces changes that impact businesses in Quebec, particularly regarding language use in commerce and business.
Specifically, on June 1, 2025, a key element of Bill 96 regarding trademarks comes into effect, requiring translation of descriptive or generic terms within trademarks into French." By Laura Osborne Editor, Time Out Canada Monday June 2 2025 https://www.timeout.com/montreal/news/new-french-signage-rules-now-mandatory-for-businesses-in-quebec-with-fines-up-to-90-000-060225 #metaglossia_mundus
"UNESCO Highlights the Value of Indigenous Languages as Intangible Cultural Heritage in Ecuador and Beyond
Anne Gael Bilhaut 2 June 2025 Puyo, Pastaza – Organized by the Provincial Government of Pastaza and the Consortium of Provincial Autonomous Governments of Ecuador (CONGOPE), the Intercultural Meeting of Provincial Governments on Heritage Languages in the Territory of Ecuador took place on May 30, 2025. As part of the event, the UNESCO Office in Ecuador contributed a keynote lecture by Julio César Guanche, Programme Specialist in Social and Human Sciences.
During his address, Guanche emphasized that “the loss of a language does not only mean the extinction of a means of communication, but also the disappearance of ancestral knowledge, social practices, traditional health systems, and unique ways of understanding the world.”
He highlighted UNESCO’s leadership in global efforts to safeguard this living heritage, and Ecuador’s commitment to developing safeguarding plans for endangered languages such as Zápara.
The lecture also addressed the international normative framework, focusing on the 2003 Convention for the Safeguarding of the Intangible Cultural Heritage and the Los Pinos Declaration (2020), which guides the actions of States and communities within the framework of the International Decade of Indigenous Languages (2022–2032). “Indigenous languages are essential vehicles for cultural transmission, and their preservation is key to sustainable development, social cohesion, and community resilience,” Guanche stated.
The presentation also underscored that in Ecuador, cultural heritage—both tangible and intangible—is recognized as a human right. The 2008 Constitution guarantees the right to cultural identity and social memory. In this context, Guanche remarked that “declaring cultural heritage is not an end in itself, but a means to ensure the collective right to culture, memory, and identity.”
The conference concluded with a call to strengthen the participation of Indigenous peoples in decision-making processes, expand access to linguistic technologies, and implement intercultural public policies that recognize Indigenous languages as pillars of humanity’s cultural diversity." https://www.unesco.org/en/articles/intercultural-meeting-provincial-governments-heritage-languages-ecuador #metaglossia_mundus
AI: Explore how Artificial Intelligence is reshaping human cognition, creativity, and the implications of an algorithm-driven world. Are we risking conformity and mediocrity or unlocking new intellectual potential?
" Is AI sparking a cognitive revolution that will lead to mediocrity and conformity? What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?
PTI Updated On Jun 3, 2025 at 04:25 PM IST
Highlights
The rise of artificial intelligence is reshaping cognitive processes in various fields, prompting concerns about the potential loss of originality and depth in creative work as reliance on AI tools increases.
Generative AI, while capable of producing competent-sounding content, often lacks true creativity and originality, as it predominantly reflects and rearranges existing human-created material.
The challenge posed by the cognitive revolution driven by artificial intelligence is not only technological but also cultural, as it raises questions about preserving the irreplaceable value of human creativity amid a surge of algorithmically generated content.
Artificial Intelligence began as a quest to simulate the human brain.
Is it now in the process of transforming the human brain's role in daily life?
The Industrial Revolution diminished the need for manual labour. As someone who researches the application of AI in international business, I can't help but wonder whether it is spurring a cognitive revolution, obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide.
Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time - and teachers use similar tools to provide feedback.
The economic and cultural implications are profound.
What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?
Echoes of the industrial revolution
We've been here before.
The Industrial Revolution replaced artisanal craftsmanship with mechanised production, enabling goods to be replicated and manufactured on a mass scale.
Shoes, cars and crops could be produced efficiently and uniformly. But products also became more bland, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance.
Today, there's a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality.
The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and "good enough," there's the risk of losing the depth, nuance and intellectual richness that define exceptional human work.
The rise of algorithmic mediocrity
Despite the name, AI doesn't actually think.
Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they have processed.
They are, in essence, mirrors that reflect collective human creative output back to users - rearranged and recombined, but fundamentally derivative.
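The "statistical prediction" idea described above can be sketched in a few lines of Python: a toy bigram model that counts which word tends to follow which. This is an illustration only (the corpus and function names are invented for this sketch); production systems like ChatGPT use vastly larger neural networks, but the underlying principle of predicting the next token from patterns in prior text is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text real models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it followed "the" more often than "mat" or "fish"
```

The sketch makes the article's point concrete: the model can only recombine what was already in its training data, which is why such systems are "fundamentally derivative" while still sounding fluent on familiar patterns.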
And this, in many ways, is precisely why they work so well.
Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has been there before, in one form or the other.
Generative AI excels at producing competent-sounding content - lists, summaries, press releases, advertisements - that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when "good enough" is, well, good enough.
When AI sparks - and stifles - creativity
Yet, even in a world of formulaic content, AI can be surprisingly helpful.
In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance.
However, further analysis revealed a critical trade-off: Reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, which is a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges.
I wasn't surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and world views of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate.
More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions.
One set of experiments tasked participants with making medical diagnoses with the help of AI. However, the researchers designed the experiment so that AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt those biases and make errors in their own decisions.
What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality - not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself.
Navigating the cognitive revolution
True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience. These are qualities AI cannot replicate. It cannot invent the future. It can only remix the past.
What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness.
The challenge, then, isn't just technological. It's cultural.
How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?
The historical parallel with industrialisation offers both caution and hope. Mechanisation displaced many workers but also gave rise to new forms of labour, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs.
This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention.
Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence?
The answer, for now, is up in the air. Published On Jun 3, 2025 at 04:20 PM IST https://brandequity.economictimes.indiatimes.com/amp/news/digital/ai-the-catalyst-for-a-cognitive-revolution-or-a-slip-into-mediocrity/121595312
"KUALA LUMPUR: With more than four decades of dedication as a sign language interpreter, Tan Lee Bee is the face seen by millions of Malaysians watching the news programmes on television.
Often seen at the corner of the television screen showing her expressive facial expressions, Tan, 64, was grateful to have been awarded the Bintang Ahli Mangku Negara (AMN) from His Majesty Sultan Ibrahim, King of Malaysia, during the investiture ceremony in conjunction with the King’s official birthday celebration at Istana Negara yesterday.
She described the award as a great recognition for her in the “silent” struggle to convey the voices of the “voiceless”.
“I feel happy, moved; I feel like crying too... I feel very grateful,” the veteran sign language interpreter told Bernama after the ceremony.
Although the award was celebrated in a ceremonial atmosphere at the palace, for Tan, it came with long memories and the twists and turns of life in a career that received little public attention.
“This recognition is not just for me but a symbolic tribute to every interpreter who works silently for fairer inclusivity in Malaysia,” she said.
Tan’s journey as a sign interpreter began not because of ambition but because of love and empathy for her younger sister, who is deaf.
“I learnt sign language because of my sister. Then, I started working as a teacher for the deaf. I was a teacher for 17 years.
“While I was a teacher, I also served as an interpreter with the Federal Court of Malaysia,” said Tan, who also became the first court sign interpreter in Malaysia in 1994.
The contribution of the Segamat native to the world of broadcasting began earlier in 1985 when she appeared on television through the show Selamat Pagi Malaysia.
Her face and sign language actions became important visuals in news broadcasts on RTM as well as in official communication videos, advertisements and various broadcasting mediums, which played an important role in conveying information to special groups."
Tuesday, 03 Jun 2025
https://www.thestar.com.my/news/nation/2025/06/03/royal-recognition-for-sign-language-interpreter
#metaglossia_mundus
Türkiye passes law allowing the Religious Affairs Directorate to confiscate Quran translations it deems theologically inappropriate
"Türkiye’s Parliament has taken a significant step in altering the balance between state and religion by approving a bill that grants the Directorate of Religious Affairs (Diyanet) sweeping authority over Quran translations. The legislation was passed in the Planning and Budget Committee and introduces major revisions to the law that defines Diyanet’s institutional mandate.
The most contentious provision allows Diyanet to examine Quran translations—known in Turkish as "meal"—either on its own initiative or upon the request of public or private entities. If the content is found “objectionable” from the perspective of “Islam’s fundamental characteristics,” Diyanet can apply to judicial authorities to halt its publication and distribution.
In cases where an unfavorable ruling is issued, all previously distributed copies of the translation can be ordered for confiscation and destruction. This power not only applies to newly published works but also includes texts that were already printed and circulated prior to the review.
Ali Erbaş, President of the Directorate of Religious Affairs, January 30, 2025. (Photo: AA)
Expanded jurisdiction: From bookshelves to servers
The bill also extends Diyanet’s reach into the digital realm. Quran translations published online may be subjected to similar scrutiny, and if found problematic, Diyanet can request the judiciary to remove the content or block access to it. This means that theological material on websites, blogs, or social media could be taken down through court orders.
While legal challenges to such actions are allowed within two weeks, appeals will not suspend the enforcement of takedown orders. Content may be removed or destroyed even as legal objections are under review, a move that critics argue undermines procedural fairness and due process.
If no appeal is filed, or if the appeal is rejected, the destruction of the materials becomes permanent. Legal experts caution that this process could lead to irreversible censorship, especially if the criteria for what is deemed “objectionable” remain vague.
Legal and religious backlash over ambiguous standards Legal scholars have been quick to highlight the broad and undefined nature of the phrase “incompatibility with the fundamental characteristics of Islam.” Without a precise legal definition, the clause leaves ample room for arbitrary interpretation and enforcement by state authorities.
This ambiguity, they argue, violates essential constitutional principles such as legal certainty and foreseeability. Under Türkiye’s Constitution, these principles are fundamental to the rule of law and are meant to prevent discretionary and unpredictable governance.
Some religious scholars have also voiced concern that the regulation gives Diyanet unchecked power to determine theological correctness. Some theologians warn that this could lead to the suppression of alternative Islamic interpretations and label the move as reminiscent of an “inquisition-style” enforcement mechanism.
Rather than banning works it finds contrary to its official religious stance, policy-makers could have had Diyanet publish counter-arguments and critical analyses instead.
The official logo of the Directorate of Religious Affairs, Diyanet. (Photo: AA)
Precedents show pattern of increasing control
The new legal changes follow a series of precedents in which Diyanet has already taken steps to restrict religious publications. In 2023, a Quran commentary by a Turkish theologian was banned by court order after Diyanet’s legal department flagged it as objectionable.
More recently, this week, a publishing house applying for a barcode for a previously printed Quran translation was denied approval unless the text was submitted for review. This occurred even before the proposed legal changes were enacted, indicating how Diyanet had already begun operating under an expanded interpretation of its authority.
These examples demonstrate a trend toward increasing arbitrary control over religious scholarship and content. The formalization of these powers through law now risks transforming Diyanet into a gatekeeper for what may or may not be considered acceptable religious material.
Religious monopolization Observers warn that the law may empower Diyanet to impose a singular, state-sanctioned interpretation of Islam. This centralization of religious authority could threaten theological diversity and restrict the ability of independent scholars to contribute to public discourse.
Civil society actors point out that the law could be used to target religious groups or viewpoints that are not aligned with the prevailing political ideology. The potential for selective enforcement raises concerns about ideological censorship masquerading as religious protection.
By linking religious orthodoxy to state enforcement, Türkiye risks normalizing a model in which alternative interpretations are not only marginalized but actively criminalized. This would mark a serious departure from the country’s historically more pluralistic approach to religion.
A Friday sermon at Uhud Mosque in Akyazı, Ordu, October 13, 2023. (Photo: Diyanet website)
More state, in interpretation and elsewhere
For decades, Türkiye stood apart from many Muslim-majority countries by allowing a relatively wide spectrum of Islamic thought. This openness made the country a hub for theologians and thinkers who sought refuge from censorship in their own countries.
In contrast, countries like Saudi Arabia and Iran enforce rigid control over religious texts, including Quran translations. In these states, only officially sanctioned interpretations are permitted, and dissenting works are routinely banned or destroyed.
Türkiye's new approach brings it closer to these restrictive models. By eroding pluralism in religious interpretation, the country risks losing its unique position as a space for independent Islamic scholarship in the region.
Repercussions for Islamic thought and religious mobility The legal changes may have broader geopolitical implications. Scholars from countries with limited religious freedom have previously viewed Türkiye as a place where they can publish or teach freely. That opportunity may now disappear.
Moreover, thinkers who publish materials critical of official doctrines elsewhere may now face legal risk in Türkiye, since theological disputes could give rise to official demands for their extradition.
The move could also undermine Türkiye’s credibility as a moderate model and weaken its soft power in its sphere of influence, particularly in places such as Syria.
While the current focus appears to be on marginal or unorthodox interpretations, the sweeping nature of the new powers leaves the door open for future governments to redefine acceptable religious thought according to their own ideological preferences.
By Enes Berna Kilic, June 02, 2025 09:02 AM GMT+03:00"
https://www.turkiyetoday.com/nation/turkiye-grants-religious-authority-power-to-censor-and-destroy-quran-translations-3202190
#metaglossia_mundus
"That time “AI” translation almost caused a fight between a doctor and my parents
By Thom Holwerda, 2025-06-02
What if you want to find out more about the PS/2 Model 280? You head out to Google, type it in as a query, and realise the little “AI” summary that’s above the fold is clearly wrong. Then you run the same query again, multiple times, and notice that each time, the “AI” overview gives a different wrong answer, with made-up details it’s pulling out of its metaphorical ass. Eventually, after endless tries, Google does stumble upon the right answer: there never was a PS/2 Model 280, and every time the “AI” pretended that there was, it made up the whole thing.
Google’s “AI” is making up a different type of computer out of thin air every time you ask it about the PS/2 Model 280, including entirely bonkers claims that it had a 286 with memory expandable up to 128MB of RAM (the 286 can’t have more than 16). Only about 1 in 10 times does the query yield the correct answer that there is no Model 280 at all.
An expert will immediately notice discrepancies in the hallucinated answers and will follow, for example, the List of IBM PS/2 Models article on Wikipedia, which will very quickly establish that there is no Model 280.
The (non-expert) users who would most benefit from an AI search summary will be the ones most likely misled by it.
How much would you value a research assistant who gives you a different answer every time you ask, and although sometimes the answer may be correct, the incorrect answers look, if anything, more “real” than the correct ones?
↫ Michal Necasek at the OS/2 Museum
This is only about a non-existent model of PS/2, which doesn’t matter much in the grand scheme of things. However, what if someone is trying to find information about how to use a dangerous power tool? What if someone asks the Google “AI” about how to perform a certain home improvement procedure involving electricity? What if you try to repair your car following the instructions provided by “AI”? What if your mother follows the instructions listed in the leaflet that came with her new medication, which was “translated” using “AI”, and contains dangerous errors?
My father is currently undertaking a long diagnostic process to figure out what kind of age-related condition he has, which happens to involve a ton of tests and interviews by specialists. Since my parents are Dutch and moved to Sweden a few years ago, language is an issue, and as such, they rely on interpreters and my Swedish wife’s presence to overcome that barrier. A few months ago, though, they received the Swedish readout of an interview with a specialist, and pasted it into Google Translate to translate it to Dutch, since my wife and I were not available to translate it properly.
Reading through the translation, it all seemed perfectly fine; exactly the kind of fact-based, point-by-point readout doctors and medical specialists make to be shared with the patient, other involved specialists, and for future reference. However, somewhere halfway through, the translation suddenly said, completely out of nowhere: “The patient was combative and non-cooperative” (translated into English).
My parents, who can’t read Swedish and couldn’t double-check this, were obviously taken aback and very upset, since this weird interjection had absolutely no basis in reality. This readout covered a basic question-and-answer interview about symptoms, and at no point during the conversation with the friendly and kind doctor was there any strife or modicum of disagreement. Still, being in their 70s and going through a complex and stressful diagnostic process in a foreign healthcare system, it’s not surprising my parents got upset.
When they shared this with the rest of our family, I immediately thought there must’ve been some sort of translation error introduced by Google Translate, because not only does the sentence in question not match my parents and the doctor in question at all, it would also be incredibly unprofessional. Even if the sentence were an accurate description of the patient-doctor interaction, it would never be shared with the patient in such a manner.
So, trying to calm everyone down by suggesting it was most likely a Google Translate error, I asked my parents to send me the source text so my wife and I could pore over it to discover where Google Translate went wrong, and if, perhaps, there was a spelling error in the source, or maybe some Swedish turn of phrase that could easily be misinterpreted even by a human translator. After poring over the documents for a while, we came to a startling conclusion that was so, so much worse.
Google Translate made up the sentence out of thin air.
This wasn’t Google Translate taking a sentence and mangling it into something that didn’t make any sense. This wasn’t a spelling error that tripped up the numbskull “AI”. This wasn’t a case of a weird Swedish expression that requires a human translator to properly interpret and localise into Dutch. None of the usual Google Translate limitations were at play here. It just made up a very confrontational sentence out of thin air, and dumped it in between two other sentences that were properly present in the source text.
Now, I can only guess at what happened here, but my guess is that the preceding sentence in the source readout was very similar to a ton of other sentences in medical texts ingested by Google’s “AI”, and in some of the training material, that sentence was followed by some variation of “patient was combative and non-cooperative”. Since “AI” here is really just glorified autocomplete, it did exactly what autocomplete does: it made shit up that wasn’t there, thereby almost causing a major disagreement between a licensed medical professional and a patient.
Luckily for the medical professional and the patient in question, we caught it in time, and my family had a good laugh about it, but the next person this happens to might not be so lucky. Someone visiting a foreign country and getting medicine prescribed there after an incident might run instructions through Google Translate, only for Google to add a bunch of nonsense to the translation that causes the patient to misuse the medication – with potentially lethal consequences.
And you don’t even need to add “AI” translation into the mix, as the IBM PS/2 Model 280 queries show – Google’s “AI” is entirely capable of making shit up even without having to overcome a language barrier. People are going to trust what Google’s “AI” tells them above the fold, and it’s unquestionably going to lead to injury and most likely death.
And who will be held responsible?"
https://www.osnews.com/story/142469/that-time-ai-translation-almost-caused-a-fight-between-a-doctor-and-my-parents/
#metaglossia_mundus
"Prix Albertine Jeunesse is awarded to three works of children’s literature in translation
(c) Jennifer Knotts
By Villa Albertine
The Prix Albertine Jeunesse is awarded in 2025 to Never, not Ever! (Même pas en rêve) by Beatrice Allemagna, Aiko and the planet of dogs (Aiko et la planète des chiens) by Ainhoa Cayuso and Christoffer Ellegaard, and Shepherdess Warriors, volume 1 (Bergères Guerrières, volume 1) by Jonathan Garnier and Amélie Fléchais. It has been an enthusiastic 7th edition of the Prize: another year rich in amazing readings and debates, with 15,596 students gathering to determine the annual reader’s choice award for the best Francophone children’s book in English translation.
New York, June 2, 2025 — Villa Albertine, the French Institute for Culture and Education, the Embassies of France in the United States and Canada, Institut français du Canada, Agence pour l’enseignement français à l’étranger (AEFE) and Albertine Books are thrilled to announce the three winning titles of the 2025 Prix Albertine Jeunesse. Congratulations to these exceptional books and to the thousands of young voters who selected them!
Encouraging children ages 3-11 to choose their favorite book from several works of Francophone youth literature available in English translation, the Prix Albertine Jeunesse fosters a love of reading in both French and English. This year’s winners are Never, not Ever! (Même pas en rêve) by Beatrice Allemagna, translated by Jill Davis; Aiko and the planet of dogs (Aiko et la planète des chiens) by Ainhoa Cayuso and Christoffer Ellegaard, translated by Irene Vázquez; and Shepherdess Warriors, volume 1 (Bergères Guerrières, volume 1) by Jonathan Garnier and Amélie Fléchais, translated by Ivanka Hahnenberger. These books have been honored for their exceptional literary and artistic value, the uniqueness and authenticity of the perspectives they offer to young readers on the world, and their captivating appeal.
Created in 2018, the annual Prix Albertine Jeunesse promotes recently published books translated from French in the US and Canada. This year, 15,596 students (672 classes) in North America participated, reading several books from a bilingual shortlist of various youth titles. Young voters assessed the quality of each book in both French and English languages.
This year, a network of 70 accredited French schools and public bilingual/dual-language schools across North America has integrated the Prix Albertine Jeunesse books into their curricula. This Prize is a tool to strengthen bilingual and multilingual education, fostering connections between French and English. Participating schools receive complimentary educational resources and lesson plans for teachers. Additionally, Villa Albertine organized a series of 32 online author events in March 2025, providing students with direct interactions and discussions with the authors of the shortlisted books.
The 2025 shortlist was selected by a committee of experts representing Villa Albertine, the Embassies of France in the United States and Canada, Albertine Books, and representatives of the AEFE network of North America.
The forthcoming 2026 Prix Albertine Jeunesse shortlist will be available by the beginning of summer on the Albertine website. School classes may register for the 8th edition of the Prize in fall 2025.
The winners of this year’s prize are available for purchase on the Albertine website, alongside all the nominees.
AWARDED BOOKS:
3-5 years old: Never, not Ever! (Même pas en rêve)
by Beatrice Allemagna
translated by Jill Davis
Harper Collins (L’école des loisirs)
A laugh-out-loud tribute to little kids everywhere who would prefer not to leave home on the first day of school. The other animals are marching dutifully to school, but Pascaline couldn’t care less. “Never, not ever!” she declares. And nothing—not even her parents pulling her by her feet—will change her mind. She shrieks so loudly that her parents shrink down to the size of peanuts—becoming just the right size to fit snugly under Pascaline’s wing. Now they can all go to school together!
6-8 years old: Aiko and the planet of dogs (Aiko et la planète des chiens)
by Ainhoa Cayuso and Christoffer Ellegaard
translated by Irene Vázquez
Levine Querido (Les fourmis rouges)
Aiko is a courageous astronaut, specially trained to brave the extremes of space. The whole of humanity is counting on her success. But on a planet that shows signs of life, something goes awry, and when she wakes up, she finds . . . a pack of dogs? And . . . they can talk? Aiko is delighted. This discovery will make her the most famous astronaut on Earth! The dogs are… less delighted. They’re going to keep her prisoner on their planet rather than let humanity find them again.
9-11 years old: Shepherdess Warriors, volume 1 (Bergères Guerrières, volume 1)
by Jonathan Garnier and Amélie Fléchais
translated by Ivanka Hahnenberger
Ablaze (Glénat)
Shepherdess Warriors is the odyssey of a young heroine living great adventures in a medieval-fantasy universe inspired by Celtic legends. It’s been 10 years since the men of the village left to fight in The Great War. But the women quickly took charge of village matters. This is how the Order of Shepherdess Warriors was formed, a group of female fighters chosen among the most courageous and acrimonious, to protect not only their flocks, but also the village! Molly is happy because as soon as she turns 10, she can finally start training, and if she’s good enough, she can join the order. This quest will take her far beyond the boundaries of the land she knows, into the sorcerers’ forest and on the trail of her missing parents."
https://villa-albertine.org/va/press-release/prix-albertine-jeunesse-is-awarded-to-three-works-of-childrens-literature-in-translation/
#metaglossia_mundus
"Mexican novelist Guadalupe Nettel (The Accidentals), Argentine novelist Gabriela Cabezón Cámara (We Are Green and Trembling) and Uruguayan author Fernanda Trías (Pink Slime) offer sharp commentary on Latin American politics, gender roles, and environmental and political corruption in their novels.
As part of the PEN World Voices Festival, these three authors were joined by writer and curator Lily Philpott for a discussion entitled “Bold Voices: Latin American Writers in Conversation” to reflect on Latin American literature, literature in translation, and language.
On Language
Latin America is a melting pot of languages, dialects, and accents. During the panel, the authors discussed the languages they grew up hearing and speaking, and how these languages impacted their writing.
Gabriela Cabezón Cámara: “When I think of languages, I don’t think of something fixed, but of something living. As a girl, my grandfather spoke the Spanish of Spain, which is very different from Argentinian Spanish. I learned English from music and school and tried to learn Chinese. My neighbors spoke Italian and French. The only languages that I didn’t hear growing up were Indigenous languages, which parents did not teach to their children back then. But now we are reclaiming these languages. So in a way, language is always evolving and being revised for me.”
Guadalupe Nettel: “I grew up in a neighborhood full of mixed languages, accents, and dialects from South America.…There are some Mexican writers who believe that we should only write in Mexican Spanish [because we are Mexican]. But Spanish is such a rich language…that I believe that writers should be able to make full use of different phrases, insults, and languages; not just those that we as authors, in theory, correspond with.”
Fernanda Trías: “I have lived in many countries over the past twenty years, including Colombia, Chile, and France. I ask myself what country I really belong to, because I sound like a foreigner wherever I go, including Uruguay. Even my mother tells me I have an accent. From moving, I have learned that language is a way to understand the place that you are in, and this has deeply affected me as a writer.”
On Literature in Translation
All three authors’ works were written in Spanish and translated into other languages, including English, French, and Italian. Here, Nettel and Trías discuss what it is like to have their work translated and to enter into a relationship with a translator.
Guadalupe Nettel: “I always feel tension when translating my work. It’s that you are not just translating into another language, but another culture. A good translator is a writer and poet too, one that does not cut out irony, nuance, and poetry from the book when the other language doesn’t have those exact sounds or plays on words, but recaptures them for another culture in the literary production.”
Fernanda Trías: “I was very involved when my novel was translated into English and French, which I speak. But there are translations of my book that happened into languages I do not speak, and some translators had a lot of questions about the text while others had none. For those where the translator did not have questions, I must accept that those translated texts are different bodies of work, different novels altogether.”
Check out the panelists’ books in translation:
We Are Green and Trembling by Gabriela Cabezón Cámara
The Girl From the Orange Grove by Gabriela Cabezón Cámara, translated by Robin Myers
Pink Slime by Fernanda Trías, translated by Heather Cleary
The Accidentals by Guadalupe Nettel, translated by Rosalind Harvey"
By Allison Lee, June 2, 2025
https://pen.org/latin-american-writers-pen-world-voices-festival/ #metaglossia_mundus
"Complete New World Translation Released in Two Languages During May 2025
Kisi
On May 25, 2025, Brother Winston Bestman, a member of the Liberia Branch Committee, released the complete New World Translation of the Holy Scriptures in Kisi. The announcement was made to an audience of 1,029 gathered for a special program held in Guéckédou, Guinea. An additional 2,467 tied in to the program via videoconference from satellite locations in Foya, Liberia, and Koindu, Sierra Leone. Printed copies were distributed to those in attendance. The release was also immediately made available for download from jw.org and in the JW Library app. Additionally, it was announced that audio recordings of the New World Translation in Kisi will gradually be made available for download.
Nearly one million people speak Kisi in what is known as Kissiland, a region where Guinea, Liberia, and Sierra Leone share a common border. More than 1,200 brothers and sisters currently serve in 29 Kisi-language congregations and one group throughout the branch territory. These Kisi-speaking Witnesses rejoice at having an accurate translation of the complete Bible that uses God’s personal name, Jehovah.
Solomon Islands Pidgin
On May 25, 2025, Brother Jeffrey Winder, a member of the Governing Body, released the complete New World Translation of the Holy Scriptures in Solomon Islands Pidgin. The announcement was made during a special program in Honiara, the capital of the Solomon Islands. A total of 2,317 attended the program in person. Another 1,695 tied in via videoconference from two remote locations on the island of Malaita. All in attendance received a printed copy of the New World Translation in Solomon Islands Pidgin. The release was also made available for download from jw.org and in the JW Library app.
The good news first reached the Solomon Islands in 1953. Today, this island nation is home to approximately 757,000 people who speak more than 70 indigenous languages. Although English is the country’s official language, Solomon Islands Pidgin is the most widely spoken language in the Solomon Islands. Currently, some 2,100 brothers and sisters who speak Solomon Islands Pidgin serve in 38 congregations throughout the country."
JUNE 2, 2025 GLOBAL NEWS
https://www.jw.org/en/news/region/global/Complete-New-World-Translation-Released-in-Two-Languages-During-May-2025/
#metaglossia_mundus
"12th Edition of the Prix de la Francophonie pour jeunes chercheurs 2025
The Agence Universitaire de la Francophonie (AUF) is launching the twelfth edition of the Prix de la Francophonie pour jeunes chercheurs (Francophonie Prize for Young Researchers). The prize is open every two years and covers the following two disciplinary fields: Science and Technology, and Humanities and Social Sciences.
The Prix de la Francophonie pour jeunes chercheurs aims to recognize the merit and quality of the research of four Francophone researchers while reflecting the diversity of the Francophone academic space, particularly that of developing countries. The prize honors researchers who have distinguished themselves in their field and earned scientific recognition for the significant advances they have achieved.
How to apply
Applications must be submitted and presented by a senior officer (Rector, President, Director General, etc.) of a higher education and research institution that is a member of the AUF.
The prize is open to candidates aged 40 or under at the closing date of the call for applications who hold a doctorate, can demonstrate substantial and innovative research activity, and are affiliated with AUF member institutions.
Application files must be submitted no later than June 27, 2025 (6:00 p.m. GMT) via an online form on the AUF platform.
Award ceremony and prizes
The 2025 Young Researchers Prize will be announced and awarded during the 9th General Assembly and the 5th World Week of Scientific Francophonie (SMFS), to be held in Dakar, Senegal, from November 3 to 7, 2025.
Each laureate will receive an official certificate and a monetary award of €5,000 (five thousand euros) to encourage them to continue their work.
Contact and rules
For all information on the selection criteria, eligibility conditions, and contents of the application file, please consult the call for applications.
For further details or additional information, write to activites-instances@auf.org
Publication date: 21/05/2025
Deadline: 27/06/2025"
https://www.auf.org/nouvelles/appels-a-candidatures/12e-edition-du-prix-de-la-francophonie-pour-jeunes-chercheurs/
#metaglossia_mundus