Scooped by
Charles Tiayon
February 4, 2022 7:30 PM
CALL FOR PAPERS: COLLECTED VOLUME “THE COMPLEXITY OF SOCIAL-CULTURAL EMERGENCE: BIOSEMIOTICS, SEMIOTICS AND TRANSLATION STUDIES”
Editors: Kobus Marais, Reine Meylaerts, Maud Gonne

Conceptualization

Since the emergence of complexity thinking, scholars from the natural and social sciences as well as the humanities have been renewing efforts to construct a unified framework that would unite all scholarly activity. The work of Terrence Deacon (2013), at the interface of (at least) physics, chemistry, biology, neurology, cognitive science, semiotics, anthropology and philosophy, is a great, though not the only, example of this kind of work. It is becoming clear that this paradigm of complex relational and process thinking means, among other things, that the relationships between fields of study are more important than the differences between them. Deacon’s contribution, for instance, lies not (only) in original findings in any of the fields in which he works but (also) in the ways in which he relates bodies of knowledge to one another. An example would be his links between a theory of work (physics) and a theory of information (cybernetics) by means of a theory of meaning (semiotics). This line of thinking indeed situates semiotics and biosemiotics at the centre of the abovementioned debate (see also Hoffmeyer, 2008; Kauffman, 2012). In semiotics, Susan Petrilli’s (2003) thought-provoking collection covers a wide variety of chapters focused on translation, which she conceptualizes as a semiotic process. Her work made it possible to link biosemiotics and semiotics through the notion of “translation”, which is what we aim to explore further in this book. Michael Cronin’s work in translation studies links up with the above through his use of the notion of “ecology”. To apprehend interconnectedness and vulnerability in the age of the Anthropocene, his work challenges text-oriented and linear approaches while engaging in eco-translational thinking.
He calls the “tradosphere” all translation systems on the planet, all the ways in which information circulates between living and non-living organisms and is translated into a language or a code that can be processed or understood by the receiving entity (Cronin, 2017, p. 71). The aptness of Cronin’s work on ecology finds a partner in that of Bruno Latour, whose development of a sociology of translation (2005) responds to the need to reconnect the social and natural worlds and to account for the multiple connections that make up what he calls the “social”. In a further effort to work out the implications of this new way of thinking, Marais (2019, p. 120) conceptualized translation in terms of “negentropic semiotic work performed by the application of constraints on the semiotic process” (see also Kress, 2013). Building on Peirce’s insight that the meaning of a sign is its translation into another sign, translation is defined as a process that entails semiotic work done by constraining semiotic possibilities. This conceptualization allows for the study of all forms of meaning-making, i.e. translation, under a single conceptual framework, and it also allows for a unified ecological view for both the sciences and the humanities. “The long-standing distinction between the human and social sciences and the natural and physical sciences is no longer tenable in a world where we cannot remain indifferent to the more than human” (Cronin, 2017, p. 3). These kinds of approaches open ample possibilities for a dialogue between translation studies, semiotics and biosemiotics, exploring translation not only in linguistic and anthropocentric terms, but as a semiotic process that can take place in and between all (living) organisms, human and non-human, organic and inorganic, material and immaterial alike.
Not only the translation of Hamlet into French, or of oral speech into subtitles, but also communication between dolphins or between a dog and its master, moving a statue from one place to another, or rewatching a film are translation processes. However, many of the implications of this line of thinking still need to be explored, and if the references to Deacon, Petrilli and Cronin hold, this should be done in an interdisciplinary way that tests, transgresses and transforms scholarly boundaries.

Based on the conference that took place in August 2021, we call for papers for an edited volume in which we hope to draw together biosemioticians, semioticians and translation studies scholars to discuss the interdisciplinary relations between these fields and the implications of these relations for the study of social and cultural reality as emerging from both matter and mind. We invite colleagues who presented at the conference as well as those who did not to submit theoretical, data-driven or mixed proposals reflecting on the complexity of social-cultural emergence as a translation process. Some of the topics that colleagues could consider would be the following:

Is translation, as semiotic work and process, indeed able to link all of the biological world, including humans, with the non-living world in one ecology, and if so, how?
What conceptual constructs in each of the three fields are relevant for the other fields, and how?
Could the fields learn methodological and epistemological lessons from one another? If so, what would these entail?
Could collaborative scholarship enhance an understanding of social-cultural emergence, and if so, what would this scholarship entail?
How, if at all, do entropy and negentropy play out differently in social-cultural systems compared to biological and/or physical systems?
How does social-cultural emergence differ from biological and even physical emergence?
Systems thinking tends to ignore differences such as the intentionality of biological agents in contrast to physical agents. Thus, if one were to consider the possibility that intention has causal effect, how does one factor intention into thinking about complex adaptive systems?

References

Cronin, M., 2017. Eco-translation: Translation and ecology in the age of the Anthropocene. New York: Routledge.
Deacon, T. W., 2013. Incomplete nature: How mind emerged from matter. New York: W. W. Norton & Company.
Hoffmeyer, J., 2008. Biosemiotics: An examination into the signs of life and the life of signs. London: University of Scranton Press.
Kauffman, S., 2012. From physics to semiotics. In: S. Rattasepp & T. Bennet, eds. Biosemiotic gatherings. Tartu: University of Tartu Press, pp. 30-46.
Kress, G., 2013. Multimodal discourse analysis. In: J. P. Gee & M. Handford, eds. The Routledge handbook of discourse analysis. New York: Routledge, pp. 35-50.
Latour, B., 2005. Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
Marais, K., 2019. A (bio)semiotic theory of translation: The emergence of social-cultural reality. New York: Routledge.
Petrilli, S., ed., 2003. Translation Translation. Amsterdam: Rodopi.

Timeline

We are currently in negotiations with a pre-eminent publisher who is interested in this proposal, but we need to submit a list of abstracts with the proposal. In order to achieve this, we foresee the following timeline:

Submission of abstracts: 1 April 2022
Decision on abstracts: 15 April 2022
Submission of papers for peer review (if proposal is accepted): 1 December 2022
Feedback from peer reviewers: 1 February 2023
Submission of reworked papers: 1 April 2023
Submission of manuscript: 1 June 2023
Publication: End of 2023
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with 235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"This study explores Machine Translationese (MTese) -- the linguistic peculiarities of machine translation outputs -- focusing on the under-researched English-to-Chinese language pair in news texts. We construct a large dataset consisting of 4 sub-corpora and employ a comprehensive five-layer feature set. Then, a chi-square ranking algorithm is applied for feature selection in both classification and clustering tasks. Our findings confirm the presence of MTese in both Neural Machine Translation systems (NMTs) and Large Language Models (LLMs). Original Chinese texts are nearly perfectly distinguishable from both LLM and NMT outputs. Notable linguistic patterns in MT outputs are shorter sentence lengths and increased use of adversative conjunctions. Comparing LLMs and NMTs, we achieve approximately 70% classification accuracy, with LLMs exhibiting greater lexical diversity and NMTs using more brackets. Additionally, translation-specific LLMs show lower lexical diversity but higher usage of causal conjunctions compared to generic LLMs. Lastly, we find no significant differences between LLMs developed by Chinese firms and their foreign counterparts."
Decoding Machine Translationese in English-Chinese News: LLMs vs. NMTs
June 2025
DOI: 10.48550/arXiv.2506.22050
License: CC BY-NC-ND 4.0
Published version
Decoding Machine Translationese in English-Chinese News: LLMs vs. NMTs
Delu Daniel Kong, Lieve Macken
https://www.researchgate.net/publication/393148446_Decoding_Machine_Translationese_in_English-Chinese_News_LLMs_vs_NMTs
#Metaglossia #metaglossia_mundus
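The chi-square feature ranking the abstract describes can be sketched in a few lines. The feature names and values below are invented toy data for illustration only; the study's actual five-layer feature set and corpora are far richer, and this is not the authors' code.

```python
# Toy sketch of chi-square feature ranking for "translationese" detection:
# score each stylistic feature by how strongly it separates original texts
# from machine-translated ones, then rank features by that score.
import numpy as np
from sklearn.feature_selection import chi2

# Rows = documents, columns = candidate translationese features
# (hypothetical examples: mean sentence length, adversative-conjunction
# rate, bracket count). chi2 requires non-negative feature values.
features = ["mean_sent_len", "adversative_rate", "bracket_count"]
X = np.array([
    [22.0, 0.01, 1],   # original text
    [21.5, 0.02, 0],   # original text
    [15.0, 0.08, 4],   # machine translation
    [14.2, 0.07, 5],   # machine translation
])
y = np.array([0, 0, 1, 1])  # 0 = original, 1 = MT output

scores, p_values = chi2(X, y)              # one score per feature
ranking = sorted(zip(features, scores), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: chi2={score:.2f}")
```

The highest-ranked features would then feed a classifier or clustering step, which is how shorter sentence lengths and adversative conjunctions could surface as the most discriminative markers.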
"24 Arabic tales reach global readers
RIYADH: The King Abdulaziz Public Library has published 24 children’s stories translated from Arabic into English, French and Chinese.
The initiative was carried out in collaboration with Princess Nourah bint Abdulrahman University as part of a cultural translation project, the Saudi Press Agency reported on Sunday.
According to the library, one of the stories translated into French is a tale centered on Saudi coffee titled “Hours Pour Le Café Saoudien.”
A large collection of children’s stories written by various authors specializing in children’s literature was also translated into Chinese.
The project aims to share cultural and human values rooted in Arabic literature with a global audience, the SPA reported.
It also seeks to elevate Saudi and Arabic literature on the international stage by providing engaging, age-appropriate content for children of all ages.
The translation of these stories is part of a broader effort to build bridges of communication between cultures and peoples, aligning with the goals of Vision 2030 to enrich global culture with Arabic intellectual and creative output."
ARAB NEWS, 24 August 2025
https://arab.news/6upkt
https://www.arabnews.com/node/2612884/amp
#Metaglossia #metaglossia_mundus
"Czech author’s debut novel translated into 27 languages
23.08.2025
IRYNA BATUREVYCH
The debut novel “Deconstruction of Memory” (Rozložíš pamět, or “Memory Burn”) by Marek Torčík, published in October 2023, has already been translated into 27 languages and is available in over 30 countries, including almost all European countries.
Meanwhile, one of the most translated Ukrainian novels, “Internat” by Serhiy Zhadan, rights for which are held by the German publisher Suhrkamp, has been translated into 28 languages.
According to Pavlina Juračkova, editor and rights manager, this is not the limit for rights sales, but the publishing house Paseka has “hit the limit” of its translation capacity. At the same time, the publisher does not see an English translation as possible.
“At 3:37, the novel’s protagonist is awakened by a phone call, and a late-night conversation with their mother stirs a whirlwind of memories. It takes them back to when they were teenagers in the Moravian town of Přerov, to the experiences of a queer youth growing up in a conservative industrial city, and in a family that always lacks money. This world punishes you for being different,” reads the annotation.
The novel begins with a quote from the novel “The Hour of the Star” by a Brazilian author of Jewish-Ukrainian descent, Clarice Lispector (the Ukrainian translation was published by Anetta Antonenko Publishing): “Is there anyone who has never asked themselves at least once: am I a monster, or is this what it means to be human?”
The Czech title of the novel reflects the protagonist’s desire to “deconstruct memory into pieces,” analyzing not only their memories (of their mother, father, grandfather, bowling, alcoholism, and coming out), but also the family’s “collective memory.”
In 2024, the novel received multiple awards, including the Czech literary prize Magnesia Litera and the Jiří Orten Award, which is awarded to young poets and writers under 30. Furthermore, one of its translations received the Susanna Roth Award.
The editor believes that its international popularity was boosted by the local themes that connect with global issues, as well as the personal narrative that mirrors modern social challenges.
The rise in the book’s popularity happened partly during a period of increasing challenges to freedom of speech and human rights in the US and Europe, including bans on books accused of “promoting homosexuality” and restrictions on holding Pride events.
“In recent years, many topics, previously overlooked or underrepresented, have started to receive more and more attention, and groups that were historically marginalized are gaining greater visibility through public discussions. This shift has led to varied reactions, including some resistance to these changes and attempts to restrict certain content,” notes Juračkova. “However, it is important to support honest voices that offer alternative perspectives and contribute to a more inclusive public dialogue. This applies not only to ‘Deconstruction of Memory’, but also to other important books.”
Torčík is a Prague-based poetry and prose writer and cultural journalist, holding a Master’s degree in English Literature and Culture from Charles University in Prague. Torčík’s debut poetry collection, “Rhizomy” (Roots), was published in 2016. Following the success of “Deconstruction of Memory,” the author is preparing a second novel set in the remote corners of northern Moravia, where history and myths resurface as reflections of the present.
Paseka publishing house releases both Czech and foreign authors translated into Czech, covering fiction, nonfiction, and children’s literature. Since its founding in 1989, Paseka has introduced over 1,600 titles to readers and currently produces around 40 new titles annually. Among its authors are Alice Munro, Salman Rushdie, Timothy Snyder, Susan Sontag, and others.
Rights sales began in 2022, and since then, 100 licenses have already been sold.
Copy editing: Joy Tataryn"
https://chytomo.com/en/czech-author-s-debut-novel-translated-into-27-languages/
#Metaglossia
#metaglossia_mundus
"Ireland: Biblical passages replaced with inclusive texts
23 AUGUST 2025
The new lectionary for Mass in Ireland will use more inclusive language in the biblical readings, which should "draw the faithful more deeply into the Word of God," according to Fr. Neil Xavier O'Donoghue, executive secretary for liturgy of the Irish bishops.
An expert explained that the changes to the Mass readings will use "more inclusive" language and will not exclude women as previous translations did.
Based on the 2019 Revised New Jerusalem Bible, the updated text will replace the lectionary based on the Jerusalem Bible, which has been in use for more than fifty years. The changes include replacing some occurrences of "man" or "men" with expressions such as "men and women," "sisters and brothers," or "people," for example.
Fr. O'Donoghue added that the translation is "more inclusive, it is not 'woke'," but, as InfoCatolica aptly comments, it is evident that only such an ideology could carry out a project of this nature.
The expert further explained: "When the word 'man' needs to be used, 'man' is used. But sometimes something else is used. It is a more thoughtful use of language."
And he concluded: "I don't think the RNJB translation is woke. I think it is simply normal in standard English to say that there is a difference between 'brothers' and 'brothers and sisters.' If I ask 'how many brothers and sisters do you have?', that is understood as a different question from 'how many brothers do you have?'. I would say the RNJB translation takes greater account of the fact that the Bible addresses both men and women."
The public consultation in Ireland showed strong support for this approach, with more than 150 individual submissions and nearly all of the 20 responding organizations in favour of using inclusive language where appropriate.
This woke deviation will also be exported
The lectionary project is a joint initiative of the bishops' conferences of Ireland, Australia and New Zealand. The bishops of the three countries have been invited to review and comment on the preliminary texts, currently being revised by a working group of experts in Scripture and liturgy.
Fr. O'Donoghue stressed that the updated text aims to foster a deeper connection with the Scriptures: "The idea is that Catholics grow in their appreciation of the Word of God, that they have a deeper encounter with Christ in the Word of God when they attend Mass. The new lectionary is an opportunity for liturgical formation."
A profound error of perspective
Saint John Chrysostom coined this fine formula: Holy Scripture is like a letter sent by a Father to his children journeying on earth. Would one want to change the wording of an old letter because it does not fit the fads of the moment?
But the main problem is that Holy Scripture is a Revelation from God, a given, situated in time, even if its content is addressed to all. To change its formulas is to attack the inspiration of the Holy Spirit who delivered them to us as they are.
It is to consider that the Spirit who "dictated" them, who inspired them in theological language, was not capable of embracing all ages and of considering our own. In other words, it is to censor him.
It is, finally, to show that one is incapable of rightly explaining Holy Scripture and of accepting it as it is, that is, without inclusion. Every era, even since the Apostles, has needed such an explanation. It is not by changing or transforming the text that this can be achieved.
(Sources: Irish Catholic/InfoCatólica – FSSPX.Actualités)"
https://fsspx.news/fr/news/irlande-des-passages-bibliques-remplaces-par-des-textes-inclusifs-54016
#Metaglossia
#metaglossia_mundus
"Abstract: The book Linguistics for Translators, coauthored by Ali Almanna and Juliane House, represents an in-depth exploration of how linguistic theory intersects with translation practices. Its goal is to equip translators with a comprehensive understanding of key linguistic domains. It covers essential areas of linguistics, including phonetics, morphology, syntax, semantics, pragmatics, and discourse analysis, illustrating how each field contributes to more accurate and effective translations. The authors present practical examples and highlight multilingual perspectives, focusing on languages such as English, German, Arabic, French, and Chinese, thus enabling them to identify the challenges and nuances associated with translating between languages that feature different syntactic, morphological, and phonological systems. This work emphasizes the importance of understanding sociolinguistic variations and cultural contexts, and it offers strategies that can be used to manage dialects, registers, and idiomatic expressions while maintaining the intended meaning and tone across different languages. It also explores cognitive linguistics, particularly with respect to how language reflects thought processes, which can help translators preserve the conceptual and metaphorical meanings of source texts. While this book excels in its presentation of linguistic theories and their applications, it also has certain limitations, including a strong emphasis on theoretical concepts, which may be challenging for readers seeking more practical translation exercises. Additionally, this book’s coverage is somewhat narrow with respect to modern technological tools, including computer-assisted translation, machine translation (MT), and artificial intelligence (AI) translation, which are increasingly relevant in the translation industry. 
Overall, by encouraging readers to consider both the cultural norms and cognitive patterns that shape language use, this work serves as a valuable resource for translators, as it provides theoretical insights and practical examples that help them navigate idiomatic expressions, cultural references, and conceptual differences across languages to produce more accurate, culturally sensitive, and cognitively informed translations."
Review
Open access
Published: 22 August 2025
Book review: Linguistics for Translators by Ali Almanna and Juliane House
Zhengbing Liu
Humanities and Social Sciences Communications volume 12, Article number: 1373 (2025) Cite this article
https://www.nature.com/articles/s41599-025-05738-3
#Metaglossia
#metaglossia_mundus
"The multilingual edge: How local languages can future-proof India’s education
By Subha Sankar Chatterjee / August 23, 2025
Walk into a primary school in a small town in Odisha, or a village in Madhya Pradesh, and you will notice a quiet but powerful struggle. Children often encounter their first lessons in English or Hindi, languages alien to their homes. For many, learning becomes not a process of discovery but a daily exercise in translation. Comprehension falters, curiosity is dulled, and dropout rates climb. This is the paradox of India’s education system: in a nation with over 22 official languages and hundreds of spoken tongues, we continue to imagine excellence primarily through the prism of English. Yet if India is to future-proof its education system and unlock its demographic dividend truly, the answer may not be more English, but more multilingualism.
India’s obsession with English as the language of aspiration has served us well in certain contexts. It has opened global doors for our IT industry, created a cosmopolitan managerial class, and connected us to international business. But the downside has been equally stark. Over 70% of Indian children in government schools still struggle to achieve grade-level reading proficiency by age 10, according to ASER surveys. Much of this is because they are taught in a language they do not speak at home. Cognitive science has an unambiguous verdict on this – children learn best in their mother tongue, especially in the foundational years. Starting education in an unfamiliar language can delay literacy, distort comprehension, and weaken critical thinking. It creates learners who can “read without understanding”—a phenomenon all too visible in classrooms today.
What if, instead of treating local languages as obstacles, we treated them as assets? Research worldwide shows that multilingual children have higher problem-solving ability, better memory, and greater adaptability. In a world where adaptability is the ultimate skill, India’s linguistic diversity could become our hidden superpower. Embedding regional languages alongside English can create a dual advantage. Children learn foundational concepts in their mother tongue, ensuring clarity and confidence, while gradually acquiring English as a bridge to global opportunities. This is not about rejecting English – it is about sequencing learning so that comprehension precedes fluency.
The National Education Policy of 2020 took an important step in this direction by recommending mother tongue or regional language as the medium of instruction at least until Grade 5. This was a recognition that linguistic diversity is not a barrier but a bridge. But policy alone is not enough. It requires curriculum redesign, teacher training, and digital innovation. Imagine AI-powered textbooks that can instantly switch between Hindi, Bengali, or Marathi to explain a concept. Picture learning apps that allow a child in a tribal belt of Jharkhand to master science in Santhali before transitioning to English terminology. These are not futuristic dreams – they are within reach if India invests in multilingual educational technology at scale.
This shift is not just about pedagogy; it is also about dignity. Language is identity. When a child hears their mother tongue in the classroom, it signals recognition of who they are. It embeds inclusion into education. It bridges rural-urban divides, giving first-generation learners confidence that they, too, belong in the nation’s growth story. This is particularly critical for future workforce readiness. As India seeks to become a ten trillion-dollar economy, it will need to unlock talent beyond the metropolitan elite. A multilingual education model ensures that potential is not restricted to those who happen to grow up speaking English.
Global debates on education often revolve around STEM, AI, and skills of the future. But the foundation for all of this lies in comprehension. Without clarity of understanding, no amount of coding classes or robotics labs will create innovators. A multilingual model ensures that comprehension is universal, not elite. Far from being a handicap, India’s linguistic diversity could become our competitive edge. By institutionalising multilingual education, we will not only bridge foundational gaps but also create a workforce that is cognitively sharper, culturally grounded, and globally agile.
There are, of course, challenges. Teacher capacity is limited, and many educators themselves are not trained to switch comfortably across languages. Developing high-quality learning material in multiple languages requires investment and creativity. Parents, too, sometimes resist mother-tongue education, fearing it will disadvantage their children in competitive exams or global careers. These fears are understandable, but they are also misplaced. Evidence suggests that children who gain strong literacy in their first language actually acquire English faster later on. What matters is not choosing one language over another, but ensuring that the sequence of learning respects the child’s cognitive development.
The debate is not ‘English versus local languages.’ It is about recognising that true fluency comes from confidence in one’s first language, coupled with mastery of a global second language. India’s future-ready education system must therefore be multilingual by design, not by default. It must give every child the ability to think deeply in their mother tongue, to communicate seamlessly in English, and to appreciate the richness of India’s linguistic mosaic. Such an education model is not just about producing employable graduates; it is about nurturing empowered citizens who can innovate locally and compete globally.
The way forward will demand partnership between government, technology innovators, publishers, and civil society. It will need creative use of AI and digital platforms to reduce costs and scale access. It will call for a cultural shift where parents and policymakers alike view local languages not as burdens but as bridges. And it will require a narrative that redefines aspiration – not as leaving one’s mother tongue behind, but as carrying it proudly into the future.
In the end, a child who thinks in their mother tongue and works in English will not just be employable – they will be empowered. That is how India can transform its linguistic diversity into an educational dividend, and its classrooms into engines of inclusive growth." https://etedge-insights.com/industry/education/the-multilingual-edge-how-local-languages-can-future-proof-indias-education/ #Metaglossia #metaglossia_mundus
"The Sign Languages Interpreters Association of Fiji (SLIAF) has been launched and is aimed at empowering members who have been described as ‘the silent bridges between worlds’.
The launching event held at the Tanoa Hotel in Suva brought stakeholders together to unveil a mission they’ve planned and put together for two decades.
Association’s board chairperson Claudette Wilson said for decades, Fijian interpreters have been the silent bridges between worlds.
“Ours is a story of love, sacrifice and unyielding advocacy from performing our duties in church pews to Parliament and today we honour that journey.
“As interpreters, we did not just translate words, we dismantled barriers and ensure access across the education sector, justice, media and the community.”"
Sign language interpreters form a new association
By Serafina Silaitoga
https://www.fijitimes.com.fj/sign-language-interpreters-form-a-new-association/
#Metaglossia
#metaglossia_mundus
"DeafBlind mentors and interpreters come to Newport to practice an emerging language based solely on touch AUGUST 21, 2025
Protactile is a language based solely on touch. When two DeafBlind people are conversing, they use all four arms and physical contact is maintained the whole time.
By SHAYLA ESCUDERO/Lincoln Chronicle
NEWPORT – Inside Newport’s Visual Arts Center, a group of 30 people sit with their chairs facing one another, knees together, hands together and tracing over each other’s palms, chest, legs and back. The room is mostly silent, but nearly everyone is in conversation.
All are using an emerging language – a method of communication solely based on touch.
When Jason “Jaz” Herbers’ vision began to change, it took the joy out of American Sign Language. All the facial expressions and signs he learned and loved as a Deaf person fell flat as a DeafBlind person.
Historically, DeafBlind people have been limited to using interpreters or sign language that left gaps in communication. But Protactile, an emerging language that only took shape over the last 20 years, changed how Herbers and other DeafBlind people communicate.
“It gives us more information and we are in direct communication,” Herbers, a DeafBlind teacher from Ohio, said. “It gives us autonomy.”
What began in the Pacific Northwest has now spread all over the United States.
Approximately 25 interpreters and 20 DeafBlind mentors flew to Oregon the past two weeks from all over the United States to attend training sessions as part of a Western Oregon University grant. The training program, which included a stop in Newport, gave interpreters a chance to put their skills into practice.
Connecting
When Lesley Silva-Kopec uses the Protactile expression for beach, her hands trace a rolling wave on the forearm of the person she is speaking with. Then, she places her hands over their upper chest, wiggling her fingers like sunbursts.
That’s one of her favorite expressions, but there are so many she loves. When she first saw a video of people using Protactile, she described the moment as mesmerizing.
When two people are conversing, they use all four arms and physical contact is maintained the whole time. One hand is often used to give feedback, called back channeling, to show you are listening – the equivalent of maintaining eye contact or nodding in the sighted world.
For Silva-Kopec, a DeafBlind educator from New York, it’s difficult to describe the differences between American Sign Language and Protactile because it’s a difference you feel – it’s visceral and transcends words.
“I’m able to reciprocate emotion, to connect more,” she said, “I can feel how others feel when they touch me.”
Silva-Kopec uses Protactile with her husband, who is also DeafBlind, and feels that she is able to connect on a deeper level with him because of it. But she doesn’t just feel more connected to other DeafBlind people, she feels more connected to herself.
“I feel like sometimes touch is a universal way to connect with people,” Silva-Kopec said.
Shayla Escudero / Lincoln Chronicle: Jelica Nuccio, right, one of the founders of Protactile, an emerging language for DeafBlind people, greets a visitor during a visit to Newport this week.
Emerging language
Facing the ocean on Tuesday, interpreters and DeafBlind educators clustered in groups, hands placed on Jelica Nuccio’s soft beige shawl, eager to greet the woman who helped create the language they cherish.
In 2007, when Nuccio served as the first DeafBlind director of the DeafBlind Service Center in Seattle, she began to advocate for DeafBlind people to communicate with each other without the use of interpreters. They established a space where all communication that happened would be by touch and started training DeafBlind people to use it.
“That’s when the concept of touch was introduced but it hadn’t become a language yet,” she said. “That happened through years of community coming together.”
Nuccio didn’t set out to create a language, but it happened anyway.
“I didn’t expect it, it was shocking to know that through data collection and research we found out a language was developing.”
Protactile changed with time and American Sign Language signs were replaced with tactile signs. Now, linguists argue Protactile is its own language, with grammatical structures and expressions separate from American Sign Language.
For some DeafBlind people, Protactile felt like a missing piece had finally fallen into place.
“I had grown up in the deaf community but I felt lost, like something was missing,” said Rhonda Voight-Campbell, a DeafBlind educator from New York. “When I learned Protactile I finally felt connected to the DeafBlind community.”
Now living in Monmouth, Nuccio runs Tactile Communications, which builds curriculum and trains people in Protactile. Several of the DeafBlind educators learned or now work for Nuccio’s organization.
Recently, people came all the way from France to learn, Nuccio said. They spoke French Sign Language, which was a bit of a barrier, but they found common ground. And now they will bring back what they had learned to their DeafBlind community in France, she said.
With time, she believes Protactile will spread even more outside the United States.
Shayla Escudero / Lincoln Chronicle: Interpreters and DeafBlind educators from all over the United States came to Oregon and Newport this week for Protactile training. One of the stops was in Nye Beach overlooking the Pacific Ocean, where interpreters received training and practiced using what they learned.
Visiting Newport to train
In 2021, Western Oregon University was awarded a five-year federal grant through the U.S. Department of Education to train sign language interpreters working with DeafBlind individuals who use the new language. The grant is based on the Rehabilitation Acts of 1973 and 1974, a landmark law that prohibits disability discrimination in programs conducted by federal agencies.
Shayla Escudero / Lincoln Chronicle: Members of the DeafBlind community touch the sculpture in Newport’s Don Davis Park entitled The Absence of Emptiness during a visit and workshop this week. The tactile sculpture has several people carved into wood, with varying texture.
The grant was in part awarded in recognition that DeafBlind people are underemployed and the lack of interpreters is a barrier, said CM Hall. She is the co-director of WOU’s Protactile Language Interpreting National Education program and a Newport city councilor.
DeafBlind educators from all over the United States come to the university to train interpreters, and stopping in Newport allows them a chance to apply what they learn.
“Newport is full of tactile sensations, you can feel the ocean, smell the taffy on the Bayfront, it really perks up the senses,” Hall said.
Newport also has a lot of tactile art. At Nye Beach, their hands followed the markings of relief wood sculpted faces. Along the Bayfront, their hands slid over the ridges of the tire sculptures of the animals in front of Ripley’s Believe it Or Not.
Interpreters put their skills to use from interpreting presentations by Newport City Manager Nina Vetter to interactions along Bayfront stores.
The grant is in its fourth year and has one more year left of funding. But, with the Trump Administration’s pullback of federal grants, there is uncertainty in the air.
“It’s the last day of possibly the last year of the training program,” Herbers said.
It’s a cause for contemplation. As a DeafBlind educator, Herbers worries about DeafBlind people losing their autonomy if they do not access the language and if interpreters aren’t learning directly from DeafBlind people.
He also believes that the sighted world has a lot of work to break down the stigma of touch. DeafBlind people would be cut off from the world without touch – it is how Herbers gets his information. But sighted people may not be used to this type of communication.
“I think that for many people touch is still very taboo,” Herbers said. “I just want people to be open minded.”
Shayla Escudero covers Lincoln County government, education, Newport, housing and social services for Lincoln Chronicle and can be reached at Shayla@LincolnChronicle.org"
https://lincolnchronicle.org/62643-2/ #Metaglossia #metaglossia_mundus
"ABSTRACT: The advancement of machine interpreting (MI) has the potential to revolutionise the field of interpreting. However, concerns persist regarding the capacity of MI to accurately convey emotions in original discourse, especially in high-stakes settings such as press conferences. To address this, the present study investigates the sentiment mediation in human interpreting (HI) and MI when rendering source speeches during press conferences. Employing the Linguistic Inquiry and Word Count (LIWC), specifically LIWC2015 and LIWC-22, a comprehensive sentiment analysis was conducted on a self-built corpus comprising Chinese source speeches and their corresponding English interpreting produced by human interpreters and machines. The findings reveal that both HI and MI demonstrate a comparable capacity to convey sentiments, with MI exhibiting human-like patterns in handling emotional content in original discourse. However, compared to human interpreters, MI plays a more active role in attenuating negative emotions while accentuating positive emotions. These results underscore the enhanced ability of AI-powered MI to emulate human interactions and generate renditions that align with human values, highlighting its potential to effectively facilitate cross-linguistic communication. The study contributes to the growing body of research on the impact of interpreting technology on communication dynamics and the evolving nature of interpreting in the AI era."
Can artificial intelligence mirror the human’s emotions? A comparative sentiment analysis of human and machine interpreting in press conferences
Wenkang Zhang, Yao Yao, Rui Xie & Dechao Li
Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR
Correspondence: ctdechao@polyu.edu.hk
Received 22 Nov 2024, Accepted 05 Aug 2025, Published online: 21 Aug 2025
https://doi.org/10.1080/0144929X.2025.2546975
https://www.tandfonline.com/doi/full/10.1080/0144929X.2025.2546975?mi=kigznk #Metaglossia #metaglossia_mundus
"Women in translation: 5 voices redefining global literature
As August is celebrated globally as the Month for Women in Translation, it is a moment to honour the women who have redefined the craft and its possibilities.
Written by Aanya Mehta
Updated: August 22, 2025 12:16 PM IST
Deepa Bhasthi and Daisy Rockwell's respective Booker wins encouraged interest in Indian translations.
For a long time, my understanding of literature was confined to reading the “classics” of the English language, essentially works that belonged to a canon of prestige and authority. But my perception has shifted. The emergence and growing recognition of Indian texts in English translation has broadened the field of what it means to read, to belong, and to access stories. While some thinkers still debate the authenticity of translations, it is a fact that translated works expand accessibility, deepen our understanding of cultures, and create a wider reach for future generations.
As August is celebrated globally as the Month for Women in Translation, it is a moment to honour the women who have redefined the craft and its possibilities. Here are five remarkable translators who continue to inspire and influence the literary world:
Daisy Rockwell
Daisy Rockwell is best known for her English rendering of Geetanjali Shree’s Tomb of Sand. (bookerprize.com)
An American translator and painter, Daisy Rockwell is best known for her English rendering of Geetanjali Shree’s Tomb of Sand, which won the 2022 International Booker Prize. Rockwell has often spoken about the challenges of translating novels steeped in regional dialects, cultural nuance, and linguistic play. Her choice of title, Tomb of Sand, she once explained, was meant to give readers “an open door.” That openness defines her work: she allows readers to belong to a story without ever forcing entry. Beyond Tomb of Sand, she has translated major Hindi and Urdu writers, including Upendranath Ashk and Bhisham Sahni, cementing her place as one of the most important bridges between South Asian literature and the world.
Lakshmi Holmström
Lakshmi Holmström’s translation of Bama’s Karukku, an autobiographical novel by a Dalit Christian woman, remains a powerful testament to literature as resistance. (Source: RLF)
Lakshmi Holmström was a pioneer in bringing Tamil voices to English readers. Her translation of Bama’s Karukku, a landmark autobiographical novel by a Dalit Christian woman, remains a powerful testament to literature as resistance. Holmström’s ability to retain the rhythm, idioms, and cultural depth of Tamil speech made her translations both authentic and accessible. She also translated Ashokamitran and other modern Tamil classics, ensuring that Tamil literature gained recognition on international platforms. Her work not only preserved voices from the margins but also reshaped how Indian literature is perceived globally.
Jennifer Croft
Jennifer Croft, an American translator, co-won the 2018 International Booker Prize for Flights.
Jennifer Croft, an American translator, is celebrated for her work with Polish Nobel laureate Olga Tokarczuk. She co-won the 2018 International Booker Prize for Flights, a genre-defying novel of fragments, meditations, and philosophical wanderings. Croft’s translation skillfully balanced Tokarczuk’s playfulness and complexity with clarity for English readers. She later translated Tokarczuk’s The Books of Jacob, as well as works from Spanish and Ukrainian, extending her reach across multiple linguistic traditions. Croft embodies the translator as both interpreter and artist, attuned to rhythm and form as much as meaning.
Deborah Smith
Deborah Smith’s translation of The Vegetarian won the 2016 International Booker Prize. (Source: bookerprize.com)
Deborah Smith, a British translator, gained international recognition for introducing South Korean author Han Kang to the English-speaking world. Her translation of The Vegetarian won the 2016 International Booker Prize, propelling Han Kang into global prominence. Smith’s work is noted for its lyrical precision and ability to preserve both mystery and clarity in Han Kang’s prose. At the same time, her style has sparked debates about the translator’s role as co-creator, raising important questions about fidelity, interpretation, and creativity in translation. She has since translated more of Han Kang’s work and founded Tilted Axis Press, a publishing house dedicated to translated literature.
Also Read | International Booker Prize 2025: ‘I call myself a writer-translator with a hyphen in between’, says Deepa Bhasthi
Deepa Bhasthi
Deepa Bhasthi recently brought Banu Mushtaq’s The Heart Lamp into English. (Source: Bookerprize.com)
One of the most compelling contemporary voices in translation, Indian translator Deepa Bhasthi recently brought Banu Mushtaq’s The Heart Lamp into English. The novel, which portrays the struggles of a Muslim woman’s community in Karnataka, reflects Bhasthi’s philosophy of decolonizing language. She has spoken about deliberately avoiding italicized “foreign” words, instead retaining Kannada expressions in their natural form to preserve linguistic integrity. Her translator’s note in the new edition emphasizes why certain words must remain untouched: they carry a cultural and emotional weight that cannot—and should not—be erased. In doing so, Bhasthi asserts translation as an act of both preservation and defiance."
https://indianexpress.com/article/books-and-literature/women-in-translation-5-voices-10196974/
#Metaglossia
#metaglossia_mundus
"Global TV without subtitles: How audio translators are powering multilingual streaming
August 18, 2025
In a time of unprecedented connectivity, the barriers between cultures have never been more permeable. The streaming explosion worldwide has spawned an insatiable hunger for content that knows no borders. What was once a specialised pursuit among film aficionados has become a mass entertainment activity. The stumbling block has always been the language barrier.
Subtitles, as effective as they are, ask the audience to shift their focus from the visual narrative to reading what’s on the screen. It’s a less immersive and more passive experience, particularly for those who struggle with reading and those who are visually impaired. The development of artificial intelligence is now providing a forceful solution, one that is poised to dismantle these language barriers in a way that is both seamless and natural to watch. This is the future of media consumption, where technology provides a genuine global cinematic language.
Streaming services have long known the tremendous value in making their material available to global audiences. The popularity of non-English language shows has increased dramatically, reflecting an international demand for varied storytelling. Indeed, a study by Ampere Analysis revealed that frequent watching of non-English language content grew 24 per cent among a specific audience in a number of English-speaking nations over a span of four years. This phenomenon is not limited to a single geographical area or genre, with Korean dramas, for instance, experiencing a 35 per cent increase in frequent watching. The problem, then, has been how to keep up with this demand and provide content that feels every bit as real as the original. This is where AI-based tools step in.
By using advanced algorithms, these tools can transcribe and translate dialogue automatically. Not just a basic translation, these sophisticated audio translators examine the vocal features of the original voice, from tone and pitch to emotional expression, and reproduce a new audio track in another language with those same subtleties intact. This procedure, also known as AI dubbing, is not just a computerised voice reading from a script; it is a sophisticated, multi-step process for generating a realistic and emotionally engaging experience. Continue reading to know more.
The Technology Behind the Magic
The AI audio translation process is a wonder of contemporary technology. It starts with automatic speech recognition, wherein the AI translates the original spoken words into text. This is a critical initial step that has gotten extremely precise because of deep learning algorithms that have been trained on huge databases. After transcribing, the text is run through a neural machine translation system, which renders the conversation in the target language. Yet the most innovative part is the last step: voice cloning and text-to-speech synthesis.
Current AI can now produce extremely natural and expressive speech. Unlike computer voices of the past that droned on without expression, these machines are capable of real intonation and rhythm reproduction. The best of these go one step further by mimicking the voice of the original speaker. This is to say that a viewer can listen to the translated dialogue from a voice that almost exactly resembles the on-screen actor, providing an immersive experience generally out of reach of traditional dubbing. The tech also synchronises the new audio with on-screen action and lip movements, a tedious task now being automated to an impressive extent. This capacity to preserve the integrity of the initial performance, but present it in a viewer’s home language, is what makes the tech so revolutionary for international streaming.
The Benefits for Viewers and Creators
The transition from conventional subtitling and hand-dubbing to audio translation powered by AI brings tremendous advantages to audiences and the industry alike. For audiences, the key benefit is a more naturalistic and immersive experience. Being able to watch a movie or a program without constantly reading subtitles enables more attention to the visual narrative and film craft. This is particularly useful for fast-moving or visually intensive scenes when reading may be a distraction. Dubbed content also opens up access to a wider age group, including children and those who are visually impaired or dyslexic. Emotional connection with the characters tends to be stronger when their dialogue is listened to in one’s own language, making the content more appealing and evocative.
For both creators and streaming services, the advantages are no less important. AI dubbing speed and cost savings are game changers. Legacy dubbing is a time-consuming and costly endeavor requiring the hiring of voiceover talent, recording studio time, and careful post-production editing. AI automation slashes this time dramatically, enabling new content to be localised and rolled out in various languages at once alongside the original release. This enables platforms to reach new markets quickly and provide a broader library of multilingual content. The technology is scalable, so even huge back catalogues of old movies and shows can be dubbed in less time and at lower expense, making vast amounts of content available to global audiences.
Addressing the Challenges and the Human Element
Though it has revolutionary possibilities, AI audio translation does have its pitfalls. The technology needs to be able to accurately deliver the subtle nuances of human language, such as idioms, slang, and cultural humor, which are hard to translate in a literal sense. One misstep of translation can change the meaning and effect of a scene. The objective is not merely to swap the words but to preserve the cultural and emotional content of the original conversation. It is here that a hybrid model, which integrates AI automation with human know-how, is the most effective approach.
AI-generated translations are fine-tuned by human linguists and editors. They make sure that the translated script is culturally correct and that the end audio output sounds natural and authentic. The human-machine collaboration makes sure that the quality does not suffer, and the minute storytelling nuances do not get lost. This is what takes the technology from just being a translation tool to truly being a creative resource. It demonstrates that while AI can automate the heavy lifting, the final polish and creative integrity of the content still require a human touch.
Conclusion: The Future of Multilingual Streaming
The future of international television is certainly multilingual. As streaming services battle for international viewership, providing content in a viewer’s native language will become the norm and not an indulgence. Audio translation technology will become a vital part of an effective localisation content strategy. In the future, the technology will keep evolving, with voice cloning, emotion synthesis, and lip-sync accuracy getting better. You will soon have live translation abilities, and live broadcasts and events will be instantly dubbed in all different languages. This is going to be a giant leap forward for international news, sports, and live entertainment.
Do you enjoy consuming foreign content? What is your opinion about watching content without subtitles but in your language? Share your thoughts."
https://www.advanced-television.com/2025/08/18/global-tv-without-subtitles-how-audio-translators-are-powering-multilingual-streaming/
#Metaglossia
#metaglossia_mundus
"Kurds demand recognition of their language in Syria’s constitution
Securing Kurdish rights in Syria’s new constitution requires recognition of their identity and mother tongue, alongside guarantees for all communities.
19 Aug 2025, 12:12
NURŞAN EBDÎ
Kobanê – Amid Syria’s political transition following the fall of the regime and the rise of the interim government, marginalized communities are pressing for constitutional guarantees. At the forefront are Kurds, who argue that recognition of their mother tongue is essential to preserving their identity and securing their rights.
‘Mother tongue is history and identity’
Ronida Ali, an administrator at the Kurdish Language Institute in Kobani, said that building a decentralized, democratic Syria requires constitutional recognition of all communities, identities, languages, and religions.
She argued that despite the change in government, policies toward the Kurds remain largely the same. “They continue to reject Kurdish achievements, our identity, and our language. The refusal to provide legal guarantees for Kurds in the new Syrian constitution reflects the broader policies of dominant states toward minorities,” she said.
Ali stressed that despite massacres, forced displacement, and decades of denial, Kurds have continued to resist and safeguard their language and identity. “Mother tongue is history and identity. When hegemonic powers seek to erase a community, they first target its language and culture,” she added.
Condemning the interim government for excluding Kurds from constitutional recognition, she said: “The Kurdish people have fought and made huge sacrifices for the gains they now defend. We will not allow these achievements to be taken away.”
Ali emphasized that Kurds are not demanding special privileges, but equality. “We demand recognition of our language, just as Arabic is recognized. This must apply to all communities in Syria — Kurds, Arabs, Assyrians, and Syriacs. What we seek is peace, democracy, and a free Syria that embraces everyone.”
Defending culture and language
Ali underlined the responsibility of Kurdish communities to safeguard and develop their language in order to secure their rights in a new Syria...
She recalled that for more than a century, Kurds across the four parts of Kurdistan have faced massacres and systematic repression aimed at erasing their identity. “After the July 19 Revolution, also known as the Women’s Revolution, we gained the strength to resist authoritarianism and secure many achievements through sacrifice. These cannot be surrendered or taken from us,” she said.
Citing Abdullah Öcalan’s call for peace and a democratic society, Ali urged Syria’s new authorities to end conflict and embrace political dialogue. “It is time for peace, democracy, and negotiations to build a free, democratic Syria inclusive of all its components.”"
https://perma.cc/JR9L-7KN8
#Metaglossia
#metaglossia_mundus
Chartered Psychologist Zeynep Yaşar on being a psychologist and a translator.
"‘I am not just translating words, I am translating worlds’
20 August 2025
As a clinical psychologist, author and translator, I have spent the past years moving between multiple settings – private practice, humanitarian fieldwork and collaborative projects in child protection and psychosocial support. Whether sitting across from a patient in therapy or working alongside teams addressing systemic forms of harm, I have come to see language not only as a tool, but as a threshold. I find myself continually moving between languages – some spoken, some silent, some hidden deep in the folds of psychic life.
Words often return to me during both therapy sessions and long nights spent shaping translated manuscripts. Just as a translator listens for what lies between the lines, a psychologist listens for what lies beneath the words. Both seek meaning in the margins.
The analyst as Übersetzer
In psychoanalysis, translation is not a metaphor; it is method. Freud speaks of dreams as translations of unconscious thoughts, and of interpretation as the analyst's effort to translate latent meanings into conscious understanding. In this framework, the analyst functions not only as a listener, but as an Übersetzer. I use this German word deliberately, echoing Sigmund Freud's own language, in which translation (Übersetzung) is central to both dreams and interpretation. While Übersetzer is a gender-neutral or masculine form, its feminine counterpart, Übersetzerin, remains present by implication. In this text, I allow the term to hold both, gesturing toward the figure of the female translator as a ferrywoman between inner and outer worlds, the known and the unsymbolised. This choice mirrors my own position as a woman therapist and translator, carrying meaning across psychic and linguistic borders.
This work often feels like standing at the edge of language, trying to find words for what resists symbolisation. I believe that therapists, like translators, work at the frontier of meaning. In other words, a therapist functions as a kind of intermediary or guide between the unknown (the patient's deep inner world, the unconscious) and the known (consciousness, meaning expressed in words).
Carrying words, carrying worlds
I am currently working as the translation editor of The Heart Lamp, Banu Mushtaq's Booker Prize-winning novel, a powerful and poetic narrative that brings to light the experiences of women living under systemic oppression. While the novel is set in India and rooted in specific cultural and political contexts, its emotional truths transcend geography. Working on this text has deepened my understanding that although the forms of gender-based suffering may differ from one place to another, the underlying wounds often echo the same themes: silencing, displacement, resilience. The pain of a woman in India reverberates with the sorrow of a woman in Turkey or the quiet despair of someone in the UK.
This is not only a human reality but also a psychic one: a kind of emotional translation that occurs within us as readers, listeners, and therapists. As I immersed myself in The Heart Lamp, I experienced firsthand how literature, just like therapy, can translate the unconscious suffering of one individual or group into something that others can feel, understand, and carry.
The process reminded me that deep empathy is itself a form of translation: a movement between inner and outer, self and other, local and universal. As I navigate the political weight of terms like 'empowerment', 'embodiment' or 'testimony', I am constantly aware that I am not just translating words, I am translating worlds. Each sentence bears traces of someone's pain, survival and history.
In translating, I find myself pausing, asking: What is the responsibility of carrying someone else's experience across linguistic and cultural thresholds? Isn't this the very question that arises in every therapeutic encounter as well?
When I was writing my own book, a work born out of years of clinical practice and field research, centered on the gendered oppression of widowhood, I encountered a similar act of translation. I wasn't translating someone else's words across languages, but I was translating something equally intricate: the emotional, intellectual, and somatic residues of all I had witnessed. The stories I had heard from patients, survivors, and displaced individuals echoed within me long after they had been spoken. Writing became a way of making sense of those echoes. I found myself translating from inner to outer, from fragmented sensations into narrative form. In doing so, I realised that the act of writing, like the act of therapy, requires the transformation of what is deeply felt but not yet formulated.
What began as a book about others' experiences gradually became a space where my own internal world found voice and structure. In this way, authorship became its own kind of therapeutic translation, one in which I did not merely put something into the world, but allowed the world I had absorbed to find its form through me.
This interplay between fidelity and resonance is central to both. In translation, we often face the challenge of choosing between several meanings a word might carry, each valid, but only one fitting the texture of the whole. We ask: which interpretation best serves the integrity of the text? Similarly, in therapy, we must choose which meanings to reflect back, when to remain silent and when to translate a patient's fragmented speech or embodied silence into something that can be metabolised. The goal is not to impose coherence, but to offer a version of their inner world that feels both truthful and tolerable. Just as a translator must serve the spirit of the text more than its literal form, the therapist must translate what is heard and also what is not heard into the language most attuned to the patient's needs and truth.
Both in therapy and in translation, we are confronted with the limits of representation. The patient, like the author, offers a fragmented narrative, an interrupted syntax of memory, fantasy, and silence. Our task is not to 'correct' or 'complete' it, but to stay with its ambiguity. I often think of the patient's symptom as a text written in an unfamiliar script. We sit together, decoding, sometimes awkwardly, sometimes poetically, until meaning emerges, hesitantly and always in translation.
Deciphering the unspoken
Sigmund Freud describes dreams as written in a pictographic language that must be deciphered. He compared the work of interpretation to the deciphering of hieroglyphs, a process of symbolic translation between hidden and manifest content. In this analogy, the psychotherapist is not unlike the archaeologist or the literary translator, searching for the missing key, aware that there is no perfect equivalence, only approximations. There is always a remainder, something that resists being put into words.
The process also calls for the analyst's own unconscious to participate. As Freud noted, interpretations are not simply objective formulations, they are transferences themselves, shaped by the unconscious of the therapist. Likewise, a translation is never neutral. My own voice, my cultural lenses, and my affective responses inevitably shape the final text. The same is true when I listen to a patient. I translate not from a place of detachment but of shared humanity and informed subjectivity. This dual role, psychologist and translator, has taught me that fidelity is not about literal accuracy but about holding complexity.
Whether I am translating a chapter on reproductive justice or witnessing a patient's retelling of childhood trauma, I ask myself: What is being said? What is being avoided? What wants to be heard, and what cannot yet be borne?
All healing is an act of translation
In both crafts, patience is key. So is humility. There is no final translation, no ultimate interpretation. There are only attempts, sometimes stumbling, sometimes transformative, to get closer, perhaps, to what Donald Winnicott called 'feeling real', the authentic experience of self, alive and connected. And often, the truest moments emerge not from what is clearly articulated but from the pauses, the hesitations, the fragments, just as in translation, meaning often resides not only in the word itself but in what it carries behind it; its resonances, silences and the unspoken echoes it gathers from lived experience.
I do not believe the aim is to eliminate ambiguity. Rather, it is to accompany it. Just as a good translation does not erase the trace of the original, a good therapy does not overwrite the patient's truth. It amplifies it. It allows it to resonate in another key, to be heard anew, a new shape in another language.
Perhaps, in the end, all healing is an act of translation. And perhaps every act of translation, when done with care, becomes a form of healing." https://www.bps.org.uk/psychologist/i-am-not-just-translating-words-i-am-translating-worlds #Metaglossia #metaglossia_mundus
"Google Meet rolls out real-time voice translation – Video
21 AUGUST
(Adnkronos) – Google Workspace has announced, in an official press release, the general availability of real-time voice translation for Google Meet, an innovation that promises to revolutionise communication for multilingual teams. The new feature is already rolling out gradually, starting with the English-Italian language pair.

Overcoming language barriers is one of the biggest challenges in international collaboration. Meetings, brainstorming sessions, and encounters with international clients and partners often require an interpreter, or generate misunderstandings that hold back productivity. With the introduction of this technology, Google aims to simplify and streamline communication.

Unlike simple text translators, the new Google Meet feature uses sophisticated artificial intelligence to translate speech almost instantly. What sets it apart is not limited to converting words: it also preserves the speaker's tone, cadence, and emotional nuances. Here is the demonstration video. The result is a surprisingly fluid and authentic dialogue, letting participants focus fully on the content of the conversation rather than on language difficulties. This innovation marks a significant step toward a more natural video-call experience, one that simulates the flow of an in-person conversation.

The voice translation feature is designed to support productivity in a wide range of contexts. Whether for team meetings with members across different time zones, training sessions with international participants, or negotiations with foreign partners, Google Meet's technology makes these interactions more efficient and less stressful.
The new feature is rolling out gradually worldwide and will be available to users with a Google AI Pro or Ultra subscription. For those who want a preview of how the technology works, a demonstration video showing it in action is already available. — tecnologiawebinfo@adnkronos.com (Web Info)"
https://www.prpchannel.com/fr/google-meet-rende-disponibile-la-traduzione-vocale-in-tempo-reale-il-video/
#Metaglossia
#metaglossia_mundus
From French 101 to a Fulbright year in France, this UGA alumnus dedicated his career to teaching and interpreting French novelist Marcel Proust's work.
"More than a century ago, French novelist Marcel Proust began writing what would one day be considered the longest novel ever written, In Search of Lost Time, a sprawling multi-volume masterpiece with more than a million words. Renowned for its lushly introspective and elaborate prose, the work poses a formidable challenge to any reader. But for William Carter AB ’63, MA ’67, teaching and interpreting Proust’s novel became a lifelong passion.
Carter’s journey began far from the salons of Paris. Born in small-town Jesup, he discovered a love of language at the University of Georgia. For many, French 101 is a blur of irregular verbs. For Carter, it became a singular passion.
“I loved it immediately,” he says. “I wasn’t even sure what I wanted to major in when I came to UGA, but once I took that first class, I never looked back.”
In 1965, Carter won a Fulbright scholarship and traveled to France to study at the Université de Strasbourg. There, he immersed himself in the language. He also met his wife of almost 60 years, Lynn Goudreau, a fellow Fulbright scholar. They went back to the United States together. After completing his master’s at UGA, Carter took a job teaching at Indiana University while he completed his Ph.D. in French.
It was during his Fulbright year in France that Carter met his lifelong subject.
“I couldn’t help but be drawn to Proust,” Carter says. “When I read his writing, it’s like music. It’s just beautiful.”
Few living people know Marcel Proust (pronounced “Proost”) better than Carter. He has spent decades in conversation with this 20th century French novelist, decoding his famously layered prose and helping English-speaking audiences access it without diluting its complexity.
“Great works of literature aren’t just words on a page. They’re about us, about the reader, about the human experience. [Proust’s] work is a many-layered thing that you want to read over and over again. And that’s why we keep coming back to it.”
WILLIAM CAUSEY CARTER, AUTHOR OF MARCEL PROUST: A LIFE
At the University of Alabama at Birmingham, where he spent the bulk of his career, Carter taught French courses on Proust. His lectures were known for bringing Proust’s revelations of late 19th century France and the World War I era to life.
“When in France, I would take slides of places Proust wrote about and use them in lectures,” he says, blending the visual and literary to capture the essence of Proust’s world.
Carter soon set out to create what renowned literary critic Harold Bloom called the definitive English-language biography of his favorite author. Marcel Proust: A Life, published by Yale University Press in 2000, became a New York Times Notable Book of the Year. In 2024, it made the New York Times Book Review’s list of best nonfiction books from 2000 to 2023. This work established Carter as one of the leading Proustian scholars in the world.
But Carter didn’t stop at the written word. In 1993, he co-produced Marcel Proust: A Writer’s Life, a PBS documentary that brought Proust’s Parisian world to the screen. And when the celebrated early 20th-century English translation of In Search of Lost Time by C.K. Scott Moncrieff entered the public domain, Yale University Press tapped Carter once more—this time to produce a revised and annotated edition of the novel.
The task was monumental: six volumes, hundreds of pages each, with every word weighed against the original French. But this labor of love was worth every sentence to Carter.
“Great works of literature aren’t just words on a page,” he says. “They’re about us, about the reader, about the human experience. His work is a many-layered thing that you want to read over and over again. And that’s why we keep coming back to it.”
Now in his 80s, Carter continues to write and lecture in English and French, maintaining his ties with the literary world. He has also served on the editorial board of the Proust Journal (Bulletin Marcel Proust) in France for nearly 50 years.
After a lifetime of writing and translating, Carter’s dedication affirms what Proust himself suggested, that “the real voyage of discovery lies not in seeking new landscapes, but in having new eyes.”
“At the very end of In Search of Lost Time,” Carter says, “the narrator uses the analogy of a pair of glasses to talk about the book he’s writing. He says that if they enable you to see and understand the world better, fine. But if not, throw the glasses away and get another pair.”
This story appears in the Fall 2025 issue of Georgia Magazine."
Jayne Roberts
Aug 21, 2025
https://news.uga.edu/georgia-magazine-articles/william-carter-a-life-in-translation/
#Metaglossia
#metaglossia_mundus
Meta AI voice translations for Reels go global! Start with English & Spanish, check eligibility, explore tips, and reach new audiences.
"Meta has rolled out its AI-powered voice translation feature for videos on Instagram and Facebook globally. The tool not only translates your voice into another language but can also lip-sync your video so it looks and sounds natural. Right now, the feature works in English and Spanish, with more languages coming soon.
What’s the Meta AI Voice Translation Feature?
This free tool uses your voice and tone to automatically dub your reels on Facebook and Instagram in another language. The ultimate goal? Helping creators reach audiences beyond their native language and expand their reach.
The AI adjusts your mouth movements to match the new language, so viewers barely notice it’s been translated. And creators remain in full control. You can turn off translations or remove them anytime.
Until now, the feature was limited to select creators in the US and Latin America. With this global roll-out, it’s becoming available worldwide (though with a few exceptions — more on that later).
How to Use Meta AI Translations
Getting started is easy:
Before publishing your reel, click “Translate your voice with Meta AI.”
Toggle the button to turn on translations, and decide if you want lip-syncing enabled.
Click “Share now.”
Your reel is now available in English and Spanish.
You can also review translations before they go live. Enable the review toggle, and you’ll get a notification (or check the Professional Dashboard) to approve the translation. Meta reassures:
“Accepting or rejecting the translation will not impact your original, non-translated reel.”
How Will Translated Reels Appear?
Translated reels are shown to viewers in their preferred language, with a note that Meta AI did the translation.
Viewers can control what they see by selecting “Don’t translate” in the audio and language section within the three-dot menu.
Want a sneak peek at Meta AI voice translations in action? Check out Instagram head Adam Mosseri speaking in Spanish:
Who is Eligible?
The feature is available to:
Facebook creators (with a Page or professional mode turned on) with at least 1,000 followers
All Instagram public accounts.
Where Is Meta AI Translations Available?
In theory, Meta AI translations should be available anywhere Meta AI is offered. But if you check the eligibility page closely, there are some exceptions.
Even though Meta AI is live in the European Union, the translations feature is not available in the EU.
You also won’t find it in the UK, South Korea, Brazil, Australia, Turkey, South Africa, Nigeria, and two US states: Texas and Illinois.
4 Tips to Get the Most Out of Meta AI Translations
Meta suggests some best practices when it comes to AI translations:
Face the camera: Speak clearly and avoid covering your mouth. Translations work best when your lips are visible.
Two speakers max: On Facebook, the tool supports up to two speakers. If more than one person is talking, avoid overlapping dialogue for better accuracy.
Reduce background noise: Minimise loud music or other distractions so the AI can focus on your voice.
Be consistent: You’re building a new audience in a new language. Give them time to get to know your content and your style.
Facebook creators can now upload up to 20 of their own dubbed audio tracks to a reel, making it easier to reach audiences beyond English- or Spanish-speaking markets. You’ll find this in the “Closed captions and translations” section of the Meta Business Suite.
You can add these translations before or after publishing.
Even though Meta AI has faced backlash since its launch and rapid growth (mainly because it can’t really be turned off), it does come with some exciting features, like the AI voice translations. What’s your take: a powerful reach boost or a bit scary? Share your thoughts in the comments.
Kata, 21 August 2025"
https://metricool.com/meta-ai-translations/
#Metaglossia
#metaglossia_mundus
"No more stacking up three years of intensive evening classes before creating a Spanish or English version of an Instagram Reel! Meta has just launched a new AI voice translation feature, available now on Facebook and Instagram. The tool lets you dub your videos into another language while keeping your own voice.
The dubbing feature preserves not only the voice but also the accent and tone, so the result sounds at least somewhat natural. As a bonus, a lip-sync option adjusts the lips so everything matches the new audio track perfectly.
Automatic dubbing to widen the audience
For now, only English and Spanish are supported, but Meta promises to add more languages over time. As Adam Mosseri, head of Instagram, puts it: 'Many creators have a huge potential audience, but language is an obstacle. If we can help them cross that barrier, everyone wins.'
How does it work?
There is nothing complicated about the feature: before posting a Reel, just click the 'Translate your voice with Meta AI' option, choose whether to enable lip-sync, then share. The translation is added automatically and can be previewed. Not satisfied? It can be turned off at any time, without touching the original video. Viewers will see a small banner indicating that the content was translated by Meta AI, and those who prefer the raw versions can disable the option in their settings.
The feature is open to Facebook creators with at least 1,000 followers, as well as to all public Instagram accounts. And to track how well the dubbing performs, a new metric in the statistics shows how many views come from each language.
Meta goes further by letting creators upload up to 20 audio tracks dubbed in their own voice to a Reel. Good news for those who want to control their translations themselves, before or after publishing. The idea remains the same: making content understandable beyond the creator's usual linguistic circle.
To get the best results, Meta advises speaking to the camera, clearly, without covering your mouth or surrounding yourself with too much noise. The tool only handles two speakers at a time, so avoid talking over one another. As for which languages come next, that remains a mystery, but there is no doubt the list will grow.
This new feature is one more way for Meta to push AI deeper into its services. For better or for worse..." By Olivier, 20 August 2025 at 8:30 a.m. https://www.journaldugeek.com/2025/08/20/meta-double-les-voix-des-createurs-sur-facebook-et-instagram-avec-lia/ #metaglossia_mundus #Metaglossia
"Abstract: This article examines the impact on legal processes of the need to use interpreters, drawing examples from refugee status determination procedures in the United Kingdom. It describes the roles played by interpreters in facilitating intercultural communication between asylum applicants and the administrative and legal actors responsible for assessing or defending their claims at the various stages of those procedures. The UK authorities' somewhat naïve expectations about the nature of the interpretation process display little understanding of the practical dilemmas that interpreters face. Much of the confusion and many of the barriers to communication created by the involvement of interpreters reflect the inherent untranslatability of particular notions, and so arise irrespective of the technical competence of the interpreters themselves. For example, dates may be reckoned using non-Gregorian calendars; terminologies for family relationships and parts of the body may be incongruent between the two languages; and there may be no exact indigenous legal equivalents to UK notions such as 'detention' or 'rape'. Different interpreters may therefore give different, though equally legitimate, translations of such terms, creating apparent 'inconsistencies' in the resulting translated accounts. Given the centrality of notions of credibility in asylum decision-making, even quite trivial divergencies over such matters may prove crucial."
Written by Anthony Good. Originally published in the International Journal for the Semiotics of Law
https://www.ein.org.uk/blog/interpretation-translation-and-confusion-refugee-status-determination-procedures #metaglossia_mundus #Metaglossia
Queens’ courts have a shortage of court interpreters and have seen a major reduction in their ranks over the past half decade.
"August 20, 2025
COURT INTERPRETERS PLAY A VITAL ROLE IN THE COURT SYSTEM BUT HAVE SEEN THEIR NUMBERS DIMINISH IN QUEENS, WHERE OVER 160 LANGUAGES ARE SPOKEN.
By Noah Powelson
Court interpreters provide crucial services every day for hundreds of New Yorkers who must navigate the court system in a language they don’t speak. Nowhere is their job more crucial than in the World’s Borough, home to the most diverse population in the United States.
But despite the need, Queens’ courts have a shortage of court interpreters and have seen a major reduction in their ranks over the past half decade.
According to New York Office of Court Administration data, Queens’ courts have lost a third of their court interpreter staff over the past five years. In 2019, there were 61 court interpreters assigned to Queens courts. Today, there are only 41 interpreters.
With the exception of a minor increase in 2023, Queens’ court interpreter staff numbers have steadily declined with no upward trends dating back to 2019.
The reason behind the drop in numbers is twofold, officials say: the court system saw many older employees retire in the years following the pandemic, and court interpreting is a highly skilled profession that requires rigorous education and testing, making it difficult to recruit qualified candidates.
The result, according to Queens court interpreters and judges who spoke to the Eagle, are regular delays and rescheduling as staff rush to ensure coverage across all hearings in a system already prone to delays and case backlogs.
In the World’s Borough, which contains at least 160 unique languages and dialects according to the World Economic Forum, the need for a wide array of interpreters is higher than anywhere else in the country. A New York City-based language documenting nonprofit known as the Endangered Language Alliance said there are as many as 800 languages spoken across the whole city, and Queens is home to more of them than any other borough.
A spokesperson for the Office of Court Administration said court leaders are aware of the interpreter shortage, and that they’ve implemented a number of policies to drive recruitment. That includes a multi-platform recruitment initiative for the Spanish Court Interpreter civil service examination, a court interpreter internship program, increased court interpreter salaries and raised rates for per diem court interpreters.
“The New York State Unified Court System is committed to expanding its pool of qualified court interpreters to meet the growing need for language access services in the Courts, incorporating a variety of recruitment strategies that include digital and media outreach, community and stakeholder engagement, interpreter pipeline development, expanded exam access, and ongoing outreach for less common but high-need languages,” the OCA spokesperson told the Eagle in a statement.
Yet with only 41 court interpreters serving the needs of all Queens courts, including Criminal, Civil, Family Court and others, slowdowns and rescheduled hearings are inevitable.
Two Queens judges told the Eagle they experience regular delays of 15 to 20 minutes waiting for court interpreters to be available for hearings. While that might not seem like much time, given the hundreds of cases that go through all Queens courts every day, those waiting periods can add up, the judges said.
Spanish and Mandarin are the languages most in demand in Queens by far, and make up most of the court’s interpreters. Queens Criminal Court tends to have a much higher volume of cases with quicker hearings, and it’s common for hundreds of cases to require Spanish interpretation each day.
According to the OCA, Queens Criminal Court has 22 interpreters on staff, while the Supreme Court, Criminal Term has four. Queens Family Court has the second-highest number of interpreters on staff with 10, while Queens Civil and Supreme Civil have three and two respectively.
Issues are especially apparent when an interpreter for a less common language is needed. For languages like Korean, Punjabi, Haitian or other dialects, the courts may only have one or two interpreters. At times, someone appearing in court may speak a language that the court does not have an interpreter for, in which case the courts will need to reserve a specialized interpreter from outside the borough, oftentimes virtually.
"Queens is the melting pot of the world,” a Queens judge granted anonymity told the Eagle. “If it's a very unusual language, we need to order somebody in advance."
Queens’ court staffers have access to a registry of per diem interpreters who are called in on a case-by-case basis to address language needs. An OCA spokesperson said the registry includes over 1,500 per diem interpreters in over 200 languages. OCA said this year, Queens County has had 4,782 per diem interpreter appearances in 74 languages, who have assisted 21,144 court users.
But other courts also have access to this registry, and the logistics of organizing hundreds of interpreters across the state to appear virtually naturally means gaps in service will happen. One Queens judge who was given anonymity said it’s not uncommon to adjourn a hearing until the following day without progress on the case because no interpreter was available for the day.
"Over the years, I'm encountering more specialized languages,” the Queens judge told the Eagle. “It has been getting worse."
Court staff shortages are not a Queens-specific issue. Courts across the state have struggled to recruit attorneys, clerks, court officers, interpreters and other staff since the COVID-19 pandemic.
Judges have said recent efforts to modernize courtroom technology have helped. Video calls have allowed interpreters from elsewhere in the state or country to appear in court remotely, a solution that was not widely available before the pandemic. Many courtrooms are also supplied with headsets and microphones, making it easier for multiple individuals who speak the same language to use the same interpreter.
But virtual court appearances bring their own set of problems. Remote interpretation is generally slower because of internet latency, and there is always the risk of severe lag, audio issues and general troubleshooting in the middle of the hearing. While technology can fill in some gaps, it can’t make up for additional experienced and skilled interpreters appearing in-person.
"There is no question there is a shortage,” a different Queens judge who was granted anonymity, told the Eagle. “Although they try the best they can, they are stretched thin…We have to problem solve on a daily basis.”
Still, no one who spoke with the Eagle said the interpreter shortage has prevented a case from reaching a disposition.
Every case requires constant coordination and communication to ensure the right staff are ready and available, and if an interpreter calls out sick, that’s one more hurdle the judges and clerks need to account for as they get through the day’s work.
“We deal with it, we get through it, and those judges that are more experienced get through it faster," the judge said.
Court interpreters who spoke with the Eagle said that part of the issue with recruiting is that it’s a highly specialized profession that not many know about. Simply being bilingual isn’t enough. All potential recruits must receive a court interpretation certification from an accredited program, and pass a written and oral exam before they can be hired. The oral exam is frequently where applicants struggle.
One court interpreter in Queens Criminal Court said it takes years of practice and training to be able to quickly and accurately interpret live court hearings, especially during tense criminal trials with lots of back-and-forth arguments and people talking over each other.
"It's just really hard to find the right fit," the interpreter said. "Even if you have people that have the right skills, they still need to be knowledgeable about the various modes of interpreting.”
Sometimes, interpreters will have to be on call for two separate court parts if one of their coworkers is sick or otherwise unavailable that day. One interpreter told the Eagle that it was common for them to interpret dozens of individual cases in one morning.
Court interpreters don’t exist just for the court record; they are the literal voice for clients who cannot defend or represent themselves without one. While every judge, attorney, clerk, reporter and officer is necessary to ensure a fair justice system, court interpreters play an especially intimate role in ensuring the voices of Queens are heard.
River Liu, a senior court interpreter at Queens Criminal Court, said his staff play a crucial role in ensuring the people of Queens feel respected when their day in court comes.
"For them, I think they do find some comfort when they have an interpreter there," Liu said. "When you are in a setting where you don't know what's going on, it's scary…Just being there for them, our presence, it helps them feel like they have their equal rights despite the language barrier."
Many court interpreters, whether they were immigrants themselves or raised in an immigrant family, grew up speaking two languages at home. Many were raised acutely aware of the difficulties non-native English speakers face when navigating the intimidating and convoluted judicial system. To many court users, interpreters are not just impartial court staff but a guide through some of the most difficult moments of their lives.
"It's a great job," Liu said. "We're the voice for the people of the court; we bridge that gap.""
https://queenseagle.com/all/2025/8/20/160-languages-41-interpreters-queens-courts-have-interpreter-shortage-leading-to-delays
#Metaglossia
#metaglossia_mundus
"Meta is rolling out an AI-powered voice translation feature to all users on Facebook and Instagram globally, the company announced on Tuesday.
The new feature, which is available in any market where Meta AI is available, allows creators to translate content into other languages so it can be viewed by a broader audience.
The feature was first announced at Meta’s Connect developer conference last year, where the company said it would pilot test automatic translations of creators’ voices in reels across both Facebook and Instagram.
Meta notes that the AI translations will use the sound and tone of the creator’s own voice to make the dubbed voice sound authentic when translating the content to a new language.
In addition, creators can optionally use a lip-sync feature to align the translation with their lip movements, which makes it seem more natural.
At launch, the feature supports translations from English to Spanish and vice versa, with more languages to be added over time. These AI translations are available to Facebook creators with 1,000 or more followers and all public Instagram accounts globally, where Meta AI is offered.
To access the option, creators can click on “Translate your voice with Meta AI” before publishing their reel. Creators can then toggle the button to turn on translations and choose if they want to include lip-syncing, too. When they click “Share now” to publish their reel, the translation will be available automatically.
Creators can view translations and lip syncs before they’re posted publicly and can toggle off either option at any time. (Rejecting the translation won’t impact the original reel, the company notes.) Viewers watching the translated reel will see a notice at the bottom that indicates it was translated with Meta AI. Those who don’t want to see translated reels in select languages can disable this in the settings menu.
Creators are also gaining access to a new metric in their Insights panel, where they can see their views by language. This can help them better understand how their content is reaching new audiences via translations — something that will be more helpful as additional languages are supported over time.
Meta recommends that creators who want to use the feature face forward, speak clearly, and avoid covering their mouth when recording. Minimal background noise or music also helps. The feature only supports up to two speakers, and they should not talk over each other for the translation to work.
Plus, Facebook creators will be able to upload up to 20 of their own dubbed audio tracks to a reel to expand their audience beyond those in English- or Spanish-speaking markets. This is offered in the “Closed captions and translations” section of the Meta Business Suite and supports the addition of translations both before and after publishing, unlike the AI feature.
Meta says more languages will be supported in the future but did not detail which ones would be next to come or when.
“We believe there are lots of amazing creators out there who have potential audiences who don’t necessarily speak the same language,” explained Instagram head Adam Mosseri in a post on Instagram. “And if we can help you reach those audiences who speak other languages, reach across cultural and linguistic barriers, we can help you grow your following and get more value out of Instagram and the platform.”
The launch of the AI feature comes as multiple reports indicate that Meta is restructuring its AI group again to focus on four key areas: research, superintelligence, products, and infrastructure."
Meta rolls out AI-powered translations to creators globally, starting with English and Spanish
Sarah Perez
10:20 AM PDT · August 19, 2025
https://techcrunch.com/2025/08/19/meta-rolls-out-ai-powered-translations-to-creators-globally-starting-with-english-and-spanish/
#Metaglossia
#metaglossia_mundus
"Intelligence Artificielle : le Maroc proche d’un exploit historique
PAR JACQUES EVRARD GBAGUIDI 15 août 2025 à 12:28
Le développement de l’Intelligence Artificielle sera désormais au cœur de l’université internationale du Maroc. Ce complexe du savoir s’est lancé dans le processus d’intégration de la technologie avancée. Un processus qui a abouti avec un protocole d’accord avec Cisco.
Création d’un centre de cybersécurité et Intelligence Artificielle au Maroc
L’Intelligence Artificielle se positionne peu à peu sur le continent africain. Et ce dans les universités de certains pays. En effet, l’Université Internationale du Maroc (UIR) se lance dans la danse afin d’être un centre stratégique de développement et d’innovation dans la région. Ainsi, l’université du Maroc a réussi à obtenir un protocole d’accord (MOU) avec Cisco. Un protocole d’accord qui s’inscrit dans le cadre de l’installation d’un Cisco EDGE Incubation Centre au sein de l’université pour le développement d’un centre de création en Intelligence Artificielle et cybersécurité.
Cette association entre les deux institutions pourrait avoir un impact positif sur le développement du pays et de la région. Cependant, pour que cet impact soit réel dans le nouveau partenariat de l’UIR, Cisco lance un appel aux institutions privées et publiques du pays. Ce dernier demande une promotion de cohésion entre l’université, le secteur privé et les institutions publiques. Une cohésion qui pourrait participer à l’accélération de l’innovation, d’impact durable dans le domaine de l’IA, dans le pays et sans oublier la région d’une part. Sur un autre plan, cette synergie permettra de faire entrer la nouvelle génération dans la formation de la technologie, faisant d’eux des leaders dans la technologie, notamment l’intelligence artificielle, par le biais du projet tels que la Cisco Networking Academy.
Avec la mise en œuvre de l’accord, soutiendra l’écosystème marocain de l’innovation, en pleine croissance. Soulignons que EDGE est un centre d’Experience, Design, Go-to-market, Earn. Ce dernier est en collaboration avec Cisco par le programme mondial Country Digital Acceleration de Cisco. Cette structure est le pilier de la transformation numérique en connectant le monde académique, les startups, les acteurs industriels et les institutions publiques, afin d’accélérer l’adoption des nouvelles technologies."
https://www.afrique-sur7.fr/intelligence-artificielle-le-maroc-proche-dun-exploit-historique
#metaglossia_mundus
#Metaglossia
The Cambridge Dictionary has added over 6,000 new words including slang terms like "skibidi," pronounced SKIH-bih-dee, "tradwife" and "delulu."
"“Skibidi,” pronounced SKIH-bih-dee, is one of the slang terms popularized by social media that are among more than 6,000 additions this year to the Cambridge Dictionary.
“Internet culture is changing the English language and the effect is fascinating to observe and capture in the dictionary,” said Colin McIntosh, lexical program manager at Cambridge Dictionary, the world’s largest online dictionary.
“Skibidi” is a gibberish term coined by the creator of an animated YouTube series and can mean “cool” or “bad” or be used with no real meaning as a joke.
Other planned additions include “tradwife," a contraction of “traditional wife” referring to a married mother who cooks, cleans and posts on social media, and "delulu,” a shortening of the word delusional that means “believing things that are not real or true, usually because you choose to.”
Christian Ilbury, senior lecturer in sociolinguistics at the University of Edinburgh, said many of the new words are tied to social media platforms like TikTok because that is how most young people communicate.
However, Ilbury said some of the words, including “delulu,” have longer histories than people might think and have been used by speech communities for years.
“It’s really just the increase in visibility and potential uptake amongst communities who may not have engaged with those words before,” he explained.
An increase in remote working since the pandemic has created the new dictionary entry “mouse jiggler,” a device or piece of software used to make it seem like you are working when you are not.
Environmental concerns are behind the addition of “forever chemical,” a harmful substance that remains in the environment for a long time.
Cambridge Dictionary uses the Cambridge English Corpus, a database of more than 2 billion words of written and spoken English, to monitor how new words are used by different people, how often and in what contexts they are used, the company said.
“If you look at what a dictionary’s function is, it’s a public record of how people use language and so if people are now using words like ‘skibidi’ or ‘delulu,’ then the dictionary should take account of that,” Ilbury said.
McIntosh added the dictionary has only added words it thinks have “staying power.”"
Cambridge Dictionary adds 'skibidi' and 'tradwife' among 6,000 new words
By The Associated Press
Updated August 18, 2025 5:15 pm
LONDON — What the skibidi is happening to the English language?
https://www.newsday.com/news/nation/cambridge-dictionary-new-additions-skibidi-tradwife-delulu-f58990
#metaglossia_mundus
#Metaglossia
"While States can often refer to a single language text of a multilingual treaty, there are times when an examination of other language texts is required. This article proposes a novel three-step method for applying Article 33(4) of the Vienna Convention on the Law of Treaties to remove, or otherwise reconcile, differences in meaning between multilingual treaty texts. In doing so, this article seeks to address the current vacuum of practical guidance on when an examination of different authentic treaty texts is necessary in the process of interpretation, and how any differences in meaning between the texts should be removed or reconciled."
Reconciling Divergent Meanings in the Interpretation of Multilingual Treaties
August 2025
International and Comparative Law Quarterly
74(2):467-484
DOI: 10.1017/S0020589325100778
License: CC BY 4.0
Cleo Hansen-Lohrey
https://www.researchgate.net/publication/394412373_Reconciling_Divergent_Meanings_in_the_Interpretation_of_Multilingual_Treaties
#Metaglossia
#metaglossia_mundus
"Classic of German Theatre translated into Welsh
10 Aug 2025
Georg Büchner’s Woyzeck from Melin Bapur Books
A masterpiece of German theatre has been published in a new Welsh edition.
At turns disturbing, tragic and moving, Georg Büchner’s Woyzeck is widely considered a masterpiece of European theatre, all the more remarkably so given that it remained unfinished after the death of the author aged just 23.
The play as it exists today is a reconstruction of scenes, some of which exist in multiple versions, and whose original intended order is uncertain. Perhaps due to this very ambiguity it has been the subject of hundreds of productions and adaptations, including the opera Wozzeck by Alban Berg.
The play follows the tragic life of the eponymous Franz Woyzeck, a poor soldier struggling with poverty, exploitation, and mental instability.
To supplement his meagre income, he allows himself to be the subject of often bizarre medical experiments (such as eating nothing but peas) by a deranged doctor who treats him as less than human, as does his captain, both experiences worsening his physical and psychological condition.
Woyzeck’s mental deterioration intensifies as he becomes obsessed with the infidelity of his partner Marie, with whom he has a child. Driven by jealousy, despair, and a fractured sense of reality, he ultimately murders her. The play ends ambiguously, with Woyzeck’s fate uncertain.
Büchner’s unfinished, fragmented structure adds to the chaotic and disjointed atmosphere.
Welsh edition
The Welsh edition was translated by Sarah Pogoda, a lecturer in German at Bangor University, and Huw Jones. This team previously translated a Welsh learners’ version of Franz Kafka’s Metamorphosis, but this new Welsh version of Woyzeck is for general Welsh audiences.
Translator Huw Jones explains: “The idea for the translation came from Sarah, who started learning Welsh on her third day here. She and many of her colleagues are so enthusiastic to contribute to Wales; it’s a real scandal that language departments are being closed in so many universities.
“Woyzeck has inspired so many different adaptations. And we soon found that we really had our work cut out puzzling over our own interpretation. Therefore, we were very lucky to have the expert eye and red biro of author Lloyd Jones (a Wales Book of the Year winner). Lloyd really pulled our draft translations together.
“The play has probably one of the earliest portrayals of the classic ‘mad professor’, who has since become such a Hollywood stock character. In the play, Woyzeck is made to eat a diet of only peas as a guinea pig in the unhinged doctor’s dubious experiments!
“But Woyzeck is far more than just a tale of betrayal and murder most foul. Like many classic works, the underlying themes — social class, gender roles, mental health, and human nature vs. the natural world — are as relevant today as when they were written.”
Relevance
His co-translator Sarah Pogoda added: “After almost 200 years, Woyzeck is still relevant to our lives today. I’d rather see us living in a world in which texts like Woyzeck would not speak to us anymore.”
The new Welsh edition is published by Melin Bapur books as part of their Clasuron Byd (World Classics) series, which aims to make important literary works from all over the world available in Welsh.
“We’re really excited to be able to bring this Welsh version of Woyzeck to readers, and perhaps some day a Welsh stage,” explains Melin Bapur editor, Adam Pearce.
“This is exactly the kind of work we wanted to make available in Welsh via our Clasuron Byd series. Interestingly, this isn’t the first Welsh version of Woyzeck – as I understand it a translation was made in the 1980s, but this is the first time the work has been published and made available to the reading public as a book.”
Woyzeck can be purchased from the Melin Bapur website, www.melinbapur.cymru, for £7.99+P&P, as an eBook from a variety of eBook platforms, or from a range of bookshops across Wales and beyond."
https://nation.cymru/feature/classic-of-german-theatre-translated-into-welsh/
#metaglossia_mundus
#Metaglossia
"...The UAE has set a model in leveraging artificial intelligence (AI) to integrate the Arabic language and its cultural heritage into the digital sphere, boosting its regional and global presence as a language capable of meeting future demands.
Various state institutions are rolling out AI-driven initiatives in sectors such as publishing, education, lexicography and creative content.
One of the leading projects is the Historical Dictionary of Arabic Language, a monumental scientific achievement completed last year by Sharjah, the "Capital of Arab Culture". The project documents the evolution of the Arabic language throughout history.
This was followed by the launch of the “GPT Historical Dictionary of the Arabic Language” project, which utilises modern innovations to serve and disseminate the language globally. Linked to AI, the dictionary offers researchers and enthusiasts over 20 million Arabic words. It also enables them to write and read texts, convert them into videos, and continuously feed the dictionary with new information through a collaboration between the Arabic Language Academy in Sharjah and the Emirates Scholar Research Centre.
Meanwhile, the Mohammed bin Rashid Al Maktoum Knowledge Foundation is advancing digital culture and knowledge in the Arab world and globally through initiatives including the “Digital Knowledge Hub,” an Arabic platform for producing, collecting and organising digital content. Last year, it surpassed 800,000 titles and 8.5 million digital items across more than 18 specialised libraries.
The Abu Dhabi Arabic Language Centre, part of the Department of Culture and Tourism, has launched several AI-based publishing projects, including a specialised digital dictionary to support digital Arabic content. It is the first comprehensive Arabic-English dictionary employing AI and computational linguistics.
The dictionary covers over 7,000 core modern terms, offering automated pronunciation, simplified definitions, examples, images, and precise grammatical and semantic classifications.
In collaboration with a team from New York University Abu Dhabi and Zayed University, the centre launched the Balanced Arabic Readability Corpus project “BAREC”, which aims to collect a linguistic corpus of 10 million words encompassing a wide range of literary genres and topics.
The most recent edition of the Abu Dhabi International Book Fair saw the launch of the "Digital Square" initiative, a technical space that provided a platform to enhance the use of AI in publishing and books.
Furthermore, many educational institutions have been keen to launch diverse initiatives to promote the use of AI and modern technologies in teaching the Arabic language."
UAE harnesses AI to boost Arabic language global reach
Monday, August 11, 2025 1:05 PM
ABU DHABI, 10th August, 2025 (WAM)
https://www.wam.ae/en/article/15ppos9-uae-harnesses-boost-arabic-language-global-reach
#Metaglossia
#metaglossia_mundus