Scooped by
Charles Tiayon
The GDELT Project, which collects and analyses global news and social data in real time, is disclosing experiments using AI to process large volumes of news and policy documents. It continuously gathers content in more than 100 languages and updates key datasets about events, relationships and images about every 15 minutes. GDELT also runs a platform translating news written in 65 languages. Recent tests include extracting leadership-change announcements and converting a 3,100-page U.S. bill into an infographic.
"By Jinju Hong, 2026-03-16 13:05:00
GDELT unveils AI experiments translating multilingual news, extracting leadership changes and turning a 3,100-page U.S. defence bill into an infographic. (Hong Jin-ju)
The GDELT Project, which collects and analyses global news and social data in real time, is releasing various experiments that use artificial intelligence to analyse large volumes of news and policy documents.
The online outlet Gigazine reported on March 15 local time that the GDELT Project is a global archive that continuously collects content published in more than 100 languages worldwide, including broadcasts, newspapers and web news, and builds it into a database. It links various elements, including people, organisations, places, events and news sources, into a single network, and provides data on events around the world, their background and trends in public opinion.
The project was founded by data scientist Kalev Leetaru and political scientist Philip Schrodt, and it collects news and social media data from 1979 to the present. The collected data are used as a basis for analysing global political, economic and social trends by quantitatively coding social events and reactions to them. GDELT in particular releases large datasets so researchers and journalists can use them for analysis.
The data consist of three streams: event data that classify physical activity worldwide into more than 300 categories; relationship data that record people, organisations, places, topics and emotions; and data that analyse the visual story of news images. The data are updated about every 15 minutes. GDELT also operates a translingual platform that processes global news written in 65 languages through real-time translation using its own translation system.
Recently, it has also been actively conducting analysis experiments using AI. The GDELT Project disclosed an experiment that uses a Gemini-based model to automatically extract announcements of leadership changes at governments or companies from global news and organise them into a knowledge graph. In the process, AI was used to generate reports that go beyond organising personnel information, inferring the political and economic background. In another experiment, the roughly 3,100-page U.S. National Defense Authorization Act was fed into AI and the entire bill was converted into a single infographic. Along the way, various analyses were also performed, including topic analysis of the bill, organisation of related bills and generation of expected questions.
GDELT also disclosed a large-scale translation experiment. According to a February 2026 announcement, it translated about 3 million TV news broadcasts accumulated over 25 years using AI. Translating a total of 62 billion characters of broadcast data, amounting to about 6 billion seconds of airtime, cost about $74,634. This is work estimated to have required millions of dollars using past methods.
Such projects are assessed as examples showing the possibility that AI can comprehensively analyse vast amounts of news and policy documents. Experts say such data-based analysis could become a new tool for understanding global political and economic trends."
https://www.digitaltoday.co.kr/en/view/39425/ai-translates-25-years-of-news-in-100-countries-summarises-3100-page-bill-in-big-data-test #metaglossia #metaglossia_mundus
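As a rough sanity check on the translation figures quoted above (62 billion characters, about 6 billion seconds of broadcast, roughly $74,634), the unit costs can be computed directly. This is an illustrative back-of-envelope calculation using the article's rounded numbers, not GDELT's own accounting.

```python
# Figures quoted in the article (treated as exact for this rough check).
total_chars = 62_000_000_000     # characters translated
total_cost_usd = 74_634          # reported total cost
total_seconds = 6_000_000_000    # seconds of broadcast audio

# Unit cost: dollars per million characters translated.
cost_per_million_chars = total_cost_usd / (total_chars / 1_000_000)

# How much continuous airtime 6 billion seconds represents.
years_of_audio = total_seconds / (365 * 24 * 3600)

print(f"~${cost_per_million_chars:.2f} per million characters")
print(f"~{years_of_audio:.0f} years of continuous broadcast audio")
```

At roughly $1.20 per million characters, the contrast with the "millions of dollars" the article attributes to past methods is three to four orders of magnitude.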
Are humans the only beings on the planet that use language to communicate?
"Burg Giebichenstein
Kunsthochschule Halle
“Language can only deal meaningfully with a special, restricted segment of reality. The rest, and it is presumably the much larger part, is silence.” George Steiner
Are humans the only beings on the planet that use language to communicate? Can we decipher the nonhuman world around us without harnessing it to our own socialization, syntax, and lexicon? Is interspecies communication even possible? Translation has been described as a precondition that underlies all (human) cultural transactions upon which communication is based. It is also inherently political and stands at the forefront of so many of today's questions around identity, gender, post-colonial criticism, feminist critique, machine translation and canon creation, yet its connection to the nonhuman turn, interspecies communication, and eco-criticism has not yet been fully explored.
Whether we are talking about classic linguistic and literary translation, or any number of related fields, including language and literature, cultural studies, performance, and visual and media arts, the core question that translators and theorists of translation have been debating for centuries remains the same: is it possible to translate without interpreting? Is linguistic and cultural equivalence even possible? These questions become all the more urgent in the limit-case of interspecies communication. Can we apply empathic modes of translation to nonhuman articulations, wherein translation involves a form of metamorphosis, not of text, but of the translator? As such, translators are something of a hybrid species with one foot in each culture and language, whose very existence revolves around traveling between worlds. Translators have something of a mythical being about them, akin to a chameleon or centaur. In this course, we will not be engaging in a scientific exploration of interspecies communication, but examining theories around empathic translation: a process that sees translation not merely as the transformation of a text, but of the translator themself.
Emerging and classical theories of translation can offer a paradigm for engaging with plant and animal articulation, not language as such, but different forms of articulation perceived through the senses, one in which our hearing and seeing, “once intertwined and attentive to the calls and cries of animals, all but disappeared with the invention of the alphabet, retreating into a kind of silence.”
In David Abram's words: “By giving primacy to perception we can see the natural world, not as inert and passive, but as dynamic and participatory. The winds, rivers and birds speak in their own way (if we listen), the sounds of nature not only have informed indigenous languages, but language in general--humans are but one being intertwined with other beings and ‘presences.’ This perspective sees the landscape as a sensuous field, and human perception as but one point of view that is in reciprocity, in expressive communication, with other points of view and ways of being.”
How can theories of translation help us make sense of this new view of a world teeming with language and sentience? What theories abound in reference to the multiplicity of “language,” even as Walter Benjamin would argue for a “universal (human) language”? What practical tools does translation studies offer, and what bridges can it forge between the disciplines? The first half of the seminar focuses on key theoretical concepts relevant to the history and practice of translation. In the second half, students will engage in translation experiments that intersect with their own artistic/design practice. A final project should be considered a first draft of something that could develop later into a larger project.
The course will be taught in English and German.
This seminar is ideally suited to students interested in: Literature, Translation Theory, Translation, Cultural Studies, Critical Theory, Creative Writing, Post-humanism, Trans-humanism, Eco-criticism, and the More-than-Human Turn.
Teachers
Dr. Zaia Alexander"
https://www.burg-halle.de/en/course/l/talk-with-the-animals-translation-in-a-more-than-human-world
#Metaglossia
#metaglossia_mundus
#métaglossie
"Korean fiction has experienced a rapid surge in popularity in the English-speaking world in recent years. Many attribute this to the Korean Wave that's been sweeping through cinema and music. Whatever the reason, Korean writers have been winning major literary awards and attracting the spotlight for their achievements. With so much amazing fiction to choose from, there are tons of great options for readers. We've compiled a list of some of the best Korean fiction in multiple genres from the 2010s and 2020s, including powerhouse authors like Han Kang and Bora Chung alongside rising stars like Sang Young Park.
Cursed Bunny
by Bora Chung translated by Anton Hur
Originally published in Korean in 2017, Cursed Bunny was a finalist for the 2023 National Book Award for Translated Literature. This haunting collection of short stories is deliciously eerie, sometimes veering into body horror and at other times utilizing surrealism and even absurdism.
Kim Jiyoung, Born 1982
by Cho Nam-Joo translated by Jamie Chang
Kim Jiyoung, Born 1982's Korean release in 2016 coincided with the global #MeToo movement. Featuring a sort of Korean everywoman figure as its protagonist, the novel dives right into a powerful critique of misogyny in the contemporary era. The book's interrogation of gender inequality is enacted both through its unique premise (a woman takes on the consciousness of a myriad other women) and its unsettling narration, delivered by the male psychiatrist evaluating her case.
Love in the Big City
by Sang Young Park translated by Anton Hur
Longlisted for the 2022 International Booker Prize (among others), Love in the Big City has garnered enough popularity that it was recently made into an independent film. Told in sections organized around different relationships in the protagonist's life, it has a surprisingly lighthearted feel to it for a book that contends with homophobia, dysfunctional families, unhealthy relationships, and loneliness. It's a thought-provoking read about a young gay man's quest for love.
Untold Night and Day
by Bae Suah translated by Deborah Smith
Untold Night and Day is a fascinatingly disorienting work of literary fiction. It begins firmly enough, grounded in reality, but as the story unfolds, characters and experiences begin to collapse in on each other. If you're a reader who enjoys an unconventional and potentially challenging read, this book is perfect for you.
Welcome to the Hyunam-dong Bookshop
by Hwang Bo-reum translated by Shanna Tan
The popularity of "healing fiction," the label some have applied to select contemporary Korean and Japanese fiction with "cozy" and fabulist elements, has been growing over the past several years. Welcome to the Hyunam-dong Bookshop falls into this category and makes for a comforting and introspective read that focuses on the balance between ambition and happiness.
The Hole
by Hye-young Pyun translated by Sora Kim-Russell
This novel is a psychological thriller at its finest. Grappling with some of the darker aspects of life—such as control, guilt, and loss—this deeply uncomfortable story of a man who has been paralyzed in a car accident pushes readers to reflect on the consequences of living. As he is subjected to abuse and neglect, the lines between truth and lies blur in terrifying ways.
Greek Lessons
by Han Kang translated by Deborah Smith and Emily Yae Won
Han Kang was awarded the Nobel Prize in Literature “for her intense poetic prose that confronts historical traumas and exposes the fragility of human life.” Though it was the first book Kang published after her smash hit The Vegetarian, Greek Lessons wasn't published in English until over a decade later. It grapples with themes of loss and trauma using prose that exhibits the author's roots as a poet."
By Kobo • March 15, 2026
https://www.kobo.com/blog/the-best-korean-fiction-in-translation
#metaglossia
#metaglossia_mundus
"Translation technology tools play a pivotal role in overcoming linguistic and other communication-related obstacles in different types of crises. Drawing on real-life examples, the chapter explores translation technology as an agency-enabling solution that facilitates access to instant, accessible information. The complex interaction between translation tools, human actors, and society is mapped through the lenses of sociological theories of agency. The chapter highlights how the development and deployment of such technologies can affect both crisis preparedness and containment, frequently amplifying the voices of governments and technology providers at the expense of those directly affected. To establish inclusive, technology-enabled communication, the chapter offers recommendations for contextually relevant crisis policies and management strategies, advocating for adaptable approaches and positioning human translators as safeguards against overreliance on AI tools. It also underlines the need for transparent, trustworthy communication channels and balancing sociocultural factors and power dynamics, ensuring that crisis communication is inclusive and people-centred."
https://www.taylorfrancis.com/chapters/edit/10.4324/9781003271314-15/translation-technologies-automation-crisis-situations-khetam-al-sharou-mieke-vandenbroucke-gert-vercauteren
#metaglossia #metaglossia_mundus
"The world's first Tibetan large language model and its application, DeepZang, has been officially unveiled in Lhasa, Southwest China's Xizang Autonomous Region. This model fills the gap in indigenous large language models at both the national and ethnic levels, while also facilitating the innovation and inheritance of Tibetan ethnic culture in the AI era, the company's chairman told the Global Times.
Developed independently by CHOKNOR Information Technology Co., Ltd. in Xizang, the model and its application are the first Tibetan large language model to complete national filing for generative AI in China, filling a technological gap in this field globally, according to local media Tibet.cn.
The World Record Certification Agency (WRCA) also awarded the certification of "the World's first Tibetan large language model" at DeepZang's launch event, chinanews.com reported on Monday.
Tenzin Norbu, chairman of the CHOKNOR company, told the Global Times on Monday that this open-source large model platform is China's first ethnic language AI open platform designed for multilingual and multimodal capabilities. The DeepZang platform supports over 80 languages, including Tibetan, Putonghua, English, Mongolian and Uygur, enabling an integrated approach to listening, speaking, translating, recognizing and thinking, Tenzin added.
The DeepZang model marks a strategic leap for China to take the lead in AI for ethnic languages, officially inaugurating high-quality AI development for the Tibetan language in Xizang and ushering in the era of AI for the Tibetan language, Tibet.cn reported.
The DeepZang application was also launched on Sunday, supporting intelligent interactions in Tibetan, Putonghua and English. Users can speak or type a sentence to access real-time mutual translation, Tibetan-language Q&A and cultural knowledge inquiries, according to the report.
Shortly after its launch on Sunday, the app recorded an average of 4,000 downloads per hour, the Global Times learned from the company.
Tenzin said the company has built a high-quality parallel corpus of nearly 70 million precise Tibetan-Putonghua language pairs. Additionally, they have completed large-scale speech data collection across the three major Tibetan dialect regions, establishing China's largest accurately annotated Tibetan speech database to date, he added.
As shown in a video released by the Xizang Daily, several users voice-inputted instructions in different Tibetan dialects, and the application achieved accurate recognition and delivered prompt responses with high efficiency.
Tenzin said the development of this large language model has filled the gap in Tibetan large language models at the national and ethnic levels, and it also gives full play to the Tibetan cultural value, facilitating the innovation and inheritance of Tibetan ethnic culture in the AI era.
An official from Lhasa people's government was quoted by Tibet.cn as saying that the successful development of DeepZang has provided a valuable exploratory model for the global AI community in the processing of low-resource languages. It stands as a testament that modern information technology can effectively underpin the preservation and development of traditional cultures, the official added.
"Through this large language model and its application, we also aim to provide an authentic platform for global users seeking to learn about Tibetan culture, history and politics, thereby preventing the dissemination of distorted ideologies and values," Tenzin said.
In another video posted by the Lhasa Women's Federation on its official WeChat account, a student from Xizang University said that DeepZang's translation function is very useful, though the translation of some four-character idioms is still not fully developed.
Tenzin said that the model is currently limited by the scope of its corpus data, and the company will continue to refine and update it based on user feedback.
In the future, this large language model is set to extend its capabilities to sectors including education, healthcare and ecology, delivering convenient and efficient services to enterprises and government agencies..."
https://www.globaltimes.cn/page/202603/1357052.shtml
#metaglossia
#metaglossia_mundus
"A Stanford engineer has demonstrated that frontier language models can run directly on everyday edge devices using convex optimization, eliminating reliance on cloud servers and costly GPUs. The breakthrough, unveiled at NeurIPS 2024, enables secure, lower-cost, personalized AI with early international commercial deployments.
United States, March 12, 2026 -- A Stanford engineer has shown that the world’s most advanced "frontier" language models can now run directly on regular edge and local devices. This removes the pure reliance on cloud servers and costly specialized hardware.
This engineer used advanced mathematical optimization techniques to show that sophisticated and helpful "frontier" language models can run on the personal devices people already have. This change means the industry no longer has to rely on the cloud or expensive specialized GPU hardware.
Breaking the Cloud Dependency
Running advanced neural networks usually means using an army of cloud computing resources, which requires expensive GPU farms, a steady internet connection, and per-token API fees. Miria K. Feng, a PhD candidate in electrical engineering at Stanford University, has successfully merged the potential of mathematical convex optimization techniques with large-scale deep learning applications for far more accessible and personalizable AI. Powerful frontier models running on your local devices mean greater security, since your data stays local and reduces the cost of paying per-token fees to a few large tech conglomerates.
Using mathematical optimization to reformulate neural networks is not new; the idea was proposed by Turing Award winner Yoshua Bengio. But the practical deployment of these elegant theoretical techniques in large-scale AI was first publicly announced in Miria's work at NeurIPS 2024. This quiet breakthrough has led to frontier models that efficiently run personalizable inference on the everyday edge devices we already carry in our back pockets.
“The goal was to prove that you don't need a GPU cluster or fiber internet connection to use frontier technology,” said Miria. “We use principled convex optimization techniques in conjunction with machine learning to cut the computing power needed without sacrificing quality in results. This dramatically reduces barriers to entry for global users and helps safeguard user privacy since data is not being constantly shared on the cloud."
From Academic Research to Market Launch
Early deployments in Canada, Singapore, and Japan to build accessible, everyday, personalized AI tools were a resounding success for Miria's innovations. Her commercial deployments span widely, from Toyota Motor Corporation in Nagoya, Japan, to FCS Solutions in Singapore.
Meanwhile, Miria is continuing her cutting-edge doctoral work at Stanford University as a Rambus Corporation Fellow, with beta tests in the hospitality sector set to go live in Los Angeles and Las Vegas in 2026. Official news about partnerships is expected later this year.
A Multidisciplinary Approach
Her unique background shapes Miria's technical work. She is a Kiwanis Music Festival gold medalist and concert pianist and is currently a student of Melinda Lee Masur. Her top national performances in the Pascal, Fermat, and Euclid mathematics competitions continue to give her a creative yet principled approach to engineering. She paid her own way through school and has lived in several countries, which led her to focus on "equitable access" and to build tools that work for everyone, regardless of local infrastructure or income.
About the company: Miria K. Feng is a doctoral researcher in the Department of Electrical Engineering at Stanford University, focusing on electrical engineering and convex optimization for deep learning. As a Stanford Graduate Fellowship winner and a Rambus Corporation Fellow, she connects theoretical math optimization with real-world applications through refreshing innovation.
Contact Info: Name: Miria Feng Email: Send Email Organization: 9-Figure Media Website: https://9figuremedia.com/"
https://markets.businessinsider.com/news/stocks/new-technology-brings-advanced-language-models-to-everyday-devices-1035923587 #metaglossia #metaglossia_mundus
"China adopts a law promoting Mandarin as the 'national common language'
China's National People's Congress on Thursday approved a so-called 'ethnic unity' law that human rights advocates consider harmful to the country's minority languages and cultures.
China adopted the law on the 'promotion of ethnic unity and progress' on Thursday during its annual political event, the Two Sessions (a parliamentary gathering at which the Chinese government sets its main economic and political directions for the coming year); it was approved without debate during the parliamentary session. The new law, passed by the National People's Congress (NPC), formalises policies promoting Mandarin as the 'national common language' in education, official affairs and public spaces.
Beijing presents the law as a tool of modernisation and prosperity, asserting that it will strengthen 'the sense of common community of the Chinese nation' and improve minorities' employment prospects through command of Mandarin. Academics and human rights defenders, however, see it as the legal consolidation of a policy of forced assimilation, according to the BBC.
According to the British outlet, the measure contains clauses that weaken the status of China's other official languages in favour of Mandarin. It thereby targets the 55 official minorities, who make up about 9% of China's 1.4 billion inhabitants.
Language as the main target
In some regions such as Tibet and Inner Mongolia, home to large ethnic minority groups, government policies have already mandated Mandarin as the language of instruction. Yalkun Uluyol, a China researcher at the NGO Human Rights Watch, described the new law to AFP as a 'radical shift' from the era of former leader Deng Xiaoping, when minorities were guaranteed the right to use their own languages. Educational institutions will now have to use Mandarin as their main language of instruction, and adolescents will be required to have 'a basic command' of Mandarin by the end of compulsory schooling.
Tensions over language had flared well before the law's adoption. In 2020, in Inner Mongolia, the abrupt withdrawal of Mongolian-language textbooks sparked rare but powerful protests. Some parents even kept their children home in protest, seeing the measure as a direct threat to their cultural identity. The crackdown was immediate and massive, followed by re-education campaigns. Students in the region may now study Mongolian for only one hour a day, as a mere foreign language, according to the Associated Press.
The law also provides for penalties against parents or guardians in China who pass on to their children ideas deemed contrary to 'ethnic harmony'. It likewise establishes an unprecedented legal basis for prosecuting individuals or organisations based outside China whose actions harm 'ethnic unity', a mechanism of particular concern to Uyghur, Tibetan and Mongolian exile communities, often among the most critical of the regime.
China, where the Han form the overwhelming ethnic majority, recognises 55 minorities within its borders, together speaking several hundred languages and dialects. The Chinese government has for decades been accused of pursuing policies to forcibly assimilate these minorities into the Han majority." Joséphine Guilhem de Pothuau, 13 March 2026
https://www.lefigaro.fr/international/la-chine-adopte-une-loi-qui-promeut-le-mandarin-comme-langue-commune-nationale-20260313
#metaglossia #metaglossia_mundus
"English PEN’s flagship translation grant programme, PEN Translates, announced its latest round of winners, awarding grants to 18 titles from 14 publishers across 12 languages and 16 regions. Three of those titles come from African writers — from Egypt, Sudan, and Mauritius — and one of them makes history as the first Mauritian title ever to receive a PEN Translates award.
The Egyptian title is The Field by Hamdi Abu Golayyel, translated from the Arabic by Robin Moger and published by Saqi Books. Abu Golayyel — who passed away in June 2023 — was one of Egypt's most distinctive literary voices, born in Fayoum and widely described as a chronicler of the lives of Egypt's marginalised and working class. Three of his novels have previously been translated into English: Thieves in Retirement (tr. Marilyn Booth, 2006), A Dog with No Tail (tr. Robin Moger, 2009), which won the Naguib Mahfouz Medal for Literature in 2008, and The Men Who Swallowed the Sun (tr. Humphrey Davies, 2022), whose translator was joint winner of the 2022 Saif Ghobash Banipal Prize for Arabic Literary Translation. The Field will be a welcome return of his work to English-language readers.
The Sudanese title is Under the Neem Tree by Rania Mamoun, a Sudanese activist and bestselling writer of poetry, fiction, and nonfiction, translated from the Arabic by Elisabeth Jaquette and published by Comma Press. Jaquette previously translated Mamoun's Thirteen Months of Sunrise (Comma Press, 2019), which was shortlisted for the 2020 Warwick Prize for Women in Translation and was itself a PEN Translates award winner, making this a continuation of a celebrated translating partnership.
The most historic of the three is The Rasta’s Song by Sharon Paul from Mauritius, translated from the French and Mauritian Creole by Nadiyah Abdullatif and published by Balestier Press. This is the first time a title from Mauritius has ever received a PEN Translates award, a milestone that reflects both the programme’s expanding geographic reach and the growing recognition that Francophone and Creole-language African literatures deserve a place in the global translation conversation. The inclusion of Mauritian Creole as a source language is itself significant: it joins Slovak as one of two languages appearing in the PEN Translates portfolio for the first time in this round.
PEN Translates has now supported over 400 books translated from over 90 languages, awarding over £1.2m in grants since its inception. Books are selected on the basis of outstanding literary quality, the strength of the publishing project, and their contribution to UK bibliodiversity. The programme's Translation Advisory Co-chair Nichola Smalley described this round as giving "hope for the future of UK translation publishing". For African literature specifically, three grants in a single round, including a historic first, is a result worth celebrating!"
by Blessing Uwisike, February 26, 2026
https://share.google/B1Gq1t8fiiOoFaf2z
#metaglossia #metaglossia_mundus
Creators can now upload language-specific thumbnails, enabling viewers to see previews in their preferred language and improving discoverability globally
"YouTube has introduced a new feature that allows creators to upload translated thumbnails for their videos, a move aimed at helping content reach audiences across different languages more effectively.
The update enables creators to add multiple thumbnail versions for a single video in different languages. When viewers browse the platform, the thumbnail displayed will automatically match their language preferences, allowing them to see a preview image that feels more familiar and relevant.
For instance, a viewer whose interface language is Hindi may see a Hindi-language thumbnail, while someone browsing in Spanish could see a Spanish version of the same video’s thumbnail. Despite the different preview images, both viewers would still be watching the same underlying video.
The feature is designed to complement YouTube’s existing multi-language audio capabilities, which allow creators to upload alternative audio tracks in different languages for the same video. By adding translated thumbnails to the mix, the platform is extending localisation beyond audio to the visual entry point of a video.
Creators can add these translated thumbnails through YouTube Studio, where they can upload different thumbnail images mapped to specific languages. Once added, YouTube automatically determines which version to display based on the viewer’s language settings.
The company says the feature is intended to help creators improve discoverability and engagement among global audiences, particularly for channels that publish content aimed at viewers in multiple regions."
https://www.buzzincontent.com/news/youtube-rolls-out-translated-thumbnails-to-help-creators-reach-multilingual-audiences-11205414
#metaglossia #metaglossia_mundus
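The article describes the thumbnail-selection behavior only at a high level, and YouTube has not published its matching algorithm. As a purely hypothetical sketch of the general idea (language-specific assets with a fallback to a default), the snippet below uses invented file names and an invented lookup rule; it is not YouTube's actual logic or API.

```python
# Hypothetical thumbnail lookup: language-specific versions with a default
# fallback. Mapping keys and matching rules are illustrative only.
thumbnails = {
    "default": "thumb_default.png",
    "hi": "thumb_hindi.png",    # Hindi
    "es": "thumb_spanish.png",  # Spanish
}

def pick_thumbnail(viewer_lang: str) -> str:
    """Return the thumbnail for the viewer's language, trying the exact
    tag first (e.g. 'es-MX'), then its base language ('es'), then default."""
    lang = viewer_lang.lower()
    if lang in thumbnails:
        return thumbnails[lang]
    base = lang.split("-")[0]          # 'es-MX' -> 'es'
    return thumbnails.get(base, thumbnails["default"])

print(pick_thumbnail("es-MX"))  # thumb_spanish.png
print(pick_thumbnail("fr"))     # thumb_default.png
```

A viewer browsing in Spanish (any regional variant) gets the Spanish image, while languages with no uploaded variant fall back to the channel's default thumbnail.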
"Taariq Ahmed, Assistant Campus Editor
March 13, 202
Poet, translator and New York University professor of English and of Spanish and Portuguese Urayoán Noel shared his work and discussed ideas Thursday in a poetry reading and Q&A event with Northwestern community members, part of the English department's Unsettling Sound series.
Noel is the author of several books in English and Spanish. He performed for about 15 attendees in University Hall.
He started by reading poems from his 2021 collection, “Transversal,” performing with voice and volume changes and reciting both the English and Spanish versions. Throughout the event, he performed poetry with instrumental music in the background.
“Now poetry is just a name for this, our faint embodied sound, for music once it’s not around, for ash in lockstep with the flame, for streets still summoning the same old shadows,” Noel said in one poem named “Juliécimas.”
Noel then transitioned into his ongoing series, “Wokitokiteki,” which he described to the audience as a “walking poetic improvisation project.” He said he creates the content while walking through neighborhoods in Puerto Rico and those in U.S. states with significant Puerto Rican populations.
In honor of his visit to the Chicagoland area, he delivered one piece inspired by Humboldt Park, a historically Puerto Rican cultural hub in Chicago, specifically referencing his observations from the walk.
As a translator, Noel has repeatedly translated works from Garifuna and Guatemalan poet Wingston González. Noel recited poems from an unpublished translation of González’s 2015 book, “Translaciones.”
When explaining his relationship with González, he said they share an affinity for things like performance and improvisation, despite their cultural differences.
Citing inspiration from “The Traffic in Meaning: Translation, Contagion, Infiltration” by Mary Louise Pratt, he talked about how translation is less about producing equivalences and more about understanding and representing the experiences of others.
Later, Noel read from his 2025 autobiographical prose work, “Cuaderno de Isabela/Isabela Notebook,” and handed out copies to attendees.
“Tell me if there’s a city like the one with the horse staring at the sea in front of windows with iron bars and flanked by piles of car tires…” Noel said in one poem, translated to be “Pueblo” or “Town.”
Noel then transitioned into a Q&A with the audience. He discussed the Wokitokiteki project and the concept of improvisation. He also contrasted product with process.
Noel also talked about his philosophy on teaching poetry and writing to students. He said emphasizing the process of writing poetry is essential, as the product is “tied to racial capitalist ideas” of generating something to sell.
“We can always do things to become better writers, but I can’t tell you what you need to write,” Noel said. “What I can share with you is the process. How did my process get me from A to B?”
NU Spanish and Portuguese Prof. Emily Maguire, who went to graduate school with Noel at NYU, said she believes he is an impressive performer.
She said he is one of the most proficient bilingual people she has ever met.
“He has a tremendous facility in both Spanish and English, but he is also someone who has a tremendous gift for performing live and a real ability to capture an audience and move and entertain in surprising and creative ways,” she said.
Spanish and Portuguese Prof. Julia Oliver Rajan, who is Puerto Rican, said though she was initially unfamiliar with Noel, she enjoyed his performance.
“It resonated with me the vibrancy of his poetry,” Rajan said. “The way he described Puerto Rico, the struggles of Puerto Rico — I liked those things in his poetry.”
In the Q&A, Noel spoke about what it is like to translate works from poets who are from a different culture or who are dead, both of which he has done in his career.
He said to be a translator it was crucial to embrace these discrepancies, calling translation the “least messed-up kind of appropriation.”
“You’re not going to do away with the fundamental tension of ‘Oh, this person is dead, and I’m here telling their story,’ especially if they’re from community X, and I’m from community Y,” Noel said. “But to me, that shouldn’t dissuade us, because there’s way more work that needs to be translated than there are translators.”
Email: r.ahmed@u.northwestern.edu"
Taariq Ahmed, Assistant Campus Editor
March 13, 2026 https://dailynorthwestern.com/2026/03/13/campus/poet-translator-and-professor-urayoan-noel-shares-work-in-reading-qa-event/ #metaglossia #metaglossia_mundus
"Zigbang (CEO Ahn Seong-woo), a comprehensive proptech company, announced on December 13 that it has added real-time AI-based speech recognition and multilingual translation to its virtual office platform, Soma. The features are intended to help international teams collaborate free of language barriers.
Soma is a metaverse-based virtual office platform developed by Zigbang. It recreates the spatial environment of a physical office online, encouraging natural interaction and collaboration even in remote or hybrid work settings. With this update, users can see spoken content displayed as real-time text in the virtual space and have what their interlocutors say translated into the language of their choice. Soma currently supports more than 50 languages and 145 locales. The generated text can be saved locally and used, for example, to draft meeting minutes.
The feature incorporates context-aware translation technology that takes the preceding conversation into account. By going beyond word-for-word substitution to produce translations that preserve the flow of the dialogue, it aims to minimise the miscommunication that can arise in multinational meetings.
Conversation data is designed to be processed between participants rather than stored on a server. Speech recognition and translation use a dual-stream architecture that delivers messages both stably and in real time, making the system suitable for enterprise environments that demand strong security.
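The dual-stream idea can be illustrated with a toy model. This is purely hypothetical (Zigbang has not published Soma's internals): captions are assumed to go out on one stream the moment speech is recognized, while translations follow on a second stream after a machine-translation delay, so slow translation never holds up live captions.

```python
# Toy model of a dual-stream delivery design (hypothetical, not Soma's
# actual architecture): each utterance produces an event on the caption
# stream at recognition time and an event on the translation stream
# after a fixed MT latency, keyed by the same timeline.
TOY_MT = {"안녕하세요": "Hello", "회의를 시작합시다": "Let's start the meeting"}

def deliver(utterances, mt_latency_ms=400):
    """Return (caption_stream, translation_stream) as (time_ms, text) events."""
    captions, translations = [], []
    for t_ms, text in utterances:
        captions.append((t_ms, text))                           # delivered instantly
        translations.append((t_ms + mt_latency_ms, TOY_MT.get(text, text)))
    return captions, translations

caps, trans = deliver([(0, "안녕하세요"), (1500, "회의를 시작합시다")])
print(caps[0])   # (0, '안녕하세요'): caption latency independent of MT latency
print(trans[0])  # (400, 'Hello')
```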
With this update, Zigbang plans to evolve Soma into a platform that helps improve processes by learning organisational decision-making flows and work contexts, going beyond simply recording meetings and extracting tasks. Longer term, the company is researching a "digital me" environment in which work can continue even in a user's absence, through AI agents that learn each person's professional language and judgement.
A Zigbang representative explained: "Introducing this AI-based multilingual feature is a first step toward lowering language barriers," adding: "We plan to expand the features gradually so that Soma can evolve beyond a simple virtual space into a platform that improves how organisations collaborate."
With the recent spread of remote and hybrid work, AI-based real-time translation and collaboration tools are attracting growing interest as key solutions for improving productivity and efficiency in global enterprise environments.
Source: Zigbang integrates real-time AI-based multilingual speech recognition and translation features into Virtual Office Soma - Venture Square https://www.venturesquare.net/fr/1049186/"
https://www.venturesquare.net/fr/1049186/
#metaglossia #metaglossia_mundus
"… In Locronan (29), the Breton translation of panels installed as part of a heritage trail has raised the hackles of the commune's Breton speakers.
Jean-Marc Louboutin is part of the collective demanding the removal of the panels of a heritage trail installed in Locronan. The reason? The "catastrophic" translation of the texts into Breton. (Photo Aude Flambard)
"This is truly an example of misuse of artificial intelligence," says Anne Gouerou. She is part of a collective of Breton-speaking residents of Locronan (29) that formed after the discovery of panels installed in early March as part of a heritage trail retracing the history of cinema in the small Finistère commune..." By Paul Bohec with Aude Flambard, March 13, 2026, 4:50 p.m. https://www.letelegramme.fr/finistere/locronan-29180/un-massacre-de-la-langue-a-locronan-la-traduction-bretonne-de-ces-panneaux-fait-bondir-des-habitants-7003989.php #metaglossia #metaglossia_mundus
"Many of the immigrants detained at Northwest State Correctional Facility in Swanton have the same question for the volunteer attorneys who’ve visited to provide in-person counsel.
“One of the questions that we got asked the most often was, ‘Where am I? What state am I in?’” said Emma Matters, an immigration attorney with the Vermont Asylum Assistance Project. “Even that very, very basic information that you assume someone has access to, people go without if they don’t have someone coming in and conversing with them in their language and explaining to them just what is going on.”
Matters says the experience underscores the disadvantage that immigrants who don’t speak English face when they’re detained in facilities that can’t communicate in a language they understand. And she says prohibitions on language-access devices at the Vermont Department of Corrections have in some cases prevented attorneys from providing the basic legal services that immigrants need to fight their cases.
“Without someone who’s able to provide them with that information, let them know what’s being put in front of them or what might be put in front of them, people end up being vulnerable to life-changing harm,” Matters said.
The number of people arrested and detained by Immigration and Customs Enforcement is up tenfold in New England since the start of President Donald Trump’s second term in office. Some of them have ended up in two prisons operated by the Department of Corrections, which contracts with the Department of Homeland Security to provide temporary lodging for immigrant detainees.
Local immigration attorneys almost universally support the state’s decision to lodge detained immigrants at Northwest, for men, and at Chittenden Regional Correctional Facility in South Burlington, for women.
“We need these beds,” Jill Martin Diaz, executive director of the Vermont Asylum Assistance Project, told lawmakers in January. “Because there is absolutely no substitute to me getting in my car and driving up the road ... flashing my attorney credential and being able to meet with my client face-to-face.”
Mae Nagusky / Vermont Public File
Chittenden Regional Correctional Facility is Vermont's only women's prison and one of two facilities that routinely houses immigrant detainees. Attorneys are raising concerns about what they say is a lack of language translation services available as they meet with clients.
But a Department of Corrections policy that prohibits attorneys from using their own translation services in state facilities has hindered their ability to help, attorneys say.
“DOC policies and deficiencies are preventing low bono and volunteer attorneys from being able to speak with their clients who are in detention and is thereby depriving them of access to their due process rights,” said Hillary Rich, a staff attorney at the Vermont ACLU who spent two years practicing asylum law in Laredo and San Antonio, Texas.
No outside devices
Matters said the Vermont Asylum Assistance Project has been making regular trips to the state prisons since last year to meet with newly detained immigrants. She said the organization explains their rights, advises them of potential claims, and provides referrals to lawyers who might provide representation.
VAAP attorneys had previously been allowed to bring in their own “tools of interpretation,” including laptops or cell phones on which they could call out to access live translation services.
“It’s very hard to know in advance what type of language capabilities we’re going to need on that day,” Matters said. “We see people detained who speak a wide variety of languages, including rare and Indigenous languages.”
But in October, officials at VAAP say, the department told them they could no longer bring those devices into the facilities. The single DOC landline that attorneys now have access to drops calls frequently, Matters said. And she said it bottlenecks a process that previously allowed multiple attorneys to work several cases simultaneously. The process became so inefficient that VAAP has cut the number of trips it makes to state prisons in half.
“The numerical reality of that is that … between tens and hundreds of people who would otherwise have access to legal screenings, basic know your rights, and case advice and potential referral out to legal services, go without,” she said.
Peter Hirschfeld / Vermont Public File
Elected officials and nonprofit leaders gathered in the Statehouse in May 2025 to announce the launch of the Vermont Immigration Legal Defense Fund. Jill Martin Diaz, at the podium, with Vermont Asylum Assistance Project, said the money would be used to train and hire legal professionals to provide pro bono assistance to noncitizens facing immigration proceedings.
Corrections Commissioner Jon Murad said in an interview Wednesday that the department has a longstanding policy that prohibits people from bringing “anything with cellular capacity” into a state prison.
“What if it’s misplaced? What if it disappears? What if it is then transferred over to the direct control of people in our custody?” said Murad, who joined the department in August. “That is a risk, and one that we don’t want to countenance.”
VAAP’s ability to bring in its own devices up until October, according to Murad, might have been related to a lapse in policy enforcement.
'Set up to fail'
The commissioner said the department has since taken steps to lower the language barrier, by providing attorneys with DOC-owned devices that have translation capabilities.
Murad said DOC had six such devices at Northwest and three at Chittenden Regional. Matters said VAAP attorneys who visited DOC facilities as recently as March 6 have not been told about the new devices.
“That was brand new information to me and to all of my colleagues,” Matters said Wednesday.
The DOC devices don’t have cellular capacity – a shortcoming Matters said would likely render them useless to VAAP attorneys.
“We require live interpretation services. We need to be speaking to a human,” Matters said.
Murad said the department is working on a plan that would give lawyers the ability to make calls to translation services on DOC-owned devices, though he said he doesn’t have a timeline for that yet. He said the department has undertaken other efforts to facilitate access to counsel for detained immigrants – it sends VAAP a daily list of names of new arrivals at facilities, so the organization is aware of individuals who might need assistance.
Rich, of the ACLU, said a DOC policy she obtained in February through a public records request shows that immigrant detainees are responsible for coordinating their own remote hearings.
“Which for a limited English proficient detainee who does not have counsel and doesn’t even know what state they’re in is going to prove impossible,” she said. “These folks are being set up to fail in their immigration court systems by the deficiencies in DOC procedures.”
Rich said Northwest and Chittenden Regional are subject to public accommodations laws that include language-access requirements. She said the Department of Corrections might be violating those laws.
“Lawsuits are just one tool in our toolbox,” Rich said, “but of course it is a tool we are very comfortable wielding when necessary.”
Peter Hirschfeld https://www.vermontpublic.org/local-news/2026-03-12/lawyers-raise-alarm-about-language-translation-services-for-vermonts-detained-immigrants #metaglossia #metaglossia_mundus
"Theorizing “Global Criticality” and the Politics of Just Translation
07 May 2026 18:00 to 19:30
Bush House, Strand Campus, London
07
May
Professor Emily Apter is giving the keynote lecture at the annual conference of the Department of Interdisciplinary Humanities, King's College London. This lecture is open to the public.
"Translation and justice, the focus of my book What is Just Translation? Changing Languages in the Political Present, engages Gayatri Chakravorty Spivak’s notion of “global criticality” as a rubric for a vision of language politics that straddles the fields of law, global language policy, non-monolingual pedagogies and reparations applied to forms of linguistic injustice and cultural appropriationism. I associate “global criticality” with translational workarounds - ways of working micropolitically with language and intermedial forms of expression. These microforms stand in contradistinction to one-size-fits-all paradigms or “isms” that are anchored in colonial Euro-chronology and beholden to reductive bipolarities between major and minor, metropole and periphery, written and performative. As a micropolitics of language, “global criticality” flows into Spivak’s notion of “living translation:” a triple play on living with translation, living life in translation, and “live” translation, which vivifies life itself."
About the speaker
Professor Emily Apter is Julius Silver Professor of Comparative Literature and French Literature, Thought and Culture at New York University. Her books include: Unexceptional Politics: On Obstruction, Impasse and the Impolitic (Verso, 2018); Against World Literature: On the Politics of Untranslatability (2013); Dictionary of Untranslatables: A Philosophical Lexicon (co-edited with Barbara Cassin, Jacques Lezra and Michael Wood) (2014); and The Translation Zone: A New Comparative Literature (2006). Since 2000 she has edited the book series Translation/Transnation with Princeton University Press. Essays have appeared in New Literary History, October, Public Culture, Crisis and Critique, History and Theory, Diacritics, PMLA, Comparative Literature, Critique, Les Temps qui Restent, Representations, Art Journal, Third Text, Paragraph, boundary 2, Artforum, Esprit Créateur and Critical Inquiry. In 2019 she was the Daimler Fellow at the American Academy in Berlin. In 2017–18 she served as President of the American Comparative Literature Association. In fall 2014 she was a Humanities Council Fellow at Princeton University and in 2003–2004 she was a Guggenheim Fellowship recipient. In 2022 she co-edited and introduced Gayatri Chakravorty Spivak’s Living Translation, a collection of Spivak’s contributions to translation theory. Her book What is Just Translation? Changing Languages in the Political Present is nearing completion. Her next book project, on the conceptualization of the unborn (or “prepersons”) is provisionally titled Conception: The Laws."
https://www.kcl.ac.uk/events/theorizing-global-criticality-and-the-politics-of-just-translation
#metaglossia
#metaglossia_mundus
"Vozo AI, an AI-powered video localization platform, today announced the beta launch of Visual Translate, a generative AI capability that automatically localizes on‑screen text while maintaining the original design, layout and animation. This release addresses a long-standing gap in AI video translation: while subtitles and dubbing translate what viewers hear, most tools still fail to translate the text viewers see within the video itself.
Vozo Visual Translate localizes on-screen text in videos.
In many videos—such as training materials, product demos, and explainer content—key information appears directly within visuals, including slide text, labels, callouts, diagrams, and charts. When that content remains in the original language, international viewers may understand the narration but still miss critical context.
Visual Translate closes this gap by automatically:
• Working directly from the video itself—no original project files required
• Detecting and translating on-screen text within videos
• Preserving the original layout, style, and animations
• Allowing text, fonts, colors, and positions to be edited and customized
The result is a fully localized video where both narration and visuals are translated coherently, giving international audiences the same clarity as native viewers.
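The detect-translate-preserve flow described above can be sketched in a few lines. This is a hypothetical illustration, not Vozo's implementation: the `TextBox` record, the `TOY_MT` dictionary and `translate_boxes` are invented names, and real systems would use OCR and a neural MT model where this sketch uses stubs, leaving only the layout-preserving logic visible.

```python
# Toy sketch of the detect -> translate -> re-render flow: each detected
# text box carries position and style metadata, and translation replaces
# only the text so the rendered overlay matches the original design.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TextBox:
    text: str
    x: int          # position of the box in the frame
    y: int
    font: str       # style metadata to preserve
    color: str

TOY_MT = {"Step 1: Open the valve": "Paso 1: Abra la válvula"}

def translate_boxes(boxes, mt=TOY_MT):
    """Translate each detected text box, keeping position and style
    untouched; unknown strings pass through unchanged."""
    return [replace(b, text=mt.get(b.text, b.text)) for b in boxes]

detected = [TextBox("Step 1: Open the valve", 40, 60, "Arial", "#ffffff")]
localized = translate_boxes(detected)
print(localized[0].text)                 # Paso 1: Abra la válvula
print((localized[0].x, localized[0].y))  # (40, 60): layout preserved
```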
During the alpha phase, a multinational manufacturing company used Visual Translate to localize slide-based training videos for global teams and distributor networks. By translating visual content directly within the video into nine languages, rather than manually editing, the company reduced localization time by over 96%—turning a two-day process into just 30 minutes.
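The reported savings are arithmetically consistent if "two days" is read as two 8-hour workdays, which is an assumption; the article does not specify the length of the manual process per day:

```python
# Sanity-check the reported figure, assuming "two days" means two
# 8-hour workdays (an assumption; the article does not say).
manual_minutes = 2 * 8 * 60      # 960 minutes
automated_minutes = 30
reduction = 1 - automated_minutes / manual_minutes
print(f"{reduction:.1%}")        # 96.9%, consistent with "over 96%"
```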
By automating what was once a highly manual process, Visual Translate marks a shift in AI video translation—moving beyond basic dubbing and subtitles toward truly complete, scalable localization that preserves how meaning is conveyed visually. The capability is particularly valuable for education, corporate training, and marketing, where critical information often appears in step-by-step instructions, labels, and other visual elements rather than narration alone.
“Most video translation tools focus on speech,” said Dr. CY Zhou, Founder and CEO of Vozo AI. “But in many videos, meaning is conveyed visually—through slides, diagrams, and on-screen text. Visual Translate fills that missing layer, enabling truly complete video localization and allowing ideas and knowledge to move across languages with far greater clarity and impact.”
Visual Translate is currently available in beta.
About Vozo AI
Vozo AI is an AI-powered video localization platform that enables teams and enterprises to scale video content across languages and markets. By translating both spoken audio and visual content, Vozo ensures that meaning is preserved across the entire video experience, delivering truly native viewing for global audiences. For more information, visit www.vozo.ai. https://lasvegassun.com/news/2026/mar/12/beyond-dubbing-vozo-ai-launches-visual-translate-f/ #metaglossia #metaglossia_mundus
"The European Commission’s Directorate-General for Translation (DG Translation) has invited students from the European Master’s in Translation (EMT) network to take part in a project assessing how well AI language models work in EU languages.
Students on EMT programmes — a network of university master’s courses in translation recognised by the EU — will be able to contribute to work aimed at improving the evaluation of AI models across different European languages, according to a statement published by the European Commission on Wednesday.
The project will involve examining how AI models perform and how that performance is measured, with a focus on making the tools better suited to EU languages.
Focus on evaluation of AI for EU languages
The Commission said the work brings together language professionals and AI engineers, and the project will give students insight into how linguistic skills and AI development can be combined.
It added that participants will also be able to explore potential career paths linked to language technology as part of the project." Thursday 12 March 2026 By The Brussels Times Newsroom https://www.brusselstimes.com/2018989/ai-translation-tools-under-scrutiny-in-new-eu-backed-student-project #metaglossia #metaglossia_mundus
"The poet and writer Coleman Barks died last month at the age of 88. He was well known for his translations of the works of the 13th-century Persian mystic poet Jalaluddin Rumi. Coleman Barks even appears on a Coldplay album, "A Head Full of Dreams," reading a translation of Rumi’s “The Guest House.”
Here & Now's Lisa Mullins talks to Coleman Barks's sister, Elizabeth Barks Cox, who is also a writer, about his life and work.
This segment aired on March 12, 2026." https://www.wbur.org/hereandnow/2026/03/12/coleman-barks-obituary #metaglossia #metaglossia_mundus
Not so fast: A University of Houston professor of psychology is disputing a high-profile study claiming that people who live in multilingual countries show healthier brain aging, arguing instead that those countries are simply wealthier, with the best healthcare systems and longer life expectancies.
"University of Houston professor of psychology Arturo Hernandez is disputing a high-profile study published in the journal Nature Aging claiming that people who live in multilingual countries show healthier brain aging. Though the study got lots of attention, Hernandez reports in the journal Brain and Language that the findings warrant cautious interpretation and reframing of public health implications.
“We took a closer look and argued that the study’s conclusions go further than the data can support,” said Hernandez.
According to Hernandez, the countries with high multilingualism in Europe also happen to be the wealthiest, with the best healthcare systems and the longest life expectancies, sometimes by as much as six years. When those structural differences are accounted for, the apparent language effect largely disappears.
“There is a real temptation in science to find individual behavioral solutions: learn a language, do a puzzle, take a supplement - are all suggested as solutions to problems that are fundamentally structural,” said Hernandez. “When those solutions get oversold, it can erode public trust in science and distract from the harder work of building the conditions that actually support healthy aging: Access to healthcare, good nutrition, economic stability. We wanted to make sure the public gets an accurate picture of what the evidence shows.”
In the original article, researchers examined records in 27 European countries and claimed that multilingualism protects against accelerated aging whereas monolingualism increases the risk of it.
Countries with high multilingualism, like Luxembourg (82.5 years) and the Netherlands (82.5 years), have some of the highest life expectancies in the world. Meanwhile, countries with low multilingualism, such as Bulgaria (75.8 years) and Romania (76.3 years), lag nearly six or seven years behind.
“A six-year gap in life expectancy is unlikely to be explained by language. World-class healthcare, superior early-childhood nutrition, higher occupational safety, and lower chronic stress offer a more parsimonious account—the same structural forces that produce longevity in general,” said Hernandez, who points to Japan as another example.
As a largely monolingual society, it boasts an exceptional life expectancy of 84.5 years. “Low inequality, a healthy diet, and a robust universal healthcare system account for that advantage far better than language ever could,” said Hernandez.
“As scientists, we do a disservice to the public when we promote individual behavioral hacks as substitutes for structural resources. Learning a language is a beautiful, culturally enriching endeavor. It connects us to others and expands our world. But we must be careful not to overpromise it as a clinical intervention for aging,” Hernandez said.
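Hernandez's confounding argument can be illustrated with made-up numbers (these are not the study's data): if wealth drives both multilingualism and life expectancy, a large raw gap between multilingual and monolingual countries appears even though language contributes nothing within wealth strata.

```python
# Illustrative (invented) country-level data: wealth raises both the odds
# of multilingualism and life expectancy, while language itself adds little.
countries = [
    # (wealthy, multilingual, life_expectancy)
    (True,  True,  82.5), (True,  True,  82.0), (True,  False, 82.3),
    (False, False, 76.0), (False, False, 75.8), (False, True,  76.2),
]

def mean_le(rows):
    return sum(le for *_, le in rows) / len(rows)

multi = [c for c in countries if c[1]]
mono  = [c for c in countries if not c[1]]
print(round(mean_le(multi) - mean_le(mono), 2))  # raw gap looks large

# Within wealth strata the "language effect" nearly disappears:
for wealthy in (True, False):
    m = [c for c in countries if c[0] == wealthy and c[1]]
    s = [c for c in countries if c[0] == wealthy and not c[1]]
    print(wealthy, round(mean_le(m) - mean_le(s), 2))
```

Here the raw multilingual-versus-monolingual gap is over two years, yet within each wealth group it shrinks to a fraction of that, mirroring the claim that the apparent language effect largely vanishes once structural differences are accounted for.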
Journal
Brain and Language
Article Title
Multilingualism and aging: Country-level patterns may not support individual-level causal claims
Article Publication Date
9-Mar-2026"
https://www.eurekalert.org/news-releases/1119284
#metaglossia_mundus
#metaglossia
2026 graduation ceremony - Faculty of Translation and Interpreting - UNIGE
Université de Genève
The graduation ceremony will take place on Friday 27 November 2026 at 6 p.m. in the Uni-Mail hall.
🎉 Open by invitation only, this festive event for the 2025–2026 graduates will be an occasion to celebrate their success and share highlights of their studies.
The ceremony will be live-streamed on the day.
10 March 2026
https://www.unige.ch/fti/a-la-une/ceremonie-de-remise-des-diplomes-2026
#metaglossia_mundus
#metaglossia
#métaglossie
"Published: March 11, 2026 12.16am SAST
Isabel Tello Fons, Universitat de València
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
“–Tú también te enojarías si tuvieras una peluca como la mía —prosiguió el Avispón–. Se meten con uno, y uno, que no le gusta que le tomen la ‘peluca’, pues se enfada… ¡natural! Y entonces es cuando me entra la murria, me arrebujo debajo de un árbol y me quedo tieso de frío. Y, para aliviarme, cojo un pañuelo amarillo y me lo ato alrededor de la cara… ¡Oséase, como ahora! ¡Natural!”.
This is how Ramón Buckley translated the voice of the Wasp in A través del Espejo y lo que Alicia encontró allí, the Spanish edition of Lewis Carroll's Through the Looking-Glass, and What Alice Found There. The original recreates London's cockney dialect, closely tied to the working class, which Buckley turned into a colloquial Madrid dialect, preserving the whiny, common tone of Carroll's character:
“You’d be cross too, if you’d a wig like mine,” the Wasp went on. “They jokes, at one. And they worrits one. And then I gets cross. And I gets cold. And I gets under a tree. And I gets a yellow handkerchief. And I ties up my face –as at the present”.
When we read a translated novel, we do not just follow a story: we hear voices. Voices that reveal who the characters are, where they come from and what place they occupy in their community. But what happens to those voices when they pass from one language to another? How do we translate the dialects, accents, rhythms and registers that are part of characters' deepest identity? Addressing these questions is one of the most complex and least visible challenges in literature.
Voices that matter
The way characters "speak", what we call linguistic variation, covers features such as local vocabulary, slang, expressions particular to a community, forms of an older stage of the language or distinctive ways of building sentences. These features are not ornaments; they are characterisation devices that serve important narrative and stylistic functions.
A local dialect may carry an identity-affirming charge; a rural accent may convey humour, tenderness or hierarchy; youth slang may signal closeness or group belonging; and historical speech places the reader in another era. If these voices disappear in translation, the character flattens out and the story loses part of its original fabric.
In Adventures of Huckleberry Finn, for example, Mark Twain distinguished his characters through seven different dialects, and in Oliver Twist Dickens used the argot of thieves and ruffians to render the speech of the London underworld.
No direct equivalents
One of the greatest challenges of literary translation is that dialects are not interchangeable. There is no Spanish "equivalent" of the English of the American South, nor a dialect that corresponds exactly to Liverpool's. Every linguistic variety is anchored in its territory, history and social context.
That is why a literal translation of a foreign dialect would sound strange or even comical. If we replaced an English dialect with a real Spanish one, we would turn Huckleberry into an Andalusian, Canarian or Mexican boy and manipulate his original identity. Yet if that way of speaking is ignored and rendered in the standard language, his linguistic personality is lost.
Literary translation seeks equivalent effects: the reader should perceive the same social and emotional nuance as someone reading the original, even if different devices are used to achieve it.
The more human translation
The literary translator's task is not mechanical; it is an exercise in listening and interpretation. The translator asks questions such as: what effect does this voice produce on the reader of the original? Which linguistic features will achieve that effect in the translation? How far should a variety be marked, or not?
The best solution may not be to aim at a specific dialect but to use a register slightly removed from the standard language, hinting at a social origin without culturally displacing the character. At other times a lexical feature or a grammatical structure may be enough to recreate the atmosphere.
Every decision requires judgement and responsibility. Literature represents real social groups, and treating them with respect demands an ethical eye.
As I have found in my research (forthcoming), that ethical eye is something AI, for now, does not possess. AI does not "understand" the social implications of how a character speaks. It does not know when a dialect conveys marginalisation or when it marks social hierarchy. It works by detecting statistical patterns, not human intentions.
When it is asked to translate non-standard voices, there tend to be two outcomes. Either the translated text comes out "clean", and a character who spoke with a local accent ends up speaking normatively, diluting their personality; or the AI imitates dialect markers but mixes incompatible slangs and deforms words without criteria, creating unwanted stereotypes, that is, caricatures.
So, against the reflection and meticulousness that translating linguistic variation demands, AI produces quick answers that do not yet have the sensitivity to handle ambiguity, irony or cultural allusion.
Why we need decisions
Tools like AI can be very useful in the preliminary and complementary stages of translation, because they make it possible to locate information quickly, compare real usage across large corpora and identify patterns of style. But if they tend to flatten voices, they will also flatten experiences. Used without oversight, they will cost us linguistic diversity and, with it, human diversity.
Linguistic varieties are not merely deviations from the standard: they are often minority or minoritised languages, vulnerable or at risk. Protecting them helps preserve our cultural heritage and a valuable plurality.
For voices to reach the reader without losing their identity, someone has to listen to them and recreate them. That is an essentially human task. Every time a literary translation lets us hear a different world, we are also saving a part of our cultural diversity.
A version of this article was published in Telos, the magazine of Fundación Telefónica."
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
#Metaglossia
#metaglossia_mundus
#métaglossie
"In late 2025, generative AI crossed another critical threshold. Following GPT-5.1 in November, OpenAI released GPT-5.2 on 11 December — a model designed to generate adaptive, discipline-specific academic prose with fewer stylistic traces and greater structural variation. For universities, the concern was immediate: if AI can write fluently, unpredictably, and in discipline-appropriate academic language, does detectability still hold?
Early results show that it does.
How StrikePlagiarism responds to GPT-5.2
The release of GPT-5.2 reinforced a broader challenge facing higher education: AI development now outpaces institutional policy cycles. For StrikePlagiarism, this moment required immediate empirical validation rather than theoretical assumptions.
Within days of GPT-5.2 entering academic use, StrikePlagiarism.com was tested against newly generated and paraphrased GPT-5.2 texts under realistic academic conditions. The results were unambiguous:
Over 97% detection accuracy across GPT-5.2 outputs
False results below 1%, preserving academic fairness
Consistent performance after paraphrasing and stylistic diversification
Rather than relying on surface-level markers, StrikePlagiarism.com analysed behavioural consistency across longer academic texts — identifying patterns that remain statistically improbable in authentic student work. Reports delivered probability-based, side-by-side comparisons, providing educators with interpretable evidence rather than automated verdicts.
Why GPT-5.2 remains detectable
GPT-5.2 demonstrates strong control over academic conventions and avoids obvious repetition. However, analysis across extended submissions consistently revealed:
non-random reasoning structures
unusually uniform transitions between claims
absence of natural cognitive drift
Individually, these signals are subtle. Taken together, they form a measurable behavioural profile. Detection no longer depends on awkward phrasing or stylistic errors, but on identifying improbably stable reasoning across complex texts. Fluency improves — invisibility does not.
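The idea of a "behavioural profile", in which individually weak stylometric signals are combined into a single score, can be illustrated with a short sketch. Everything here (the features, the weights, the word list) is invented for demonstration and is not StrikePlagiarism.com's actual method:

```python
# Illustrative sketch only: combines two weak stylometric signals into one
# score, mirroring the "behavioural profile" idea. Features and weights are
# invented for demonstration, not StrikePlagiarism.com's real detector.
import re
import statistics

TRANSITIONS = {"however", "moreover", "furthermore", "additionally", "consequently"}

def behavioural_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    # Signal 1: unusually uniform sentence lengths (low variance).
    uniformity = 1.0 / (1.0 + statistics.pstdev(lengths)) if len(lengths) > 1 else 0.0
    # Signal 2: density of stock transition words between claims.
    words = [w.lower().strip(",;") for w in text.split()]
    transition_density = sum(w in TRANSITIONS for w in words) / max(len(words), 1)
    # Each signal is weak alone; the weighted combination is the "profile".
    return 0.7 * uniformity + 0.3 * min(transition_density * 10, 1.0)

human = ("I ran late. The bus broke down near campus, which was annoying. "
         "Still, the essay got done somehow, after a very long night of edits and coffee.")
synthetic = ("The argument is coherent. However, the evidence is limited. "
             "Moreover, the method is standard. Furthermore, the results are clear.")
print(behavioural_score(synthetic) > behavioural_score(human))  # → True
```

On these toy inputs, the formulaic passage scores higher than the conversational one, mirroring the claim that uniform transitions and low structural variance are jointly informative even when each signal alone is inconclusive.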
Core advantages of StrikePlagiarism.com’s AI detection approach
StrikePlagiarism.com was designed to support institutions operating at scale, across disciplines and languages:
Multilingual AI-content detection at scale
AI-generated content is detected across 100+ languages, enabling consistent integrity standards in international and multilingual academic environments.
Proven accuracy against advanced generative models
Detection accuracy exceeds 97%, including paraphrased and stylistically diversified GPT-5.2 texts — demonstrating reliability under real academic conditions.
Ultra-low false-positive rates
False results remain below 1%, protecting students from incorrect attribution and ensuring that detection strength never compromises fairness.
Why AI detection is critical right now
GPT-5.2 makes one reality clear: the primary risk for universities is no longer obvious AI misuse, but large volumes of academically convincing AI-generated work entering assessment unnoticed. This is not a future concern — it is a present operational challenge.
StrikePlagiarism addresses this challenge at an institutional level. By combining high-accuracy AI behaviour analysis with transparent, probability-based reporting, StrikePlagiarism.com enables universities to respond now, not retrospectively. When academic decisions must be defensible at the moment they are made, evidence-based AI detection becomes essential infrastructure rather than an optional safeguard."
97% accuracy against GPT-5.2: inside StrikePlagiarism.com’s detection results | THE Campus Learn, Share, Connect
https://share.google/hA12nxsAaMGdPGqDX
#Metaglossia
#metaglossia_mundus
#métaglossie
"Since the start of this year’s Amar Ekushey Book Fair, readers and publishers have noticed a rise in Bengali translations of world literature classics, with the growing popularity of these works clearly reflected in readers’ enthusiastic response.
Publishers say that readers who are less comfortable with English texts, but keen to enjoy literary works from diverse cultures and languages that transcend national boundaries, are searching for and purchasing translated works.
According to them, they have published more Bengali translations of classics from other languages in response to reader demand, but translation of Bangla literary classics into foreign languages has not grown as expected.
Baatighar always brings out a good number of translations. Its publisher, Dipankar Das, said, “There is always a demand for translated literature. It will increase further. One may not read English comfortably, but has a penchant for world literature. Translation helps them get the taste of world literature.”
There are considerable complaints about the quality of many newly published translations. As translated books grow in popularity, the number of substandard translations is also rising. Responding to this complaint, Dipankar said that if a reader cannot understand a translation, questions can be raised about its quality. Baatighar, however, publishes books only after ensuring quality.
Azizul Hakim, a salesperson at Seba Prokashoni, said the sale of translated books has been going well for the last several years. “Translations and novels are our bestsellers. A big portion of our new books this time is translations. Average sales of these translated copies are getting better every day,” he said.
Big and small, almost all publishing houses are bringing out Bangla translations of novels, thrillers, detective series, biographies, theoretical works and history books by world-famous authors.
Also, many translated books are published without the permission of the original authors. As a result, the editing of these books is not done properly.
Translator Mostaq Sharif has translated “After Dark” by Japanese author Haruki Murakami. According to him, translated literature is enjoying a good period, as demand for such works is rising steadily. He said, “Good to see that translation works are getting a good response from readers. It indicates the changing taste of readers. But the substandard works of translation are deceiving the bookworms. Publishers and readers need to be careful while selecting and purchasing books.”
On the 12th day of the book fair, a discussion programme titled “Shahidullah Kaiser” was held at the main stage at 3pm, with Syed Azizul Haque Chowdhury in the chair."
https://www.daily-sun.com/metropolis/862418 #Metaglossia #metaglossia_mundus #métaglossie
"Grants and Prizes for Promoting Italian Books and Translations (New Zealand)
UPCOMING GRANTS IN MARCH 2026!
Deadline: 31-Mar-2026
The Ministry of Foreign Affairs and International Cooperation offers prizes and grants to promote Italian language and culture abroad through literary translations, scientific works, and audiovisual productions. Each prize is valued at €5,000, targeting high-quality translations, publications, dubbing, or subtitling of works created or published since January 2025. Eligible applicants include publishers, translators, production companies, and cultural institutions, with applications due by 31 March 2026.
Overview
This initiative aims to strengthen the global dissemination of Italian culture by supporting:
Translation and publication of Italian literary and scientific works into foreign languages
Production, dubbing, and subtitling of Italian short and feature films, as well as television series
Promotion of contemporary Italian literature and audiovisual content
Expansion of cultural exchange and international reach
The program ensures that both literary and audiovisual works maintain high-quality standards and reach wider international audiences.
Prize Details
Maximum number of prizes for 2026: 10
Prize value: €5,000 each
Language distribution:
Spanish: 5 prizes
Arabic: 1 prize
Chinese: 1 prize
French: 1 prize
English: 1 prize
German: 1 prize
Eligible Works
Literary and scientific works (including e-books) translated and published in a foreign language on or after 1 January 2025
Audiovisual productions (short/feature films, TV series) produced, dubbed, or subtitled on or after 1 January 2025"
https://www2.fundsforngos.org/arts-culture/grants-and-prizes-for-promoting-italian-books-and-translations-new-zealand/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Zoom announced on Tuesday, March 10, that it is bringing real-time audio translation to Zoom Meetings, allowing users to understand speakers in different languages during calls. The video communications platform also unveiled a new feature aimed at detecting synthetic audio or video in Zoom Meetings.
The new features coming to Zoom Meetings are among a handful of new AI-powered capabilities coming to Zoom’s enterprise-grade offerings, including Zoom Workplace, Zoom Phone, and Zoom CX.
The live voice translation feature will let Zoom users speak in their native language while others on the call can hear the translated speech in their preferred language in real-time. The feature is currently available in five languages, with support for more languages coming soon.
Zoom first gained widespread recognition during the COVID-19 pandemic, when remote work and virtual meetings became the norm. Since then, the company has looked to become more than a video conferencing platform by launching AI-powered productivity tools and customer support products for enterprises. It has sought to define its competitive edge in the crowded AI industry by taking a federated approach where multiple AI models, including Zoom’s own models and those from OpenAI, Anthropic, and Meta, are dynamically selected to provide cost-effective solutions.
The company’s new on-call deepfake risk detection feature arrives as AI-driven online scams continue to surge. It could play a key role in protecting users from ‘digital arrest’ scams, many of which rely on deceptive video calls to trick victims.
“The next phase of enterprise AI will be defined by the ability to move from conversation to action. Zoom’s agentic AI platform is designed to orchestrate action across systems, turning every meeting, call, and customer interaction into a trigger for workflow automation,” said Velchamy Sankarlingam, president of Product & Engineering at Zoom.
Alongside these Zoom Meeting features, the company also introduced a suite of AI-powered office apps such as AI Docs, Slides, and Sheets that can be used to generate document drafts, spreadsheets with data, or presentations based on meeting transcripts and data from other services.
Zoom further said that AI avatars, announced last year, will start becoming available to users later this month. The feature lets users create photo-realistic, AI-generated avatars of themselves to appear in online meetings on their behalf. Zoom’s AI Companion 3.0, its latest AI assistant on the web unveiled last year, will soon be accessible through a desktop app. The AI assistant is also being integrated across the Zoom Workplace app, Zoom Business Services, and Workvivo, its app for employee communication.
In addition, AI Companion 3.0 can be integrated with third-party platforms such as Slack, ServiceNow, Box, Google Drive, and OneDrive, enabling the AI assistant to synthesise enterprise data across applications and provide insights from multiple data sources.
Amid the rising popularity of AI agents, Zoom is letting users create and deploy custom as well as pre-built AI agents through no-code, natural language prompts. These custom AI agents can act on users’ behalf to automate workflows across third-party systems such as Salesforce, Slack, and ServiceNow, the company said.
For developers, Zoom announced a new suite of enterprise‑grade AI APIs which can be used to build apps that leverage the transcription, translation, summarization, deep reasoning, and image‑processing technologies powering Zoom’s own products.
In Zoom Phone, the company is rolling out agentic workflows that help enterprise clients automatically execute tasks such as drafting emails or sending out summaries. It is also adding new SMS capabilities for the 24/7 virtual receptionist to handle customer engagements via text, answer questions, collect information, support scheduling flows, and escalate to a human when needed." https://indianexpress.com/article/technology/tech-news-technology/zoom-new-live-voice-translation-deepfake-detection-video-calls-10575835/ #Metaglossia #metaglossia_mundus #métaglossie
"Onondaga County begins using AI to translate, transcribe and summarize 911 calls
County officials say the technology might limit burnout among call takers, but AI researchers are skeptical.
by Laura Robertson
March 11, 2026
The old Onondaga County court building is home to the county legislature. Credit: Mike Greenlar | Central Current
Onondaga County’s 911 center recently began using artificial intelligence technology to assist with calls. The technology allows for live transcription, location, and call summarization, and will in the future provide live translation, according to county executive spokesperson Justin Sayles.
The county will spend $350,000 this year on the technology, said Sayles, and the county will consider annual renewals of the product. The funding was approved as part of the Onondaga County Department of Emergency Communications’ 2026 budget.
The technology was developed by Prepared, a company that makes AI products.
The county’s 911 center has a significant staffing shortage, according to Sayles and Emergency Communications Commissioner Julie Corn. County officials hope AI will help prevent burnout.
“911 call-takers take call after call, day after day with the stress that there is zero room for error,” said Sayles. “Having tools that aid in their success and support them to be their best is one way we can limit burnout.”
While the county expects the technology to help with burnout, some experts are more skeptical about AI’s utility for 911 call takers. Concerns range from AI’s ability to triage emergency calls to its ability to effectively translate for non-English speaking callers.
In an October presentation to the public safety committee, Corn said that the use of the technology “falls right in line with the county executive’s initiative to have AI programs in his vision.”
The AI 911 system works like this: Non-emergency phone calls — those that come into the 911 center on ten-digit numbers — will be transferred directly to an AI bot. If “key emergency words or scenarios” are mentioned, the AI bot is trained to transfer the call back to a human, said Sayles. He added that other calls would be determined to be non-emergencies “in the same way they are now” and transferred to the bot.
Prepared will provide a live transcription of emergency 911 calls, but a human will write the messages sent to emergency responders. Call takers will still be expected to take notes on their calls. The call recordings, notes and AI-drafted transcript will all be saved separately and will be able to be compared, Sayles said. Only the notes will be able to be edited, he said.
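The triage flow described above, where non-emergency calls go to a bot that hands the call back to a human if emergency language appears, can be sketched in a few lines. This is a simplified illustration; the keyword list and matching logic are invented for demonstration and are not Prepared's actual system:

```python
# Simplified illustration of the routing rule described in the article:
# non-emergency calls stay with a bot, but "key emergency words" escalate
# the call back to a human call taker. Keywords and logic are invented.
EMERGENCY_KEYWORDS = {"fire", "gun", "bleeding", "unconscious", "overdose", "crash"}

def route_call(transcript_so_far: str) -> str:
    words = {w.lower().strip(".,!?") for w in transcript_so_far.split()}
    if words & EMERGENCY_KEYWORDS:
        return "human"  # emergency language detected: escalate immediately
    return "bot"        # continue automated non-emergency handling

print(route_call("I need to report a noise complaint"))  # → bot
print(route_call("There is a fire in my kitchen"))       # → human
```

The experts quoted below are sceptical of exactly this kind of rule: a false negative here means an emergency call never reaches a human, which is why the county keeps human call takers on all true 911 lines.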
Ben Winters, the director of AI and data privacy at the Consumer Federation of America, said he would be “very worried” about the potential for false negatives if a bot were to triage emergency calls.
Winters also said AI is not equally good at all forms of transcription. When people are rushed, crying, using headphones or on speaker, it is more likely to miss what is being said. He added that 911 callers might not feel comfortable sharing exactly what is going on, and that call-takers are trained to try to get needed information from callers.
Winters said the redundancy in record keeping was good but questioned when the AI transcription might actually be used.
“What is the record that they go with?” he asked. “What are the ones they report and act on?”
Onondaga County is also linguistically diverse. As of 2024, the most commonly spoken languages among people with low English proficiency include Ukrainian, Nepali and Burmese. Sayles said that the technology would be able to translate all these languages.
But the AI tools powering translation are not always representative of how native speakers speak, said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology who researches multilingual AI. The AI powering translation services may have fewer natively-created digitized examples of some languages to train on, she said, which could mean that translations might feature overly complicated or outdated words that people do not use regularly and might not understand.
Bhatia gave an example: if AI translates the English word “vaccine” to an equivalent of the older “inoculation,” the listener might not understand it. She added that translation tools sometimes even make up new words when they don’t know the correct one.
Translation tools should be developed and evaluated with local language speakers and community-based organizations, Bhatia said.
“AI-based translation tools may come in handy when we need legible translations in a pinch but we shouldn’t confuse them as capable of the fluent, nuanced, and accurate translations people need when they are seeking emergency and life-saving services,” said Bhatia.
The county currently translates calls using Voiance. The program uses live interpreters. The county will still contract with Voiance, said Sayles. In the future, a bot will likely translate live, but in the meantime, the county will work to ensure the change in translation methods doesn’t leave service gaps.
Prepared is used in other 911 departments across the country, including in Baltimore. Sayles said Onondaga County had “solid working relationships” with other counties using Prepared. So far, those counties have raised no concerns, he said.
Prepared was recently purchased by Axon Enterprise, a company that develops technology for the military and police.
In a press release shortly after the acquisition, Axon promoted Prepared as a means of “owning the first 120 seconds” of an emergency call. Axon believes the technology could help supervisors see “risk patterns and coaching opportunities” in their callers, the press release said.
One of Axon’s other AI products, the controversial Draft One generative AI police report writer, has been accused of being “designed to defy transparency” by the nonprofit Electronic Frontier Foundation.
Sayles did not directly say whether the county would integrate other Axon AI programs, like Draft One, but said the county is “constantly evaluating opportunities to integrate AI” into its operations.
“The AI technology continues to improve,” said Sayles. “At the end of the day, the call-taker is still fully trained in listening to calls and capturing and translating information for dispatch.”"
https://centralcurrent.org/onondaga-county-begins-using-ai-to-translate-transcribe-and-summarize-911-calls/
#Metaglossia
#metaglossia_mundus
#métaglossie
"...Although Africa is home to more than 2,000 languages, the vast majority of the machine-learning models powering today's AI systems are trained primarily on English, Mandarin and a handful of other globally dominant languages. For millions of people on the continent, this means the next generation of digital tools risks remaining inaccessible.
Nigeria is now taking a major step to change that.
The National Information Technology Development Agency (NITDA) has partnered with NKENNEAi, an artificial-intelligence platform for African languages, to accelerate the development of infrastructure designed specifically for African languages.
The collaboration aims to develop scalable translation and language technologies capable of supporting government services, health systems, financial platforms and digital applications for Nigeria's entire multilingual population.
With more than 500 languages spoken in Nigeria, language remains one of the main barriers to digital inclusion and access to technology. The NKENNEAi–NITDA partnership aims to close that gap by developing AI systems trained specifically on African languages and their tonal structures.
From language learning to African AI infrastructure
NKENNEAi grew out of NKENNE, one of the fastest-growing African language-learning platforms.
Founded to preserve and teach African languages, NKENNE has become a global platform with more than 400,000 users learning languages such as Igbo, Yoruba, Swahili, Hausa, Twi, Somali and Nigerian Pidgin.
As the platform grew, its expanding corpus of text and speech data in African languages laid the groundwork for something much larger: building artificial-intelligence systems capable of understanding African languages at scale.
That work led to the creation of NKENNEAi, a multilingual AI platform focused on building the infrastructure needed for African-language artificial intelligence.
Michael Odokara-Okigbo, CEO of NKENNEAi, said NKENNE's growth revealed a far larger opportunity: building the technological foundations of AI for African languages. "NKENNE began as a cultural mission to preserve and teach African languages," Odokara-Okigbo explained. "As our community grew to hundreds of thousands of learners, we realised that the data and linguistic knowledge we were building could power something on a completely different scale. NKENNEAi's goal is to build the infrastructure that will allow African languages to exist, develop and thrive within artificial-intelligence systems."
A different approach to training AI on African languages
Most global AI models struggle with African languages because they lack the training data and linguistic frameworks needed to understand tone, dialectal variation and contextual meaning.
NKENNEAi is taking a different approach.
The company is building specialised data-processing pipelines, linguistic annotation systems and speech datasets designed specifically for African languages, enabling machine-learning models that accurately capture tonal meaning and dialectal nuance.
The methodology includes large-scale bilingual sentence datasets for machine translation, annotated speech datasets for speech-to-text systems, tone-aware linguistic labelling that preserves meaning across dialects, and community-led linguistic validation with native speakers.
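A tone-aware, community-validated training record of the kind just described could look something like the sketch below. The schema, field names and example are invented for illustration; they are not NKENNEAi's actual data format:

```python
# Illustrative sketch of a tone-aware bilingual training record. The schema
# and the Yoruba example are invented for demonstration purposes; this is
# not NKENNEAi's real pipeline or data format.
record = {
    "source_lang": "yo",      # Yoruba (ISO 639-1 code)
    "target_lang": "en",
    "source_text": "ọkọ",     # in Yoruba, meaning shifts with tone
    "tone_pattern": "mid-mid",  # tone label kept alongside the text
    "target_text": "husband",
    "dialect": "standard",
    "validated_by_native_speaker": True,  # community-led validation step
}

def is_trainable(rec: dict) -> bool:
    """Admit a record to the training pipeline only if it carries both the
    translation pair and its tone annotation, and has passed validation."""
    required = {"source_text", "target_text", "tone_pattern"}
    return required <= rec.keys() and rec.get("validated_by_native_speaker", False)

print(is_trainable(record))  # → True
```

The point of keeping the tone label next to the text is the one the article makes: without it, a model cannot distinguish words that are spelled alike but differ in meaning by tone alone.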
By combining linguistic expertise with machine-learning infrastructure, NKENNEAi is developing tone-sensitive AI models capable of understanding African languages with far greater accuracy than traditional translation systems.
The platform supports technologies such as text-to-text AI machine translation, speech-to-text transcription, text-to-speech synthesis and multilingual AI APIs for developers and businesses.
These tools let startups, governments and companies integrate African-language support directly into their digital platforms.
The system currently focuses on languages such as Yoruba, Igbo, Hausa, Swahili and Nigerian Pidgin, with plans to extend coverage to other African languages.
Backed by global research support
NKENNEAi's development has also benefited from international research funding, including several grants from the US National Science Foundation (NSF).
Under the NSF's Small Business Innovation Research (SBIR) programme, funding was awarded to ESM Global Productions, the company behind NKENNEAi, to advance the development of a multilingual AI translation platform for African languages.
In 2024, the company received a $1 million NSF Phase II grant to scale its African-language translation API and continue developing speech and language models designed specifically for tonal languages.
This work supports the development of multilingual translation models for African languages, speech-to-text systems trained on African speech datasets, text-to-speech voice models and scalable APIs enabling African-language integration across digital platforms.
Together, these efforts are contributing to one of the largest structured datasets and AI training pipelines focused specifically on African languages.
Aligned with Nigeria's national AI ambitions
The partnership with NITDA fits within Nigeria's broader digital strategy, led by the Federal Ministry of Communications, Innovation and Digital Economy under Dr Bosun Tijani, Nigeria's Minister of Communications, Innovation and Digital Economy.
Before joining government, Tijani co-founded Co-creation Hub (CcHub), one of Africa's most influential technology innovation hubs. His ministry has driven initiatives to position Nigeria as a global hub for artificial intelligence and digital innovation.
Programmes such as the 3 Million Technical Talent initiative aim to train millions of Nigerians in digital and AI-related skills while strengthening the country's technology ecosystem.
Through its collaboration with NKENNEAi, NITDA is exploring how locally developed AI infrastructure can help serve Nigeria's multilingual population while strengthening the country's national AI capabilities.
Building the workforce for African-language AI
Beyond modelling, the partnership also aims to develop the workforce needed to sustain African-language AI systems.
Planned initiatives include training AI data annotators, natural-language-processing engineers and public-sector technical teams to support the development of language datasets and system deployment.
These programmes aim to ensure that African-language AI is not only built for the continent, but built by the people who understand its languages and cultures best.
Why this matters
Africa's digital economy is expanding rapidly, but language remains one of the biggest barriers to digital access.
Millions of Africans interact more easily in their indigenous languages than in English, yet most digital platforms are still designed primarily for English-speaking users.
Without linguistic accessibility, services such as healthcare communication, financial tools and government platforms remain hard to reach for much of the population.
AI-based language infrastructure could change that by allowing platforms to communicate with users in the languages they speak every day.
Competing in the global language-AI race
Global technology companies are beginning to recognise the importance of African languages.
Companies such as Google have recently extended their AI and search support to languages like Yoruba and Hausa, reflecting growing interest in African language technologies.
But while global companies are starting to add African languages to their systems, NKENNEAi is focused entirely on building AI infrastructure designed specifically for Africa's linguistic complexity.
"African languages are deeply tonal, contextual and culturally rich," said Odokara-Okigbo. "Building AI that can truly understand them requires infrastructure designed specifically for these languages. Our mission is to ensure that Africa does not merely consume artificial intelligence, but builds the foundational systems behind it."
Building Africa's linguistic future
The NKENNEAi–NITDA partnership marks an important step towards ensuring that African languages are fully represented in the global AI ecosystem.
The initiative will roll out in phases, including pilot integrations with government agencies, expansion to additional languages, workforce training programmes and the eventual development of broader national language-AI infrastructure.
By combining government backing with private-sector innovation, the initiative aims to position Nigeria as a global leader in artificial-intelligence infrastructure for African languages.
For NKENNEAi, the mission goes even further: building the technological foundations that keep African languages vibrant, accessible and usable in the age of artificial intelligence." https://techcabal.com/fr/2026/03/09/Nitda-s%27associe-%C3%A0-Nkennea/ #Metaglossia #metaglossia_mundus #métaglossie
|
"
By Jinju Hong
2026-03-16 13:05:00
GDELT unveils AI experiments translating multilingual news, extracting leadership changes and turning a 3,100-page U.S. defence bill into an infographic.
The GDELT Project, which collects and analyses global news and social data in real time, is releasing various experiments that use artificial intelligence to analyse large volumes of news and policy documents.
The online outlet Gigazine reported on March 15 (local time) that the GDELT Project is a global archive that continuously collects content published in more than 100 languages worldwide, including broadcasts, newspapers and web news, and builds it into a database. It links elements such as people, organisations, places, events and news sources into a single network, providing data on events around the world, their background and trends in public opinion.
The project was founded by data scientist Kalev Leetaru and political scientist Philip Schrodt, and it collects news and social media data from 1979 to the present. The collected data serve as a basis for analysing global political, economic and social trends by quantitatively coding social events and reactions to them.
GDELT in particular releases large datasets so researchers and journalists can use them for analysis. The data consist of three streams: event data that classify physical activity worldwide into more than 300 categories; relationship data that record people, organisations, places, topics and emotions; and data that analyse the visual story of news images. The data are updated about every 15 minutes.
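To make the event stream concrete, here is a minimal, hypothetical sketch of reading one GDELT-style event record. The real GDELT 2.0 event files are tab-delimited with roughly 60 columns, and the "more than 300 categories" come from the CAMEO event taxonomy; the four-field layout and sample row below are invented for illustration only and do not match the actual column order.

```python
from dataclasses import dataclass

# Simplified, illustrative record: real GDELT event rows carry many more
# columns (actors, geography, tone, source URLs) in a different order.
@dataclass
class Event:
    global_event_id: str
    day: str          # YYYYMMDD
    actor1_name: str
    event_code: str   # CAMEO taxonomy code (the "300+ categories")

def parse_event(line: str) -> Event:
    """Parse one tab-separated event line into an Event record."""
    fields = line.rstrip("\n").split("\t")
    return Event(
        global_event_id=fields[0],
        day=fields[1],
        actor1_name=fields[2],
        event_code=fields[3],
    )

sample = "123456789\t20260315\tGOVERNMENT\t043"   # invented example row
event = parse_event(sample)
print(event.event_code)  # a code from the CAMEO "consult" category
```

In the real pipeline, files like this are published to GDELT's public data server on the 15-minute cycle described above.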
GDELT also operates a translingual platform that processes global news written in 65 languages through real-time translation using its own translation system.
Recently, it has also been actively conducting analysis experiments using AI. The GDELT Project disclosed an experiment that uses a Gemini-based model to automatically extract announcements of leadership changes at governments and companies from global news and organise them into a knowledge graph. In the process, the AI went beyond cataloguing personnel information, generating reports that inferred the political and economic background of each change.
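GDELT's actual Gemini pipeline and output schema are not described in the article, but the general pattern — turning a model's structured output into graph edges — can be sketched. The JSON schema, field names and sample response below are entirely invented for illustration.

```python
import json

# Hypothetical structured response from a language model asked to extract
# leadership changes; the schema is invented, not GDELT's real format.
model_output = json.dumps({
    "changes": [
        {"person": "Jane Doe", "role": "CEO",
         "organisation": "ExampleCorp",
         "action": "appointed", "date": "2026-03-01"}
    ]
})

def to_triples(raw: str):
    """Convert the model's JSON into (subject, predicate, object) triples
    suitable for loading into a knowledge graph."""
    triples = []
    for change in json.loads(raw)["changes"]:
        triples.append((change["person"],
                        change["action"] + "_as",
                        change["role"]))
        triples.append((change["person"],
                        "affiliated_with",
                        change["organisation"]))
    return triples

for triple in to_triples(model_output):
    print(triple)
```

A production system would add validation of the model's output and entity resolution (merging mentions of the same person or organisation) before inserting triples into the graph.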
In another experiment, the roughly 3,100-page U.S. National Defense Authorization Act was fed into an AI model and the entire bill was converted into a single infographic. Along the way, various analyses were also performed, including topic analysis of the bill, organisation of related bills and generation of anticipated questions.
GDELT also disclosed a large-scale translation experiment. According to a February 2026 announcement, it used AI to translate about 3 million TV news broadcasts accumulated over 25 years. Translating a total of 62 billion characters of broadcast data, amounting to about 6 billion seconds, cost about $74,634; using earlier methods, the same work is estimated to have required millions of dollars.
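A quick back-of-the-envelope check, using only the figures reported above, shows just how low the per-unit cost was:

```python
# Figures as reported: ~62 billion characters, ~6 billion seconds, $74,634.
chars = 62_000_000_000
seconds = 6_000_000_000
total_cost = 74_634  # USD

per_million_chars = total_cost / (chars / 1_000_000)
per_broadcast_hour = total_cost / (seconds / 3600)

print(f"${per_million_chars:.2f} per million characters")  # ≈ $1.20
print(f"${per_broadcast_hour:.4f} per broadcast hour")     # ≈ $0.0448
```

At roughly $1.20 per million characters, or under five cents per hour of broadcast, the economics are what make a 25-year, multi-country archive tractable at all.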
Such projects are seen as demonstrations that AI can comprehensively analyse vast amounts of news and policy documents. Experts say such data-based analysis could become a new tool for understanding global political and economic trends."
https://www.digitaltoday.co.kr/en/view/39425/ai-translates-25-years-of-news-in-100-countries-summarises-3100-page-bill-in-big-data-test
#metaglossia #metaglossia_mundus