Scooped by
Charles Tiayon
August 16, 2024 12:03 AM
Interpreters say they will refuse all work from VITS LanguageLoop unless the previous rates of pay are restored. Hundreds of interpreters from Victoria's culturally and linguistically diverse (CALD) communities are set to take risky unprotected industrial action on Monday, after raising concerns about what they are calling a sudden "pay cut". They plan to "strike" by refusing to take new jobs, after the Victorian Interpreting and Translating Service (VITS) LanguageLoop quietly changed its payment structure for more than 3,000 contractors.
Several interpreters said the move will worsen their anxiety amid the cost-of-living crisis, with the "pay cut" resulting in an annual income reduction of approximately $8,000 to $30,000 per person.
VITS, a Victorian government business enterprise, was established in 1975 to break language barriers for the growing CALD communities through its translation and interpreting platform LanguageLoop. The platform connects thousands of interpreters, who are fluent in more than 190 languages, with government and business clients. It pays them commissions as contractors.
Many interpreters, who relied on finding jobs through VITS LanguageLoop for their livelihoods, said the new payment structure lacks "fairness, transparency, and communication". The changes, which include removing the one-off payments for full-day and half-day jobs, and splitting tasks into government and non-government streams, could lead to a significant reduction in the payments they receive.
"These changes are severely impacting our daily income and the entire industry," said Lianzhen Kennedy, a 54-year-old Mandarin interpreter. Ms Kennedy has spent 13 years in the industry, and she said that she is set to lose about $8,000 to $10,000 annually. She said the pay cuts have taken a toll on her ability to maintain her current standard of living, and she will join the "strike". She said she was concerned that VITS LanguageLoop made the fee structure change to "win more clients". "We are professionals. We need to be heard and our work needs to be respected." The ABC has contacted VITS LanguageLoop for a response.
Cuts came 'without notice'
Another change involved reducing the time allotted for various fixed translation tasks, which lowered compensation. Ms Kennedy said VITS LanguageLoop also removed the option for interpreters to book either four-hour or eight-hour slot durations due to client requests. She explained that for an eight-hour court booking, which costs $326, the new method of calculating the fee at $8.59 every 15 minutes amounts to $241, a difference of $85. She said that with the new method, the cost for a private law firm would be $218 instead of $326, a difference of $108.
As the cost of living continues to rise, these professionals, who are essential to communication in sectors such as healthcare, legal services, and education, are facing increased financial strain.
Laurice Erian, a 50-year-old Arabic interpreter, has been working with the agency since 2002. She expressed her dismay over the abruptness of the pay cuts, which she said have affected not just the amount she earns, but also the stability of her work. "How can I be expected to complete a 90-minute job in the city for $87, when I have to pay at least $7 per hour for off-street parking ... plus e-tag charges and petrol," Ms Erian said.
Sarina Phan, a 57-year-old Vietnamese interpreter, has been in the profession for more than two decades. She noted that agencies have access to "a large pool" of interpreters. "[Those agencies] have the power to stipulate pay and conditions," she said. As one of the higher-paid interpreters in the field, she said she was less impacted because she anticipated something like this happening. She was able to implement backup strategies, like a shift in career focus. But she acknowledged those heavily reliant on this work will face significant hardships.
A push for 'fairness'
Another change is the differentiation between government and non-government translation jobs, with the latter having a lower cost. For a 90-minute booking, there's a $15.81 difference in payment, despite the nature of the jobs being largely identical. There's also discontent about VITS revising its after-hours pay policy, now only offering lower rates for some weekend jobs.
Beyond the loss of income, interpreters say they feel like their contributions are being undervalued. Sixty-three-year-old Marina Del Greco, a Russian interpreter and translator who has worked with VITS for the last 30 years, echoed these sentiments. "The type of work we do with police, with criminals in jails, in courts, is so traumatic," Ms Greco said. "Most of us have PTSD and we have no one to turn to, because interpreters are always the meat in the sandwich." She said she will lose at least $10,000 a year.
According to Australian Government statistics, 78 per cent of the translation profession is part-time, with women making up 72 per cent of the workforce. "We are shoved around like there is no tomorrow," Ms Greco said. "We are very quiet. We never yell. We always agree. But now I think even women have had enough."
'Strike' action is legally risky
Translators and Interpreters Australia, a division of Professionals Australia, is the union for the interpreting industry. It said it is not organising or endorsing the proposal to refuse all new work from VITS LanguageLoop. "While we have the authority from the ACCC to collectively represent and negotiate for contractors, any organised boycott risks a breach of competition law," the union said in a statement. The union said it maintained that the change to the VITS LanguageLoop fee structure "constitutes a blatant attempt by Court Services Victoria (CSV) to cut costs at the expense of interpreters". It said union members will "explore avenues for further escalation", should the matter not resolve "within the coming days".
James Mattson, a partner from the Bartier Perry law firm, said that while industrial action could prompt the organisation to reconsider, workers may face legal orders to stop unless they have authorisation. "How the company responds may very well depend on its ability to service their customers and any pressure from customers for services to be delivered," Mr Mattson said. "If these workers are contractors, then a lot will depend upon the terms of the arrangement with the company.
"As contractors, they will likely be more vulnerable, including to termination, if they take strike action or if they do not agree to the new arrangements."
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies like OpenAI with GPT, Google with Gemini, Meta’s LLaMa, Anthropic’s Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts, with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"AI video translation is not yet a perfect substitute for human translation, according to new research from the University of East Anglia.
The new study, "Generative AI for Video Translation: Consumer Evaluation in International Markets," published in the Journal of International Marketing, shows that AI tools can be useful when speed and clarity are priorities.
But human translators remain crucial for tone, cultural nuance and for sounding natural.
Jiseon Han, a lecturer in digital marketing at UEA's Norwich Business School, said, "As brands race to reach global consumers on platforms like TikTok, Instagram, and YouTube, a new question has emerged—can generative AI truly replace humans in video translation?
"We decided to put it to the test."
How the research happened
Researchers examined how consumers in different countries respond to marketing videos translated by generative AI tool HeyGen compared to videos translated and performed by human speakers.
In two experiments, one with Indonesian consumers and another with US and UK consumers, participants watched marketing videos delivered either by native human speakers or by AI-translated versions.
The AI-generated videos automatically converted language, voice, and even lip movements to match the target language—replicating what many global marketers are now testing in real campaigns.
The results reveal a mixed picture.
AI less natural
Jiseon Han, a lecturer in digital marketing at UEA's Norwich Business School, said, "We found that viewers consistently found AI-translated videos less natural and less native-sounding than those performed by humans.
"However, AI performed better on language comprehension when translating into English, likely reflecting the greater availability of English-language training data in AI models.
"Interestingly, these perceptual differences did not affect engagement intentions that participants were just as likely to like, share, or comment on AI-translated videos as they were on human ones."
These insights suggest that AI video translation is not yet a perfect substitute for human translation, but it already offers practical value.
"For marketers, AI can be a great choice when speed and straightforward messaging matter most, but when it comes to capturing tone, personality, and cultural context, human expertise is still irreplaceable," she added.
"Generative AI can already handle parts of video translation that once required entire production teams," said the lead author Risqo Wahid from the University of Jyväskylä, Finland.
"But consumers still notice when something feels off. The human touch still matters, especially in how a message sounds and feels," he added.
As AI models evolve, the study provides a timely benchmark for understanding how consumers perceive AI-translated content today, highlighting both the opportunities and the limits of automation in global marketing communication."
by University of East Anglia edited by Sadie Harley, reviewed by Andrew Zinin
More information: Risqo Wahid et al, EXPRESS: Generative AI for Video Translation: Consumer Evaluation in International Markets, Journal of International Marketing (2025). DOI: 10.1177/1069031x251404843 https://techxplore.com/news/2025-12-ai-video-humans-edge.html #Metaglossia #metaglossia_mundus
"On February 12, the annual Albertine Translation Prize Ceremony returns, honoring translators and American publishers of English translations of contemporary French works.
This year, Villa Albertine is partnering with the Colloquy Series at World Poetry Books to celebrate the art of translation in a new and exciting way.
Bringing together translators and readers, the evening will feature translation jousts: two translators and a moderator will engage in a lively discussion of their different renderings of the same French texts, one a work of fiction, the other a poem. Through the translation jousts, the creativity, nuance, and choices behind each translation will be highlighted, and the often-invisible labor behind the books we read will be on display.
Representing emerging trends across a variety of genres, including fiction, non-fiction, and comics, the Albertine Translation Fund helps cover publishing and translation costs for U.S. publishers of works translated from French to English.
We will announce the winner of the 2025 Albertine Translation Prize at the end of the evening.
We look forward to celebrating the 2025 laureate with you!
The lists of titles supported by the program this year are available here (session 1) and here (session 2) (TBA).
To attend the ceremony, please RSVP HERE.
The Albertine Translation Prize is made possible through the generous support of The Florence Gould Foundation and Albertine Foundation."
https://villa-albertine.org/va/events/albertine-translation-prize-ceremony-february-2026/
#Metaglossia
#metaglossia_mundus
"(WXYZ) — A Macomb Township man has been identified as the interpreter who was killed in Syria over the weekend while working with the U.S. Army.
According to an online obituary, Ayad Mansoor Sakat, 54, was killed when soldiers were ambushed in Syria by the Islamic State group on Dec. 13. Sgt. Edgar Brian Torres-Tovar, 25, of Des Moines, and Sgt. William Nathaniel Howard, 29, of Marshalltown, who were part of the Iowa National Guard, were also killed.
The news came as a huge shock to our family, and we are still struggling to believe it. My father worked as an interpreter for the U.S. Army during the Iraq invasion from 2003–2007, which is why my family was granted Special Immigrant Visas to come to the United States. Service to this country has been in his blood for a very long time, and all four of us—his children—have the utmost respect for everything he did alongside American soldiers.
Everything we have accomplished is a testament to his sacrifice and perseverance. Because of him, we became a general surgeon, a police officer, a medical student, and an IT coordinator.
We will honor his legacy by continuing to live the kind of life he worked so hard to make possible.
He was a devoted father and husband, a courageous interpreter, and a man who believed deeply in the mission he served.
Sakat, affectionately known as Eddie, was born in Bakhdida, Iraq, according to the obituary, and previously worked as an interpreter alongside U.S. soldiers from 2003 to 2007.
"Ayad died in Syria while supporting U.S. forces, serving with the same courage and devotion that defined his life. His fellow soldiers affectionately called him Eddie, a nickname that reflected the trust, warmth, and friendship he inspired," his obituary reads.
Sakat is being remembered as a loving husband and father of four.
President Donald Trump was on hand on Wednesday and witnessed the dignified transfer of the two soldiers and Sakat." By: WXYZ Web Team Posted 8:26 PM, Dec 17, 2025 and last updated 5:51 AM, Dec 18, 2025 https://www.wxyz.com/news/macomb-man-identified-as-interpreter-killed-with-u-s-service-members-in-syria #Metaglossia #metaglossia_mundus
"CLACS Faculty Affiliate Professor Amy Olen (UWM Translation & Interpreting Studies) has recently published a new literary translation: the short story collection Marayrasu, released in December 2025 with Northwestern University Press.
The new book is the first English-language collection of short stories by award-winning Peruvian author Edgardo Rivera Martínez, and includes a foreword by CLACS Faculty Affiliate Professor César Ferreira (UWM Spanish).
The publisher’s website notes that “Amy Olen’s translation smoothly captures Rivera Martínez’s impressive stories, offering a unique lens into the region at the heart of this canonical author’s inimitable work.”
Read more about this new release on the publisher’s website." https://uwm.edu/clacs/professor-publishes-new-translation-of-peruvian-authors-short-stories/ #Metaglossia #metaglossia_mundus
"Japan may require language proficiency for permanent residency as new visa rules take shape
Japan is exploring the possibility of requiring Japanese language proficiency for foreign nationals seeking permanent residency, sources confirmed Thursday. This potential change is part of the government's preparations for an anticipated increase in applicants, according to the Japan Times. The new language requirement is expected to be incorporated into proposals for updated residency criteria by April 2027, when an amendment to Japan's Immigration Control and Refugee Recognition Act is set to take effect.
The shift is not limited to language proficiency. The revised law would also introduce provisions for revoking permanent residency if individuals intentionally neglect essential public responsibilities, such as paying taxes.
According to the Immigration Services Agency, the number of foreign residents in Japan reached a record high of 3.96 million by the end of June, with permanent residents comprising the largest group, at approximately 930,000, or 23.6% of the total foreign resident population.
Currently, applicants for permanent residency in Japan must have lived in the country for at least 10 years and demonstrate the ability to financially support themselves, among other requirements. With a projected increase in the number of permanent residents, the government is considering additional criteria, including proficiency in the Japanese language and participation in programs that promote community norms. There is also talk of raising the minimum income threshold for applicants.
The government is also reviewing stricter regulations for international students' part-time work. Under the current system, students are allowed to work up to 28 hours per week, provided they have received permission from immigration authorities. There are discussions to shift toward a system that evaluates factors like academic performance before granting work permissions, rather than allowing unrestricted work upon arrival.
Concerns have also been raised about foreign nationals holding engineer or humanities specialist visas being employed in unskilled labour, a violation of their residency status. The government is considering implementing stricter monitoring of staffing agencies and employers to ensure compliance with visa rules." Story by Business Today Desk 19 December 2025 https://share.google/Z3byXXdOXvVVb5UM8 #Metaglossia #metaglossia_mundus
Sheikh Hamad International Award for Translation and International Understanding
"The Pulaar language is now officially among the languages selected for the 2026 edition of the Sheikh Hamad International Award for Translation and International Understanding, in the prestigious "Achievement" category, alongside languages of global reach such as Chinese, English, Italian, Azerbaijani and Arabic.
This selection represents unprecedented recognition for Pulaar, an African language spoken by millions of speakers in West Africa and beyond.
It highlights the sustained efforts of recent years to promote, standardize and translate this language, long marginalized in major international translation initiatives.
Created in 2015 and placed under the high patronage of His Highness the Emir of the State of Qatar, Sheikh Tamim bin Hamad Al Thani, the Sheikh Hamad Award is today considered one of the most important international distinctions dedicated to translation and intercultural dialogue.
It aims to encourage the translation of major works between languages, to strengthen understanding between peoples, and to promote linguistic diversity as a pillar of living together.
The "Achievement" category, in which Pulaar has been selected, rewards structuring professional contributions, collective initiatives and sustained efforts with a significant impact on the development of translation in a given language...
Pulaar's nomination opens up new academic, cultural and institutional prospects. It provides an international framework conducive to the translation of reference works – literary, scientific, religious or philosophical – into and from Pulaar, thereby strengthening its presence in the global circuits of knowledge.
In the main language categories, the total endowment can reach US$200,000, while the excellence prizes, notably for less widely used languages, can go up to US$100,000. Applications for the 2026 session will be open from 1 January to 31 March 2026, giving translators, researchers and institutions a concrete opportunity to put their work forward...
Beyond the case of Pulaar, this selection sends a strong signal in favour of global linguistic diversity and of the recognition of African languages by major international cultural institutions. It establishes Pulaar not only as a language of communication, but also as a language of knowledge, creation and universal transmission.
...Pulaar now asserts itself as a full participant in global intercultural dialogue, illustrating the capacity of African languages to contribute fully to building a more inclusive and balanced international cultural space."
https://www.cridem.org/C_Info
#Metaglossia
#metaglossia_mundus
"What is multilingual SEO? Multilingual SEO optimizes your website for discovery via search in other languages. It accounts for “linguistic, cultural, and technical differences to ensure the website performs well globally and the message resonates with users wherever they’re located,” says Antonio Santarsiero, senior SEO specialist at Shopify.
“Practically speaking, multilingual SEO involves considering technical aspects such as URL structure, as well as implementing hreflang tags and creating a market-specific site map,” Antonio says.
Multilingual SEO can be part of a broader multilingual marketing strategy or an effort that is layered on top of content localization work. Whereas localization involves translating and making your content culturally relevant for a specific market, multilingual SEO ensures that this localized content is search engine optimized in the corresponding language.
Multilingual SEO vs. international SEO
While they may sound similar, there are key differences between multilingual SEO and international SEO. Multilingual SEO targets users’ languages (such as Spanish, German, or French), while international SEO targets the right country or region, but the language might stay the same (for example, English pages for Canada versus the United Kingdom).
In some cases, you may need a combined approach. For example, Represent, a British fashion label, uses both language and regional targeting. As the company expanded to international markets, it launched separate sites for the United States and Europe. The European site features a regional subdomain: eu.representclo.com, but the content is in English. The European site includes euro pricing and local shipping options.
Then, they took it a step further, launching a German version (eu.representclo.com/de/) with a /de/ subfolder, and translated calls to action and payment options. This strategy doubled organic traffic and boosted conversions 30%.
How to implement multilingual SEO
1. Identify your markets
2. Do keyword research for each country you expand to
3. Choose a URL structure
4. Translate your pages
5. Implement hreflang tags
6. Build local backlinks
Before implementing multilingual SEO, you need a localization strategy. Once that foundation is in place, SEO ensures your pages are discoverable in those markets. Here’s Antonio’s step-by-step framework:
1. Identify your markets
Your existing search traffic can help guide your decisions about which languages to prioritize in your localization and multilingual SEO work. Check Shopify Analytics or Google Search Console to see where your organic visitors are coming from.
For example, consistent traffic from countries such as Mexico or France signals interest. Even though your pages may not appear for local language queries, global users can land on English pages if they search in English or if information in their native language is limited. This would be a strong indicator that there is demand for your content in these regions.
You can also look at competitor successes. If similar brands in your niche have successfully launched in certain international markets, it’s worth investigating why. Use tools like Ahrefs or Semrush to see which language version of your competitors’ site ranks the best locally and which local-language keywords they’re winning.
Once your localization priorities are set, you can focus on SEO optimization in each target language.
2. Do keyword research for each country you expand to
Once you know which markets you’re geographically targeting, learn a little more about the search behavior of the people who live there.
“Understanding the nuances of each market and tailoring your approach to local search behaviors is definitely the starting point,” Antonio says. “In English, users might search for ‘luxury watches’ while in French, the equivalent ‘montres de luxe’ might have different search patterns. Understanding these differences is crucial for targeting the right audience.”
Another way to conduct local keyword research is to use Google Translate, layered with Google autocomplete. For example, translating “luxury watches” into Spanish yields “relojes de lujo.” Next, Antonio advises using a tool like valentin.app to access the Spanish Google search results page (SERP). “Once there, start typing ‘relojes de lujo’ into the search bar. You’ll notice several autocomplete suggestions appearing, which can provide valuable insights into popular local keywords.”
Antonio advises reviewing Google Keyword Planner, Ahrefs, and Semrush to find high-volume, low-competition keywords specific to each market. Filter by language and region to spot missed opportunities and keyword gaps.
Keyword volume can differ between countries, even in the same language, because of nuances in regional terminology and differences in search behavior. For example, the term “paella” has a varying search volume in Spanish-speaking countries:
Argentina: 9,400 monthly searches
Mexico: 19,000 monthly searches
Spain: 43,000 monthly searches
US: 127,000 monthly searches
Finally, consult native speakers to validate your terms for local nuance. “A native speaker can tell you which term fits your brand best,” Antonio says. For example, in French, both “POS” and “PDV” mean “point of sale,” but one may sound more natural depending on the audience or industry.
3. Choose a URL structure
Your URL structure in a multilingual SEO strategy informs search engines which version of your site to display to each audience. There are three common options, illustrated in the short sketch below:
Country code domains (ccTLDs) like example.fr or example.de. These provide the strongest country signal, but they can be costly to maintain. Since Google treats each of these as separate sites, you’ll have to build backlinks and authority from scratch for each one.
Subdomains like fr.example.com. While you can manage these on the same server and share a CMS (like Shopify), each subdomain still needs its own SEO work. This means you’ll need to optimize each subdomain’s metadata, content, internal links, backlinks, and hreflang setup. Subdomains will inherit some authority from the main domain, unlike country code domains, which are entirely separate from the main domain.
Subdirectories like example.com/fr/. This URL structure is the most efficient for most Shopify stores. All versions share the same SEO strength and are easier to track in analytics, since you can track all regions in the same Google Search Console property and from a single CMS like Shopify. However, this option has the weakest country signals, so you’ll need to rely on localized content and hreflang tags to clarify targeting.
“When it comes to deciding which approach to take to structure the website for different language versions, there is no one-size-fits-all solution. Every solution has its pros and cons. Make sure it aligns with your business goals, team structure, and technical capabilities,” Antonio says.
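To make the trade-offs concrete, here is a minimal illustrative sketch (not part of the original article; the example.com domain, the product path, and the helper function are all hypothetical) showing how the same French page could be addressed under each of the three structures:

```python
# Hypothetical helper: build the URL for a given locale under each of the
# three structures discussed above. Domain names and paths are made-up examples.

def localized_url(structure: str, locale: str, path: str = "products/watch") -> str:
    lang, _, country = locale.partition("-")   # "fr-FR" -> ("fr", "-", "FR")
    if structure == "cctld":         # e.g. example.fr -- strongest country signal
        return f"https://example.{country.lower()}/{path}"
    if structure == "subdomain":     # e.g. fr.example.com -- shared CMS, separate SEO work
        return f"https://{lang}.example.com/{path}"
    if structure == "subdirectory":  # e.g. example.com/fr/ -- shares domain authority
        return f"https://example.com/{lang}/{path}"
    raise ValueError(f"unknown structure: {structure}")

for s in ("cctld", "subdomain", "subdirectory"):
    print(f"{s:13} -> {localized_url(s, 'fr-FR')}")
```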
4. Translate your pages Translation is a key step to building credibility within regions. Multilingual SEO ensures those translated pages can rank and be discovered. Prioritize optimizing high-impact pages that drive visibility and conversions, such as the homepage, product pages, checkout flow, and support content. Use GA4, Google Search Console, and internal data to identify these.
When translating pages, the best approach is to have a native speaker translate the page. At this point, you can also layer in local keywords. A native speaker would be able to confirm that keywords have been incorporated naturally. If you don’t have access to one, you can use Google Translate or ChatGPT to help by plugging in the English version verbatim. However, these often miss tone, idioms, and context.
“You can use AI to draft a first version, but you should never publish what a machine spits out. You need to capture the nuances of the language and make sure your message feels authentic,” Antonio says. If you must use AI, be sure to have a native speaker review and edit the content before publishing. If you don’t have access to one, you can use a professional translation service like RushTranslate or a freelance translator.
Don’t forget to translate your meta tags, including titles and meta descriptions, as well as your URL slugs. Once you have translated your pages, update your site map to include all translated pages so Google can find and index them quickly.
5. Implement hreflang tags
Hreflang tags tell Google which language or region a page targets, so that the right version appears for the right audience, reducing duplicate content issues. If the pages are in the same language, adding even small regional cues like currency and spelling can help Google understand their differences.
Antonio advises adding as many tags as needed for every language and region your site supports, so Google fully understands your site structure. For example, you might have a French version for France, Canada, or Belgium:
France: hreflang="fr-FR"
Canada: hreflang="fr-CA"
Belgium: hreflang="fr-BE"
This helps Google serve /fr/ pages to French users, /ca/ pages to Canadian users, and /be/ pages to Belgian users, even if all pages are in French.
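As a rough illustration of what this can look like in practice (a minimal sketch, not Shopify's or Antonio's actual implementation; the example.com domain and the URL paths are assumptions), the snippet below generates the alternate-link tags for the French regional variants mentioned above:

```python
# Illustrative sketch: emit hreflang <link> tags for the French regional
# variants discussed above, plus an x-default fallback.
# The domain and URL paths are made-up examples.

LOCALES = {
    "fr-FR": "https://example.com/fr/",
    "fr-CA": "https://example.com/ca/",
    "fr-BE": "https://example.com/be/",
    "x-default": "https://example.com/",  # fallback for users matching no listed locale
}

def hreflang_tags(locales: dict) -> str:
    """Build the <link rel="alternate"> block to place in a page's <head>."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{code}" href="{url}" />'
        for code, url in locales.items()
    )

print(hreflang_tags(LOCALES))
```

In a real deployment, each language version of a page would typically carry the same full set of alternate tags, including one pointing to itself, so that Google sees the whole cluster of equivalents from every entry point.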
In addition to hreflang tags, a language or country switcher lets visitors (and Google) easily move between site versions, like switching from English to French in one click. This improves the user experience and helps search engines discover all language versions easily.
Country switcher on Gymshark.com
6. Build local backlinks
After launching localized pages, help search engines trust them by adapting your link-building strategy for your multilingual sites. For example, French pages need links from French sites to rank well in France.
“Treat your local site like it’s a separate entity, even if the domain is the same,” Antonio says. “The goal is to have Google understand and distinguish between the English version and other versions, so the link-building strategy should focus only on those URLs in the subfolder.”
Localize your link-building efforts by connecting with regional media, influencers, blogs, and directories in your target language to increase your visibility. For example, you might try to get your German store featured in local fashion magazines or affiliate lists.
To support this effort, your localized site needs to feature content that is worthy of local attention. “Write content that’s so interesting to that specific market that other websites will want to cite it,” Antonio says. “For example, Square might have data about the top-selling items in retail in Japan. Then, other sites in Japan would want to use it as a data source.”
Each local link signals to Google that your store is relevant, trustworthy, and established in that market, not just translated for it.
Multilingual SEO FAQ
What is the difference between international and multilingual SEO?
International SEO helps your site target users in different countries (like Canada versus the UK), while multilingual SEO helps you reach users in different languages (like English versus French). Most global sites need both.
What is an example of multilingual SEO?
A brand that offers translated versions of its site, such as Represent’s German site at eu.representclo.com/de/, is practicing multilingual SEO. The content, URLs, and metadata are all optimized for German speakers.
How to get search results in multiple languages?
Create localized pages for each language, use hreflang tags to signal different language versions to Google, and build backlinks from local sites. This helps search engines show the right version to the right audience.
Azra Kassam, Dec 18, 2025
Reference: https://www.shopify.com/blog/multilingual-seo
#Metaglossia #metaglossia_mundus
"CoeFont Launches AI-Powered Interpreter to Break Language Barriers for Global Teams / Source: CoeFont (EZ Newswire) TOKYO, Japan, December 18, 2025 (EZ Newswire) -- CoeFont, opens new tab, a leader in AI-driven communication solutions, launched the CoeFont Interpreter, an innovative AI-powered tool for simultaneous interpretation that enables seamless, real-time collaboration for international teams. For any company expanding globally, the "language barrier" is more than just a hurdle, it is often a ceiling on growth. In the era of remote work, cross-border teams are common, yet true collaboration is frequently stalled by the inability to communicate nuance in real-time. While text-based translation tools have existed for years, they often fail to capture the context of live business discussions. This leaves companies relying on human interpreters, a solution that is often prohibitively expensive, logistically difficult to schedule, and prone to creating bottlenecks. CoeFont Interpreter has emerged as a solution to this deadlock, offering AI-powered simultaneous interpretation that allows remote teams to communicate naturally, cost-effectively, and without the lag of traditional translation methods.
How CoeFont Works for International Teams
Unlike standard text-to-speech tools or basic meeting captions, CoeFont focuses on the flow of conversation. It acts as a real-time bridge, listening to speech in one language and instantly delivering it in another with high accuracy. For remote international teams, this shifts the dynamic from "waiting for translation" to "having a conversation." Key advantages include:
24/7 availability: It eliminates the need to schedule human interpreters for late-night or early-morning calls across time zones.
Context awareness: Unlike basic translation bots, it handles the context of business dialogue better than competitors, reducing the "broken telephone" effect.
Cost efficiency: Operating at a fraction of the cost of human consultants, it democratizes access to high-quality interpretation for internal meetings and daily stand-ups...
The CoeFont Solution
Manhattan Associates implemented CoeFont Interpreter in late 2025. The results were immediate. The most significant change was the removal of the "bridge" role. Fortunately, the CoeFont Interpreter was able to cut out the middleman and help foster direct relationships. "Interpreters became unnecessary," Takatani stated. "We no longer wait for translations. Meeting times have been cut to a fraction of what they were."
Masahiro Sawada, Marketing Manager, highlighted the qualitative shift, stating, "We can now speak directly with clients and overseas members. We can convey the temperature and nuance of our words without a filter. It allows us to build direct relationships rather than indirect ones."
Secondly, the tool helped unlock global resources, allowing the Japanese team to instantly tap into the company's global talent pool. "We can now assign a product manager from overseas who handles multiple projects to a Japanese case without needing a dedicated translator," Takatani said. "It allows us to utilize global know-how efficiently."
In terms of consistency and cost, the AI provided a consistent quality of translation that didn't fluctuate based on human fatigue or scheduling. At roughly 5,000 Japanese yen (approximately $35) per hour, the cost was negligible compared to human interpretation, allowing the team to use it freely for internal syncs and late-night calls with its U.S. headquarters.
The Future of Cross-Border Collaboration
Manhattan Associates is now looking to expand the use of CoeFont beyond internal meetings to external marketing events. "Organizing events with foreign speakers used to be a logistical nightmare involving expensive simultaneous interpreters who sometimes quit mid-event due to technical difficulty," Sawada recalled. "With AI, we can solve that instantly."
For foreign-affiliated companies and remote teams, the lesson is clear: the technology to bypass the language barrier is no longer science fiction. It is here, and it is reshaping how global business gets done.
https://www.reuters.com/press-releases/coefont-ai-powered-interpreter-language-barriers-global-teams-2025-12-18/ #Metaglossia #metaglossia_mundus
"Alors que Christopher Nolan s’apprête à porter L’Odyssée à l’écran dans une superproduction très attendue, une question essentielle se pose, bien au-delà du casting prestigieux ou des effets visuels annoncés : comment rendre Homère intelligible sans le vider de sa substance épique ? Et surtout, quelle langue peut encore porter, près de trois millénaires plus tard, la puissance fondatrice de ce texte européen majeur.
Car L’Odyssée n’est pas seulement un récit d’aventures. C’est un poème fondateur, un chant de la mémoire, du retour, de l’identité, de la fidélité et de l’épreuve. Or, toute adaptation – et toute traduction – engage une vision du monde. Traduire Homère, ce n’est pas seulement transposer des mots : c’est choisir ce que l’on fait de l’épopée elle-même.
Le défi de l’épopée en langue moderne
Le cinéma, par nature, dispose d’armes puissantes : images, musique, rythme, spectaculaire. Mais il lui manque une chose essentielle : la densité du verbe. Chez Homère, le sublime ne tient pas seulement aux exploits d’Ulysse, mais à la cadence du récit, à la répétition, à l’élan oral, à cette musique du langage qui porte l’auditeur autant que le sens.
Tout dépendra donc du texte sur lequel Nolan s’appuiera. Et c’est là que la question des traductions devient centrale.
Trois visions de Homère, trois mondes
Depuis des décennies, le monde anglophone oscille entre deux pôles : la fidélité érudite et la modernisation radicale. La traduction de Richmond Lattimore, longtemps dominante dans les universités, privilégie la rigueur philologique. Elle colle au grec, respecte les structures, mais produit un anglais souvent rigide, solennel, presque administratif. Le texte est exact, mais la poésie peine à respirer.
À l’inverse, la version d’Emily Wilson, souvent citée comme possible source du film, assume une rupture nette : langue contemporaine, syntaxe fluide, vocabulaire psychologique. L’ouverture – « Tell me about a complicated man » – frappe par sa clarté, mais aussi par sa banalité. Ulysse devient un personnage presque sociologique, analysable, domestiqué. L’épopée perd alors sa verticalité, sa gravité, son étrangeté fondatrice.
Michael Solot : faire entendre l’épopée
C’est dans cet entre-deux que s’inscrit la traduction de Michael Solot, publiée en 2025. Son approche refuse à la fois l’archaïsme figé et la modernisation plate. Solot ne cherche pas à imiter mécaniquement l’hexamètre grec – impossible en anglais – mais à retrouver le mouvement, la houle du vers, la respiration du chant.
Son Ulysse n’est ni un concept abstrait, ni un héros désacralisé. C’est un homme façonné par l’épreuve, la ruse, la douleur et la fidélité. Le langage reste élevé sans être compassé, incarné sans être trivial. Le rythme, fondé sur les accents et non sur un carcan métrique, redonne au texte une oralité vivante, proche de ce qu’a pu être la récitation homérique. Là où Wilson raconte une histoire claire, Solot fait entendre un chant. Là où Lattimore conserve un monument, Solot rend un texte habitable.
Ce débat n’est pas technique. Il est profondément civilisationnel.
L’Europe est née de récits comme L’Odyssée. De récits qui parlent du retour au foyer, de la fidélité à la terre, de la mémoire des morts, de la transmission. Réduire Homère à une narration efficace ou à une grille psychologique moderne, c’est l’arracher à ce qu’il est : une matrice culturelle. À l’inverse, redonner chair à l’épopée, c’est rappeler que les Anciens ne nous parlent pas depuis un musée, mais depuis un temps encore vivant en nous.
Le pari de Nolan
Si le film de Christopher Nolan parvient à être autre chose qu’un spectacle impressionnant – s’il devient réellement épique – ce sera parce qu’il aura su s’adosser à une langue capable de porter le sublime sans l’aplatir. Car l’épopée ne supporte ni la tiédeur ni la neutralisation.
Lire Homère aujourd’hui, ce n’est pas chercher le confort du présent. C’est accepter l’étrangeté, la grandeur, parfois la dureté d’un monde qui nous a précédés et nous a faits. À ce titre, la traduction de Michael Solot apparaît comme bien plus qu’une nouveauté éditoriale : une tentative rare de réconciliation entre modernité linguistique et fidélité spirituelle.
Avant de juger l’Odyssée de Nolan, peut-être faut-il donc relire celle de Homère. Et entendre, à nouveau, la voix du chant.
[cc] Article proofread and corrected (spelling, syntax) by ChatGPT.
Breizh-info.com, 2025, dispatches free to copy and redistribute provided this source is credited and linked"
https://www.breizh-info.com/2025/12/19/254876/lodyssee-a-lepreuve-du-cinema-traduire-homere-sans-trahir-lepopee/
#Metaglossia
#metaglossia_mundus
"Éclaircie", named best foreign book of 2024 by numerous newspapers, will be translated into eight languages
"Littérature sans frontières (Literature Without Borders)
Carys Davies, winner of the 2025 Prix du meilleur livre étranger (Best Foreign Book Prize)
Published on: 19/12/2025
Carys Davies grew up in Wales before moving to the United States. She is the author of three novels; her first, "West" (Seuil, 2021), won the Wales Book of the Year award. Her second novel, "Le Voyage de Hilary Byrd" (Seuil, 2022), was named 2020 novel of the year by the Sunday Times. "Éclaircie", for its part, was named best book of 2024 by numerous newspapers and will be translated into eight languages. Carys Davies now lives in Edinburgh."
https://www.rfi.fr/fr/podcasts/litt%C3%A9rature-sans-fronti%C3%A8res/20251219-carys-davies-laur%C3%A9ate-du-prix-du-meilleur-livre-%C3%A9tranger-2025
#Metaglossia
#metaglossia_mundus
"Human language inspired AI – and now we can use that AI to learn about language
PhD candidate Yuchen Lian (LIACS) wants to understand why human languages look the way they do – and find inspiration to improve AI along the way. She defended her thesis on 12 December.
‘Languages change all the time. Think about ancient Chinese and modern Mandarin. There is a huge difference between them,’ Lian starts talking about her research enthusiastically. ‘And even more so between different languages.’
At the same time, there are universal characteristics, the PhD candidate explains. ‘Linguists want to understand which features are common and why they appear. With the increase in computational power of the last years, we can now learn about language evolution through computer models in increasingly realistic settings.’
‘I believe it's crucial to combine computer science and linguistics to reach a full understanding of language evolution.’
Artificial miniature languages
Lian completed her PhD under the supervision of Tessa Verhoef and Arianna Bisazza, and with support from the China Scholarship Council. As part of the programme, she spent three years at Leiden University before returning to China to continue her doctoral work. ‘I missed Leiden quite a lot, the atmosphere here is really good. I’m happy to return for my PhD defence.’
In her research, the PhD candidate is inspired by miniature languages used in linguistics experiments with people. Participants are given three words: a subject, object and verb. For example, cat, mouse, and chasing. In various assignments, researchers monitor how they use these words. In English, for example, the word order is fixed, explains Lian. “The cat chases the mouse” is the only correct sentence. But in Japanese, there are markers after a word to indicate its function. The order of words is therefore more flexible.
A conversation between AI agents
Lian models two or more AI agents that can communicate with each other. These agents are first trained on a pre-defined artificial language, like the experiments with human participants. ‘We basically give them a vocabulary at the start,’ says Lian. ‘Then we let the agents interact in pairs or even groups through interactive language games. When they accomplish a task successfully, all agents get a reward. That's how they learn.’
The different agents can have the same vocabulary, but different grammar rules to analyse various scenarios. The results show that these agents can replicate the trade-off between word order flexibility and the use of markers. ‘So we know the model works. Moreover, it allows us to extend the results with humans to a larger scale. This complements experiments with humans nicely.’
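To give a flavour of the kind of interactive language game described here, below is a minimal toy sketch (not Lian's actual model, which uses neural agents trained on richer artificial languages): two agents repeatedly play a signalling game, and a meaning-signal pairing is reinforced only when the listener guesses the speaker's meaning correctly.

```python
import random
from collections import defaultdict

MEANINGS = ["cat", "mouse", "chasing"]
SIGNALS = ["a", "b", "c"]

class Agent:
    """Toy agent: keeps weighted preferences for speaking and for listening."""
    def __init__(self):
        self.speak = defaultdict(lambda: {s: 1.0 for s in SIGNALS})  # meaning -> signal weights
        self.hear = defaultdict(lambda: {m: 1.0 for m in MEANINGS})  # signal -> meaning weights

    @staticmethod
    def _sample(weights):
        options, w = zip(*weights.items())
        return random.choices(options, weights=w, k=1)[0]

    def produce(self, meaning):
        return self._sample(self.speak[meaning])

    def interpret(self, signal):
        return self._sample(self.hear[signal])

    def reward(self, meaning, signal, amount=1.0):
        self.speak[meaning][signal] += amount
        self.hear[signal][meaning] += amount

def play(agents, rounds=5000):
    for _ in range(rounds):
        speaker, listener = random.sample(agents, 2)
        meaning = random.choice(MEANINGS)
        signal = speaker.produce(meaning)
        if listener.interpret(signal) == meaning:  # task accomplished: both sides get a reward
            speaker.reward(meaning, signal)
            listener.reward(meaning, signal)

agents = [Agent(), Agent()]
play(agents)

# After training, measure how often the pair now communicates successfully.
trials, wins = 1000, 0
for _ in range(trials):
    m = random.choice(MEANINGS)
    if agents[1].interpret(agents[0].produce(m)) == m:
        wins += 1
print(f"success rate after learning: {wins / trials:.2f}")
```

Even in this stripped-down form, repeated interaction plus a shared reward is usually enough for a stable meaning-signal mapping to emerge between the two agents, which is roughly the dynamic the thesis scales up with neural agents and more realistic grammars.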
Human language inspires AI chatbots
Lian opened a new avenue to learn about language evolution. At the same time, her work has the potential to improve AI. ‘We can take inspiration from human language and use that to improve the outcomes of an AI chatbot, for example. Currently, they are trained predominantly through passive exposure to enormous amounts of data. But humans clearly acquire language in a much more interactive way. My simulations demonstrate that repeated communication can result in more efficient interactions and spontaneous emergence of human-like patterns.’
A computer scientist in the field of linguistics
‘My topic extends over the borders of computer science,’ reflects Lian on her PhD. ‘It's quite a challenge to get into this interdisciplinary field. We speak different languages and have different standards in publishing, for example. It was sometimes hard for me that language evolution has no ground truths; propositions cannot be proved in a mathematical sense.’
At the same time, she also found the combination to be fulfilling. ‘Luckily, my promotors had expertise in both fields. I believe it's crucial to combine computer science and linguistics to reach a full understanding of language evolution.’
https://www.universiteitleiden.nl/en/news/2025/12/human-language-inspired-ai---and-now-we-can-use-that-ai-to-learn-about-language
#Metaglossia
#metaglossia_mundus
"When filing evidence with a court, certified human translations still control.
Welcome to the new multilingual reality of global disputes. International lawsuits no longer involve just multiple jurisdictions; they involve multiple languages, multiple versions of the same document, and multiple interpretations of key communications. As companies expand supply chains, outsource operations, and sell globally, courts are confronting a surge in multilingual evidence:
cross-border emails
bilingual contracts
multilingual warning labels
international HR and compliance records
foreign-language consumer complaints
multilingual product manuals
In complex cases, what a party understood, and in which language, can swing liability dramatically. This has transformed multilingual evidence from a niche issue into a core driver of modern litigation outcomes. Legal authorities such as Crowell & Moring’s Cross-Border Litigation Guide have noted that multilingual evidence is increasingly decisive in global disputes.
Because case timelines move faster than translated documents can be certified, legal teams increasingly rely on MachineTranslation.com, a high-accuracy and free AI translation tool that supports over a million users, processes billions of translated words, and provides legal-friendly features needed to manage multilingual evidence without compromising defensibility.
1. How Courts Evaluate Multilingual Evidence Today
1.1. Courts Now Scrutinize “What Was Understood When”
Judges focus on whether a party reasonably understood:
the contractual obligation
the warning label
the risk disclosure
the regulatory requirement
The question is no longer “What does this document say?”
It is now: “In which language did the parties act, rely, or misinterpret the meaning?”
Liability frequently shifts when:
language versions conflict
translations were inaccurate
one party spoke both languages and leveraged ambiguity
a party failed to verify a machine translation before acting on it
1.2. Machine Translations Are Admissible — With Limits
Most jurisdictions allow machine-translated documents for:
discovery
early case assessment
privilege review
internal investigations
multilingual e-discovery sorting
red-flag identification
But when filing evidence with a court, certified human translations still control. Courts also caution parties that reliance on faulty MT can increase liability. As a blog post on legal translation notes, translated material will still need to be reviewed, shared, and adopted across jurisdictions, which means translating large volumes of legal text into dozens of languages quickly without compromising quality.
2. The Five Litigation Areas Where Multilingual Evidence Changes Outcomes
2.1 Contract Disputes
Bilingual contracts often include a “controlling language clause.” If the versions diverge, courts evaluate:
reliance during negotiation
whether mistranslation created unfairness
whether ambiguity was exploited
2.2 Product Liability
Manufacturers face heightened exposure when:
safety instructions differ across languages
manuals contradict one another
incorrect translations mislead installers
Regulators increasingly consider mistranslation a failure to warn.
2.3 Employment & Workplace Claims
Inconsistent multilingual HR documents impact:
consent
discipline
worker safety
termination fairness
Courts ask whether employees were given clear, equivalent information.
2.4 Data Privacy & Regulated Industries
Incorrectly translated privacy policies can lead to:
GDPR penalties
consumer lawsuits
cross-border compliance failures
Even minor shifts like “may collect” → “will collect” can trigger exposure.
2.5 Cross-Border Fraud & Misrepresentation
Mistranslations can reveal:
deceptive intent
knowledge of falsity
reckless reliance on automated tools
This area is expanding rapidly.
3. How MachineTranslation.com Supports Defensible Litigation Workflows
Legal teams trust MachineTranslation.com because it provides:
accuracy
auditability
speed
scale
3.1 Immediate Translation of Massive Document Sets
MachineTranslation.com processes large collections of:
multilingual emails
attachments
WhatsApp messages
supplier communication
This enables lawyers to quickly identify:
custodians
privilege
key documents
risk factors
3.2 Side-by-Side Translation View
Attorneys can simultaneously see:
source text
MT output
glossary-applied terminology
3.3 Built-In Quality Signals
MachineTranslation.com allows exporting:
confidence scores
revision logs
translation memory matches
audit trails
These features strengthen defensibility when challenged.
3.4 Multilingual Version Comparison
The platform identifies:
mismatched clauses
altered definitions
missing obligations
ambiguous language across versions.
3.5 Enterprise Integrations
MachineTranslation.com is used by:
compliance organizations
global HR teams
in-house counsel
external law firms
investigations teams
Because it’s secure and designed for legal workflows.
4. Example: How a Simple Mistranslation Alters Liability
Original German Clause
“Der Lieferant muss sicherstellen, dass alle Teile frei von Materialfehlern sind.”
Correct Translation
“The supplier must ensure that all parts are free from material defects.”
Strict obligation → full liability.
Faulty Translation
“The supplier should try to make sure that all parts are free of defects.”
Best-effort → reduced responsibility.
Impact on Liability
Impact on liability between accurate translations and faulty translations. Table by author.
Such shifts can significantly alter:
breach analysis
indemnification triggers
warranty enforcement
damages calculations
settlement leverage
5. Where MachineTranslation.com Prevents These Errors
5.1 Higher Accuracy Engines
The platform blends multiple AI models + legal-domain training.
5.2 Glossaries for Legal Terms
You can lock crucial definitions such as:
indemnify
strict liability
must ensure
5.3 Alignment Tools
Side-by-side comparison reveals discrepancies immediately.
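As a concrete illustration of what such a comparison can catch automatically, the toy check below flags the kind of obligation-weakening shown in the German-clause example in section 4 above (“must ensure” versus “should try”). It is a generic sketch, not MachineTranslation.com's actual tooling; the term lists are illustrative only.

# A generic sketch of an automated obligation-strength check between two
# translations of the same clause. Term lists are illustrative only.
import re

STRICT = {"must", "shall", "ensure", "guarantee"}
WEAK = {"should", "may", "try", "endeavour", "endeavor"}

def obligation_terms(clause):
    words = set(re.findall(r"[a-z]+", clause.lower()))
    return words & STRICT, words & WEAK

def flag_weakening(version_a, version_b):
    """True if version_b drops strict terms or introduces weak ones."""
    strict_a, weak_a = obligation_terms(version_a)
    strict_b, weak_b = obligation_terms(version_b)
    return bool(strict_a - strict_b) or bool(weak_b - weak_a)

accurate = "The supplier must ensure that all parts are free from material defects."
faulty = "The supplier should try to make sure that all parts are free of defects."
print(flag_weakening(accurate, faulty))   # True: obligation strength differs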
5.4 Exportable Audit Trails
MachineTranslation.com provides:
timestamps
glossaries used
translation history
6. Best Practices for Attorneys Handling Multilingual Evidence
Identify original source language
Use MachineTranslation.com for early triage
Escalate key passages to certified translators
Compare MT, human, and original versions
Preserve every language version
Document all translation steps"
https://www.legalreader.com/how-multilingual-evidence-is-reshaping-liability-in-cross-border-litigation/
#Metaglossia
#metaglossia_mundus
"Questel Seamlessly Integrates AI-Assisted Patent Translation with Equinox IP Management Software
Translators are Increasingly Leveraging Innovative AI Tools to Improve Speed and Quality of IP Document Translations
Questel, a world leader in intellectual property including AI-assisted patent translation and filing, announced that it has created multiple Connectors to integrate its patent translation services with its Equinox IP Management Software and other third-party IP systems. The Connectors allow patent professionals to initiate new cases and communicate with patent translation professionals about case progress. Also, Questel is expanding its application of AI solutions to raise global patent translation to new levels of quality and consistency, while responsibly upholding obligations for confidentiality and security.
IP Translation Services Connected to Equinox
Questel’s substantial patent and IP translation services have now been integrated into Questel’s Equinox IP management software (and third-party platforms via API) via seamless Connectors, thereby expanding the end-to-end IP ecosystem. Due to this integration, IP professionals will experience unmatched ease of use, efficiency, and security throughout their patent translation project. Clients can obtain estimates and instruct translators in seconds using the Questel Services Portal, and can also download case instructions and upload deliverables.
AI-Driven Innovations in Patent Translation
Always a pioneer in applying AI technology to improve translation outcomes, Questel has sharpened its focus on equipping its skilled translators with state-of-the-art AI tools to produce unparalleled quality. While the expertise of human translators is still at the heart of Questel’s patent translation resources, AI innovations allow their multilingual, highly educated professionals to be effective, accurate and confidential. Questel is one of few language service providers (LSP) using AI to check and fix machine translations successfully, and to safely automate specific segments of translation work while maintaining human quality.
Highlights of AI-enabled Translation by Questel’s IP and Patent Translation Services:
Machine Translation Quality Evaluation (MTQE)
Translating IP and patent-related documentation ranges from incredibly complex to rather straightforward. MTQE tools quickly identify potential problems or complexities to flag them for greater scrutiny from human translators. Effectively, MTQE AI focuses human translators’ efforts where their nuanced expertise and judgement are most needed.
Hallucination Detection
AI has become infamous for generating hallucinations, so Questel uses specialized AI to examine what the large language model or neural tool is producing to detect obvious hallucinations. In some cases, the hallucination can be fixed, while in other cases, the translation may need to be completely redone.
Compliance Checking
Compliance with government regulations and patent filing agents’ requirements is essential to success for any patent translation project. Questel strategically uses patent-specific AI to enhance efficiency and ensure data security compliance. This includes safeguarding consistency of terms used throughout the document and adherence to the filing agent’s unique preferences and formatting requirements.
Scoring and Metrics
The MQM (Multidimensional Quality Metrics) framework is a benchmark Questel employs so that revisers can internally score translators and gauge their performance. MQM measures, in an automated way, what translators delivered and how well the AI assisted them.
Risk-Adjusted Service Levels
IP professionals clearly want choice when it comes to service levels in patent translation. Not all clients require a premium level for every translation need. Some patents are worth larger monetary investments, whereas others can be handled more economically. More routine documents, such as prior art and office actions, may not need high-touch human translation, so AI’s role can be more substantial for those. Questel offers a wide range of service levels to meet clients’ requirements and budgetary allowances. These range from Premium-Filing Ready to AI Auto Translations, with many gradations in between.
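Read as a workflow, the MTQE step described above boils down to scoring each machine-translated segment and routing low-confidence segments to a human translator. The sketch below shows only that routing logic; the qe_score() heuristic and the 0.8 threshold are hypothetical stand-ins, not Questel's actual quality-estimation tooling.

# A generic sketch of threshold-based MTQE routing as described above.
# qe_score() and the 0.8 threshold are hypothetical placeholders.
def qe_score(source: str, translation: str) -> float:
    """Toy quality-estimation proxy: penalise untranslated output and
    large length mismatches between source and translation."""
    if translation.strip() == source.strip():
        return 0.0                        # likely untranslated
    ratio = len(translation) / max(len(source), 1)
    return max(0.0, 1.0 - abs(1.0 - ratio))

def route(segments, threshold=0.8):
    """Auto-approve high-scoring segments, flag the rest for human review."""
    auto, human_review = [], []
    for src, mt in segments:
        (auto if qe_score(src, mt) >= threshold else human_review).append((src, mt))
    return auto, human_review

segments = [
    ("Der Liefervertrag endet am 31. Dezember.",
     "The supply agreement ends on 31 December."),
    ("Der Lieferant haftet für Materialfehler.",
     "Der Lieferant haftet für Materialfehler."),
]
auto, review = route(segments)
print(len(auto), "auto-approved,", len(review), "flagged for human review")

In a production pipeline the scoring function would be a trained quality-estimation model rather than a length heuristic, but the routing decision works the same way.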
“Questel’s Translation Services are known for superior human expertise, ease of use, quality and speed,” remarked Jeremy Coombs, Vice President of Translation & Filing Services at Questel. “Our new Connector-driven integration between our Patent Translation Services and Equinox adds tremendous convenience and visibility to the translation process. Also, our translators are leveraging AI in innovative ways to augment their work, leading to more expedient and comprehensive results for our clients.”
Click here for more information about Questel’s IP Translation Services and more." https://www.morningstar.com/news/business-wire/20251217721972/questel-seamlessly-integrates-ai-assisted-patent-translation-with-equinox-ip-management-software #Metaglossia #metaglossia_mundus
“Subtitling is not enough”: AI-boosted dubbing, influencers’ new weapon for breaking through internationally
ANALYSIS - To build loyalty among content creators, YouTube has announced it is improving the automated dubbing of their videos and adding lip synchronisation. A way to set itself apart from its rivals TikTok and Instagram.
Charles Sterlings recently shared with his fans his tips for buying cheap plane tickets. The video by this French content creator, who specialises in economics, drew 3 million views on Instagram. But once dubbed into Italian, it was watched by an additional 5 million people. The Spanish version, for its part, drew in another 2 million viewers. Every week, this creator, who has 2.7 million followers across his various social networks, devotes around ten hours to his translations and editing.
Unlike Charles Sterlings, most content creators have not taken the plunge into dubbing their videos, which is costly and time-consuming. So, to allow them to broaden their audience, YouTube unveiled new AI-powered features on Tuesday. In addition to automated dubbing, already available for two years, the platform is adding lip synchronisation..."
By Keren Lentschner
https://www.lefigaro.fr/secteur/high-tech/sous-titrer-ne-suffit-pas-le-doublage-dope-a-l-ia-nouvelle-arme-des-influenceurs-pour-percer-a-l-international-20250916
#Metaglossia
#metaglossia_mundus
"Cette étude se penche sur les enjeux et les défis traductologiques liés à la traduction littéraire, en particulier dans le cas des nouvelles de Faribâ Vafi, l’une des figures de la littérature iranienne contemporaine. L’objectif principal est d’élaborer une stratégie traductive capable de préserver à la fois la structure narrative, les spécificités stylistiques et les éléments culturels propres à l’univers de l’auteure, tout en produisant un texte fluide, lisible et culturellement accessible en langue cible.
Notre corpus est constitué de quatre nouvelles issues du recueil Pas de vent, pas de rame que nous avons traduites en français. Ces textes, empreints d’un réalisme subtil et d’une écriture intimiste, posent des défis particuliers sur les plans lexical, syntaxique et culturel. Afin d’aborder ces problématiques, notre analyse s’appuie sur l’approche comparative de Vinay et Darbelnet, qui permet de mettre en lumière les procédés de transposition, d’équivalence ou encore d’emprunt mobilisés dans le passage du persan au français. Cette approche sera également enrichie par des apports théoriques issus de travaux récents sur la traduction des culturèmes et des spécificités littéraires dans les textes contemporains."
https://journals.atu.ac.ir/article_20020.html
#Metaglossia
#metaglossia_mundus
"The Tokyo bookstore where translated Korean literature sparks ‘conversation across borders’ Since 2015, publisher Kim Seung-bok has used her shop Chekccori to connect Japanese readers with top Korean authors ‘in the right way’
On a quiet street in Jimbocho, a Tokyo neighbourhood known for its second-hand bookshops and publishing houses, one shop stands out: Chekccori. The store’s shelves are lined with Korean literature translated into Japanese, as well as works in the original language. It has become a gathering place for readers eager to cross cultural borders one page at a time. The name Chekccori means “a celebration after finishing a book” in Korean. The store was founded in 2015 by Tokyo-based South Korean publisher Kim Seung-bok. In recent years, it has seen a surge in young women drawn by their love of K-pop, as well as middle-aged men who have discovered the charm of Korean novels after Han Kang won the Nobel Prize in Literature in 2024.
Ayano Tachibana visited the shop in late August to find books for her coming trip to Seoul. She said she first encountered Korean literature through friends who loved K-pop and later studied Korean at university. “I loved The White Book by Han Kang,” 23-year-old Tachibana said, referring to the author’s poetic exploration of grief and fragility through reflections on white objects such as ice and paper.
“Reading it with classmates, guided by a professor who was a fan, made me realise literature could be a conversation across borders.”
Chekccori stocks around 4,000 books, including titles from Kim’s own publishing company. Kim founded Cuon in 2007 to bring more Korean literature to Japanese readers, at a time when few bookstores stocked such works. Cuon’s first release was Han’s The Vegetarian, a novel that won the 2016 Man Booker International Prize, bringing her international acclaim. The novel, which tells the story of a woman whose decision to stop eating meat provokes a violent backlash from her ignorant husband and authoritarian father, has been acclaimed for its haunting portrayal of repression, desire and the struggle for autonomy.
Kim said that while not an easy read, it is the kind of work that serious readers will recognise as extraordinary. “I wanted to establish a reputation for publishing works of real literary achievement,” said Kim, who has been in Japan since the early 1990s, when she came to study literary criticism after learning creative writing at a university in Seoul. Originally from South Jeolla province on the southern tip of South Korea, Kim witnessed how Japanese culture flowed into the country in the 1980s through magazines such as Non-no and novels by Haruki Murakami and Banana Yoshimoto. “So I thought, literature could also flow the other way,” she said. After working in advertising, Kim launched Cuon in Tokyo but struggled to promote Korean titles because most bookstores had no dedicated section for them. “The category of ‘Korean Literature’ did not exist, making it hard to find shelf space. Rather than feeling disappointed, I instead decided to create that space myself,” she said.
That led to Kim opening Chekccori in 2015.
Over the past decade, the number of Korean books translated into Japanese has increased dramatically. Kim estimates that 300 to 400 South Korean titles are now published annually in Japan, compared to only about 20 per year around 2010. The trend was fuelled in part by the success of Cho Nam-joo’s Kim Jiyoung, Born 1982, which sold 290,000 copies in Japan after its 2018 release by publisher Chikuma Shobo. The novel, about a woman facing systemic misogyny in a patriarchal society, resonated deeply with readers.
Kim credits this popularity to the rise of social media, which has allowed ideas and movements – including feminism – to spread rapidly across borders. The feminist movement in South Korea gained momentum after a 2016 murder case in Seoul, followed by the global #MeToo movement in 2017. Kim’s publisher has released many feminist-themed books, including a collection of essays titled “#Living as a woman who speaks up” by author and lawyer Jeong So-yeon.
Now celebrating its 10th anniversary, Chekccori has set a new goal to introduce more Korean poetry, a genre still relatively under-represented in translation. The bookstore held events for Korean poet Shin Mina, who was in Tokyo for two months earlier this year under a writer-in-residence programme. Interest in Korean poetry is growing. Yukinori Ebihara visited Chekccori for the first time after hearing Mariko Saito, translator of Han’s novels and many other works, read Korean poems on the radio. “Even without understanding the words, the sound was beautiful. It made me want to hear more, to feel that resonance,” the 74-year-old said.
Today, Kim’s focus has shifted from growth to sustainability. After recovering from cancer a few years ago, she hopes to ensure that Chekccori continues connecting readers and writers for years to come. “What I’d like to do is to return to the basics – the craft of choosing excellent books, creating them with care, and placing them in the hands of readers in the right way,” she said."
Kyodo
Published: 5:15pm, 4 Nov 2025
https://amp.scmp.com/lifestyle/article/3331461/tokyo-bookstore-where-translated-korean-literature-sparks-conversation-across-borders
#Metaglossia
#metaglossia_mundus
Jobs in higher education.
"Sign Language Interpreter II, Office for Disability Equity
University of Montana
Missoula, MT
Type: Full-Time
Salary: $35 per hour
Posted: 1 day ago
Application Due: 12/22/2025
Category: Disability and Accessibility Services
Sign Language Interpreter II, Office for Disability Equity
Salary: $35.00 Hourly
Location: Missoula Mountain Campus
Job Type: Hourly Staff Full-time
Job Number: 202400475
Sector:
Department: UM VP Student Success & Campus Life
Closing Date: 12/22/2025 11:59 PM Mountain
FLSA: Non-Exempt
Bargaining Unit:
Description
The Office for Disability Equity invites applications for a Sign Language Interpreter II to provide sign language interpreting services to students, faculty, staff and visitors of UM.
Examples of Duties and Responsibilities
Voice-to-sign and sign-to-voice interpretation of all courses, lectures, tutor sessions, labs, meetings, special events and community outreach.
Follow current trends in American Sign Language.
Adapt interpreting methods when necessary.
Engage in professional development.
Support the University's inclusive prosperity mission.
Other duties as assigned.
Minimum Qualifications
Bachelor's degree in ASL-English Interpreting or relevant field. An equivalent combination of education and experience, including a 4.0+ EIPA score, may be considered.
Fluency in ASL & English.
RID or BEI certification.
Demonstrated ability to apply and analyze the RID Code of Professional Conduct.
Demonstrated knowledge of and skill in American Sign Language, voice-to-sign, and sign-to-voice interpretation.
...
Preferred Qualifications
Five years' experience working in post-secondary and/or community settings.
Completion of an interpreter training program; preference given to Bachelor's degree in ASL-English interpretation or related field.
Adherence to the NAD-RID Code of Professional Conduct.
Preference given to candidates with RID certification and/or BEI certification and/or EIPA score 4.0 or higher.
Additional Information
Position Number: 100070
Compensation Title: Sign Language Interpreter II
Bargaining Unit: FOCUS-MFPE
Work Schedule: Full-time, 1.0 FTE (40 Hours a week), Monday through Friday 8:00 am to 5:00 pm, 12 months/year
Probationary Period: Six (6) months minimum
Benefits Include: Insurance package, mandatory retirement plan, partial tuition waiver, and wellness program.
https://www.higheredjobs.com/admin/details.cfm?JobCode=179321983
#Metaglossia
#metaglossia_mundus
"MELBOURNE – On December 4 thousands of unionised healthcare workers from across Victoria, Australia banded together in a landmark protest for better pay – the first of its kind in 25 years.
Four Greek interpreters from The Alfred Hospital in Melbourne stood out as key participants, striking for improved wages and manageable workloads. Among them, long-time union member and Greek interpreter, Soula (Anastasia) Tousimis, who stressed the importance of the union and increased remuneration for interpreters.
“The union looks after our interests as interpreters … they recognize us … we feel protected,” Tousimis said. “A lot of people still don’t even know that we exist in the hospital system. We are highly qualified professionals who have been neglected for too long and deserve better.”
As a symbol of their loyalty to the Health Workers Union (HWU) and their profession in healthcare, the Greek interpreters walked off the job and onto the streets with the echoes of “union power” chants behind them.
HWU Lead Organiser Jake McGuinness welcomed all HWU members at 10 AM in front of the Victorian Trades Hall Council in Melbourne, where the t-shirt-printed capitalised message “have a heart for healthcare workers, stand together, win together” accompanied police escorts.
Persistent in pink, and in powerful demonstration with their main banner, Tousimis and her colleagues marched in solidarity with the crowd toward Melbourne’s iconic Parliament House.
“We’ve had enough and can’t wait any longer, we have to do something about it now”, she said.
Fellow Greek interpreters Komninos and Maria added that the protest is timely as the Victorian economy is still recovering from the losses of COVID-19 lockdowns.
“Cost of living is higher and wages are stuck …they (the Victorian government) haven’t accounted for that,” shared Komninos.
Maria urged that despite “high inflation” and working on salaries “lower than before COVID,” the “power of the people” is what can make all the difference. “I have always believed in the power of the people. When people come together, it is always a success.”
According to the HWU website, the strike is in response to the Victorian Government’s recent 3% pay rise offer for healthcare workers, in comparison to a 7.1% increase for nurses, and 8.25% for paramedics.
Soula Tousimis and her daughter Christine Filippidis ahead of the protest. Photo: Christine Filippidis
Health Workers Union bans
In a video from the HWU Victoria Facebook page, McGuinness can be seen calling on the Victorian government to take action, stating that bans, including “a closure of one in four hospital beds”, “a pause on all category 2 and 3 elective surgeries,” and “a ban on cleaning non-clinical areas of the hospital” will be put in place if healthcare workers’ demands aren’t met.
“If the government continues to refuse to treat these workers with respect and offer them a liveable pay deal, then in January (2026) we will be forced to escalate these bans”, he said.
For now, Soula and her fellow Greek interpreters remain committed to providing a duty of care to their patients at The Alfred, yet maintain that their voices also be heard in their fight for fairness.
“We won’t ever give up,” Soula said. “This is who we are, and we will keep pushing for better … for what’s fair for all interpreters and healthcare workers.”
To find out more about the rolling bans and how you can support interpreters and other healthcare workers visit the HWU website." https://www.thenationalherald.com/greek-interpreters-take-center-stage-at-historic-union-strike-in-australia/ #Metaglossia #metaglossia_mundus
"Oversight panel mulls probing interpreters helping non-citizens get Maine driving permits
The Government Oversight Committee had originally been asked to investigate whether two noncitizen drivers involved in pedestrian fatalities had improperly obtained driving credentials. Now the panel wants more information about interpreters who help noncitizens complete their written exams...
The Maine Legislature's Government Oversight Committee is weighing a probe into how interpreters help non-citizens obtain their driver's licenses. Some lawmakers are calling for an investigation after two people were recently killed in separate incidents in Lewiston and New Gloucester.
According to the Secretary of State's office, both of the fatalities involved drivers who were legally present in the U.S. when their license or drivers permit was issued. One of the drivers had his license for more than six years at the time of the crash and the other had his permit for about a week.
While a pair of Republican lawmakers had originally questioned whether the drivers had improperly obtained driving credentials, officials from the Bureau of Motor Vehicles provided the committee a timeline showing both were legally present, verified with the federal verification system and met the federal and state requirements.
Both drivers had temporary legal status. Neither Maine nor federal REAL ID laws require noncitizens to have permanent legal status to obtain driving credentials.
Nevertheless, Wednesday's oversight hearing drew additional questions and allegations that interpreters are helping non-English speaking applicants pass their written exams. Interpreters are allowed to assist non-English speaking applicants during their driving exams, but some lawmakers said whistleblowers at the BMV were prepared to testify under oath that they had witnessed cheating during written exams.
Several Democrats on the committee worried the potential inquiry is influenced by politically-inflamed suspicions about non-white immigrants, but others said the oversight panel should make sure there's no cheating and accountability for interpreters.
The committee requested more information from the BMV and is expected to revisit the matter at a later meeting."
Maine Public | By Steve Mistler
Published December 17, 2025 at 6:21 PM EST
https://www.mainepublic.org/maine/2025-12-17/oversight-panel-mulls-probing-interpreters-helping-noncitizens-get-maine-driving-permits
#Metaglossia
#metaglossia_mundus
MIT-IBM Watson AI Lab researchers developed a more expressive AI architecture called PaTH Attention, increasing the capabilities of large language models so they can perform better state tracking and sequential reasoning over long text.
"Most languages use word position and sentence structure to extract meaning. For example, “The cat sat on the box,” is not the same as “The box was on the cat.” Over a long text, like a financial document or a novel, the syntax of these words likely evolves.
Similarly, a person might be tracking variables in a piece of code or following instructions that have conditional actions. These are examples of state changes and sequential reasoning that we expect state-of-the-art artificial intelligence systems to excel at; however, the existing, cutting-edge attention mechanism within transformers — the primary architecture used in large language models (LLMs) for determining the importance of words — has theoretical and empirical limitations when it comes to such capabilities.
An attention mechanism allows an LLM to look back at earlier parts of a query or document and, based on its training, determine which details and words matter most; however, this mechanism alone does not understand word order. It “sees” all of the input words, a.k.a. tokens, at the same time and handles them in the order that they’re presented, so researchers have developed techniques to encode position information. This is key for domains that are highly structured, like language. But the predominant position-encoding method, called rotary position encoding (RoPE), only takes into account the relative distance between tokens in a sequence and is independent of the input data. This means that, for example, words that are four positions apart, like “cat” and “box” in the example above, will all receive the same fixed mathematical rotation specific to that relative distance.
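To see that data-independence concretely, here is a minimal sketch of a RoPE-style rotation (dimensions and base chosen only for illustration): the score between a rotated query and key depends solely on their relative offset, because the rotation angles are a fixed function of position and never of the tokens themselves.

# A minimal sketch of standard rotary position encoding (RoPE): each position
# rotates query/key dimension pairs by fixed, content-independent angles, so
# the score depends only on the relative offset between positions.
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate pairs of dimensions of x by angles determined only by pos."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)     # one frequency per pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)

# The same relative distance (4) at different absolute positions yields the
# same score: the rotation depends on positions, never on token content.
s1 = rope(q, 10) @ rope(k, 6)
s2 = rope(q, 110) @ rope(k, 106)
print(np.isclose(s1, s2))   # True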
Now research led by MIT and the MIT-IBM Watson AI Lab has produced an encoding technique known as “PaTH Attention” that makes positional information adaptive and context-aware rather than static, as with RoPE.
“Transformers enable accurate and scalable modeling of many domains, but they have these limitations vis-a-vis state tracking, a class of phenomena that is thought to underlie important capabilities that we want in our AI systems. So, the important question is: How can we maintain the scalability and efficiency of transformers, while enabling state tracking?” says the paper’s senior author Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab.
A new paper on this work was presented earlier this month at the Conference on Neural Information Processing Systems (NeurIPS). Kim’s co-authors include lead author Songlin Yang, an EECS graduate student and former MIT-IBM Watson AI Lab Summer Program intern; Kaiyue Wen of Stanford University; Liliang Ren of Microsoft; and Yikang Shen, Shawn Tan, Mayank Mishra, and Rameswar Panda of IBM Research and the MIT-IBM Watson AI Lab.
Path to understanding
Instead of assigning every word a fixed rotation based on relative distance between tokens, as RoPE does, PaTH Attention is flexible, treating the in-between words as a path made up of small, data-dependent transformations. Each transformation, based on a mathematical operation called a Householder reflection, acts like a tiny mirror that adjusts depending on the content of each token it passes. Each step in a sequence can influence how the model interprets information later on. The cumulative effect lets the system model how the meaning changes along the path between words, not just how far apart they are. This approach allows transformers to keep track of how entities and relationships change over time, giving them a sense of “positional memory.” Think of this as walking a path while experiencing your environment and how it affects you.
Further, the team also developed a hardware-efficient algorithm for computing attention scores between every pair of tokens: the cumulative mathematical transformation from PaTH Attention is compressed and broken down into smaller computations so that it is compatible with fast processing on GPUs.
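The contrast with RoPE can be made concrete with a small sketch. The code below is only an illustration of the idea as the article describes it, not the authors' released implementation, and it ignores the hardware-efficient factorisation: each token contributes one content-dependent Householder reflection, and the transform applied between two positions is the cumulative product of the reflections along the path, so the same relative distance can yield different scores when the in-between content changes. The projection W is a stand-in for whatever learned mapping produces the reflection vectors.

# An illustrative sketch of path-style, data-dependent position transforms:
# the relative transform between two positions is a cumulative product of
# content-dependent Householder reflections, one per token on the path.
import numpy as np

rng = np.random.default_rng(0)
D, SEQ_LEN = 8, 6                          # illustrative sizes
tokens = rng.normal(size=(SEQ_LEN, D))     # stand-in for token hidden states
W = rng.normal(size=(D, D)) / np.sqrt(D)   # hypothetical learned projection

def householder(x):
    """Reflection I - 2 v v^T determined by the token's own content."""
    v = W @ x
    v = v / np.linalg.norm(v)
    return np.eye(D) - 2.0 * np.outer(v, v)

def path_transform(i, j):
    """Cumulative transform along the path from position j up to i (j < i)."""
    H = np.eye(D)
    for t in range(j + 1, i + 1):          # walk the path, token by token
        H = householder(tokens[t]) @ H
    return H

q, k = rng.normal(size=D), rng.normal(size=D)
score_before = q @ path_transform(5, 1) @ k

# Unlike RoPE, changing the content of an in-between token changes the score
# even though the relative distance between the two positions stays the same.
tokens[3] = rng.normal(size=D)
print(score_before, q @ path_transform(5, 1) @ k)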
The MIT-IBM researchers then explored PaTH Attention’s performance on synthetic and real-world tasks, including reasoning, long-context benchmarks, and full LLM training to see whether it improved a model’s ability to track information over time. The team tested its ability to follow the most recent “write” command despite many distracting steps and multi-step recall tests, tasks that are difficult for standard positional encoding methods like RoPE. The researchers also trained mid-size LLMs and compared them against other methods. PaTH Attention improved perplexity and outcompeted other methods on reasoning benchmarks it wasn’t trained on. They also evaluated retrieval, reasoning, and stability with inputs of tens of thousands of tokens. PaTH Attention consistently proved capable of content-awareness.
“We found that both on diagnostic tasks that are designed to test the limitations of transformers and on real-world language modeling tasks, our new approach was able to outperform existing attention mechanisms, while maintaining their efficiency,” says Kim. Further, “I’d be excited to see whether these types of data-dependent position encodings, like PATH, improve the performance of transformers on structured domains like biology, in [analyzing] proteins or DNA.”
Thinking bigger and more efficiently
The researchers then investigated how the PaTH Attention mechanism would perform if it more similarly mimicked human cognition, where we ignore old or less-relevant information when making decisions. To do this, they combined PaTH Attention with another position encoding scheme known as the Forgetting Transformer (FoX), which allows models to selectively “forget.” The resulting PaTH-FoX system adds a way to down-weight information in a data-dependent way, achieving strong results across reasoning, long-context understanding, and language modeling benchmarks. In this way, PaTH Attention extends the expressive power of transformer architectures.
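The "forgetting" side of this can also be pictured in isolation. The sketch below shows one simple way to realise data-dependent down-weighting in attention: each token emits a gate between 0 and 1, and the log of the gates between a key and the query is added to the attention logit, so context sitting behind "forgetful" tokens fades. This is a schematic of the general mechanism only, not the exact FoX or PaTH-FoX formulation.

# A schematic of data-dependent down-weighting ("forgetting") in attention.
# Illustrative only; not the exact FoX / PaTH-FoX formulation.
import numpy as np

rng = np.random.default_rng(0)
D, SEQ_LEN = 8, 5
x = rng.normal(size=(SEQ_LEN, D))             # stand-in hidden states
Wq, Wk = rng.normal(size=(D, D)), rng.normal(size=(D, D))
w_gate = rng.normal(size=D)                   # hypothetical gate projection

q, k = x @ Wq.T, x @ Wk.T
gates = 1.0 / (1.0 + np.exp(-(x @ w_gate)))   # one sigmoid gate per token

def attention_weights(i):
    """Softmax attention of query i over keys 0..i with forget-gate decay."""
    logits = []
    for j in range(i + 1):
        decay = np.sum(np.log(gates[j + 1:i + 1]))   # 0 when j == i
        logits.append(q[i] @ k[j] / np.sqrt(D) + decay)
    e = np.exp(np.array(logits) - max(logits))
    return e / e.sum()

# Keys separated from the query by small gates receive less attention.
print(np.round(attention_weights(4), 3))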
Kim says research like this is part of a broader effort to develop the “next big thing” in AI. He explains that a major driver of both the deep learning and generative AI revolutions has been the creation of “general-purpose building blocks that can be applied to wide domains,” such as “convolution layers, RNN [recurrent neural network] layers,” and, most recently, transformers. Looking ahead, Kim notes that considerations like accuracy, expressivity, flexibility, and hardware scalability have been and will be essential. As he puts it, “the core enterprise of modern architecture research is trying to come up with these new primitives that maintain or improve the expressivity, while also being scalable.”
This work was supported, in part, by the MIT-IBM Watson AI Lab and the AI2050 program at Schmidt Sciences."
https://news.mit.edu/2025/new-way-to-increase-large-mlanguage-model-capabilities-1217
#Metaglossia
#metaglossia_mundus
"Arabic is one of the world’s most widely spoken languages with at least 400 million speakers, including 200 million native speakers and 200 million to 250 million non-native speakers.
Modern Standard Arabic (MSA) serves as the formal language for government, legal matters and education, and it is widely used in international and religious contexts. Additionally, more than 25 dialects are spoken primarily across the Middle East and North Africa.
Every year on December 18, the United Nations commemorates World Arabic Language Day, celebrating Arabic as “the pillar of the cultural diversity of humanity”. The date was chosen to mark the day in 1973 on which the UN General Assembly adopted Arabic as one of its six official languages.
In the following visual explainer, Al Jazeera lists some of the most common words in today’s English language that originated from Arabic or passed through Arabic before reaching English.
How Arabic words entered other languages
As the most spoken of the Semitic languages, a group of languages that originated across Southwest Asia and Africa, Arabic has influenced societies and other languages for centuries.
Linguists say the presence of Arabic words in other languages reflects long histories of contact through trade, scholarship and cultural exchange.
English, Spanish, French, Turkish and many other languages across the globe have borrowed hundreds to thousands of words from Arabic that are used in everyday language.
Muntasir Al Hamad, a linguist and professor of Arabic at Qatar University, says this type of borrowing is a “natural phenomenon” and languages have borrowed from one another for centuries.
“Arabic is no different in that sense. This is reflected in vocabulary, science, technology and civilisation,” he tells Al Jazeera.
An alphabet with many forms
Arabic uses an alphabet of 28 letters and is written from right to left. The script is cursive, and its letters change shape depending on their position in a word. Short vowels are typically omitted in everyday writing.
These features, together with Arabic’s extensive vocabulary, have contributed to the perception that the language is difficult for non-native speakers to learn.
However, Al Hamad says this perception is far from accurate for many people.
“One of the biggest misconceptions about Arabic is that it is among the most difficult languages in the world,” he said. “In reality, it is simply a language with systems that differ from English or from many European languages.”
He added that while the Arabic script may appear unfamiliar to some learners, it is “quite familiar” to speakers of other languages, such as Urdu and Farsi. Speakers of those languages, Al Hamad says, often find Arabic easier to read while Turkish speakers may find its vocabulary easier to memorise due to the thousands of Arabic words Turkish has absorbed.
From A for algebra to T for tariffs
One of the biggest contributions the Arabic language has made to the world is in the fields of mathematics and science.
Over time, some of these words entered other languages in shortened or adapted forms, becoming so familiar that their origins are often forgotten.
One example is algebra, a cornerstone of mathematics. The term comes from the Arabic word al-jabr, meaning “restoration” or “reunion”. It originally appeared in the title of a ninth-century work on solving equations by the Baghdad-based Persian scholar Muhammad ibn Musa al-Khwarizmi, from whose name the word “algorithm” is derived.
Other Arabic words underwent more dramatic transformations. Carat, the unit used to measure the weight of gemstones, traces its roots to the Arabic word qirat.
According to Al Hamad, these changes reveal how English and other languages adapt unfamiliar sounds. “Because English has relatively few words beginning with Q,” he explains, “Arabic words such as qirat were reshaped using more familiar sounds like C, G or K, producing forms such as carat.”
The same process can be seen in everyday vocabulary beyond science and mathematics. The word giraffe, for instance, comes from the Arabic zarafa, and went through a similar transformation as English and other European languages reshaped the original sounds to fit their own phonetic patterns, much as they did with words beginning with the Arabic letter Q.
On the other hand, words such as tariff, which is derived from the Arabic word ta’rif, meaning “to notify” or “to announce”, entered English through contact with other languages involved in trade.
Al Hamad says these words “most likely entered the English language via Romance languages” although not necessarily in the forms we recognise today. He adds that they also passed through Turkish, which “borrowed heavily from Arabic” and influenced the medieval world through trade and warfare. Later, during the British colonial era, English both borrowed from and contributed words directly to Arabic..."
By Alma Milisic and Mohammed Haddad
18 Dec 2025
https://www.aljazeera.com/news/2025/12/18/from-a-for-algebra-to-t-for-tariffs-arabic-words-used-in-english-speech
#Metaglossia
#metaglossia_mundus
"PEN International Translation and Linguistic Rights Committee 17 Dec Written By PEN International
“At a time when so many voices risk being silenced, our Committee works to ensure that every language has the space to thrive. Protecting linguistic rights is not only about preserving heritage — it is about safeguarding dignity, diversity, and the freedom to imagine in one’s own words.” Urtzi Urrutikoetxea, Chair of PEN International Translation and Linguistic Rights Committee
In a world where thousands of languages are at risk, defending linguistic rights has never been more urgent. PEN International Translation and Linguistic Rights Committee (TLCR) stands at the centre of one of the most important struggles of our time: ensuring that all languages — and the people who speak, write, and dream in them — can flourish.
The Committee’s mission is rooted in the belief that translation is more than a literary act; it is an act of solidarity. When a language is marginalised, its stories and people risk disappearing. Translation offers a bridge, carrying those voices into new spaces, widening their reach.
A history of championing translation
Although the TLCR was officially created in 1978 during PEN International 43rd Congress in Stockholm, its commitment to translation began much earlier. As far back as 1928, PEN collaborated with the International Institute of Intellectual Co-operation, a League of Nations body, to promote translations across borders. PEN Centres identified works deserving translation, which the Institute then shared with publishers worldwide. A Geneva meeting in July 1928, attended by figures such as John Galsworthy and Salvador de Madariaga, formally approved the arrangement.
The creation of the TLCR
By 1978, this vision had grown into a formal structure dedicated to translation. That year, under the global presidency of Mario Vargas Llosa, Swedish PEN used the Stockholm Congress to draw attention to the essential role of translators. Per Wästberg, then president of Swedish PEN, put together a coalition of PEN Centres to broaden access to world literature. Originally known as the Programme and Translation Committee, its first aim was simple yet radical: to champion translation from all literary traditions, especially those with little international presence. The Committee supported anthologies bringing lesser-translated voices into dialogue — for example collaborations between Portuguese and Catalan poets or Macedonian and French poets.
From “small” to “minoritised” languages
One of the Committee’s key contributions has been to challenge the idea of “small languages” and to advance the concept of “minoritised languages” — languages marginalised by political, economic, and cultural forces. During the late 1980s and early 1990s, under the leadership of Portuguese poet Ana Hatherly, the Committee sharpened its focus on cultural rights. With strong support from several PEN Centres, it affirmed that translation cannot be separated from defending linguistic communities. This led to the Committee’s current name: the Translation and Linguistic Rights Committee.
The World Conference on Linguistic Rights
This mission culminated in 1996, when 61 NGOs, 41 PEN Centres, and 40 experts gathered in Barcelona for the World Conference on Linguistic Rights. Organised by the Committee in partnership with the International Escarré Centre for Ethnic Minorities and Nations and with UNESCO’s support, the event aimed to create a global framework to protect linguistic rights.
Over three days, experts, activists, and writers drafted the Universal Declaration of Linguistic Rights (UDLR). Its scientific council, chaired by linguist Isidor Marí, warned that up to 80% of the world’s languages could disappear within the twenty-first century. The Declaration set out principles for linguistic justice and cultural coexistence and was formally approved at the University of Barcelona, with delegates from every continent signing it. UNESCO received the document a month later.
Building on the UDLR
Fifteen years later, the Girona Manifesto (2011) and the Quebec Declaration on Literary Translation and Translators (2015) continued to advance linguistic and translation rights. The Committee has also held meetings outside Europe, including Johannesburg (2016), Bengaluru (2017), and San Cristóbal de las Casas (2019), where the first Indigenous PEN Centre joined the PEN family (PEN Chiapas Pluricultural). The COVID-19 pandemic inspired the launch of the Video-Poem Marathon in Indigenous and Minoritised languages, further supporting their visibility. Since then, more Indigenous writers and their languages are represented, and the anthology Lenguas Vivas, featuring 26 authors in Indigenous languages of Latin America, has become a milestone in PEN International’s history.
A legacy that matters
From its earliest years, the TLCR has defended linguistic diversity, promoted translation, and upheld the rights of communities whose languages are endangered or overlooked. At a time when globalisation threatens to flatten cultures, the Committee reminds us that every language carries its own worldview and that every worldview is essential to humanity."
https://www.pen-international.org/news/pen-international-translation-and-linguistic-rights-committee
#Metaglossia
#metaglossia_mundus
"Workshop 'Meet the Expert - Methodological Issues in Translation Studies Theses'
For whom: Employees, Students
When: 21-01-2026 from 10:00 to 12:00
Where: Campus Mercator, room BK.03, Groot-Brittanniëlaan 45, 9000 Ghent
Language: English
Organizer: Department of Translation, Interpreting and Communication - Faculty of Arts and Philosophy
Contact: piet.vanpoucke@ugent.be
The workshop is intended as a brainstorming session for discussing the presentation of methodological issues in Translation Studies Theses.
Each participant prepares a short abstract (up to 500 words) in English on their chosen topic. In the workshop, there will first be a short introduction to / survey of different types of methods’ sections in PhD dissertations in Translation Studies. Operationalisation is one of the key concepts here.
The theses discussed are all online, and a list of them will be provided, in case any of the participants are interested in having a look beforehand. After this survey, doctoral candidates will be discussing their methods and optimal ways of structuring the methods section, in smaller groups. Ideas, challenges and questions will be noted down and presented to the whole group afterwards.
Everyone has a chance to participate in the discussion, and the workshop organiser will comment on the groups’ suggestions as well as individual abstracts submitted earlier."
https://www.ugent.be/en/agenda/translation-studies
#Metaglossia
#metaglossia_mundus
"Led by a unique team of adjunct professionals, the program offers American Sign Language courses taught by qualified instructors who are native signers, along with interpreting classes led by nationally certified interpreters, preparing students for real-world careers. The prospective interpreter gets the best of both worlds.
Through its Interpreter Training Program, Oklahoma State University-Oklahoma City is working to meet that critical need: preparing qualified interpreters to serve communities statewide.
Jimmy Mitchell, department head of OSU-OKC’s Interpreter Training Program, said the program is focused on equipping students with the skills and ethics necessary to succeed in the workforce.
“We are very serious about developing qualified and skilled interpreters in every possible way that we can to make sure that they are ready for the workforce,” Mitchell said through an interpreter.
The program offers American Sign Language courses taught by qualified instructors, as well as interpreting classes led by nationally certified interpreters. Adjunct faculty bring real-world experience to the classroom, teaching both technical skills and professional ethics. Students also gain hands-on experience through internships and community projects coordinated by Mitchell.
Mitchell’s own path to interpreter education began during the COVID-19 pandemic, when he frequently interpreted at emergency press conferences for Gov. Kevin Stitt. That experience sparked a deeper interest in the interpreting community.
Previously a vocational rehabilitation counselor, Mitchell had seen and experienced firsthand the challenges clients faced when interpreters were underqualified.
“I knew that we had interpreting problems in the state of Oklahoma,” he said. “After COVID, I realized teaching was more of my calling.”
Mitchell later taught at OSU-Stillwater before joining OSU-OKC as department head three years ago. Since then, he has worked to expand the program’s reach and strengthen its ties to the community.
“I send interns out to the community to work, in order to gain valuable real-time experience you can't get in a classroom setting,” he said. “Whatever they need, if we need an intern for observation or hands up, then I coordinate all of that.”
Looking ahead, Mitchell envisions developing microcredentials in specialized areas such as medical, educational and legal interpreting. He also hopes to establish a bachelor’s degree program at OSU-OKC, similar to the one offered at OSU-Stillwater, to give local students more opportunities to advance their education. “I would love to set up a program here for the local students who want to have a bachelor’s degree in interpreting, while being as flexible as possible for students who work while attending school or for those with family obligations,” he said.
Partnerships with community organizations and workforce initiatives are also part of his vision for growth.
“More partnerships with the community and different organizations and the workforce would help develop the program more,” Mitchell said."
https://news.okstate.edu/articles/osu-okc/2025/osu-okc-interpreter-training-program-builds-pathways-for-skilled-professionals.html
#Metaglossia
#metaglossia_mundus
"Inside the World of a Sign Language Interpreter
Sign language interpreters are often unseen, yet they play a critical role in bridging communication between the deaf and hearing worlds. Bill Pugin, a veteran ASL interpreter of more than 50 years and author of the new book Fly on the Wall, shares how his deaf sister inspired his career and offers fascinating stories from decades of interpreting—especially in the entertainment industry. From working with Paul McCartney, Meryl Streep, Johnny Depp, and Natalie Portman to performing high-pressure live, on-stage interpreting, Pugin explains why interpreting is about conveying meaning, not word-for-word translation. His book is available now, with a local book signing set for January 14 in Palm Springs.
By: Thalia Hayden
December 16, 2025
https://www.nbcpalmsprings.com/2025/12/16/inside-the-world-of-a-sign-language-interpreter
#Metaglossia
#metaglossia_mundus
|
"Interpreters say they will refuse all work from the VITS LanguageLoop service unless previous rates of pay are restored.
Hundreds of interpreters in Victoria are preparing to refuse jobs on Monday.
Some staff have indicated the changes could result in an annual pay reduction of approximately $8,000 to $30,000.
What's next?
Interpreters say they will refuse all work from VITS LanguageLoop unless the previous rates of pay are restored.
Hundreds of interpreters from Victoria's culturally and linguistically diverse (CALD) communities are set to take risky unprotected industrial action on Monday, after raising concerns about what they are calling a sudden "pay cut".
They plan to "strike" by refusing to take new jobs, after the Victorian Interpreting and Translating Service (VITS) LanguageLoop quietly changed its payment structure for more than 3,000 contractors.
Why Australia's political system is inaccessible for many migrants
Booming Chinese and South Asian migrant communities are keen to engage in Australian democracy but many do not fully understand how politics works in their adopted home country, new research has shown.
Several interpreters said the move will worsen their anxiety amid the cost-of-living crisis, with the "pay cut" resulting in an annual income reduction of approximately $8,000 to $30,000 per person.
VITS, a Victorian government business enterprise, was established in 1975 to break language barriers for the growing CALD communities through its translation and interpreting platform LanguageLoop.
The platform connects thousands of interpreters, who are fluent in more than 190 languages, with government and business clients. It pays them commissions as contractors.
Many interpreters, who relied on finding jobs through VITS LanguageLoop for their livelihoods, said the new payment structure lacks "fairness, transparency, and communication".
The changes, which include removing the one-off payments for full-day and half-day jobs, and splitting tasks into government and non-government streams, could lead to a significant reduction in the payments they receive.
"These changes are severely impacting our daily income and the entire industry," said Lianzhen Kennedy, a 54-year-old Mandarin interpreter.
Ms Kennedy has spent 13 years in the industry, and she said that she is set to lose about $8,000 to $10,000 annually.
She said pay cuts have taken a toll on her ability to maintain her current standard of living, and she will join the "strike".
She said she was concerned that VITS LanguageLoop made the fee structure change to "win more clients".
"We are professionals. We need to be heard and our work needs to be respected."
The ABC has contacted VITS LanguageLoop for a response.
Cuts came 'without notice'
Another change involved reducing the time allotted for various fixed translation tasks, which lowered compensation.
Ms Kennedy said VITS LanguageLoop also removed the option for interpreters to book either four-hour or eight-hour slot durations due to client requests.
The interpreter explained that for an eight-hour court booking, which costs $326, the new method of calculating the fee at $8.59 every 15 minutes amounts to $241, a difference of $85.
She said that with the new method, the cost for a private law firm would be $218 instead of $326, a difference of $108.
As the cost of living continues to rise, these professionals, who are essential to communication in sectors such as healthcare, legal services, and education, are facing increased financial strain.
Laurice Erian, a 50-year-old Arabic interpreter, has been working with the agency since 2002.
She expressed her dismay over the abruptness of the pay cuts, which she said have affected not just the amount she earns, but also the stability of her work.
"How can I be expected to complete a 90-minute job in the city for $87, when I have to pay at least $7 per hour for off-street parking ... plus e-tag charges and petrol," Ms Erian said.
Sarina Phan, a 57-year-old Vietnamese interpreter, has been in the profession for more than two decades.
She noted that agencies have access to "a large pool" of interpreters.
"[Those agencies] have the power to stipulate pay and conditions," she said.
As one of the higher-paid interpreters in the field, she said she was less affected because she had anticipated something like this happening.
She had been able to put backup strategies in place, such as shifting her career focus.
But she acknowledged that those heavily reliant on this work would face significant hardship.
A push for 'fairness'
Another change is the differentiation between government and non-government translation jobs, with the latter paid at a lower rate.
For a 90-minute booking, there's a $15.81 difference in payment, despite the nature of the jobs being largely identical.
There is also discontent about VITS revising its after-hours pay policy, which now offers lower rates for some weekend jobs.
Beyond the loss of income, interpreters say they feel like their contributions are being undervalued.
Sixty-three-year-old Marina Del Greco, a Russian interpreter and translator who has worked with VITS for the last 30 years, echoed these sentiments.
"The type of work we do with police, with criminals in jails, in courts, is so traumatic," Ms Greco said.
"Most of us have PTSD and we have no one to turn to, because interpreters are always the meat in the sandwich."
She said she will lose at least $10,000 a year.
According to Australian Government statistics, 78 per cent of the translation profession is part-time, with women making up 72 per cent of the workforce.
"We are shoved around like there is no tomorrow," Ms Greco said.
"We are very quiet. We never yell. We always agree. But now I think even women have had enough."
'Strike' action is legally risky
Translators and Interpreters Australia, a division of Professionals Australia, is the union for the interpreting industry.
It said it is not organising or endorsing the proposal to refuse all new work from VITS LanguageLoop.
"While we have the authority from the ACCC to collectively represent and negotiate for contractors, any organised boycott risks a breach of competition law," the union said in a statement.
The union maintained that the change to the VITS LanguageLoop fee structure "constitutes a blatant attempt by Court Services Victoria (CSV) to cut costs at the expense of interpreters".
It said union members will "explore avenues for further escalation", should the matter not resolve "within the coming days".
James Mattson, a partner at law firm Bartier Perry, said that while industrial action could prompt the organisation to reconsider, workers could face legal orders to stop unless the action is authorised.
"How the company responds may very well depend on its ability to service their customers and any pressure from customers for services to be delivered," Mr Mattson said.
"If these workers are contractors, then a lot will depend upon the terms of the arrangement with the company.
"As contractors, they will likely be more vulnerable, including to termination, if they take strike action or if they do not agree to the new arrangements.""
#metaglossia_mundus: https://www.abc.net.au/news/2024-08-15/victorian-interpreters-set-to-stop-work/104225350