Scooped by Charles Tiayon | March 18, 2024 7:56 PM
Back in 2016, you may recall, there was an explosion of disparaging commentary about Hillary Clinton’s voice. It was shrill, people said, and too loud; it was harsh and flat and “decidedly grating”; it was the voice of a bossy schoolmarm whose “lecturing” or “hectoring” tone was widely agreed to be a total turn-off. No one, they said, would vote for a president with a voice like that.
As feminists immediately recognized, this criticism wasn’t really about Clinton’s voice. Her voice was just a symbol of everything her critics didn’t like about her, beginning with the simple fact that she was a woman who wanted to be president. The words her detractors used, words like “shrill” and “harsh” and “bossy”, are commonly used to express dislike and disapproval of “uppity” women, women who occupy, or aspire to occupy, positions of authority and power. That these words have little if anything to do with what an individual woman actually sounds like is demonstrated by the fact that they’re contradictory—Clinton’s voice was said to be both “shrill” (high and piercing) and “flat” (low and monotonous)—and are applied to women who sound totally different (Greta Thunberg and the late Margaret Thatcher have both been described as “strident”).
What “grates” is not the voice itself, but the temerity of the woman who raises it in public and expects others to listen to what she says. Calling her “strident” or “shrill” is a way of shaming her for that. Male politicians are not subjected to this voice-shaming: they may be criticized for any number of other things (as Trump was in 2016), but their voices rarely become an issue, because men’s right to a public voice is not in question.
I found myself thinking about this last week while watching another female politician being voice-shamed: Alabama Senator Katie Britt, who responded on behalf of the Republican party to President Biden’s State of the Union address. As you’d expect, she was critical of Biden; as you’d also expect, her performance attracted a lot of criticism from non-Republicans. But much of that criticism focused not on what she had said, but on how she had said it, and especially on her use of something called “fundy baby voice”. Here’s one example, written by Cheryl Rofer for the leftist blog Lawyers, guns and money:
I wasn’t going to watch the Republican response to President Biden’s State of the Union speech. But then social media posts started popping up: “What am I seeing?” “This porn sucks.” “Who is this?” …a United States Senator who presents herself with a dipping blouse neckline showing a gleaming stone-encrusted cross, speaking in a breathy childlike voice from a darkened and apparently unused kitchen…
…That bizarre voice is called “fundy baby voice.” It is cultivated by women in what let’s call the fundy bubble…they use it deliberately to signal that they belong to that bubble and all it implies about women – submissive to men, stays in the home, and certainly no attempt to control the relationship of sex to pregnancy.
…Her emotional presentation was also bizarre, with much too much smiling as she spoke about rape and household finances. But women are supposed to smile – men thought Hillary Clinton and Elizabeth Warren should smile more. …Here was a woman who is willing to smile more, before our very eyes. And also to choke up her voice as if she was about to cry, to show us how very sensitive she is to others’ plights.
The way of speaking referred to here as “fundy baby voice” (“fundy” = [Christian] fundamentalist) is evidently in the process of being what sociolinguists call enregistered. Enregisterment happens when a linguistic phenomenon (usually one that’s been in existence for some time) becomes sufficiently noticeable to be identified, given a name (e.g., “Estuary English”, “uptalk”) and commented on. “Fundy baby voice” doesn’t yet have the same level of popular recognition as, say, uptalk: as last week’s commentary demonstrated, you still have to explain what it is if you’re writing for a general audience. But people who are aware of it can tell you not only what it’s called, but also who uses it (prototypically, white southern evangelical women), what it signifies (feminine submissiveness) and what its most salient characteristics are (it’s high in pitch, has a breathy or whispery quality and is produced with a smile).
The discourse through which a way of speaking is enregistered doesn’t just explain what it is: typically it does two other things as well. One is to construct a stereotype—a generic representation which captures what makes the way of speaking distinctive, but which is simpler and more extreme than any real-life example of its use. When I listened to Katie Britt’s speech, for instance, I realized that the descriptions I’d read had exaggerated some elements of her performance while leaving out others entirely. Her voice was definitely breathy, but not as high-pitched (or as southern) as I’d expected; I was also surprised by how much she used creaky voice (which is not part of the stereotype: it’s similar to vocal fry, associated with speaking at a low pitch, and it doesn’t sound sweet or babyish). The only thing I thought the commentary hadn’t exaggerated was her frequent and incongruous smiling.
The second thing this kind of discourse constructs is an attitude to the way of speaking that’s being enregistered. In the case of fundy baby voice that attitude is strongly negative, as you can tell not only from what is said about it (e.g., Cheryl Rofer’s description of it as “bizarre”), but also from the name it’s been given, which is obviously not neutral—it’s not a label you’d expect evangelical women to use themselves. Discourse about fundy baby voice is largely a matter of people outside what Rofer calls the “fundy bubble” criticizing the speech of women inside it. Which is not, of course, unusual: commentary on uptalk, vocal fry and other alleged “female verbal tics” is also produced by people who don’t (or think they don’t) talk that way to criticize, mock or shame those who do.
There are, to be fair, some exceptions: there’s a more nuanced take, for instance, in a post by the former Southern Baptist and now self-described “rural progressive” Jess Piper. Piper wrote about fundy baby voice well before Katie Britt made it a talking-point, and when she revisited the topic in the wake of Britt’s speech she reminded her readers that it isn’t bizarre to women like her who grew up with it:
I know that voice well…in fact I can’t shake it myself. It was ingrained in every woman I knew from church and every time I speak about it, folks will point out that I sound that way myself. Yes, friends. That’s the point. Be sweet. Obey. Prove it by speaking in muted tones.
Whereas Rofer suggests that evangelical women use fundy baby voice “deliberately”, Piper points out that speaking is a form of habitual behaviour shaped by lessons learned early in life.
Though she no longer identifies with the values the voice symbolizes or the community it signals membership of, she hasn’t been able to eliminate the habits she acquired during her formative years—habits which were modelled, as another ex-fundamentalist, Tia Levings, explains, by “older generations speaking in a soft baby whisper to the younger”, and reinforced through “an invisible reward system of acceptance and attention”. Girls learned, in other words, how to speak so that others would listen to them.
That is not, lest we forget, something that only happens in the “fundy bubble”. We are all products of gendered language socialization, which is practised in some form in all communities. Of course, the details vary: when I was a girl what was modelled and rewarded wasn’t the “soft baby whisper” Tia Levings and Jess Piper learned. But it was just as much a linguistic enactment of my community’s ideas about “proper” femininity. Sounding “ladylike”, for instance, was constantly harped on: girls got far more grief than boys for things like yelling, laughing loudly, using “coarse” language, speaking with a broad local accent and addressing adults without due politeness. And the process continues into adulthood: it’s what’s happening, for instance, in all the modern, “diverse” and “inclusive” workplaces where women are told they sound too “abrasive” and need to “soften their tone”. At least in the “fundy bubble” the speech norms prescribed to women are consistent with the overtly professed belief that women should be sweet and submissive; they’re not enforced by bosses who claim they haven’t got a sexist bone in their body.
Jess Piper thinks we shouldn’t be too quick to judge women like the ones she grew up with, who “used the voice because they were trained to use it”. They aren’t all terrible people: in many cases, she says,
They are kind women who show up for others in sickness and in need. They take care of their families and their neighbors and their church sisters and brothers. They are living the life they feel called to lead—I give them grace and understanding. They are not out to harm others.
Piper does not, however, want to give “grace and understanding” to women like Katie Britt, who have real power and who do want to use it to harm others. “I am jolted awake”, she writes, “when I hear the voice dripping sugar from a mouth that claims to love all while stripping rights from many”. If her point is that these women are hypocrites, then she’ll get no argument from me. But is it right, factually or morally, to make that argument only about fundamentalist women? Isn’t anyone a hypocrite who claims to follow Jesus’s commandment to “love thy neighbour as thyself” while preaching intolerance towards anyone who isn’t white or straight or Christian? Even the hypocrisy of a woman who forges a successful career in national politics while maintaining that women’s place is in the home is not hers alone: presumably women like Britt made their choices with the support of the husbands, fathers and pastors who, as Piper says herself, have more power within the community than they do. If those men are happy for some women to pursue high-powered careers because they think it will advance the community’s political goals, then they are hypocrites too. But by making a specifically female way of speaking into a symbol of the hypocrisy of the religious Right, we are, in effect, scapegoating the women. To be clear, I’m not suggesting we shouldn’t criticize Katie Britt.
But it would surely be possible to hold her to account—for what she said in her speech, for her record of espousing repellent political views, and indeed for her general hypocrisy—without bringing her voice into it. Is the voice-shaming of right-wing Christian women by leftists and feminists not itself hypocritical? How is it different from what feminists objected to so strenuously in 2016, the voice-shaming of Hillary Clinton by conservatives and woman-haters?
Some feminists might reply that the question is obtuse: the two cases are obviously completely different. Whereas Clinton was criticized for flouting patriarchal speech-norms (e.g., that women should be nice, be humble, speak softly and wear a smile), Katie Britt and other fundy baby voiced women are putting on a bravura display of conformity to those norms: criticizing their way of speaking is therefore a feminist act. But while I do understand that logic, there are two reasons why I don’t accept it.
First, it is my belief that when anyone sets out to shame a woman for something they wouldn’t shame a comparable man for, be that her marital status, her sex-life, her weight, the clothes she wears or the sound of her voice, that is, by definition, sexist. It relies on the existence of a double standard which feminists should be criticizing, not exploiting—especially if we’re going to criticize it when it’s used against us.
Which brings me to the second point. Making high-profile women the subject of endless public commentary about how nasty or stupid or babyish they sound is a form of sexist language-policing that has a negative effect on all women. Not just the ones who really are nasty or stupid; not even just the ones who are individually subjected to criticism. What gets said about those women is intended to teach the rest of us a lesson—to make us more hesitant about speaking publicly, more self-conscious about our speech and more cautious about how we express ourselves. If we think that’s a problem, we can’t pick and choose which forms of it to be against. We can’t argue that it’s OK when the targets are reactionary anti-feminist women, but totally out of order when they’re on our side of the political fence.
Any woman who chuckled at the tweet quoted by Cheryl Rofer—“this porn sucks”, a reference to the fact that fundy baby voice has things in common with the more overtly eroticized “sexy baby voice”—should remember that ideas about how women should or shouldn’t speak are many and varied, and available to be used by anyone who feels the urge to put a woman—any woman—in her place. You may not talk like Katie Britt, but you almost certainly talk in some way that someone somewhere could decide to mock or shame you for—because the basic problem, whether you like it or not, is one that you, like every other woman, share with Katie.
None of this is meant to imply that feminists shouldn’t be critical of the norms which define “feminine” speech: what I’m saying is that there’s a difference between critically analysing those norms and criticizing, mocking or shaming women whose speech exemplifies them. I (still) don’t understand why language-shaming is so often seen as acceptable when other kinds of shaming are not. If feminists wouldn’t criticize a female politician by making disparaging comments on her appearance–for instance, saying that Marine Le Pen looks like an old hag and Giorgia Meloni dresses like a bimbo–it’s odd that they don’t seem to have similar scruples about mocking the way women’s voices sound.
But even if you don’t share my reservations about voice-shaming women whose politics you don’t like, in this case it could be seen as a trap. When we ridicule Katie Britt’s performance (as Scarlett Johansson did in her “scary mom” parody on Saturday Night Live) we may actually be doing her a favour, politically speaking, by treating her as a joke rather than a threat. On that point we could learn something from the great Dolly Parton, who has often said that she built her career on being underestimated by people who couldn’t see past the surface trappings of her femininity—the elaborate wigs, the breasts, and indeed the voice (high, sweet and southern accented)—to the inner core of steel. Katie Britt and her ilk may not share Dolly Parton’s values (or her talents), but they are no less ambitious and determined; the threat they represent is real, and we underestimate them at our peril.
I didn’t watch the first series of The Traitors (I’m not generally a fan of reality shows where people compete for money), but the buzz it generated made me curious enough to start watching the second, which the BBC is showing this month. It’s now reached the halfway mark, and I’m still watching. If you’re interested, as I am, in the way people talk–and more specifically in how gender affects group interaction–this show offers plenty of food for thought.
In case anyone’s unfamiliar with the format, here’s a quick rundown. Twenty-two players are gathered in a Scottish castle and sent on “missions” where they work in teams to earn the prize money they’re hoping to win. A small number of them have been secretly assigned the role of Traitors, and if any of them make it to the end they’ll take all the money, leaving the non-Traitors (“Faithfuls”) with nothing. By that point most players will have been eliminated: the Traitors murder one Faithful each night, meeting in secret to choose their victim, and there’s also a daily Round Table meeting at which the whole group banish someone they think is a Traitor (or in the case of the actual Traitors, someone they want the others to think is a Traitor). This process starts with an unstructured group discussion, and ends with each person casting a vote: whoever gets the most votes must leave, revealing their true allegiance (Traitor or Faithful) on their way out.
Verbal communication plays a central role in this game: to succeed, players need both the ability to read people (paying close attention to their actions, demeanour and–crucially–their speech) and the ability to speak persuasively in a group (since decisions require majority agreement). Individuals will vary in how they approach these tasks and how skilfully they perform them, which is partly a question of experience and temperament. But what happens in group talk isn’t just about individuals: it’s also affected by social factors....
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts, with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
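One concrete way this skew shows up is in tokenization: subword vocabularies learned mostly from English text shatter words in underrepresented languages into many more pieces, which raises inference cost and degrades quality. Below is a minimal sketch of how such a “fertility” gap might be measured; the Hugging Face transformers library and the English-centric GPT-2 tokenizer are illustrative assumptions, not tools named in the article.

```python
# Sketch: compare tokenizer "fertility" (tokens per word) across languages.
# Assumes the Hugging Face `transformers` package; GPT-2's BPE vocabulary
# stands in for any English-centric tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

samples = {
    "English": "Good morning, my friend.",
    "Swahili": "Habari za asubuhi, rafiki yangu.",
}

for language, sentence in samples.items():
    tokens = tokenizer.tokenize(sentence)
    words = sentence.split()
    # Higher tokens-per-word suggests the vocabulary fits the language poorly.
    print(f"{language}: {len(tokens)} tokens for {len(words)} words "
          f"({len(tokens) / len(words):.2f} tokens/word)")
```

A tokenizer designed around a target language’s structure, like the Arabic-aware approach Fanar takes below, would bring that ratio down for its speakers.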
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
By breaking with English to write in Kikuyu, he embodied a radical way of thinking about decolonization.
"The Living Legacy of Ngũgĩ wa Thiong’o: Decolonizing the Mind and Languages
Published: June 5, 2025 3.59pm SAST
Christophe Premat, Stockholm University
Ngũgĩ wa Thiong’o died on 28 May 2025 at the age of 87. His name will go down in history not only as that of a great Kenyan novelist, but also as that of a radical thinker of decolonization. Like Valentin-Yves Mudimbe, who passed away a few weeks earlier, he interrogated the very conditions of possibility of African knowledge in a postcolonial context. But where Mudimbe scrutinized the “colonial libraries” to expose their presuppositions, Ngũgĩ sought to transform the very practice of writing: by giving up writing in English in favour of his mother tongue, Kikuyu, he made a powerful political gesture, an act of rupture.
As a specialist in postcolonial theory, I analyze how these critical trajectories have sought to rethink the way knowledge is produced and transmitted in Africa.
For Ngũgĩ, colonial domination does not stop at borders, institutions or laws. It takes root in mental structures, in the way a people represents itself, its values, its past, its future.
Language and power: a geopolitics of the imagination
In Decolonising the Mind (1986), Ngũgĩ wa Thiong’o explains why he decided to abandon English, the language in which he had nonetheless achieved international success. There he makes a claim that has become central to debates on colonial legacies: “The truly powerful are those who know their mother tongue and learn, at the same time, to speak the language of power.” For as long as Africans are forced to think, to dream and to write in a language that was imposed on them, liberation will remain incomplete.
Through language, the colonizers conquered far more than land: they imposed a particular vision of the world. By controlling words, they controlled symbols, narratives, cultural hierarchies. For Ngũgĩ, colonial bilingualism is not an enrichment but a fracture: it separates the language of everyday life (the vernacular) from the language of school, thought, law and literature. He sees in this a structural violence, a “dissociation between mind and body”, which makes a full and complete appropriation of African experience impossible.
A tenacious alienation
Ngũgĩ’s analysis exposes the dead ends of postcolonial language policies, which have often kept European languages as the languages of the state, of knowledge and of prestige, while relegating African languages to the private sphere. It is in this sense that one can speak of diglossia, that is, a situation in which two languages coexist with distinct social functions. This institutionalized diglossia produces a hierarchy of languages that reflects, at a deeper level, a hierarchy of cultures.
Far from calling for a nostalgic return to the past or a retreat into identity, Ngũgĩ wanted to free the potential of African languages: to enable them to express the contemporary world, to invent a modernity that is not a mere carbon copy of European models. Here he took up the historic task that writers in other “minor” languages have set themselves: to do for Kikuyu what Shakespeare did for English, or Tolstoy for Russian.
The aim is not only to write in African languages, but to make these languages vehicles of philosophy, science and institutions: in short, of civilization. The choice to write in Kikuyu was not without consequences. In 1977, Ngũgĩ co-wrote with Ngũgĩ wa Mirii a play, Ngaahika Ndeenda (“I Will Marry When I Want”), performed in Kikuyu in a community theatre in Kamiriithu, near Nairobi.
The play, performed by non-professional actors, virulently denounced social inequalities and the survivals of colonialism in Kenya. Its popular success worried the authorities: a few weeks after the premiere, Ngũgĩ was arrested without trial and imprisoned for nearly a year. On his release, banned from teaching and kept under close surveillance, he chose exile.
This de facto banishment would last more than twenty years. It was during this period of rupture that he began writing, in Kikuyu, his novel Caitaani Mutharaba-Ini (Devil on the Cross), which he drafted in prison on toilet paper.
A way of thinking that is still with us
Ngũgĩ’s work sheds light on how contemporary African societies remain caught in logics of symbolic domination, despite political independence. Globalization has replaced the most brutal forms of imperialism, but it often reproduces the same logics of symbolic domination. In the cultural field, the former colonial powers continue to exert considerable influence through diplomatic, educational and publishing networks.
La Francophonie, for example, presents itself as a space of linguistic cooperation, but it often perpetuates asymmetries in the validation of cultural productions. Presenting the colonial languages as languages of communication that transcend the confines of the vernaculars is an illusion that Ngũgĩ denounced virulently.
Thinkers such as Jean-Godefroy Bidima and Seloua Luste Boulbina have shown the extent to which postcolonial language policies tend to officialize some languages at the expense of others, creating a new form of wooden language, often cut off from popular realities. The Algerian-French philosopher speaks in this connection of a multilingual public space yet to be instituted: a space that is not reduced to opposing colonial and vernacular languages, but that reinvents usages, hybridizations, the ruses of language.
This reflection echoes Ngũgĩ’s position: writing in one’s own language is not enough; one must also produce a language equal to the social and political struggles at hand.
For an active memory of Ngũgĩ wa Thiong’o
At a time when debates on decolonization are multiplying, often emptied of their substance or co-opted by institutional logics, rereading Ngũgĩ wa Thiong’o brings us back to the essential: emancipation requires a change in how we see ourselves, and that change begins in language. True liberation is not only political or economic; it is also cultural, cognitive, symbolic.
By refusing to think in imported categories, by assuming the risk of a radical gesture, Ngũgĩ wa Thiong’o opened the way to an authentically African way of thinking, at once rooted and universal. His work reminds us that it is not enough to speak in Africa’s name; one must also speak from Africa, with its languages, its imaginaries, its struggles. At the moment of his passing, his message is more alive than ever."
https://theconversation.com/lheritage-vivant-de-ngugi-wa-thiongo-decoloniser-lesprit-et-les-langues-258209
#metaglossia_mundus
"New York, the World’s Most Linguistically Diverse Metropolis
In Language City, Ross Perlin, a linguist, takes readers on a tour of the city’s communities with endangered tongues.
June 05, 2025
As co-director of the Endangered Language Alliance, Ross Perlin has managed a variety of projects focused on language documentation, language policy, and public programming around urban linguistic diversity. Photo by Cecil Howell.
Half of all 7,000-plus human languages may disappear over the next century, and—because many have never been recorded—when they’re gone, it will be forever. Ross Perlin, a lecturer in Columbia’s Department of Slavic Languages and co-director of the Endangered Language Alliance, is racing against time to map little-known languages across New York. In his new book, Language City, Perlin follows six speakers of endangered languages deep into their communities, from the streets of Brooklyn and Queens to villages on the other side of the world, to learn how they are maintaining and reviving their languages against overwhelming odds. He explores the languages themselves, from rare sounds to sentence-long words to bits of grammar that encode entirely different worldviews.
Seke is spoken by 700 people from five ancestral villages in Nepal, and a hundred others living in a single Brooklyn apartment building. N’ko is a radical new West African writing system now going global in Harlem and the Bronx. After centuries of colonization and displacement, Lenape, the city’s original indigenous language and the source of the name Manhattan, “the place where we get bows,” has just one native speaker, along with a small band of revivalists. Also profiled in the book are speakers of the indigenous Mexican language Nahuatl, the Central Asian minority language Wakhi, and Yiddish, braided alongside Perlin’s own complicated family legacy. On the 100th anniversary of a notorious anti-immigration law that closed America’s doors for decades, and the 400th anniversary of New York’s colonial founding, Perlin raises the alarm about growing political threats and the onslaught of languages like English and Spanish.
Perlin talks about the book with Columbia News, along with why New York is so linguistically diverse, and his current and upcoming projects.
How did this book come about?
I’m a linguist, writer, and translator, and have been focused on language endangerment and documentation for the last two decades. Initially, I worked in southwest China on a dictionary, a descriptive grammar, and corpus of texts for Trung, a language spoken in a single remote valley on the Burmese border. While finishing up my dissertation, I came back to New York and joined the Endangered Language Alliance (ELA), which had recently been founded to focus on urban linguistic diversity.
ELA’s key insight was a paradox: Even as language loss accelerates everywhere—with as many as half of the world’s 7,000 languages now considered endangered—cities are more linguistically diverse than ever before, thanks to migration, urbanization, and diaspora. This means that speakers of endangered, indigenous, and primarily oral languages are now often right next door, and that fieldwork can happen in a different key, with linguists and communities making common cause as neighbors, for the long term.
For the last 12 years, I’ve been ELA’s co-director, together with Daniel Kaufman. Leading a small, independent nonprofit has been its own adventure. We do everything from in-depth research on little-studied languages (projects on Jewish, Himalayan, and indigenous Latin American languages, for example, with much of the research happening right here in the five boroughs) to public events, classes, collaborations with city agencies, children’s books, and the first-ever language map of New York City. At some point, I knew that all my notes on everything I was learning and seeing around the city and beyond had to become a book. The urgency only grew as a series of unprecedented crises started hitting the immigrant and diaspora communities we work with. The crises are still unfolding.
Can you share some details about the six people you portray in the book, and the endangered languages they speak?
Language City is both the story of New York’s languages—the past, present, and future of the world's most linguistically diverse city—and the story of six specific people doing extraordinary things to keep their embattled mother tongues alive. They come from all over the world, but converge in New York. They represent a variety of strategies for language maintenance and revitalization in the face of tremendous odds. All of them are people I’ve known and worked with for years. Between them, in all their multilingualism, they actually speak about 30 languages, but they are also regular people you might run into on the subway.
Rasmina is one of the youngest speakers of Seke, a language from five villages in Nepal, and now a sixth, vertical village in the middle of Brooklyn. Husniya, a speaker of Wakhi from Tajikistan, has gone through every stage of her life and education in a different language, and can move easily along New York City’s new Silk Road. Originally from Moldova, Boris is a Yiddish novelist, poet, editor, and one-man linguistic infrastructure.
Ibrahima, a language activist from Guinea, champions N’ko, a relatively new writing system created to challenge the dominance of colonial languages in West Africa. Irwin is a Queens chef who writes poetry in, and cooks through, his native Nahuatl, the language of the Aztecs. And Karen is a keeper of Lenape, the original language of the land on which New York was settled.
How did New York end up as such a linguistically diverse city?
Four hundred years ago, this Lenape-speaking archipelago became nominally Dutch under the West India Company. In fact, the territory evolved into an unusual commercial entrepôt at the fulcrum of three continents, where the languages of Native Americans, enslaved Africans, and European refugees and traders were all in the mix. A reported 18 languages were spoken by the first 400-500 inhabitants. This set the template, but the defining immigration waves of the 19th and 20th centuries were of another order, as New York became a global center for business, politics, and culture, as well as the pre-eminent gateway to the U.S. and a bridge between hemispheres. From the half-remembered myth of Ellis Island to the very present reality of 200,000 asylum seekers arriving in the last few years, I argue—in what I think is the first linguistic history of any city—that applying the lens of language helps us understand the city (and all cities) in profoundly new ways. From Frisian and Flemish 400 years ago, to Fujianese and Fulani today, deep linguistic diversity, although always overlooked, has been fundamental.
What did you teach in the spring semester?
The class I’ve taught every year for the last six years—Endangered Languages in the Global City. I designed it around our research at ELA, and it’s completely unique to Columbia. Hundreds of students sign up, testifying to the massive interest Columbia students have in learning about linguistic diversity, though we can admit only a fraction of them. Many of the endangered language speakers and activists featured in Language City have visited the class, and we also take students out to some of the city’s most linguistically diverse neighborhoods, where many students have done strikingly original fieldwork.
The spring semester was something of a departure: I stepped in to teach Language and Society, as well as a course I designed last year called Languages of Asia, which focuses on the continent’s lesser-known language families and linguistic areas. I’ve also been supervising senior theses together with Meredith Landman, who directs the undergraduate program in linguistics. Across the first half of the 20th century, Columbia and Barnard became foundational sites for the study of linguistic diversity and for documenting languages, thanks to Franz Boas, who, in 1902, became the head of Columbia’s anthropology department—the first in the country—and his students. There is hope today for Columbia’s formative role to be revived: The University's local and global position makes this as urgent and achievable as ever. Additionally, the linguistics major was recently (at last) restored, and an ever-growing number of students are doing remarkable work on languages from around the world.
What are you working on now?
ELA keeps on ELA’ing, with a huge range of projects. On any given day, we might be recording and working with speakers of languages originally from Guatemala, Nepal, or Iran. With 700+ languages and counting, our language map (languagemap.nyc) is continually being updated, and we have people here making connections between language and health, education, literacy, technology, translation. This fall, I’ll be in Berlin as a fellow of the American Academy, researching, via an obscure film (in 2,000 languages and counting), how missionary linguists and Bible translators shape global linguistic diversity.
Any summer plans related to language preservation at Columbia or elsewhere?
Ongoing work at ELA: The sky, if not for the budget, would be the limit. There have been recent sessions on a little-known language from Afghanistan, with a woman who is probably the only speaker in New York. Someone whose family still remembers a highly endangered Jewish language variant from Iran recently got in touch. And we met a speaker of a Mongolic language of western China at a restaurant. I also hope to continue soon some scouting of urban linguistic diversity in the Caucasus, past and present, with support from the Harriman Institute."
https://news.columbia.edu/news/new-york-worlds-most-linguistically-diverse-metropolis
#metaglossia_mundus
"Newswise — Creativity often emerges from the interplay of disparate ideas—a phenomenon known as combinational creativity. Traditionally, tools like brainstorming, mind mapping, and analogical thinking have guided this process. Generative Artificial Intelligence (AI) introduces new avenues: large language models (LLMs) offer abstract conceptual blending, while image (T2I) and T2-three-dimensional (3D) models turn text prompts into vivid visuals or spatial forms. Yet despite their growing use, little research has clarified how these tools function across different stages of creativity. Without a clear framework, designers are left guessing which AI tool fits best. Given this uncertainty, in-depth studies are needed to evaluate how various AI dimensions contribute to the creative process.
A research team from Imperial College London, the University of Exeter, and Zhejiang University has tackled this gap. Their new study (DOI: 10.1016/j.daai.2025.100006), published in May 2025 in Design and Artificial Intelligence, investigates how generative AI models with different dimensional outputs support combinational creativity. Through two empirical studies involving expert and student designers, the team compared the performance of LLMs, T2I, and T2-3D models across ideation, visualization, and prototyping tasks. The results provide a practical framework for optimizing human-AI collaboration in real-world creative settings.
To map AI's creative potential, the researchers first asked expert designers to apply each AI type to six combinational tasks—including splicing, fusion, and deformation. LLMs performed best in linguistic-based combinations such as interpolation and replacement but struggled with spatial tasks. In contrast, T2I and T2-3D excelled at visual manipulations, with 3D models especially adept at physical deformation. In a second study, 24 design students used one AI type to complete a chair design challenge. Those using LLMs generated more conceptual ideas during early, divergent phases but lacked visual clarity. T2I models helped externalize these ideas into sketches, while T2-3D tools offered robust support for building and evaluating physical prototypes. The results suggest that each AI type offers unique strengths, and the key lies in aligning the right tool with the right phase of the creative process.
“Understanding how different generative AI models influence creativity allows us to be more intentional in their application,” said Prof. Peter Childs, co-author and design engineering expert at Imperial College London. “Our findings suggest that large language models are better suited to stimulate early-stage ideation, while text-to-image and text-to-3D tools are ideal for visualizing and validating ideas. This study helps developers and designers align AI capabilities with the creative process rather than using them as one-size-fits-all solutions.”
The study's insights are poised to reshape creative workflows across industries. Designers can now match AI tools to specific phases—LLMs for generating diverse concepts, T2I for rapidly visualizing designs, and T2-3D for translating ideas into functional prototypes. For educators and AI developers, the findings provide a blueprint for building more effective, phase-specific design tools. By focusing on each model’s unique problem-solving capabilities, this research elevates the conversation around human–AI collaboration and paves the way for smarter, more adaptive creative ecosystems.
References
DOI: 10.1016/j.daai.2025.100006
Original Source URL: https://doi.org/10.1016/j.daai.2025.100006
Funding information: The first and third authors would like to acknowledge the China Scholarship Council (CSC)."
Released: 5-Jun-2025 7:35 AM EDT
Source Newsroom: Chinese Academy of Sciences
https://www.newswise.com/articles/designing-with-dimensions-rethinking-creativity-through-generative-ai
#metaglossia_mundus
"AI reveals hidden language patterns and likely authorship in the Bible by Duke University
edited by Sadie Harley, reviewed by Robert Egan
AI is transforming every industry, from medicine to film to finance. So, why not use it to study one of the world's most revered ancient texts, the Bible?
An international team of researchers, including Shira Faigenbaum-Golovin, assistant research professor of Mathematics at Duke University, combined artificial intelligence, statistical modeling and linguistic analysis to address one of the most enduring questions in biblical studies: the identification of its authors.
The study is published in the journal PLOS One.
By analyzing subtle variations in word usage across texts, the team was able to distinguish between three distinct scribal traditions (writing styles) spanning the first nine books of the Hebrew Bible, known as the Enneateuch.
Using the same AI-based statistical model, the team was then able to determine the most likely authorship of other Bible chapters. Even better, the model also explained how it reached its conclusions.
But how did the mathematician get here?
In 2010, Faigenbaum-Golovin began collaborating with Israel Finkelstein, head of the School of Archaeology and Maritime Cultures at the University of Haifa, using mathematical and statistical tools to determine the authorship of lettering found on pottery fragments from 600 B.C. by comparing the style and shape of the letters inscribed on each fragment.
Their discoveries were featured on the front page of The New York Times.
"We concluded that the findings in those inscriptions could offer valuable clues for dating texts from the Old Testament," Faigenbaum-Golovin said. "That's when we started putting together our current team, who could help us analyze these biblical texts."
The multidisciplinary undertaking was made up of two parts. First, Faigenbaum-Golovin and Finkelstein's team—Alon Kipnis (Reichman University), Axel Bühler (Protestant Faculty of Theology of Paris), Eli Piasetzky (Tel Aviv University) and Thomas Römer (Collège de France)—was made up of archaeologists, biblical scholars, physicists, mathematicians and computer scientists.
The team used a novel AI-based statistical model to analyze language patterns in three major sections of the Bible: Deuteronomy, the so-called Deuteronomistic History from Joshua to Kings, and the priestly writings in the Torah.
Results showed Deuteronomy and the historical books were more similar to each other than to the priestly texts, which is already the consensus among biblical scholars.
"We found that each group of authors has a different style—surprisingly, even regarding simple and common words such as 'no,' 'which,' or 'king.' Our method accurately identifies these differences," said Römer.
To test the model, the team selected 50 chapters from the first nine books of the Bible, each of which has already been allocated by biblical scholars to one of the writing styles mentioned above.
"The model compared the chapters and proposed a quantitative formula for allocating each chapter to one of the three writing styles," said Faigenbaum-Golovin.
In the second part of the study, the team applied their model to chapters of the Bible whose authorship was more hotly debated. By comparing these chapters to each of the three writing styles, the model was able to determine which group of authors was more likely to have written them. Even better: the model also explained why it was making these calls.
"One of the main advantages of the method is its ability to explain the results of the analysis—that is, to specify the words or phrases that led to the allocation of a given chapter to a particular writing style," said Kipnis.
Since the text in the Bible has been edited and re-edited many times, the team faced big challenges finding segments that retained their original wording and language.
Once found, these biblical texts were often very short—sometimes just a few verses—which made most standard statistical methods and traditional machine learning unsuitable for their analysis. They had to develop a custom approach that could handle such limited data.
Limited data often brings fears of inaccuracy. "We spent a lot of time convincing ourselves that the results we were getting weren't just garbage," said Faigenbaum-Golovin. "We had to be absolutely sure of the statistical significance."
To circumvent the issue, instead of using traditional machine learning, which requires lots of training data, the researchers used a simpler, more direct method. They compared sentence patterns and how often certain words or word roots (lemmas) appeared in different texts, to see if they were likely written by the same group of authors.
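The study’s full statistical machinery isn’t reproduced here, but the core idea, comparing relative frequencies of common words across reference corpora, can be sketched in a few lines of Python. Everything below (the function-word list, the toy texts) is a placeholder for illustration, not the study’s actual data or model:

```python
# Sketch of frequency-based authorship attribution (not the authors' exact model):
# represent each text by the relative frequencies of common function words,
# then assign a disputed chapter to the most similar reference corpus.
from collections import Counter
import math

FUNCTION_WORDS = ["no", "which", "king", "and", "the"]  # placeholder word list

def frequency_vector(text):
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Placeholder reference corpora for the three scribal traditions.
corpora = {
    "Deuteronomy": "hear the law which the king and the people keep",
    "Deuteronomistic History": "and the king went up and no man stood",
    "Priestly": "and the offering which the priest shall bring",
}
disputed_chapter = "and no king which the people saw"

scores = {
    name: cosine_similarity(frequency_vector(disputed_chapter),
                            frequency_vector(reference))
    for name, reference in corpora.items()
}
best = max(scores, key=scores.get)
print(best, scores)  # the highest-scoring tradition is the proposed attribution
```

As the article stresses, the appeal of this family of methods is interpretability: the per-word frequency differences that drive the score can be read off directly.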
A surprising find? The team discovered that although the two sections of the Ark Narrative in the Books of Samuel address the same theme and are sometimes regarded as parts of a single narrative, the text in 1 Samuel does not align with any of the three corpora, whereas the chapter in 2 Samuel shows affinity with the Deuteronomistic History (Joshua to Kings).
Looking forward, Faigenbaum-Golovin said the same technique can be used for other historical documents. "If you're looking at document fragments to find out if they were written by Abraham Lincoln, for example, this method can help determine if they are real or just a forgery."
"The study introduces a new paradigm for analyzing ancient texts," summarized Finkelstein.
Faigenbaum-Golovin and her team are now looking at using the same methodology to unearth new discoveries about other ancient texts, like the Dead Sea Scrolls. She emphasized how much she enjoyed the long-term cross-disciplinary partnership.
"It's such a unique collaboration between science and the humanities," she said. "It's a surprising symbiosis, and I'm lucky to work with people who use innovative research to push boundaries."
More information: Shira Faigenbaum-Golovin et al, Critical biblical studies via word frequency analysis: Unveiling text authorship, PLOS One (2025). DOI: 10.1371/journal.pone.0322905
Journal information: PLoS ONE"
https://phys.org/news/2025-06-ai-reveals-hidden-language-patterns.html
#metaglossia_mundus
For 25 years, the Goldschmidt programme has trained nearly 250 translators, helping to build an international network serving literary exchange between the French-speaking and German-speaking worlds...
"Goldschmidt: 25 years in the service of young literary translators
On Thursday 12 June 2025, at the Goethe-Institut in Paris, the Georges-Arthur Goldschmidt programme celebrates its quarter-century. Created in 2000 to support young literary translators, the scheme, now Franco-German-Swiss and, for the first time this year, Austrian, marks the milestone with an evening devoted to translation, its contemporary challenges and its leading figures.
Published 05/06/2025 at 1:20 p.m. by Dépêche
Supported by the OFAJ (Franco-German Youth Office), France Livre, the Frankfurt Book Fair, Pro Helvetia and the Austrian Ministry for European and International Affairs, the event is intended to be at once professional, reflective and convivial.
The programme opens at 5:30 p.m. with a round table bringing together three translators from different generations of the programme: Andreas Jandl, Régis Quatresous and Lionel Felchlin. Moderated by Hannah Sandvoss (France Livre), the discussion will cover professional integration, collective translation practices and the growing impact of artificial intelligence on the profession.
At 6:45 p.m., the German author Ulrich Effenhauser will be in conversation with Carole Fily, his French translator, who trained in the programme in 2011. Together they will revisit their collaboration on the novel Le Fantôme de Mexico, a historical thriller published in February 2025 by Actes Sud, whose plot interweaves espionage, the Cold War and the Chernobyl disaster.
At 7:45 p.m., Justine Coquel and Valentin Decoppet will lead an unusual "craduction" workshop: a playful introduction to translating without any prior knowledge of the source language, combining collective invention and creative writing. Open to all, the exercise probes the boundaries between translating, interpreting and writing.
The evening will close from 8:30 p.m. with a festive gathering, an opportunity for informal exchanges between professionals, former participants and institutional partners.
For 25 years, the Goldschmidt programme has trained nearly 250 translators, helping to build an international network serving literary exchange between the French-speaking and German-speaking worlds.
Illustration credits: Georges-Arthur Goldschmidt programme
By Dépêche
Contact: depeche@actualitte.com
https://actualitte.com/article/124252/rencontres-dedicaces/goldschmidt-25-ans-au-service-des-jeunes-traducteurs-litteraires
#metaglossia_mundus
"[Interview] Marine Borel, récipiendaire du grand prix de linguistique et de philologie 2024 de l’Académie royale de langue et de littérature française de Belgique
Publié le 4/06/2025
Nous sommes allés à la rencontre de Marine Borel à l’occasion de la remise du grand prix de linguistique et de philologie 2024 de l’Académie royale de langue et de littérature française de Belgique en mars 2025 pour son livre Les formes verbales surcomposées en français.
Tout d’abord, toutes nos félicitations ! Pouvez-vous nous en dire plus sur votre travail ?
Marine Borel : Cet ouvrage est la version retravaillée de ma thèse de doctorat, qui portait, comme l’indique son titre, sur les formes verbales dites « surcomposées » en français. Il s’agit des formes verbales qui comportent un élément auxiliaire de plus que les formes composées – comme par exemple j’ai eu fait ou j’avais eu fait. Dans mon livre, je montre qu’il existe en fait deux paradigmes distincts de formes surcomposées. Pour illustrer cela, prenons l’exemple du passé surcomposé. Il existe d’une part un passé surcomposé dit « standard », qui est attesté dans l’ensemble du monde francophone, qui se trouve le plus souvent dans des subordonnées temporelles et qui peut être considéré comme un homologue du passé antérieur. Par exemple : « Lorsqu’elle a eu mangé, elle est allée se coucher. » Et il existe d’autre part un passé surcomposé dit « régional », qui n’est utilisé que dans les régions où on parlait autrefois des dialectes occitans ou francoprovençaux et qui a une valeur qu’en linguistique on appelle « expérientielle » – ce qui signifie que ces formes régionales peuvent toujours être glosées par la formule ‘il est déjà arrivé que’. Par exemple, l’énoncé (authentique) « J’ai eu mangé de la marmotte à un anniversaire » signifie ‘il m’est déjà arrivé de manger de la marmotte à un anniversaire’.
Ces formes « régionales » semblent plutôt exotiques à des oreilles lorraines. Mais elles sont, comme vous le dites, bien attestées dans le Sud de la France et dans le domaine francoprovençal, dont fait partie la Suisse romande, d’où vous êtes originaire.
MB : Oui, même si ces formes étonnent celles et ceux qui ne les emploient pas, la région dans laquelle elles sont utilisées est loin d’être marginale : le domaine d’oc couvre tout le tiers Sud de la France, dans lequel se trouvent notamment les villes de Marseille, de Toulouse, de Nice, de Montpellier, de Bordeaux, de Perpignan et de Limoges ; quant au domaine francoprovençal, il correspond à la zone, à l’Est, dans laquelle se trouvent Lyon, Saint-Étienne et Grenoble et, bien sûr, la Suisse romande – où elles sont particulièrement vivaces. En Romandie, on les entend tous les jours, non seulement avec l’auxiliaire « avoir » mais aussi avec l’auxiliaire « être ». Et pour celles et ceux qui ne les utilisent pas (comme les Lorrain·e·s), les formes avec « être » semblent encore plus exotiques que celles avec « avoir », puisqu’elles se construisent sur le modèle de je suis eu parti·e – et non sur celui de j’ai été parti·e comme c’est le cas des formes dites standard. Voici deux exemples authentiques recueillis en Suisse romande : « Il est eu venu boire l’apéro chez moi » et « Tu es eu allée à Nancy pour le voir ? ». Ce dernier exemple a été produit par une amie qui me demandait s’il m’était déjà arrivé d’aller à Nancy pour rencontrer celui qui était à l’époque mon directeur de thèse, Denis Apothéloz.
Pouvez-vous nous en dire plus votre parcours et sur ce qui vous a amenée à Nancy ?
MB : Je suis née en 1987, j’ai grandi en Suisse romande, dans le canton de Neuchâtel, puis j’ai fait des études de langue et de littérature françaises à l’Université de Fribourg. Après mes études, j’ai décidé de faire une thèse de doctorat en linguistique française et, alors que je réfléchissais à un sujet de recherches, je suis tombée sur la thèse d’un linguiste suisse, Maurice Cornu, publiée en 1953, qui portait précisément sur les formes verbales surcomposées. J’ai lu cette thèse d’une seule traite et j’ai tout de suite su que je voulais consacrer mes recherches à ce sujet. J’ai aussitôt lu tout ce qui était à disposition sur cette question. C’est ainsi que j’ai découvert les travaux de Denis Apothéloz, également suisse d’origine, professeur de linguistique (aujourd’hui émérite) à l’Université de Lorraine et membre de l’ATILF, des travaux qui ont marqué un pas décisif dans la compréhension des formes surcomposées régionales. Je l’ai donc contacté pour lui demander s’il acceptait de co-diriger ma thèse dans le cadre d’une cotutelle de thèse entre l’Université de Fribourg et l’Université de Lorraine. Et il s’est tout de suite montré enthousiaste.
C’est donc dans ce cadre que vous êtes venue passer une année à Nancy et que vous avez été rattachée comme doctorante à l’ATILF ?
MB : Oui, j’ai eu la chance d’obtenir deux soutiens financiers qui m’ont permis de venir régulièrement à Nancy et d’y passer une année : la bourse de mobilité pour doctorant·e·s du Fonds national suisse de la recherche scientifique (FNS) et une contribution financière à la cotutelle de thèse de la conférence des recteurs des universités suisses (CRUS). Lors de mon séjour à Nancy, en 2016, j’ai été rattachée au laboratoire ATILF, où j’ai été extrêmement bien accueillie. Je garde notamment de très bons souvenirs des réunions de l’équipe de recherche « Discours, langue et cognition » à laquelle j’étais intégrée et des conférences qui étaient régulièrement organisées. J’ai beaucoup appris durant cette année et, surtout, j’ai pu avancer dans mes recherches dans un cadre idéal. Depuis, je suis bien sûr restée en contact non seulement avec le professeur Denis Apothéloz mais également avec les anciens collègues de l’ATILF – je me réjouis d’ailleurs beaucoup de ma participation à deux journées d’études organisées en novembre prochain par le professeur Yvon Keromnes, qui porteront précisément sur les formes verbales surcomposées.
Que faites-vous aujourd’hui ? Où travaillez-vous ?
MB : Aujourd’hui, j’enseigne la linguistique française, en tant que lectrice à l’Université de Zurich et en tant que chargée de cours dans les universités de Fribourg et de Bâle. Je travaille également comme responsable de projets au Forum Helveticum, une association qui s’engage pour les échanges et la cohésion entre les quatre régions linguistiques de la Suisse.
Où pouvons-nous trouver votre ouvrage ?
MB : Mon livre est disponible en libre accès sur le site de la maison d’édition Peter Lang | 2024 | Monographies | 622 pages"
https://factuel.univ-lorraine.fr/node/30288
#metaglossia_mundus
"The 3rd Spring School in Translation Studies Conference: "Translation & Imagination" The third Lisbon Spring School in Translation Studies Conference will be held in early June at the Universidade Católica Portuguesa in Lisbon, Portugal. This year's Spring school, with the title 'Translation and the Imagination', will explore the multiple ways in which imagination weaves itself into the interpretation and the textual fabric of different forms of translation, from age-old forms to the new challenges posed by intermedial and transmedial texts, and computer-assisted translation. It will also look into the part imagination plays in translation and how this is affected by the emergence of AI translation.
Dr. Delphine Grass from the School of Global Affairs will deliver a paper entitled 'A Particular Kind of Relation: Translation, Praxis and the Social Imagination’. Her paper explores creative-critical translation practices which have engaged a critique of the politics of translation by deviating, slowing down or mattering translational relations in visual and textural terms to side-step oppressive rules of international and ecological engagement. It will also delve into the relationship between creativity and translation in the era of GenAI from the critical perspective of creative-critical translation practices.
Alongside its new MA Global Leadership programme, the School of Global Affairs also offers an established MA Translation Studies degree which allows students to develop their linguistic and technical skills to move into a wide range of careers in translation." https://www.lancaster.ac.uk/global-affairs/news/lancaster-researcher-to-keynote-international-translation-studies-conference #metaglossia_mundus
Translation must breed respectful relationships
Translator's lament: When words bridge worlds, but respect falters
Dr Lucy Anning
Jun 04, 2025, 09:57
The promise echoed in diplomatic halls and joint venture signings embedded in a partnership forged in "mutual respect" (互相尊重) and "win-win cooperation" (共同利益) seems to have lost its true definition.
For Chinese-to-English translators, particularly those in Ghana, these are not mere slogans but the very foundation upon which our professional existence is supposed to rest.
Translators are the linguistic bridges, the cultural interpreters and the vital conduits facilitating the vast economic engagement between China and Africa.
Yet, for too many of us, the reality behind the boardroom doors and factory floors is a soul-eroding indignity that starkly contradicts the official rhetoric.
This is the hidden ordeal of the translator, caught between nations and battling for basic human dignity.
The challenges within the ecosystem of Chinese companies operating in Ghana are not merely professional hurdles; they are profound assaults on personhood.
The core wound, as articulated with raw pain by one translator, is the pervasive sense of being viewed as "lower than human beings."
This dehumanisation manifests in chillingly tangible ways.
Many translators have reported salaries that are an insult to their skilled labour, a sheer pittance, "nothing to write home about."
Beyond the financial disrespect lies a spectrum of mistreatment consisting of routine bullying, verbal abuse that stings in any language, blatant infractions of Ghanaian labour laws and even alleged physical intimidation.
The workplace becomes a pressure cooker of injustice for these staff.
Rule of law, FDI initiatives?
The dissonance deepens when considering the broader context. These companies operate in the Ghanaian economy, extracting significant profit, often amidst accusations of tax avoidance and aggressive acquisition of local assets.
As one victimised voice starkly put it, "They are buying lands and properties like crazy. In tandem, for Zhongguo (China), no foreigner has these kinds of privileges there."
This economic imbalance fuels the bitter question: Why must Ghanaians (and other African economies), particularly those facilitating this very enterprise, endure such degradation within their own homeland?
The principle seems grotesquely one-sided, in that "Chinese can mistreat African people within their jurisdiction, thus home country. However, Africans can't take this abuse within their jurisdictive home countries... and allow these foreigners to go scot-free."
Corrupt leaders
The litany of grievances extends to unethical practices encapsulated in bribes, illegal dealings and opaque connections, creating an environment where workers, translators included, feel treated not as partners or even employees, but as expendable machines.
This is not just about poor working conditions; it’s a fundamental betrayal of the translator’s purpose. Our mastery of Putonghua is not merely a technical skill to facilitate transactions; it is the power to communicate as equals.
"It is also to be able to address them, speak to them like equal men, for them to hear us in a language that they understand," the source implores.
Yet, the language of respect we translate for official pronouncements rarely permeates daily interactions.
The grand narrative of mutual benefit rings hollow when met with daily doses of contempt.
Mitigating mechanisms
Mitigating this deep-seated crisis demands concerted, multi-level action.
Firstly, translators must organise. Collective voices are harder to ignore.
Forming professional associations or leveraging existing unions provides a platform to share experiences, document abuses systematically and advocate collectively for fair contracts, decent wages and dignified treatment.
Secondly, legal empowerment is non-negotiable. Translators need accessible legal resources and unwavering support to challenge labour law violations, unsafe conditions and illegal practices.
Reporting channels must be strengthened and protected.
Thirdly, Ghanaian (and other African countries) authorities must enforce their sovereignty.
This means rigorous application and monitoring of labour laws, tax regulations and investment codes pertaining to foreign businesses.
Permissiveness towards exploitation because of investment is a Faustian bargain that erodes national dignity and long-term stability.
Diplomatic channels must also be utilised to consistently raise these specific worker grievances, moving beyond abstract cooperation principles.
Finally, translators must wield their unique power, using their linguistic skills not just to transmit, but to assert.
Clearly articulating boundaries, expectations of respect and legal rights in the language understood is a form of resistance and empowerment.
Conclusion
From whispered grievances to a roar for respect: the frustrations murmured in translation booths and on factory floors are coalescing into an undeniable roar.
The ordeal of the Chinese-to-English translator in Ghana is a microcosm of a partnership dangerously out of balance.
It exposes the chasm between the polished language of diplomacy and the raw language of daily indignity.
Ghanaian translators are not mere linguistic tools; they are Ghanaian professionals, deserving fundamental respect on their own soil.
To ignore their suffering, to tolerate the blatant hypocrisy where "mutual respect" is preached but subjugation is practised, is to undermine the very foundation of the Ghana-China relationship itself.
The path forward isn't a revolution born of desperation, but a resolute demand for the respect already promised.
It’s time for Ghana to ensure its laws protect its people, for companies to uphold their professed values and for translators to cease being the unheard interpreters of their own oppression. Enough is indeed enough.
The bridges we build must be walked upon with dignity by all, or they will crumble under the weight of hypocrisy.
The next words translated must herald genuine change, or they will be the last before the silence of defiance falls.
The writer is with the DevAfrica Institute,
Accra, Ghana.
E-mail: Lucyann@devafricainstitute.org"
https://www.graphic.com.gh/features/opinion/ghana-news-translators-lament-when-words-bridge-worlds-but-respect-falters.html
#metaglossia_mundus
"British translator Karen Leeder and German writer Durs Grünbein win Griffin Poetry Prize
Nicole Thompson, The Canadian Press
TORONTO — As German writer Durs Grünbein accepted the Griffin Poetry Prize at a ceremony in Toronto on Wednesday night, he pointed to the Ukrainian flag ribbon pinned to his lapel and launched into an explanation.
"It's not only for Ukraine. This is now a war on Europe. If we do not stand together, one day, this continent will vanish," he said of Russia's invasion of Ukraine.
"I hope poetry is a support to ideas of democracy, humanity ... That's why I'm writing poetry."
The Berlin-based Grünbein shares the $130,000 award with British scholar Karen Leeder, a professor of German language and literature at Oxford University, who translated "Psyche Running: Selected Poems, 2005-2022" into English.
The Griffin judges praised the collection as being "universal, lyrical, philosophical."
Leeder, who as the translator receives 60 per cent of the prize money, said she was shocked to hear her name called. In fact, she hasn't thought of what she'll do with the winnings.
"We were onstage with such amazing poets, all of them," she said of the other finalists.
"I was really struck by how each poet or translator-poet read a different kind of extraordinary poetry at the highest level. They were all so different."
For his part, Grünbein said he was humbled by the experience — and didn't know how much money came with the prize.
This is the second time he has been up for the award.
"Ashes for Breakfast: Selected Poems," translated by Michael Hoffmann, was a finalist for the International Griffin Poetry Prize in 2006, before the Canadian and global prizes were combined.
Margaret Atwood also received the $25,000 Lifetime Recognition Award at Wednesday night's ceremony.
In his acceptance speech, Grünbein said he hadn't realized she was such a skilled poet.
That changed when he read two of her poems on Wednesday night, he said.
"I realized immediately that these lines were so concentrated, focused on — in that case — very important historical, political things," he said.
Earlier in the evening, Atwood received a standing ovation as she walked onstage.
Though she’s best known for novels including “The Handmaid’s Tale,” Atwood got her start as a poet. She’s published more than a dozen collections since the 1960s.
She’s also a founding trustee of the Griffin Poetry Prize, which was first handed out in 2000.
“We did have trustee meetings and they consisted of (prize founder Scott Griffin) telling us what to do, and us making suggestions that were rejected,” Atwood recounted.
Whitehorse poet Dawn Macdonald also read from her debut collection “Northerny," which won the $10,000 Canadian First Book Prize.
“Margaret Atwood is a bit of a tough act to follow,” she quipped before reciting a poem.
The other finalists for the Griffin were "The Great Zoo," translated by Aaron Coleman from Nicolas Guillen's original Spanish; “Kiss the Eyes of Peace," translated by Brian Henry from the original Slovenian by Tomaž Šalamun; Carl Phillips for "Scattered Snows, to the North”; and Diane Seuss for "Modern Poetry."
Each of the finalists receives $10,000.
This report by The Canadian Press was first published June 4, 2025.
Nicole Thompson, The Canadian Press"
https://www.westernwheel.ca/lifestyle/british-translator-karen-leeder-and-german-writer-durs-grunbein-win-griffin-poetry-prize-10756993
#metaglossia_mundus
Left and right brains hear speech differently, yet how this divide forms was unclear − until mouse studies showed each hemisphere runs on its own developmental clock.
Published: June 4, 2025 2.45pm SAST
Hysell V. Oviedo, Washington University in St. Louis
Some of the most complex cognitive functions are possible because different sides of your brain control them. Chief among them is speech perception, the ability to interpret language. In people, the speech perception process is typically dominated by the left hemisphere.
Your brain breaks apart fleeting streams of acoustic information into parallel channels – linguistic, emotional and musical – and acts as a biological multicore processor. Although scientists have recognized this division of cognitive labor for over 160 years, the mechanisms underpinning it remain poorly understood.
Researchers know that distinct subgroups of neurons must be tuned to different frequencies and timing of sound. In recent decades, studies on animal models, especially in rodents, have confirmed that splitting sound processing across the brain is not uniquely human, opening the door to more closely dissecting how this occurs.
Yet a central puzzle persists: What makes near-identical regions in opposite hemispheres of the brain process different types of information?
Answering that question promises broader insight into how experience sculpts neural circuits during critical periods of early development, and why that process is disrupted in neurodevelopmental disorders.
Timing is everything
Sensory processing of sounds begins in the cochlea, a part of the inner ear where sound frequencies are converted into electricity and forwarded to the auditory cortex of the brain. Researchers believe that the division of labor across brain hemispheres required to recognize sound patterns begins in this region.
For more than a decade, my work as a neuroscientist has focused on the auditory cortex. My lab has shown that mice process sound differently in the left and right hemispheres of their brains, and we have worked to tease apart the underlying circuitry.
For example, we’ve found the left side of the brain has more focused, specialized connections that may help detect key features of speech, such as distinguishing one word from another. Meanwhile, the right side is more broadly connected, suited for processing melodies and the intonation of speech.
Sound information moves through the cochlea to the brain. (Jonathan E. Peelle, CC BY-SA)
We tackled the question of how these left-right differences in hearing develop in our latest work, and our results underscore the adage that timing is everything.
We tracked how neural circuits in the left and right auditory cortex develop from early life to adulthood. To do this, we recorded electrical signals in mouse brains to observe how the auditory cortex matures and to see how sound experiences shape its structure.
Surprisingly, we found that the right hemisphere consistently outpaced the left in development, showing more rapid growth and refinement. This suggests there are critical windows of development – brief periods when the brain is especially adaptive and sensitive to environmental sound – specific to each hemisphere that occur at different times.
To test the consequences of this asynchrony, we exposed young mice to specific tones during these sensitive periods. In adulthood, we found that where sound is processed in their brains was permanently skewed. Animals that heard tones during the right hemisphere’s earlier critical window had an overrepresentation of those frequencies mapped in the right auditory cortex.
Adding yet another layer of complexity, we found that these critical windows vary by sex. The right hemisphere critical window opens earlier in female mice, and the left hemisphere window opens just days later. In contrast, male mice had a very sensitive right hemisphere critical window, but no detectable window on the left. This points to the elusive role sex may play in brain plasticity.
Our findings provide a new way to understand how different hemispheres of the brain process sound and why this might vary for different people. They also provide evidence that parallel areas of the brain are not interchangeable: the brain can encode the same sound in radically different ways, depending on when it occurs and which hemisphere is primed to receive it.
Speech and neurodevelopment
The division of labor between brain hemispheres is a hallmark of many human cognitive functions, especially language. This is often disrupted in neuropsychiatric conditions such as autism and schizophrenia.
Reduced language information encoding in the left hemisphere is a strong indication of auditory hallucinations in schizophrenia. And a shift from left- to right-hemisphere language processing is characteristic of autism, where language development is often impaired.
Children with certain neurodevelopmental conditions may have trouble processing speech. (Towfiqu Ahamed/iStock via Getty Images Plus)
Strikingly, the right hemisphere of people with autism seems to respond earlier to sound than the left hemisphere, echoing the accelerated right-side maturation we saw in our study on mice. Our findings suggest that this early dominance of the right hemisphere in encoding sound information might amplify its control of auditory processing, deepening the imbalance between hemispheres.
These insights deepen our understanding of how language-related areas in the brain typically develop and can help scientists design earlier and more targeted treatments to support early speech, especially for children with neurodevelopmental language disorders." https://theconversation.com/your-left-and-right-brain-hear-language-differently-a-neuroscientist-explains-how-257436 #metaglossia_mundus
"...In ancient India, the most powerful god was known as “sky father,” or in the Sanskrit language, Dyaus pita. Sound it out. Can you see where this is going? In Greece, his equivalent was Zeus pater; in Rome, Jupiter. English speakers have always been used to tracing the etymologies of their words back to the classical languages of Europe, but the suggestion, in the late 18th century, that there were also clear and consistent features in common with languages spoken much farther to the east was an astonishing and exhilarating one.
If we think of languages as belonging to families, this is like finding out that an acquaintance is really a distant cousin: It points to a shared ancestor, a unity – and a divergence – somewhere in the past. What would it mean for those cultural origin stories – for Who We Are – to speak not of European but of Indo-European? “Of what ancient and fantastic encounters were these the fading echoes?” asks Laura Spinney in her latest book, “Proto: How One Ancient Language Went Global.”
There are about 7,000 languages spoken in the world today; they can be divided into about 140 families. Nevertheless, the languages most of us speak belong to just five: Indo-European, Sino-Tibetan, Niger-Congo, Afro-Asiatic and Austronesian. And of these, Indo-European and Sino-Tibetan (which includes Mandarin) are the biggest. As Spinney notes, “Almost every second person on Earth speaks Indo-European.”
Such a large family is, naturally, a diverse one. Among the Indo-European languages are English, German, Spanish, Italian, Russian, Hindi-Urdu and Persian. At the head of the family tree is the postulated language we call Proto-Indo-European, or PIE for short. It was spoken – but not written down, as it emerged in the days before writing – some five or six millennia ago, somewhere around the Black Sea. (Exactly where is an open question. We’ll come to that later.)
Although we have no documentation of what PIE sounded like, it is possible to triangulate backward using the languages we do know. Over time, the way we speak – the set of sounds we use – gradually changes. This is not a chaotic, willy-nilly drift; rather, the shifts obey a set of general rules. To give you an example, compare the group of words where Latin has a p sound but the English equivalent has f: pater/father; pisces/fish, and so on. We can posit, then, that at an early point in their development, Germanic languages went down an f path from their PIE root, while other languages stayed with p.
Using these sound-change rules to identify ancestral similarities among languages, we can reconstruct a corpus of about 1,600 PIE words, including (with apologies for the somewhat rebarbative notation) family relations like *méh₂tēr and *bʰréh₂tēr and numbers like *duó-, *tréy. If you were living on the eastern fringes of Europe some five millennia ago, you might stay around your *domo and drink some *h₂melǵ or maybe ride out on your *éḱwos to lie under the night sky and fix your eye on a *h₂ster.
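As a toy illustration of how regular these correspondences are (a sketch of the reasoning, not a reconstruction algorithm), one can check the classic voiceless-stop shift on a few familiar Latin/English cognate pairs in Python; the word pairs and the correspondence table are standard textbook examples:

# Classic Grimm's-law-style correspondences: PIE/Latin voiceless stops
# surface as fricatives in Germanic languages such as English.
GRIMM = {"p": "f", "t": "th", "c": "h"}  # Latin spelling: c = /k/

PAIRS = [("pater", "father"), ("pisces", "fish"),
         ("tres", "three"), ("cornu", "horn")]

for latin, english in PAIRS:
    expected = GRIMM.get(latin[0])
    status = "consistent" if expected and english.startswith(expected) else "inconsistent"
    print(f"{latin} ~ {english}: initial '{latin[0]}' -> expected '{expected}' ({status})")

Each pair comes out consistent; it is the systematic accumulation of such matches, across many words and many languages, that lets linguists triangulate back to PIE forms.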
So this is the original material. But how, to use Spinney’s subtitle, did this ancient language go global? Most of “Proto” is spent picking through the genetic and archaeological evidence to explain how PIE, or its descendants, came to be spoken from Galway to Kolkata.
Whatever its contested origins, by the third millennium B.C., PIE was being spoken in the Pontic steppe, the region of grasslands to the north of the Black Sea, by a culture known as the Yamnaya, who were nomadic and had ox-drawn wagons as well as, probably, domesticated horses. It was the Yamnaya – whose migration routes were studded with large burial mounds, still discernible, that they built into the landscape – who carried the language deep into Asia. Later branches of the tree, each with its own history, include the Baltic and Slavic languages, not to mention Tocharian, Albanian, Armenian, Greek, Celtic, Germanic and Indo-Iranian.
“Proto,” it has to be said, is a dense read. Spinney’s approach is a thorough one, so that the chapters tack back and forth in time as we follow each of these branches in turn. It is easy to lose sight of the timeline while concentrating on the differences between Hittites, Hattians and Hurrians, between Lydians and Luwians. Spinney’s background is as a science journalist, and it shows in the way she has assimilated a vast amount of up-to-date scholarly information and presented it concisely and accurately. Nevertheless, this is still a knotty story.
Spinney leavens the history lesson with sections dealing with the present: pen portraits of academics she has interviewed in the archaeological trenches of Ukraine and Russia as well as on the more rarefied battlefields of international academic conferences. As an observer rather than a participant, Spinney is comfortable laying out different sides of some of the ongoing debates, the biggest of all being where exactly the ancestral language was first spoken. Did PIE arise in Anatolia (roughly modern-day Turkey) before spreading first west into Europe along with farming and then east again with the Yamnaya? Or did it originate in the steppe, borne west by marauding nomads who replaced the populations they drove out? Or – the latest linguistic hypothesis – were these Anatolian and steppe languages both offspring of an even older ancestor spoken in the northern Caucasus (now southern Russia) some 6,000 years ago?
“Proto” is an impressive piece of work – hard but ultimately rewarding. And if the debates Spinney has delved into are still far from being resolved, time at least is an abundant resource. One Bulgarian archaeologist who spoke to Spinney addresses the ebb and flow of academic funding with an admirable lack of ego: “These people have lain in the ground for thousands of years. Who cares if it’s me or someone else who digs them up.”
Dennis Duncan teaches English literature at University College London...
By Dennis Duncan
June 5, 2025
https://www.newsindiatimes.com/our-languages-have-more-in-common-than-you-might-think/ #metaglossia_mundus
"Why smart editors are spending less time editing and more time creating
By Dimsumdaily Hong Kong -11:30PM Wed
4th June 2025 – (Hong Kong) In today’s world, video content is everywhere. From TikTok to YouTube, people watch short and long videos every day. But what many don’t know is that editing videos takes a lot of time. Editors spend hours cutting clips, adding text, fixing sound, and more. Now, smart editors are changing the way they work. Instead of spending so much time editing, they are using smart tools to help them finish faster. This gives them more time to focus on the fun and creative parts—like making stories, adding humour, or talking to their audience. One tool that many editors now use is CapCut’s text to speech. It helps them turn written words into a real-sounding voice. So, if you don’t want to use your own voice in a video, or if you are camera shy, you can just write what you want to say and use this tool. It will read it out loud in the video for you. This saves a lot of time and also makes your videos sound more professional.
The Old Way of Editing Was Too Slow
Before all these smart tools came, editors had to do everything manually. They needed to cut videos one by one, fix the sound, remove background noise, add music, and write subtitles—all by hand. This made the process very slow. Also, if you made a mistake, you had to go back and fix everything again. Even small changes would take hours. Editors had less time to think about ideas or try new things. Most of their time went into fixing videos instead of creating something new.
New Tools Are Helping Creators
But now, things are different. With new tools like AI and automation, editing has become much easier and faster. Editors don’t need to do everything by hand anymore. They can use smart software that edits videos for them, adds subtitles automatically, or even creates a full video based on a script. One great example is the CapCut’s AI video generator. It lets you create a full video just by writing some text. You can choose styles, music, and visuals, and the tool will build the video for you. This is super helpful for busy creators who want to post often but don’t have time to do heavy editing every day. By using tools like these, editors can now focus more on being creative. They can think about fun video ideas, talk to their followers, or plan their next big content piece. Editing becomes just a small step in the bigger process.
How Smart Editors Spend Their Time Now
Now that editing is easier, creators are doing more exciting things with their time. They spend more time planning, writing better scripts, and finding new trends online. Some are learning how to make videos that go viral, while others are building strong personal brands. They also test new video styles like animations, reactions, or storytelling shorts. Instead of fixing sound or editing frame-by-frame, they’re creating meaningful content. They collaborate with others and grow their communities. This shift has changed how people view video making. Today, even one person with a laptop and internet can build a huge audience if they use the right tools smartly.
How to Use Text to Speech in Your Videos (3 Easy Steps)
If you want to try it yourself, here is how you can use text to speech in your videos:
Step 1: Import your video
Go to the CapCut desktop video editor and click on the New Project > Import and upload your desired file.
Step 2: Convert text to speech
Go to Text > Add text and then type what you want to say in your video. Choose a voice from the options by clicking the “Text to Speech” tab. You can pick male or female voices, different accents, or even emotions like happy or serious.
Step 3: Export video
Once you’re happy with the result, export the video. You can now share it on TikTok, YouTube, Instagram, or wherever you like.
This process takes just a few minutes, but it gives your video a big boost in quality.
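For readers who would rather script this step than click through an editor, the same idea can be reproduced with an open-source text-to-speech library. The sketch below uses Python's pyttsx3 package purely as a stand-in (it is not CapCut's tool, and the script text and file name are made-up examples):

# Minimal text-to-speech sketch with pyttsx3 (offline, open source).
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # speaking speed in words per minute

script = "Welcome back to the channel. Today we compare three budget microphones."
engine.save_to_file(script, "voiceover.wav")  # render the narration to an audio file
engine.runAndWait()  # process the queued speech command

The resulting voiceover.wav can then be imported alongside the footage in any video editor.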
Smart Editors Use Voice Tools for a Creative Edge
Besides using text to speech, many smart creators are also using tools like CapCut’s voice changers. These let you change your voice in funny or dramatic ways. For example, you can sound like a robot, a cartoon character, or even an old man. Why is this useful? Because it makes your content more fun! People like to watch videos that are creative, unexpected, or entertaining. Using a voice changer can help you stand out from the crowd. It also helps if you’re doing storytelling or gaming content where different characters have different voices. And again—it saves time. You don’t need to act or use your real voice. The software does it all.
Why Less Editing Means Better Content
Some people might think that doing less editing is bad. But actually, it’s the opposite. When you spend less time editing, you have more time to do things that really matter. Here are a few benefits:
You can post more videos every week
You have time to test new ideas
You don’t get tired or stressed from long editing hours
You can grow your audience faster
You can make better content because you are focused on the message, not the edits
Final Thoughts
Video editing is changing. Smart editors are not spending long hours fixing small things anymore. Instead, they are using tools like text to speech, AI video generator, and voice changer to finish videos faster and better. If you’re a creator, student, teacher, or business owner—this is your chance. You don’t need to learn complicated software. Just use these tools and start creating! Be smart. Spend less time editing and more time sharing your message with the world."
https://www.dimsumdaily.hk/why-smart-editors-are-spending-less-time-editing-and-more-time-creating%EF%BF%BC/
#metaglossia_mundus
"These newsrooms desperately need the help these technologies provide, but they’re the ones being left out because they work in languages that are considered low-resource, so they are not a big priority for tech companies to support."
"These journalism pioneers are working to keep their countries’ languages alive in the age of AI news
“These newsrooms desperately need the help these technologies provide, but they’re the ones being left out because they work in languages that are considered low-resource, so they are not a big priority for tech companies to support.”
By GRETEL KAHN
JUNE 4, 2025, 10:28 A.M.
This story was originally published by the Reuters Institute for the Study of Journalism.
Since the launch of ChatGPT in 2022, newsrooms have been grappling with both the promise and the peril posed by generative AI. But not every publisher is equally prepared to pursue these opportunities. While newsrooms in the U.S. and Europe innovate and experiment with large language models (LLMs), many newsrooms in the Global South are being left behind.
While AI models in languages like English, Spanish, and French have ample training and resources, profound linguistic and cultural biases embedded within many mainstream AI tools pose substantial challenges for newsrooms and communities operating outside of dominant Western languages and cultural contexts.
For these newsrooms, gaps in data aren’t merely technical glitches but existential threats to their ability to participate equitably in this evolving digital ecosystem. What threats does this lack of data present to newsrooms in the Global South? How can these AI gaps be narrowed? To answer these questions, I spoke to six journalists and experts from India, the Philippines, Belarus, Nigeria, Paraguay, and Mali who are aiming to level the field.
What AI can (and can’t) do for low-resource languages
AI tools do not work well (or at all) for local, regional, and indigenous languages, according to all the sources I spoke to. This is particularly evident in countries where many of these languages are spoken beyond dominant ones like English, Spanish, or French. This language gap exacerbates inequalities for newsrooms and communities that operate in non-dominant languages.
Jaemark Tordecilla is a journalist, media advisor, and technologist from the Philippines focusing on AI and newsroom innovation. Having worked as a consultant for newsrooms across Asia, Tordecilla has seen a lot of curiosity among journalists about AI, but uptake has been mixed. For transcriptions — the AI functionality he has seen used the most — cost and language have often been an issue.
“For the longest time, [transcription] worked well for English and it didn’t work at all for Filipino, so journalists in the Philippines are only very recently getting to use these tools for their own reporting, but cost is an issue,” he said.
He described instances where journalists in the Philippines were forced to share accounts for paid subscriptions to transcription tools, which can create security issues. Things are even worse for journalists who use regional languages, for which these tools are essentially useless.
“The roll-out for support for regional languages has been slow and they are being left behind,” he said. “If your interview is in a regional language, then obviously you can’t use AI to process that. You can’t use AI to translate that into another language. You can’t use AI to say monitor town hall meetings in a regional language and make [them] more accessible to the public.”
Indian journalist Sannuta Raghu, who heads Scroll.in’s AI Lab and is now a journalist fellow at the Reuters Institute, has documented what these linguistic and cultural inequities look like in practice for newsrooms like hers.
AI tools don’t work very efficiently for most of the 22 official languages in India. Raghu listed issues like inaccurate outputs, hallucinations, and incorrect translations. A big issue, she said, is that, for a long time, AI tools were unable to account for nuances in language. Unlike English, for example, many Indian languages have large differences between spoken and written discourse.
“Asian countries speak in code-mixed language — for example, using multiple Hindi words and English words in a normal conversation,” she said, “which means we need rich enough data to be able to understand that.”
If there isn’t sufficient “good data” to train models on these specific languages and contexts, Raghu said, linguistic and cultural inequities are going to happen. Raghu attributed this lack of training data to a combination of complexity and lack of interest by Big Tech. But she also said the situation is starting to improve.
“Is it really a priority for you to optimize for all those languages in India from a tech product sales perspective? As you move eastward, the complexity of how societies use language changes. It becomes far more multilingual. It becomes far more complex. It becomes far more code-mixed. Those are the complexities that we live with, and that is not reflected in any of the models,” Raghu said.
AI ignores political biases
Beyond these inefficiencies, my sources also pointed out the cultural and political nuances that are missing from these models, which make them even more problematic for newsrooms.
For example, Raghu said that newsrooms are already noticing an American slant in everything AI generates. She described an instance where they were testing a tool to see how useful it’d be to help them write copy on cricket. For something as simple as explaining the sport, she says there were hallucinations, with players being made up, and the model simply not understanding the rules of the game.
“Up to 2.6 billion people follow cricket. It’s a huge cultural thing for us, Australia, Bangladesh, England…but the U.S. doesn’t play cricket, which is why a lot of the cultural aspects of this are not included in the models,” she said. “There is a lack of contextual training data. Cricket is very important for us, but we’re not able to do this with the LLMs because these models don’t quite understand the rules.”
Daria Minsky is a Belarusian media innovation specialist focusing on AI applications in journalism. After working with several newsrooms in exile, she has seen a lot of skepticism toward AI use — not because of factual errors, but because some of these models lack nuance in politically sensitive contexts.
Minsky used her own country as an example of how LLMs may simply repeat narratives put forth by authoritarian regimes.
“The word Belarusian is very politically loaded in terms of how you spell it, so I compared different models. ChatGPT actually spells the democratic version of it, while DeepSeek uses the old, Soviet version of it,” she says. “It’s Belorussian versus Belarusian. Belorussian is this imperialistic term used by the regime. If newsrooms use Belorussian instead of Belarusian, they risk losing their audience and their reputation.”
AI models are trained based on data available online, which is why these models are more fine-tuned to English (and its contexts) than, say, Belarusian. Since the official narrative of authoritarian regimes is what is most available online, AI is being trained to follow that narrative.
These gaps in the training system have already been exploited by bad actors. A recent study by NewsGuard revealed that a Moscow-based disinformation network is deliberately infiltrating the retrieved data of AI chatbots, publishing false claims and propaganda for the purpose of influencing the responses of AI models on topics in the news. The result is more output that propagates propaganda and disinformation.
“I’ve heard of the same problems in Burma, for example, because the opposition uses Burma instead of Myanmar. I can see this type of problem even within the United States where I’m based, with the debate around using ‘Gulf of Mexico’ or ‘Gulf of America’ because Trump started to rename things just like in other dictatorial regimes,” she says.
How to close the gap
Despite these issues, some newsrooms in the Global South are taking the development of AI tools into their own hands.
Tama Media, a francophone West African news outlet, has launched Akili, an app, currently in beta, that provides fact-checking in local African languages through vocal interaction. Moïse Mounkoro, the editorial advisor in charge of Akili, traced the origin of the idea to two things: the fact that misinformation is rampant in West Africa and the realization that many people communicate via voice messages rather than through reading and writing.
“Educated people can read articles and they can fact-check,” Mounkoro said. “But for people who are illiterate or who don’t speak French, English or Spanish, the most common way to communicate is orally. I’m originally from Mali, and most of the conversations there are through WhatsApp voice messages. If you want to touch those people, you need to use their language.”
The Akili app uses AI to fact-check information by taking the question and finding an answer through its database of sources, which range from BBC Africa to Tama Media. The answer is then given orally to the user. To include more African languages like Wolof, Bambara, and Swahili, Akili’s team is experimenting with either using Google Translate’s API or building their own through online dictionaries.
“These AI technologies came from the West so they focus on their own languages. It’s like cameras: in the beginning, they were not made to photograph Black skin,” he says. “People need to be aware that they need to integrate other languages. Google has at least tried to make an effort by integrating many African languages these last couple years.”
In Paraguay, digital news outlet El Surti is developing GuaraníAI. While the project is still in development, their goal is to build a chatbot to detect whether someone is speaking in this language and provide them with a response. In order to do that, they are developing a dataset of spoken Guaraní so that LLM engines can recognize oral speech in this Indigenous language, spoken by almost 12 million people.
Sebastián Auyanet, who leads on the project, told me they wanted to explore how those who don’t speak dominant languages are shut out from accessing LLMs. Guaraní is a language still spoken widely throughout the Southern Cone, mainly in Paraguay, where it is an official language along with Spanish. Up to 90% of the non-indigenous population in Paraguay speaks Guaraní.
“Guaraní is an oral, unwritten language,” Auyanet said. “What we need is for any LLM engine to be able to recognize Guaraní speech and to be able to respond to these questions in Spanish. News consumption may be turning into ChatGPT, Perplexity, and other models. So there’s no way to get into that world if you speak in a language none of these systems can use.”
El Surti is organizing hackathons throughout Paraguay to test this dataset. Mozilla’s Common Voice is a platform designed to gather voice data for diverse languages and dialects to be used by LLMs. El Surti is using Common Voice to develop their minimum viable product, which aims to achieve 70% validation for spoken Guaraní within Mozilla’s datasets. With this degree of validation, the chatbot would be able to respond to queries in this Indigenous language.
A picture of the first hackathon of the project, which included conversations between El Surti’s journalists and Guaraní native speakers. (El Surti)
Ideally, Auyanet said, this project will allow El Surti to build an audience of Guaraní speakers who eventually will be able to interact with the outlet’s coverage by asking the chatbot questions in their own language.
“Right now we are excluding people who are only Guaraní speakers from the reporting that El Surti does,” he said. “This is an effort to bring them closer.”
In Nigeria, The Republic, a digital news outlet, is developing Minim, an AI-powered decentralized text-to-speech platform designed to support diverse languages and voices. They are actively training an AI model on specific African languages such as Nigerian Pidgin, Hausa, and Swahili, with plans to add more over time. The team is aiming to have a minimum viable product by the end of the year.
This model would allow independent creators to lend their own voices and train the AI in their unique vocal quality, including age, regional accents, and other demographics.
Editor-in-chief Wale Lawal told me that their goal is to engage audiences who speak these languages and to make their outlet more relevant to them. “We believe that global media has an indigenous language problem,” he said. “In a place like Africa you have a lot of people who are just automatically locked out of global media because of language.”
Minsky, the media consultant from Belarus, has been working with newsrooms in exile to develop an AI tool that allows for the automation of news monitoring from trusted sources. Her goal is to account for all the cultural and political nuances missing from the current models by allowing newsrooms to monitor very specific contextual sources, including local and hyper-local channels like Telegram.
This would include uploading archival data and historical data to fine-tune the output, and to explicitly prompt to control terminology and spelling (i.e. “don’t call Lukashenko president”).
No newsroom left behind
Newsrooms have been working without AI for decades before the first generative AI tools were released to the public. So what’s the problem if these tools are not available to everyone?
Tordecilla stressed that the gap between newsrooms is getting wider and pointed to things already happening: there are newsrooms in Manila who are doing AI investigations and newsrooms in rural Philippines who could benefit from the efficiencies that AI provides that are struggling for their survival.
“We are talking about newsrooms who have five people on their team, so every minute they spend on transcription is a minute they don’t spend reporting and editing stories,” Tordecilla said. “These newsrooms desperately need the help these technologies provide, but they’re the ones being left out because they work in languages that are considered low-resource, so they are not a big priority for tech companies to support.”
Indian journalist Sannuta Raghu said that addressing the AI digital gap is important for access, scale, and reaching a broader audience. Raghu mentioned a specific goal that her newsroom had when they started their AI lab: to build a multilingual text-to-video tool, as video content is extremely popular in India. With hundreds of millions of Indians using smartphones and having access to very cheap internet, there is a significant opportunity to deliver journalism via video in these local languages. They have created Factivo 1.0, a URL-to-MP4 tool that creates accurate videos from news articles in a matter of minutes and which is available for major languages.
“For a small English-language newsroom like us, the universe available to us right now is about 30 million Indians who understand English,” she explains. “But to tap into the larger circle that is India, which is 1.4 billion people, we would need to be able to do language at scale.”"
https://www.niemanlab.org/2025/06/these-journalism-pioneers-are-working-to-keep-their-countries-languages-alive-in-the-age-of-ai-news/
#metaglossia_mundus
"Beyond Text Compression: Evaluating Tokenizers Across Scales Authors Jonas F. Lotz†‡, António Vilarinho Lopes, Stephan Peitz, Hendra Setiawan, Leonardo Emili
Tokenizer design significantly impacts language model performance, yet evaluating tokenizer quality remains challenging. While text compression has emerged as a common intrinsic metric, recent work questions its reliability as a quality indicator. We investigate whether evaluating tokenizers on smaller models (350M parameters) reliably predicts their impact at larger scales (2.7B parameters). Through experiments with established tokenizers from widely-adopted language models, we find that tokenizer choice minimally affects English tasks but yields significant, scale-consistent differences in machine translation performance. Based on these findings, we propose additional intrinsic metrics that correlate more strongly with downstream performance than text compression. We combine these metrics into an evaluation framework that enables more reliable intrinsic tokenizer comparisons.
† Work done while at Apple
‡ University of Copenhagen & ROCKWOOL Foundation Research Unit
https://machinelearning.apple.com/research/beyond-text-compression
#metaglossia_mundus
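To make the abstract above concrete: the "text compression" intrinsic metric it questions is commonly measured as how many bytes of raw text each emitted token covers. A minimal, hypothetical Python sketch follows (the paper's proposed metrics go beyond this, and the two toy tokenizers are placeholders for real subword tokenizers):

# Compression as bytes of UTF-8 text per token: higher = better compression.
def bytes_per_token(text, tokenize):
    tokens = tokenize(text)
    return len(text.encode("utf-8")) / max(len(tokens), 1)

sample = "Tokenizer design significantly impacts language model performance."

tokenizers = {
    "whitespace": str.split,  # crude word-level baseline
    "character": list,        # character-level baseline
}
for name, fn in tokenizers.items():
    print(f"{name}: {bytes_per_token(sample, fn):.2f} bytes/token")

The paper's finding is that ranking real tokenizers by such a number alone does not reliably predict downstream quality, especially for machine translation.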
"Development of intercultural communicative competence. Challenges and contributions for the training of health professionals Authors Juan Beltrán-Véliz Universidad de la Frontera José Luis Gálvez-Nieto Universidad de la Frontera Maura Klenner Loebel Universidad de la Frontera Ana María Alarcón Universidad de la Frontera Nathaly Vera-Gajardo Universidad Autónoma de Chile Vol. 31 No. 1 (2025): NARRATIVAS, BIOÉTICA Y MEDICINA Abstract This paper reflects on intercultural communicative competence as a contribution to the training of professionals in health sciences. A reductionist, technocratic and culturally decontextualized health model is visualized, and the scarce development of "intercultural communicative competence" in health professionals is seen as a great barrier, which hinders communication with patients from other cultures. Therefore, it is vitally important to develop this competence in the training of these professionals. This will imply developing components that underlie this competence, that is, awareness, sensitivity and intercultural efficacy, which will allow the acquisition of cognitive, affective and behavioral skills that benefit effective and appropriate communication with patients from diverse cultural and social contexts. Likewise, they will allow them to understand in depth aspects, values and practices that underlie a culturally different health system, and will promote the advancement in the knowledge, understanding and development of innovative and relevant practices in the health system, from an intercultural and comprehensive approach. Finally, it will make it possible to overcome inequalities, misunderstandings, prejudices, exclusions and social injustices." https://actabioethica.uchile.cl/index.php/AB/article/view/78367 #metaglossia_mundus
Scientists discover how story language evolves from scene-setting to action to resolution, analyzing thousands of fictional narratives.
"Scientists decode the universal language pattern in 40,000 stories ELLSWORTH TOOHEY 8:46 AM WED JUN 4, 2025 "The Hare's Bride", a fairy tale from Buckow in Mecklenburg, by Tom Seidmann-Freud, (1924) Public Domain A study analyzing 40,000 stories reveals what Aristotle intuited centuries ago — narratives follow a common, predictable structure consisting of three key elements.
Using computer analysis, researchers Ryan Boyd, Kate Blackburn, and James Pennebaker revealed how language patterns evolve across novels, short stories, romance novels, movies and other narratives. Their findings, published in Science Advances in 2020, identified three consistent components that form narratives: at the beginning, articles and prepositions dominate as authors set the stage. The middle sees a rise in "cognitive tension words" like "think" and "realize" as characters grapple with conflicts. By the end, pronouns and auxiliary verbs increase as the action resolves.
This pattern held true regardless of genre, length, or even quality — highly-rated books and movies followed the same structure as poorly-rated ones. However, the researchers found that non-fiction formats like newspaper articles and TED talks deviated from this pattern, particularly in how they handle cognitive tension.
"We find that all different types of stories — novels, films, short stories, and even amateur writing — trend toward a universal narrative structure," the researchers write in their report. Rather than constraining creativity, they suggest this structure may provide an optimal way for humans to process and share information through stories.
They've created a website where anyone can analyze the narrative patterns of hundreds of published books and movie scripts, or upload their own texts for analysis." https://boingboing.net/2025/06/04/scientists-decode-the-universal-language-pattern-in-40000-stories.html #metaglossia_mundus
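To get a feel for the kind of counting behind such findings, here is an illustrative Python sketch that splits a story into thirds and tracks the rate of each word class in each third. The three word lists are toy stand-ins invented for illustration; the researchers' analysis relied on much larger validated dictionaries (LIWC), so this shows the idea, not their method.

import re

# Toy stand-ins for the study's three categories (the real analysis used LIWC dictionaries).
STAGING = {"a", "an", "the", "in", "on", "of", "at", "to", "from", "with"}
TENSION = {"think", "realize", "believe", "know", "understand", "wonder", "must"}
RESOLUTION = {"he", "she", "they", "it", "i", "we", "is", "was", "had", "will", "would"}

def narrative_profile(text):
    words = re.findall(r"[a-z']+", text.lower())
    third = max(1, len(words) // 3)
    segments = [words[:third], words[third:2 * third], words[2 * third:]]
    for label, vocab in (("staging", STAGING), ("tension", TENSION), ("resolution", RESOLUTION)):
        # Rate of the category in the beginning, middle and end of the story.
        rates = [round(sum(w in vocab for w in seg) / max(len(seg), 1), 3) for seg in segments]
        print(f"{label}: {rates}")

narrative_profile(open("story.txt").read())  # "story.txt" is a hypothetical input file

On a typical narrative, the staging rate should be highest in the first third, tension words should peak in the middle, and pronouns and auxiliaries should rise toward the end, mirroring the pattern described above.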
"Bíblia: Comissão da CEP divulga nova tradução do «Génesis», assumindo «tensão» entre sentido original e «interpretações posteriores» 2 Junho, 2025 10:25 Especialistas alertam para «desconhecimento do contexto em que nasceram os textos»
Lisboa, 02 jun 2025 (Ecclesia) – A comissão coordenadora da tradução da Bíblia, da Conferência Episcopal Portuguesa (CEP), lançou hoje a nova versão do texto do Génesis, primeiro livro da Bíblia, alertando para o “desconhecimento do contexto” em que o mesmo nasceu.
“As discussões sobre a responsabilidade do judaísmo-cristianismo na inquinação do planeta terra, a relação da ciência com a Bíblia e as teorias do evolucionismo e do criacionismo, bem como as do big bang, têm a ver com as narrações da criação do mundo e da humanidade nos primeiros onze capítulos do Génesis”, indicam os especialistas, na introdução ao texto divulgado neste mês de junho.
Os tradutores destacam a “importância e atualidade” do livro do Génesis, do qual partem “várias linhas de força teológicas”.
“A religião, a arte e a cultura recorrem à inspiração do Génesis para explorar e articular conhecimentos sobre a vida humana”, indicam.
O desconhecimento do contexto em que nasceram os textos do Génesis gerou tensão entre o seu sentido original e as interpretações posteriores. Estas mantiveram viva e fresca a força dos relatos, mas divergem significativamente do horizonte cultural que produziu o seu sentido original.”
Os especialistas da CEP sublinham o “estilo narrativo” do primeiro livro da Bíblia e a sua “força antropológica”, precisando que este “fundo cultural, literário e religioso deve ser tido em conta na sua interpretação, pois o que um texto quer dizer depende também daquilo que está por trás dele e que o ilumina”.
“Motivos literários, temas, conceções cosmológicas, religiosas e culturais (criação de todos os seres pela divindade, o conhecimento como distintivo do ser humano em relação aos outros seres, a grandeza e os limites da vida humana, união sexual entre seres divinos e mulheres, um dilúvio, genealogias…) remetem para o contexto histórico do Próximo Oriente antigo, o chamado mundo bíblico”, refere a equipa de tradução.
No judaísmo, o livro do Génesis é designado pela expressão inicial, bere’shit (“No princípio”).
O título Génesis vem da tradução grega dos Setenta (assumido pela tradução latina) e resume o seu conteúdo: origem do universo, da humanidade e de Israel.
No princípio, quando Deus criou os céus e a terra, a terra estava caótica e vazia, as trevas pairavam por cima do abismo e um vento impetuoso soprava sobre a superfície das águas.”...
OC" https://agencia.ecclesia.pt/portal/biblia-comissao-da-cep-divulga-nova-traducao-do-genesis-assumindo-tensao-entre-sentido-original-e-interpretacoes-posteriores/
Wisconsin Republicans are advancing a bill that could expand the role of virtual language interpretation in state courts. Backers say it's an effort to address a shortage of interpreters, but opponents worry the changes could erode the rights of victims.
"Wisconsin bill would allow court interpreters to work remotely during trials
Backers say it could address a shortage of interpreters but opponents worry it could open the door to miscommunication
BY SARAH LEHR
JUNE 3, 2025
Court interpreter Floralba Vivas speaks Spanish into a microphone as a man is sentenced in Milwaukee County Circuit Court on Friday, March 21, 2025, in Milwaukee, Wis. Angela Major/WPR
Wisconsin Republicans are advancing a bill that could expand the role of virtual language interpretation in state courts.
Supporters say they hope to enable flexibility in a state with an unmet demand for qualified interpreters.
But opponents worry the changes could erode the rights of victims and defendants by making it easier for miscommunication to happen.
Under Wisconsin law, people with limited English proficiency have the right to a qualified interpreter when they appear before a circuit or appellate court.
Currently, those courts can only use tele-interpretation during some types of legal proceedings. During trials, interpreters need to appear in person.
But, under a bill that cleared Wisconsin’s GOP-controlled Senate last month, interpreters could appear by telephone or videoconference even during trials.
“I think that we’re all aware of the staffing shortages and backlogs plaguing our court system, and additionally, county budgets are also feeling the pinch,” state Sen. Van Wanggaard, R-Racine, said during a hearing on the bill this spring.
In Wisconsin, the state partially reimburses counties for the costs of court interpreters. Although those rates of pay vary county-by-county, many local courts also reimburse interpreters for their mileage, and sometimes their driving time, when they travel to court.
Court interpreters across the state are in high demand, which in some cases, has forced judges to postpone cases while they search for a qualified interpreter. That’s especially true for interpreters in less commonly spoken languages.
In 2023, courts across Wisconsin billed for more than 26,200 hours of interpretation — a 27 percent increase compared to five years prior.
“One way to help alleviate some of that pressure is to remove burdensome requirements that the state places upon our circuit courts,” said Wanggaard, who introduced the bill earlier this year.
Some say in-person interpretation is most effective during lengthy, complex trials
Under Wisconsin law, the right to an interpreter extends to witnesses, people accused of crimes, victims and their family members.
But some groups contend that right could be diminished if tele-interpretation is expanded. In written testimony opposing the bill, Elena Kruse of the Wisconsin State Public Defender's Office said the changes could make it more difficult for a non-English speaking client to have a private, side conversation with their attorney.
And she noted that, compared to many pre-trial proceedings, trials are most often lengthy and complex.
“Jury trials, more than most other court proceedings, carry extremely high stakes: individuals’ liberty and freedoms,” Kruse wrote. “We are not willing to risk technical and logistical difficulties hindering a person’s ability to fully participate in the legal process, especially when their liberty is at stake.”
The American Civil Liberties Union of Wisconsin and victims' rights groups including the Wisconsin Coalition Against Sexual Assault also registered against the bill. The Wisconsin Defense Counsel, which represents civil trial lawyers, registered in favor.
Nadya Rosen, an attorney with Disability Rights Wisconsin, says the proposal could be especially harmful to people who are deaf and hard of hearing.
American Sign Language relies on someone’s hand gestures and facial expressions, and Rosen said it can be difficult to fully grasp those visuals via video.
“We had a case recently in a rural county where there was a video interpreter, and the setup in the courtroom was such that the litigants were so far away from the camera that the ASL interpreters that were in a remote location were unable to see the faces of the people that they were supposed to be interpreting for,” Rosen said, adding the case had to be postponed until an in-person ASL interpreter could be appointed.
Rosen says unimpeded communication is especially important during a trial, when jurors are expected to decide whether or not they believe in the truthfulness of someone who’s testifying.
Christina Green, a freelance court interpreter who works in Wisconsin, echoed those concerns. Green says remote interpretation can be useful during brief legal proceedings. But, during trials, Green said she believes remote interpretation could actually lead to more delays because of technical issues.
“Most courthouses in Wisconsin have inadequate equipment, so they have low-quality cameras, low-quality microphones, screens that are not clear,” said Green, who sits on the board of the American Translators Association, a professional advocacy group.
Court interpreter Reme Bashi speaks into a microphone so the court proceedings can be understood by Spanish speakers Friday, March 21, 2025, in Milwaukee, Wis. Angela Major/WPR
Amendment would require all parties to agree to remote interpretation
In response to concerns about the bill, lawmakers agreed to add an amendment stipulating that remote interpretation could only be used during a trial if all parties in a case agree.
Even with that change, Rosen, of Disability Rights Wisconsin, says her group still opposes the bill. Rosen says the amendment puts the onus to request an in-person interpreter on the person who needs that service.
“Being a litigant in court can be a really difficult process, and you don’t want to slow things down,” she said. “You don’t want to be perceived as being a problem, and so having to assert your need for interpretation and quality interpretation can be just another barrier for people who need to be able to effectively communicate.”
Effort comes as other lawmakers push separate AI interpretation bill
Under Wanggaard’s bill, Wisconsin courts would still be required to use real, live people as interpreters even if some of those interpreters appear remotely.
But a separate, more recently introduced bill would allow Wisconsin courts to use machine-assisted interpretation enabled by artificial intelligence. That AI option could be used in addition to or instead of human interpreters, the bill says.
“By integrating AI-assisted translation tools, courts can deliver faster, more efficient services to individuals with limited English proficiency while significantly reducing the costs associated with hiring human interpreters,” state Rep. Dave Maxey, R-New Berlin, and Sen. Chris Kapenga, R-Delafield, wrote in a memo attached to a draft bill.
Green, who interprets in Spanish, French and Italian, said replacing human interpreters with AI could be disastrous. She says AI hallucinations and mistranslations could lead to lawsuits and overturned convictions.
“AI will always struggle with things like nuances in the legal language, specialized terminology, idiomatic expressions, Spanglish,” Green said. “AI may misinterpret the context, confuse the pronouns (and) introduce certain biases.”
Lawmakers formally introduced the AI interpretation bill on Friday. So far, the ACLU of Wisconsin has registered against it while the Wisconsin Counties Association has come out in favor."
https://www.wpr.org/news/wisconsin-bill-court-interpreters-work-remotely-during-trials
#metaglossia_mundus
"Preparing and handling digital documents for conference interpreters
20/06/2025
On Friday, June 20, 2025, Andy Gillies will be giving a one-day training on "Efficiently handling PDF documents, navigating between documents and files and annotating in preparation for conferences".
Here are the details:
Date: Friday, June 20, 2025 from 09:00 to 17:30.
Location: Paris, France
Trainer: Andy Gillies
Training: Efficiently handling PDF documents, navigating between documents and files and annotating in preparation for conferences (see description below)
Participants: 20 (maximum number of participants)
Registration fee: The course costs €400 excl. VAT. For French participants with freelance status, the course is eligible for a €250 (excl. VAT) refund via the FIFPL training fund, bringing the real cost for the participant to €150 excl. VAT.
Registration link: https://www.sft-services.fr/produit/interpretation-conference-documents-numerique/
About this event
Training by SFT Services in collaboration with AIIC France
Learn how to handle multiple documents in PDF format. Rename, organise and navigate seamlessly between files to keep your eye on what’s relevant in a high-intensity environment.
We’ll explore how to create links between documents or key sections of documents to access them effortlessly during interpreting.
Also, discover a few tips on how best to prepare and annotate documents to enhance your performance in the booth. "
https://aiic.at/client/event/roster/eventRosterDetails.html?productId=747&eventRosterId=9
#metaglossia_mundus
"...In a significant step toward more inclusive technology, researchers at Florida Atlantic University (FAU) have developed a real-time American Sign Language (ASL) interpretation system that uses artificial intelligence (AI) to bridge communication gaps for individuals who are deaf or hard of hearing.
The system, built by a team from FAU’s College of Engineering and Computer Science, leverages the combined strengths of YOLOv11, an advanced object detection model, and MediaPipe, a tool for real-time hand tracking. This integration allows the system to interpret ASL alphabet letters from video input with remarkable accuracy and speed—even under inconsistent lighting or with complex backgrounds—and translate them into readable text.
“What makes this system especially notable is that the entire recognition pipeline—from capturing the gesture to classifying it—operates seamlessly in real time, regardless of varying lighting conditions or backgrounds,” Bader Alsharif, the study’s first author and a PhD candidate in FAU’s Department of Electrical Engineering and Computer Science, said in a press release. “And all of this is achieved using standard, off-the-shelf hardware. This underscores the system’s practical potential as a highly accessible and scalable assistive technology.”
A key innovation lies in the use of skeletal hand mapping. A webcam captures the signer’s hand gestures, which are rendered into digital frames. MediaPipe identifies 21 key points on each hand—including fingertips, knuckles, and the wrist—creating a structural map that YOLOv11 then uses to distinguish between ASL letters, even those that look similar, such as “A” and “T” or “M” and “N.”
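As a rough sketch of the landmark-mapping stage just described (an illustration, not the FAU team's code), the snippet below uses the legacy MediaPipe Hands Python API, which is an assumption on my part, to pull the 21 (x, y, z) landmarks from a single frame and flatten them into the kind of feature vector a downstream classifier such as YOLOv11 could consume; the input file name is hypothetical, and the classification stage is not shown.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Detect one hand in a single image (standing in for one video frame)
# and flatten its 21 (x, y, z) landmarks into a 63-value feature vector.
with mp_hands.Hands(static_image_mode=True, max_num_hands=1,
                    min_detection_confidence=0.5) as hands:
    frame = cv2.imread("asl_frame.jpg")  # hypothetical input image
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        features = [c for lm in landmarks for c in (lm.x, lm.y, lm.z)]
        print(f"{len(landmarks)} landmarks -> {len(features)} features")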
This approach helped the system achieve a mean Average Precision (mAP@0.5) of 98.2%, indicating high classification accuracy. The results, published in the journal Sensors, demonstrate minimal latency, making the system ideal for applications requiring real-time communication, such as virtual meetings or interactive kiosks.
The researchers also developed a robust dataset to train and test their model. The ASL Alphabet Hand Gesture Dataset consists of 130,000 images under a wide range of conditions, including various lighting scenarios, hand orientations, and skin tones. Each image was annotated with 21 landmarks to ensure the model could generalize across diverse users and environments.
“This project is a great example of how cutting-edge AI can be applied to serve humanity,” Imad Mahgoub, PhD, co-author and Tecore Professor in FAU’s Department of Electrical Engineering and Computer Science, said in the release. “By fusing deep learning with hand landmark detection, our team created a system that not only achieves high accuracy but also remains accessible and practical for everyday use.”
The team later extended the project to explore how another object detection model—YOLOv8—performed when combined with MediaPipe. In a separate study published in Franklin Open, researchers trained the model using a new dataset of nearly 30,000 annotated images. Results from this effort were similarly promising, demonstrating 98% accuracy and recall. The model maintained strong performance across a range of hand gestures and positions, reinforcing its real-world applicability.
Beyond academic validation, the system’s practical implications are significant. According to the National Institute on Deafness and Other Communication Disorders, approximately 37.5 million adults in the U.S. report some trouble hearing, while about 11 million are considered deaf or functionally deaf.
“The significance of this research lies in its potential to transform communication for the deaf community by providing an AI-driven tool that translates American Sign Language gestures into text, enabling smoother interactions across education, workplaces, health care, and social settings,” said Mohammad Ilyas, PhD, co-author and professor at FAU.
Future development will focus on expanding the model’s capability from recognizing static letters to interpreting full ASL sentences, as well as optimizing performance on mobile and edge devices. This would allow more natural conversations and greater accessibility on widely used platforms."
BY ERIK CLIBURN JUNE 3, 2025 https://insightintoacademia.com/asl-ai-translation/
#metaglossia_mundus
"NHS England has warned that healthcare organisations need to be careful in their use of AI translation apps.
It said that they carry risks, particularly regarding their accuracy of translations and the potential impact on patient safety.
This comes in response to a number of organisations beginning to use the apps in consulting with and treating patients with limited English.
NHS England’s warning has come within a new improvement framework document on community language and translation services.
It acknowledges the potential value of apps that can translate language for people with limited English, saying they provide a convenient and timely means of translation.
Accuracy factor
But it also cites research published in the US National Journal of Medicine stating that apps, and informal interpreters, may not always ensure that key medical information is interpreted or communicated accurately, and that clinicians are unable to get assurance that their advice is being received.
The framework includes a series of recommendations on interpreting services, including that organisations note the risk and liability issues over the use of AI interpreting tools, and that they should only be used within clearly defined trust policies and risk assessments.
They should also look at the appointment of a champion for them in a primary care network, feedback systems for patients and staff to identify issues, recording language needs on the primary care electronic record, and involving patients in the development and improvement of interpreting services."
04/06/25
Mark Say
Managing Editor
@markssay
https://www.ukauthority.com/articles/nhs-england-sounds-warning-over-translation-apps
#metaglossia_mundus
"All-in-One Default Translator in
By Reverso, Jun 3, 2025 (updated Jun 3, 2025)
Latest app provides instant translation in iOS 18.4 and up along with an innovative dictionary and upgraded learning tools
NEW YORK, June 3, 2025 /PRNewswire/ -- Reverso, the global leader in online language tools, has released a powerful new version of its mobile app, Reverso Translate and Learn 15.3, the first of its kind able to be set as a third-party default translation app on iPhones and iPads running iOS 18.4 and higher.
The new Reverso app integration provides one-tap AI translation and all-in-one learning in Apple iOS.
"Our latest Reverso app is an all-in-one translation tool right there with you whatever, wherever, whenever you are reading, writing, and communicating on your phone or tablet," said Théo Hoffenberg, founder and CEO of Reverso. "We're extremely proud of our collaboration with Apple in making this game-changing integration happen."
Providing one-tap AI translation for over 100 language combinations in texts, emails, instant messages, news articles, books, and more, the state-of-the-art app goes beyond translation to promote genuine mastery with a robust suite of learning capabilities:
Translation by text, image, voice, or real-time conversation providing relevant, real-world results in context
An autoplay mode with a pronunciation interface that reads target vocabulary aloud for studying as well as listening and speaking practice
An English dictionary with over 500,000 clear and concise meanings and examples for words, expressions, idioms, acronyms, informal language, and inflected forms
An enhanced learning dashboard that sets goals, shows streaks, and tracks progress
Quizzes, flashcards, word lists, synonyms, conjugation guides, and other study aids
For over 20 years, Reverso and its dedicated team of engineers, data scientists, and linguistic specialists have been driving innovation in translation and language learning with proven AI-based models.
"The more sophisticated our technology gets, the more human our mission at Reverso becomes," said Hoffenberg. "We're passionate about going beyond one-off, soon-forgotten lookups to lifelong learning so that people can understand what they read, chat with natives, work with foreigners, and meaningfully communicate with people across languages, across the globe."
Reverso's app has 5 million active monthly users and over 30 million downloads. It recently won the 2025 People's Voice Webby Award for Learning & Education Apps & Software and has been featured as the App of the Day in Apple's App Store in over 50 countries.
Download the latest version of the Reverso app for iOS: https://apps.apple.com/us/app/reverso-translate-and-learn/id919979642
The app is also available for Android. Discover more: https://context.reverso.net/translation/mobile-app/
About Reverso
Reverso is a global leader in online translation and language tools, helping millions of people and professionals read, write, learn, and communicate across the world's languages. For over 100 language combinations, Reverso provides AI-powered contextual voice, image, text, and document translation and an all-in-one learning ecosystem with a top grammar checker and user-first English dictionary. Each month, it serves over 50 million active users on the web, 5 million app users, and 5 million users in corporate environments.
John Kelly
john.kelly@boldsquare.com"
https://www.news-journal.com/reverso-releases-first-of-its-kind-all-in-one-default-translator-in-apple-ios/article_c7674e7c-d88d-5f9f-8e14-c676d2533e25.html
#metaglossia_mundus
No African writer has as many major, lasting creative achievements in such a wide range of genres as Ngũgĩ wa Thiong’o.
"Published: June 4, 2025 3.20pm SAST Charles Cantalupo, Penn State
Celebrated Kenyan writer and decolonial scholar Ngũgĩ wa Thiong'o passed away on 28 May at the age of 87. Many tributes and obituaries have appeared across the world, but we wanted to know more about Thiong'o the man and his thought processes. So we asked Charles Cantalupo, a leading scholar of his work, to tell us more.
Who was Ngũgĩ wa Thiong'o – and who was he to you?
When I heard that Ngũgĩ had died, one of my first thoughts was about how far he had come in his life. No African writer has as many major, lasting creative achievements in such a wide range of genres as Ngũgĩ wa Thiong'o. His books include novels, plays, short stories, essays and scholarship, criticism, poetry, memoirs and children’s books.
His fiction, nonfiction and plays from the early 1960s until today are frequently reprinted. Furthermore, Ngũgĩ’s monumental oeuvre is in two languages, English and Gĩkũyũ, and his works have been translated into many other languages.
From a large family in rural Kenya and a son of his father’s third wife, he was saved by his mother’s pushing him to be educated. This included a British high school in Kenya and Makerere University in Uganda.
When the brilliant young writer had his first big breakthrough at a 1962 meeting in Kampala, the Conference of African Writers of English Expression, he called himself “James Ngũgi”. This was also the name on the cover of his first three novels. He had achieved fame already as an African writer but, as is often said, the best was yet to come.
Not until he co-wrote the play I Will Marry When I Want with Ngũgĩ wa Mirii was the name “Ngũgĩ wa Thiong’o” on the cover of his books, including on the first modern novel written in Gĩkũyũ, Devil on the Cross (Caitaani Mũtharaba-inĩ).
I Will Marry When I Want was performed in 1977 in Gĩkũyũ in a local community centre. It was banned and Ngũgĩ was imprisoned for a year.
And still so much more was to come: exile from Kenya, professorships in the UK and US, book after book, fiction and nonfiction, myriad invited lectures and conferences all over the world, a stunning collection of literary awards (with the notable exception of the Nobel Prize for Literature), honorary degrees, and the most distinguished academic appointments in the US, from the east coast to the west.
Yet besides his mother’s influence and no doubt his own aptitude and determination, if one factor could be said to have fuelled his intellectual and literary evolution – from the red clay of Kenya into the firmament of world literary history – it was the language of his birth: Gĩkũyũ. From the stories his mother told him as a child to his own writing in Gĩkũyũ for a local, pan-African and international readership. He provided every reason why he should choose this path in his books of criticism and theory.
Ngũgĩ was also my friend for over three decades – through his US professorships, to Eritrea, to South Africa, to his finally moving to the US to live with his children. We had an ongoing conversation – in person, during many literary projects, over the phone and the internet.
Our friendship started in 1993, when I first interviewed him. He was living in exile from Kenya in Orange, New Jersey, where I was born. We both felt at home at the start of our working together. We felt the same way together through the conferences, books, translations, interviews and the many more literary projects that followed.
What are his most important works?
Since Ngũgĩ was such a voluminous and highly varied writer, he has many different important works. His earliest and historical novels like A Grain of Wheat and The River Between. His regime-shaking plays.
His critical and controversial novels like Devil on the Cross and Petals of Blood. His more experimental and absolutely modern novels like Matigari and Wizard of the Crow.
His epoch-making literary criticism like Decolonising the Mind. His informal and captivating three volumes of memoirs written later in life. His retelling in poetry of a Gĩkũyũ epic, The Perfect Nine, his last great book. A reader of Ngũgĩ can have many a heart’s desire.
My book, Ngũgĩ wa Thiong’o: Texts and Contexts, was based on the three-day conference of the same name that I organised in the US. At the time, it was the largest conference ever held on an African writer anywhere in the world.
What I learned back then applies now more than ever. There are no limits to the interest that Ngũgĩ’s work can generate anytime anywhere and in any form. I saw it happen in 1994 in Reading, Pennsylvania, and I see it now 30 years later in the outpouring of interest and recognition all over the world at Ngũgĩ’s death.
In 1993, he had published a book of essays titled Moving the Centre: The Struggle for Cultural Freedoms. Focusing on Ngũgĩ’s work, the conference and the book were “moving the centre” in Ngũgĩ’s words, “to real creative centres among the working people in conditions of gender, racial, and religious equality”.
What are your takeaways from your discussions with him?
First, African languages are the key to African development, including African literature. Ngũgĩ comprehensively explored and advocated this fundamental premise in over 40 years of teaching, lectures, interviews, conversations and throughout his many books of literary criticism and theory. Also, he epitomised it, writing his later novels in Gĩkũyũ, including his magnum opus, Wizard of the Crow.
Moreover, he codified his declaration of African language independence in co-writing The Asmara Declaration, which has been widely translated. It advocates for the importance and recognition of African languages and literatures.
Second, literature and writing are a world and not a country. Every single place and language can be omnicentric: translation can overcome any border, boundary, or geography and make understanding universal. Be it Shakespeare’s English, Dante’s Italian, Ngugi’s Gĩkũyũ, the Bible’s Hebrew and Aramaic, or anything else, big or small.
Third, on a more personal level, when I first met Ngũgĩ, I was a European American literary scholar and a poet with little knowledge of Africa and its literature and languages, much less of Ngũgĩ himself. He was its favourite son. But this didn’t stop him from giving me the idea and making me understand how African languages contained the seeds of an African Renaissance if only they were allowed to grow.
I knew that the historical European Renaissance rooted, grew, flourished and blossomed through its writers in European vernacular languages. English, French, German, Italian, Spanish and more took the place of Latin in expressing the best that was being thought and said in their countries. Yet translation between and among these languages as well as from classical Latin and Greek culture, plus biblical texts and cultures, made them ever more widely shared and understood.
From Ngũgĩ discussing African languages I took away a sense that African writers, storytellers, people, arts, and cultures could create a similar paradigm and overcome colonialism, colonial languages, neocolonialism and anything else that might prevent greatness."
Jabulani Sikhakhane" https://theconversation.com/3-things-ngugi-wa-thiongo-taught-me-language-matters-stories-are-universal-africa-can-thrive-258074 #metaglossia_mundus
Businesses in Quebec are now required to follow updated regulations for French-language commercial signage and packaging
"New French signage rules now mandatory for businesses in Quebec with fines up to $90,000 Businesses in Quebec are now required to follow updated regulations for French-language commercial signage and packaging.
It's official: Companies like Canadian Tire, Best Buy, and Second Cup must now add French descriptions to their storefronts, covering two-thirds of the text space.
Despite a request from business groups to extend the deadline, as of Sunday, June 1, 2025, several French-language requirements related to commercial signage and packaging came into force under Law 14 (formerly Bill 96).
With French now required to be the dominant language on store signs, and with stricter guidelines for product packaging, the key changes are as follows: any business name featuring a specific term (such as a store name) in a language other than French and visible from outside must now be accompanied by French wording—such as a generic term, a description, or a slogan—to ensure the clear predominance of French.
This also applies to recognized trademarks, whether fully or partially in another language, if they appear in signage visible from outside a premises.
"Visible from outside" includes displays seen from the exterior of a building or structure, within a shopping mall, or on terminals and standalone signage like pylons.
Photograph: Stéphan Poulin
What to know about Quebec’s new language rules?
Under the new rules, French must occupy twice the space of other languages on storefronts, meaning businesses with English names must add prominent French descriptions.
While trademarks can remain in other languages, new rules require generic terms within them—like "lavender and shea butter"—to be translated into French.
Critics warn this could limit product availability if global suppliers don't adapt, pushing customers to online retailers.
Quebec's language requirement, previously for businesses with 50+ employees, now applies to those with 25–49 staff, who must register with the language office—even if no changes are ultimately needed.
Businesses that violate the new rules face fines from $3,000 to $30,000 per day, rising to $90,000 for repeat offences—though officials say penalties may be delayed if efforts to comply are underway.
What is Bill 96 Quebec 2025? Bill 96, which amends Quebec's Charter of the French Language, introduces changes that impact businesses in Quebec, particularly regarding language use in commerce and business.
Specifically, on June 1, 2025, a key element of Bill 96 regarding trademarks comes into effect, requiring translation of descriptive or generic terms within trademarks into French." By Laura Osborne Editor, Time Out Canada Monday June 2 2025 https://www.timeout.com/montreal/news/new-french-signage-rules-now-mandatory-for-businesses-in-quebec-with-fines-up-to-90-000-060225 #metaglossia_mundus
|
"Back in 2016, you may recall, there was an explosion of disparaging commentary about Hillary Clinton’s voice. It was shrill, people said, and too loud; it was harsh and flat and “decidedly grating”; it was the voice of a bossy schoolmarm whose “lecturing” or “hectoring” tone was widely agreed to be a total turn-off. No one, they said, would vote for a president with a voice like that.
As feminists immediately recognized, this criticism wasn’t really about Clinton’s voice. Her voice was just a symbol of everything her critics didn’t like about her, beginning with the simple fact that she was a woman who wanted to be president. The words her detractors used, words like “shrill” and “harsh” and “bossy”, are commonly used to express dislike and disapproval of “uppity” women, women who occupy, or aspire to occupy, positions of authority and power. That these words have little if anything to do with what an individual woman actually sounds like is demonstrated by the fact that they’re contradictory—Clinton’s voice was said to be both “shrill” (high and piercing) and “flat” (low and monotonous)—and are applied to women who sound totally different (Greta Thunberg and the late Margaret Thatcher have both been described as “strident”). What “grates” is not the voice itself, but the temerity of the woman who raises it in public and expects others to listen to what she says. Calling her “strident” or “shrill” is a way of shaming her for that. Male politicians are not subjected to this voice-shaming: they may be criticized for any number of other things (as Trump was in 2016), but their voices rarely become an issue, because men’s right to a public voice is not in question.
I found myself thinking about this last week while watching another female politician being voice-shamed: Alabama Senator Katie Britt, who responded on behalf of the Republican party to President Biden’s State of the Union address. As you’d expect, she was critical of Biden; as you’d also expect, her performance attracted a lot of criticism from non-Republicans. But much of that criticism focused not on what she had said, but on how she had said it, and especially on her use of something called “fundy baby voice”.
Here’s one example, written by Cheryl Rofer for the leftist blog Lawyers, guns and money:
The way of speaking referred to here as “fundy baby voice” (“fundy” = [Christian] fundamentalist) is evidently in the process of being what sociolinguists call enregistered. Enregisterment happens when a linguistic phenomenon (usually one that’s been in existence for some time) becomes sufficiently noticeable to be identified, given a name (e.g., “Estuary English”, “uptalk”) and commented on. “Fundy baby voice” doesn’t yet have the same level of popular recognition as, say, uptalk: as last week’s commentary demonstrated, you still have to explain what it is if you’re writing for a general audience. But people who are aware of it can tell you not only what it’s called, but also who uses it (prototypically, white southern evangelical women), what it signifies (feminine submissiveness) and what its most salient characteristics are (it’s high in pitch, has a breathy or whispery quality and is produced with a smile).
The discourse through which a way of speaking is enregistered doesn’t just explain what it is: typically it does two other things as well. One is to construct a stereotype—a generic representation which captures what makes the way of speaking distinctive, but which is simpler and more extreme than any real-life example of its use. When I listened to Katie Britt’s speech, for instance, I realized that the descriptions I’d read had exaggerated some elements of her performance while leaving out others entirely. Her voice was definitely breathy, but not as high-pitched (or as southern) as I’d expected; I was also surprised by how much she used creaky voice (which is not part of the stereotype: it’s similar to vocal fry, associated with speaking at a low pitch, and it doesn’t sound sweet or babyish). The only thing I thought the commentary hadn’t exaggerated was her frequent and incongruous smiling.
The second thing this kind of discourse constructs is an attitude to the way of speaking that’s being enregistered. In the case of fundy baby voice that attitude is strongly negative, as you can tell not only from what is said about it (e.g., Cheryl Rofer’s description of it as “bizarre”), but also from the name it’s been given, which is obviously not neutral—it’s not a label you’d expect evangelical women to use themselves. Discourse about fundy baby voice is largely a matter of people outside what Rofer calls the “fundy bubble” criticizing the speech of women inside it. Which is not, of course unusual: commentary on uptalk, vocal fry and other alleged “female verbal tics” is also produced by people who don’t (or think they don’t) talk that way to criticize, mock or shame those who do.
There are, to be fair, some exceptions: there’s a more nuanced take, for instance, in a post by the former Southern Baptist and now self-described “rural progressive” Jess Piper. Piper wrote about fundy baby voice well before Katie Britt made it a talking-point, and when she revisited the topic in the wake of Britt’s speech she reminded her readers that it isn’t bizarre to women like her who grew up with it:
Whereas Rofer suggests that evangelical women use fundy baby voice “deliberately”, Piper points out that speaking is a form of habitual behaviour shaped by lessons learned early in life. Though she no longer identifies with the values the voice symbolizes or the community it signals membership of, she hasn’t been able to eliminate the habits she acquired during her formative years—habits which were modelled, as another ex-fundamentalist, Tia Levings, explains, by “older generations speaking in a soft baby whisper to the younger”, and reinforced through “an invisible reward system of acceptance and attention”. Girls learned, in other words, how to speak so that others would listen to them.
That is not, lest we forget, something that only happens in the “fundy bubble”. We are all products of gendered language socialization, which is practised in some form in all communities. Of course, the details vary: when I was a girl what was modelled and rewarded wasn’t the “soft baby whisper” Tia Levings and Jess Piper learned. But it was just as much a linguistic enactment of my community’s ideas about “proper” femininity. Sounding “ladylike”, for instance, was constantly harped on: girls got far more grief than boys for things like yelling, laughing loudly, using “coarse” language, speaking with a broad local accent and addressing adults without due politeness. And the process continues into adulthood: it’s what’s happening, for instance, in all the modern, “diverse” and “inclusive” workplaces where women are told they sound too “abrasive” and need to “soften their tone”. At least in the “fundy bubble” the speech norms prescribed to women are consistent with the overtly professed belief that women should be sweet and submissive; they’re not enforced by bosses who claim they haven’t got a sexist bone in their body.
Jess Piper thinks we shouldn’t be too quick to judge women like the ones she grew up with, who “used the voice because they were trained to use it”. They aren’t all terrible people: in many cases, she says,
Piper does not, however, want to give “grace and understanding” to women like Katie Britt, who have real power and who do want to use it to harm others. “I am jolted awake”, she writes, “when I hear the voice dripping sugar from a mouth that claims to love all while stripping rights from many”.
If her point is that these women are hypocrites, then she’ll get no argument from me. But is it right, factually or morally, to make that argument only about fundamentalist women? Isn’t anyone a hypocrite who claims to follow Jesus’s commandment to “love thy neighbour as thyself” while preaching intolerance towards anyone who isn’t white or straight or Christian? Even the hypocrisy of a woman who forges a successful career in national politics while maintaining that women’s place is in the home is not hers alone: presumably women like Britt made their choices with the support of the husbands, fathers and pastors who, as Piper says herself, have more power within the community than they do. If those men are happy for some women to pursue high-powered careers because they think it will advance the community’s political goals, then they are hypocrites too. But by making a specifically female way of speaking into a symbol of the hypocrisy of the religious Right, we are, in effect, scapegoating the women.
To be clear, I’m not suggesting we shouldn’t criticize Katie Britt. But it would surely be possible to hold her to account—for what she said in her speech, for her record of espousing repellent political views, and indeed for her general hypocrisy—without bringing her voice into it. Is the voice-shaming of right-wing Christian women by leftists and feminists not itself hypocritical? How is it different from what feminists objected to so strenuously in 2016, the voice-shaming of Hillary Clinton by conservatives and woman-haters?
Some feminists might reply that the question is obtuse: the two cases are obviously completely different. Whereas Clinton was criticized for flouting patriarchal speech-norms (e.g., that women should be nice, be humble, speak softly and wear a smile), Katie Britt and other fundy baby voiced women are putting on a bravura display of conformity to those norms: criticizing their way of speaking is therefore a feminist act. But while I do understand that logic, there are two reasons why I don’t accept it.
First, it is my belief that when anyone sets out to shame a woman for something they wouldn’t shame a comparable man for, be that her marital status, her sex-life, her weight, the clothes she wears or the sound of her voice, that is, by definition, sexist. It relies on the existence of a double standard which feminists should be criticizing, not exploiting—especially if we’re going to criticize it when it’s used against us.
Which brings me to the second point. Making high-profile women the subject of endless public commentary about how nasty or stupid or babyish they sound is a form of sexist language-policing that has a negative effect on all women. Not just the ones who really are nasty or stupid; not even just the ones who are individually subjected to criticism. What gets said about those women is intended to teach the rest of us a lesson—to make us more hesitant about speaking publicly, more self-conscious about our speech and more cautious about how we express ourselves. If we think that’s a problem, we can’t pick and choose which forms of it to be against. We can’t argue that it’s OK when the targets are reactionary anti-feminist women, but totally out of order when they’re on our side of the political fence.
Any woman who chuckled at the tweet quoted by Cheryl Rofer—“this porn sucks”, a reference to the fact that fundy baby voice has things in common with the more overtly eroticized “sexy baby voice”—should remember that ideas about how women should or shouldn’t speak are many and varied, and available to be used by anyone who feels the urge to put a woman—any woman—in her place. You may not talk like Katie Britt, but you almost certainly talk in some way that someone somewhere could decide to mock or shame you for—because the basic problem, whether you like it or not, is one that you, like every other woman, share with Katie.
None of this is meant to imply that feminists shouldn’t be critical of the norms which define “feminine” speech: what I’m saying is that there’s a difference between critically analysing those norms and criticizing, mocking or shaming women whose speech exemplifies them. I (still) don’t understand why language-shaming is so often seen as acceptable when other kinds of shaming are not. If feminists wouldn’t criticize a female politician by making disparaging comments on her appearance–for instance, saying that Marine Le Pen looks like an old hag and Giorgia Meloni dresses like a bimbo–it’s odd that they don’t seem to have similar scruples about mocking the way women’s voices sound.
But even if you don’t share my reservations about voice-shaming women whose politics you don’t like, in this case it could be seen as a trap. When we ridicule Katie Britt’s performance (as Scarlett Johansson did in her “scary mom” parody on Saturday Night Live) we may actually be doing her a favour, politically speaking, by treating her as a joke rather than a threat. On that point we could learn something from the great Dolly Parton, who has often said that she built her career on being underestimated by people who couldn’t see past the surface trappings of her femininity—the elaborate wigs, the breasts, and indeed the voice (high, sweet and southern accented)—to the inner core of steel. Katie Britt and her ilk may not share Dolly Parton’s values (or her talents), but they are no less ambitious and determined; the threat they represent is real, and we underestimate them at our peril.
Gender, talking and The Traitors
I didn’t watch the first series of The Traitors (I’m not generally a fan of reality shows where people compete for money), but the buzz it generated made me curious enough to start watching the second, which the BBC is showing this month. It’s now reached the halfway mark, and I’m still watching. If you’re interested, as I am, in the way people talk–and more specifically in how gender affects group interaction–this show offers plenty of food for thought.
In case anyone’s unfamiliar with the format, here’s a quick rundown. Twenty-two players are gathered in a Scottish castle and sent on “missions” where they work in teams to earn the prize money they’re hoping to win. A small number of them have been secretly assigned the role of Traitors, and if any of them make it to the end they’ll take all the money, leaving the non-Traitors (“Faithfuls”) with nothing. By that point most players will have been eliminated: the Traitors murder one Faithful each night, meeting in secret to choose their victim, and there’s also a daily Round Table meeting at which the whole group banish someone they think is a Traitor (or in the case of the actual Traitors, someone they want the others to think is a Traitor). This process starts with an unstructured group discussion, and ends with each person casting a vote: whoever gets the most votes must leave, revealing their true allegiance (Traitor or Faithful) on their way out.
Verbal communication plays a central role in this game: to succeed, players need both the ability to read people (paying close attention to their actions, demeanour and–crucially–their speech) and the ability to speak persuasively in a group (since decisions require majority agreement). Individuals will vary in how they approach these tasks and how skilfully they perform them, which is partly a question of experience and temperament. But what happens in group talk isn’t just about individuals: it’s also affected by social factors...."
#metaglossia_mundus: More at https://debuk.wordpress.com/