VOLUME 136, ISSUE 7, MAY 2023
The principle of the arbitrariness of the sign is not doubted by anyone, but it is often easier to discover a truth than to assign it its rightful place. This principle dominates all of linguistics — its consequences are innumerable. It is true that they do not appear all at once with equal clarity. It is after many detours that you discover them, and with them the primordial importance of the principle. — Ferdinand de Saussure1

If you’re reading this, you probably understand English. It happens to be the de facto and sometimes de jure language of the U.S. legal systems,2 but some forms of “English” are more equal than others. Much work has reckoned with the contours of due process when people come to court with little to no English proficiency,3 but what about dialectal misinterpretation?4 English has numerous dialects, many birthed on what is now U.S. soil, but state and federal law have no coherent dialectal jurisprudence.5 The potential for error is worrying.6 Here, the goal is not to prove errors happen but to see whether the Constitution cares errors happen. It does. More precisely, the Due Process Clauses of the Constitution demand that the executive and judicial branches maintain procedures to avoid inaccurate transmission of linguistic data that adversely affects litigants. A mouthful to be sure, but the argument is that it is a violation of procedural due process to maintain procedures that will reliably cause misinterpretation of plain English and make it harder for litigants, especially criminal defendants, to win their cases.

Intuitively, something’s amiss when the legal system, seemingly arbitrarily, messes up when interpreting some forms of English and not others. Past scholarship on dialect has brought invaluable attention to the subject, and this Note seeks to continue that burgeoning tradition by showing how the status quo raises constitutional concerns and by serving as a resource and model for dialectal analysis going forward.
Part I showcases a few prominent English-to-English errors the legal system has made before, using Black English as the lens. One may wonder at the choice to use examples from only Black English when the point applies to any dialect. This is because of the insidiousness of the errors. Black English is a widespread dialect and one whose population of speakers is disproportionately represented in criminal adjudication, where colloquial testimony often features prominently. The point is general, but the readily available evidence is not. Furthermore, while it is probably true that racial minorities disproportionately bear the brunt of interpretive mistakes, any equal protection implications are for a different Note.7 Here, the focus is on language qua language, which, while correlated with race, needn’t be inextricably tied to it. Part II demonstrates that the legal system should think of these mistakes as procedural in nature with judicially administrable remedies. Part III argues that the Constitution has an open door for dialectal due process claims. And Part IV tours English legal history and documents the expansion of dialectal diversity to show how principles of linguistic fairness run deep.

I. Linguistic Mistakes

The linguistic mistakes here are not the relatively predictable ones that arise from bad or no interpreters for a non-native English speaker. Instead, these mistakes happen when a native English speaker, indeed someone who might not speak any other language whatsoever, has difficulty being understood, which affects a litigant’s case. Misinterpretation can happen during live trial testimony or even before the police have begun an interrogation.8 These errors are both dangerous and insidious. They are dangerous because cases big and small are won and lost on the minutiae of language, and they are insidious for two reasons.
First, if someone misinterprets speech and then carves that misinterpretation into the stone of the judicial record, it might be nigh impossible to uncover that a mistake happened at all. And second, if a monolingual anglophone judge from Macon, Georgia, hears Spanish, the judge knows they don’t understand, but if that same judge hears someone from England say “biscuit” and thinks of their grandmother’s delightful gravy-soaked masterpieces, the judge might not even realize their mistake. The careful work of estimating exactly how much dialectal misinterpretation happens is for another day, but here, a few examples of the misinterpretation of Black English will serve to illustrate that it happens and matters at least sometimes. In each of these cases, a dialectal misinterpretation occurred that did indeed matter or obviously could have mattered.

Before embarking on this parade of misadventure, a note on something linguists have been screaming from the rooftops for decades, but maybe from rooftops a little too far out of the earshot of the legal system: No dialect is superior or more “correct” than another. Southern English, Black English, Chicano English, Appalachian English, American Indian English, or what have you are not degenerate, lazy, sloppy, or merely slang. Each dialect has an internal structure with rules.9 It’s possible to get things wrong. What has become standard English in the United States is merely one dialect that had the historical fortune of being propelled to something approaching formal codification.10 It is certainly the written lingua franca, but faithful interpretation requires approaching the language on its own terms. English dialects vary immensely, and any attempt to make a consistent distinction between language and dialect is doomed from the outset.11 Linguists like to say that “a language is a dialect with an army and a navy”12 and that “there are as many languages as speakers.”13 With that in mind, consider these errors.

A. The Lawyer Dog

The “lawyer dog” case is probably the most famous recent clash between the legal system and English dialect. In 2015, New Orleans police wanted to interrogate a twenty-two-year-old black man named Warren Demesme on suspicion of sexual assault.14 Police had brought him in for questioning once before, and Demesme was reportedly getting frustrated, so he said: “if y’all, this is how I feel, if y’all think I did it, I know that I didn’t do it so why don’t you just give me a lawyer dog cause this is not what’s up.”15 The police did not give Demesme a lawyer, and he confessed.16 While Demesme was awaiting trial, his attorneys filed a motion to suppress the confession because the police got it out of him only after an unheeded invocation of the right to counsel.17 The prosecution argued that the statement, which the district attorney’s office provided as written above, was equivocal and therefore did not constitute a request for a lawyer.18 Eventually, the dispute got up to the Supreme Court of Louisiana, where the court denied Demesme’s petition.19 Justice Crichton additionally concurred. He argued that Demesme’s “ambiguous and equivocal reference to a ‘lawyer dog’ does not constitute an invocation of counsel that warrants termination of the interview.”20 He relied on Davis v. United States,21 where the U.S. Supreme Court held that the statement “[m]aybe I should talk to a lawyer” was ambiguous.22

To anyone scarcely familiar with Black English, it is painfully obvious that Demesme was using “dog” here (or, maybe had the defense lawyers been the ones to transcribe the statement, “dawg”) as a more familiar version of “sir.”23 He could have just as easily said “nigga,” “dude,” “man,” or “my guy.” If Demesme had said, “lawyer sir” or “lawyer man,” there would be little debate. This dialectal misinterpretation is likely clear to many who do not speak Black English, but as the next few examples will show, sneakier errors can happen as well.

B. He Finna Shoot Me

In the following case, a federal judge in dissent misinterpreted the Black English present tense as possibly being the past tense, and the majority didn’t disagree. The admissibility of one piece of evidence, whether Joseph Arnold had a gun, hinged on whether a particular statement fit the excited utterance exception to the hearsay rule.24 The majority thought yes, the dissent no. The majority and the dissent also adopted different versions of the testimony in question from a Black woman25 named Tamica Gordon. The majority had her as saying: “I guess he’s fixing to shoot me.”26 Judge Moore disagreed and wrote that “[a]fter listening to the tape multiple times” she did not hear Gordon say “he’s fixing to shoot me” but instead “he finna shoot me.”27 The difference mattered because Judge Moore believed that “[t]he lack of an auxiliary verb renders determination of whether Gordon intended to imply the past or present tense an exercise in sheer guesswork.”28 And conditional on Gordon having said “he finna shoot me,” the majority did not disagree with the dissent’s linguistic analysis.29

But the linguistic analysis was wrong and the tense unambiguous. In Black English, the auxiliary verb in a sentence like this is omittable in the present tense, and only in the present tense.30 So “he shooting” means “he is shooting,” not “he was shooting.” Insofar as the admissibility of the evidence turned on whether “he finna shoot me” was in the past tense, Judge Moore’s analysis was incorrect.

Finally, another disturbing feature of Judge Moore’s reasoning. To define “finna,” she used Urban Dictionary, arguing that the consensus nature of the site made it “unusually appropriate” for defining “slang, which is constantly evolving.”31 First, Black English, including “finna,” is not slang. And second, while in this case the definition she used was only marginally wrong32 and likely would not have affected her decision, Urban Dictionary is not the most reliable source.
For the uninitiated, this is a user-generated definition site. To demonstrate why the use of Urban Dictionary is troubling, here is one definition of “judge” that is currently on the site:

The unmerciful uncivilized unfair pieces of shit they hire in the so-called “judicial system” which is about the biggest crock of shit there is out there. Judges are pansies, often punishing the innocent and letting the guilty walk free. That’s why nobody has faith in the judicial system anymore. You’re better off to take the law into your own hands. Ever been to divorce court? The judge almost always hears out the womans side of the case and totally ignores the mans side. . . .33

The madness continues, but for propriety’s sake the rest of the definition has been omitted. Despite these warning signs, Judge Moore is not alone in relying on Urban Dictionary for definitions.34

C. I’m Gonna Take the TV

Judges don’t always have direct access to a recording like they did in the previous case. A recording might not exist at all, in which case it is up to the transcriber to ensure accurate transmission. Consider one jail call from 2015. Two linguists listened to a recording of a call the police had transcribed. They noted two particularly important errors. When the suspect said, “He come tell (me) bout I’m gonna take the TV,” the police had transcribed “??? I’m gonna take the TV,” and where the suspect said “I’m fitna be admitted” the police had “I’m fit to be admitted.”35 If these transcripts got to a trial, they could make a dangerous difference. This is just one phone call. In a now-landmark study, experimenters tested certified court reporters on Black English, and they failed in dramatic fashion:

Despite certification at or above 95% accuracy as required by the Pennsylvania Rules of Judicial Administration, the court reporters performed well below this level . . . . 40.5% of the utterances were incorrectly transcribed in some way.
The best performance on the task was 77% accuracy, and the worst was 18% accuracy. . . . [T]he very best of these court reporters, all of whom are currently working in the Philadelphia courts, got one in every five sentences wrong on average, and the worst got more than four out of every five sentences wrong, under better-than-normal working conditions, with the sentence repeated.36

There are obvious limitations to this study.37 It is only one study with only twenty-seven court reporters from only the City of Philadelphia.38 But the potential danger of inaccurate transcription is clear. Just as worryingly, transcribers sometimes intentionally change dialectal grammar in an effort to “sanitize[]” what they see as defects,39 transmogrifying meaning. Just like writing down a speech loses tone of voice, translating dialect might elide important information if the transcriber doesn’t know what to look for. For example, Black English has more aspectual markings than standard English. So, “he be running” is different from “he running.” The latter means “he’s running,” while the former means something like “he habitually runs, but not necessarily now.” A transcriber who doesn’t know that fact might think “be” is a mistake and transcribe “he is running,” changing the meaning.

D. Tryna Get Ah Glick?

If a dialectal speaker writes something themselves, no one can mishear or mistranscribe, but errors still happen. Cedric Antonio Wright, a defendant, responded “Yea” in a Facebook message to the question of whether he was “tryna get ah glick.”40 The Eighth Circuit held that the “Facebook conversation revealed that Wright attempted to trade” for the gun.41 The panel’s ultimate conclusion that a reasonable jury could have determined Wright had the gun is correct given all the evidence, linguistic and not, in the case. However, the conclusion, insofar as the judges tried to make it, that answering in the affirmative to “tryna” reveals an “attempt” is false.
A lesser-known feature of Black English is the bivalence of “tryna.” It can mean “attempting to” as its etymology from “trying to” would suggest, but especially in questions or negative sentences, the term has another meaning — to desire. “You tryna eat?” in most contexts does not mean “are you attempting to eat?” but rather “do you want to eat?”42 Similarly, “you tryna get ah glick?” might not mean “are you attempting to get a Glock?” but rather “do you want to get a Glock?” That is, Wright could have, in theory, responded, “Yeah, I want and need one, but I can’t because that would be illegal.” Even in situations where judges have direct access to written records in the originator’s own hand, dialectal errors can still happen because judges are unaware of the differences.

II. Remedial Procedures

The previous Part showed that errors can occur at any point in the process, whether professional transcribers are hired or judges have direct access to writing or audio. The goal of this Part, then, is to show that these are not just unfortunate, inevitable errors, but unfortunate, preventable errors — preventable through procedure. Consider these potential remedies.

A. Acknowledging Dialect

To ensure reviewability and a complete record, judges, law enforcement interviewers, and transcribers should explicitly record the dialect they are dealing with as specifically as possible. It might be very difficult to tell if something is written in an unfamiliar dialect without reference to something besides the text itself,43 so making sure a police interview or a jail call that gets transcribed indicates the dialect spoken opens the door to more self-conscious interpretation. Suppose the transcript provided to Justice Crichton in the lawyer-dog case had “BLACK ENGLISH” written on the first page.
Of course, misinterpretation can still happen, but in that situation, any prospective interpreter must acknowledge that if they are going to hang their hats on technicalities, they have to contend with dialect as well.

A second reason to record the dialect in question when transcribing or interpreting is future proofing. An expert at trial might find it useful to have an earlier assessment of the dialect in question, and an appellate court might be more likely to do a robust linguistic analysis if the dialect is apparent from the get-go. Without recording the dialect, during trial or on appeal, everyone might have to just guess at the proper interpretive framework. Besides making interpretation more self-conscious and reviewable, acknowledging the dialect serves to advance the legitimacy of nonstandard dialects and to promote popular knowledge that they exist and impact court proceedings.

Finally, this idea is not so crazy. Case law already implicitly acknowledges the dialects’ legal relevance. Courts have admitted lay testimony of accent to identify people in the tradition of “linguistic profiling.”44 The most famous example is probably the O.J. Simpson trial — Mr. Johnnie Cochran, Simpson’s lawyer, failed in his objection to the question: “When you heard that voice, you thought that was the voice of a young white male, didn’t you?”45 — but the Kentucky Supreme Court’s language is more explicit:

No one suggests that it [is improper for a lay witness] to identify [a voice] as female. We perceive no reason why a witness could not likewise identify a voice as being that of a particular race or nationality, so long as the witness is personally familiar with the general characteristics, accents, or speech patterns of the race or nationality in question . . . .46

Now, language is not intrinsically tied to race, as the Fifth Circuit has pointed out,47 but insofar as eyewitness testimony as to race is reliable enough to be admissible in a court of law (not to argue it should be), earwitness testimony as to race is astonishingly reliable. One experiment found that people are able to correctly distinguish between Black English speakers, Chicano English speakers, and Standard American English speakers more than seventy percent of the time — when they only heard the word “hello.”48 If courts acknowledge the existence of dialects when identifying suspects, it stands to reason they should acknowledge their existence when it comes to interpretation.

B. Jury Instructions

Pending further research on the best form for such instructions, judges could give juries cautionary instructions when dialects with public opprobrium show up in the courtroom. Factfinders might have prejudice against certain dialects. Linguists have shown that people are very good at identifying dialects very quickly49 and that potential jurors are prejudiced against certain dialects, notably Black English.50 Some find speakers of Black English to be less believable, less trustworthy, more criminal, less comprehensible, and more likely to be in a gang.51 Judges could explicitly instruct jurors that dialect, accent, and nonstandard grammar have no bearing on the truth of testimony or the person’s potential guilt. This might prompt self-conscious deliberation of those prejudices. The actual instruction would need to be more specific and probably different depending on the testimony presented, but if a speaker of Black English were to testify, the judge might say the following to the jury before they do:

The next witness you will hear speaks Black English. This is a valid dialect of English and not wrong. Some things you hear might be more difficult to understand.
Some things you hear might have a different meaning than you might initially think because of grammatical differences. But you must take care not to allow the differences in language alone to affect your judgments of the witness’s credibility. It would be unfair to the parties to ignore or discredit someone’s testimony just because of how they speak.

This Note strongly welcomes and encourages further-refined jury instructions that are most effective and avoid abuse. This particular instruction might very well not do that, and empirical scholarship might have something to say.

A word about implicit bias. While there is almost no reason to doubt implicit bias exists, it is important to note that there is little empirical reason to believe conventional implicit bias training works in changing behavior.52 Similarly, research on the comprehensibility of jury instructions and the effectiveness of limiting instructions and admonitions is far from promising.53 That said, cautionary instructions, which the above model attempts to be, have mixed results in the empirical literature, resulting in either no or a slightly positive effect.54 Given the jury instructions’ tiny cost, further research into benefits and abuses is extremely worthwhile.

C. Dialectal Interpreters

The nuclear option is to get interpreters. This might sound crazy at first, but in 2010 the Drug Enforcement Administration released a memo to much media fanfare requesting nine “Ebonics” translators.55 The benefits of a competent translator are obvious: reduction of misinterpretation and formal recognition of the dialects as valid. And several prominent linguists think such an idea for Black English is at least going in the right direction.56

Now the drawbacks. First, no standardized tests exist for Black English and many other dialects, so for the time being, consistently measuring competency might be impossible. Second, interpreters are costly to train and hire.
Third, having a native English speaker use an interpreter might be seen as a slight. And finally, if all there is is a written transcript, an interpreter might be of only limited value.57

One partial solution is to increase the number of jurors familiar with dialects important to the trial.58 When possible, this would help because they might act as informal interpreters,59 but in the case of less common dialects or if the trial happens in a place far from the epicenter of an important dialect, it might not be feasible.

D. Reputable Interpretive Tools

Anyone interpreting dialect, especially judges, should use the best evidence available for determining the meaning of the language in question. While science is never perfect, it only makes sense that borrowing the tools that linguists, lexicographers, semioticians, and historians provide will make interpretation better.

The first step is to not use sources like Urban Dictionary if another option exists. Judges should strongly disfavor any resource that is publicly created, has little moderation, and does not cite sources. For dialects a judge speaks, they might be able to discern what is useful and what is not from less rigorous academic sources, but to rely on such sources for language the judge is unfamiliar with is dangerous.

The second step is to find reputable sources on the dialect in question. A somewhat more accurate, quick-and-dirty, crowdsourced definition site is Wiktionary, which at times explicitly notes the dialect of the word present, gives etymologies, and cites sources. If you question its usefulness, ask yourself if you know the Black English definition of “kitchen.”60 One of Wiktionary’s main limitations is its focus on words and not grammar. Furthermore, many dialects do not have formal dictionaries61 or textbooks, so the next best thing is academic linguistics sources.
As with any specialized area, these papers might be opaque to anyone without specialized training, which is unfortunate, but the law asks judges to do many things other than pure legal interpretation. The Federal Rules of Civil Procedure acknowledge this interdisciplinary approach. Rule 44.1 allows judges to consider “any relevant material or source” when trying to figure out what foreign law means.62 And what this means when judges are faced with other languages can engender disagreement. Judges Posner and Easterbrook thought “[j]udges should use the best of the available sources,”63 which meant looking at official translations and secondary literature to determine foreign law instead of relying on party-provided expert testimony, which “adds an adversary’s spin.”64 Furthermore, Judge Posner had characteristically colorful words, arguing that the United States’s “linguistic provincialism does not excuse intellectual provincialism.”65 Similar reasoning applies to what judges should do when it comes to dialect. It seems equally provincial, if the term can be excused, to uncritically adopt the prosecution’s transcript when dialect is involved, as happened to Warren Demesme.

E. Audio Recording

If possible, audiovisual or audio recording of statements would reduce error and single points of failure in the system. If someone mistranscribes, it’s permanent unless the source remains. Compared with mere acknowledgment and citing reputable sources, recording and storing audio have more-than-negligible costs,66 but they’re still probably much cheaper than professionally qualified court interpreters who get $495 per day in federal court.67 Recording’s main drawback is its inability to be universally applied. If the only thing the court has is earwitness testimony of something that happened out of court, nothing can be done.
But when it is possible for the government to record testimony, doing so would both reduce error in the first instance and make it possible to remedy errors that do happen.

F. Transcriber Training

The shocking statistics from the Philadelphia experiment, if anywhere near generalizable, indicate that federal and state governments should mandate training in common dialects for all transcribers, and the National Court Reporters Association should create standard curricula for transcribing both common and uncommon dialects. Having standards will both reduce error and make accurate transcription more accessible. Currently, at the federal level, the Judicial Conference recognizes as certified those who pass the Certified Realtime Reporter exam,68 which requires an accuracy of ninety-six percent on five minutes of real-time testimony at two hundred words per minute.69 Notably, the Association has no standardized testing, training, or certification for dialect.70 Building that infrastructure will be costly and take time, but the current certification process might mean very little for many English speakers.

III. Dialectal Due Process as Procedural Due Process

Procedures that produce dialectal misinterpretation create constitutional concerns. Just because a particular procedural safeguard would reduce the likelihood of error does not mean the government has to do it, but courts do need to impose some measures. The question here is not whether the state is depriving someone of a right in the first place but whether dialectal misinterpretations implicate due process at all. They do for the simple reason that the “fundamental requirement of due process is the opportunity to be heard ‘at a meaningful time and in a meaningful manner.’”71 Many courts have required an interpreter when a litigant understands little to no English,72 but this Part argues intra-English issues are also worthy of remediation.
“For all its consequence, ‘due process’ has never been, and perhaps can never be, precisely defined.”73 But if there is a flagship article for what procedural due process means, it is Judge Friendly’s “Some Kind of Hearing,” and if there is a flagship case for what procedural due process means, it is Mathews v. Eldridge.74 This Part’s work is to analyze which factors dialectal misinterpretation implicates and why consistent misinterpretation can lead to less-than-meaningful hearings. As the title of Judge Friendly’s article hints, current law around what process is due is fluid and depends on the particular circumstances, and since dialectal misinterpretation transcends circumstance, the Constitution will demand no uniform rule. The conclusion here is not that such and such a procedure with respect to dialect is required but rather that some procedure with respect to dialect is required.

A. Mathews v. Eldridge

Mathews requires an opportunity to be heard “in a meaningful manner.”75 Reasonable minds can disagree about what this means, but analyzing Mathews’s balancing test shows that dialectal misinterpretation implicates exactly what judges must consider when shaping the contours of constitutionally required process. Mathews commands that to decide whether the Constitution requires more or different process in a case, courts must consider:

First, the private interest that will be affected by the official action; second, the risk of an erroneous deprivation of such interest through the procedures used, and the probable value, if any, of additional or substitute procedural safeguards; and finally, the Government’s interest, including the function involved and the fiscal and administrative burdens that the additional or substitute procedure requirement would entail.76

If the mistakes showcased in Part I tell us anything, it is that these mistakes do matter at least sometimes, and in big ways.
This section will touch on all three factors and the case’s axiomatic proclamation that due process requires an opportunity to be heard in a “meaningful manner.”

Dialectal misinterpretation can feature in any case, so the private interest will vary considerably. Empirical work is necessary to determine when dialect does indeed come up most prominently, but it will certainly feature in cases where litigants have a lot at stake, whether that’s criminal charges like in State v. Demesme77 and United States v. Arnold,78 deportation, or parental rights termination. Dialect is always a potential problem.

Moving to the second factor and the risk of erroneous deprivation, more work is necessary before anyone can in good faith make an estimate as to exactly how much dialectal misinterpretation happens and matters for the deprivation. But if the Philadelphia experiment proves generalizable, mistakes might be happening with alarming frequency. The probability will naturally differ by dialect, region, and court. And, when one considers how badly certified court transcribers did in the study that does exist, it’s a hard sell that the risk of erroneous deprivation from the lack of a coherent approach to dialects is too small to be judicially cognizable. For one, Louisiana should have thrown out Demesme’s confession.

And, as Part II demonstrated, there do exist procedural safeguards that have a good chance of reliably protecting against dialectal misinterpretation, both in the short and the long term. If the Louisiana Supreme Court had acknowledged that Demesme was speaking Black English and looked to academic sources on the dialect, he probably would have prevailed.
If police-employed jailhouse transcribers had training and testing on dialects, they would likely not transcribe “fitna be admitted” as “fit to be admitted.” Getting precise statistical measurements of how often these problems occur is not possible at the moment, nor is knowing exactly how much any particular procedure will help. That said, Mathews itself said that “[b]are statistics rarely provide a satisfactory measure of the fairness of a decisionmaking process.”79 When it comes to misinterpreting dialect, the principal unfairness is that by having the misfortune of speaking or relying on testimony in the wrong kind of English, defendants find a legal system unprepared to treat them fairly.

On to the government’s interests. Every procedural safeguard will have a different cost. Interpreters are likely the costliest, followed by dialect certification for transcribers. Acknowledging dialects, instructing juries, and using reputable sources are basically costless. And increasing the use of audio recording lies somewhere in between. As with the entire analysis, the strength of the government’s interest is context dependent. But it seems unlikely that the cheapest procedures’ minimal costs would often overcome the specter of erroneous deprivation. Audio recording already happens in many situations, and transcribers already exist. Increasing the adoption of audio recording and improving the accuracy of transcribers for dialects would impose real costs, but for important deprivations, such as physical liberty, due process might often require both. The argument for interpreters is weakest, particularly if there are ways to improve the accuracy of transcribers, but it is not implausible that they should be required for tricky dialects when physical liberty is at stake.
Finally, as has been said but bears repeating, linguistic issues go to due process’s root because the requirement of a “meaningful” opportunity to be heard requires, if anything at all, the hearer to understand the litigants. Dialectal barriers infringe upon understanding at the most basic level, even when the interpreters might not think so. In some circumstances, the litigant or their counsel might catch mistakes as they happen, but in the course of litigation there is not constant confirmation of a sort of consensus ad idem. So, the system cannot place the duty of clearing up misinterpretations at the feet of litigants. B. “Some Kind of Hearing” Judge Friendly’s article is incredibly influential, cited by more than 300 cases and by the Supreme Court itself eleven times.80 The article lists eleven factors “that have been considered to be elements of a fair hearing, roughly in order of priority.”81 The factors are not a Restatement-like list of what proceedings require, but a framework for the ways a procedure might implicate due process.82 Dialectal misinterpretation strongly implicates several of these factors, including the bias of the tribunal, a decision based only on the evidence presented, and the making of a record. More tenuously, it also implicates an opportunity to present reasons why the proposed action should not be taken. Starting with the most fundamental, a tribunal that does not prepare itself for dialectal misinterpretation is biased. Judge Friendly called an unbiased tribunal “a necessary element in every case where a hearing is required.”83 And unselfconscious dialectal interpretation can bias a tribunal. Juries are more likely to evaluate speakers of certain dialects poorly,84 and that is clearly a thumb on the scale against those who rely on such speakers’ credibility. 
More generally, even if the misinterpretation is not the result of invidious evaluation of the speaker, a tribunal that fails to treat each dialect on its own terms and instead forces them all to conform to one mainstream structure has biased itself against speakers of all those dialects it fails to recognize. Dialect A speakers get good interpretation, but Dialect B speakers get bad interpretation and are thus less able to press their case to the court. Next, consider the idea that when judges or juries freestyle interpretations of dialectal testimony or base their interpretations on unreputable sources of linguistic information, they are making a decision based on evidence other than that presented. It is not just a mistake but also an injection of unnecessary randomness into the decisionmaking process. Whether the interpreter knows it or not, if they are unprepared for dialects, they might, based on the random similarities or differences with their own dialect, hold incorrectly. The fact that “dawg” sounds like “dog” is historical happenstance, but the result is that Louisiana deprived someone of the right to counsel. An illustration of the point in the extreme: a judge, knowing a smidge of Spanish, would be mistaken if they thought a person who said they couldn’t come to court because they were “embarazada” meant they were too embarrassed to come in. Similarly, a judge cannot conclude “tryna” means “attempt” simply because it sounds like the Standard English “trying to.” Otherwise, stochastics, not evidence, is determining outcomes. Third, consider the record. 
The record’s importance is so ingrained that Judge Friendly felt “Americans are as addicted to transcripts as they have become to television; the sheer problem of warehousing these mountains of paper must rival that of storing atomic wastes.”85 And the main purpose of this mass of paper, so the argument goes, is the ability for judicial review or administrative appeal.86 When stenographers and transcribers are systematically bad at interpreting English, then, they vitiate that purpose. The Supreme Court said “denial [of free transcription services to indigent criminal defendants] is a misfit in a country dedicated to affording equal justice to all and special privileges to none in the administration of its criminal law”87 because “[t]here is no meaningful distinction between a rule which would deny the poor the right to defend themselves in a trial court and one which effectively denies the poor an adequate appellate review accorded to all who have money enough to pay the costs in advance.”88 As has been shown, dialectal misinterpretation can not only reduce a transcript’s accuracy but also introduce harmful errors, like changing hearsay (“He come tell (me) bout I’m gonna steal the TV.”) to a confession (“??? I’m gonna steal the TV.”). Finally, dialectal difficulties implicate the ability to present reasons why a proposed action should not be taken. Pro se litigants might have an “inability to speak effectively for” themselves,89 not only because they don’t understand the law but also because their dialect might not reach the judge as easily. In the situation where a litigant represents themself, if they speak a dialect for which the hearer is not prepared, how much of an opportunity is it? IV. The Weight of History Principles of linguistic justice run deep in English legal tradition. 
And, maybe counterintuitively, dialectal due process was likely much less of a problem in the past because justice was more local and because English probably has more varieties today than ever. So, attempts to foreclose dialectal due process claims on the basis that they didn’t exist historically (assuming they didn’t arguendo) are misguided. The “primary guide in determining whether the principle in question is fundamental is, of course, historical practice.”90 This idea has special weight when considering the state power to regulate procedure because: [I]t is normally “within the power of the State to regulate procedures under which its laws are carried out,” . . . and its decision in this regard is not subject to proscription under the Due Process Clause unless “it offends some principle of justice so rooted in the traditions and conscience of our people as to be ranked as fundamental.”91 As the previous Part shows, dialectal due process implicates fundamental principles like unbiased tribunals, but historical practice and circumstances also show linguistic fairness has a pedigree in its own right. The aim is to show not that earlier courts self-consciously accommodated dialect in particular but that they were structurally less likely to have dialectal-misinterpretation issues because juries worked radically differently and there were simply fewer dialects of English to contend with — a problem that will likely only get worse. This Part highlights trends reaching back to pre-Norman England and uses principles of historical linguistics to argue that early courts didn’t necessarily need self-conscious dialectal due process, but courts today do. A. The Medieval View of Linguistic Fairness The idea of linguistic due process is old. 
The Pleading in English Act 136292 rebuffed the “great Mischiefs” resulting from the fact that people had “no Knowledge nor Understanding of that which is said for them or against them” in court, which was in French.93 The statute felt that “reasonably the . . . Laws and Customs [the rather shall be perceived] and known, and better understood in the Tongue used in the said Realm.”94 Fourteenth-century statutes likely don’t explicitly declare dialect’s importance,95 but that does not foreclose constitutional dialectal due process because the Constitution incorporates many common law ideas of fairness. The Act was the beginning of an almost seven-century-long tradition of conducting court in the vernacular. To say, then, that the idea of linguistic misinterpretation (and one subset of that, dialectal misinterpretation) has no historical basis is false. B. Local Justice Even if no dedicated procedures for dialectal due process existed historically, justice’s local nature meant that procedure had dialectal protection baked in. If the lion’s share of people investigating and passing judgment on you are your literal neighbors, they are much more likely to understand or even speak your dialect. Local justice has been significantly diluted from the prebiotic broth of medieval England. First, terminology. 
A “hundred” was the second-smallest administrative unit in England, bigger than a parish but smaller than a shire.96 The name comes from its origins as consisting of one hundred “hides,” a maddeningly inconsistent unit thought equivalent to the land needed to support a peasant family.97 Hundred courts empaneled juries from the hundred.98 And in Anglo-Saxon England, the most important interaction people had with the Crown occurred in these courts.99 To be sure, some historical Germanic practices, such as defending court judgments by duel,100 fell out of favor, but many of the practices and procedures developed in hundred courts became a model “all over England in the courts of the manors.”101 The earliest juries were something like “a body of neighbours . . . summoned . . . to give upon oath a true answer to some question.”102 The key word is “neighbours.” Hundred jurors would be coming from a much smaller pool than juries today in the United States, meaning they were much more likely to truly be from the same community and speak the same dialect. The number of hundreds into the nineteenth century was somewhere around 800.103 The English population from 1790 to 1800 was around eight million.104 That means a pool of 10,000 people per hundred. For reference, Harvard University has a workforce of around 13,000 people with a student population of around 23,000.105 The low population density itself was a procedural safeguard against dialectal misinterpretation because the factfinder was more likely to speak the relevant dialects. The average judicial-district population density in the United States at both state and federal levels falls well outside the average in England around 1800 before the nineteenth-century population explosion. 
The states’ average judicial-district population density is around 16 times higher, and the federal government’s ninety-four districts have a population density 350 times higher.106 In sum, jurors today come from more populous districts and serve at random,107 meaning they are less likely to be from truly the same linguistic community as the litigants and witnesses. Through the centuries these hundred courts slowly lost importance. By the 1830s only a few remained, with even fewer active,108 and other courts certainly existed and empaneled juries.109 Some had even smaller jury pools than the hundred courts.110 And during the hundred courts’ twilight in the 1800s, England saw the rise of county-level justice with justices of the peace or magistrates,111 who still make up about eighty-five percent of the English judiciary,112 and the county courts, which ascended in importance in the mid-1800s.113 But the structure protecting dialectal understanding was not immediately lost. The jurors’ degree of involvement itself ensured a measure of dialectal understanding. In the earliest times, jurors served because they knew facts concerning the case and the accused.114 These early juries were self-informing, investigating the facts separate and apart from the trial.115 And although jurors were no longer selected with input from the judges starting in 1730116 or self-informing, they were much more involved in the trial process than they are today. Jurors could ask their own questions, request more witnesses, and, crucially, volunteer their own pertinent knowledge about local custom, people, and places.117 Blackstone gave a categorical answer on jury involvement as it existed when he published the first edition of Book III of his Commentaries in 1768. He wrote that “the practice . . . now universally obtains, that if a juror knows any thing of the matter in issue, he may be sworn as a witness, and give his evidence publicly in court.”118 Such a notion is almost anathema today.
But the lawyerization of the courtroom and the separation between the juror’s judicial and personal role was only beginning in this era.119 Since jurors were more local and involved in trial process historically, they would have had many more chances to clear up dialectal misinterpretation in view of the judge and litigants. C. Dialectal Divergence The analysis thus far makes a key assumption — that the number of dialects that exist has remained constant. I
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
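To make the skew described above concrete, here is a minimal, self-contained sketch (using an entirely hypothetical toy corpus, not real training data) of how the language share of a labeled dataset can be tallied — the kind of imbalance the article describes shows up immediately in such counts:

```python
from collections import Counter

def language_share(docs):
    """Return each language's fraction of a collection of (language, text) pairs."""
    counts = Counter(lang for lang, _ in docs)
    total = sum(counts.values())
    return {lang: n / total for lang, n in counts.items()}

# Hypothetical toy corpus illustrating English dominance: 8 of 10 documents
# are English, with one Hindi and one Swahili document.
corpus = [("en", "...")] * 8 + [("hi", "..."), ("sw", "...")]

shares = language_share(corpus)
print(shares["en"])  # 0.8 — English dominates this toy sample
```

In a real corpus the labels would come from language identification over billions of documents, but the arithmetic of the imbalance is the same.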
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
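A benchmark like the one described above boils down to averaging per-task scores for each language and comparing the averages. The sketch below uses invented numbers (not AFROBENCH’s published results) purely to illustrate how such a performance gap is computed:

```python
def average_scores(results):
    """Average each language's scores across tasks.

    results: dict mapping language -> list of per-task scores (0-100 scale).
    """
    return {lang: sum(scores) / len(scores) for lang, scores in results.items()}

# Hypothetical per-task scores for three tasks, chosen only to illustrate
# the kind of English-vs-African-language gap the benchmark reports.
results = {
    "English": [90, 85, 95],
    "isiXhosa": [55, 60, 50],
    "Swahili": [65, 70, 60],
}

avgs = average_scores(results)
gap = avgs["English"] - avgs["isiXhosa"]
print(gap)  # 35.0
```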
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion-parameter model developed by the Kenyan foundation Jacaranda Health to provide new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
A written interview between Henry Widener, Portuguese Language Reference Librarian in the Hispanic Reading Room, and Brazilian author and translator Vanessa Bárbara on the author’s lifelong engagement with translation, both through translating authors like Lewis Carroll, F. Scott Fitzgerald, and Virginia Woolf, and her own award-winning novel Noites de Alface.
"Through its blog series Conveyances, the Library of Congress’ International Collections explore the ways in which the translated words held in the Library’s collections link us across continents, cultures, and centuries. The following is a written interview between Henry Widener (HW), Portuguese Language Reference Librarian in the Hispanic Reading Room, and Brazilian author and translator Vanessa Bárbara (VB) on the author’s lifelong engagement with translation.
HW: How did you get into working as a translator? How long have you worked with translation?
VB: I was a fact-checker for the publisher Companhia das Letras, but I always enjoyed working with translation. I started out translating children’s literature for CosacNaify and then I took a test to start translating adult literature for Companhia das Letras. The first title I translated was The Raw Shark Texts (Cabeça Tubarão) by Steven Hall, in 2007.
HW: Many people say translation is an art, something which involves the creativity of the translator. How does your voice or your hand appear in your translations?
VB: Translation is not a mechanical act. It’s not like you can send a text through a machine and get a result that is always the same. My concern is always achieving a balance between loyalty to the work itself, solidarity with the reader, and containing my own creative impulses, which can run totally wild.
In the end, I am a very slow translator because it takes me a long time to assimilate the voice of the author. This can only happen as the work progresses, which is why I often have to revisit the initial chapters to correct my initial lack of familiarity. Despite all that, my voice always comes through in translation, one way or another. I try to at least make sure that the final product is a duet.
HW: You have translated the works of F. Scott Fitzgerald, Art Spiegelman, Virginia Woolf, Lewis Carroll, and Gertrude Stein, among others. What ties together these authors or your translations of them? Were there any particular challenges?
VB: Lewis Carroll was the most difficult, even though, of all the works I’ve translated, his was the most similar to my own style of nonsense humor. Translating his poetry was quite nearly impossible. I think that, towards the end of my life, I can try again, and I will find better solutions.
Translating The Great Gatsby was like traveling in the novel: full of pain yet delightful. I even made a map to help locate and guide myself through the story’s setting. It was very difficult to find Gertrude Stein’s voice.
Something that links all these authors is they all demanded a lot of research, finding references and studies on which to base my own choices of translation – most of all with Virginia Woolf. I really enjoy this part of the work, which I think speaks to my fact-checking spirit.
HW: Your book Noites de alface (The Lettuce Nights) has been translated into six languages. What about this book has enabled it to speak to audiences across languages and cultures? Has the reception of this book differed according to the language of translation?
VB: I think the story is almost universal – it’s about relationships, neighbors, grief, introversion. It’s also pretty crazy, with elements of suspense and strange characters. I would like to acknowledge the French translation by Dominique Nédellec, which was exceptional and crucial for the book to win the Prix du Premier Roman Étranger, in 2016. (One example: he translates the phrase “cansada para burro”, which closes the novel, to “vachement fatiguée”, which I thought was genius.)
HW: Many people prefer to read works in their original language as they feel it brings them closest to the author and their work. What would you say to someone who can only engage with an author through translation? Are there any authors you love whom you have only engaged in through translation?
VB: Ah yes! All of the Russians (I love Tolstoy), as well as some French authors: I fell in love with Flaubert through the Portuguese translations. I think something will always be lost when reading a work in translation, but you also gain something: the translator’s work of cultural mediation is rich and beneficial to the reader in many ways. For instance, a translator’s hand takes a reader to a universe that is so different from their own.
I grew up reading old translations that were full of old-fashioned syntax and spellings. This more antiquated language engaged the text in a certain way, as if I were really reading in another language, though it was my own. Literature is like this, too, I think. You arrive at the book with a desire to dive into another universe, even if it seems out of place in relation to your own."
https://blogs.loc.gov/international-collections/2025/12/conveyances-vanessa-barbara/
#Metaglossia
#metaglossia_mundus
"Behind the Glass Booth: The Realities of Being a Simultaneous Conference Interpreter
When the audience sits back and listens to a seamless stream of speech — whether in Polish, English, or any other language — it is easy to forget that behind the transparent booth, someone’s mind is working at full capacity. Simultaneous conference interpreting is one of the most cognitively demanding professions in the world. It requires linguistic precision, deep subject-matter understanding, technical mastery, and nerves of steel. The recent Jubilee Gala celebrating the 75th anniversary of the Medical University of Lublin, held on 21 November, offered a perfect illustration of how vital — and how challenging — this role can be.
Reflections inspired by the Jubilee Gala of the Medical University of Lublin, 21 November 2025.
The Interpreter’s Task: More Than Just Words
At first glance, interpreting seems simple: listen, understand, and repeat the message in another language. But simultaneous interpreting requires doing all three at the same time, with only a two-to-three-second delay. The interpreter must understand the speaker’s intention, tone, and emotion, not just the vocabulary. Every sentence becomes a puzzle solved in real time.
Interpreters at academic and medical events, such as the MU Lublin Jubilee Gala, must often navigate:
Complex terminology
Long technical sentences
Discipline-specific abbreviations and acronyms
Cultural references
Formal protocol and ceremonial language
This means being excellent in both languages is not enough. One must also be prepared for specialised content ranging from medical achievements to academic reforms and institutional history.
At the Jubilee Gala (21 November)
Ceremonial speeches included academic titles, medical terminology, and historical references—a challenging combination for interpreters.
Multiple international guests required real-time interpretation for inclusivity and protocol.
The interpreters’ work ensured that the 75-year history of the Medical University of Lublin was accessible to everyone in the hall.
Equipment: The Invisible Partner
Modern conference interpreting relies heavily on technology. At the Gala, as at any major academic event, interpreters work with:
Soundproof booths
High-fidelity microphones
Noise-cancelling headsets
Receiver systems for the audience
When the equipment works perfectly, the interpreter becomes almost invisible—an ideal outcome. But even minor technical issues can turn the job into a high-pressure crisis. A crackling microphone, delayed audio feed, or poor acoustics can make comprehension nearly impossible. In simultaneous interpreting, every fraction of a second counts.
Why Speakers Make the Interpreter’s Job Hard
It is a truth known to all conference interpreters: even the most experienced professionals struggle when speakers unintentionally sabotage the process. Common difficulties include:
1. Speaking too fast
Some speakers accelerate when excited, emotional, or pressed for time. Interpreters then must condense content while trying to preserve meaning.
2. Poor articulation or accent issues
Mumbling, unclear diction, or heavy accents can severely hinder comprehension.
3. Reading written speeches at high speed
A speaker reading from paper tends to use unnatural pacing and dense phrasing, making it harder to follow.
4. Using humour, idioms, or wordplay
These rarely have direct equivalents across languages. Interpreters must decide instantly whether to explain, adapt, or omit.
5. Diverting from the script
Improvised additions, personal anecdotes, and last-second changes are common at gala events—and difficult to predict.
6. Using technical terminology without context
This is especially significant at medical celebrations. Even a skilled interpreter may need to rely on prior preparation or on-the-spot inference.
At the MU Lublin Jubilee Gala, the combination of ceremonial speeches, academic terminology, historical references, and expressions of gratitude created a rich but demanding environment for any interpreter.
The Mental Effort Behind the Scenes
Simultaneous interpreting is often compared to performing music, solving puzzles, and doing live broadcasting—simultaneously. Cognitive research shows that interpreters use working memory, long-term memory, multitasking skills, and rapid decision-making continuously during a session.
To prevent fatigue, interpreters typically work in pairs, switching every 20–30 minutes. Even short intervals can feel like marathons when they involve complex medical terms or long, protocol-heavy speeches like those heard during the MU Lublin celebration.
Why the Interpreter Matters
An anniversary gala such as the 75-year celebration of the Medical University of Lublin brings together international guests, partner institutions, and dignitaries. Without professional interpreters:
Non-Polish-speaking guests would miss crucial parts of the ceremony
The university’s achievements could not be communicated effectively
Collaboration and international relations would suffer
Interpreters enable institutions to present themselves confidently on the global stage. Their work ensures that the message is conveyed accurately, respectfully, and in real time.
A Profession Built on Excellence
Being a simultaneous conference interpreter requires not only bilingual proficiency but near-native command of both languages, superb listening skills, cultural sensitivity, and emotional resilience. Events like the Jubilee Gala are a reminder that the quality of interpretation shapes audience experience, international visibility, and even institutional reputation.
Behind the dignified speeches, ceremonial music, and celebratory atmosphere, the interpreter’s work remains largely unseen—but absolutely vital.
What the Photos Reveal: A Glimpse Into the Booth
The images from the gala offer a rare look into the interpreters’ environment. They show a portable simultaneous interpreting booth, the standard for events hosted in venues without built-in infrastructure. The booth is:
sound-insulated to prevent external noise from entering
compact, often warmer than the rest of the room
equipped with transparent panels to allow a view of the hall
furnished with two chairs and minimal desk space
Inside, two interpreters work side by side—also standard practice. They switch every 20–30 minutes, because the mental load is too heavy for one person alone.
On the desk are the essential tools of the trade:
an interpreting console with volume, channels, and microphone controls
headsets delivering the speaker’s voice
a desk lamp, because booths are typically dim
laptops and display monitors with a video feed of the stage
printed scripts, notes, terminology lists, and event programmes
From these details alone, one can see the professionalism required: interpreters prepare in advance, organise their materials, and rely on technology to keep up with speakers they cannot always see directly.
Simultaneous interpreters listen and speak at the same time, with only a 2–3 second delay—one of the most complex mental tasks measured by cognitive science.
Professional interpreters switch every 20–30 minutes to avoid mental fatigue.
The booth used at university galas is soundproof and often kept at a cooler temperature—interpreters heat up quickly from the mental effort!
On average, interpreters process 120–160 words per minute. Some speakers go beyond 200.
Interpreters hear everything through headphones—even pages turning or someone tapping their pen near the microphone.
Before events like university jubilees, interpreters often prepare by reading programmes, biographies, academic abstracts, and even scanning the campus website for terminology.
Quotes from the Booth
“Please, slow down—my brain can only sprint for so long!”
—Every simultaneous interpreter at least once in their life
“Interpreting is like dancing: the speaker leads, the interpreter follows.”
“A good interpreter is invisible. A great interpreter makes the speaker sound brilliant.”
“When the microphone crackles, we pray. When the speaker improvises, we improvise too.”
Written by Katarzyna Karska (Department of Foreign Languages, UMLub)"
https://umlub.edu.pl/reports/reports_item/behind-the-glass-booth-the-realities-of-being-a-simultaneous-conference-interpreter
#Metaglossia
#metaglossia_mundus
"Call for Papers – International Conference
Translating Comics: Between Bubbles, Cultures, and Constraints in East Asia
April 10–11, 2026, Paris Nanterre University, CRPM (in a hybrid format)
Organizers: Marie LAUREILLARD (Paris Nanterre University), Jaqueline BERNDT (Stockholm University), assisted by XIANG Wenlan (Paris Nanterre University)
Communication language: English (presentations in French possible, with slides in English)
As a hybrid form that merges text and image, comics offer a particularly stimulating field of inquiry for translators and translation scholars. This study day aims to explore the specific challenges of comic translation, at the intersection of literature, visual semiotics, and culture. What are the difficulties posed by the spatial layout of text? How can humor, puns, cultural references or typographic effects be rendered in another language? What roles do publishing norms or censorship play in different cultural areas?
We welcome proposals focusing on translations from or into Asian languages, with particular attention to comics from the Sinophone, Japanese, and Korean spheres (manhua, manga, manhwa). The adaptation of national or foreign literary works will also be considered a form of translation. We can also consider the editorial dimension of translations: for example, which Japanese works have been translated into Chinese, and which Chinese works into Japanese?
Possible topics include, but are not limited to:
Translation of the various textual components (dialogue, onomatopoeia, titles…)
Cultural adaptation and localization strategies
Editorial and graphic constraints (format, reading direction, lettering…)
Specific issues in various graphic traditions (manga, manhua, webtoon…)
Case studies of published or ongoing translations
The translator’s role in the production chain
The issue of adaptation
Editorial selection
—
Date of the event: April 10 and 11, 2026
Location: University of Paris Nanterre, CRPM
Format: Hybrid.
Submission deadline: Proposals (300 words + 5-line bio-bibliography) to be sent before January 15, 2026, to marie.laureillard@parisnanterre.fr and 44020957@parisnanterre.fr
—
Some bibliographical references
BERNDT, Jaqueline & KÜMMERLING-MEIBAUER, Bettina (eds.). Manga’s Cultural Crossroads. Routledge, 2013.
BORODO, Michał (ed.). Reimagining Comics: The Translation and Localization of Visual Narratives, inTRAlinea, 2023.
BORODO, Michał & DIAZ-CINTAS, Jorge (eds.). The Routledge Handbook of Translation and Young Audiences, Routledge, 2025.
BOUVARD, Julien; DANYSZ, Norbert; LAUREILLARD, Marie (eds.). La bande dessinée en Asie orientale : un art en mouvement, Paris: Maisonneuve & Larose / Hémisphère, 2025.
HUTCHEON, Linda. A Theory of Adaptation, 2nd ed., Routledge, 2013.
KAINDL, Klaus. “Comics in Translation”. In Handbook of Translation Studies, vol. 1. John Benjamins, 2010.
MARTINEZ, Nicolas. Reframing Western Comics in Translation: Intermediality, Multimodalities & Cultural Transfers. Routledge, 2022.
MITAINE, Benoît; ROCHE, David; SCHMITT-PITIOT, Isabelle (eds.). Bande dessinée et adaptation : littérature, cinéma, TV. Clermont-Ferrand: Presses universitaires Blaise-Pascal, 2015.
VENUTI, Lawrence. The Translator’s Invisibility, Routledge, 2008 (1995).
ZANETTIN, Federico (ed.). Comics in Translation, Routledge, New York, 2016.
Contact: Marie Laureillard, CRPM
Reference URL: https://ceei.hypotheses.org/26465
Address: Université Paris Nanterre, Nanterre, France"
https://www.fabula.org/actualites/131623/traduire-la-bande-dessinee-entre-bulles-cultures-et-contraintes.html
#Metaglossia
#metaglossia_mundus
"The Agence universitaire de la Francophonie (AUF) and the Government of Vanuatu’s Language Services Directorate (DSL) have signed a collaboration agreement aimed at professionalising and strengthening the capacities of the translators and interpreters serving the Vanuatu administration.
The agreement, signed on 11 December 2025 by Mr Nicolas Mainetti, AUF regional director for Asia-Pacific, and Mr Stewart Garae, director of the DSL, marks a new stage in the AUF’s support for public policies promoting multilingualism and good governance.
The agreement sets out the framework for an ambitious training programme for the staff responsible for translation and interpreting within the Government of Vanuatu. The AUF will contribute its expertise to the instructional design, coordination and delivery of the modules, as well as to the evaluation of the activities. For its part, the Language Services Directorate will identify the beneficiaries, handle logistics on the ground and monitor the impact of the training on the administration’s day-to-day operations.
The programme provides for several international trainers to cover a range of topics: specialised translation techniques, conference and liaison interpreting, legal and administrative terminology, digital tools and professional best practice. Interpreting sessions will also be organised so that participants can confront real working situations and consolidate their skills.
Beyond strengthening individual skills, the collaboration aims to build a genuine language-services hub in Vanuatu, capable of supporting reforms, international cooperation and exchanges with regional partners. It is fully in line with the mission of the AUF, an operator of La Francophonie, which supports education and language policies across the French-speaking world, particularly in the Asia-Pacific region.
Through this agreement, the AUF and the Government of Vanuatu reaffirm their shared commitment to promoting plurilingualism, facilitating access to information and strengthening the quality of institutional communication. The first training activities will be rolled out from 2026, with the goal of building a lasting network of translation and interpreting professionals serving the country’s development."
https://www.auf.org/lauf-et-le-gouvernement-du-vanuatu-sassocient-pour-renforcer-les-competences-des-services-de-traduction-et-dinterpretation/
#Metaglossia
#metaglossia_mundus
"The 8th International Conference of the International Association for Translation and Intercultural Studies (IATIS) concluded at Sultan Qaboos University under the patronage of Prof. Amer bin Saif Al-Hinai, Deputy Vice-Chancellor for Postgraduate Studies and Research.
The event brought together scholars, practitioners and students from around the world to explore sustainable translation in the context of knowledge extraction, technological change and global challenges.
Prof Kyung Hye Kim of Dongguk University, chair of the IATIS conference, praised the organising committee and volunteers for their efforts, highlighting the conference’s focus on inclusivity and “lived experience, not just secondhand knowledge.”
Hosting the conference in the GCC for the first time, Prof. Kim said the gathering allowed participants from diverse regions to exchange ideas, build networks, and carry forward a spirit of dialogue. She noted that “conversation does not require visas” and looked ahead to the next IATIS meeting in New Zealand in 2027.
Prof Julie Boéri, the new president of IATIS, described the Muscat event as a defining moment for the association. She emphasised the ethical and political responsibility of translation and intercultural studies in a fractured world, noting that humility, solidarity and care are central to addressing contemporary challenges.
The conference concluded with recommendations reaffirming translation’s role in knowledge production, social justice and environmental responsibility. Key points included protecting translators’ well-being, valuing less widely spoken languages, promoting open-access knowledge, and addressing the environmental impact of emerging technologies, including AI."
8th IATIS conference concludes at SQU
https://www.pressreader.com/oman/muscat-daily/20251215/281724095884330
"Science communicator and journalist Sibusiso Biyela says the future of inclusive science on the continent depends on whether scientific knowledge can be meaningfully communicated in African languages – not as a symbolic gesture, but as a necessity.
Inside Education spoke to Biyela about his dedication to making science accessible beyond the confines of English.
Biyela’s commitment to African-language science journalism took shape in 2017, while he was attempting to write a science news article about the discovery of Ledumahadi mafube, a newly identified dinosaur species found in South Africa.
Although the dinosaur’s name was scientifically derived from Sesotho, which he said he found interesting, the process exposed a deeper problem.
“I found it difficult to write much about the discovery when every second scientific term needed translating without any Zulu language counterparts,” Biyela said.
Growing up, Biyela learned science exclusively in English, while isiZulu remained the language of his cultural and everyday life.
He describes this linguistic and cultural divide as more than an inconvenience, creating a lasting barrier between science and his identity.
“As I immersed myself further into the universe science opened for me, I found that barrier existing between myself and the rest of my cultural and linguistic identity as a Zulu,” he said.
“Having benefitted so much from the satisfaction of my curiosity that science provides, it pains me to not be able to share that joy with others through my mother tongue.”
He said the lack of scientific discourse in African languages contributes to the perception that science and technology are foreign or inaccessible to African communities.
He added that the loss is mutual: African-language speakers miss out on science, and science misses out on their perspectives, including the dignity of engaging in institutions through a language they are proud of.
Biyela placed these challenges within a broader discussion about decolonising science communication. He said this does not mean rejecting science, but rather acknowledging its complex and often violent colonial history, while opening scientific inquiry to new voices and ways of knowing.
“Decolonising science means that we understand that what we understand about science today is coloured by colonial history of violence and the many excuses that justified the Atlantic Slave Trade and Apartheid, and continues to justify many people’s understanding of human history that justifies black people’s lot in life in the present day,” he said.
He said wider participation in scientific discourse — particularly beyond a small group of dominant global languages — could fundamentally expand what questions science asks and what knowledge is valued.
Reflecting on the impact of writing about dinosaurs in isiZulu, Biyela said it changed how audiences engaged with and talked about these ancient creatures, making them more responsive and culturally connected in ways English-language communication never could.
Despite growing interest, Biyela acknowledged that many African researchers and communicators he has spoken to still face structural barriers — particularly limited access to resources — which often pushes them to seek opportunities abroad.
Although some governments have promised increased research funding, he said the long-term impact remains uncertain.
While progress has been slow over the past decade, Biyela sees more African-language science discussions emerging through community radio, social media, and podcasts.
“If I could predict the future, I would quit my job as a journalist and become a stockbroker or crypto-bro, but my best guess would be that in ten years’ time, there will be a lot more people like myself doing this kind of work,” he said.
“That can only happen if we all stay motivated to continue this work. And that can happen with support from the government and other institutions, not for handouts, but for the value that we continue to demonstrate comes from this kind of work”.
One of the most ambitious aspects of his work involves explaining complex concepts, such as particle physics terms like “flavour,” “colour,” and “spin”, in isiZulu. He said these concepts are challenging because their scientific meanings differ entirely from everyday English usage.
“I do not want to be the next clever science communicator or linguist to create terms that no one else uses, so the best way to balance scientific accuracy with cultural relevance would be to create these terms publicly with the help of the very people who would be making use of these terms,” he said.
He said that rather than imposing scientific terminology, he and his team — through the iLukuluku podcast — co-create new isiZulu scientific terms with linguists and listeners in public, drawing on existing but underused words and leaving room for community feedback.
For Biyela, African-language science communication is not about translation alone, but about participation — ensuring that African languages are not only vehicles for culture, but also for curiosity, inquiry, and discovery itself."
https://insideeducation.co.za/why-africas-science-future-must-speak-african-languages/
#Metaglossia
#metaglossia_mundus
Google Translate now supports real-time Gemini audio translation, bringing us one step closer to Star Trek's universal translator.
"In Star Trek, it took until the year 2151 for a functional universal translator to come into existence, but Google is leveraging Gemini to accelerate that timeline. The company is offering real-time speech-to-speech translation via Google Translate, using your existing phone's audio hardware. It's hard not to be critical of generative AI and its content and copyright abuses, but employing AI for real-world use cases like live translation is great—and based on Google's demo, it seems to work quite well. Google Translate has already proven capable of translating text in real time, and can even identify food and products in photographs, so this is a natural expansion of Gemini's feature set.
While there are bound to be errors and hiccups for various reasons (for example: translating audio in a crowded room, from a poor-quality mic, or both), technology like this is as impressive as it is useful. Though not the same as actually learning a different language, this is still a great tool for travelers hoping to converse with people who speak a different language.
Does Google's new feature achieve the same things as Star Trek's universal translator? Of course not—that fictional technology is real-time, functions on brain waves, and is most importantly a plot tool that explains how characters from different planets seemingly speak perfect English. But seeing real-world technology inch closer and closer to that goal is a welcome development, and it should bode well for the future of both casual communication and education, despite language barriers.
Google's official blog post states that the feature is now available for testing across the United States, Mexico, and India, and supports 70 languages. The feature will also expand to iOS in 2026."
by Chris Harper — Saturday, December 13, 2025, 02:12 PM EDT
https://hothardware.com/news/google-translate-now-turns-your-earbuds-into-a-real-time-interpreter
#Metaglossia
#metaglossia_mundus
"Google has announced several new translation features. One of them brings real-time translation to all audio accessories and all smartphones. While the initial rollout is limited, the feature will soon reach many more countries and several operating systems. Here is how to take advantage of it.
In recent years, real-time translation has come close to becoming a reality. Powered by Gemini, two people speaking different languages (more than 70 languages are supported) can understand each other. All it takes is a smartphone and a pair of earbuds: the smartphone listens to the other speaker and translates their words in real time into the user’s earbuds, and vice versa. At the launch of the Pixel 10, we saw a rather impressive demonstration.
Real-time translation, which will certainly break down some language barriers, is not going to remain confined to Google’s ecosystem. It is being extended to all smartphones and to all earbuds and headphones: Google has announced that the feature now works with every Bluetooth headset and pair of earbuds on the market, so Pixel Buds are no longer required. It works during a conversation, but also with any other content, such as a film, a series or a conference.
Google’s real-time translation is now easier to use
To take advantage of it, several conditions apply. The first is installing the beta version of the Google Translate app; the feature is not yet available in the public release. Next, you need an Android smartphone, although Google promises that iPhone owners will get the feature in 2026, without giving further details. Finally, the feature is currently available in only three countries: the United States, Mexico and India. Rollout to other countries will be gradual.
Real-time translation with any Android smartphone and any audio accessory is only one of the new features announced for Google Translate. Google has also confirmed that Gemini now handles idioms, context and common verbal tics, for a more natural understanding of speech. Finally, the language-learning mode is being expanded, both in the number of languages available and in the number of countries where it can be used."
https://www.phonandroid.com/google-etend-la-traduction-temps-reel-a-tous-les-casques-et-tous-les-smartphones.html
#Metaglossia
#metaglossia_mundus
"On 18 December, UNESCO celebrates World Arabic Language Day with an event at its Paris headquarters devoted to innovative pathways towards an inclusive linguistic future.
The gathering, held from 10:45 a.m. to 4:30 p.m. (GMT+2) in Room IV, will take place in French, Arabic and English.
Under the theme “Innovative pathways for Arabic: directions and practices for a more inclusive linguistic future”, the 2025 edition will highlight the role of innovation and inclusion in the development of the Arabic language. Education, media, digital technologies and public policy will be at the heart of discussions aimed at strengthening the presence of Arabic in education systems, on digital platforms and in the public sphere, particularly in multilingual or resource-limited contexts.
Over the centuries, the Arabic language has played a central role in connecting societies and fostering cultural, scientific and intellectual development. Today it is spoken by 450 million people, coexists with numerous dialects and is one of the six official languages of the United Nations. Its calligraphy is inscribed on UNESCO’s Intangible Heritage list, and its influence can be traced in more than 50 languages across Asia, Africa and Europe. Generations of scientists and thinkers have also produced major discoveries in Arabic, illustrating its enduring role in transmitting knowledge and values worldwide.
Since 2016, UNESCO has been committed to strengthening the use of Arabic within the organisation, with the support of the Sultan Bin Abdulaziz Al Saud Foundation, a key partner of the event. The Foundation contributes to the promotion of the Arabic language, the transmission of linguistic heritage and innovation in education, and regards language as a vehicle for cohesion, community empowerment and inspiration for future generations.
According to Gabriela Ramos, UNESCO Assistant Director-General for Social and Human Sciences, “The Arabic language plays a major role in promoting mutual understanding and knowledge creation. Its contribution to humanity cannot be reduced to a single people, for it is a civilisational heritage intended for the whole world.”
Organised in collaboration with the Permanent Delegation of the Kingdom of Saudi Arabia to UNESCO, the celebration also spotlights the Prince Sultan Bin Abdulaziz Al Saud Programme for the Arabic Language, which supports research, training and international cooperation to strengthen the place of Arabic in academic and scientific circles.
The event is intended as an international platform for dialogue, innovation and the promotion of linguistic diversity, reaffirming the role of the Arabic language as a universal heritage and a vehicle for knowledge and culture."
https://www.webmanagercenter.com/2025/12/15/557623/journee-mondiale-de-la-langue-arabe
#Metaglossia
#metaglossia_mundus
"Sometimes, choosing the “wrong” word can reveal what the “right” one can’t.
By Yuki Tanaka
Originally Published: December 15, 2025
Poets on Translation is a series of short essays in which poets examine the intersections of poetry and translation in relation to questions of language, identity, authorship, and more.
When I was a child in a small Japanese fishing town in the 80s, translation didn’t seem to exist. Whenever foreign words entered Japanese, they’d become fully domesticated: from shirt to shatsu, from elevator to erebētā. They sounded as if they had been part of the language forever. I grew up watching American sitcoms like Full House and Alf, all dubbed, often by well-known Japanese actors or voice actors. Luke Skywalker spoke perfect Japanese and even looked Japanese, dressed in a white robe that resembled a judo uniform.
The first time I became conscious of language differences, I was six or seven. English had been in the air at home since I was little, as my mother enjoyed studying it and encouraged me to learn. Each night, when she tucked me into bed, she’d say, “Have a nice dream,” in English. One night, I asked her why milk was called milk. She was smoking on the stairs after our family inn had closed for the night. When I asked, she exhaled smoke and said nothing. I continued: the Japanese word gyūnyū has two characters, one meaning “cow” and the other meaning “milk,” so it all made sense, but m-i-l-k had none of that cow-ness. She said, “It’s just the way it is in English.” I refused to accept that. She explained again and again until finally she stubbed out her cigarette, released one last puff, and walked away. I wanted milk to match gyūnyū exactly, and the realization that it didn’t frustrated me.
Years later, after I’d moved to the United States for college and was traveling back to Japan each summer, I began to see the gap between languages differently. One afternoon in Japan, I went to a small neighborhood rice shop and asked if they had rice that didn’t require rinsing. “We don’t rinse rice,” the owner said, “we sharpen it.” In Japanese, to sharpen rice is an idiom for rinsing rice, a phrase that once referred to rubbing the grains together to polish away the bran. The verb togu (“to sharpen,” “to hone”) still carries that trace of abrasion, long after modern rice no longer needs it.
I’d never thought about this etymology before: after years of speaking, reading, and writing in English, the phrase reached me as if for the first time. My unwitting mistranslation made me aware of what I’d forgotten, “to sharpen” sleeping inside “to rinse.” That mistake was accidental, but it taught me something I’ve since tried to do on purpose, in both my poems and my translations: to keep shifting between my native language and my adopted language until they become defamiliarized. While my slip at the rice shop revealed the semantic possibilities of togu, my later translations would explore how choosing the “wrong” word might reveal what the “right” one can’t.
The novelist Yoko Tawada, who writes in both Japanese and German, has said she would rather fall into the valley between two languages than master either one. When we travel between two or more languages, each language drifts into the orbits of the others, producing a new language that feels fresh and full of possibility. I see the gap between languages not as a translator’s nightmare but as a field of creative agency.
Translating a tanka by the contemporary poet Shizuka Omori, I encountered the word mizu aoi (水葵), which, translated literally, means “water hollyhock,” and is the name for a plant with bluish-purple flowers. But I didn’t like the sound—the h’s in “hollyhock” huffed like a horse. Aoi is also a homonym for “blue,” and when I hear mizu aoi, I picture blue water. I chose a mistranslation: “water hyacinth.” Hollyhocks and hyacinths are different plants though both have bluish flowers, but “hyacinth” avoids the huffing in “hollyhock,” and has a softness that feels true to the poem’s mood.
If all I wanted was pure accuracy, AI could do the work. But I’m more interested in replicating the mood and feeling a poem creates. Because one-to-one correspondence is impossible, especially between languages as different as Japanese and English, sometimes what’s required is willful mistranslation.
Once, while my mother was visiting me in St. Louis, we were walking along a gray, worn intersection on our way to a coffee shop when she suddenly asked me to stand beside a traffic light pole, before taking a photo of me. When I asked why, she pointed to a sign above that read: “Photo Enforced.” She thought it meant, “You must take a photo here.” My mother’s misreading turned a bureaucratic warning into an invitation. She stripped away the threat in favor of something playful, even friendly. In that moment, dictionary definitions loosened, and we stepped into the fluid in-between space where words float free, up for grabs..."
https://www.poetryfoundation.org/articles/1758992/poets-on-translation-huffing-like-a-horse
#Metaglossia
#metaglossia_mundus
"Reference and Beyond JR: Moving on, you have a new book that’s just come out, Professor Devitt: Reference and Beyond: Essays in Philosophy of Language (Oxford 2025): ”a selection of published papers in philosophy of language, accompanied by many new footnotes and postscripts,” as the abstract puts it. Could you shed light on these footnotes and postscripts? Do they include any key changes to your existing views?
MD: The postscripts seemed like an obvious thing to do when producing a collection of old papers. You don’t mess with the papers themselves, that’s just going to confuse everyone. You confine the minor things into the footnotes, and there’s a lot of that. If you’ve got something major to say, put it in the postscript. I did enjoy writing them.
Take the oldest article in the collection, ”Singular Terms” (1974).1 It wasn’t my first publication, but it was the first in a proper place, as it were. I’m still quite proud about a lot of it, but it suffered from something which a lot of work in those days suffered from, including the work of Keith Donnellan. We didn’t acknowledge this crucial Gricean distinction between speaker-meaning or speaker-reference and semantic meaning or semantic reference.2 So, you can read ”Singular Terms” with that distinction in mind and wonder which of these meanings and references I’m talking about, and the unhappy truth is I’m sometimes almost talking about them both at once. Mostly I’m talking about semantic meaning and reference though. The failure to make that distinction was a flaw, so, in the footnotes, I’ve made clear when the distinction is important to make, and in the Postscript I present a theory of speaker-meaning. I reject the Gricean theory of speaker-meaning which, as you know, is a very complicated theory based on the speaker’s communicative intentions. That is a mistake already, but it is compounded by the incredible complexity of the intentions.
In my dissertation I made the first attempt to give a unified account of what are often called ”singular referring expressions”, like proper names and demonstratives, and arguably, following Keith Donnellan, referential descriptions as well. I prefer calling them designational expressions. I gave a sort of causal theory of them all. In the first Postscript, I really wanted to clarify that theory. ”Singular Terms” was predominantly about names, so I wanted to be absolutely clear where I stood with demonstratives. In recent years, I have come to think of demonstrations as an independent referential device; I’m talking about gestures, pointings, and so on, which often accompany referential phrases, most notably demonstratives. So, I might say, ”that is a cat” while pointing at a cat, and in my view what you’ve got here are two linguistic devices both of which designate the cat, if all has gone well. There’s the demonstrative ”that” and then there’s the demonstration. I wanted to give a theory of demonstrations too, and since I’d recently written about that I wanted it extracted and put in the Preface. So, my unified theory of referential devices now covers proper names, definite descriptions, demonstratives and demonstrations.
Another postscript that I had wanted to write for 50 years originates from the time when I was so influenced by Saul Kripke, the first time I heard him lecture in Harvard in 1967. Around that time, Gareth Evans came to Harvard too. We were friends and associated with each other quite a lot. Gareth had of course heard all about Kripke’s ideas3, and wrote a very excellent, very insightful paper criticising Kripke. Panu, you know the name, what was it called?
PR: ”The Causal Theory of Names.”4
MD: This was after he and I had left Harvard. At the time Evans was writing that, I was writing my ”Singular Terms”. My paper is presented as a development of Saul’s, and Evans’ paper is presented as a critique of Saul’s views. What had always struck me from the start was that quite independently Evans and I had come up with quite similar ideas. So, what I wanted to do in the first Postscript, what I had in a way wanted to do for fifty years, was to draw out the similarities and resemblances between Gareth’s 1973 paper but also his 1982 book5, where he went much more against Kripke. I did enjoy doing that. That covers the Postscript for that era.
Another thing that I like doing in the volume is giving the broad outline of the views that pop up in the book, to gather them together. Other Postscripts deal with criticisms that some of the papers have met. Take the paper ”Rigid Application”. Saul famously introduced the notion of rigid designation, which he explained like this: a term is a rigid designator if it designates the same object in every possible world in which that object exists. (Actually, there are a lot more subtleties going on here, as became clear when Saul and David Kaplan started arguing about it, but that’s a clear enough basic idea.) In his Naming and Necessity lectures, Saul then extended the use of this term to what he called natural kind terms, like ”gold” and ”yellow”, ”heat” and ”tiger”. So just as proper names were rigid, so were many sorts of general terms. But how could they be rigid designators? They don’t seem to be designators at all!6
That was a problem everyone faced, and there is quite a literature trying to deal with this. Some people tried to accommodate Saul’s original idea by saying that the general terms are rigid designators, but what they designate are abstract entities. I thought that’s a very bad idea, and it was criticized by a number of people in the literature; this criticism I endorsed and added on.
So, how can we extend Kripke’s idea of rigidity to general terms and mass terms? Well, we move away from designation to application, I thought that is the way to go. A singular term like a proper name or a demonstrative designates a certain object, but general terms like ”tiger” or ”atom” apply to many objects. Application is a one-many relationship. That seemed to me like helpful terminology for semantics generally. The good idea, or truth, behind Saul’s notion of the rigidity of general and mass terms could be captured with the idea of rigid application. If a term rigidly applies to an object, it applies to that object in every possible world in which the object exists. Even if Saul didn’t have that in mind, it seemed to me like something he should’ve had in mind, because that would be the sort of notion of rigidity that could serve his theoretical purposes. What were his theoretical purposes? He wanted to use rigidity as another weapon to beat description theories, and rigid application does that job just as effectively for general and mass terms as rigid designation does for singular terms. Of course, there has been a lot of disagreement about this; in the Postscript to ”Rigid Application”, I took up some criticisms of this suggestion. But that is probably enough about the Postscripts.
JR: That is plenty indeed. May I ask what you would consider to be your most important, or favourite, idea in this book?
MD: Oh, I’ve got to tell you, Jaakko, ever since I was a child I’ve hated questions with superlatives, like ”who’s your best friend?”. So, I’m sorry, but I’m not going to answer.
Naturalism JR: Perhaps a more encompassing question, then. You have become a famous defender of naturalism in many, if not all, areas of philosophy, most importantly in the methodological sense. But how far and deep does naturalism reach? Is it really the only game in town?7
MD: I wish!
JR: Perhaps it’s better to ask whether naturalism should be the only game in town for philosophy? Are there other legitimate methodologies beyond naturalism?
MD: This raises a very interesting general question which I’ve had to confront over my career. I do believe in naturalism and think it’s the right way to do philosophy. Do I think therefore that it’s not respectable to do anything else? I’m a great believer in the idea that you should let a thousand flowers bloom, a hundred schools of thought contend. That was Mao’s slogan which, of course, Mao didn’t follow. So, I’ve always been in favour, while pushing naturalism, for it having to exist in a dialectic with people who are not naturalists, in order to make progress. I don’t think it’s healthy that people should be cocooned from their opponents.
JR: I see. Well, moving on, you just said that you don’t like superlatives. Might I still dare to ask if there is any philosopher whom you’d consider naming as your most worthy opponent?
MD: I’m not going to do it. I mean, I’ve been opposed to some enormously important philosophers, like Noam Chomsky, one of the greatest intellectuals of the 20th century. He made these enormously important contributions to the theory of language, generative linguistics. I disagreed with him, not about the idea that generative linguistics is the way to go, but the sort of metatheory he had behind it, which was that a grammar is all about the mind – not about a system of representations that exists outside the mind like I think.8
I also argued at great length against Michael Dummett, who seemed to me obviously to be an extraordinarily smart and able philosopher who had terrific influence. Far too big an influence in my view because I basically thought his views wrongheaded, but I still have a great admiration for the seriousness of his work, the simple intellectual force with which he presented his views. Before the interview began you mentioned Davidson. I don’t have the same sort of admiration for Davidson as for Dummett.
There are a lot of people I took things from while disagreeing with them. I used to regard myself as in a way being a sort of Gricean, even though I did not go with certain important thoughts of his.
And then there’s my old teacher, Hilary Putnam. Well, there were actually many Putnams. The Putnam of my youth was a really important figure for me when I was at Harvard. He didn’t make me into a realist. I’m a Sydney boy, you know; we’re realists. Brutal realists. So, I was already a realist when I met Putnam. I hadn’t even thought about realism so much because it had always seemed so obviously true and I was worrying about more important things like epistemology and semantics and so on. I wasn’t worrying about it at Harvard, either, when Hilary was famous for his arguments to do with realism, including mathematical realism, and I was very impressed with his arguments like the inference to the best explanation for scientific realism.
It is really true to say that it was Putnam who converted me thoroughly to naturalism. It wasn’t that I wasn’t a naturalist before; I was a bit at sea. You see, I was brought up like everyone else those days, surrounded by a priorism. I mean, that’s what philosophy was. There was the Wittgensteinian a priorism, the ordinary language philosophy a priorism, the positivist a priorism… The whole history of philosophy. And so, I was sitting in an undergraduate class where Putnam was talking about epistemology and the history of philosophy from Descartes onwards. He was a wonderful teacher. And then, in a few deft strokes, after presenting what the French call the sceptical problematic, he solved it by presenting Quine, basically. I mean, I’d read Quine at Sydney, but I’d always been focused on the language stuff, and hadn’t really absorbed his naturalistic picture. When Putnam presented it, it was like a road to Damascus experience for me. Everything fell into place, I can remember.
Then, what happens in the mid 1970’s? Suddenly, Putnam goes anti-realist, abandons left-wing politics, and becomes religious all in a few weeks. It was a terrific shock to me, and I spent a lot of time, probably more time than I have spent on anyone else, arguing against Putnam’s new stuff; I was antithetical to what we might call the middle-Putnam. To give an idea of how opposed I was to what was happening then – you probably couldn’t pull this off these days – I published two critical notices9 of Putnam’s book Meaning and the Moral Sciences (1978)10. I was appalled by what was in that book. Not just by the anti-realism, but also the terrible mess that was made of what realism was. It just seemed to me to be spreading confusion around. I was absolutely, terribly bothered by what had happened to Hilary. A lovely man, but not reliable in keeping his views. Surely a worthy foe if there ever was one.
Quine, of course, is an interesting case. Like I said, thanks to Hilary I went and read Quine very carefully, and did my dissertation with Quine. But I don’t agree, as many people don’t, with his behaviourism about the mind or his deflationary view of meaning and reference.
Friendship with Kripke JR: Moving on to the end of the interview, I was hoping you might shed some light on Saul Kripke, from whom you’ve not only gathered inspiration for your work but whom you also knew as a friend. For example, there is a nice anecdote that I learned from Panu related to what you have called ”the shocking idea about meaning”. Briefly, the idea is that at least for some theoretical purposes, the notion of Fregean sense could be identified with a certain non-descriptive, causal-historical mode of presentation of a term’s referent.11 Now, in the 1972 version of Naming and Necessity, Kripke had a footnote which Panu brought to my attention. The footnote goes like this: ”Hartry Field has proposed that, for some of the purposes of Frege’s theory, his notion of sense should be replaced by the chain which determines reference.”12
This footnote, however, is missing from the 1980 book edition. Panu reports that: ”At the 2013 Buenos Aires workshop (where both Devitt and I were present), Kripke explained that he had deleted the note simply because someone had informed him that he should have credited the idea to Devitt and not to Field.”13
I always thought this anecdote gives sort of an odd picture about Kripke. Could you elaborate the context here?
MD: I don’t know why you think it gives an odd picture about Kripke. Actually, there is a lot in this. First of all, notice that it is Panu who had to point this out. You’d think that I’d know about Kripke’s stuff, having thought about it from the 1960’s onwards, yet that footnote had never registered with me. But Panu is so much a better scholar than I am, so he drew my attention to it as well.
So, Saul said that. First, let me be clear about my view on the ”shocking idea”. I think the meaning of every expression – barring perhaps some syncategorematic expressions14 – should be understood as the mode of presentation of the reference. In the traditional Fregean view, the mode was descriptive. Sometimes it may indeed be. But what I think we should learn from Kripke is that the mode is not descriptive for many terms, like proper names. So, if Kripke’s ideas about borrowing are right, then the meaning for these terms is a certain type of causal chain. Something like that has got to be right. We can’t simply suppose that the meaning is the reference, because then we’re unable to explain a whole lot of things, most strikingly the informativeness of identity statements, the truth of negative singular existence statements and so on. We can’t explain them with direct reference. We’ve got to have something richer as the meaning than reference, and Frege got it right – it’s the way the reference is presented. When that isn’t descriptive, it has to be something else, and it seems to me that at least sometimes the something else has to be the causal way. That will do the job that Frege rightly thought the sense, or meaning, has to do.15
So, who came up with the shocking idea? Well, I’d been urging it for forever from my dissertation onwards. The question of who originally came up with it has always been sort of uninteresting to me. Hartry and I, from the moment when we first sat in – we’d only been in Harvard for a week or so – Saul’s lectures in 1967, started talking about it. And we talked about it forever. We explored everything. So, God knows who first came up with the shocking idea.
The real truth about the shocking idea is that Saul said that I was the one who made a fuss about it. I don’t know if Hartry ever mentioned it at all except in conversations with me and Saul.
JR: So, if I get that right, the reason why Kripke omitted the footnote in the later version of Naming and Necessity is that he didn’t think much of the shocking idea?
MD: No, I think it is because he first attributed the idea to Hartry, then came to think it wasn’t Hartry’s idea. I mean, Saul hated to say anything that wasn’t right. He was obsessive about saying only things which he was certain were true. That isn’t to say it isn’t true that Hartry came up with the shocking idea; like I said, Hartry and I talked so much, I don’t know who really came up with it.
The shocking idea itself wasn’t so shocking to Saul, I think. That doesn’t mean he embraced it. But if you know the history of direct reference, you know that people influenced by Saul, notably Nathan Salmon and Scott Soames, went down this direct reference route in which the meaning of a proper name is simply its referent. The history of this idea is really weird, because many people attribute it to Saul. (For anyone who’s interested in this, I tell the history in the book.) But Saul never embraced that view. And neither did he embrace my view. He sat on the fence. And no one could get him off the fence. As you said, I was friendly with Saul, and I would tease him quite a lot. I remember one conference in the CUNY Graduate Center, sometime in 2005 or 2006, when we were all gathered in honour of Saul. Nathan was there, Scott was there, and I was up on the podium talking about something I don’t remember, and Saul was sitting there too. I said to Saul in front of everyone: “So, you’ve heard Scott, Nathan, and me. Now it’s your turn: time to get off the fence.” No response. So, no one knows where Saul stood on this.
PR: I’d like to add that I recall a discussion with Saul where he insisted that he definitely didn’t believe in direct reference in Naming and Necessity. He admitted he came at least quite close to it in the late 1970’s, but my impression is that he regretted that phase. The problem with Saul was that if he wasn’t absolutely confident that this is the way something is, if he was even a little bit uncertain, he didn’t say it.
MD: Panu is speaking words of wisdom here. If Saul wasn’t absolutely confident, he wouldn’t say his view. He was mortified at the thought of ever saying something false. He was obsessive about this. Do you want to hear a personal anecdote?
JR: Please!
MD: Saul didn’t do a great deal of travelling, but he did do a little bit, of course. And when you travel around the world, you often get presented with various forms. For example, it at least used to be that, when returning to America you have to fill in a form, and you have to say a whole lot of things about what you have and have not done. It would take Saul hours! Because he would think about every section. ”Have I been near a farm or not?” and things like that. ”Well, I did go about half a mile away from one… But on the other hand…” And so on for every single question. He just couldn’t bear to say anything false, even on those silly forms.
JR: Well, that sort of answers my second question, which was why Kripke never developed a rigorous theory of language, meaning and reference based on the many ideas he had. Instead, it was left to you, among others, to build a theory out of the ”better picture” which Kripke presented. Kripke himself says in Naming and Necessity that he was ”sort of too lazy” to do it16.
MD: That’s just a joke though. One thing Saul wasn’t, was lazy. I mean, he was thinking all the time; it was a chronic condition which prevented him from sleeping.
If you want to explore this more, you might wonder what’s the sort of personal difference between me and Saul that left me to develop the better picture into a theory, as you said. A key thing, and we’ve already touched on this, is naturalism. Now, Saul was never a naturalist. He didn’t approve of naturalism. I already said that he wasn’t shocked by my shocking idea, but he was shocked by my naturalism. And he made this very clear on many occasions. For example, I published a textbook on philosophy of language.17 You might’ve thought that Saul would really love this textbook because it’s a sort of ”hooray for Saul!” for many chapters. It’s a setting-out of the Kripkean revolution in the theory of reference in a very supportive way. But Kim and I also had to confront the awful problem of writing a textbook in the philosophy of language, and we thought right from the beginning that there was no way we could write, as it were, a neutral book. We were just going to present the philosophy of language from a naturalistic perspective, as we say in the beginning. Even the blurb on the back says this.
Saul was outraged at this. He complained about it to me – you couldn’t mention the textbook without him going ”You even say it in the blurb it’s not neutral. This is not a textbook!” He was so funny. You didn’t want to have a thin skin if you dealt with Saul.
There were only two comments that I ever got from him on the book. I never got a thanks from him for presenting the revolution, but he did criticise the naturalism. So far as I know, there was only one bit of critical stuff on Saul in the book, and that was the discussion on ”Kripkenstein”. He did not like that at all. And he would always come to that, too. ”You say I don’t have an argument?” He’s a riot; I do miss him a lot.18
JR: Was Kripke’s opposition to naturalism part of his general unwillingness to commit to a view he was uncertain about or was it something more specific?
MD: I don’t know. I mean, that’s a very deep question. Why do so many philosophers have anti-naturalist positions? We know the consequences of this: they believe in the a priori; Saul said he believed in the a priori.
JR: Really?
MD: Oh God, yes. Oh yes.
PR: He even believed in contingent a priori.19
MD: We naturalists are really a minority. So, if you ask why Saul wasn’t, you need to ask that about humans in general. The whole history of philosophy seems to me to demonstrate a tension between naturalistic approaches and a priori approaches. Right the way through you see both. You see science being brought into philosophy and then philosophers going off and doing their own thing. My favourite example of this is John Locke. See his discussion of realism. It’s just a wonderful interplay between good empirical science and old a priori philosophy. And I think this ran right through philosophy until Quine. I mean, there were always naturalistic elements and a priori elements. One of the many contributions that Quine made was to make this stark and clear, because he laid down, with his vivid metaphors, what philosophy should be. It made so well the distinction that needed to be made. I think the whole subject moved forward just by being clear about this.
I hope you don’t think this is terribly rude, but I think that if we could understand the appeal of religion, we might understand the appeal of the a priori. Do you think that’s a bit overboard, Panu?
PR: You are famous for your shocking ideas.
* * *
JR: One last question, again a personal one. Are there any research topics that you’ve wanted to pursue but haven’t found the time to? Any blind spots?
MD: I like that question. I think one of the great things about philosophy is that there’s no end to interesting topics. There are lots. There’s virtually no broad area of philosophy (except the philosophy of religion) which I don’t find interesting. Let me just take one that I’ve never done anything in. I have done a very small amount of work in moral philosophy – I wrote a paper on moral realism20 – but I’ve never done anything in aesthetics. And this doesn’t mean I don’t think that is interesting.
When I was in Maryland, I got roped at once into being on a committee for a student writing her dissertation in aesthetics. Actually, I can remember her name: Monique Roelofs. Monique’s dissertation, I thought, was fascinating. Really insightful. I got quite engaged with the issues: ”Gee, I’d love to work on this.” But I’ve never done it. That’s just one example. You just never run out of topics.
JR: That is a sentiment easy to agree with. Thank you for the interview, Professor Devitt.
References
1 Michael Devitt, Singular Terms. The Journal of Philosophy, Vol. 71, No. 7, 1974, 183–205.
2 Grice distinguished between two senses of sentential meaning: what the speaker meant by the sentence and what the semantic meaning is. The semantic sense is close to the literal meaning of the sentence, whereas speaker-meaning concerns the use of the sentence in context. For example, the sentence ”Grass is green” literally means that grass is green, but a speaker might use it in context to mean that the summer is not over yet.
3 Kripke’s Naming and Necessity, originally delivered as a series of lectures in 1970, started a revolution in the philosophy of language and beyond by criticising the previously dominant descriptivist theories of reference and meaning. According to descriptivism, the meaning of an expression, such as a proper name, is based on the descriptions commonly associated with the referent of the name. For example, the meaning of ”Aristotle” would be something like ”the teacher of Alexander the Great”. Kripke showed in several ways how a name’s reference and meaning are independent of such descriptions.
4 Gareth Evans, The Causal Theory of Names. Proceedings of the Aristotelian Society, Supplementary Volumes, Vol. 47, No. 1, 1973, 187–208.
5 Gareth Evans, The Varieties of Reference. Oxford University Press, Oxford 1982.
6 ”Rigid designation” was one of the key technical terms which Kripke coined in the revolutionary lectures of Naming and Necessity. Roughly, a term is a rigid designator if and only if it refers to the same thing in every possible world in which the referent exists and never refers to anything else.
7 ”Naturalism” in philosophy means roughly the view that philosophical theories should not only seek to be compatible with the findings of the empirical sciences but also seek to conform to their methodologies and worldview as much as possible.
8 Noam Chomsky is famous, among other things, for being one of the founders of generative grammar theory, which displaced the previously popular behaviourist views about language. According to Chomsky, language is not only based on biology, but in a sense biology itself is linguistic, and language exists in the brain. Devitt has criticized this view by claiming that we shouldn’t confuse linguistic competence, which does require a brain, with language itself, which exists primarily outside individual minds.
9 Michael Devitt, Critical Notice of Meaning and the Moral Sciences by Hilary Putnam. Australasian Journal of Philosophy 58 (1980), 395–404; Realism and the Renegade Putnam: A Critical Study of Meaning and the Moral Sciences by Hilary Putnam. Noûs 17 (1983), 291–301.
10 Hilary Putnam, Meaning and the Moral Sciences. Routledge, London 1978.
11 One of Kripke’s most famous ideas is that the meaning and reference of a proper name is not determined by an associated description, but rather by a causal-historical chain of borrowing the name from other speakers, some of whom down the chain have been in contact with the referent. Devitt has argued that it is possible to understand the meaning of the name as constituted by such a chain. Many have considered this idea shocking, and that is how Devitt has named it in his published works.
12 Saul Kripke, Naming and Necessity. In Semantics of Natural Language. Ed. Donald Davidson & Gilbert Harman. Reidel, Dordrecht 1972, 253–355 (346n22).
13 Panu Raatikainen, Theories of Reference: What Was the Question? In Language and Reality from a Naturalistic Perspective: Themes from Michael Devitt. Ed. Andrea Bianchi. Springer International Publishing, Cham 2020, 69–103 (99n65).
14 Syncategorematic expressions include words such as ”and”, ”or”, ”if” and ”because”. They are used to connect sentences together.
15 Frege noticed that two different names, though they refer to the same person, can differ in ”meaning” in the sense that someone might not know that one name (e.g. ”Robert Zimmerman”) refers to the same person as the other (”Bob Dylan”). Frege thought that the difference in meaning must correspond to some difference in the descriptions associated with the names. Since Kripke’s criticism of this view in Naming and Necessity, Devitt has urged the ”shocking idea” that the mode of presentation of a name can be non-descriptive, in opposition to so-called ”direct reference” theories, according to which the meaning of a proper name just is its referent. Such views have problems explaining the apparent meaningfulness of empty names and the apparent truth of negative existential statements.
16 Saul Kripke, Naming and Necessity. Harvard University Press, Cambridge (MA) 1980 (93).
17 Michael Devitt & Kim Sterelny, Language and Reality: An Introduction to the Philosophy of Language. Basil Blackwell, Oxford 1987.
18 The term ”Kripkenstein” refers to Kripke’s ideas based on the thoughts of Ludwig Wittgenstein. In 1982 Kripke published a book on Wittgenstein (Wittgenstein on Rules and Private Language, Harvard University Press) which is rivalled in fame only by Naming and Necessity. In the book, Kripke attributed a sceptical challenge about meaning to Wittgenstein. The textbook by Devitt and Sterelny discusses the challenge briefly and somewhat dismissively.
19 A priori knowledge is knowledge not based on empirical evidence. Prior to Naming and Necessity, it was common to think that if something is known a priori, it must be necessary, and that if something is necessary, it must be knowable a priori. Kripke criticized this connection between necessity and the a priori and introduced, for the first time in the history of philosophy, the notions of the contingent a priori and the necessary a posteriori.
20 Michael Devitt, Moral Realism: A Naturalistic Perspective. Croatian Journal of Philosophy 4 (2002), 1–15.
Issue niin & näin 4/25" https://netn.fi/artikkelit/interview-with-michael-devitt-on-philosophy-of-language-saul-kripke-and-naturalism/ #Metaglossia #metaglossia_mundus
"The Literature Translation Institute of Korea has selected three winners for this year’s translation awards.
The three are Lee Ki-hyang, Tayfun Kartav and Justyna Agata Najbar-Miller.
The LTI Korea Translation Award was established in 1993 to encourage outstanding translators who contribute to communication between Korean and world literature, and to promote Korean literature overseas.
Lee, who heads publishing house Märchenwald Verlag München, translated Bora Chung's "Cursed Bunny" into German and received critical praise for effectively conveying the book's tension and fear.
Kartav won for his Turkish rendering of Chang Kang-myoung's "Homodominans," and Najbar-Miller, an assistant professor in the Korean studies department at the University of Warsaw, won the award for her Polish translation of Han Kang's "We Do Not Part."" https://world.kbs.co.kr/service/news_view.htm?lang=e&Seq_Code=197861 #Metaglossia #metaglossia_mundus
"The UN celebrates the first World Day of the Turkic Language Family
UNESCO, the United Nations cultural agency, is preparing to celebrate on Monday the first-ever World Day of the Turkic Language Family, following the decision of its General Conference in Samarkand to establish 15 December as the annual date of celebration.
This new commemoration highlights the shared linguistic and cultural heritage of Turkic-speaking peoples and reaffirms the commitment of the United Nations Educational, Scientific and Cultural Organization (UNESCO) to multilingualism and cultural diversity.
A historic date
The choice of 15 December marks a pivotal moment in linguistics. On that day in 1893, the Danish linguist Vilhelm Thomsen announced that he had deciphered the alphabet of the Orkhon inscriptions, among the oldest known written records of the Turkic language family.
This major discovery paved the way for a better understanding of a linguistic tradition that today unites dozens of communities across Eurasia.
Languages spoken by 200 million people
The Turkic languages, including Azerbaijani, Kazakh, Kyrgyz, Turkish, Turkmen and Uzbek, are spoken as a mother tongue by more than 200 million people across a territory of roughly 12 million square kilometers.
UNESCO notes that these languages possess a rich written heritage, strong oral traditions and diverse cultural practices shared by many member states.
The proclamation of this new Day follows a joint request from Azerbaijan, Kazakhstan, Kyrgyzstan, Turkey and Uzbekistan and received the support of 21 member states, reflecting broad recognition of the value of linguistic diversity.
Strengthening cooperation
UNESCO says the annual commemoration is part of the United Nations' broader multilingualism program, as set out in General Assembly resolution 71/328.
By dedicating a day to the Turkic language family, the agency aims to encourage linguistic cooperation, cultural exchange and dialogue among civilizations.
Planned activities include awareness-raising initiatives, academic research and programs to safeguard Turkic languages and oral traditions.
An annual celebration
The Day will be marked by exhibitions, conferences, literary events and artistic performances designed to highlight the historical richness and contemporary vitality of the Turkic languages.
UNESCO states that the commemoration is an opportunity to honor linguistic diversity as a common heritage of humanity and to strengthen international efforts to protect languages, essential vectors of identity, knowledge and cultural expression." 14 December 2025 https://news.un.org/fr/story/2025/12/1158079 #Metaglossia #metaglossia_mundus
"Cherokee Nation Launches Digital Dictionary to Support Language Revitalization
Cherokee Nation leaders and Cherokee language speakers joined representatives from Kiwa Digital Ltd. on Tuesday to launch the new Cherokee Language Dictionary app during an event at the Durbin Feeling Language Center.
“Every Cherokee family, no matter where they live, can now carry this resource in their pocket,” Principal Chief Chuck Hoskin Jr. said. “This app represents our sovereignty, our knowledge, and our commitment to keeping the Cherokee language strong for generations to come.”
Durbin Feeling completed the first Cherokee Language Dictionary 50 years ago, laying the foundation for the tribe’s modern language revitalization work. In 2025, Cherokee Nation partnered with Kiwa Digital Ltd. to digitize the resource and make it publicly accessible as a mobile app.
Team members from Kiwa Digital traveled internationally for the launch. The company specializes in Indigenous language preservation through digital tools. Chief Hoskin first announced the partnership during his State of the Nation Address at the Cherokee National Holiday.
“Chief Hoskin and I have always said that it is critical we not only protect and save the Cherokee language, but that we perpetuate the language so that it continues to grow within our Cherokee families and communities,” Deputy Chief Bryan Warner said. “We can harness the power of technology to help us teach others how to speak Cherokee, and the Cherokee language dictionary app is a great resource.”
The app is available for download on the Apple App Store and Google Play Store. It currently includes more than 6,000 Cherokee words, audio recordings, grammar notes, phonetics, syllabary, and biographical information on first-language speakers. Cherokee Nation translators and Kiwa staff plan to continually add new entries.
“In just a few months, Kiwa Digital took what we have documented of our language and made it accessible to our citizens,” said Howard Paden, executive director of the Cherokee Language Department. “Their efforts will prevent the erosion of our language from continuing and empower us to revitalize and normalize this language in our communities. Our goal is to get at least 25,000 to 50,000 words on the app in order to have a more comprehensive overview of the language.”
The app also includes advanced search tools, pronunciation guides and a private AI learning assistant. Data is stored on a secure AWS platform.
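The kind of "advanced search" a dictionary app like this offers can be illustrated with a sorted index and prefix lookup. The sketch below is a minimal, hypothetical illustration: the sample entries are invented placeholders, not data from the actual Cherokee Language Dictionary app, and the real app's search implementation is not public.

```python
from bisect import bisect_left

# Hypothetical sample entries: (romanized form, syllabary, English gloss).
# Placeholders for illustration only, kept sorted for binary search.
ENTRIES = sorted([
    ("gadoga", "ᎦᏙᎦ", "he is standing"),
    ("gado usdi", "ᎦᏙ ᎤᏍᏗ", "what"),
    ("osiyo", "ᎣᏏᏲ", "hello"),
    ("wado", "ᏩᏙ", "thank you"),
])

def prefix_search(query: str):
    """Return all entries whose romanized form starts with `query`."""
    keys = [e[0] for e in ENTRIES]
    i = bisect_left(keys, query)  # first position where a match could start
    results = []
    while i < len(ENTRIES) and ENTRIES[i][0].startswith(query):
        results.append(ENTRIES[i])
        i += 1
    return results

print(prefix_search("gado"))  # both "gado usdi" and "gadoga" match
```

Because the list is sorted, all matches for a prefix are contiguous, so lookup cost stays low even as the word list grows toward the 25,000–50,000 entries the team is aiming for.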
During Tuesday’s launch, the tribe encouraged users to submit feedback through the app to support ongoing updates.
“As an Indigenous-owned company from Aotearoa New Zealand, we are honored to support the Cherokee Nation in developing this groundbreaking digital resource,” said Jill Tattersall, executive director of Kiwa Digital. “We look forward to Cherokee community feedback to help this treasured resource grow in impact and value.”
In October, the Cherokee Nation hosted its Seventh Annual First-Language Cherokee Speakers Gathering, where Chief Hoskin announced $2.3 million from the tribe’s Public Health and Wellness Fund Act to support the Language Department’s Peer Recovery Program, home care for fluent elders in vulnerable health, repairs for speaker homes and the Little Cherokee Seeds program.
The Durbin Feeling Act of 2019—authored by Chief Hoskin and Deputy Chief Warner with support from the Council of the Cherokee Nation—continues to drive the largest language investment in tribal history. This year’s language budget is nearly $25 million, and the act provides more than $20 million annually for language programs, totaling more than $68 million in capital projects to date." By Levi Rickert December 12, 2025 https://nativenewsonline.net/sovereignty/cherokee-nation-launches-digital-dictionary-to-support-language-revitalization #Metaglossia #metaglossia_mundus
Two Army soldiers and a civilian interpreter were killed by a suspected Islamic State attacker.
"The attack comes as the United States and Syria’s new government, led by interim President Ahmed al-Sharaa, are looking at closer relations more than a year after rebels toppled Bashar al-Assad, who became known as one of the world's most brutal despots.
Since then, the United States and Syria have cooperated on anti-terrorism actions, which Trump said will be useful in the response to the Dec. 13 killings.
"There will be very serious retaliation," Trump said in a post on Truth Social..." https://www.usatoday.com/story/news/politics/2025/12/13/us-soldiers-killed-syria/87750585007/ #Metaglossia #metaglossia_mundus
"Not a Big Reader? Google Translate Rolls Out Real-Time Audio Translation for Headphone Users
Google says you’ll be able to get live audio translations of conversations, speeches, and lectures in a different language, listen to a speech or lecture while abroad, or watch a TV show or film in another language through your headphones.
If your language learning journey isn’t going as planned, or your Duolingo streak is long since broken, Google has introduced a new tool that could make navigating the world of foreign languages more intuitive—with no reading required.
Rolling out this week on Android in beta, Google Translate will now support real-time audio translation delivered via your headphones. The beta will support over 70 languages, including Spanish, Hindi, Chinese, Japanese, and German, via the Translate app. Google says this means you’ll be able to understand conversations in a different language, listen to a speech or lecture while abroad, or watch a TV show or film in another language.
At present, the new feature will be limited to users located in the US, India, and Mexico using Android, while an iOS rollout is planned for the future. To try the tool out, connect your headphones, open the Google Translate app on your mobile device, and tap “Live translate” to hear a real-time translation in your preferred language.
In addition, Google is rolling out tools to improve its translations when it comes to things like idioms, local expressions, or slang, which might not make much sense in a purely literal translation. For example, phrases like “stealing my thunder.”
This update is also rolling out this week in the US and India, translating between English and almost 20 languages, such as Spanish, Hindi, Chinese, Japanese, and German. It will be available on Android and iOS, and on the web version.
After first introducing dedicated language practice features back in August this year, Google Translate is now also introducing "improved feedback" for users' speaking practice, as well as tools to track how many days in a row you've been learning, broadly comparable to language learning tools like Duolingo and Babbel. It's also expanding support for new language combinations such as German and Portuguese for English speakers, as well as English for Bengali, Mandarin Chinese, Dutch, German, Hindi, Italian, Romanian, and Swedish." By Will McCurdy December 13, 2025 https://www.pcmag.com/news/not-a-big-reader-google-translate-rolls-out-real-time-audio-translation #Metaglossia #metaglossia_mundus
By Sabrina Machetti and Raymond Siebetcheu
It is nearly ten years since the concept of lingue immigrate (Bagna et al., 2003) was formulated. To date, immigrant minority languages remain poorly investigated in Italy. Actually, when referring to applied linguistics in the Italian context…
"What happens when Italian and Immigrant languages come in contact with each other? A clear example that is useful to approach this question is called 'Camfranglais', an urban variety that stems from a mixture of French, English, Pidgin English and Cameroonian local languages. This working paper examines the outcome of the interaction between Italian and Camfranglais."
https://www.diggitmagazine.com/working-papers/tpcs55-use-camfranglais-italian-migration-context #Metaglossia #metaglossia_mundus
Global Voices interviewed Moharaj Sharma to explore his path as a poet, journalist, and documentarian, and his enduring efforts to elevate Nepali literature, linguistic traditions, and diaspora narratives.
"Words across worlds: Moharaj Sharma on language, culture, and belonging in Nepal
Sharma is a poet, journalist, and documentarian amplifying Nepali literature and diaspora voices
Written by
Sangita Swechcha
Posted 15 December 2025
Poet, journalist, and documentary maker Moharaj Sharma is a leading figure in Nepal’s literary and media landscape, with two decades of influential work in radio and television. Widely respected for his cultural insight and integrity, he is known for poetry that reflects on identity, social change, and the human experience, resonating across Nepal and its global diaspora.
A long-time member of the International Nepali Literary Society (INLS), he currently serves as News Editor at AP1 Television, where he also hosts a weekly literary segment that brings writers and thinkers into national conversation. His research on the linguistic roots of Nepali and Sanskrit, along with his documentary on the resilience of Nepali-speaking Bhutanese refugees, highlights his commitment to cultural preservation. Recognised with honours from INLS, Gauhati University, and literary institutions in Bhutan, the US, and South Korea, Sharma continues to shape contemporary Nepali literature through a powerful blend of journalistic clarity and poetic vision.
Sangita Swechcha of Global Voices interviewed Moharaj Sharma via email to learn more about his journey as a poet, journalist, and documentarian, and his longstanding work in amplifying Nepali literature, linguistic heritage, and the stories of diasporic communities.
Sangita Swechcha (SS): Your work spans poetry, journalism, and documentary storytelling. How do these different forms of expression influence one another in your creative process?
Moharaj Sharma (MS): Poetry, journalism, and documentary — although these three subjects appear separate — have complemented one another in my creative journey. The inner dialogue among all three has inspired me to stay focused on my work. Just as Eastern philosophy describes the power of a mantra, I feel a similar power in poetry within literature. It is something that shakes society. Poetry speaks to the joys and sorrows of society in a deep and subtle way. I sense this same sensitivity in journalism as well. News is not merely information; it is a reality intertwined with human life, dreams, and struggles. The discipline of journalism — honesty toward facts, commitment, and respect for authentic voices — makes my writing responsible. Documentary ties these two worlds together in a single thread. In visual storytelling, I try to blend the factual discipline of journalism with the human sensitivity of poetry.
SS: Much of your writing explores identity, culture, and the Nepali diaspora. What personal experiences or encounters have most shaped your understanding of these themes?
MS: For the past two decades, I have been close to ordinary lives through journalism. Nepal has great ethnic, linguistic, and cultural diversity, with 142 ethnic groups and over 120 languages. Each ethnic group has its own language, religion, customs, culture, and traditions. This diversity and identity make Nepali society ‘many in one and one in many.’
Through my travels and learning experiences in the UK, USA, South Korea, India, and Indonesia, I gained a deep understanding of the importance of language, culture, and identity. In today’s global world, people cannot forget their roots; instead, they work to preserve and promote their heritage, enjoying the sweetness of their identity.
Statistics show Nepali-speaking people have reached around 150 countries. Wherever they go, they carry Nepali language, culture, and civilization. These learning and research experiences have greatly energized my professional and literary journey.
SS: You have documented the stories of Nepali-speaking Bhutanese refugees — a community whose decades-long displacement, life in refugee camps in eastern Nepal, and global resettlement have shaped a profound story of resilience and cultural survival. What continues to stay with you from their journey?
MS: In 1624 AD, after an agreement between Bhutan’s religious leader Zhabdrung Ngawang Namgyel and Nepal’s King Ram Shah, sixty Nepali households were taken to Bhutan. Though they helped unify and develop Bhutan, the government later suppressed them as their influence grew.
In the 1990s, the ‘One Nation, One People’ policy restricted Nepali language and culture, leading to the expulsion of over 100,000 Nepali speakers, who lived as refugees in Nepal for nearly two decades. Even then, they preserved their language and heritage, running classes and promoting literature through groups like the Literary Council of Bhutan.
After resettling in eight countries including the USA, Australia, and Canada, their struggle for identity continues. Preserving language and culture remains central, strengthening their presence abroad.
I traveled across the USA to study this community. Once stateless, they now show cultural prosperity: schools teach Nepali, government offices hire language experts, and communities maintain global cultural presence. Even after losing everything, they kept their pride and are active in politics and policy-making in their resettled countries.
Moharaj Sharma with Nepali-speaking Bhutanese children in the USA, studying Eastern philosophy. Image provided by Moharaj Sharma.
SS: As someone deeply involved in promoting Nepali literature through radio and television, how do you see the role of media evolving in nurturing literary culture?
MS: When I began in radio, access to media was limited, and poets, writers, and cultural scholars rarely reached the public. Over two decades, technology has advanced so much that the global community now fits within a mobile click. Earlier, poetry recitations, literary interviews, and TV discussions gave writers recognition and shaped cultural interest. Today, media not only promotes literature but sparks debates on new dimensions and global practices.
Digital media has broken the center-periphery divide. Those once absent from print now emerge via social networks, including poets, writers, cultural workers, migrant laborers, and homemakers. Yet media’s responsibility has grown, as confusion, exaggeration, and commercial content can overshadow meaningful creation, making the role of journalists, editors, producers, and cultural workers vital.
SS: Your research engages with the linguistic roots of Nepali and Sanskrit. What draws you to this historical and philosophical exploration of language?
MS: Nepal is a multilingual, multicultural, and multi-traditional country, with people living from 58 meters to nearly 5,000 meters above sea level, showing differences in language, culture, and lifestyle, yet inter-community tolerance is strong.
Vedic literature and Eastern philosophy highlight Nepal’s sacredness. It is where sages attained knowledge through yoga, meditation, and ascetic practice. According to Buddhist tradition, Kanakamuni, Krakuchhanda, and Shakyamuni Buddha were born here and spread wisdom. I am deeply interested in studying the languages, cultures, arts, and folk traditions that give Nepal its unique identity.
Media plays an important role in preserving and promoting these subjects. I traveled from the Sinja Valley of Jumla — where the Nepali language originated — to Oxford University, where Sanskrit is taught.
As the mother of many languages, including Nepali, Sanskrit is key to understanding history and linguistic evolution, which inspires me greatly.
SS: With your forthcoming poetry collection, what themes or perspectives are you most excited to share with readers?
MS: I like simple poetry that tells the stories of ordinary people. As a long-time editor showing countless faces and events, those experiences naturally appear in my work. I try to capture the pain, hope, and journey of the Nepali diaspora struggling for identity.
Our generation has witnessed many key historical moments of Nepal; through poetry, these experiences will remain as witnesses for the future. I continue to explore themes of identity, social change, and the tension between tradition and modernity."
https://globalvoices.org/2025/12/15/words-across-worlds-moharaj-sharma-on-language-culture-and-belonging/
#Metaglossia
#metaglossia_mundus
"Google has also introduced a new speech-to-speech translation feature for headphones.
Google is rolling out new Gemini-assisted functionality to Search and its Translate app. It says its AI can now provide more natural and accurate text translations for phrases that have more "nuanced meanings." Translate will now take slang terms and colloquial expressions into consideration rather than provide sometimes unhelpful direct translations.
The latest update to its text translation feature is rolling out first in the US and India, translating between English and just under 20 other languages, including German, Spanish, Chinese and Arabic. It works in the Translate app for iOS and Android and on the web.
Gemini’s speech-to-speech translation feature has also been updated, so you can now hear real-time translations in your headphones, like with Apple’s AirPods Pro 3. Google says the new functionality, which is now in beta in the Translate app for Android (iOS is coming next year) in the US, tries to "preserve the tone, emphasis and cadence of each speaker" so you better understand the direction of the conversation and who said what. It works with any headphones and supports more than 70 languages.
Finally, Google is adding more tools to its potentially Duolingo-rivaling AI-powered language learning tools, which it introduced to the Translate app in August. Like Duolingo, Translate can now track how many days in a row you’ve been attempting to learn a new language, so you can check your progress over time. Whether it will nag you as persistently as the Duolingo owl famously does for slacking off is not clear.
The feedback feature has also been improved, so you should receive more useful tips on how you’re pronouncing words or phrases. Germany, India and Sweden are among the 20 new countries that can now use these educational tools.
After not showing it much love for a while, Google has been busy adding new features to Translate recently. As well as the new language practice feature, an update last month added the ability to select between "Fast" and "Advanced" translations that allow you to prioritize speed when you’re in a rush (ordering a drink at the bar, for example) or receiving more accurate translations using Gemini."
Matt Tate
Contributing reporter
Fri, December 12, 2025 at 6:34 PM GMT+1
https://www.engadget.com/apps/google-translate-is-now-better-at-translating-slang-terms-and-idioms-using-ai-173428316.html
#Metaglossia
#metaglossia_mundus
"‘How we built an AI translator to help everyone in your church hear the gospel’
By Mike Ashelby, 1 December 2025, 3 min read
Kingdom Code’s hackathon saw a room of Christian coders come together to tackle the language barrier isolating churchgoers whose first language isn’t English. Mike Ashelby tells the story behind the innovative AI translation tool they created.
Christmas is the season of the open door. It is the one time of year when the “stranger” is most likely to walk into our churches — neighbours, international students, and extended family members drawn by the carols and the candlelight.
But for the more than five million people in England and Wales who do not speak English as their main language, that open door often leads to a closed experience. They are physically welcomed, but spiritually isolated. They stand in the crowd, surrounded by the warmth of the community, but the message of the service remains locked behind a language barrier.
The incarnation was the ultimate act of translation. It wasn’t just God speaking to us; it was God becoming human so that we could truly know him. At Breeze Translate, our mission is to help the UK Church reflect that heart. We believe that if someone walks into a church this December, language should not stop them from hearing the most powerful message of all: Emmanuel, God is with us.
From Pizza and Code to a “Digital Pentecost”
This mission didn’t start in a cathedral, but in a room full of coders, snacks, and a tight deadline.
The setting was Kingdom Code, an annual Christian hackathon where technologists gather to ask: How can we use our skills to serve the Kingdom? Tim Moger from NEFC Church stood up and pitched a problem that is becoming increasingly common: our communities are diversifying, but our church services are leaving people out.
For me, this problem wasn’t theoretical; it was sitting in the seat next to me.
Two Iranian asylum seekers had recently joined our congregation. One spoke limited English; his mother-in-law spoke none. I remember the helplessness I felt trying to welcome them into the community. We did what we could — we pasted Persian text onto the projector for the liturgy — but the moment the service moved on, they were cut off.
It wasn’t just the sermon they missed. It was the notices, the updates on community life, the small invitations to belong. After the service, the young man could manage a basic conversation, but the heart of the message — and the invitation to participate in the family of the church — was inaccessible. His mother-in-law sat through the entire service in silence. We were welcoming them into the building, but we lacked the tools to welcome them into the fellowship.
A Romanian woman who sat silently through services for three years was literally crying with joy the first time she could hear the sermon in her own language.
That weekend, a team formed around Tim’s idea, led on the technical side by Ben Hartman. Ben brought extensive expertise in real-time communications to the table, but perhaps more importantly, he brought a missionary’s heart. Living in Germany and speaking German as a second language, he knew intimately the fatigue of trying to process faith in a non-native tongue.
Over 24 hours, the team built a prototype. Originally, we called it “deBabel” — a reference to the Tower of Babel, seeking to reverse the confusion of languages. But as the project grew, we realised we didn’t just want to tear down a barrier; we wanted to invite the Spirit in.
We renamed it Breeze Translate, a nod to Acts 2. At Pentecost, the Holy Spirit came like a “mighty rushing wind” — a breeze — and suddenly, everyone heard the good news in their own native tongue. That became our hope: to build a tool that clears the way for a similar connection today.
How It Works
Since coming on board to help expand the reach of Breeze, I’ve seen that simplicity is key. We didn’t want to create an app that people had to download (a barrier in itself). Instead, Breeze is browser-based.
The church connects their sound desk to a computer — or simply places a mobile phone on the lectern — and the system does the rest. The congregation scans a QR code, and their own phone becomes a personal interpreter, providing live, real-time translation in their own language.
Crucially, it works both ways. The system supports a host of different input languages with automated language switching. This means a service can be truly multilingual — a contributor can get up and share a testimony in Farsi or pray in Ukrainian, and the English speakers in the room will see the translation instantly. With hundreds of output languages available, it allows everyone to participate, not just listen.
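The multilingual fan-out described above can be sketched as a simple publish/subscribe hub: each listener registers a target language, every utterance is translated once per subscribed language, and the result is delivered to all listeners of that language. This is a minimal illustrative sketch, not Breeze's actual implementation (which is not public here); `fake_translate` and `CaptionHub` are invented stand-ins, with the translation backend stubbed out.

```python
from collections import defaultdict
from typing import Callable

def fake_translate(text: str, target_lang: str) -> str:
    # Stand-in for a real machine-translation call; it just tags the text.
    return f"[{target_lang}] {text}"

class CaptionHub:
    """Fan captions out to listeners, translated per target language."""
    def __init__(self, translate: Callable[[str, str], str]):
        self.translate = translate
        self.listeners = defaultdict(list)  # lang -> list of callbacks

    def subscribe(self, lang: str, callback: Callable[[str], None]):
        self.listeners[lang].append(callback)

    def publish(self, utterance: str):
        # Translate once per subscribed language, then deliver to everyone.
        for lang, callbacks in self.listeners.items():
            caption = self.translate(utterance, lang)
            for cb in callbacks:
                cb(caption)

hub = CaptionHub(fake_translate)
received = []
hub.subscribe("fa", received.append)  # a Farsi-speaking listener
hub.subscribe("ro", received.append)  # a Romanian-speaking listener
hub.publish("Welcome to the service")
print(received)
```

In a browser-based system like the one described, each callback would push its caption over a live connection to the phone that scanned the QR code; the hub pattern stays the same regardless of transport.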
In Slough Baptist Church, the leadership used Breeze to support an Italian woman who had attended for years, relying on her husband’s faltering whispers to understand. When they switched on Breeze, she told the leadership it was the first time she felt she could truly connect with the service independently."
https://www.premierchristianity.com/real-life/how-we-built-an-ai-translator-to-help-everyone-in-your-church-hear-the-gospel/20567.article #Metaglossia #metaglossia_mundus
The Trump administration argues that providing real-time American Sign Language interpretation for events like White House press briefings would intrude on the president’s control over his public image. This stance is part of a lawsuit filed by the National Association of the Deaf, which claims the lack of ASL interpretation denies deaf Americans access to important communications. The Justice Department suggests alternatives like online transcripts and closed captioning provide what's needed. A federal judge recently ordered the White House to provide the interpretation, but the administration has appealed.
"Trump administration says sign language services ‘intrude’ on Trump’s ability to control his image
By MEG KINNARD - Associated Press
The Trump administration is arguing that requiring real-time American Sign Language interpretation of events like White House press briefings “would severely intrude on the President’s prerogative to control the image he presents to the public,” part of a lawsuit seeking to require the White House to provide the services.
Department of Justice attorneys haven’t elaborated on how doing so might hamper the portrayal President Donald Trump seeks to present to the public. But overturning policies encompassing diversity, equity and inclusion have become a hallmark of his second administration, starting with his very first week back in the White House.
The National Association for the Deaf sued the Trump administration in May, arguing that the cessation of American Sign Language interpretation — which the Biden administration had used regularly — represented “denying hundreds of thousands of deaf Americans meaningful access to the White House’s real-time communications on various issues of national and international import.” The group also sued during Trump’s first administration, seeking ASL interpretation for briefings related to the COVID-19 pandemic.
In a June court filing opposing the association’s request for a preliminary injunction, reported Thursday by Politico, attorneys for the Justice Department argued that being required to provide sign language interpretation for news conferences “would severely intrude on the President’s prerogative to control the image he presents to the public,” also writing that the president has “the prerogative to shape his Administration’s image and messaging as he sees fit.”
Government attorneys also argued that it provides the hard of hearing or Deaf community with other ways to access the president’s statements, like online transcripts of events, or closed captioning. The administration has also argued that it would be difficult to wrangle such services in the event that Trump spontaneously took questions from the press, rather than at a formal briefing.
A White House spokesperson did not immediately comment Friday on the ongoing lawsuit or answer questions about the administration’s argument regarding the damage of interpretation services to Trump’s “image.”
In their June filing, government attorneys questioned if other branches of government were being held to a similar standard if they didn’t provide the same interpretative services as sought by the association.
As home to Gallaudet University, the world’s premier college for the deaf and hard of hearing, Washington likely has an ample pool of trained ASL interpreters into which the White House could tap. Mayor Muriel Bowser has made ASL interpretation a mainstay of her appearances, including a pair of interpreters who swap in and out.
Last month, a federal judge rejected that and other objections from the government, issuing an order requiring the White House to provide American Sign Language interpreting for Trump’s and press secretary Karoline Leavitt’s remarks in real time. The White House has appealed the ruling, and while the administration has begun providing American Sign Language interpreting at some events, there’s disagreement over what services it has to supply.
In his first week back in office, Trump signed a sweeping executive order putting a stop to diversity, equity and inclusion programs across the U.S. government. In putting his own imprint on the Pentagon, Defense Secretary Pete Hegseth in January issued an order stating that DEI policies were “incompatible” with the department’s mission.
This week, Secretary of State Marco Rubio ordered diplomatic correspondence to return to the more traditional Times New Roman font, arguing that the Biden administration’s 2023 shift to the sans serif Calibri font had emerged from misguided diversity, equity and inclusion policies pursued by his predecessor.
Meg Kinnard can be reached at http://x.com/MegKinnardAP"
https://www.fox21online.com/i/trump-administration-says-sign-language-services-intrude-on-trumps-ability-to-control-his-image/
#Metaglossia
#metaglossia_mundus
"Methodological Framework for Specialized Translation Curricula Development
Abstract
This article examines the scientific and methodological principles underlying the development of comprehensive educational and methodological complexes (EMC) for specialized translation courses, with particular emphasis on military translation. The study addresses the structural components of modern EMCs within the framework of competency-based approaches and Federal State Educational Standards (FSES) requirements. The research analyzes the sequential organization of educational content, the selection and systematization of thematic material, and the integration of didactic resources. Special attention is devoted to the development of textbooks for specialized translation, including their structural organization, the selection of authentic source materials, and the implementation of multimedia technologies. The article presents a practical case study of an electronic textbook for Italian military translation, demonstrating how theoretical principles can be applied in practice. The study emphasizes that modern EMCs function as flexible, variable, and non-linear scenarios of the educational process, incorporating not only traditional educational materials but also electronic resources, assessment tools, and methodological guidelines for both instructors and learners. The research concludes that effective EMC development requires consideration of subject-specific characteristics, linguistic and cultural particularities of the target language, and contemporary geopolitical contexts. The proposed approach to EMC design for specialized translation can be extrapolated to various language pairs and subject domains, thereby expanding its practical application in higher education systems.
Keywords: Educational and Methodological Complex, Specialized Translation, Military Translation, Competency-Based Approach, Textbook Design, Didactic Materials, Electronic Educational Resources, Authentic Source Materials, Translation Competencies, Higher Education, Curriculum Design, Language Pedagogy, Translation Studies."
Posted: 11 Dec 2025
Maria Smirnova
Moscow State Institute of International Relations (MGIMO University)
Date Written: October 14, 2025
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5838062
#Metaglossia
#metaglossia_mundus
"The art of translation is a distinct and important one. In this workshop for younger translators, experts in the field provided feedback and support for challenges that may come with the work.
A “small-but-mighty community”
Translators are the heart of the international publishing industry. They are often the first to read a story that could be brought to a new market, the sole voice advocating for a work to be published, and the stewards of the original writer’s story, voice, and intention. In the last several years, questions have been raised about whether artificial intelligence will take the place of human translators, whether the next generation of readers will read in their native languages or read books in English, and whether markets, particularly the US in its current political state, will welcome stories from beyond their borders. All of these potential challenges make the support of translators and their work, and the building of community, all the more important.
Award-winning translator and workshop leader Liz Lauffer, Image Sabine Schwarz
With that in mind, last month in New York City, the Goethe-Institut NYC and Frankfurter Buchmesse hosted a virtual translator workshop to provide community, education, and support for young German translators, led by acclaimed translator and winner of the 2014 Gutekunst Prize Liz Lauffer.
Rohan Kamicheril, senior editor at Farrar, Straus, and Giroux, founder of Tiffin, and former editor at Words without Borders, was a guest editor during the workshop, providing additional feedback and guidance to the translators.
Participants included Juliane Scholtz, Elizabeth Raab, Betsy Carter, Hayden Toftner, Jennifer Jenson, and Aziza Kasumov. Each of these translators had 8 weeks to translate an excerpt from Sara Gmuer’s Achtzehnter Stock, published in Germany by Hanser Verlag.
“I was eager to meet this merry band, discover how the next generation of literary translators is approaching our work, and dip into my own experience to see what I might pass on,” said Lauffer.
“They gave me a sense of belonging in this small-but-mighty community of ours. My hope is that this cohort will draw on the connections formed—whether it’s passing each other jobs, consulting on tricky bits, celebrating or commiserating with one another—and keep the flame lit.”
The translators submitted their translation samples two weeks before the workshop, and Lauffer provided edits on them. Lauffer then chose an excerpt from each translation and shared those with Kamicheril, who sent individual edits and comments back to each translator.
In preparation for the workshop, the translators were each asked to identify a paragraph they were struggling with. During the workshop, each participant shared what they found particularly challenging in the paragraph they had chosen, which led to a lively exchange that highlighted the different perspectives and versions each could create.
Kamicheril stressed that there is no single right formulation, because it is always about the bigger picture. What is the sound, the tone of the text? Which solution fits best in a particular context?
During the workshop, it became clear that the art of translation is closely tied to emotions, interactions, and interdependence, something AI cannot replicate.
“Though that’s been slowly changing, there’s just not a ton of infrastructure in the US to support emerging translators, so this was a rare opportunity to not only hone our skills but also learn more about the inner workings of the industry,” said Kasumov.
“I’m grateful to have been part of this group! My favorite part of the workshop itself was probably seeing how each and every one of us translated certain turns of phrases differently–a beautiful reminder that translation is an art, not something that can be automated away.”" By Erin L. Cox, Publisher | @erinlcox December 12, 2025 https://publishingperspectives.com/2025/12/building-the-future-of-translation-a-workshop-in-new-york-city/ #Metaglossia #metaglossia_mundus
"Norfolk is celebrating the anniversary of INTRAN, the county's homegrown interpretation and translation service.
Founded in December of 2000 to offer interpretation and translation services in 54 languages to public sector organisations in Norfolk, INTRAN has grown to provide over 300 organisations with translation and interpretation support across 174 languages, including British Sign Language and braille.
Cllr Robert Savage, Vice-Chairman of Norfolk County Council, said: "It's wonderful to celebrate a quarter century of INTRAN's work: this is a clear example of what can be achieved when organisations collaborate, bringing together 6 original partners to create a service that helps save public resources by avoiding duplication and now helps hundreds of organisations communicate clearly and swiftly with their service users. The work of INTRAN has helped improve lives, deliver better outcomes and ensure access to services for thousands of people who might otherwise have struggled. Here's to another 25 years of such success!"
Before INTRAN was established, those who needed translation and interpretation support often had to arrange for specialist help to be brought in from far afield, with interpreters travelling from as far away as Glasgow to support face-to-face events. INTRAN's creation changed all that, allowing a range of organisations in Norfolk to access swift, local translation services. Today INTRAN offers telephone and video interpreters as well as face-to-face options, with written translation services and staff training also available.
Julie Dwyer, member of the Norwich Deaf Club, explained that "Without an interpreter, I often feel invisible — INTRAN helps me be heard and understood clearly".
In 2024-25, interpretation and translation services were requested in over 82,000 individual bookings, covering 111 different languages for Norfolk and 126 in the region (data shows that 174 different languages are spoken in Norfolk and over 200 in the East of England). Today, languages such as Lithuanian, Arabic and Polish are the most in demand, a sharp contrast to the early 2000s when Portuguese, Russian and British Sign Language were the most requested languages.
Valerie Gidney, INTRAN Partnership Manager, said: "The risks of ineffective communication are now widely recognised and regulated by law. By coming together to deliver a common goal, members of our partnership have been able to reduce delays for service users and help staff deliver their duty of care with confidence, helping to avoid service delays, clearly communicate processes and consent, improve diagnoses and speed up accurate interventions.
With the continuous evolution of technology, over the past 25 years we've introduced new solutions, such as video interpreting on demand, which staff use to respond to emergency needs, access languages that are harder to source locally (such as Oromo, Rohingya or Nuer), and bridge gaps. New opportunities are currently being sought, which we are confident will further improve accessibility for members of our local deaf communities. Watch this space!"
Top 10 Languages in Norfolk 2024-25 In 2024-25, interpretation and translation services were requested in 111 different languages for Norfolk and 126 in the region, of which 30 had not been requested during 2023-24. Between 2023-24 and 2024-25, Norfolk has seen some changes to the Top 10 most requested languages, which account for 69.94% of all bookings:
1. Lithuanian
2. Arabic
3. Polish
4. Portuguese
5. Russian
6. Romanian
7. Pashto
8. Kurdish-Sorani
9. Bulgarian
10. Dari
Ukrainian was number 14 on the Norfolk list of top languages that year." Norfolk County Council, 11 December 2025 https://www.norfolk.gov.uk/article/74416/INTRAN-Celebrates-25-Years-of-Inclusive-Communication #Metaglossia #metaglossia_mundus
|
"If you don’t pay attention, the almost entirely arbitrary differences between Englishes can cause a huge fuss, whether in U.S. courts or somewhere else.126 But the dialectal diversity in this country means the consequences of seemingly minor linguistic differences are innumerable. Analyzing Supreme Court precedent, population statistics, everyday prejudice, and dialectal grammar reveals that “English” contains multitudes. Maybe the most angst-inducing part of it all is the lack of data, both because this is an understudied area and because misinterpretation is so capable of repetition and very adept at evading review. The legal system relies deeply on language and, a fortiori, on dialect. The latter seeks but recognition."
#metaglossia_mundus