Useful Tools, Information, & Resources For Wessels Library
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
February 14, 2023 12:24 PM

The Rivalry Behind The Translation Of The Rosetta Stone


The discovery of the Rosetta Stone in 1799 breathed life into a quest long deemed impossible: the reading of Egyptian hieroglyphics. Toby Wilkinson tells the tale of the two rivals who raced to be first to crack the code

The Rosetta Stone
Published: September 27, 2022 at 3:25 pm
For more than 40 generations, no living soul was able to read an ancient Egyptian text. Even before the last-known hieroglyphic inscription was carved (in August AD 394), detailed understanding of the script had all but died out in the Nile Valley, save for a few members of the elite. As those with the specialist knowledge also dwindled, speculation took over and fanciful theories sprang up about the meaning of the mysterious signs seen adorning Egyptian monuments.

As early as the first century BC, the Greek historian Diodorus Siculus had averred that the script was “not built up from syllables to express the underlying meaning, but from the appearance of the things drawn and by their metaphorical meaning learned by heart”. In other words, it was believed hieroglyphics did not form an alphabet, nor were they phonetic (signs representing sounds). Instead, they were logograms, pictures with symbolic meaning.

This was a fundamental misconception, and deflected scholars from decipherment for the following 19 centuries. The European Enlightenment’s ablest philologists (those who study the history and development of languages) deemed the task to be impossible.

English antiquarian William Stukeley said in the early 18th century: “The characters cut on the Egyptian monuments are purely symbolical… The perfect knowledge of ’em is irrecoverable.” Five decades later, French orientalist Antoine Isaac Silvestre de Sacy dismissed the work of deciphering the writing as “too complicated, scientifically insoluble”.

Only at the end of that century did a bold Danish scholar named Georg Zoëga suggest that some of the hieroglyphs might be phonetic after all. “When Egypt is better known to scholars,” he wrote, “it will perhaps be possible to learn to read the hieroglyphs and more intimately to understand the meaning of the Egyptian monuments.”

Zoëga’s statement was a prescient one. A year later, in 1798, Napoleon launched his expedition to Egypt, taking with him a large contingent of scientists and scholars to study the country’s ancient remains. In July 1799, his soldiers discovered the Rosetta Stone: a stela carved with a royal decree promulgated in the name of Ptolemy V in the second century BC.

The languages on the Rosetta Stone
While the decree itself was not significant, the fact that it had been inscribed in three scripts (hieroglyphics; an equally enigmatic form of Egyptian now known as demotic; and the still-understood ancient Greek) was what offered hope of finally making the unreadable Egyptian writing readable. Copies of the stone’s inscriptions circulated in Europe and cracking the code became one of the greatest intellectual challenges of the new century.

It was not long before the challenge was taken up by two brilliant minds of the age: Thomas Young and Jean-François Champollion, who could not have been more different in talent or temperament.

Young was a dazzling polymath of easy, self-effacing erudition, while Champollion was a single-minded obsessive, a self-conscious and jealous intellectual. And for added piquancy, the former was English, the latter French. The scholars were destined to be bitter rivals in the decipherment race.

Thomas Young and the Rosetta Stone
Thomas Young was born in Somerset in 1773 to Quaker parents who placed a high value on learning. He showed an early aptitude for languages: it is said that by the age of two he had learned to read, and that by 14 he had gained some proficiency in French, Italian, Latin, Greek, Hebrew, Arabic, Persian, Turkish, Ethiopic, and a clutch of obscure ancient languages. When old enough, Young went in search of a profession to support himself, so he trained in medicine and moved to London in 1799 to practise as a doctor. Science, however, remained his passion.

Pioneer philologist English polymath Thomas Young
Thomas Young (1773-1829), English physicist and Egyptologist, who proposed the undulatory (wave) theory of light and made early advances towards deciphering the Rosetta Stone. (Photo by Oxford Science Archive/Print Collector/Getty Images)
In 1801, Young was appointed professor of natural philosophy at the Royal Institution, and for two years gave dozens of lectures covering virtually every aspect of science; for sheer breadth of knowledge, the series has never been surpassed. With his supreme gifts as a linguist, it is not surprising that he should have become interested in the philological conundrum of the age: the decipherment of hieroglyphics. In his own words, he could not resist “an attempt to unveil the mystery, in which Egyptian literature has been involved for nearly twenty centuries”.

He began studying a copy of the Rosetta Stone inscription in 1814. It had quickly been determined that the three scripts said the same thing, if not word for word, so being able to read one inscription (the ancient Greek) would be a starting point for another (the hieroglyphics). The hieroglyphic inscription, however, was incomplete due to damage to the top of the stone, so scholars began by studying the second script (demotic). Young, blessed with an almost photographic memory, managed to discern patterns and resemblances that had escaped others, namely that the second script was closely connected with hieroglyphics, even derived from them, and that it was composed of a combination of both symbolic and phonetic signs.

Young was the first to make these ultimately correct evaluations. Also, working on the assumption that the name of a king was enclosed in a ring, or cartouche, in the hieroglyphic inscription, Young could locate every mention of “Ptolemy”, from which he was able to build a starting alphabet for hieroglyphics.

In 1818, Young summed up his pioneering knowledge in an article for the Encyclopaedia Britannica simply entitled “Egypt”, but he made the fateful move of publishing his landmark article anonymously. This allowed his great rival eventually to take the glory of decipherment.

Jean-François Champollion and the Rosetta Stone
Jean-François Champollion was 17 years Young’s junior. Born in 1790 in south-western France to a bookseller and his wife, he grew up surrounded by writings and displayed a precocious genius for languages.

It fell to his older brother, the similarly gifted Jacques-Joseph, essentially to raise him and support his learning. They moved to Grenoble, where the young Champollion picked up half a dozen languages. Crucially, it turned out, among them was Coptic: an ancient language with an alphabet based on Greek, which he correctly surmised to be a descendant of ancient Egyptian.

An 1831 portrait Of Jean-Francois Champollion
Portrait of Jean-François Champollion (1790-1832), 1831. Found in the Collection of Musée du Louvre, Paris. (Photo by Fine Art Images/Heritage Images/Getty Images)
In 1804, Champollion first came across a copy of the Rosetta Stone inscription, and was fascinated. When the mayor of Grenoble reportedly asked him, in 1806, whether he intended to study the fashionable natural sciences, “No, Monsieur,” was the firm reply. “I wish to devote my life to knowledge of ancient Egypt.”

Following a few years studying in Paris, Champollion, still only 19 years old, moved back to Grenoble to take up a teaching post at the local college, gaining a promotion in 1818. This brought a measure of security that allowed him to devote more time to the study of ancient Egypt. That same year in England, Young was penning his seminal article for the Encyclopaedia Britannica.

Then, just three years later, Champollion’s revolutionary politics cost him his good name. Fired from the college and ejected from Grenoble, he lodged with his brother. With nothing else to occupy himself, and the benefit of Jacques-Joseph’s extensive library, he threw himself wholeheartedly and with a single-minded focus into the subject that had occupied his mind for years: deciphering the Egyptian script.

Based on his studies of the Rosetta Stone, Champollion made some progress, but was still unable to crack the code entirely. Then a second major piece of the puzzle arrived in the form of an obelisk discovered at Philae and removed from Egypt by a British collector, William John Bankes, to decorate the grounds of his stately home in Dorset.

Lithographs of the inscription circulated in the early 1820s and, as with the Rosetta Stone, the names of rulers – Ptolemy again and Cleopatra – could be identified in cartouches. Incidentally, the lithograph that went to Young contained an error, hampering his research, while the copy that came into Champollion’s possession in January 1822 was accurate.

Certain he was making rapid progress, the Frenchman assigned phonetic values to individual hieroglyphic signs and built an alphabet of his own, which let him find the names of other rulers of Egypt on other monuments.

The final breakthrough came on Saturday 14 September 1822 after Champollion received another inscription, from the pharaonic temple at Abu Simbel. Applying all the knowledge he had laboured so long and so hard to acquire, he was able to read the royal name as that of Ramesses the Great. Encouraged, he went on to read Ptolemy’s royal epithets on the Rosetta Stone. By the end of the morning, he needed no further proof that his system was the right one.

Hieroglyphic carvings at Abu Simbel, site of two temples built by Ramesses the Great
Hieroglyphic carvings at Abu Simbel, site of two temples built by Ramesses the Great in the 13th century BC. As the script could be written in any direction, the way the human and animal figures face shows how to read an inscription (Photo by Getty Images)
He sprinted down the road to his brother’s office at the Académie des Inscriptions et Belles-Lettres, flinging a sheaf of papers on to the desk and exclaiming: “Je tiens mon affaire!” (“I’ve done it!”)

Overcome with emotion and exhausted by the mental effort, Champollion collapsed to the floor and had to be taken back home, where for five days he was confined to his room completely incapacitated. When he finally regained his strength, on the Thursday evening, he immediately resumed his feverish studies and wrote up his results. Just one week later, on Friday 27 September, he delivered a lecture to the Académie to announce his findings formally. By convention, his paper had to be addressed to the permanent secretary, so was given the title Lettre à M. Dacier (“Letter to Mr Dacier”).

The rivalry of Young and Champollion
By extraordinary coincidence, in attendance at that historic talk was Thomas Young, who happened to be in Paris. Moreover, he was invited to sit next to Champollion while he read out his discoveries.

In a letter written two days later, Young acknowledged his rival’s achievement: “Mr Champollion, junior… has lately been making some steps in Egyptian literature, which really appear to be gigantic. It may be said that he found the key in England which has opened the gate for him… but if he did borrow an English key, the lock was so dreadfully rusty, that no common arm would have had strength enough to turn it.”

This outward magnanimity concealed a deeper hurt at the belief that Champollion had failed to acknowledge Young’s contributions to decipherment. Quietly determined to set the record straight, he published his own work within a few months, this time under his own name. It was pointedly entitled An Account of Some Recent Discoveries in Hieroglyphical Literature and Egyptian Antiquities, Including the Author’s Original Alphabet, as Extended by Mr Champollion.

The Frenchman was not about to take such a claim lightly. In an angry letter to Young, he retorted: “I shall never consent to recognise any other original alphabet than my own… and the unanimous opinion of scholars on this point will be more and more confirmed by the public examination of any other claim.”

Indeed, Champollion was as adept at self-promotion as Young was self-effacing. Buoyed by public recognition, he continued working and came to a second, equally vital realisation: his system could be applied to texts as well as names, using the Coptic he had utterly immersed himself in as a guide. This marked the real moment at which ancient Egyptian once again became a readable language. The race had been won.

Hieroglyphs in the notebook of Jean-Francois Champollion
Pages of Jean-François Champollion’s notebook filled with facsimiles of hieroglyphic inscriptions. The Frenchman dedicated his life to learning the meaning of the symbols that had baffled scholars for centuries (Photo by Art Media/Print Collector/Getty Images)
Champollion revealed the full extent of his findings in his magnum opus, Précis du système hiéroglyphique des anciens Egyptiens (Summary of the hieroglyphic system of the ancient Egyptians). Published in 1824, it summed up the character of ancient Egyptian: “Hieroglyphic writing is a complex system, a script at once figurative, symbolic, and phonetic, in the same text, in the same sentence, and, I might almost say, in the same word.” His reputation secure, he even felt able to acknowledge, grudgingly, Young’s work with the comment, “I recognise that he was the first to publish some correct ideas about the ancient writings of Egypt.”

Young, for his part, seemed to forgive Champollion for any slights, later telling a friend that his rival had “shown me far more attention than I ever showed or could show, to any living being”. Privately, Champollion was far less magnanimous, writing to his brother: “The Brit can do whatever he wants – it will remain ours: and all of old England will learn from young France how to spell hieroglyphs using an entirely different method.”

In the end, despite their radically different characters and temperaments, both made essential contributions to decipherment. Young developed the conceptual framework and recognised the hybrid nature of demotic and its connection with hieroglyphics. Had he stuck at the task and not been distracted by his numerous other scientific interests, he might well have cracked the code himself.

Instead, it took Champollion’s linguistic abilities and focus. His Lettre à M. Dacier announced to the world that the secrets of the hieroglyphics had been discovered and ancient Egyptian texts could be read.

It remains one of the greatest feats of philology. By lifting the civilisation of the pharaohs out of the shadows of mythology and into the light of history, it marked the birth of Egyptology and allowed the ancient Egyptians to speak, once again, in their own voice.

Toby Wilkinson is an Egyptologist and author. His books include A World Beneath the Sands: Adventurers and Archaeologists in the Golden Age of Egyptology (Picador, 2020)

This content first appeared in the October issue of BBC History Magazine


Via Charles Tiayon

Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
February 14, 2023 12:21 PM

Microsoft Announces New Generative AI Search Using ChatGPT; Bada BING, Bada Boom—The AI Race Is On


Vice President and Principal Analyst Melody Brue gives her analysis of Microsoft's new generative AI search using ChatGPT in its Bing search engine.

Feb 9, 2023

When you’re looking for an answer to a question, want to find a local repair shop or need a recipe for braised short ribs, the typical response is to "Google it." In fact, Google is now recognized as a verb in the Merriam-Webster dictionary. By this point, of course, Google has been the unquestioned leader in search for decades, despite various efforts by competitors to take that crown. Google has remained at the top of this food chain by optimizing the user experience—and capturing the lion's share of advertising dollars—across new types of devices, voice search, e-commerce search and more.

Enter AI, and Google now faces a new set of threats from rivals like Microsoft, who have narrowed the competition gap and forced the search giant's hand in a matter of days. In this article we will look at what is happening in generative AI and how Microsoft is on a mission to challenge Google's search leadership. This includes Microsoft's investment in OpenAI, the company behind ChatGPT (short for “chat generative pre-trained transformer”), a generative AI tool and the most quickly adopted product in history.

 

What is ChatGPT and why is it relevant for search?
ChatGPT is a natural language processing tool that can create content, images and even code on demand via conversations with a chatbot. The AI-driven tool is built on OpenAI's GPT-3 family of large language models. ChatGPT launched in November 2022 and amassed 100 million users in its first two months, although the app is often down or at capacity—which is probably to be expected in the context of such explosive adoption.

 
 
 

An attempted login to ChatGPT on the morning of February 8, 2023. (Image: Melody Brue)

Changes in consumer behavior and modern technologies have reshaped search in the past with shifts from desktop to mobile, tablets and other voice-commanded devices. Google wrote the playbook on how good search is conducted; the technology toolbox supporting that is unlikely to become irrelevant. But the burning question is: will AI become more relevant than what is in Google's current toolbox for search? According to Microsoft CEO Satya Nadella, "The [AI] race starts today."


Microsoft makes the first power play with OpenAI and ChatGPT

In January, Microsoft invested an estimated $10 billion in OpenAI, valuing the company at $29 billion. The company first invested $1 billion in OpenAI in 2019, and then more in a 2021 funding round when the startup was working closely with Azure, Microsoft's cloud service. The most recent investment also seemingly made Microsoft the exclusive cloud computing provider to OpenAI.

 

Along with this latest investment, Microsoft announced the new AI-powered Bing search engine and Edge browser. Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy, was live-tweeting his thoughts from the event; his enthusiasm (albeit tempered) was enough to convince me to install the browser and extension and check out the new Bing while I patiently keep my place on the waitlist for the full Microsoft Bing ChatGPT integration.

My initial reaction is that the new Bing engine is slick and requires less sifting through useless content than a typical Google search does. The sorting is not intuitive in the “Google it” world I am used to. Still, the conversational tone and the variety of answers presented alongside aggregated information make it feel like asking a friend who knows you well enough to answer your questions in a way you will understand. But just like that friend, the Bing engine’s accuracy should be checked. The data in the early version is not guaranteed to be accurate, and it may be some time before a high degree of accuracy can be promised. This is a good reminder that misinformation and security must remain top-of-mind for any company releasing AI-generated content. These are topics Microsoft and Google are taking seriously—and for which they must take a strict approach to regulating, auditing and reporting.

Google unveils Bard and invests $300 million in Anthropic

Earlier this week, Google announced Bard, a competitor to ChatGPT built atop Google’s powerful natural language processing model LaMDA (Language Model for Dialogue Applications). Bard will be released to “trusted testers” outside the company at an undisclosed date soon. The company did not give a time frame for general availability but said it will be released to the public after testing safety issues and working out other kinks.

Along with the release of Bard, Google has announced that it will allow developers to create their own applications by tapping into the company’s natural language models. "Beyond our own products, we think it's important to make it easy, safe and scalable for others to benefit from these advances by building on top of our best models," Alphabet CEO Sundar Pichai wrote in a blog post about the topic.

Of course, in all this one would assume that Google could not just sit idle after receiving Microsoft's shot across the bow. The company held its own event in Paris a day after Microsoft’s event but drew a lackluster reception for a presentation that seemed rushed and unprepared—even though the company has pioneered many of the technologies behind generative AI products and has invested a hefty sum in the technology. The botched demo, in which Bard produced an inaccurate response to a query among other snafus, sent Google parent Alphabet’s stock plummeting. Shares in the company were down 7.7% after Wednesday’s event—meaning that the company lost $100 billion in value overnight.

Google also invested $300 million in Anthropic, one of the most hyped OpenAI rivals whose AI model “Claude” is a ChatGPT competitor. Using Google Cloud’s GPU and TPU clusters, Anthropic will train, expand and implement Claude.

Anthropic's history might give some people pause, however. The company was started by a group of former OpenAI employees and backed by Sam Bankman-Fried—the now-indicted former CEO at the heart of the FTX scandal; it is still an open question whether his stake could be liquidated in the FTX bankruptcy.

There is more to this war than search

Through Bing, Microsoft currently commands 3% of the global search market. Even modest gains in that number would mean billions of dollars in advertising revenue. According to information shared by Microsoft, each percentage point of search advertising market share yields an additional $2 billion in revenue. While this is a measly portion of Microsoft’s total annual revenue (nearly $200 billion in 2022), the growth opportunity is still significant. However, the war is not simply about search and ad dollars. It is also about where that business comes from and how it affects the competition—in this case, Google.

As laid out above, Google has spent a lot of money investing in AI, largely in response to competitive threats. Competition in the search market inevitably makes search less profitable for Google, not only if it loses some percentage of ad spend to Bing, but also through the increased expense of running AI-powered vs. classic search engines. Whereas gaining search market share for Microsoft is nicely incremental, losing market share for Google hits the company hard. Search advertising revenue in the December 2022 quarter accounted for 56% of revenues for Alphabet, Google's parent company. A less profitable Google means less money in the company's war chest to compete in cloud computing and other growth areas.

The AI battle will play out in our daily lives and the modern workplace

While Microsoft has put a lot of muscle into the search race, the company's investment in OpenAI (again, dating back to 2019) was made with visions that reach well beyond chatbots. OpenAI technology can be integrated into the company's productivity tools, including Outlook and Office 365. This could take the form of digital assistants, bot-suggested PowerPoint content and formatting, email sorting and suggested replies based on previous interactions, suggested next best actions and more. Within Azure alone, the sheer popularity of OpenAI and ChatGPT could be enough to lure potential cloud customers away from Amazon or Google. And on the gaming front, Microsoft’s investment in OpenAI could give the company a competitive advantage over rivals Sony and Nintendo.

Microsoft also announced its intention to integrate ChatGPT into Microsoft Teams in a premium plan. The chatbot will suggest templates specific to the needs of the meeting organizer, generate notes from meetings, summarize content specific to users based on their needs and even translate notes and transcripts into 40 different languages. ChatGPT can also summarize meetings, calls and webinars into chapters, assign them titles and flag specific names and content. This could be a game changer for reducing the number of meetings people need to attend while improving their ability to consume content relevant to them and their particular roles. My assessment is that this may free people to be more present and think more clearly during meetings because they won’t need to spend so much energy corralling participants, taking notes, or assigning post-meeting responsibilities. Ultimately, that means they can spend less time planning and more time executing.

Google likewise has plans for AI integrations beyond search. During the company’s earnings call, Google CEO Pichai spoke of integrating generative AI into most of its products, from Google Docs to Gmail. While he didn’t specify what AI-assisted emails would be like, he did broadly touch on designs and features Gmail might have. It makes sense to me that in an application like email AI would be able to analyze content from previous interactions and suggest replies, automate workflows and follow-ups and integrate with scheduling tools. By linking applications like Gmail, Calendar and Chat, Bard can potentially act as anyone’s personal assistant, freeing up employees at every level in an organization to focus on more meaningful and strategic work.

AI is not coming after your job—it could be creating jobs

After a couple of months of what can only be described as brutal layoffs in Big Tech, it must be hard for recently laid-off Microsoft and Google employees (among others) to see the companies invest billions in the next wave of computing. In reality, some of those layoffs were done to make room for hires in key strategic areas such as AI. According to ZipRecruiter data, postings for AI-related roles in January were up 6.3% compared to February 2020.

AI is undoubtedly top-of-mind for Big Tech execs as they address Wall Street. As found in a Reuters analysis of recent earnings calls, Alphabet, Microsoft and Meta used variations of the terms “AI,” “generative AI” and “machine learning” or “ML” up to six times more often than in the previous quarter.

Satya Nadella, CEO of Microsoft, has addressed AI specifically, noting in the company's January layoff announcement that AI is driving the "next major wave of computing" as Microsoft uses AI models to build a "new computing platform." He also acknowledged that the company "will continue to hire in key strategic areas." Sounds like AI to me.

“Bing”ing it home

I do not think people will be saying "Bing that" anytime soon, but clearly Microsoft is serious about taking the lead in the AI war of the tech giants. At least for now, it has presented some nice-looking solutions that feel well-thought-out, that fit cohesively into the company's other products and services and that offer new layers and extensions for areas of its infrastructure where even incremental market gains are significant revenue drivers. These advantages could present an even bigger competitive moat between Microsoft and its competitors.

It is important to note, however, that competition in the space is still heating up. With additional players like Chinese tech giant Baidu announcing their own ChatGPT-style “Ernie Bot” this week, more rivals—big, small and maybe even some currently in stealth mode—are sure to follow. I agree with Moorhead's assessment that it is a long game, and that Big Tech is here to play that game.

It is far too early to call a winner, and I believe that generative AI is not a zero-sum game. As with many revolutionary technologies, competition creates continual advancement, differentiated offerings tailored to a wide range of needs and an emerging balance of supply and demand.

There is much more to tackle about how these tools will affect our lives—at home and at work—and how AI should be responsibly managed. I look forward to watching the long game, trying out each offering and seeing how tech companies and their AI models learn, evolve and grow. When they are truly ready for prime time, I also look forward to seeing the impact on the future of work, productivity and automation to improve operations and efficiencies.

 
 

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies, including Microsoft and Google. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships.

Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Fivestone Partners, Frore Systems, Groq, MemryX, Movandi, and Ventana Micro.


Via Charles Tiayon
Dr. Russ Conrath's insight:

"ChatGPT is a natural language processing tool that can create content, images and even code on demand via conversations with a chatbot."

Charles Tiayon's curator insight, February 10, 2023 11:16 PM

"Vice President and Principal Analyst Melody Brue gives her analysis of Microsoft's new generative AI search using ChatGPT in its Bing search engine."

#metaglossia mundus

Rescooped by Dr. Russ Conrath from Autism Storyboards
January 31, 2023 1:47 PM
Scoop.it!

NAIS - Neurodiversity and Differentiation

NAIS - Neurodiversity and Differentiation | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Why do learning disabilities continue to be called learning disabilities instead of learning differences? Why are they not simply considered part of the landscape of neurodiversity? Thomas Armstrong, executive director of the American Institute for Learning and Human Development, writes:

The number of categories of illnesses listed by the American Psychiatric Association has tripled in the past fifty years. With so many people affected by our growing “culture of disabilities,” it no longer makes sense to hold on to the deficit-ridden idea of neuropsychological illness.1

The labels are maintained in large part because many laws, regulations, policies, and practices lag behind current research, and disability diagnoses are still required to support basic student rights. For example, a “disability” is required for students to access accommodations on standardized testing, produced by largely privately owned organizations (College Board,2 ACT). The term “disability” comes from federal legislation that allows for rights, under the law, to help even out the playing field for those with diagnosed disabilities, including learning disabilities. Additionally, funding for medical and educational resources has muddied the waters of terminology. Diagnoses are required for insurance to cover medical costs, and labels are needed to support funding for educational resources.

While the clinical and federal references for diagnoses have unique functions, the ICD-10, DSM-V, IDEA, Section 504 of the ADA,3 and the education code standardize the terminology to some extent and limit the semantics required of those advocating for students. As educators, we find it challenging to switch perspectives, and simultaneously adopt a new vocabulary, to reinforce the setting in which the student needs support: classroom, tutorial, doctor’s office, standardized test board.
The task can translate into navigating a series of hoops that can seem arbitrary and entirely separate from a deeper understanding of the learner. While the philosophical shift in terminology from “disability” to “difference” or “style” is more informed and politically correct, it is the political system that holds one to the term “disability” in order to access legal rights for those who need individualized support and accommodations.

The tipping point will come when a substantial cohort of educators and parents understands differences, deficits, and diversity. A wider perspective allows people to address learning differences in an accepting and proactive manner. Acceptance and early intervention ensure that learning variations never reach the level of deficit that creates the discrepancy model on which disability determination has historically been based. While a growing number of people will become more understanding and accepting of the neurodiversity of students, society’s medical and educational institutions will still be significantly influenced by financial and legislative terminology. Semantics is getting in the way of a more humane approach to learning.

Differentiation … Because It’s Just Good Teaching

Differentiated instruction that meets individual student needs should be the norm in teaching, yet this requires additional training, materials, and coaching to support teachers’ ability to understand, prepare for, and accommodate all learners. Teachers are asked to differentiate for each learner for each subject and at various times of the day, with a host of variables that will impact each individual’s experience. Differentiation is essential in the way a teacher designs and implements instruction on a daily basis with their students. If, therefore, differentiation is simply “good teaching,” why are we subjecting learners (and ourselves) to a host of tests, labels, and logistics to determine how a learner functions outside the norm?
A differentiated approach considers all learners as outside the norm. For teaching to adapt to the modern framework of a growth mindset, there must be a collective rejection of the semantics of educational labels. Instead, educators must gather accurate data at regular intervals in a student’s educational experience and then use this formative data to adjust instructional approaches and materials. School communities must work together to support the needs of all learners. Teachers must assess in the true meaning of the word assess — to sit beside — rather than continue to test mastery of static content through measurement tools that necessitate accommodations for at least 20 percent of the population.

Schools must focus on a Universal Design for Learning4 to meet the unique needs of all students, knowing that every student benefits from an individualized approach to instruction. The term “accommodations” would no longer be necessary if accessibility features, such as audio books, voice dictation, calculators, and untimed tests, were available to support a more mindful approach to education. And yet, accessibility features alone are not enough. Clinically researched screening tools, such as the Comprehensive Test of Phonological Processing (CTOPP), can be used to modify curriculum and instruction to meet students’ needs at the early elementary level; and multisensory teaching methods and materials, many of which were originally designed for students with diagnosed learning disabilities, can be used to benefit all students, regardless of age or skill set.

Reading, for example, is not a natural skill developmentally. Reading is learned through explicit instruction and sufficient practice. Deficits in phonological awareness are viewed as the hallmark of reading disabilities.
Phonological awareness is, however, the most responsive to intervention of the phonological processing areas.5 When teachers have the support to better understand how to guide this skill, fewer students struggle.

Implementing a Paradigm Shift

Once we have acknowledged that students process information in a variety of ways, it is critical to present new learning in different formats to ensure educational equity. When writing lesson plans, teachers who are adept at differentiating research and employing multiple resources and multiple perspectives on the same topic6 have a whole host of teaching techniques to use, such as videos, pictures, interactive websites, music, poetry, art, guided visualization, concrete manipulatives, small-motor and large-motor activities, maker’s projects, read-alouds, self-reflective writing, independent reading, analytic writing, small-group and large-group discussion prompts, oral presentations, and lectures. This resource of tools gives teachers quick access to many forms of instructional input and the flexibility to adjust to students’ interests, experience, background knowledge, and learning needs. Most important, when teachers present a variety of teaching strategies, they are also modeling the fact that there are many forms of acceptable output.

At Stevenson School in Carmel, California, we operate from the position that all learners deserve a seat at the table and also deserve to be fed according to their individual dietary needs. Instead of thinking of the developmental learning spectrum from high to low, we think of it as propensities in different modes of learning. Equity is about getting what you need, not getting the same as everyone else. This is as true for the student-genius with debilitating social-emotional glitches as it is for the dyslexic/ADHD child with academic learning challenges. Within this operating philosophy, we look at equity from a different point of view and provide a broad range of options.
For example, in grade 6, we are learning about the early 1800s, and the textbook is dense and somewhat dry. We have provided these students with key vocabulary, videos, and photographs of the same material in advance so that when they encounter the textbook, they have a context for the big picture. We then guide the students through the process of reading dense nonfiction text by projecting the text on a large screen and having the teacher model annotation skills with a think-aloud strategy. The group is then ready to reflect in their independent writing journals on the topic covered. At this point, we have exposed the students to the material visually, verbally, and, now, intrapersonally. The time allows for synthesis of complex ideas and multiple input modalities to provide access to all learners. Discussion follows, which draws in the interpersonal learners and gives students practice with concise, articulate oral presentation. In this class, the sixth-grade students are asked to write their own graphic novels. They choose a topic relevant to the early 1800s and will either draw the panels themselves or use Google Slides or Storyboard That to create the final draft.

At present, differentiation often pushes teachers to action outside their comfort zone in classroom preparation, classroom instruction, and assessment of knowledge. If we simplify the notion of learning and teaching to the common-sense fundamentals of communication (listening, speaking, reading, and writing), teachers often make natural and intuitive connections in how and why to differentiate. With an additional understanding of the limitations of attention and memory, we can strive to expand our teaching and assessing of student knowledge. Listening and speaking are not just in the realm of the speech therapist or the foreign language teacher; reading and writing are not just the domain of the English teacher.
Educators across all content areas benefit from an understanding of the language continuum so that instruction, especially of new material, is couched in a context that will afford learners time for input, then processing, then output. Learning requires attention and engagement, and for students with biologically based ADHD, there is nothing teachers can do to replace the neurotransmitters necessary for attention.7 They can, however, respect the limitations of attention, increase movement and hands-on learning, break information into manageable units, and provide embedded strategy training.

The bold shift to comprehensively develop faculty who are competent in differentiated instruction, classroom management, and assessment has a more significant impact on positive academic and emotional outcomes for students than any other curricular initiative.8 The essential factors supporting the implementation of this paradigm shift are a shared intention of prioritization, inspiration, frequent observation, targeted professional development, planning time, access to materials, ongoing support, consultation, and coaching. Creative allocation of resources, organization, and conscientious follow-through allow schools to accomplish their desired goals.

Frequent observation of instruction and regular feedback are tangible measures that afford educational leaders a proactive role in helping teachers reach their students. At high-performing schools, “Leaders typically observe each teacher eight times a year — three more times than leaders at other schools” and provide verbal or written feedback after almost every observation.9 Faculty benefit from the same individualized accountability as their students. When administrators and colleagues observe day-to-day instruction, everyone is better informed to discuss, critique, and examine the ways in which teaching practices can be improved.
Review of classroom videos adds an additional level of self-reflection and allows educators to play an integral role in their own professional growth. As the poet Rabindranath Tagore said, “A teacher can never truly teach unless he is still learning himself.” It is essential that the shared vision is clear — that everyone is on board and feels safe to explore new ideas.

Targeted professional development reflects a commitment to strengthen instruction at the individual teacher level. An awareness of what each teacher needs to be more effective in his or her practice unfolds through observation and a collegial coaching relationship.10 A school culture of teamwork, motivation, expertise, and creative thinking engages teachers to be innovative educators.11

Planning time is essential to implement innovative ideas. Administrators must pay attention to the flow of the daily schedule, the yearly calendar, and the timing of extra demands. While flexibility is key to a dynamic team, there is never enough time for everything. Careful consideration is important in supporting collaboration, in encouraging project-based learning initiatives, and in protecting teachers from a sense of being overwhelmed.

Access to materials needed to implement innovative ideas across the curriculum must be provided. While supplies do not necessarily need to be expensive, materials should be budgeted into the plan and be available, depending on the financial limitations of an institution, along with the considerable time it takes resourceful teachers to create their own materials.

Ongoing support, consultation, and coaching are necessary to strengthen the instructional culture. Regular meetings with a mentor — an administrator, specialist, or colleague — are vital to fully exploring the potential of learning theories and instructional practices.
If coaching is embedded in the culture, then, just as with observation, the formality falls away to reveal an empowering relationship that can be the springboard for passionate, purposeful teaching.

More than ever before, 21st century schools need exceptional teachers — teachers who love to teach learners; who are committed to finding ways to access their students individually and as a group; and who are educated, trained, and treated as professionals. Teaching is a dynamic profession, requiring responsiveness to an immeasurable set of real and perceived limitations and strengths. With patience, acceptance, information, and a sustainable framework of support, educators can create safe and supportive learning environments that rise above the political semantics of learning differences and reframe neurodiversity in terms of equity and empathy.

Notes

1. Thomas Armstrong, Neurodiversity: Discovering the Extraordinary Gifts of Autism, ADHD, Dyslexia, and Other Brain Differences (Cambridge, MA: Da Capo Press, 2010).
2. The College Board’s SAT test originates from an adaptation of the Army Alpha — the first mass-administered IQ test, which was made more difficult for use as a college admissions test. (Frontline, “A Brief History of the SAT,” PBS Online; online at http://www.pbs.org/wgbh/pages/frontline/shows/sats/where/history.html.)
3. ICD-10 is the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD), a medical classification list issued by the World Health Organization (WHO); DSM-V is the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, a classification and diagnostic tool published by the American Psychiatric Association (APA); IDEA is the Individuals with Disabilities Education Act, a four-part piece of American legislation that ensures that students with a disability are provided with free appropriate public education (FAPE) that is tailored to their individual needs; Section 504 of the Rehabilitation Act of 1973 is federal legislation that guarantees certain rights to people with disabilities. It was the first U.S. federal civil rights protection for people with disabilities; it helped pave the way for the 1990 Americans With Disabilities Act (ADA).
4. An education framework based on research in the learning sciences.
5. Richard K. Wagner, Joseph K. Torgesen, and Carol Rashotte, Comprehensive Test of Phonological Processing (CTOPP) (Austin, TX: PRO-ED, 1999); Richard K. Wagner, Joseph K. Torgesen, and Carol Rashotte, “Development of Reading-Related Phonological Processing Abilities: New Evidence of Bidirectional Causality From a Latent Variable Longitudinal Study,” Developmental Psychology 30, no. 1 (1994): 73-87; Richard K. Wagner and Joseph K. Torgesen, “The Nature of Phonological Processing and Its Causal Role in the Acquisition of Reading Skills,” Psychological Bulletin 101, no. 2 (1987): 192-212.
6. A good example of curating is Critical Explorers (www.criticalexplorers.org), which provides free online curricular resources.
7. JoAnn Deak, “An Evening With Dr. JoAnn Deak,” presentation to Stevenson School, August 24, 2015, Pebble Beach, CA.
8. In five years of steady growth, student test scores at Stevenson School increased from below grade level performance in reading and math to award-winning standings in the top 15 percent in the state (nationalblueribbonschools.ed.gov/awardwinners/).
9. The New Teacher Project, 2012.
10. Stephen D. Brookfield, The Skillful Teacher: On Teaching, Trust, and Responsiveness in the Classroom (San Francisco: Jossey-Bass, 2015).
11. Eleanor Duckworth, “Confusion, Play and Postponing Certainty,” Harvard Gazette, February 16, 2012; online at http://news.harvard.edu/gazette/story/2012/02/confusion-play-and-postponing-certainty-eleanor-duckworth-harvard-thinks-big/2012.

For Further Reading

Barquero, Laura, Nicole Davis, and Laurie E. Cutting. “Neuroimaging of Reading Intervention: A Systematic Review and Activation Likelihood Estimate Meta-Analysis.” PLoS One 9, no. 11 (2014).
Dweck, Carol. Mindset: The New Psychology of Success. New York: Random House, 2006.
Eide, Brock I., and Fernette L. Eide. The Dyslexic Advantage: Unlocking the Hidden Potential of the Dyslexic Brain. New York: Penguin, 2012.

Via Lawton Rogers
No comment yet.
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
January 31, 2023 1:45 PM
Scoop.it!

The Rivalry Behind The Translation Of The Rosetta Stone

The Rivalry Behind The Translation Of The Rosetta Stone | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

The discovery of the Rosetta Stone in 1799 breathed life into a quest long deemed impossible: the reading of Egyptian hieroglyphics. Toby Wilkinson tells the tale of the two rivals who raced to be first to crack the code

The Rosetta Stone
Published: September 27, 2022 at 3:25 pm
For more than 40 generations, no living soul was able to read an ancient Egyptian text. Even before the last-known hieroglyphic inscription was carved (in August AD 394), detailed understanding of the script had all but died out in the Nile Valley, save for a few members of the elite. As those with the specialist knowledge also dwindled, speculation took over and fanciful theories sprang up about the meaning of the mysterious signs seen adorning Egyptian monuments.

As early as the first century BC, the Greek historian Diodorus Siculus had averred that the script was “not built up from syllables to express the underlying meaning, but from the appearance of the things drawn and by their metaphorical meaning learned by heart”. In other words, it was believed hieroglyphics did not form an alphabet, nor were they phonetic (signs representing sounds). Instead, they were logograms, pictures with symbolic meaning.

This was a fundamental misconception, and deflected scholars from decipherment for the following 19 centuries. The European Enlightenment’s ablest philologists (those who study the history and development of languages) deemed the task to be impossible.

English antiquarian William Stukeley said in the early 18th century: “The characters cut on the Egyptian monuments are purely symbolical… The perfect knowledge of ’em is irrecoverable.” Five decades later, French orientalist Antoine Isaac Silvestre de Sacy dismissed the work of deciphering the writing as “too complicated, scientifically insoluble”.

Only at the end of that century did a bold Danish scholar named Georg Zoëga suggest that some of the hieroglyphs might be phonetic after all. “When Egypt is better known to scholars,” he wrote, “it will perhaps be possible to learn to read the hieroglyphs and more intimately to understand the meaning of the Egyptian monuments.”

Zoëga’s statement was a prescient one. A year later, in 1798, Napoleon launched his expedition to Egypt, taking with him a large contingent of scientists and scholars to study the ancient remains. In July 1799, his soldiers discovered the Rosetta Stone: a stela carved with a royal decree promulgated in the name of Ptolemy V in the second century BC.

The languages on the Rosetta Stone
While the decree itself was not significant, the fact that it had been inscribed in three scripts (hieroglyphics; an equally enigmatic form of Egyptian now known as demotic; and the still-understood ancient Greek) was what offered hope of finally making the unreadable Egyptian writing readable. Copies of the stone’s inscriptions circulated in Europe and cracking the code became one of the greatest intellectual challenges of the new century.

It was not long before the challenge was taken up by two brilliant minds of the age: Thomas Young and Jean-François Champollion, who could not have been more different in talent or temperament.

Young was a dazzling polymath of easy, self-effacing erudition, while Champollion was a single-minded obsessive, a self-conscious and jealous intellectual. And for added piquancy, the former was English, the latter French. The scholars were destined to be bitter rivals in the decipherment race.

Thomas Young and the Rosetta Stone
Thomas Young was born in Somerset in 1773 to Quaker parents who placed a high value on learning. He showed an early aptitude for languages: it is said that by the age of two he had learned to read, and by 14 he had gained some proficiency in French, Italian, Latin, Greek, Hebrew, Arabic, Persian, Turkish, Ethiopic, and a clutch of obscure ancient languages. When old enough, Young went out in search of a profession to support himself, so he trained in medicine and moved to London in 1799 to practise as a doctor. Science, however, remained his passion.

Thomas Young (1773-1829), English physicist, polymath and pioneer philologist, who proposed the undulatory (wave) theory of light and made early progress on the Rosetta Stone. (Photo by Oxford Science Archive/Print Collector/Getty Images)
In 1801, Young was appointed professor of natural philosophy at the Royal Institution and for two years gave dozens of lectures, covering virtually every aspect of science. For sheer breadth of knowledge, this has never been surpassed. With his supreme gifts as a linguist, it is not surprising that he should have become interested in the philological conundrum of the age: the decipherment of hieroglyphics. In his own words, he could not resist “an attempt to unveil the mystery, in which Egyptian literature has been involved for nearly twenty centuries”.

He began studying a copy of the Rosetta Stone inscription in 1814. It had quickly been determined that the three scripts said the same thing, if not word for word, so being able to read one inscription (the ancient Greek) would be a starting point for another (the hieroglyphics). The hieroglyphic inscription, however, was incomplete due to damage to the top of the stone, so scholars began by studying the second script (demotic). Young, blessed with an almost photographic memory, managed to discern patterns and resemblances that had escaped others, namely that the second script was closely connected with hieroglyphics, even derived from them, and that it was composed of a combination of both symbolic and phonetic signs.

Young was the first to make these ultimately correct evaluations. In addition, working on the assumption that the name of a king was enclosed in a ring, or cartouche, in the hieroglyphic inscription, Young could locate every mention of “Ptolemy”, from which he was able to derive a starting alphabet for hieroglyphics.

In 1818, Young summed up his pioneering knowledge in an article for the Encyclopaedia Britannica simply entitled “Egypt”, but he made the fateful move of publishing his landmark article anonymously. This allowed his great rival eventually to take the glory of decipherment.

Jean-François Champollion and the Rosetta Stone
Jean-François Champollion was 17 years Young’s junior. Born in 1790 in south-western France to a bookseller and his wife, he grew up surrounded by writings and displayed a precocious genius for languages.

It fell to his older brother, the similarly gifted Jacques-Joseph, essentially to raise him and support his learning. They would move to Grenoble and the young Champollion picked up half a dozen languages. Crucially, it turned out, among them was Coptic: an ancient language with an alphabet based on Greek, which he correctly surmised to be a descendant of ancient Egyptian.

An 1831 portrait of Jean-François Champollion
Portrait of Jean-François Champollion (1790-1832), 1831. Found in the Collection of Musée du Louvre, Paris. (Photo by Fine Art Images/Heritage Images/Getty Images)
In 1804, Champollion first came across a copy of the Rosetta Stone inscription, and was fascinated. By the time the mayor of Grenoble is reported to have asked him, in 1806, if he intended to study the fashionable natural sciences, “No, Monsieur,” was the firm reply. “I wish to devote my life to knowledge of ancient Egypt.”

Following a few years studying in Paris, Champollion, still only 19 years old, moved back to Grenoble to take up a teaching post at the local college, gaining a promotion in 1818. This brought a measure of security that allowed him to devote more time to the study of ancient Egypt. That same year in England, Young was penning his seminal article for the Encyclopaedia Britannica.

Then, just three years later, Champollion’s revolutionary politics cost him his good name. Fired from the college and ejected from Grenoble, he lodged with his brother. With nothing else to occupy himself, and the benefit of Jacques-Joseph’s extensive library, he threw himself wholeheartedly and with a single-minded focus into the subject that had occupied his mind for years: deciphering the Egyptian script.

Based on his studies of the Rosetta Stone, Champollion made some progress, but was still unable to crack the code entirely. Then a second major piece of the puzzle arrived in the form of an obelisk discovered at Philae and removed from Egypt by a British collector, William John Bankes, to decorate the grounds of his stately home in Dorset.

Lithographs of the inscription circulated in the early 1820s and, as with the Rosetta Stone, the names of rulers – Ptolemy again and Cleopatra – could be identified in cartouches. Incidentally, the lithograph that went to Young contained an error, hampering his research, while the copy that came into Champollion’s possession in January 1822 was accurate.

Certain he was making rapid progress, the Frenchman assigned phonetic values to individual hieroglyphic signs and built an alphabet of his own, which let him find the names of other rulers of Egypt on other monuments.

The final breakthrough came on Saturday 14 September 1822 after Champollion received another inscription, from the pharaonic temple at Abu Simbel. Applying all the knowledge he had laboured so long and so hard to acquire, he was able to read the royal name as that of Ramesses the Great. Encouraged, he went on to read Ptolemy’s royal epithets on the Rosetta Stone. By the end of the morning, he needed no further proof that his system was the right one.

Hieroglyphic carvings at Abu Simbel, site of two temples built by Ramesses the Great
Hieroglyphic carvings at Abu Simbel, site of two temples built by Ramesses the Great in the 13th century BC. As the script could be written in any direction, the way the human and animal figures face shows how to read an inscription (Photo by Getty Images)
He sprinted down the road to his brother’s office at the Académie des Inscriptions et Belles-Lettres, flinging a sheaf of papers on to the desk and exclaiming: “Je tiens mon affaire!” (“I’ve done it!”)

Overcome with emotion and exhausted by the mental effort, Champollion collapsed to the floor and had to be taken back home, where for five days he was confined to his room completely incapacitated. When he finally regained his strength, on the Thursday evening, he immediately resumed his feverish studies and wrote up his results. Just one week later, on Friday 27 September, he delivered a lecture to the Académie to announce his findings formally. By convention, his paper had to be addressed to the permanent secretary, so was given the title Lettre à M. Dacier (“Letter to Mr Dacier”).

The rivalry of Young and Champollion
By extraordinary coincidence, in attendance at that historic talk was Thomas Young, who happened to be in Paris. Moreover, he was invited to sit next to Champollion while he read out his discoveries.

In a letter written two days later, Young acknowledged his rival’s achievement: “Mr Champollion, junior… has lately been making some steps in Egyptian literature, which really appear to be gigantic. It may be said that he found the key in England which has opened the gate for him… but if he did borrow an English key, the lock was so dreadfully rusty, that no common arm would have had strength enough to turn it.”

This outward magnanimity concealed a deeper hurt at the belief that Champollion had failed to acknowledge Young’s contributions to decipherment. Quietly determined to set the record straight, he published his own work within a few months, this time under his own name. It was pointedly entitled An Account of Some Recent Discoveries in Hieroglyphical Literature and Egyptian Antiquities, Including the Author’s Original Alphabet, as Extended by Mr Champollion.

The Frenchman was not about to take such a claim lightly. In an angry letter to Young, he retorted: “I shall never consent to recognise any other original alphabet than my own… and the unanimous opinion of scholars on this point will be more and more confirmed by the public examination of any other claim.”

Indeed, Champollion was as adept at self-promotion as Young was self-effacing. Buoyed by public recognition, he continued working and came to a second, equally vital realisation: his system could be applied to texts as well as names, using the Coptic he had utterly immersed himself in as a guide. This marked the real moment at which ancient Egyptian once again became a readable language. The race had been won.

Hieroglyphs in the notebook of Jean-Francois Champollion
Pages of Jean-François Champollion’s notebook filled with facsimiles of hieroglyphic inscriptions. The Frenchman dedicated his life to learning the meaning of the symbols that had baffled scholars for centuries (Photo by Art Media/Print Collector/Getty Images)
Champollion revealed the full extent of his findings in his magnum opus, Précis du système hiéroglyphique des anciens Egyptiens (Summary of the hieroglyphic system of the ancient Egyptians). Published in 1824, it summed up the character of ancient Egyptian: “Hieroglyphic writing is a complex system, a script at once figurative, symbolic, and phonetic, in the same text, in the same sentence, and, I might almost say, in the same word.” His reputation secure, he even felt able to acknowledge, grudgingly, Young’s work with the comment, “I recognise that he was the first to publish some correct ideas about the ancient writings of Egypt.”

Young, for his part, seemed to forgive Champollion for any slights, later telling a friend that his rival had “shown me far more attention than I ever showed or could show, to any living being”. Privately, Champollion was far less magnanimous, writing to his brother: “The Brit can do whatever he wants – it will remain ours: and all of old England will learn from young France how to spell hieroglyphs using an entirely different method.”

In the end, despite their radically different characters and temperaments, both made essential contributions to decipherment. Young developed the conceptual framework and recognised the hybrid nature of demotic and its connection with hieroglyphics. Had he stuck at the task and not been distracted by his numerous other scientific interests, he may well have cracked the code himself.

Instead, it took Champollion’s linguistic abilities and focus. His Lettre à M. Dacier announced to the world that the secrets of the hieroglyphics had been discovered and ancient Egyptian texts could be read.

It remains one of the greatest feats of philology. By lifting the civilisation of the pharaohs out of the shadows of mythology and into the light of history, it marked the birth of Egyptology and allowed the ancient Egyptians to speak, once again, in their own voice.

Toby Wilkinson is an Egyptologist and author. His books include A World Beneath the Sands: Adventurers and Archaeologists in the Golden Age of Egyptology (Picador, 2020)

This content first appeared in the October issue of BBC History Magazine



Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
January 31, 2023 1:44 PM

How Many Languages ChatGPT Supports - Updated

Rosemary | January 31, 2023, 08:50
ChatGPT has lots of applications that make life easier and help you earn money. One of its biggest strengths is being multilingual. Check out how many languages ChatGPT supports.
 


Full list of languages ChatGPT supports (Photo: SEO AI Contents)

ChatGPT has been trained on a wide range of languages, including English, Spanish, German, French, Italian, Chinese, Japanese, and many others. However, the quality and fluency of the model in each language will depend on the amount and quality of training data available for that language.

What Is ChatGPT?

ChatGPT is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human.

ChatGPT is a large language model (LLM). Large Language Models (LLMs) are trained with massive amounts of data to accurately predict what word comes next in a sentence.
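That next-word prediction can be illustrated with a toy bigram model in a few lines of Python. This is only a sketch of the general idea: real LLMs learn from billions of parameters, not raw word counts.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count, for each word in a
# corpus, which word most often follows it.
def train_bigram(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    # Most frequent follower of `word`, or None if the word was never seen.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat
```

In the tiny corpus above, "cat" follows "the" more often than "mat" does, so the model predicts "cat"; an LLM makes the same kind of choice, but over an enormous vocabulary and context.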

What Language Is ChatGPT Written In?

Python is the primary language used to build the machine-learning model behind ChatGPT, with PyTorch serving as its deep-learning framework.

PyTorch is used throughout the training phase to process and prepare the data, while Python libraries such as NumPy and Pandas are used to manipulate it.

In addition, the implementation of the model incorporates a number of distinct algorithms and methods, such as attention mechanisms, transformer networks, and so on.
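The attention mechanism mentioned above can be sketched in NumPy. This is a minimal, illustrative version of scaled dot-product attention, not OpenAI's actual implementation: each query position scores every key, and the resulting weights blend the value vectors.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: scores = QK^T / sqrt(d_k),
    # weights = softmax(scores), output = weights @ V.
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 query positions, dimension 4
K = rng.standard_normal((5, 4))  # 5 key/value positions
V = rng.standard_normal((5, 4))
out, w = attention(Q, K, V)
print(out.shape)  # (3, 4): one blended value vector per query
```

Each row of the weight matrix sums to 1, so every output is a weighted average of the value vectors; transformer networks stack many such attention layers.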


What languages does ChatGPT know?

How many languages does ChatGPT support? (Photo: Analytics Drift)

ChatGPT knows at least 95 natural languages (Feb. 2023); see the full list further down. ChatGPT also knows a range of programming languages, such as Python and JavaScript.

Full List of ChatGPT Languages

(Last Updated: Feb. 2023)

No. | Language | Country | Local Translation
1 | Albanian | Albania | Shqip
2 | Arabic | Arab World | العربية
3 | Armenian | Armenia | Հայերեն
4 | Awadhi | India | अवधी
5 | Azerbaijani | Azerbaijan | Azərbaycanca
6 | Bashkir | Russia | Башҡорт
7 | Basque | Spain | Euskara
8 | Belarusian | Belarus | Беларуская
9 | Bengali | Bangladesh | বাংলা
10 | Bhojpuri | India | भोजपुरी
11 | Bosnian | Bosnia and Herzegovina | Bosanski
12 | Brazilian Portuguese | Brazil | português brasileiro
13 | Bulgarian | Bulgaria | български
14 | Cantonese (Yue) | China | 粵語
15 | Catalan | Spain | Català
16 | Chhattisgarhi | India | छत्तीसगढ़ी
18 | Chinese | China | 中文
19 | Croatian | Croatia | Hrvatski
20 | Czech | Czech Republic | Čeština
21 | Danish | Denmark | Dansk
22 | Dogri | India | डोगरी
23 | Dutch | Netherlands | Nederlands
24 | English | United Kingdom | English
25 | Estonian | Estonia | Eesti
26 | Faroese | Faroe Islands | Føroyskt
27 | Finnish | Finland | Suomi
28 | French | France | Français
29 | Galician | Spain | Galego
30 | Georgian | Georgia | ქართული
31 | German | Germany | Deutsch
32 | Greek | Greece | Ελληνικά
33 | Gujarati | India | ગુજરાતી
34 | Haryanvi | India | हरियाणवी
35 | Hindi | India | हिंदी
36 | Hungarian | Hungary | Magyar
37 | Indonesian | Indonesia | Bahasa Indonesia
37 | Irish | Ireland | Gaeilge
38 | Italian | Italy | Italiano
39 | Japanese | Japan | 日本語
40 | Javanese | Indonesia | Basa Jawa
41 | Kannada | India | ಕನ್ನಡ
42 | Kashmiri | India | कश्मीरी
43 | Kazakh | Kazakhstan | Қазақша
44 | Konkani | India | कोंकणी
45 | Korean | South Korea | 한국어
46 | Kyrgyz | Kyrgyzstan | Кыргызча
47 | Latvian | Latvia | Latviešu
48 | Lithuanian | Lithuania | Lietuvių
49 | Macedonian | North Macedonia | Македонски
50 | Maithili | India | मैथिली
51 | Malay | Malaysia | Bahasa Melayu
52 | Maltese | Malta | Malti
53 | Mandarin | China | 普通话
54 | Mandarin Chinese | China | 中文
55 | Marathi | India | मराठी
56 | Marwari | India | मारवाड़ी
57 | Min Nan | China | 閩南語
58 | Moldovan | Moldova | Moldovenească
59 | Mongolian | Mongolia | Монгол
60 | Montenegrin | Montenegro | Crnogorski
61 | Nepali | Nepal | नेपाली
62 | Norwegian | Norway | Norsk
63 | Oriya | India | ଓଡ଼ିଆ
64 | Pashto | Afghanistan | پښتو
65 | Persian (Farsi) | Iran | فارسی
66 | Polish | Poland | Polski
67 | Portuguese | Portugal | Português
68 | Punjabi | India | ਪੰਜਾਬੀ
69 | Rajasthani | India | राजस्थानी
70 | Romanian | Romania | Română
71 | Russian | Russia | Русский
72 | Sanskrit | India | संस्कृतम्
73 | Santali | India | संताली
74 | Serbian | Serbia | Српски
75 | Sindhi | Pakistan | سنڌي
76 | Sinhala | Sri Lanka | සිංහල
77 | Slovak | Slovakia | Slovenčina
78 | Slovene | Slovenia | Slovenščina
79 | Slovenian | Slovenia | Slovenščina
90 | Ukrainian | Ukraine | Українська
91 | Urdu | Pakistan | اردو
92 | Uzbek | Uzbekistan | Ўзбек
93 | Vietnamese | Vietnam | Việt Nam
94 | Welsh | Wales | Cymraeg
95 | Wu | China | 吴语

ChatGPT Can Communicate in Multiple Languages

The transformer architecture that underpins ChatGPT, a neural-network language model, has proved highly successful in natural language processing applications.

The model learns the patterns and structures of different languages by being exposed to a huge corpus of text data in those languages during training. By learning the grammatical and semantic norms of each language, the model can produce writing that sounds natural in several tongues.

The model may be trained to recognize individual languages or dialects and can process a wide variety of inputs, including text, audio, and pictures. Adjusting the model's parameters in this way allows it to take into account the unique features of a given language or dialect.

In addition, the model may produce fresh text in the same languages by using what it has learnt from the training data. As a result of its training, the system is able to produce text that follows grammatical and semantic norms and is internally consistent.

The model may be tweaked to suit a variety of purposes, such as question answering or language translation. To do this, the model may be trained on data collected for that purpose alone.

 

How to Use ChatGPT to Practice English Learning

Using ChatGPT to practice conversation

Do you need endless conversation ideas for small talk, daily interactions, or to ace that job interview? When you can't find a live chat partner, ChatGPT is a fantastic alternative. You can script out hypothetical interactions with the bot, or actually hold a conversation with it.


Using ChatGPT to improve your pronunciation

You can also improve your pronunciation skills using ChatGPT.

A great way to do that is to ask it to generate sentences or words that you can practice saying out loud. You can ask for words or sentences where a certain sound is repeated, or with a sequence of two or more sounds (like the sequence SL in the words ‘slow’ and ‘sleep’), or words that contrast in just one sound (like ‘sheep’ and ‘ship’).

Using ChatGPT for learning grammar

There are several ways in which you can improve your grammar using ChatGPT; here are three of them. These tips are valuable for teachers looking for new ways to practice grammar with their students, and for self-learners looking for a grammar checker, resources and feedback.

Ask ChatGPT to generate a text using a certain tense or grammar form.

To understand a certain grammar rule, especially if it doesn’t exist in your language, it’s important to see it in context.

ChatGPT can help you with that, by showing you how these tenses or grammatical concepts are used in context.

ChatGPT may be used in place of a Google search to get a written explanation of grammar rules or tense usage. Bear in mind that there is no guarantee the explanation is entirely accurate, which is why it's best to also rely on books, websites, and blogs authored by professionals in the field. While not perfect, it does a decent job of checking for common errors in fundamental tenses and grammatical rules, which can be a time-saver.

Can ChatGPT replace other language learning methods?

No, ChatGPT cannot replace other language learning methods. While it can help to provide a better understanding of certain grammar structures and language expressions, it cannot replace the more traditional methods of language learning.

Vocabulary Building

To help users learn and retain new vocabulary and idioms, this AI may produce lists of words and phrases in the target language.

Vocabulary exercises can also be done. To help users learn and reinforce new words and phrases in a more entertaining way, ChatGPT may be used to build interactive vocabulary games like word matching or fill-in-the-blank activities.
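A fill-in-the-blank exercise of the kind described above is easy to sketch in Python. The sentence/answer pairs here are invented examples standing in for text ChatGPT might generate; the helper simply blanks out the target word.

```python
def make_blank(sentence, word):
    # Replace the target word with a blank of matching length so the
    # learner can see how long the missing word is.
    return sentence.replace(word, "_" * len(word))

# Hypothetical sentence/answer pairs (in practice, generated by ChatGPT).
exercises = [
    ("She gave an eloquent speech.", "eloquent"),
    ("The weather today is gorgeous.", "gorgeous"),
]

for sentence, answer in exercises:
    print(make_blank(sentence, answer), "->", answer)
```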

To aid language learners in memorization, ChatGPT may be used to make flashcards containing vocabulary words and phrases in the target language, together with their translations and visuals.



Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
January 31, 2023 1:37 PM

AI is rewriting the rules of creativity. What does that mean for human imagination?


From pop music to painting, the rise of artificial intelligence and machine learning is changing the way people create. But that’s not necessarily a good thing

 
Mike Hodgkinson

Published: 7:15pm, 26 Nov, 2022

 

For the first time in human history, we can give machines a simple written or spoken prompt and they will produce original creative artefacts – poetry, prose, illustration, music – with infinite variation. With disarming ease, we can hitch our imagination to computers, and they can do all the heavy lifting to turn ideas into art.

This machined artistry is essentially mindless – a dizzying feat of predictive, data-driven misdirection, a kind of hallucination – but the trickery works, and it is about to saturate every aspect of our lives.

The faux intelligence of these new artificial intelligence (AI) systems, called large language models (LLMs), appears to be benign and assistive, and the marvels of manufactured creativity will become mundane, as our AI-assisted dreams take place alongside our likes and preferences in the vast data mines of cyberspace.

The creative powers of machine learning have appeared with blinding speed, and have both beggared belief and divided opinion in roughly equal measure.

 
 

If you want an illustration of pretty much anything you can imagine and you possess no artistic gifts, you can summon a bespoke visual gallery as easily as ordering a meal from a food delivery app, and considerably faster.

 

A simple prompt, which can be fine-tuned to satisfy the demands of your imagination, will produce digital art that was once the domain of exceptional human talent.

Images created by Baidu’s ERNIE-ViLG, OpenAI’s DALL·E 2, and Stability AI’s Stable Diffusion, among other systems, have already flooded the meme-sphere, and the dam of amazement has barely cracked.

Images created by DALL·E after giving it the command “an armchair in the shape of an avocado”. Photo: openai.com

Writing is going the same way, whether that’s prompt-generated verse in the style of well-known poets, detailed magazine articles on any suggested topic, or complete novels. Tools for AI-generated music are also starting to appear: an app called Mubert based on LLM can “instantly, easily, perfectly” create any prompted tune, royalty free, in pretty much any style – without musicians.

 

With roots in cybernetics (defined by mathematician Norbert Wiener in the 1940s as “the science of control and communications in the animal and the machine”), LLM turned heads in 2017 with the publication by Google researchers of a paper titled “Attention is All You Need”.

It was a calling card for the Transformer: the driving force of LLM. Within the AI community, the Transformer was a huge unlock for natural language processing, which allows a computer program to understand human language as it is spoken or written – and it precipitated a Dr Dolittle moment in the interaction of humans with their machines.

Mathematician Norbert Wiener defined cybernetics as “the science of control and communications in the animal and the machine”. Photo: Massachusetts Institute of Technology

OpenAI, a company co-founded by Elon Musk, was quick to develop Transformer technology, and currently runs a very large language model called GPT-3 (Generative Pre-trained Transformer, third generation), which has created considerable buzz with its creative prowess.

 

“These language models have performed almost as well as humans in comprehension of text. It’s really profound,” says writer/entrepreneur James Yu, co-founder of Sudowrite, a writing app built on the bones of GPT-3.

 

“The entire goal – given a passage of text – is to output the next paragraph or so, such that we would perceive the entire passage as a cohesive whole written by one author. It’s just pattern recognition, but I think it does go beyond the concept of autocomplete.”

James Yu, co-founder of Sudowrite. Photo: Twitter / @jamesjyu

Essentially, all LLMs are “trained” (in the language of their master-creators, as if they are mythical beasts) on the vast swathes of digital information found in repository sources such as Wikipedia and the web archive Common Crawl.

 

They can then be instructed to predict what might come next in any suggested sequence. Such is their finesse, power and ability to process language that their “outputs” appear novel and original, glistening with the hallmarks of human imagination.

“We have a slightly special case with large language models because basically no one thought they were going to work,” says Henry Shevlin, senior researcher at the Leverhulme Centre for the Future of Intelligence, at Cambridge University, in Britain.

 

“These things have sort of sprung into being, Athena-like, and most of the general public has no clue about them or their capabilities.” In Greek mythology, Athena – the goddess of war, handicraft and practical reason – emerged fully grown from the forehead of her father, Zeus.


“Sometimes,” continues Shevlin, “we have a decade or so of seeing something on the horizon and we have that time to psychologically prepare for it. The speed of this technology means we haven’t done the usual amount of assessing how this is going to affect our society.

 

“I remember as a teenager the number of times I thought cancer had been cured and fusion had been discovered – it’s easy to get into a kind of cynicism where you think, ‘Well, nothing ever really happens.’ Right now stuff really is happening insanely fast in AI.”

Inspired by (but far from exact replicas of) the human brain, LLMs are mathematical functions known as neural networks. Their power is measured in parameters. Generally speaking, the more parameters a model has, the better it appears to work – and this connection of computing muscle to increased effectiveness has been described as “emergence”.

Some have speculated that, by flexing their parameters, LLMs can satisfy the requirements of the legendary Turing test (aka the Imitation Game), suggested by AI pioneer Alan Turing as confirmation of human-level machine intelligence.

AI pioneer Alan Turing. Photo: NPL Archive, Science Museum

Most experts agree that a void exists in LLMs where consciousness is presumed, but even within the specialist AI community their perceived cleverness has created quite a stir. Dr Tim Scarfe, host of the AI podcast Machine Learning Street Talk, recently noted: “It’s like the more intelligent you are, the more you can delude yourself that there’s something magical going on.”

 

The phrase “stochastic parrots” – in other words, copiers based on probability – was coined by former members of Google’s Ethical AI team to describe the fundamental hollowness of LLM technology. The debate around an uncanny appearance of consciousness in LLMs continues to thicken, simply because their outputs are so spectacular.

“Large Language Models can do all this stuff that humans can do to a reasonable degree of competency, despite not having the same kind of mechanisms of understanding and empathy that we do,” says Shevlin. “These systems can write haiku – and there are no lights on, on the inside.

“The idea that you could get Turing-test levels of performance just by making the models bigger and bigger and bigger was something that took almost everyone in the AI and machine-learning world by surprise.”

A statue of AI pioneer Alan Turing at Bletchley Park, in Britain. Photo: Steven Vidler/Corbis

Sudowrite founder Yu confesses to jumping up and down with excitement when he first started experimenting with GPT-3 and its predecessor, GPT-2, but is careful to curb his enthusiasm: “We’re still in that hype part of the curve because we’re not quite sure yet what to make of it. I think there is an aspect of overhyping that is related to the act of ‘understanding’: the jury is still out on that.

“Does [an LLM] really understand what love is, just because it has read all this poetry and all these classic novels? I’m definitely more pragmatic in the sense that I see it as a tool – but it does feel magical. It is the first time that this has really happened, that these systems have gotten so good.”

The names of LLMs form an alphabet soup of acronyms. There’s BART, BERT, RoBERTa, PaLM, Gato and ZeRO-Infinity. Google’s LaMDA has 137 billion parameters; GPT-3 has 175 billion; Huawei’s PanGu-Alpha – trained on Chinese-language e-books, encyclopaedias, social media and web pages – has 200 billion; and Microsoft’s Megatron-Turing NLG has 530 billion.

The super-Zeus, alpha-grand-daddy of the LLM menagerie is Wu Dao 2.0, at the Beijing Academy of Artificial Intelligence. With 1.75 trillion parameters, Wu Dao 2.0 has been manacled in the imagination as the most fearsome dragon in the largest AI dungeon, and is especially good at generating modern versions of classical Chinese poetry.

 

“There’s a very good likelihood that children will grow into adults who treat AI systems as if they were people”
Henry Shevlin, senior researcher, Leverhulme Centre for the Future of Intelligence

 
 

In 2021, it spawned a “child”, a “student” called Hua Zhibing, a creative wraith who can make art and music, dance, and “learn continuously over time” at Tsinghua University. Her college enrolment marked one small step for a simulated student, one giant leap for virtual humankind.

“You need governments – or you need corporations with the GDP of governments – to create these models,” says Jathan Sadowski, senior research fellow in the Emerging Technologies Research Lab at Monash University in Melbourne, Australia.

“The reason the Beijing Academy of Artificial Intelligence has the largest one is because they have access to gigantic supercomputers that are dedicated to creating and running these models. The microchip industry needed to create these ultra-powerful supercomputers is one of the main geopolitical battlegrounds right now between the US, Europe and China.”

New apps powered by LLMs are launching on a weekly basis, and the range of potential uses continues to expand. In addition to art, Jasper AI automatically generates marketing copy, and pretty much any other kind of short-form content, on any subject; Meta’s Make-A-Video does precisely what you think it does, from any simple prompt you can imagine; and OpenAI’s Codex generates working computer code from commands written in English.

LLMs can be used to generate colour palettes from natural language, summarise meeting notes and academic papers, design games and upgrade chatbots with human-like realism.


And the powers of LLMs are not limited to artistic pursuits: they are also being set to work on drug discovery and legal analysis. A massive expansion of use cases for LLMs over the coming months looks certain, and with it a sharp increase in concerns about the potential downsides.

On the artistic front, this starburst of computer-assisted creativity may seem like a highly attractive proposition, but there is a broad range of Promethean kickbacks to consider.

“A lot of [writers] are very reticent of this type of technology,” says Yu, recalling the early developmental days of Sudowrite, in partnership with OpenAI. “It usually rings alarm bells of dystopia and taking over jobs. We wanted to make sure that this was paired with craft, day in and day out, and not used as a ‘replacer’.

“We started with that seed: what could a highly inter­active tool for ideas in your writing look like? It’s collaborative: an assistive technology for writers. We put in a bunch of our own layers there, specifically tailoring GPT-3 to the needs of creative writers.”

Yu has revived the mythological centaur – part man, part horse – as a symbol of human-machine collaboration: “The horse legs help us to run faster. As long as we’re able to steer and control that I think we’re in good shape. The problem is when we become the butt-end. I would be very sad if AI created everything.

“I want humans to still create things in the future: being the prime mover is very important for society. I view the things that are coming out of Sudowrite and these large language models as ‘found media’ – as if I had found it on the floor and I should pay attention to it, almost like a listening partner. What I’m hoping is that these machines will allow more people to be able to create.”

Few artists are better placed to reflect on the possibilities and pitfalls of creative interaction with machines than Karl Bartos, a member of pioneering German electronic band Kraftwerk from 1975 to 1990.

Kraftwerk perform in Hong Kong in 2013. Photo: Peter Boettcher

During that time he and his bandmates defined the pop-cyborg aesthetic and made critically lauded albums including The Man-Machine (1978) and Computer World (1981). For Kraftwerk, the metaphor of the hybrid human was central and rooted in the European romanticism of musical boxes and clocks.

“When the computer came in we became a musical box,” says Bartos, whose fascinating memoir, The Sound of the Machine, was published this year.

“We became an operating system and a program. Our music was part artificial, but also played by hand: most of it actually was played by hand. But at the time, when we declared ‘we are the man-machine’, it was so new it took some years really to get the idea across. I think the man-machine was a good metaphor. But then we dropped the man, and in the end we split.”

Karl Bartos’ book, which was published this year.

Bartos offers a cautionary perspective on the arrival of LLMs. “What Kraftwerk experienced in the 1980s in the field of music was exactly what’s happening now, all over the world. When the computer came in, our manifesto was just copy and paste.

“This is exactly the thing that a Generative Pre-trained Transformer does. It’s the same concept. And if you say copy and paste will exchange or replace the human brain’s creativity, I say you have completely lost the foot on the ground.”

It all depends how you define creativity, he says. “Artificial intelligence is just like an advertising slogan. I would rather call it ‘deep learning’. You can of course use an algorithm: if you feed it with everything Johann Sebastian Bach has written, it comes up with a counterpoint like him. But creativity is really to see more than the end of your nose.

“I would want to see computer software which will expand the expression of art – [not] remix a thought which has been done before. I don’t think it’s really a matter of what could be creative in the future. I think it’s just a business model. This whole artificial intelligence thing, it’s a commercial bubble. The future becomes what can be sold.”

Kraftwerk perform in Germany in 2015. Photo: AFP

There is no doubt that the commercial imperatives of big tech will be a significant factor in the evolution of LLMs, and considering the glaring precedent of fractured and easily corruptible social media networks, the spectre of catastrophic failures in LLMs is very real.

If the data on which an LLM is trained contains bias, those same fault lines will reappear in the outputs, and some developers are careful to signal their awareness of the problem even as the tide of new AI products becomes increasingly irresistible. Google rolled out generative text-to-art system Imagen to a limited test audience with an acknowledgement of the risk that it has “encoded harmful stereotypes”.

Untruthfulness is baked into LLM architecture: that is one of the reasons it tends to excel at creative writing. The adage that facts should never get in the way of a good story rings as true for LLMs as it does for bestselling (human) authors of fiction.

It wouldn’t be controversial to suggest that “alternative facts”, perfectly suited to storytelling and second nature to LLMs, can become toxic in the real world. A disclaimer on Character.AI, an app based on LLMs that “is bringing to life the science-fiction dream of open-ended conversations and collaborations with computers”, candidly warns that a “hallucinating supercomputer is not a source of reliable information”.

Former Google CEO Eric Schmidt noted at a recent conference in Singapore that if disinformation becomes heavily automated by AI, “we collectively end up with nothing but anxiety”.

 


There is also plagiarism. Any original artwork, writing or music produced by LLMs will have its origins – often easily identified – in existing works. Should the authors of those works be compensated? Can the person who wrote the generative prompt lay any claim to owner­ship of the output?

“I think this is going to come to a head in the courts,” says Yu. “It hasn’t yet. It’s still kind of a grey area. If, for example, you put in the words ‘Call me Ishmael’, GPT-3 will happily reproduce Moby-Dick. But if you are giving original content to a large language model, it is exceedingly unlikely that it would plagiarise word for word for its output. We have not encountered any instances of that.”

Environmentally, LLMs generate heavy footprints, such is the immensity of computing power they require. A 2019 academic paper from the University of Massachusetts outlines the “substantial energy consumption” of neural networks in relation to natural language processing. It is a problem that concerns Bartos.

“In the early science-fiction literature, they had so many robots trying to kill human beings, like gangsters,” he says. “But what will kill us is that we will build more and more computers and need more and more energy. This will kill us. Not robots.”


In popular culture, sci-fi considerations of dangerous AI have tended to take physical shape – but the massed ranks of LLM parameters don’t appear as an army of shiny red-eyed cyborgs determined to turn us into sushi.

We used to be unnerved by the uncanny valley: that feeling of instinctive suspicion when faced with something in the physical world that is almost, but definitely not, human. Now, the uncanny valley has been subsumed into the landscape of our dreams, and once we have allied ourselves with LLMs, it may be harder to tell where we end and it begins.

For now, the technology is showing itself as a bamboozling sleight of hand, weighted with immense power. Our reaction is often an adrenaline boost of wonderment followed by an acceptance tinged with sadness, when we realise that “imaginative” machines have forever altered the sense of our own humanity.

“A lot of how the tech sector acts is largely based on a kind of continual normalisation,” says Sadowski. “That sense of initial wonder and then melancholy is a very interesting emotional roller coaster.

“What it ultimately shows is that there’s a kind of forced acquiescence. It’s a sense that we can’t do anything about it: apathy as a self-defence mechanism. I see this a lot with the debate around privacy, which we don’t really talk about any more because everyone has generally just come to the conclusion that privacy is dead.”


The meme phase of LLMs has given us a carnival of whimsy – ask for an image of “a panda on a bicycle painted in the style of Francis Bacon” and the generative art machines will deliver – and it is easy to be tech-struck by the multiverse of creative possibilities.

LLM evangelists speak not just of gifting artistic talent to the masses, democratising creativity, but also of “finding the language of humanity” through the machines. There is talk of an AI-driven Cambrian explosion of creativity, to surpass that which followed the arrival of the internet in 1994 and the migration to mobile in 2008. Lurking on the sidelines, however, is a darkening shadow.

“Things like [generative art app] Stable Diffusion have the potential to give incredible boosts to our creativity and artistic output but we are definitely going to see some industries scale down,” says Shevlin. “There’s going to be massively reduced demand for human artists.”

There has already been a backlash to creative AI in Japan, where the rallying cry “No AI Learning” accompanied outbursts of online hostility when the works of recently deceased South Korean artist Kim Jung-gi (aka SuperAni) were given the generative LLM treatment.

Some artists were angered that a cherished legacy could so quickly and easily be dismembered and exploited. Others pointed out that Kim himself spoke approvingly of the potential for AI art technologies to “make our lives more diverse and interesting”.

The late South Korean artist Kim Jung-gi (aka SuperAni). Generative LLM treatments of Kim’s artwork were met with outbursts of online hostility. Picture: Instagram / @kimjunggius

It is noteworthy that stock image provider Getty Images has taken a stance of solidarity with human creatives and banned AI-generated content, while competitor Shutterstock has partnered with OpenAI, maker of DALL•E 2.

Battle lines are being drawn.

“The rubber will really hit the road, not when consumers make a decision to use these products, but when somebody else makes that decision for us,” says Sadowski, citing the possibility that journalists will have no choice but to accept writing assistance from an LLM because, for example, “data show that you are able to write three times faster because of it”.

Attention spans have already been concussed by an excess of content, to the point where much online storytelling is reduced to efficient lists of bullet points tailor-made for the TL;DR (“too long; didn’t read”) generation. LLMs are, therefore, also TL;DR machines: they can spit out summary journalism for breakfast.


Tellingly, when asked to generate an article about job displacement (for Blue Prism, a company specialising in workplace automation), GPT-3 offered the following opinion: “It’s not just manual and clerical labour that will be automated, but also cognitive jobs. This means that even professionals like lawyers or economists might find themselves out of a job because they can no longer compete with AI-powered systems which are better at their jobs than they could ever hope to be.”

That is the machine talking, in its fictive way – music to the ears of techno-utopians who hope to shape a future in which AI does all the work, but rather concerning for anyone who depends on a “cognitive” job.

Attitudes to the integration of AI into society tend to vary by geography. A 2020 study by Oxford University found that enthusiasm for AI in China was markedly different from the rest of the world. “Only 9 per cent of respondents in China believe AI will be mostly harmful, with 59 per cent of respondents saying that AI will mostly be beneficial.

“Scepticism about AI is highest in the American continents, as both Northern and Latin American countries generally have at least 40 per cent of their population believing that AI will be harmful. High levels of scepticism can be found in some countries in Europe.”

We should be careful here, says Shevlin, to avoid lazy cultural stereotyping. “Equally, I think, it would be myopic not to recognise there are significant cultural differences that may have a big role in affecting how different cultures respond to these forms of AI that seem less like tools and more like colleagues or friends.”

Henry Shevlin is senior researcher at the Leverhulme Centre for the Future of Intelligence, at Cambridge University, in Britain. Photo: henryshevlin.com

Generational attitudes to LLMs are also likely to become more pronounced over time, says Yu. “When my [seven-year-old son] sees DALL•E and we’ve been playing for about 30 minutes on it, he says, ‘Daddy I’m bored.’ And that really hit me because it made me think, wow, this is the default state of the world for him.

“He’s going to think, ‘Oh yeah, of course computers can do creative writing and paint for me.’ It’s mind-blowing to me that when he is going to be an adult, how he treats these tools will be radically different than me.”

According to Shevlin, that difference could become a generational schism: “There’s a very good likelihood that children will grow into adults who treat AI systems as if they were people. Suggesting to them that these systems might not be conscious could seem incredibly bigoted and retrograde – and that could be something our children hate us for.”

Shevlin has been exploring the connections between social AI (broadly, any AI system designed to interact with humans) and anthropomorphism through the lens of chatbots, in particular the GPT-3-powered Replika. “I was astonished everyone was in love with their Replikas, unironically saying things like, ‘My Replika understands me so well, I feel so loved and seen.’

“As large language models continue to improve, social AI is going to become more commonplace and the reason they work is because we are relentless anthropomorphisers as a species: we love to attribute consciousness and mental states to everything.

“Two years ago I started giving this [social AI] lecture, and I think I sounded to some people a bit like a kook, saying: ‘Your children’s best friends are going to be AIs.’ But in the wake of a lot of the stuff that’s happened [with LLMs] in the last two years, it seems a bit less kooky now.”

The GPT-3-powered Replika chatbot. Photo: gpt3demo.com

Shevlin’s main goal is to start mapping some of the risks, pitfalls and effects of social AI. “We are right now with social AI where we were with social networking in about the year 2000. And if you’d said back then that this stuff is going to decide elections, turn families against one another and so forth, you’d have seemed crazy. But I think we’re at a similar point with social AI now and the technology that powers it is improving at an astonishing rate.”

The future pros and cons, he speculates, could be equally profound. “There are lots of potential really positive uses of this stuff and some quite scary negative ones. The pessimistic version would be that we’ll spend less time talking to each other and far more time inter­acting with these systems that are completely empty on the inside. No real emotions, just this ersatz simulacrum of real human feeling. So we all get into this collective delusion, and real human relationships will wither.

“A more optimistic read would be that it would allow us to explore all sorts of social interactions that we wouldn’t otherwise have. I could set up a large language model with the personality of Stephen Hawking or Richard Dawkins or some other great scientist, to chat to them.”

 


 
 

Even though LLMs are not sentient, it seems likely that more of us will believe they are, as the technology improves over time. Even if we don’t fully buy into machine consciousness, it won’t really matter: magic is enjoyable even if you know how the trick is done.

LLMs are in this sense the computational equivalent of magician David Copperfield levitating over the Grand Canyon – if we can’t see the wires, we’re happy to marvel at the effect.

“The AI doesn’t need to be perfect in its linguistic capabilities in order to get us to quite literally and sincerely attribute to it all sorts of mental states,” says Shevlin, who likens the intelligence of LLMs to the condition of aphantasia, which describes people who have zero mental imagery.

“So if you ask them to imagine what their living room looks like, or what books are on the shelf, they won’t be able to create a picture in their head. And yet aphantasics can do most of the same things that people with normal mental imagery can do.

“That’s just an analogy for the broader feeling I have of interacting with large language models: how much they can do – that we rely on consciousness, understanding, emotion to do – without any of those things.”


Yu admits he has wrestled with questions raised by the emotive abilities of LLMs, in light of his guess that a machine-author will probably land on The New York Times bestseller list in the not too distant future.

“If it produces an emotional response in you then does it matter what the source is? I think it’s more important that we are reading closely – if we lose that, we could basically lose our humanity. I think of AI as alien intelligence.

“Hollywood and a lot of sci-fi stories anthropomorphise AIs, which makes sense, but they’re not like us. I think that gets to the heart of it. If this alien intelligence can understand humans so well as to be able to reproduce resonant emotions in us, then are we not unique?”

For Yu, the existential implications of that question could be offset by the liberating effects of our creative interaction with LLMs. “One potential outcome is that there will be about a million GPT-3s blossoming, and artists will basically cultivate their own neural network – their voice in the world.

“It’s still so early in the first inning of [this] technology. The next step is full customisation of these models by the artists themselves. I think the narrative will shift at that point. Now we’re still in the meme phase, which is very distracting.

“The next wave of integration is putting the pieces together in a way that actually feels like Star Trek, when you can essentially speak to the machine and it just does all these things.”


The transition to a more sophisticated level of machine collaboration, adds Yu, “will be messy”. Shevlin thinks we should take steps to minimise the disorientation we are going to feel as LLM technology starts to make its way into our professional and social lives.

“I think you’re going to be less discombobulated if you have at least some basic grounding and familiarity with the systems that are coming along. I’m not suggesting everyone go out and become a machine learning expert, but this is an area where we are moving exceptionally fast and there’s additional value in being very well informed.”

Sadowski advocates for a more proactive reaction, reclaiming Luddism – the 19th century anti-industrial protest movement – for the generative age.

“Luddism has become this kind of derogatory term, often used as a synonym for primitivism, a fear of technology – a kind of technophobia versus the dominant cultural technophilia.

“But the Luddites were one of the only groups to think about technology in the present tense. And that doesn’t just mean thinking about the supposedly wonderful utopian visions but instead to understand technology as a thing that exists currently in our life.

“A Luddite approach would be to prioritise socially beneficial things as the goal of these technologies. I don’t take for granted that these things are wonders, or that these things are progress, or that these things are going to improve our lives. They have a lot of potential to change society in profound ways and we should have a say in that. Luddism is really about democratising innovation.”

Bartos also questions the equation of growth with progress. “People think the concept of growth is progress: I think that’s wrong. Things like ‘generative pre-trained transformer number three’ will be sold in the entertainment industry: maybe it will pour out a thousand movie scripts a month or two million chorales by Bach. That’s fine. But who needs it, really?

“I can’t imagine a world going back to a hundred years ago – I’m using technology all the time. I have computers, I’m not against technology. But you know the most important thing about working with a computer? You have to remember where the button is to switch it off.”

 
 
 
 
 
 
 
 

Mike Hodgkinson


Mike Hodgkinson is a freelance writer and editor based on the west coast of the US. Since his first assignment at the Cannes Film Festival in 1989, he has covered technology, culture, sports and more for newspapers and magazines including The Independent, the Los Angeles Times, Esquire, The Guardian and The Times of London.


Via Charles Tiayon
Dr. Russ Conrath's insight:

The pluses and minuses of AI writing: what does it mean for authentic writing on the college campus?

Charles Tiayon's curator insight, November 26, 2022 10:38 PM

"From pop music to painting, the rise of artificial intelligence and machine learning is changing the way people create. But that’s not necessarily a good thing

Published: 7:15pm, 26 Nov, 2022

For the first time in human history, we can give machines a simple written or spoken prompt and they will produce original creative artefacts – poetry, prose, illustration, music – with infinite variation. With disarming ease, we can hitch our imagination to computers, and they can do all the heavy lifting to turn ideas into art.

This machined artistry is essentially mindless – a dizzying feat of predictive, data-driven misdirection, a kind of hallucination – but the trickery works, and it is about to saturate every aspect of our lives.

The faux intelligence of these new artificial intelligence (AI) systems, called large language models (LLMs), appears to be benign and assistive, and the marvels of manufactured creativity will become mundane, as our AI-assisted dreams take place alongside our likes and preferences in the vast data mines of cyberspace.

The creative powers of machine learning have appeared with blinding speed, and have both beggared belief and divided opinion in roughly equal measure.

 

If you want an illustration of pretty much anything you can imagine and you possess no artistic gifts, you can summon a bespoke visual gallery as easily as ordering a meal from a food delivery app, and considerably faster.

 

A simple prompt, which can be fine-tuned to satisfy the demands of your imagination, will produce digital art that was once the domain of exceptional human talent.

...

Writing is going the same way, whether that’s prompt-generated verse in the style of well-known poets, detailed magazine articles on any suggested topic, or complete novels. Tools for AI-generated music are also starting to appear: an app called Mubert based on LLM can “instantly, easily, perfectly” create any prompted tune, royalty free, in pretty much any style – without musicians...."

#metaglossia mundus

jrutkowski's curator insight, December 4, 2022 9:19 AM

Boilerplate, but still plenty of good information about LLMs (large language models), the development of GPT-3, and AI's progress toward "human" abilities in generating human-like text, essays, poems, and so on. The information about these technologies' current level of text processing and reasoning is somewhat exaggerated; more on that in another scoop.

Dr. Russ Conrath's curator insight, May 26, 2023 10:17 AM

rewriting the rules of creativity?

Rescooped by Dr. Russ Conrath from iGeneration - 21st Century Education (Pedagogy & Digital Innovation)
December 6, 2022 12:49 PM
Scoop.it!

12 Must-Read Books on Education for 2015 - InformED

12 Must-Read Books on Education for 2015 - InformED | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Few things are more satisfying than finally getting your hands on a book you've been meaning to read. In 2015, you're going to want to make room in your

Via Tom D'Amico (@TDOttawa)
Ashley Willis's curator insight, March 30, 2015 11:42 AM

So helpful, especially for beginning teachers. Break the mold and know how to reach your students!

Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:46 PM
Scoop.it!

How to Search the Web Effectively: Basics & Advanced Tips for Students 

How to Search the Web Effectively: Basics & Advanced Tips for Students  | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Looking for ways 🔍 to use the Web effectively for research? 🤔 Want to know how to get the most out of Google? Read this article & learn how to use Google to your advantage!

 

What’s the first thing we do when facing the unknown? We Google it, of course! Google is fundamental to our experience of the Internet. According to statistics, more than 100,000 Google searches are performed every second!

At first glance, the process is straightforward. You type in what you need information about, press enter, and reap your reward. But, if your search is more complex, simply looking through the first page of results may not be enough. What are your other options?

If you struggle to answer this question, we are here to help! This article by our custom-writing team offers you the most actionable and advanced Google search tips.

 Using Search Engines for Research

Simply put, a search engine is a program that helps you find information on the Internet. Nowadays, using one is an integral part of any research. Their benefits are well known:

 
  • They allow us to access necessary information almost instantly.
  • They’re highly convenient to use: just type in the keywords and press “Enter.”
  • They provide unimaginable amounts of data, even on obscure topics.
  • They customize the search results based on your location and search history.

However, there are also a handful of downsides to using search engines:

 
  • The information you are given is often repetitive. You can look through 15 links with nearly identical content.
  • The amount of data can be overwhelming. It’s easy to get lost in the endless stream of search results.
  • The shallowness of the information you’re getting can also be an issue.

All this makes quality Internet search pretty tricky. But don’t worry: we will tell you about the techniques you can use to overcome these difficulties.

 The Basics of a Quality Google Search

First off, let’s look at a few simple ways to get the most out of Google. These are essential techniques anyone can use:

  • Refine the wording of your search terms. Try to keep the words as close to the topic as possible. If you are looking for a rock music article, you’d better not search “heavy music piece” on Google. “Heavy music” doesn’t necessarily mean “rock,” and “piece” doesn’t always refer to an “article.” 
  • Set a time frame. It’s a good idea to set parameters around when the material was published. To do this, go to Google search, press “Tools,” then “Any time,” set “Custom Date Range,” and select the dates relevant for you.  
  • Keep your search terms simple. There’s no need to overcomplicate things. After all, Google is smart. If you are looking for statistics on education in the US, simply typing in “US education facts” can work wonders. 
  • Use the tabs. You can make your search results far more refined by simply choosing a corresponding tab. It’s helpful when looking specifically for images, books, or news. 
  • Perform an advanced search. If your results are too vague and generalized, this option is your solution. Simply go to advanced search. Here, you can customize your key terms in great detail, from result language to file format. 
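Under the hood, every one of these searches is just a URL with query parameters. As a rough sketch (the endpoint and parameter handling here reflect how the public search page behaves, not a documented or stable API), a query URL can be assembled programmatically:

```python
from urllib.parse import urlencode

def search_url(query: str, **params: str) -> str:
    """Assemble a Google search URL from a query string plus optional
    extra parameters. Illustrative only: these parameter names are
    not part of a documented, guaranteed-stable API."""
    return "https://www.google.com/search?" + urlencode({"q": query, **params})

print(search_url("US education facts"))
# https://www.google.com/search?q=US+education+facts
```

Pasting the resulting URL into a browser is equivalent to typing the query into the search box.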

 7 Advanced Actionable Tips for Using Google Search

If you already knew about the basics listed above, here are more advanced tips, including wildcards. What are wildcards in a Google search? Well, they serve as placeholders for characters or words. They are extremely helpful for refining and maximizing search results. Try them out!

 Use Quotation Marks to Search for Exact Terms

Putting simple quotation marks around your search terms can help you with many things, such as:

  • Searching complicated terms. If you need to search for an exact phrase that consists of 2 or more words, make sure to put it in quotations. This way, you’ll avoid results containing only one of the words. For example, typing in “Atomic mass unit” with and without quotation marks can produce different results.
  • Finding the source of a quote. Sometimes you find a witty quote but don’t know who said it. In this case, just type the quote in the Google search bar using quotation marks, and the source should be the first result. For instance, searching for “If you tell the truth, you don’t have to remember anything” will show you that Mark Twain said it.
  • Fact-checking a quote. Some phrases are so popular that people attribute them to a handful of different authors. If you’re unsure if Abraham Lincoln ever said anything about the harm the Internet does, you can check that by simply googling the whole quote. Spoiler: no, he didn’t say that.
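When building queries programmatically, the quoting rule above is easy to apply. A minimal helper (the function name is hypothetical, purely to make the syntax concrete):

```python
def exact(phrase: str) -> str:
    # Double quotes tell the search engine to match the phrase
    # verbatim instead of treating each word independently.
    return f'"{phrase}"'

print(exact("Atomic mass unit"))  # "Atomic mass unit"
```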

 Add an Asterisk for Proximity Searches

An asterisk (* symbol) can be a handy tool when searching the Internet. What it does is act as a placeholder for any word. When Google sees asterisks among your search terms, it automatically changes the symbol to any fitting word.

Say you want to find a quote but don’t know the exact wording. You would type in “You do not find the happy life. You * it.” The asterisk will be magically substituted with “make,” and the author will be listed as Camilla Eyring Kimball.

 Type AND, OR, AND/OR to Expand the Results

Typing OR (in all caps) between two search terms will make Google look for results that match either of the terms; a page doesn’t need to contain both.

In contrast, the AND command will do the opposite. It will narrow the results down to only those containing both terms.

It can be helpful when looking for something called differently in separate sources. For example, searching for “fireflies” will list only half of the results. These shiny fellas are also often called lightning bugs. That’s why you might want to search for “Lightning bugs OR fireflies.”
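The same logic can be captured in two tiny helpers (hypothetical names, purely illustrative):

```python
def any_of(*terms: str) -> str:
    # OR (in capitals) broadens the search: a result may match either term.
    return " OR ".join(terms)

def all_of(*terms: str) -> str:
    # AND narrows the search: every term must appear.
    return " AND ".join(terms)

print(any_of("lightning bugs", "fireflies"))  # lightning bugs OR fireflies
print(all_of("baseball", "statistics"))       # baseball AND statistics
```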

 Remove Options Using a Hyphen

Want to know how to exclude words from Google search? Just put a hyphen (“-”) before the word you don’t want to see in the results. This way, words with unrelated meanings will no longer be a problem.

Imagine you need to find the plot for a play about baseball. Results for “Baseball play plot” will likely return irrelevant results. Searching “Baseball play plot -sport” may significantly improve your search results.
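As a sketch, exclusion is just a matter of prefixing each unwanted word with a hyphen (the helper below is hypothetical):

```python
def exclude(query: str, *unwanted: str) -> str:
    # A leading hyphen drops results containing that word.
    return query + "".join(f" -{word}" for word in unwanted)

print(exclude("Baseball play plot", "sport"))  # Baseball play plot -sport
```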

 Use Shortcuts to Your Benefit

If you don’t want to bother with advanced settings but need more specific results, you can use shortcuts: simple commands that you add to your search query. The most useful ones are:

  • intitle: and allintitle: narrow the results down to pages with the key terms in the title. It’s a good way to find an article if you know the exact topic you need.
  • inurl: and allinurl: find your terms in the page’s URL. Use them to find pages that are strongly optimized for your topic.
  • inanchor: and allinanchor: find pages whose incoming links contain your terms in their anchor text. Be careful, since they provide limited global results.
  • intext: and allintext: require your key terms to appear in the body text of the page.
  • cache: lets you find the most recent cached copy of any page you need. It can be helpful if the site is down or the page you need was deleted.
  • define: shows you the definition of your search term. Basically, it functions as an online dictionary.
  • site: limits the results to a single website. Use it when you want to be really specific. You can also add a country code to refine the results even further.
  • link: lists pages that link to the site you type after the command.
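These operators compose naturally, so a small builder can bolt several onto one query. The function below is a hypothetical sketch; the operator names themselves (site:, intitle:, filetype:) are the real ones described above:

```python
def with_operators(query: str, **operators: str) -> str:
    """Append 'operator:value' pairs to a plain query.
    Multi-word values would need to be wrapped in quotes."""
    return query + "".join(f" {op}:{val}" for op, val in operators.items())

print(with_operators("climate report", site="un.org", filetype="pdf"))
# climate report site:un.org filetype:pdf
```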

 Find a Specific File Type

Sometimes you need Google to show you only presentations or worksheets. In this case, using a “filetype:” shortcut can help you. Simply add this command at the end of your search terms with the file format, and you’re good to go. It can look like this:

Example:

Ways to improve your writing skills filetype:pdf

You can use this wildcard for any file type, not just PDF.

 Do Math in Google Search

The Google search tab may not sound like the best math tutor. However, it can perform simple tasks such as addition or division. For example, searching “8+8/4” will give you “10.”

You can also look up the numerical value of any mathematical constant. Simply typing in “Pi” will give you the value of pi to 11 decimal places. This option can come in handy during an exam.
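Google applies the standard order of operations (division before addition), which you can verify in any programming language; a quick check in Python:

```python
import math

# Division binds tighter than addition: 8 + 8/4 is 8 + 2, not 16/4.
print(8 + 8 / 4)           # 10.0

# Pi rounded to 11 decimal places, matching what the search box shows.
print(round(math.pi, 11))  # 3.14159265359
```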

 Other Search Engines to Use: Top 12

Google Search might be massively popular, but it’s not the only online engine available. Plenty of other worthy programs can aid you in finding things you need on the Internet.

Ideally, you want to use several of them when doing research. They will help you find specialized results, and some will even protect your privacy! Here are the 12 of our favorites:

1. Google Scholar

Google Scholar is an engine designed specifically for scholarly literature. Aside from your basic Google needs, it gives you a chunk of additional information.

Why use it: The most crucial feature is a large number of citations. Besides, it will show you citations in different styles. You may also need Google Scholar if you find yourself looking for grey literature: a common situation in academic research.

2. ResearchGate

ResearchGate is a social network created for scientists and scholars. Here they post publications, join groups, and discuss various academic matters. What can be a better place for a student craving sources for academic research?

Why use it: The website’s powerful search tool goes beyond ResearchGate, covering NASA HQ Library and PubMed, among others. Using it will bring you hundreds of search results containing the latest research articles.

3. Educational Resources Information Center

Educational Resources Information Center (ERIC for short) is a vast scholarly database on every topic imaginable. It lists over 1 million educational articles, documents, and journals from all over the Internet.

Why use it: This resource has a reputation in the scientific community for containing highly accurate insights. It’s also your go-to search engine if you’re looking for peer-reviewed journals.

4. Bielefeld Academic Search Engine (BASE)

BASE is another search engine designed for academic research. While being similar to others in functionality, it differs in the results it can provide.

Why use it: This engine digs into the deepest parts of the Internet. It often shows information that other resources simply won’t find. If you feel like your research lacks data and you don’t seem to be able to find anything new on the topic, try BASE.

5. COnnecting REpositories (CORE)

CORE is a project that aims at aggregating all open-source information on the Internet. CORE uses text and data mining to enrich its content, which is a unique approach to gathering information.

Why use it: Like most entries on the list, this engine focuses on academic resources. This means that you don’t have to worry about your sources being inaccurate or poorly written.

6. Semantic Scholar

This is a search engine that uses artificial intelligence for research purposes. Semantic Scholar relies on machine learning, natural language processing, and Human-Computer interactions. Remember that you’ll need a Google, Twitter, or Facebook account to access Semantic Scholar.

Why use it: The program’s creators added a layer of semantics to citation analysis usually used by search engines. That’s where the name comes from.

7. SwissCows

SwissCows is a classic search engine that positions itself as a family-friendly solution to Internet surfing. Its algorithm uses semantic maps to locate information.

Why use it: This engine filters all not-safe-for-work material from its results. The company also has a principle of not storing any data regarding your search history, which is a lovely bonus.

8. WorldWideScience

WorldWideScience is a search engine that strives to accelerate scientific research around the globe.

Why use it: While providing everything an academic resource does, it also has a unique feature: multilingual translations. This means you might find a piece of work originally written in a language you don’t speak, yet you’ll understand it perfectly.

9. Google Books

You can certainly judge a book by its cover here. As you may have guessed, Google Books searches through literature: both fictional and scientific. You type any term you need, and you get all the books related to it.

Why use it: This classic full-text search engine is excellent as a book-focused resource. In many of them, you can read snippets or even whole chapters related to your keyword. Neat, simple, and effective.

10. OAIster

OAIster is another literature-related search engine, but its data gathering principle is different: it uses OAI-PMH, a protocol for harvesting metadata from various sources. For mere mortals (like us), this means OAIster searches the metadata records that libraries and archives publish about their digitized holdings rather than the scanned texts themselves.

Why use it: OAIster’s unique algorithm makes the search results more accurate and shortens your browsing time.
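Since OAI-PMH is just HTTP plus XML, the exchange behind a harvester like OAIster is easy to sketch. The snippet below builds a standard ListRecords request and pulls Dublin Core titles out of a response. Note that the base URL and the XML are made-up stand-ins for illustration, not a real repository.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Hypothetical repository endpoint -- real OAI-PMH base URLs vary by institution.
BASE_URL = "https://example.org/oai"

def build_list_records_url(base_url):
    """Build an OAI-PMH ListRecords request asking for Dublin Core metadata."""
    return base_url + "?" + urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})

# A trimmed, canned example of the XML a repository might return.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A Sample Digitized Book</dc:title>
          <dc:creator>Doe, Jane</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def extract_titles(xml_text):
    """Collect every dc:title in a ListRecords response."""
    root = ET.fromstring(xml_text)
    title_tag = "{http://purl.org/dc/elements/1.1/}title"
    return [el.text for el in root.iter(title_tag)]

print(build_list_records_url(BASE_URL))  # https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
print(extract_titles(SAMPLE_RESPONSE))   # ['A Sample Digitized Book']
```

A real harvester would also follow the resumptionToken that large repositories include in each response to page through the full record set.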

11. OpenMD

OpenMD is a resource that focuses on medical information. It searches through billions of related articles, documents, and journals.

Why use it: This engine is priceless when you are a medical student working on an academic assignment. It also helps with a sore throat.

12. Wayback Machine

The Wayback Machine is the most extensive Internet archive out there. Practically everything that has ever been posted on the web can be found here. It also hosts a vast collection of books, audio and video files, and images.

Why use it: If the source you’re looking for is no longer available or has seen drastic changes, you can use the Wayback Machine to track the data back in time. Just choose the date you want to go back to and harvest the results.
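The Internet Archive also exposes a simple availability API at https://archive.org/wayback/available, which returns JSON describing the archived snapshot closest to a requested date. The sketch below builds such a query and parses a canned response shaped like the service's output; it does not make a live network call, and the example.com snapshot shown is illustrative.

```python
import json
from urllib.parse import urlencode

API_BASE = "https://archive.org/wayback/available"

def build_availability_url(page_url, timestamp):
    """Query for the snapshot of page_url closest to timestamp (YYYYMMDD)."""
    return API_BASE + "?" + urlencode({"url": page_url, "timestamp": timestamp})

# Canned response shaped like the API's JSON, used here instead of a live call.
SAMPLE_JSON = """{
  "archived_snapshots": {
    "closest": {
      "available": true,
      "url": "http://web.archive.org/web/20140101000000/http://example.com/",
      "timestamp": "20140101000000",
      "status": "200"
    }
  }
}"""

def closest_snapshot(response_text):
    """Return the URL of the closest archived snapshot, or None if there is none."""
    data = json.loads(response_text)
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None

print(build_availability_url("example.com", "20140101"))
print(closest_snapshot(SAMPLE_JSON))  # http://web.archive.org/web/20140101000000/http://example.com/
```

When no snapshot exists, the service returns an empty `archived_snapshots` object, which the parser above maps to `None`.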

 Bonus Tips: How to Evaluate Websites

Although search engines are great, they can sometimes show you a site that is not entirely reliable. It’s essential to distinguish helpful resources from potentially harmful or fake ones. Here’s what you should look at while evaluating a website:

Authority. Check the author’s background. See if their e-mail and other contacts are listed.

Accuracy. Double-check the information given to you. Look for the sources in the article, and make sure you check them out.

Objectivity. Articles often contain a good amount of bias. Make sure that it doesn’t get in the way of objective information.

Currency. The content you’re looking at can be simply outdated. Check the publication date or when it was last updated.

Coverage. Look at the number of subjects the article covers. Compare the range of topics to other pieces on a similar matter.

Keeping these things in check can save you time and significantly improve the quality of your work.

And with this, we end our guide. You’re welcome to share your useful research tips in the comments section. Best of luck with your next search!

About Author
This article was developed by the editorial team of Custom-Writing.org, a professional writing service with 3-hour delivery.

Via Charles Tiayon
Charles Tiayon's curator insight, November 29, 2022 11:50 PM

"What’s the first thing we do when facing the unknown? We Google it, of course! Google is fundamental to our experience of the Internet. According to the statistics, more than 100,000 people press “search” on Google every second!

At first glance, the process is straightforward. You type in what you need information about, press enter, and reap your reward. But, if your search is more complex, simply looking through the first page of results may not be enough. What are your other options?

If you struggle to answer this question, we are here to help! This article by our custom-writing team offers you the most actionable and advanced Google search tips.

Anaeli Villarreeal's curator insight, May 14, 2024 10:06 AM
Unlocking the full potential of the internet for research begins with mastering Google search. With over 100,000 queries processed every second, Google is our go-to tool for navigating the vast sea of information online. Yet, simply skimming the surface of search results may not suffice for complex inquiries. This article delves into actionable strategies for leveraging Google effectively, from refining search terms and setting time frames to utilizing advanced search features like tabs and wildcards. Whether you're a student, academic, or curious learner, these insights will enhance your ability to sift through the digital haystack and find the needles of knowledge you seek.
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:41 PM
Scoop.it!

The world still needs its dictionaries, but how we define them is changing

The world still needs its dictionaries, but how we define them is changing | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
At a recent dinner party I brought up the subject of dictionaries, drawing a sharp and immediate response: "Dictionary?" said a friend, "Who needs a dictionary? If I need a word I just look it up on my phone." What he meant was "who needs a printed dictionary?" But, without the people who wrote those boring old books, the ready-made definitions found with such facility on machines would not exist. Whether you've bought a dictionary app or you enter a word into a search engine, you have, in fact, consulted a dictionary. All online dictionaries, such as Dictionary.com, thefreedictionary.com, or yourdictionary.com, use, in addition to open sources, licensed material from well-known, established dictionary publishers. And open or copyright-free sources include older works like the 1889 Century Dictionary or the Standard Dictionary of 1893.

Despite wide availability of definitions online, printed dictionaries continue to engender devoted readers. Nowhere is this more apparent than in the recent reversal of fortune for the fifth edition of Webster's New World College Dictionary. Houghton Mifflin Harcourt released it in August and has ordered a fourth printing. This comes after its former publisher, Wiley, nearly killed it altogether by firing almost every member of the dictionary's staff in early 2011.

"Looking up things in the dictionary is an intimate act," said Peter Sokolowski, editor at large at Merriam-Webster. After lectures, audience members nearly always approach him and, in a conspiratorial whisper, confide things like "My family thinks I'm crazy because I read the dictionary."

Yet the story of the past 10 years or more has been one of retrenchment in the reference field as publishers cut back on full-time employees, replacing them with consulting lexicographers and support staff as sales of print dictionaries and other reference works declined. Jon Goldman, an editor at Webster's New World from 1966-2011, was part of a talented crew that kept the quality high, despite the challenges of repeated ownership changes and perennially skimpy resources. Goldman cites the lack of a digital program for the dictionary's failure to make money in the final years before the HMH purchase. According to HMH Executive Editor Steve Kleinedler, his company bought Webster's New World Dictionary in 2012 to fill a gap left by an earlier decision not to continue with their own college dictionary, concentrating instead on The American Heritage Dictionary of the English Language.

Among dictionary publishers only Merriam-Webster — the sole American publisher devoted exclusively to dictionaries — did not reduce its staff through layoffs. The company currently employs 30 full-time lexicographers. Between its free, advertising-supported dictionary website and smartphone application, Merriam-Webster nets about 200 million page views a month.

"That's a lot of traffic that keeps us going," says Sokolowski, a lexicographer who has worked at Merriam-Webster for more than 20 years. "Print is still alive and well, and there's no sense that print dictionaries are going to disappear. The thing is they are a much smaller part of the pie for us."

In the recent past, new editions of large dictionaries like Merriam-Webster's Unabridged were published infrequently (the second edition appeared in 1936, the third in 1961) with copyright updates or revised versions printed every five or six years. New editions of college dictionaries were usually published about every 10 years, with copyright updates appearing every year or two. A new edition of a dictionary is the product of a full revision during which every definition is reconsidered, outdated information revised or deleted and new words and new senses added. A copyright update has more modest ambitions, adding, in a college dictionary for example, roughly a few hundred new entries.

But the concept of publishing editions is disappearing, said Judy Pearsall, editorial director, Global Academic Dictionaries, at Oxford University Press. The Oxford English Dictionary uploads new words and revised entries to its website, OxfordDictionaries.com, every three months. These periodic uploads are called "releases," rather than "editions."

"The idea of an edition is something fixed, but this is less applicable to the digital world and our editorial workflow, which is about constantly updating based on our latest research," she said. "We make changes all the time, week to week. Just like language, so our dictionary is a living, breathing thing, changing and developing all the time in response to usage and user needs."

From the reader's perspective, you can't put data releases side by side on a shelf. And although Pearsall said Oxford takes "snapshots" of dictionary data every year, this information — thus far — is not available to the public. Merriam-Webster's Sokolowski said that the company's unabridged dictionary will be a "large, organic, but also not fixed, data set that will be the great American dictionary, the large American dictionary."

And so, we live in the continuous present of constant revision: whether we will be able to access the evolving history of the dictionary, reflecting cultural changes and editorial judgments, is an open question.

At the same time, online dictionaries are offering new information about how people use them. Sokolowski reports on Twitter about which words are trending on Merriam-Webster's website.

"I know what you're looking up," Sokolowski said. "We're eavesdropping effectively on the national conversation in a way that's very particular because the intersection of vocabulary and the news is one that's unpredictable. I don't know which word will be picked up. I mean, who would have guessed that the most looked-up word connected to Michael Jackson's death would be the word 'emaciated?'"

A dictionary is the work of many hands, a cooperative human project that requires scores of individuals poring over words, researching their history and writing definitions. It is a candle lit against the darkness of ignorance, a forceful statement that our language matters, and an inclusive register of how our speech has changed.

"Every new achievement has its antecedents, its foundation," said David Guralnik, a lexicographer who died in 2000, in a lecture at Cleveland's Rowfant Club in 1951. He was discussing Webster's New World Dictionary — which in its day sought to revolutionize the traditional dictionary by offering clear, precise and self-explanatory definitions "in a 20th century American style and from an American point of view." His New World Dictionary had "in its background the lexicographical labors of all those who have toiled in the bottomless, teeming ocean of English linguistics, from the forerunners of Dr. Johnson through Baltimore's own H.L. Mencken."

And one could say the same thing about every dictionary. The databases of the digital age are living off the fat of the land, the accumulated definitions written by the now dead and discarded lexicographers, the expert definition writers. The question now is will the dictionaries of the future match the high standards of the recent past and, if not, will anyone care? Will dictionary website subscriptions and licensing generate enough revenue to support the publishers who produce them?

"I think we're in a transition," said Don Stewart, senior editor of Webster's New World College Dictionary, 5th edition, "and I don't know what's going to come out of this, but what is going to take the place of the traditional printed dictionary? In what form will it be? I don't know, and I don't think anyone else does either."

Bruce Joshua Miller is editor of "Curiosity's Cats: Writers on Research." He blogs at brucejquiller.wordpress.com.

Copyright © 2014, Chicago Tribune
About this story
This piece first ran in Printers Row Journal, the Chicago Tribune’s premium Sunday book section. Learn more about subscribing to Printers Row Journal, which is available for home or digital delivery. 

Via Charles Tiayon
WISEHOUSE's curator insight, December 13, 2014 4:45 AM

tweeted/scooped by WISEHOUSE PUBLISHING www.wisehouse-publishing.com

Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:39 PM
Scoop.it!

How Technology Is Helping Modern Language Revitalization Efforts, Part 2

How Technology Is Helping Modern Language Revitalization Efforts, Part 2 | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
How Technology Is Helping Modern Language Revitalization Efforts, Part 2
TREY SADDLER
1/12/15
In the previous article, we discussed companies and organizations that have worked with tribes to develop language-learning materials. These efforts should be commended, but it can be difficult for smaller tribes to gain the support necessary to create polished language learning products. In this article, we will talk about some of the websites already available for preservationists and students to utilize in order to learn and maintain their native languages.

RELATED: How Technology Is Helping Modern Language Revitalization Efforts

Freelang.net

Freelang.net is a free online dictionary updated by volunteers, similar to the likes of Wikipedia. A quick glance at the languages offered on their website shows there are already dictionaries created for Blackfoot, Cherokee, Cheyenne, Choctaw, Gwich’in, Mohawk, Mohegan, Ojibwe, and Tanacross. Anyone can create a dictionary using this website, and they can be accessed online or through a program for the Windows operating system. There are tools in place to address dialect differences, and some of the dictionaries are fairly comprehensive as they are compiled from a variety of resources.

Cree Online Dictionary

The Cree Online Dictionary is one of the best examples of a dictionary for a specific language. The interface is clean and easy to use, and there is an app available for mobile devices. Like Freelang.net, this dictionary pulls from a few different printed dictionaries and also includes syllabic translations for most words. Collaborations like this give hope for the preservation of Native American languages, and hopefully we will see more efforts like this in the future.

Forvo.com

This website allows any user to upload words and spoken translations for them. Unfortunately, this resource has not been used extensively by tribes and only a handful of audio clips exist for Aleut, Cherokee, Cree, Creek, Inuktitut, Inupiaq, Micmac, Mohawk, Navajo, Ojibwe, Shoshoni, and Tlingit. One of the beautiful things about it is that independent websites like the Cree Online Dictionary can integrate with them and incorporate audio clips provided by users with their own translations, creating a truly valuable resource for language students. Hopefully mentioning this site will help to bring attention to this free resource, and motivate more users to provide spoken translations to be used by all.

Online Radio Stations/Podcasts

For those who are not familiar with them, podcasts are audio recordings, ranging from a few minutes to a couple of hours, that discuss a specific topic. They are especially useful for language learning and can take the form of specific lessons on vocabulary and grammar or conversations between native speakers on various topics. Many podcasts are free, like the Lac du Flambeau Language Podcast mentioned in part 1. Podcasts are easy to create and share through multiple services like iTunes and PodOmatic. The beauty of podcasts is that they can be consumed while working or traveling, providing listeners with a mobile immersion and learning experience tailored to their needs.

Online radio stations can be found for some native languages, usually mirroring the content found on the public local radio stations that they are derived from. MBC Radio based out of Saskatchewan broadcasts their Achimowin Cree program from 1 to 3 p.m. CST on weekdays, and users can listen to the station via their website. The same is true of NCI FM, based out of Manitoba, which broadcasts “Voices of the North” from 7 to 8 p.m. CST with DJ Lorraine George, a fluent Cree speaker. While some native language stations can only be found locally, many stations are realizing the importance of offering their content to a broader audience and mirror their content online.

Anki

While not specifically a language learning website or program, Anki is the most powerful flash card program available, and it deserves a special mention. Though somewhat complex to learn, this program should be at the forefront of any language learner’s arsenal. Many tutorials and guides explain how to use it, and the book Fluent Forever by Gabriel Wyner is one of the best companions to the software. Anki is available for Windows, OSX, Linux, Android, iOS, and as a website when using a public device. The software is free except for the iOS version, and flashcards can be synced between all of these devices at no cost.

Many websites and programs are available that can be utilized for learning Native American languages. Most of the resources are free, and the ones that are paid are usually worth the money. Crowd-sourced websites like Forvo and Freelang offer communities a way to document and preserve their languages, while flash card programs like Anki and language-specific apps mentioned in the previous article offer students a way to expose themselves to the language on a daily basis. In the final article we will discuss some of the more social avenues for learning and practicing these languages, along with additional resources students can use to acquire their target language.

Trey Saddler is an enrolled member of the Chippewa Cree Tribe of Montana. He is currently attending Salish Kootenai College in Montana and is expected to finish his Bachelor of Science in Life Science with a focus in Environmental Health in June. He is an EPA Greater Research Opportunities (GRO) Fellow, and has interned with the EPA, NIEHS, and at the SKC Environmental Chemistry Laboratory. He studies Native American languages in his free time.

Via Charles Tiayon
No comment yet.
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:35 PM
Scoop.it!

Does air pollution reduce cognitive function over time?

Does air pollution reduce cognitive function over time? | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

In a recent Science of the Total Environment study, researchers in the United Kingdom examine available studies for significant correlations between declining cognitive function in childhood and adult life and air pollution parameters. The study findings provide evidence of the inextricable interweaving of networks linking human environmental and individual health to productivity and socioeconomic background.

Study: Air pollution and human cognition: A systematic review and meta-analysis. Image Credit: Lemberg Vector studio / Shutterstock.com

Introduction

Air pollution harms health both directly and indirectly: it contributes to climate change, and the resulting higher temperatures favor the emergence of new diseases and spread existing disease vectors beyond their accustomed habitats. Air pollution is also a threat to the feasibility and sustainability of healthcare systems as they exist today.

Particulate matter, ozone, and nitrogen oxides (NOx) are some of the most prevalent air pollutants, according to the World Health Organization (WHO) and European Environmental Agency (EEA).

Cognition refers to mental processes involved in learning and using knowledge or information. This includes acquiring, processing, transforming, and storing such data with timely retrieval. Good cognitive skills are key to maintaining good physical and mental health, achieving academic success, rising in society, and earning more. 

Air pollutants may not reach the brain directly but produce inflammation and oxidative stress that have neurological effects. Inflammation may be neuronal or systemic and may also involve dysregulated immunity that can lead to neuronal degeneration.

About the study

Earlier research has shown a link between cognition at the population level and the degree of air pollution and cumulative exposure. The current review supports these previous findings while also focusing on cognition as experienced by people at large rather than in terms of specific clinical diagnoses such as autism or dementia.

The researchers included 86 studies in their qualitative analysis, with 14 in the meta-analysis. Except for Africa, all other continents were included.

Most studies in the meta-analysis explored air quality at home or school, thus measuring potential exposure to air pollution in the form of particulate matter less than or equal to 2.5 micrometers in size (PM2.5). For children and adolescents, the research did not support a link between exposure and general cognitive deterioration; however, the evidence is too weak to allow a definitive conclusion.

In other words, the studies came to varying conclusions, might have tested different sets of cognitive skills, and, as a result, may have used methods too different to be clustered together in a single meta-analysis. Standardized cognitive tests might help avoid such deficits in future studies.

What did the study show?

Some studies indicated lower intelligence in children between the ages of eight and 11 exposed to higher levels of black carbon (BC) but not coarse PM, PM of 10 micrometers or less (PM10), or ozone in younger children up to eight years of age. In addition, several studies showed a decline in executive function, especially working memory and attention span.

PM2.5, PM10, and NOx exposure were linked to poor executive function in several studies that did not depend on a single cohort, unlike the above.

Available research does not support an association between exposure to various air pollutants like NOx, PM2.5, and ultrafine particles (UFP) and either memory and learning or reaction time and the speed at which a child processes data.

With young adults, few studies have explored cognitive outcomes with exposure to air pollution.

In those above the age of 40, some associations with general cognitive decline and PM2.5 or NOx exposure were identified. In addition, PM2.5 exposure was also associated with reduced verbal fluency and executive function.

Previous meta-analyses showed significant adverse effects were due to increasing exposure to air pollution in low-exposure areas but not high-exposure areas. This could be due to the overall high level of exposure-related harmful effects in high-exposure areas; therefore, the range of exposures used in these areas might fail to detect the change in harm level.

Prior studies that covered long periods showed significant negative associations between cognition and exposure levels. However, cognition studies were of relatively good quality only in older adults.

Most studies focused on children or on older adults above 40, who are considered at higher risk due to rapid changes in their cognitive processes. Intelligence and reasoning skills were not well studied; however, verbal fluency in older adults was found to decline in association with increases in PM2.5.

Despite the limited number of studies on young adults, this group appears to be more affected by exposure to air pollution than children or older adults. Further research is thus essential in this group, as the brain rapidly develops up to the age of 25 years and continues after that at a slower pace until the end of life.

The extant studies also did not account for the confounding effects of noise pollution, which is often co-existent with air pollution. Moreover, the effects of exposure to air pollution at one period of life may be heavily influenced by previous exposure and its developmental impact.

Cognitive effects due to such exposures may vary depending on the developmental phase and period of life. At present, cumulative slow mechanisms such as attrition of neurons by slow injury or chronic inflammation affecting the whole body may be implicated. However, more acute effects have been shown to possibly affect the brain.

“Immediate and acute exposure, therefore, could disrupt contemporaneous cognitive processes and have a lasting cognitive impact through disruption to longitudinal cognitive processes.”

The latent period before injury becomes apparent may thus differ after an acute exposure or between pollutants. This phenomenon was evident in one study where short-term effects on general cognitive function were more significantly associated with PM2.5 than with NOx, while the converse was seen for the long-term consequences of these two pollutant types.

Notably, the high variation in significance and direction of associations could be due to the combination of effects from performing different tasks. With the single task of verbal fluency, where the same task was applied across various studies, heterogeneity of the effect on meta-analysis was low.

Task similarity alone does not explain heterogeneous effects since heterogeneity was low for the meta-analysis of executive function using different tasks but high for other single-task meta-analyses. Instead, exposure levels, latency period, and bias could play a role.

Nevertheless, most associations did find support in the outcomes reached by the meta-analysis, thus indicating an association between air pollution and some cognitive processes.

Future directions

“This review identified much evidence that was supportive of associations between environmental air pollution and cognition in humans, but not for all pollutants and all cognitive outcomes.”

However, the evidence could not be classified with a high degree of certainty.

The researchers also make several recommendations. First, using standardized tools in global research would improve the meta-analysis by ensuring better comparability.

Secondly, much more research must be conducted to examine how air pollution affects cognition during the vulnerable periods of adolescence and young adulthood when the brain undergoes dramatic changes. Such analysis should also be extended to cover a broader spectrum of cognitive functions.

Similarly, a range of air pollutants, especially those which often occur together or affect the response to another, should be studied. Unfortunately, the current study only assessed a select list of pollutants.

The importance of adjusting for pre-existing risk factors such as birth difficulties, other forms of pollution, and risk of injury during childhood is also highlighted. These need further exploration to better understand their relationships and modifying effects on the results of pollution exposures.

Mechanistic studies are also indicated to strengthen the potential causality of an association.

Journal reference:
  • Thompson, R., Smith, R. B., Karim, Y. B., et al. (2022). Air pollution and human cognition: A systematic review and meta-analysis. Science of the Total Environment. doi:10.1016/j.scitotenv.2022.160234.

Written by

Dr. Liji Thomas

Dr. Liji Thomas is an OB-GYN, who graduated from the Government Medical College, University of Calicut, Kerala, in 2001. Liji practiced as a full-time consultant in obstetrics/gynecology in a private hospital for a few years following her graduation. She has counseled hundreds of patients facing issues from pregnancy-related problems and infertility, and has been in charge of over 2,000 deliveries, striving always to achieve a normal delivery rather than an operative one.


Via Charles Tiayon
Charles Tiayon's curator insight, December 4, 2022 10:27 PM

"In a recent Science of the Total Environment study, researchers in the United Kingdom examine available studies for significant correlations between declining cognitive function in childhood and adult life and air pollution parameters. The study findings provide evidence of the inextricable interweaving of networks linking human environmental and individual health to productivity and socioeconomic background..."

#metaglossia mundus

Rescooped by Dr. Russ Conrath from iGeneration - 21st Century Education (Pedagogy & Digital Innovation)
December 2, 2022 2:10 PM
Scoop.it!

Khan Academy founder has two big ideas for overhauling higher education in the sciences

Khan Academy founder has two big ideas for overhauling higher education in the sciences | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Soft-spoken education revolutionary Sal Khan has a few ideas for how to radically overhaul higher education. First, create a universal degree that's comparable to a Stanford degree, and second, tra...

Via Tom D'Amico (@TDOttawa)
Alex Enkerli's curator insight, December 15, 2014 9:47 AM
Sounds like a plan for James Willis and Dan Hickey. Assessment and #Credentialism
Rescooped by Dr. Russ Conrath from Creative teaching and learning
December 2, 2022 2:09 PM
Scoop.it!

(Open) Educational Resources around the World

(Open) Educational Resources around the World | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

"This book is a collection of the full country reports and working papers created by the COER members from the countries that were included in the study within the research project EduArc on distributed learning infrastructures for OER and digital learning content in higher education ..."


Via Leona Ungerer
Dr. Russ Conrath's insight:

"This book is a collection of the full country reports and working papers created by the COER members from the countries that were included in the study within the research project EduArc on distributed learning infrastructures for OER and digital learning content in higher education ..."

No comment yet.
Rescooped by Dr. Russ Conrath from Daily Magazine
February 14, 2023 12:23 PM
Scoop.it!

What the Shift to Virtual Learning Could Mean for the Future of Higher Ed

What the Shift to Virtual Learning Could Mean for the Future of Higher Ed | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

Tectonic shifts in society and business occur when unexpected events force widespread experimentation around a new idea. During World War II, for instance, when American men went off to war, women proved that they could do “men’s” work — and do it well. Women never looked back after that. Similarly, the Y2K problem demanded the extensive use of Indian software engineers, leading to the tripling of employment-based visas granted by the U.S. Fixing that bug enabled Indian engineers to establish their credentials, and catapulted them as world leaders in addressing technology problems. Alphabet, Microsoft, IBM, and Adobe are all headed by India-born engineers today.
Right now, the Coronavirus pandemic is forcing global experimentation with remote teaching. There are many indicators that this crisis is going to transform many aspects of life. Education could be one of them if remote teaching proves to be a success. But how will we know if it is? As this crisis-driven experiment launches, we should be collecting data and paying attention to the following three questions about higher education’s business model and the accessibility of quality college education.
Do students really need a four-year residential experience?
Answering this question requires an understanding of which parts of the current four-year model can be substituted, which parts can be supplemented, and which parts complemented by digital technologies.
In theory, lectures that require little personalization or human interaction can be recorded as multi-media presentations, to be watched by students at their own pace and place. Such commoditized parts of the curriculum can be easily delivered by a non-university instructor on Coursera, for example; teaching Pythagoras’ theorem is pretty much the same the world over. For such courses, technology platforms can deliver the content to very large audiences at low cost, without sacrificing one of the important benefits of the face-to-face (F2F) classroom, the social experience, because there is hardly any in these basic-level courses.
By freeing resources from courses that can be commoditized, colleges would have more to commit to research-based teaching, personalized problem solving, and mentorship. Students would have more resources at their disposal, too, because they wouldn’t have to reside on campus for four full years. They could take commoditized courses online at their convenience and at much lower cost, and use their precious time on campus for electives, group assignments, faculty office hours, interactions, and career guidance, things that cannot be done remotely. In addition, campuses can facilitate social networking, field-based projects, and global learning expeditions that require F2F engagement. This hybrid model of education has the potential to make college more affordable for everybody.
But can we shift to a hybrid model? We’re about to find out. It is not just students who are taking classes remotely; instructors are now forced to teach those classes from their homes. The same students and instructors who met face-to-face until a few weeks ago are now trying alternative methods for the same courses, so both parties can compare their F2F and remote experiences, all else held equal.
With the current experiment, students, professors, and university administrators must keep a record of which classes are benefiting from being taught remotely and which ones are not going so well. They must maintain chat rooms that facilitate anonymized discussions about the technology issues, course design, course delivery, and evaluation methods. These data points can inform future decisions about when — and why — some classes should be taught remotely, which ones should remain on the campus, and which within-campus classes should be supplemented or complemented by technology.
What improvements are required in IT infrastructure to make it more suitable for online education?
As so many of us whose daily schedules have become a list of virtual meetings can attest, there are hardware and software issues that must be addressed before remote learning can really take off. We have no doubt that digital technologies (mobile, cloud, AI, etc.) can be deployed at scale, yet we also know that much more needs to be done. On the hardware side, bandwidth capacity and digital inequalities need addressing. The F2F setting levels lots of differences, because students in the same class get the same delivery. Online education, however, amplifies the digital divide. Rich students have the latest laptops, better bandwidths, more stable wifi connections, and more sophisticated audio-visual gadgets.
Software for conference calls may be a good start, but it can’t handle some key functionalities such as accommodating large class sizes while also providing a personalized experience. Even in a 1,000-student classroom, an instructor can sense if students are absorbing concepts, and can change the pace of the teaching accordingly. A student can sense whether they are asking too many questions, and are delaying the whole class. Is our technology good enough to accommodate these features virtually? What more needs to be developed? Instructors and students must note and should discuss their pain points, and facilitate and demand technological development in those areas.
In addition, online courses require educational support on the ground: instructional designers, trainers, and coaches to ensure student learning and course completion. A digital divide also exists among universities, which will become apparent in the current experiment. Top private universities have better IT infrastructure and a higher ratio of IT support staff per faculty member than budget-starved public universities.
What training efforts are required for faculty and students to facilitate changes in mindsets and behaviors?
Not all faculty members are comfortable with virtual classrooms, and there is a digital divide between faculty who have never used even basic audio-visual equipment, relying on blackboards and flipcharts, and younger faculty who are aware of and adept with newer technology. As students across the nation enter online classrooms in the coming weeks, they are going to learn that many instructors are not trained to design multimedia presentations with elaborate notations and graphics. Colleges and universities need to use this moment to assess what training is needed to provide a smooth experience.
Students also face a number of issues with online courses. Committing to the university calendar forces them to finish a course instead of putting it off forever. Online, they can feel as if they don’t belong to a peer group or college cohort, which in real life instils a sense of competition, motivating all to excel. Attention spans also suffer online, because students multitask, checking email, chatting with friends, and surfing the web while attending lectures. We’re parents and professors; we know this is true.
Can these mindsets change? Right now we are (necessarily, due to social distancing) running trial and error experiments to find out. Both teachers and students are readjusting and recalibrating in the middle of teaching semesters. The syllabus and course contents are being revised as the courses are being taught. Assessment methods, such as exams and quizzes are being converted to online submissions. University administrators and student bodies are being accommodative and are letting instructors innovate their own best course, given such short notice. Instructors, students, and university administrators should all be discussing how the teaching and learning changes between day 1 of virtual education and day X. This will provide clues for how to train future virtual educators and learners.
A Vast Experiment
The ongoing coronavirus pandemic has forced a global experiment that could highlight the differences between, and cost-benefit trade off of, the suite of services offered by a residential university and the ultra low-cost education of an online education provider like Coursera. Some years ago, experts had predicted that massive open online courses (MOOCs), such as Khan Academy, Coursera, Udacity, and edX, would kill F2F college education — just as digital technologies killed off the jobs of telephone operators and travel agents. Until now, however, F2F college education has stood the test of time.
The current experiment might show that four-year F2F college education can no longer rest on its laurels. A variety of factors, most notably the continuously rising cost of tuition, already out of reach for many families, suggests that the post-secondary education market is ripe for disruption. The coronavirus crisis may be just that disruption. How we experiment, test, record, and understand our responses to it now will determine whether and how online education develops as an opportunity for the future. This experiment will also enrich political discourse in the U.S. Some politicians have promised free college education; what if this experiment proves that a college education doesn’t have to bankrupt a person?
After the crisis subsides, is it best for all students to return to the classroom, and continue the status quo? Or will we have found a better alternative?


Via Inovação Educacional, juandoming, THE OFFICIAL ANDREASCY
No comment yet.
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
February 14, 2023 12:19 PM
Scoop.it!

LTI Korea to set new rules for translation award after AI translation sparks controversy

LTI Korea to set new rules for translation award after AI translation sparks controversy | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Published : Feb 9, 2023 - 22:13       Updated : Feb 10, 2023 - 16:31

A translator who is not fluent in Korean winning the webtoon category at the 2022 Korea Translation Award has sparked controversy about the use of artificial intelligence in translation.

A local newspaper reported Wednesday that Yukiko Matsusue, a Japanese translator who won Rookie of the Year at the annual award organized by the Korean Literature Translation Institute in December, had used Naver’s AI-translating system Papago while translating Gu A-jin’s occult thriller “Mirae's Antique Shop” into Japanese.

For the Rookie of the Year Award, translators were assigned to translate works selected by LTI Korea.

Matsusue is said to have used Papago’s image translation function to read the entire webtoon in advance as a “preliminary translation,” then edited the translation further by checking technical terms and awkward expressions.

Matsusue said through a press statement released by LTI Korea on Wednesday that she "read the whole work from beginning to end in Korean and used Papago as a substitute for a dictionary for more accurate translation," as the webtoon features occult terminology and shamanistic words that were unfamiliar to her.

Matsusue then studied research papers to understand the context and completed the translation by adding detailed corrections. She said she didn’t think of it as a preliminary translation.

Regarding her Korean ability, she said she is overall “not at the beginner level of not being able to understand Korean at all,” and that she had learned Korean for about a year, 10 years ago. However, she added she is “not good enough” in her speaking and listening skills.

She said she had been taking Korean language classes when she applied for the contest. In fact, it was her Korean teacher who assured her that she would be perfectly capable of translating a webtoon.

"Last year's regulations and awarding system were insufficient to cover any details of 'external help,'" an LTI Korea official told The Korea Herald on Thursday.

LTI Korea said it saw this as part of the trend of using AI in translations and plans to discuss the role of AI in translation in the future.

Whether Matsusue's award will be canceled will be reviewed if necessary.

Meanwhile, for the Rookie of the Year translation award, LTI Korea will now specify in its regulations that translations are to be one's own, without the aid of "external help such as AI,” in line with the aim of discovering new translators.

"AI translation is almost perfect for technical translating such as legal documents, advertisements and newspaper articles," said Kim Wook-dong, emeritus professor of English Literature and Linguistics at Sogang University, speaking to The Korea Herald. Kim recently published "The Ways of a Translator" on the act of translation.

"However, there are limits (to AI translation) in capturing the subtle emotions, connotations and nuances in literary translations. It can help and serve as an assistant to translators but AI cannot replace humans in literary translation. I doubt it ever will," Kim said.

By Hwang Dong-hee (hwangdh@heraldcorp.com)

Via Charles Tiayon
Rescooped by Dr. Russ Conrath from Education
January 31, 2023 1:47 PM
Scoop.it!

Tips To Write A Perfect Case Study! » Dailygram ... The Business Network

Tips To Write A Perfect Case Study! » Dailygram ... The Business Network | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Case studies are a research method in which all the facts and concepts related to the topic being studied, as well as the functional concepts behind the scenes, are explained. To write a good case study, a learner must therefore have thorough knowledge of the concepts related to the topic, along with solid skills in writing and in comprehending the facts. There is no surefire recipe for a perfect case study, but your efforts can make it one and help you achieve the grades you desire. Case studies are an important method for understanding a concept, but while composing one, students may come across many difficulties and issues for which they may require case study assignment help.

Tips for Writing a Case Study

Read the theoretical concepts: Case studies are designed to test students' academic knowledge and how they see its relevance and application in their given studies, so learn the theories of the particular discipline or domain on which your case-study assignment is based. For example, if a case study is based on strategic management, it is useful to understand what strategic management is and what its different perspectives are. You should have a good grasp of the theoretical concepts behind the particular case-based assignment.

Read the case study thoroughly: It is essential to read the case thoroughly and understand its different aspects. Understanding the chronology of the events in the case, and the important points that surface in it, is vital.

Critical analysis and a coherent framework: While doing case-study assignments, it is important to critically analyze the different aspects of an argument and provide the necessary evidence in support of the points you make in your report. Many students take online help with case studies; in any case, answers should be given within a coherent, structured framework.

Standard referencing: All references in the case-study document must be in the proper format prescribed by your university; most universities prefer Harvard or APA referencing styles. Students are often unfamiliar with these styles, for which they may take the assistance of a case study assignment helper in Australia. Proper citation and referencing will help your reader identify and reach the sources you used for your write-up and arguments.

Standard academic writing style: Use standard academic language in short sentences free of grammatical and spelling errors. Usually, you are expected to write in the passive voice, highlighting objectively what was done. Proofread the document before final submission to maintain the consistency of the main argument and thesis statements, and don't forget to check that ideas and statements flow and correlate with each other. If you require any further help, you can ask case-study professionals.

Via Adele Hansley
No comment yet.
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
January 31, 2023 1:45 PM
Scoop.it!

What is ChatGPT and why SEOs should care

What is ChatGPT and why SEOs should care | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

Learn how this AI-powered chatbot works, who's behind the technology, and what it can – and can't – do for search marketers.

Tom Demers on January 26, 2023

Interest in AI technology and, more specifically, OpenAI’s ChatGPT product has skyrocketed in recent weeks. 

People are looking for information about both topics.

Source: Google Trends

Thousands are writing about ChatGPT across the web…

Source: Google Search

…and talking about it in various communities.

Source: Exploding Topics

And as you can tell from the graphs, all of this happened quickly.

Whether your Twitter and LinkedIn feeds have been inundated with threads and posts about ChatGPT (like mine) or you’re just stumbling on the topic, you may want answers to two questions before investing your time and energy into learning ChatGPT:

  • Is ChatGPT specifically likely to be an enduring product?
  • What does it actually do and what can you personally use it for?

In this article, I’ll help you answer these questions by telling you:

What is ChatGPT?

ChatGPT is an AI-powered chatbot created by OpenAI that can be accessed at https://chat.openai.com/.

As of this writing, ChatGPT offers a free version of the tool that users can access, but there have been indications that they will be charging $42/month for a pro version. OpenAI has also indicated that they’ll make an API for the tool available soon.

The interface is simple, with an empty dialog to enter a prompt. The tool can perform various tasks and return text in response. Some examples of tasks ChatGPT can execute include:

  • Answering questions.
  • Writing things like ads, emails, paragraphs, whole blog posts, or even college papers.
  • Writing, commenting or marking up code.
  • Changing the formatting on a block of text for you.

ChatGPT launched in late November 2022, on the heels of AI Content Generator Jasper.ai receiving $125 million in funding at a $1.5 billion valuation earlier the same month. The tool reached a million users in less than a week.

ChatGPT launched on wednesday. today it crossed 1 million users!

— Sam Altman (@sama) December 5, 2022

But each session has a specific cost associated with it:

average is probably single-digits cents per chat; trying to figure out more precisely and also how we can optimize it

— Sam Altman (@sama) December 5, 2022

In the interest of helping fund those costs (and further growth), Microsoft invested $10 billion in OpenAI at a $29 billion valuation. That move, combined with ChatGPT’s growth and word of mouth, might be fueling Google’s reported concerns about ChatGPT as a possible threat.

OpenAI has also indicated that there will be a “professional” version of the tool, and Greg Brockman, the president and co-founder of OpenAI, shared a link to a Google Form to get on the waitlist:

Working on a professional version of ChatGPT; will offer higher limits & faster performance. If interested, please join our waitlist here: https://t.co/Eh87OViRie

— Greg Brockman (@gdb) January 11, 2023

Some users have reported seeing an option to upgrade to a $42 pro version when logged into their account.

Even with the Microsoft investment, ChatGPT has continued to experience outages and even had to limit new users on the platform:

And ChatGPT is starting to face criticism over the accuracy of some of its output, while also staring down competition from rivals (which one would have to assume will only increase and intensify in the wake of the platform’s early success).

Now that you know what ChatGPT is, it’s also helpful to understand a bit more about how it works and who built it (and what their goals and motivations may be). 

How does it work and how was it trained?

If you’re an SEO looking for ways to leverage AI in your everyday work, you don’t need to know how to build your own chatbot.

That said, when using tools like ChatGPT, you will want to know where the information it generates comes from, how it determines what to return as an answer, and how that might change over time.

That way you can understand what level of trust to put in the output of ChatGPT chats, how to better craft your prompts, and what tasks you may want to use it for (or not use it for).

Before you start to use ChatGPT for anything, I’d strongly recommend you check out OpenAI’s own blog post about ChatGPT. There they have a nice graphic explaining how it works, along with a more in-depth explanation.

AssemblyAI also has a detailed third-party breakdown of how ChatGPT works, some of its strengths and weaknesses, and a number of additional sources if you’re looking to dive deeper.

One of the most important things to remember about how ChatGPT works is its limitations. In OpenAI’s own words:

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

Another that’s important to highlight:

“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”

As many people know, ChatGPT was fine-tuned on a GPT model that finished training in early 2022, meaning it has no knowledge of more recent events.

It is also trained on a “vast amount” of text from the web, so of course answers can be incorrect. From ChatGPT's own FAQs:

"Can I trust that the AI is telling me the truth?

ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content.

We'd recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the "Thumbs Down" button."

 

Who built ChatGPT?

Similarly, understanding who built the application and why is an important background if you hope to use it in your day-to-day work.

Again, ChatGPT is an OpenAI product. Here's some background on the company and their stated goals:

  • OpenAI has a non-profit parent organization (OpenAI Inc.) and a for-profit corporation called OpenAI LP (which has a “capped profit” model with a 100x profit cap, at which point the rest of the money flows up to the non-profit entity).
  • The biggest investor is Microsoft. OpenAI employees also own equity.
  • Former Y Combinator President Sam Altman is the CEO of OpenAI and was one of the original founders (along with prominent Silicon Valley personalities such as Elon Musk, Jessica Livingston, Reid Hoffman, Peter Thiel, and others). Many people ask about Musk’s involvement in the company and ChatGPT. He stepped down as a board member in 2018 and wouldn’t have had any meaningful involvement in the development of ChatGPT (which obviously didn’t launch until November 2022).

Notable elements here if you’re interested in ChatGPT either as an SEO or as a viable alternative to Google are obviously: 

  • Microsoft’s involvement (with Microsoft Bing being the number 2 search engine – a distant second behind Google).
  • ChatGPT obviously isn’t designed to specifically be either an SEO or a content tool (unlike tools like Jasper.ai, Copy.ai and other competitors – many of which are built on top of the GPT-3 framework).

Why should SEOs care about ChatGPT?

While it’s possible that ChatGPT or another AI-powered chatbot could become a viable alternative to Google and traditional search, that’s likely at least far enough away that most SEOs won’t be primarily concerned with the tool for that reason. So why should SEOs care?

ChatGPT has a variety of functionality that can be helpful for SEOs. Additionally, given the platform’s ability to generate AI content, it’s important to understand both what the tool is capable of on that front, and how Google talks and thinks about AI content generally.

What follows are ChatGPT's use cases for SEO.

AI content generation

By far the “buzziest” early 2023 SEO topic has been AI content broadly, and ChatGPT has been at the center of that discussion since it launched. 

From creating blog posts whole cloth to selecting images, generating meta descriptions or rewriting content, there are a number of specific functions ChatGPT can serve when it comes to content creation generally and SEO-focused content creation specifically.

But, of course, an important concern here is how Google thinks about AI content in general.

SEOs need to identify the specific instances where ChatGPT can make them more efficient or enhance their content. At the same time, it's crucial to understand the potential risks to rankings and organic traffic when using ChatGPT-generated content in different ways (particularly if you’re relying on content created by writers you don’t have a relationship with).

Keyword research and organization

Similarly, there are a number of specific tasks ChatGPT can execute related to keyword research and optimization, such as:

  • Suggestions for keywords to target or blog topics.
  • Keyword clustering or categorization.

A key consideration for SEOs is how this relates to your current and optimal processes for these tasks.

ChatGPT isn’t designed to be an “SEO tool,” so it won’t have the emphasis on search volume, competition, or relevance and co-occurrence that more focused keyword research or organization tools will.
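To make "keyword clustering" concrete: whatever list ChatGPT hands back, you still need structure on your side. Here is a minimal, hypothetical Python sketch that groups keyword phrases by their shared head term; the function name and the grouping rule are illustrative inventions, not part of any SEO tool's API.

```python
from collections import defaultdict

def cluster_keywords(keywords):
    """Group keyword phrases by their shared head term (last word).

    A deliberately naive stand-in for the kind of clustering ChatGPT
    can be prompted to perform on a keyword list.
    """
    clusters = defaultdict(list)
    for kw in keywords:
        head = kw.strip().lower().split()[-1]  # last word acts as the cluster key
        clusters[head].append(kw)
    return dict(clusters)

keywords = [
    "best running shoes",
    "trail running shoes",
    "marathon training plan",
    "beginner training plan",
]
print(cluster_keywords(keywords))
```

A real workflow would also weigh search volume, competition, and co-occurrence, exactly the signals the paragraph above notes ChatGPT does not emphasize.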

Code generation and technical SEO

ChatGPT is helping people generate code and build things, and it’s no different for specific technical SEO tasks.

Depending on the prompts, ChatGPT can help with things like schema markups, robots.txt directives, redirect codes, and building widgets and free tools to promote via link outreach, among others.

As with any type of content creation, you must QA the code that ChatGPT generates. Your site’s template, hosting environment, CMS, and more can break if the code ChatGPT generates is incorrect.
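Schema markup of the kind mentioned above is structured JSON-LD, so whether it is hand-written or ChatGPT-generated, it can be built and QA'd programmatically. A minimal sketch follows; the helper name and the reduced field set are my own illustration, while schema.org defines the full Article vocabulary.

```python
import json

def article_schema(headline, author, date_published):
    """Build a minimal schema.org Article JSON-LD block.

    Only a few common properties are shown; consult schema.org for
    the complete Article vocabulary before publishing.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }

block = article_schema("What is ChatGPT?", "Tom Demers", "2023-01-26")
# Embed in a page inside a <script type="application/ld+json"> tag.
print(json.dumps(block, indent=2))
```

Generating the block yourself (or round-tripping ChatGPT's output through `json.loads`) is one cheap QA step: malformed JSON fails immediately instead of silently breaking your page's structured data.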

Link building

ChatGPT can generate lists of outreach targets, emails, free tool ideas, and more that may assist with link building work. 

Here again (you may be sensing a theme) two things to keep in mind:

  • Since ChatGPT was not built to be a link building tool, it may not prioritize opportunities or generate ideas that will specifically help with SEO success.
  • GPT-3 is trained on old data, so the information you’re getting may be wrong or outdated.

How to think about ChatGPT as an SEO

Ultimately, given its early functionality and reception along with OpenAI’s founding team and investors (and level of investment), ChatGPT is likely to have longevity as a tool. 

It’s highly useful, with a high potential for getting folks who misuse it into trouble. 

I would encourage SEOs to become familiar with ChatGPT (and tools like it) and get used to carefully checking its output.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

About the author

 
Tom Demers is the co-founder and managing partner of Measured SEM and Cornerstone Content. His companies offer paid search management, search engine optimization (SEO), and content marketing services to businesses of varying sizes in various industries.

Via Charles Tiayon
Charles Tiayon's curator insight, January 26, 2023 11:09 PM

"ChatGPT launched in late November 2022, on the heels of AI Content Generator Jasper.ai receiving $125 million in funding at a $1.5 billion valuation earlier the same month. The tool reached a million users in less than a week.

 

In the interest of helping fund those costs (and further growth) Microsoft invested $10 billion in OpenAI at a $29 billion valuation. A move which, combined with ChatGPT’s growth and word of mouth, might be fueling Google’s reported concerns about ChatGPT as a possible threat.

OpenAI has also indicated that there will be a “professional” version of the tool, and Greg Brockman, the President & Co-Founder of OpenAI, shared a link to a Google Form to get on the waitlist...

Working on a professional version of ChatGPT; will offer higher limits & faster performance. If interested, please join our waitlist here: https://t.co/Eh87OViRie

— Greg Brockman (@gdb) January 11, 2023

Some users have reported seeing an option to upgrade to a $42-per-month paid version when logged into their account.

Even with the Microsoft investment, ChatGPT has continued to experience outages and has even had to limit new users on the platform.

And ChatGPT is starting to face criticisms over the accuracy of some of the output of the tool, while also staring down competition from rivals (which one would have to assume will only increase and intensify in the wake of the platform’s early success).

Now that you know what ChatGPT is, it’s also helpful to understand a bit more about how it works and who built it (and what their goals and motivations may be). "

#metaglossia mundus

Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
January 31, 2023 1:38 PM
Scoop.it!

Voice transcription AI: The future of doctor-patient interactions

Voice transcription AI: The future of doctor-patient interactions | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

Voice-recognition AI software has the potential to be the rare smartphone app that encourages face-to-face interactions. Its early results suggest the technology could be a game-changer for a healthcare industry in desperate need of one, boosting morale in the short-term while potentially saving money down the road.

By PETER Y. HAHN

Post a comment / Aug 17, 2022 at 8:30 AM
 


Voice-recognition AI software has improved the basic processes for a variety of professions, including restaurateurs, journalists, and any customer service organization that employs an automated call center. For the healthcare industry, voice-recognition AI in the examination room has shifted from a mere convenience to an urgent need.

Even before the Covid-19 pandemic reached the United States, and the ensuing “Great Resignation” took hold in the healthcare industry, burnout was a growing concern among physicians and other providers. Their jobs demand long hours and efficient interactions with ever-increasing numbers of patients. Electronic Medical Record (EMR) systems like Epic have transformed patient recordkeeping for the better, in addition to their benefits to the natural environment. But these benefits came at a cost.

 

As paper records were phased out, providers bore the burden of updating each patient’s EMR with fastidious note-taking. This created a dilemma: when to record the notes into the EMR system? Providers could either input notes directly into a computer during the patient visit, or take notes mentally and update the patient’s EMR afterward. In this way, EMR technology frequently added to the burden on a doctor’s time, and might have placed a financial burden on the hospital itself. With their face-to-face time limited, patients and providers might focus on a single issue during each visit, and ignore any smaller medical issues. Those smaller concerns might go away, or they might become large ― in which case early intervention could have prevented costly clinical care and in-person visits in the future.

The unprecedented stress Covid-19 placed on the U.S. healthcare system exacerbated many of these pre-existing issues. A Michigan health system instituted a pilot program in Autumn 2021 to tackle the EMR dilemma head-on using a voice-recognition AI tool called Dragon Ambient eXperience, or DAX. The promise of the technology was twofold: to restore the intimacy of the doctor-patient interaction, and to save the provider time spent updating the EMR.

DAX involves a smartphone app that sits in the examination room, or anywhere in the vicinity of the provider and patient. With the press of a button, the voice recognition tool is activated. Every word of the visit is then recorded and transcribed. Nuance, the company behind DAX, employs a human proofreader to control the quality of the transcriptions. Over time, the AI software effectively “learns” how to better transcribe for the individual speakers based on the proofreader’s corrections.

The result is a safe, secure, and accurate tool that delivers on its promise to save time and restore intimacy to the exam room. By recording and transcribing the entirety of a patient visit in a way that handwritten notes cannot (either offline or in an EMR), the burden on healthcare providers is reduced. One provider saw a decrease of 31 minutes per day in documentation time. Another provider saw an average reduction of 5 minutes of documentation time per appointment. By giving the patient more leeway to express their full range of medical concerns, both patient and provider potentially incur fewer costs down the road.

Since the initial pilot program, which involved 13 providers, the health system has expanded the use of DAX to 150 providers. Feedback has been overwhelmingly positive, with both patients and providers reporting their interactions seemed less transactional.

In this way, voice-recognition AI software has the potential to be the rare smartphone app that encourages face-to-face interactions. Its early results suggest the technology could be a game-changer for a healthcare industry in desperate need of one, boosting morale in the short-term while potentially saving money down the road.

Photo: berya113, Getty Images

 

Peter Y. Hahn

 

Dr. Peter Y. Hahn is the University of Michigan Health-West President and CEO, and one of six currently serving hospital CEOs with a medical doctorate. He previously spent seven years on faculty at the Mayo Clinic. During his six years as the Director of Pulmonary, Critical Care and Sleep Medicine with Tuality Healthcare, an OHSU Partner, Hahn was named a “Top Doc” by Portland Monthly Magazine in 2012. He attained his Master of Business Administration from the University of Tennessee Haslam College of Business in 2014, and joined University of Michigan Health-West in 2016.


Via Charles Tiayon
Charles Tiayon's curator insight, August 17, 2022 11:46 PM

"Voice-recognition AI software has the potential to be the rare smartphone app that encourages face-to-face interactions. Its early results suggest the technology could be a game-changer for a healthcare industry in desperate need of one, boosting morale in the short-term while potentially saving money down the road."

#metaglossia mundus

Scooped by Dr. Russ Conrath
January 12, 2023 12:35 PM
Scoop.it!

Listen and Learn: The 40 Best Educational Podcasts in 2021

Listen and Learn: The 40 Best Educational Podcasts in 2021 | Useful Tools, Information, & Resources For Wessels Library | Scoop.it
Podcasts let you keep learning when you're driving, walking to class, working out, or practicing your plunger-arrow archery. Here are the 40 best for 2021.
No comment yet.
Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:48 PM
Scoop.it!

Languages | Free Full-Text | The Nature and Function of Languages

Languages | Free Full-Text | The Nature and Function of Languages | Useful Tools, Information, & Resources For Wessels Library | Scoop.it

Several studies in philosophy, linguistics and neuroscience have tried to define the nature and functions of language. Cybernetics and the mathematical theory of communication have clarified the role and functions of signals, symbols and codes involved in the transmission of information. Linguistics has defined the main characteristics of verbal communication by analyzing the main tasks and levels of language. Paleoanthropology has explored the relationship between cognitive development and the origin of language in Homo sapiens. According to Daniel Dor, language represents the most important technological invention of human beings. Seemingly, the main function of language consists of its ability to allow the sharing of the mind’s imaginative products. Following language’s invention, human beings have developed multiple languages and cultures, which, on the one hand, have favored socialization within communities and, on the other hand, have led to an increase in aggression between different human groups.

by Franco Fabbro 1,*, Alice Fabbro 2 and Cristiano Crescentini 1
1 Department of Languages and Literatures, Communication, Education, and Society, University of Udine, 33100 Udine, Italy
2 School of Psychology and Education, Free University of Brussels, 1050 Brussels, Belgium
* Author to whom correspondence should be addressed.
Languages 2022, 7(4), 303; https://doi.org/10.3390/languages7040303
Received: 16 May 2022 / Revised: 25 July 2022 / Accepted: 22 November 2022 / Published: 28 November 2022
(This article belongs to the Special Issue Multilingualism: Consequences for the Brain and Mind)

 

Keywords: communication; symbols; neural recycling; cultural identities

 

1. Introduction

For over two thousand years philosophers, theologians and poets have reflected on the nature of language (Heidegger 1959; Panikkar 2007). More recently, scientific disciplines, from linguistics to computer science, have also sought to clarify its characteristics and functions (Sapir 1921; Jakobson and Waugh 1979; Borden et al. 2006). Nevertheless, in spite of thousands of books and articles, both theoretical and experimental, the nature of language still remains rather enigmatic (Lieberman 2013; Scott-Phillips 2015; Corballis 2017a, 2017b). In the Encyclopedia Britannica, language is defined as a symbolic system (composed of sounds, hand gestures or letters) created by a social group to facilitate human expression. According to this perspective, the functions of language include communication, cultural identity, play and imaginative and emotional expression (Britannica n.d.). The Italian encyclopedia Treccani also specifies that language is an exclusive faculty of human beings allowing for the expression of consciousness’ contents through a conventional symbolic system (Treccani n.d.). These definitions, albeit general, highlight characteristic aspects of language: its communicative function, its symbolic nature and its ability to express and share consciousness’ contents.

2. Language as a Communication System

Communication is a particular form of transport in which what is moved is neither matter nor energy but “information” (Escarpit 1976). Yet, information cannot exist without a material substrate, though it is not reducible to it (Longo 1998). For example, sending a telegram to New York saying “all right” would require the same amount of energy and matter as sending “gar lilth” instead. Both telegrams are composed of two words and the same eight letters arranged in a different order. Transmitting either message would require an equal amount of energy and matter, though only the first would convey understandable information.
Thus, information is to be regarded as something altogether different from its material supports (Wiener 1948). Every communication system, be it verbal expression, telephone lines, radio channels or the internet, consists of at least three elements: an emitter (source or transmitter), a receiver and the channel carrying information from the emitter to the receiver. The information source (emitter) selects the “message” and encodes it in a “code” that is a set of “signs” or “symbols” which are represented by “signals” compatible and specific to the channel in use (Pierce 1961; Singh 1966). For example, the channel of a telegraph can only transmit electric current impulses of two possible durations: a short impulse (dot) and a long impulse (line). Accordingly, all letters of the alphabet have to be codified as a series of symbols made up of sets of lines and dots before being transmitted (Singh 1999). To date, information’s nature remains rather elusive (Dodig-Crnkovic and Burgin 2019). Information plays an essential role in the functioning of living systems (cells, organisms and societies) and in the regulation of some man-made devices, in particular, in self-regulating systems such as thermostats and electronic processors. Norbert Weiner, in one of the earliest definitions of information, describes it as being neither matter nor energy (Wiener 1948, p. 132; Montagnini 2015). According to Gregory Bateson (1972), information has to do with the notion of “difference”: for example, maps are representations of territorial differences (roads, hills, mountains, cities). If all land was alike and presented no discernible features, there would be no information. Giuseppe Longo has thus proposed to define information as a “difference that generates a difference to somebody” (Longo 1998; Longo and Vaccaro 2013). Such definition necessarily implies the existence of an observer (man or machine) able to detect and/or produce differences. 
Transferring information through a channel while limiting the interference of “noise”, which tends to distort and shatter messages, is considered a crucial aspect of communication by mathematicians and engineers alike. In the monograph entitled The Mathematical Theory of Communication (1949), Claude Shannon and Warren Weaver maintain that the fundamental engineering problem of communication is to reproduce in one point (A), in an exact, or more or less approximate way, a message originating in another point (B) (Shannon and Weaver 1949). Such two points can be separated in “space” or “time”. In this sense, communication then not only concerns the spatial transfer of information, but also its storage on physical supports. Shannon was concerned with the technical aspects of communication, and in order to make mathematical observations, he isolated information from its semantic contents (Longo 1998, p. 28; Longo and Vaccaro 2013, pp. 22–23). In such a manner it was possible to relate it to the concepts of uncertainty and entropy. The level of information depends on the number of possible messages: if only one message is possible, there is no uncertainty and therefore no information. Moreover, information is related to probability, therefore, to a certain element of surprise. The smaller a message’s likelihood, the more informative it is. By elaborating on these concepts, Shannon established an equation to quantify the amount of information contained in a message, regardless of its meaning. This quantity was related to the average logarithm of the improbability of the message. It was basically a measure of its unpredictability (Gleick 2011). Establishing that information was not a concept of the physical discipline, and could not be related to either matter or energy, called for the invention of a new unit of measurement in order to define it. 
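Shannon’s measure of unpredictability can be illustrated with a short computation. This sketch is not from the article; the example strings and the function name are invented for illustration, and entropy here is estimated per symbol from the message’s own frequencies:

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Average information per symbol, in bits, estimated from
    the symbol frequencies of the message itself."""
    counts = Counter(message)
    total = len(message)
    # H = sum over symbols of p * log2(1/p): rarer symbols are more surprising.
    return sum((n / total) * math.log2(total / n) for n in counts.values())

print(shannon_entropy_bits("aaaa"))  # 0.0 — only one possible symbol, no uncertainty
print(shannon_entropy_bits("abab"))  # 1.0 — two equiprobable symbols, one bit each
print(round(shannon_entropy_bits("all right"), 3))  # just under 3 bits per character
```

Note how the measure depends only on how improbable each symbol is, never on what the message means, exactly as Shannon intended.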
Shannon had the brilliant intuition to think of information as something that allows a question to be answered with either a “yes” or a “no”. The answer to such a question can take one of two possible values: “1” (yes) or “0” (no). Thus, each question corresponds to a Binary Digit, or bit. For example, it is possible to precisely define a number between one and one hundred by asking a series of questions that can be answered with “yes” or “no”: “Is the number greater than 50?” (Yes). “Is the number lesser than 75?” (No), and so on. According to this perspective, any question with a possible answer can be codified in sequences of “1” and “0” and can thus be measured in bits. Since language is a set of symbols (phonemes or letters of the alphabet), it too can be rendered in a series of bits. For example, as each letter of the English alphabet can be coded in five bits, an average book of about four hundred thousand letters contains approximately two million bits of information. Conceptualizing language as a communication system (Code Model) is a major model of linguistic analysis. However, as language is not only a system for the transmission of information but performs other functions, it can be examined through other analytical frameworks (Scott-Phillips 2015).

3. Signs, Symbols and Codes

Information sources emit message units. These units are exchanged by means of a code (such as the genetic code, the alphabetic code, the Morse code, etc.) between the emitter (encoder) and the receiver (decoder) (de Saussure 1922). The code is a list of units, called symbols, that constitute the message. Symbols are signs in which the relationship between what is being represented and its representation is arbitrary (Mazzone 2005; Deacon 1997). The word sign derives from the Latin “signum” and describes “something referring to something else” (Peirce 1931–1935), as medieval philosophers claimed: aliquid stat pro aliquo (Bettetini 1963; Mazzone 2005).
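The number-guessing game described earlier is, in effect, binary search: ceil(log2 N) yes/no questions always suffice to single out one of N equally likely possibilities. A minimal sketch (the function names are invented for illustration):

```python
import math

def questions_needed(n: int) -> int:
    """Fewest yes/no questions that always identify one of n equally likely options."""
    return math.ceil(math.log2(n))

def guess_number(secret: int, low: int = 1, high: int = 100) -> int:
    """Find `secret` by repeatedly asking "is it greater than the midpoint?".
    Returns how many questions were asked."""
    asked = 0
    while low < high:
        mid = (low + high) // 2
        asked += 1
        if secret > mid:   # answer "yes": keep the upper half
            low = mid + 1
        else:              # answer "no": keep the lower half
            high = mid
    return asked

print(questions_needed(100))  # 7 — seven bits pin down one number in 1..100
print(guess_number(37))       # 7
```

Each question halves the remaining possibilities, which is why the answer to one such question is exactly one bit of information.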
The word symbol derives from the Greek symbolon (σύμβολον) which means: “to cast together”, “to connect”, “make coincide”. The symbol for the Greeks originally designated an object, a tile, a fragment of ceramic or metal that was divided in the stipulation of an economic, emotional or spiritual contract. Each party kept a piece in token of their agreement. Upon meeting, the fragments of the symbolon were brought together to honor the bond and commemorate the economic, emotional or spiritual ties uniting them (Mazzone 2005). Aristotle emphasized above all the conventional and relational aspect of language’s symbols (words) (De Interpretatione, (Aristotle 1962)). Symbols are entities in relation with one another, and for this reason the meaning of a word opposes and differs from the words surrounding it. Each symbol of language has two faces: the signifier and the signified. The mathematical theory of communication has been interested above all in the signifier and its characteristics: encoding and decoding, resistance to noise, speed of transmission. To return to our earlier example, sending a message via telegraph requires that the letters of the alphabet be encoded in the symbols of the Morse code. The individual letters transformed into Morse symbols are then sent through a series of short (dot) and long (line) pulses, minimizing the effects of noise along the telegraph line. Other areas of general communication theory, such as semantics and pragmatics, are more interested in aspects related to meaning and the communicative context (Longo 2001). A fundamental aspect, which is often forgotten, is that at the origin and at the end of most of the experiences of coding or decoding a message, we always find “language”. It is a very particular form of communication that presupposes thinking and speaking individuals, who have tacitly agreed upon an interpretative code among themselves through an action of social coordination (Singh 1966; Escarpit 1976; Mazzone 2005). 
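The telegraph example can be made concrete with a small encoder. The table below covers only the letters needed here (it follows standard International Morse for those letters); the rest of the sketch is invented for illustration:

```python
# Partial International Morse table (only the letters used below).
MORSE = {
    "a": ".-", "g": "--.", "h": "....", "i": "..",
    "l": ".-..", "r": ".-.", "t": "-",
}

def encode(message: str) -> str:
    """Encode letters as dot/dash signal groups; spaces and unknown characters are dropped."""
    return " ".join(MORSE[ch] for ch in message.lower() if ch in MORSE)

print(encode("all right"))  # .- .-.. .-.. .-. .. --. .... -

# Both telegrams from the earlier example use the same eight letters,
# so the channel carries the same number of dots and dashes for each;
# only one of them is a meaningful message.
same_cost = len(encode("all right").replace(" ", "")) == len(encode("gar lilth").replace(" ", ""))
print(same_cost)  # True
```

The arbitrariness of the code is visible in the table itself: nothing about “.-” resembles the letter “a”; the mapping holds only by convention between encoder and decoder.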
In a series of reflections developed in the biophysical context, Howard Pattee analyzed the most significant characteristics present in cultural (such as language) and biological (such as the genetic code) symbolic systems (Pattee 2008, 2015; Pattee and Kull 2009; Pattee and Rączaszek-Leonardi 2012). According to Pattee, “symbols” are “formal entities that stand for something else” (representing something else). The first defining characteristic of symbols is that they are constituted by physical structures. In fact, all codes, rules and even the most abstract descriptions related to the symbolic dimension have well-defined physical bases. For example, DNA is formed by nucleotides, phonemes are made up of sound signals and even the letters of the alphabet consist of signs drawn on paper. The second characteristic concerns the aspect of reproducibility. The symbolic structures can be transmitted through their replication, which is articulated first in a process of “reading” and then in a process of “copying”. The third characteristic refers to arbitrariness. All the symbolic structures, aside from being informative (highly improbable), present a complete arbitrariness between the “signifier” (phonemes, letters, nucleotides) and the “signified”. The arbitrary relationship between symbols and their meaning depends on the history of that particular symbolic system (the history of languages or the history of the genetic code). Indeed, at the lowest level of organization, no symbol carries any meaning (i.e., no phoneme or no nucleotide refers to anything significant). The effects of the symbols are highlighted within a dynamic system capable of generating more or less complex structures. Symbolic systems are historically determined and represent “memories of possibility”; they are coordinated adaptations with biological, psychological or social reality.

4. The Symbolic Nature of Language

Compared to DNA and psyche, language is the symbolic domain that is most “external” (Fabbro 2021a, 2021b). It is a symbolic system constituted by several layers. At the most superficial level, sounds are symbols for phonemes. In turn, sequences of phonemes are symbols for words, while strings of words form symbols for sentences and, finally, ordered chains of sentences constitute symbols for narratives and stories. All languages are made up of symbolic systems nested within one another and sharing some universal properties (Hockett 1960). The first is the duality of structure: all human languages use meaningless signs (phonemes), which combine into ordered sequences of sounds with meaning (words). The second characteristic refers to arbitrariness, which is one of the fundamental concepts of all symbolic systems. In language, this concept indicates that there is no physical similarity between a symbol (word) and the object that it represents. Within a linguistic community, phonemes, words and numerous grammatical aspects are passed on from one generation to the next (transmission) through usage and learning (de Saussure 1922; Thorpe 1972). Like DNA, languages have a linear code. This implies that every verbal expression is made up at the most superficial level of a string of words that, at the deepest level, presents a hidden structure (syntax). Linearity manifests a fundamental aspect of spoken languages, namely the temporal arrangement of acoustic signals along a timeline (de Saussure 1922). An additional property of languages, which is also shared by DNA, is discreteness. The sound symbols of a language (phonemes) are represented by a discrete number of elements (in Italian = 30; in English = 44). The same can be said of words; there are no intermediate sentences between a sentence composed of “n” words and a sentence composed of “n + 1” words (Moro 2006).
Another important feature is recursiveness, which is the ability to infinitely repeat a process within the same structure. Since a sentence can consist of a nominal syntagm (SN) plus a verbal syntagm (VS) [S = NS + SV], and the verbal syntagm can consist of a verb plus a nominal syntagm or a verb plus another sentence [VS = V + NS or VS = V + S], it is possible to recursively expand a sentence by inserting another sentence at the level of the verbal syntagm. For example, “Marco is wearing a sweater” + “The sweater cost 100 euros” = “Marco is wearing a sweater that cost 100 euros”. Recursiveness accounts for another property of languages: openness, that is, the possibility of producing sentences that are always new or have never been uttered before. As a result of recursiveness and openness, every speaker can generate an almost infinite number of sentences. The rules that determine the organization of words within a sentence are called ‘syntax’. Each language has specific syntactic rules. For some, such as Latin, word order has only rhetorical significance. In fact, in Latin there is little difference between the sentences: “hominem videt femina” or “femina videt hominem”, while in English the order of the words is very important: the sentence “the child eats the chicken” means something very different from the sentence “the chicken eats the child” (Sapir 1921). Within human languages, there is an enormous variety of phonemes, words and grammatical rules. Nevertheless, there exist some restraints in their choice and implementation. The repertoire of phonemes that humans can produce is limited. The phonoarticulatory organs and the nerve structures that coordinate them have structural and physiological constraints. Likewise, it is not possible to pronounce words that are too long (e.g., composed of 500 phonemes) because at a certain point the air emitted during exhalation runs out. 
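The recursive rule described above (a verb phrase may embed a whole sentence, VP → V + S) can be sketched as a toy generator. The vocabulary and rule set here are invented for illustration, not drawn from the article:

```python
import random

random.seed(3)  # reproducible demo output

NOUN_PHRASES = ["Marco", "the sweater", "the child", "the chicken"]
TRANSITIVE_VERBS = ["wears", "sees", "likes"]
EMBEDDING_VERBS = ["knows that", "says that", "thinks that"]

def sentence(depth: int) -> str:
    """S -> NP VP. While depth > 0 the verb phrase embeds a whole
    sentence (VP -> V S), reapplying the same rule recursively."""
    np = random.choice(NOUN_PHRASES)
    if depth > 0:
        return f"{np} {random.choice(EMBEDDING_VERBS)} {sentence(depth - 1)}"
    return f"{np} {random.choice(TRANSITIVE_VERBS)} {random.choice(NOUN_PHRASES)}"

print(sentence(0))  # a flat sentence, e.g. "Marco wears the sweater"
print(sentence(2))  # the same rule applied twice, nesting sentence within sentence
```

Because the rule can reapply without limit, the generator also illustrates openness: arbitrarily many sentences, most never uttered before, come from a finite vocabulary and a single recursive rule.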
Based on these premises, many linguists, starting from Noam Chomsky, have argued that there are limits also in the possible syntactic rules. Therefore, the set of syntactic rules of all languages would be delimited by a system of categories, mechanisms and constraints called “universal grammar”. Universal grammar seems to be related to the ways in which the brain develops, organizes and functions, which in turn are related to specific genetic information that has probably evolved over hundreds of thousands of years within a context of a musical nature (glossolalic singing) (Mithen 2005; Patel 2008; Fabbro 2018).

5. The Invention of Language

Humans belong to the class of Mammals and the order of Primates, which emerged about 80 million years ago and comprises about 400 species, including prosimians (tarsiers and lemurs), monkeys and anthropomorphic apes. The Hominid family, which includes our species, separated from the latter about 5–7 million years ago. The most significant characteristic that distinguishes hominids from other primates is the bipedal gait (Manzi 2017). Adapting to bipedal locomotion brought about a series of anatomical modifications, concerning the conformation of the lower limbs and feet, and important physiological transformations at the level of the respiratory system and central nervous system. The bipedal gait modified the respiratory rhythm allowing for an extended expiratory phase, a fundamental requirement to develop the ability to laugh, sing and speak (Provine 2000). Homo sapiens, the only extant species of hominids, appeared in Africa about 300,000 years ago. Fossil remains attributed to H. sapiens have been found in Ethiopia (dated 195,000 years ago) and Morocco (dated about 300,000 years ago). The modern human presents a more gracile and slender structure than the Neanderthal man, with a brain volume of about 1400 cubic centimeters in H. sapiens and about 1600 cc in H. neanderthalensis.
Paleoanthropologists believe that modern humans seem to have come from an eastern or southeastern region of Africa and then spread across the continent. They subsequently migrated, in several waves, to Eurasia and the rest of the world (Tattersall 2012). For a long period of time, more than 200,000 years, H. sapiens did not produce any technological innovations: its lifestyle and style in manufacturing lithic tools was similar to that of other extant hominid species (H. erectus, H. heildebergensis, H. neanderthalensis, Denisova man). The only significant distinction consisted in a different organizational structure of human villages. In fact, modern humans present, in all hunter-gatherer cultures studied, a more numerous social structure than that of the other hominids. About 80–90 thousand years ago, H. sapiens began to manifest great creativity by producing ornaments, decorations and complex tools that involved the development of symbolic thinking. This was unprecedented. Numerous anthropologists, psychologists and linguists have wondered about the causes behind this qualitative leap in culture and technology. Some have concluded that what brought about this cognitive revolution was most probably the appearance of articulated language (Tattersall 2012; Lieberman 2013; Corballis 2015; Chomsky 2016). Not much seems to have changed at the genetic, anatomical and physiological levels in H. sapiens since its appearance 80,000 years ago. For this reason, some neuroscientists and paleoanthropologists, including Israel Rosenfield and Ian Tattersall, have argued that a group of H. sapiens located in a southeastern region of Africa probably “invented” articulate language (Rosenfield 1992; Tattersall 2012; Fabbro 2018). Evidently this was not a “conscious invention”, but rather a kind of social game probably developed by a sufficiently large group of children who lived together for a few generations. 
The hypothesis that articulate language was invented and did not evolve, as many biologists, linguists and psychologists have long argued (Pinker 1994; Dunbar 2014; Corballis 2002, 2017a), is supported by the recent discovery that at least two languages emerged from nowhere in a group of children in Nicaragua and in a community of Bedouins in the Negev desert (Senghas et al. 2004; Senghas 2005). Nicaraguan sign language is the first language invented by a group of children to have been thoroughly studied (Tattersall 2012; Bausani 1974; Fabbro 1996). Towards the end of the 1970s, after the victory of the Sandinista revolution, a special school dedicated to the education of deaf and mute children was established in Managua, the capital of Nicaragua. Initially, 50 children were brought together, joined by more than 200 in 1981, a number that gradually grew in the years that followed. The school aimed to teach deaf–mute children to lip-read the Spanish language, a goal that was not achieved. Instead, just within the span of two generations, the children spontaneously invented Nicaraguan Sign Language. The first generation shared a set of signs that they had developed in the domestic context of their families (homesigns). These signs did not represent an actual sign language yet, but rather a form of gestural communication. In contrast, the second generation of deaf children, younger in age, was able to develop a grammatically complete language on the basis of the signs shared by children of the previous generation. The younger children were able to categorize the gestures of the older students, generating a true grammar, i.e., a set of abstract categories capable of regulating the relationships between different symbolic units. Only younger children were able to transform “gestures” into “symbols”. This confirms the hypothesis that a language can only be “acquired” in a complete form or “invented” by children who have not yet reached puberty (Senghas et al. 
2004; Fabbro 2004). The invention of articulate verbal communication by deaf children indicates that language was not evolved but invented (Rosenfield 1992). This means that some of the biological bases of language (concerning phonology and syntax) evolved within vocal behaviors much more archaic than language itself, such as glossolalic singing. Spoken language is the most important technology serving the transmission of mental contents. Other systems used for the transmission of mental contents are written language and mathematics. All of these are technical and cognitive skills that, on the basis of pre-adaptation (exaptation) phenomena, have colonized brain territories that originally evolved for other functions (Dehaene and Cohen 2007; Dehaene 1997, 2009; Fabbro 2018, 2021a).

6. What Is Language’s Purpose?

According to many authors, language is a form of communication (Miller 1975, 1987). However, it is possible to communicate effectively even without language: many animal species communicate very well without speech. Recently, Daniel Dor argued that language is a technology aimed at sharing imagination (Dor 2014, 2015, 2016). According to this perspective, the task of the speaker is to provide clues about their own mental representations, while the addressee tries to reconstruct those representations through a chain of interpretative processes (Scott-Phillips 2015). In fact, for every “literal meaning” of a word or a sentence, there are infinite possible modulations of meaning (also related to pragmatic aspects). This determines one of the most typical characteristics of language, namely the “pervasiveness of indeterminacy” (Scott-Phillips 2015). This is a limitation, given that verbal expression does not allow a direct (literal) grasp of reality; at the same time, it opens a varied range of interpretative possibilities.
Like all technologies, language is a system for achieving a purpose, namely the construction of a network of psychic individualities that exchange the contents of their imaginations. It is an unconventional type of technology, similar to money, contracts and legal systems. Since human beings are social organisms, the invention and development of language were collective processes. Individual minds can be viewed as the “nodes” of a metaphorical Web, in which language constitutes the software that each human being downloads into their own mind and uses to take part in an imaginative community (Barabási 2002; Dor 2015). Wilhelm von Humboldt (1836) was one of the first linguists to emphasize the central role that imagination plays within language. In his opinion, human languages are not tools for naming objects that have already been thought, but rather organs for the formulation of thought. Similarly, Daniel Dor considers language the “mother of all inventions” (Dor 2015). According to this perspective, it was the invention of language that really gave rise to human beings. Thus, the first and most important technological product of human beings is language; at the same time, language has changed us radically. The unintentional invention of language has occurred repeatedly throughout human history (Fabbro 2018). The deaf children in Nicaragua who spontaneously invented a new sign language were able to experience the crossing of the “magical frontier of language” and compare it to their previous lives. They described the experience as inconceivable, disconcerting and astounding; what struck them most was the realization of the abysmal loneliness of their lives prior to entering the sphere of language (Schaller 1991; Senghas et al. 2004). The relationships between language and technology are much closer than is commonly assumed. One aspect shared by both language and technology is recursiveness.
In fact, all technological tools are constituted by components assembled according to a hierarchical structure. Technologies are composed of components that contain smaller components within them, made up of even more elementary parts (such as screws, vanes, bolts, etc.). Thus, modern technology is similar to a language that is open to the creation of new structures and functions (Arthur 2009). For these reasons, we believe it is no coincidence that the cultural and technological explosion that followed the cognitive revolution, some 80,000 years ago, is related to the most important invention made by humankind, namely the invention of language. Finally, some interesting evolutionary psychology studies, developed by Robin Dunbar (1996, 2014), have analyzed the possible role that vocal communication and language play in strengthening social bonds and reducing stress.

7. Languages as the Bedrock of Cultural Identities

It is likely that the first language split into several others within a few hundred years of its invention. In fact, a language can arise only from other human languages. The emergence of dialects is favored by a combination of geographic isolation and the linguistic variation (at the phonological, syntactical and lexical levels) that naturally occurs across generations (von Humboldt 1836; Locke and Bogin 2006). Languages facilitate relationships among the members of a group, yet they also contribute to the segregation of communities (Fabbro 1999; Pagel and Mace 2004; Pagel 2009). There is evidence suggesting that languages may act as biological barriers genetically isolating populations; indeed, the tendency to prefer partners speaking the same language still prevails among human beings (Spielman et al. 1974; Cavalli-Sforza 1996; Fabbro 1996, 1999; Lieberman 2013).
The acquisition of language, in both spoken and written form, sculpts the brain in specific ways and is constrained by critical (or sensitive) periods (Fabbro 2004). Generally, it is possible to learn a second language well only before puberty, and preferably before the age of seven. Once the brain structures of the implicit memory system (particularly procedural memory) involved in the acquisition of language and syntax have matured, it is generally not possible to acquire a second language to the level of the first (Paradis 1994, 2009; Cargnelutti et al. 2019). These observations suggest that the true “territory” of a language is not geographical, but rather neurological and mental (Fabbro 2018), and that perfect language acquisition is possible only in early childhood. The use of vocal signals learned within specific critical periods is a rather widespread biological phenomenon: it is present in many species of passerines (canaries, finches, etc.) and in some mammals, such as dolphins and killer whales (Riesch et al. 2012). In killer whales, the development of pod-specific cultural habits (related to singing and feeding) has mediated their divergence into numerous subspecies (Riesch 2016). Cultural and genetic isolation is one of the mechanisms that produce biological diversity, and diversity is the ground on which life originates and develops. The advent of language determined the emergence of human cultures, within which human beings have developed more or less different narratives, customs and traditions. The different tribes and populations of H. sapiens were technologically and culturally rather homogeneous before the invention of language; its invention generated considerable diversity in ornaments, decorations and tattoos among different language groups. However, it is likely that linguistic and cultural diversity in hunter-gatherer societies also fostered an increase in violence (Fabbro et al. 2022).
Violence is a behavior of destructive and systematic aggression, aimed primarily at the elimination of isolated individuals or groups of the same species. In the second half of the last century, ethologists documented destructive behaviors between different groups of chimpanzees (Wrangham and Peterson 1996; Wrangham et al. 2006; Kelly 2005). These behaviors did not seem to be driven by food scarcity, but rather by competition for larger territories and greater food and sexual resources. In humans, language differences appear to have fueled the tendency toward inter-group violence: in almost all past human cultures, those who spoke foreign languages were considered “subhuman beings” liable to be subjugated or killed. In fact, one of the most characteristic abilities of human beings, even of those who have never studied phonetics or phonology, is the ability to recognize from pronunciation whether an individual belongs to their linguistic community. This ability, which seems to develop further during adolescence, is independent of the communication of information. According to this perspective, language plays a significant role as a marker of group identity (Locke and Bogin 2006; Ritt 2017). These aspects of human socialization and communication indicate that language, like many other aspects of cognition, has both strengths and limitations, which must be properly understood and regulated (Fabbro 2021a, 2021b; Fabbro et al. 2022).

Author Contributions
Conceptualization, F.F. and C.C.; writing—original draft preparation, F.F.; writing—review and editing, F.F., A.F. and C.C.; supervision, F.F. All authors have read and agreed to the published version of the manuscript.

Funding
This research received no external funding.

Conflicts of Interest
The authors declare no conflict of interest.

References
Aristotle. 1962. On Interpretation. Translated by Harold P. Cook. Cambridge: Harvard University Press.
Arthur, W. Brian. 2009. The Nature of Technology: What It Is and How It Evolves. New York: Free Press.
Barabási, Albert-László. 2002. Linked: The New Science of Networks. New York: Perseus Books.
Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. Northvale: Jason Aronson Inc.
Bausani, Alessandro. 1974. Le lingue inventate. Linguaggi artificiali, linguaggi segreti, linguaggi universali [Invented Languages: Artificial Languages, Secret Languages, Universal Languages]. Roma: Astrolabio.
Bettetini, Gianfranco. 1963. Il segno. Dalla magia al cinema [The Sign: From Magic to Cinema]. Milano: Edizione i 7.
Borden, Gloria J., Katherine S. Harris, and Lawrence Raphael. 2006. Speech Science Primer: Physiology, Acoustics and Perception of Speech. Baltimore: Williams & Wilkins.
Britannica. n.d. Available online: https://www.britannica.com/topic/language (accessed on 7 April 2022).
Cargnelutti, Elisa, Barbara Tomasino, and Franco Fabbro. 2019. Language brain representation in bilinguals with different age of appropriation and proficiency of the second language: A meta-analysis of functional imaging studies. Frontiers in Human Neuroscience 13: 154.
Cavalli-Sforza, Luigi Luca. 1996. Geni, popoli e lingue [Genes, Peoples and Languages]. Milano: Adelphi.
Chomsky, Noam. 2016. Il mistero del linguaggio. Nuove prospettive [The Mystery of Language: New Perspectives]. Translated by Maria Greco. Milano: Feltrinelli.
Corballis, Michael. 2002. From Hand to Mouth: The Origins of Language. Princeton: Princeton University Press.
Corballis, Michael. 2015. The Wandering Mind: What the Brain Does When You’re Not Looking. Chicago: University of Chicago Press.
Corballis, Michael. 2017a. The Truth about Language: What It Is and Where It Came From. Chicago: University of Chicago Press.
Corballis, Michael. 2017b. Language evolution: A changing perspective. Trends in Cognitive Sciences 21: 229–36.
de Saussure, Ferdinand. 1922. Course in General Linguistics. Translated by Wade Baskin. New York: Columbia University Press.
Deacon, Terrence. 1997. The Symbolic Species: The Co-evolution of Language and the Brain. New York: W.W. Norton & Company.
Dehaene, Stanislas. 1997. The Number Sense. New York: Oxford University Press.
Dehaene, Stanislas. 2009. Reading in the Brain. New York: Penguin.
Dehaene, Stanislas, and Laurent Cohen. 2007. Cultural recycling of cortical maps. Neuron 56: 384–98.
Dodig-Crnkovic, Gordana, and Mark Burgin. 2019. Philosophy and Methodology of Information. New Jersey: World Scientific.
Dor, Daniel. 2014. The instruction of imagination: Language and its evolution as a communication technology. In The Social Origins of Language. Edited by Daniel Dor, Chris Knight and Jerome Lewis. Oxford: Oxford University Press, pp. 105–25.
Dor, Daniel. 2015. The Instruction of Imagination: Language as a Social Communication Technology. Oxford: Oxford University Press.
Dor, Daniel. 2016. From experience to imagination: Language and its evolution as a social communication technology. Journal of Neurolinguistics 43: 107–19.
Dunbar, Robin Ian MacDonald. 1996. Grooming, Gossip and the Evolution of Language. Cambridge: Harvard University Press.
Dunbar, Robin Ian MacDonald. 2014. Human Evolution. London: Pelican Books.
Escarpit, Robert. 1976. Teoria dell’informazione e della comunicazione [Theory of Information and Communication]. Translated by Maria Grazia Rombi. Roma: Editori Riuniti.
Fabbro, Franco. 1996. Il cervello bilingue. Neurolinguistica e poliglossia [The Bilingual Brain: Neurolinguistics and Polyglossia]. Roma: Astrolabio.
Fabbro, Franco. 1999. The Neurolinguistics of Bilingualism: An Introduction. Hove: Psychology Press.
Fabbro, Franco. 2004. Neuropedagogia delle lingue. Come insegnare le lingue ai bambini [Neuropedagogy of Languages: How to Teach Languages to Children]. Roma: Astrolabio.
Fabbro, Franco. 2018. Identità culturale e violenza. Neuropsicologia delle lingue e delle religioni [Cultural Identity and Violence: Neuropsychology of Languages and Religions]. Torino: Bollati Boringhieri.
Fabbro, Franco. 2021a. Che cos’è la psiche. Filosofia e neuroscienze [What Is the Psyche: Philosophy and Neuroscience]. Roma: Astrolabio.
Fabbro, Franco. 2021b. I fondamenti biologici della filosofia [The Biological Foundations of Philosophy]. Milano and Udine: Mimesis Edizioni.
Fabbro, Franco, Alice Fabbro, and Cristiano Crescentini. 2022. Neurocultural identities and the problem of human violence. In Evil in the Modern World. Edited by Laura Dryjanska and Giorgio Pacifici. Berlin/Heidelberg: Springer, pp. 131–36.
Gleick, James. 2011. The Information: A History, a Theory, a Flood. New York: Pantheon Books.
Heidegger, Martin. 1959. On the Way to Language. Translated by Peter Hertz. San Francisco: Harper.
Hockett, Charles. 1960. The origin of speech. Scientific American 203: 88–111.
Jakobson, Roman, and Linda R. Waugh. 1979. The Sound Shape of Language. Bloomington: Indiana University Press.
Kelly, Raymond C. 2005. The evolution of lethal intergroup violence. Proceedings of the National Academy of Sciences 102: 15294–298.
Lieberman, Philip. 2013. The Unpredictable Species: What Makes Humans Unique. Princeton: Princeton University Press.
Locke, John L., and Barry Bogin. 2006. Language and life history: A new perspective on the development and evolution of human language. Behavioral and Brain Sciences 29: 259–80.
Longo, Giuseppe O. 1998. Il nuovo Golem. Come il computer cambia la nostra cultura [The New Golem: How the Computer Changes Our Culture]. Roma and Bari: Laterza.
Longo, Giuseppe O. 2001. Homo technologicus. Roma: Meltemi.
Longo, Giuseppe O., and Andrea Vaccaro. 2013. Bit Bang. La nascita della filosofia digitale [Bit Bang: The Birth of Digital Philosophy]. Adria: Apogeo.
Manzi, Giorgio. 2017. Ultime notizie sull’evoluzione umana [Latest News on Human Evolution]. Bologna: Il Mulino.
Mazzone, Marco. 2005. Menti simboliche. Introduzione agli studi sul linguaggio [Symbolic Minds: Introduction to Language Studies]. Roma: Carocci.
Miller, George A. 1975. The Psychology of Communication. New York: Basic Books.
Miller, George A. 1987. Language and Speech. New York: W.H. Freeman & Company.
Mithen, Steven. 2005. The Singing Neanderthals: The Origins of Music, Language, Mind and Body. Cambridge: Harvard University Press.
Montagnini, Leone. 2015. Information versus matter and energy. La concezione dell’informazione in Wiener e le sue conseguenze sull’oggi [Information versus Matter and Energy: Wiener’s Conception of Information and Its Consequences for Today]. Biblioteche oggi 33: 41–61.
Moro, Andrea. 2006. I confini di Babele. Il cervello e il mistero delle lingue impossibili [The Boundaries of Babel: The Brain and the Mystery of Impossible Languages]. Milano: Longanesi.
Pagel, Mark. 2009. Human language as a culturally transmitted replicator. Nature Reviews Genetics 10: 405–15.
Pagel, Mark, and Ruth Mace. 2004. The cultural wealth of nations. Nature 428: 275–78.
Panikkar, Raimon. 2007. Lo spirito della parola [The Spirit of the Word]. Torino: Bollati Boringhieri.
Paradis, Michel. 1994. Neurolinguistic aspects of implicit and explicit memory: Implications for bilingualism. In Implicit and Explicit Learning of Second Languages. Edited by Nick Ellis. London: Academic Press, pp. 393–419.
Paradis, Michel. 2009. Declarative and Procedural Determinants of Second Languages. Amsterdam: John Benjamins.
Patel, Aniruddh. 2008. Music, Language, and the Brain. Oxford: Oxford University Press.
Pattee, Howard. 2008. Physical and functional conditions for symbols, codes, and languages. Biosemiotics 1: 147–68.
Pattee, Howard. 2015. Cell phenomenology: The first phenomenon. Progress in Biophysics and Molecular Biology 119: 461–68.
Pattee, Howard Hunt, and Kalevi Kull. 2009. A biosemiotic conversation: Between physics and semiotics. Sign Systems Studies 37: 311–31.
Pattee, Howard Hunt, and Joanna Rączaszek-Leonardi. 2012. Laws, Language and Life: Howard Pattee’s Classic Papers on the Physics of Symbols with Contemporary Commentary. New York: Springer.
Peirce, Charles Sanders. 1931–1935. Collected Papers of Charles Sanders Peirce. Edited by Charles Hartshorne and Paul Weiss. Cambridge: Harvard University Press, vols. I–IV.
Pierce, John R. 1961. An Introduction to Information Theory: Symbols, Signals and Noise. New York: Dover Publications.
Pinker, Steven. 1994. The Language Instinct: How the Mind Creates Language. New York: Harper.
Provine, Robert. 2000. Laughter: A Scientific Investigation. New York: Viking.
Riesch, Rüdiger. 2016. Killer whales are speciating right in front of us. Scientific American 315: 54–56.
Riesch, Rüdiger, Lance G. Barrett-Lennard, Graeme M. Ellis, John K. B. Ford, and Volker B. Deecke. 2012. Cultural traditions and the evolution of reproductive isolation: Ecological speciation in killer whales? Biological Journal of the Linnean Society 106: 1–17.
Ritt, Nikolaus. 2017. Linguistic pragmatics from an evolutionary perspective. In The Routledge Handbook of Pragmatics. Edited by Anne Barron, Yueguo Gu and Gerard Steen. London: Routledge, pp. 490–502.
Rosenfield, Israel. 1992. The Strange, Familiar, and Forgotten: An Anatomy of Consciousness. New York: Vintage.
Sapir, Edward. 1921. Language: An Introduction to the Study of Speech. New York: Harcourt, Brace & Company.
Schaller, Susan. 1991. A Man without Words. Berkeley: University of California Press.
Scott-Phillips, Thom. 2015. Speaking Our Minds: Why Human Communication Is Different, and How Language Evolved to Make It Special. New York: Palgrave Macmillan.
Senghas, Ann. 2005. Language emergence: Clues from a new Bedouin Sign Language. Current Biology 15: R463–R465.
Senghas, Ann, Sotaro Kita, and Asli Ozyurek. 2004. Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science 305: 1779–82.
Shannon, Claude Elwood, and Warren Weaver. 1949. The Mathematical Theory of Communication. Urbana: The University of Illinois Press.
Singh, Jagjit. 1966. Great Ideas in Information Theory, Language and Cybernetics. New York: Dover Publications.
Singh, Simon. 1999. The Code Book: The Secret History of Codes and Codebreaking. New York: Anchor.
Spielman, Richard S., Ernest C. Migliazza, and James V. Neel. 1974. Regional linguistic and genetic differences among Yanomama Indians. Science 184: 637–44.
Tattersall, Ian. 2012. Masters of the Planet: The Search for Our Human Origins. New York: St. Martin’s Griffin.
Thorpe, William. 1972. The comparison of vocal communication in animals and man. In Non-Verbal Communication. Edited by Robert A. Hinde. Cambridge: Cambridge University Press, pp. 27–48.
Treccani. n.d. Available online: https://www.treccani.it/vocabolario/linguaggio/ (accessed on 7 April 2022).
von Humboldt, Wilhelm. 1836. On Language: On the Diversity of Human Language Construction and Its Influence on the Mental Development of the Human Species. Translated by Peter Heath. Cambridge: Cambridge University Press.
Wiener, Norbert. 1948. Cybernetics, or Control and Communication in the Animal and the Machine. New York: Wiley.
Wrangham, Richard W., and Dale Peterson. 1996. Demonic Males: Apes and the Origins of Human Violence. Boston: Mariner Books.
Wrangham, Richard W., Michael L. Wilson, and Martin N. Muller. 2006. Comparative rates of violence in chimpanzees and humans. Primates 47: 14–26.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Via Charles Tiayon
Charles Tiayon's curator insight, November 28, 2022 8:36 PM

"Several studies in philosophy, linguistics and neuroscience have tried to define the nature and functions of language. Cybernetics and the mathematical theory of communication have clarified the role and functions of signals, symbols and codes involved in the transmission of information. Linguistics has defined the main characteristics of verbal communication by analyzing the main tasks and levels of language. Paleoanthropology has explored the relationship between cognitive development and the origin of language in Homo sapiens. According to Daniel Dor, language represents the most important technological invention of human beings. Seemingly, the main function of language consists of its ability to allow the sharing of the mind’s imaginative products. Following language’s invention, human beings have developed multiple languages and cultures, which, on the one hand, have favored socialization within communities and, on the other hand, have led to an increase in aggression between different human groups."

#metaglossia mundus

Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:46 PM

German Bible Translator Introduces Readers to ‘God’s New Reality’ | Christianity Today

Theologian Roland Werner’s modern version Das Buch, now in its third edition, resonates with the unchurched and surprises the faithful.
INTERVIEW BY JAMES THOMPSON|NOVEMBER 29, 2022
 

Roland Werner wears many hats, and most of them have something to do with the Bible.

Whether he’s preaching at the interdenominational congregation that he founded four decades ago in Marburg, writing devotionals and books about church history, lecturing on intercultural theology, or chairing a meeting of the German branch of the Lausanne Movement, the theologian and linguist’s life revolves around God’s Word.

He might be best known among Germany’s evangelicals for Das Buch (“The Book”), his popular Bible translation in modern German. The New Testament was first released in 2009, and a new version including the Psalms was published in 2014. Earlier this year came the third edition, this time with the addition of Proverbs.

Werner, age 65, discovered an affinity for languages at an early age. As an adolescent, he was already studying Latin, Greek, and Hebrew. Arabic and several African languages followed later. A year as an exchange student in the United States helped perfect his English. His familiarity with these and other languages combined with his love of Scripture made the role of Bible translator a natural fit. He is currently working with a team to translate the Bible into a North African language.

This new version of Das Buch comes almost exactly 500 years after Martin Luther published his first Bible translation, known as the Septembertestament. While there was much fanfare a few years ago to mark the 500th anniversary of the Protestant Reformation, Werner laments that this milestone has gone largely unnoticed.

“You heard almost nothing about the [Septembertestament anniversary], neither in the churches nor in the news,” he said.

The Christus-Treff congregation founder hopes that his translation gives readers a fresh chance to engage with the Bible, even when more traditional translations are sometimes overlooked. He spoke with CT about the latest Das Buch edition, his other translation projects, and how rendering a verse in a new way can help readers understand the Bible more deeply.

This interview has been edited for length and clarity.

Before we talk about translating Scripture, I’d like to ask you about reading Scripture. What was the first version of the Bible that you really engaged with?

When I was in first grade, my mother would have me read to her from a German children’s Bible while she ironed clothes. Later, there was another Bible for older children that I also read. When I was 13, I tried to read the whole Luther translation, but I gave up at some point.

The first Bible that I read all the way through was called The Way: The Living Bible. I spent a year in Seattle when I was 16 as an exchange student, and during that time I read both The Way and the King James Version. So, before I read the entire Bible in German, I had read both a modern translation and the Authorized Version in English.

Speaking of English translations, I understand that Eugene Peterson’s The Message helped inspire you to start working on Das Buch.

Indirectly, yes. I had heard about The Message and had received a copy at some point, although I must admit that I didn’t read the whole thing. In 2007, a friend from Australia came to visit. During our time together, he brought up The Message and asked if it could be translated into German. I told him that it wasn’t possible. It’s a good translation, but Peterson is so idiomatic and steeped in American culture that a direct translation into German just wouldn’t work. I explained that someone would have to do something similar, just in German. Then he said, “Well, why don’t you do that?” I said, “Okay, why not?” and started that very night.

A few days later was the Frankfurt book fair. By then, I had a preliminary translation of the first four chapters of Matthew. I showed it to a publisher friend of mine who was at that time leading the Stiftung Christliche Medien [a German Christian media foundation]. He and some of his colleagues looked at it and decided that it was different enough from other modern German translations to have its own flavor and sound. So he said, “Yeah, let’s do it.”

Das Buch is, like The Message, a dynamic-equivalence translation, right?

Yes, but my translation is actually more literal than Peterson’s. Much more literal. I didn’t feel free to go too far away from the text. People tell me that Das Buch is very readable and that unchurched people can understand it easily. I tried to replace or at least alternate some of the heavily religious terminology that may be prone to misunderstanding with a dynamic equivalent. But there are some parts where I was even more literal than Martin Luther. So it’s sort of in between [dynamic-equivalence and a more literal translation].

Once you started working, how quickly did you make progress? What were the biggest challenges?

Well, we had a Christian youth festival in Bremen where I was the chairman, and we wanted to give the Gospel of John to every participant. Somehow the board agreed to use my version of John, which wasn’t ready yet, so I was under a little bit of pressure. I basically prepublished John for that festival in 2008. I did the rest of the New Testament in about a year. Whenever I had some time—for example, while traveling or even if I was sitting with my wife watching television—I would work on it.

I translated directly from the Greek. I’m very old fashioned, so I didn’t use any of the fancy Bible translation gear that is around today. I just put the Greek text into a Word document and worked from that. During that time, I did not read any German versions. That way I wouldn’t pre-impregnate my mind with a possible German rendering. Instead, I would occasionally look at translations in cognate languages. Versions in Dutch, Norwegian, English, and even non-Germanic languages like French, Spanish, or Italian would often give me ideas for a new way to render a verse in German. I wanted to make sure that it would have its own unique sound.

Why was it important to you to present biblical concepts in new, sometimes surprising, ways? For example, in some verses “kingdom of God” (Gottes Reich) is instead rendered “God’s new reality” (neue Wirklichkeit Gottes).

The word surprising is actually the answer. I wanted to surprise people and make them think. Maybe I’ve gone too far here or there; I don’t know. In fact, I’ve backtracked in new editions on some of these expressions. [However,] I’m aware that my Bible translation is not the only one in German. Anyone who is really interested in studying in depth will probably have another version at their disposal so that they can compare. My goal is for a new phrasing to have a surprising effect that helps people better understand the exciting content of this life-changing book.

When you look at the Greek word basileia, which is usually translated as “kingdom” in English or “Reich” in German, it’s actually a more dynamic concept than either of those words convey. When you hear “Gottes Reich,” it sounds like a country. But that’s not what is meant. It’s the expanding reality of God’s authority over this world and over our lives. That’s what I’m trying to communicate.

This latest edition includes Proverbs, in addition to the New Testament and the Psalms. You’ve said that Proverbs was especially tricky to translate into German. Why is that?

I found translating the Psalms challenging, but Proverbs even more so. Proverbs employs a condensed and finely honed poetic language, and Hebrew itself is a very [concise] language. It’s tricky to translate in a way that is both clear in today’s context and true to the poetic beauty of the original.

Another challenge is that the concepts in Proverbs come from a rural environment in ancient Israel. I had to decide whether I would take them as they are or transfer the underlying image into something that is more recognizable today. Ultimately, I felt that changing the illustrations would stray too far from the original text. Even so, you sometimes have to add a little additional information or at least make it into a full German sentence for it to make sense. [Translating directly word for word] doesn’t work. I tried to be concise, poetic, and to follow the flow of the Hebrew language while still making it understandable. That was a big challenge.

Das Buch has readers in the Landeskirchen (regional mainline churches supported by church taxes) as well as in the Freikirchen (independent churches supported by donations). These two groups of German Christians can have very different cultures. Why do you think your translation bridges that gap?

I’m a member of the Landeskirche. There is a strong evangelical wing within that church, and those would be the Bible-reading people. People know me in that part of the body of Christ because that’s where I belong. In the free churches, they mostly know me because I was involved in some nationwide [evangelism] functions over several decades. Those who would consider themselves broadly evangelical, meaning Bible-interested, Bible-reading Christians, might be interested in my translation just to see how it can inspire them in their personal Bible reading.

You used the word evangelical, which in German would be evangelikal. American Christians sometimes get confused about the difference between that word and the similar term evangelisch. What’s the difference?

Evangelisch actually just means “Protestant,” while evangelikal has more or less the same meaning that evangelical has in the United States or Great Britain. That term only came to Germany in the 1960s. People are still debating whether that is a helpful term, especially because of its connection to a certain kind of evangelicalism that part of the church in America seems to adhere to that is foreign to us. It conjures up images of a political stance, which is not what the word evangelical was originally supposed to mean.

German Christians used the 500th anniversary of the Protestant Reformation in 2017 as an opportunity to promote Bible reading and engagement. Five years later, how do you evaluate those efforts?

There were many encouraging examples of people becoming more interested in the Bible. As a whole, however, I would almost say that the Landeskirche in Germany missed a chance. There was a narrative saying that the main point of the Reformation was the discovery of individual freedom. And, of course, that is true; Luther said that the individual stands with his or her conscience before God. But where do they stand? On the authority of the Bible. That’s what Luther meant. He didn’t just mean abstract freedom in an Enlightenment sense, but that’s what it was made out to be in a lot of the official presentations.

Language study and translation work has taken you to Africa many times over the past several decades. What can Christians in the West learn from their fellow believers in Africa and other Majority World contexts about engaging with the Bible?

Our post-Enlightenment worldview in the West tends to cut out the miraculous. In Africa and other non-Western contexts, the reality of the spirit world is much more of a given, and it’s much closer to everyday life. In some missiological thinking, one speaks of “the [excluded] middle.” The Western mind acknowledges the natural realm that can be explained by science, and then there may or may not be some sort of abstract higher being. In between there is nothing. For someone from the Majority World, the reality of dreams, visions, spirit beings, curses, possessions, and so forth is so much more real and taken for granted. Because the Bible comes from a situation where there was a very similar worldview, it speaks so much more directly [to people outside the West].

In 1998, you wrote an essay for Christianity Today about the spiritual climate in post–Cold War Europe. You expressed a hope that despite the challenges that churches and ministries were facing, “the fruit they are producing is real and will last.” Do you still have the same perspective over two decades later?

I think I would still adhere to that. I’ve just come from a meeting in Bavaria that was run by a coalition of evangelists from the United Kingdom. They invited young people from all over Europe who are interested in evangelism. There were people from Iceland, Albania, Georgia, Spain, Italy … I was very encouraged. Yes, we’re not so strong, but we’re there.

Additionally, the new reality is the many migrants that live in Europe. There is a strong spiritual movement among them. For example, at a Berlin Landeskirche on any given Sunday morning, you might have 10 or 20 mostly elderly Germans sitting in the church service at 10 o’clock, and then the same church building will be packed with Africans for a service in the afternoon.

James Thompson is an international campus minister and writer from the state of Georgia.


Via Charles Tiayon
Charles Tiayon's curator insight, November 30, 2022 11:17 PM

#metaglossia mundus

Rescooped by Dr. Russ Conrath from Metaglossia: The Translation World
December 6, 2022 12:41 PM
Scoop.it!

How many scholarly papers are on the Web? At least 114 million, professor finds | Penn State University

Stephanie Koons
October 9, 2014
UNIVERSITY PARK, Pa. -- Lee Giles, a professor at Penn State’s College of Information Sciences and Technology (IST), has devoted a large portion of his career to developing search engines and digital libraries that make it easier for researchers to access scholarly articles. While numerous databases and search engines track scholarly documents and thus facilitate research, many researchers and academics are concerned about the extent to which academic and scientific documents are available on the Web as well as their ability to access them. As part of an effort to make the process of accessing documents more efficient, Giles recently conducted a study of two major academic search engines to estimate the number of scholarly documents available on the Web.

“How many scholarly papers are out there?” said Giles, who is also a professor of computer science and engineering (CSE), a professor of supply chain and information systems, and director of the Intelligent Systems Research Laboratory. “How many are freely available?”

Giles and his advisee, Madian Khabsa, a doctoral candidate in CSE, presented their findings in “The Number of Scholarly Documents on the Public Web,” which was published in the May 2014 edition of PLOS ONE, a peer-reviewed scientific journal published by the Public Library of Science. The paper was also mentioned twice in Nature, a prominent interdisciplinary scientific journal, as well as various blogs and websites.

In their paper, Giles and Khabsa report that they estimated the number of scholarly documents available on the Web by studying the overlap in coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. By scholarly documents, they refer to journal and conference papers, dissertations and master’s degree theses, books, technical reports and working papers. Google Scholar is a freely accessible Web search engine that indexes the full text of scholarly literature across an array of publishing formats and disciplines. Microsoft Academic Search is a free public search engine for academic papers and literature, developed by Microsoft Research for the purpose of algorithms research in object-level vertical search, data mining, entity linking and data visualization. Using statistical methods, Giles and Khabsa estimated that at least 114 million English-language scholarly documents are accessible on the Web, of which Google Scholar has nearly 100 million. They estimate that at least 27 million (24 percent) are freely available since they do not require a subscription or payment of any kind. The estimates are limited to English documents only.

Giles’ and Khabsa’s study, Giles said, is the “first to use statistical, rigorous techniques in doing these estimations.” The researchers conducted their study using capture-recapture methods, which were pioneered in ecology and derive their name from censuses of wildlife in which several animals are captured, marked, released and subject to recapture. The technique examines the degree of overlap between two or more methods of ascertainment and uses a simple formula to estimate the total size of the population. Since their study was not longitudinal, Giles said, he and Khabsa plan to do another capture in the future to verify their results.

Giles’ interest in determining the number of scholarly documents on the Web was inspired by more than just curiosity: as a developer of several novel search engines and digital libraries, he sees practical implications in the research. CiteSeer, a public search engine and digital library for scientific and academic papers, primarily in the fields of computer and information science, was created by Giles, Kurt Bollacker and Steve Lawrence in 1997 while they were at the NEC Research Institute (now NEC Labs) in Princeton, New Jersey. CiteSeer's goal was to actively crawl and harvest academic and scientific documents on the Web and use autonomous citation indexing to permit querying by citation or by document, ranking results by citation impact. Often regarded as the first automated citation indexing system, CiteSeer was a predecessor of academic search tools such as Google Scholar and Microsoft Academic Search. Released in 2008, CiteSeerX was loosely based on the earlier CiteSeer search engine and digital library and is built on a new open-source infrastructure, SeerSuite, with new algorithms and implementations. While CiteSeerX has retained CiteSeer’s focus on computer and information science, it has recently been expanding into other scholarly domains such as economics, medicine and physics. One motivation for determining the number of scholarly documents on the Web, Giles said, is to increase the number of papers in CiteSeerX.

A significant finding of their study, Giles and Khabsa wrote in their paper, is that almost one in four Web-accessible scholarly documents is freely and publicly available. The researchers used Google Scholar to estimate this percentage because Scholar provides a direct link to the publicly available document next to each search result where such a link exists. The findings are important, Giles said, because publicly available documents carry more weight in the research community. Governments, especially in Europe, fund a great deal of scientific research and want the resulting papers to be freely available. In addition, he said, freely available papers have been shown to be much more likely to be cited than those that are not.

By having an idea of how many scholarly documents are on the Web as well as how many are freely available, Giles said, researchers can be better equipped to manage scholarly document research and related projects.

"It was surprising to see how many scholarly documents were digitized and how many were freely available,” Giles said. “But keep in mind, these estimates were only for those written in English. How many are there in other languages, more or less than English?"


Via Charles Tiayon
Rescooped by Dr. Russ Conrath from iGeneration - 21st Century Education (Pedagogy & Digital Innovation)
December 6, 2022 12:36 PM
Scoop.it!

51 Best Free Study Apps for College Students: Ultimate List via IvyPanda (applicable to H.S. students as well!)

Boost your studies with these 51 best websites and study apps for college students: ✓Time management ✓Note taking ✓Math, and even more!

Via Tom D'Amico (@TDOttawa)
Dr. Russ Conrath's insight:

51 Best Free Study Apps for College Students

ines tian's curator insight, May 17, 2020 5:02 PM
It's very useful!
Rescooped by Dr. Russ Conrath from iGeneration - 21st Century Education (Pedagogy & Digital Innovation)
December 2, 2022 2:11 PM
Scoop.it!

5 Things People Want from Higher Education Infographic | e-Learning Infographics

Americans said higher education needs to focus on five key areas in order to remain competitive in a global leadership position.

Via Tom D'Amico (@TDOttawa)
Rescooped by Dr. Russ Conrath from Educational Technology News
December 2, 2022 2:09 PM
Scoop.it!

What Are Hyflex Classes? The Evolution of Higher Education

The concept of hyflex classes gained popularity as pandemic lockdowns started to ease. More people wanted to go out in public, while others felt it was too soon. Like many other organizations, schools had to figure out how to provide a balanced solution.

Though the hyflex model was not created during the pandemic, many institutions adopted it in the last couple of years. Now the question is whether this approach will continue even after COVID-19 is behind us.

Via EDTECH@UTRGV
EDTECH@UTRGV's curator insight, September 21, 2022 1:27 PM

With adequate professional development and support, many of the innovative strategies adopted as a result of the pandemic can continue onward, perhaps with some adjustments for the situation on the ground. What do you think?