In a recent interview, Noam Chomsky shares his thoughts on ChatGPT and why he feels it's a wake-up call for our educational model.
In a recent interview, renowned linguist and cognitive scientist Noam Chomsky gave his thoughts on the rise of ChatGPT and its effect on education. What he had to say wasn't favorable. As more and more educators struggle with how to combat plagiarism and the use of these chatbots in the classroom, Chomsky gives a clear viewpoint. For him, the key lies in how students are taught, and, currently, our educational system is pushing students toward ChatGPT and other shortcuts.
“I don’t think [ChatGPT] has anything to do with education,” Chomsky tells interviewer Thijmen Sprakel of EduKitchen. “I think it’s undermining it. ChatGPT is basically high-tech plagiarism.” The challenge for educators, according to Chomsky, is to create interest in the topics that they teach so that students will be motivated to learn, rather than trying to avoid doing the work.
Chomsky, who spent a large part of his career teaching at MIT, felt strongly that his students wouldn't have turned to AI to complete their coursework because they were invested in the material. If students are relying on ChatGPT, Chomsky says it’s “a sign that the educational system is failing. If students aren’t interested, they’ll find a way around it.”
The American intellectual strongly feels that the current educational model of “teaching to test” has created an environment where students are bored. In turn, that boredom turns to avoidance, and ChatGPT becomes an easy way to avoid the work.
While some argue that chatbots like ChatGPT can be a useful educational tool, Chomsky has a much different opinion. He feels that these natural language systems “may be of value for some things, but it's not obvious what.”
Meanwhile, it appears that schools are scrambling to figure out how to counteract the use of ChatGPT. Many schools have banned ChatGPT on school devices and networks, and educators are adjusting their teaching styles. Some are turning to more in-class essays, while others are looking at how they can incorporate the technology into the classroom.
It will be interesting to see if the rise of chatbots helps steer us toward a new teaching philosophy and away from the “teaching to test” method that has become the driving force of modern education. It's the kind of education that Chomsky says was “ridiculed during the Enlightenment,” and so indirectly, this new technology may force schools to rethink how they ask students to apply their knowledge.
Listen to Noam Chomsky speak about the rise of ChatGPT in education.
Jessica Stewart is a Contributing Writer and Digital Media Specialist for My Modern Met, as well as a curator and art historian. Since 2020, she has also been one of the co-hosts of the My Modern Met Top Artist Podcast. She earned her MA in Renaissance Studies from University College London and now lives in Rome, Italy. She cultivated expertise in street art, which led to the purchase of her photographic archive by the Treccani Italian Encyclopedia in 2014. When she’s not spending time with her three dogs, she also manages the studio of a successful street artist. In 2013, she authored the book 'Street Art Stories Roma' and most recently contributed to 'Crossroads: A Glimpse Into the Life of Alice Pasquini'. You can follow her adventures online at @romephotoblog.
"In a recent interview, renowned linguist and cognitive scientist Noam Chomsky gave his thoughts on the rise of ChatGPT, and its effect on education. What he had to say wasn't favorable. As more and more educators struggle with how to combat plagiarism and the use of these chatbots in the classroom, Chomsky gives a clear viewpoint. For him, the key all lies in how students are taught, and, currently, our educational system is pushing students toward ChatGPT and other shortcuts.
“I don’t think [ChatGPT] has anything to do with education,” Chomsky tells interviewer Thijmen Sprakel of EduKitchen. “I think it’s undermining it. ChatGPT is basically high-tech plagiarism.” The challenge for educators, according to Chomsky, is to create interest in the topics that they teach so that students will be motivated to learn, rather than trying to avoid doing the work.
Chomsky, who spent a large part of his career teaching at MIT, felt strongly that his students wouldn't have turned to AI to complete their coursework because they were invested in the material. If students are relying on ChatGPT, Chomsky says it’s “a sign that the educational system is failing. If students aren’t interested, they’ll find a way around it.”
The American intellectual strongly feels like the current educational model of “teaching to test” has created an environment where students are bored. In turn, the boredom turns to avoidance, and ChatGPT becomes an easy way to avoid the education.
While some argue that chatbots like ChatGPT can be a useful educational tool, Chomsky has a much different opinion. He feels that these natural language systems “may be of value for some things, but it's not obvious what.”
Meanwhile, it appears that schools are scrambling to figure out how to counteract the use of ChatGPT. Many schools have banned ChatGPT on school devices and networks, and educators are adjusting their teaching styles. Some are turning to more in-class essays, while others are looking at how they can incorporate the technology into the classroom.
It will be interesting to see if the rise of chatbots helps steer us toward a new teaching philosophy and away from the “teaching to test” method that has become the driving force of modern education. It's the kind of education that Chomsky says was “ridiculed during the Enlightenment,” and so indirectly, this new technology may force schools to rethink how they ask students to apply their knowledge"
A regional library service that includes 18 libraries in Marion, Polk, Yamhill and Linn counties will switch to a new computer system for the first time in a decade.
The new system is expected to make searching for books, magazines and other materials easier for Chemeketa Cooperative Regional Library Service users.
The move, which will affect Chemeketa Community College Library and the Salem Public Library, is scheduled from Dec. 8 to Dec. 11.
“We’ve been working on mapping all that data to the new system so all the pieces are going to go in the right fields in the right way. But it’s kind of apples and oranges because no two systems are exactly the same,” said John Goodyear, executive director of the Chemeketa Cooperative Regional Library Service.
With more than one million records moving to a new system, the change will impact thousands of library goers, especially during the transition.
Here are 8 things library users should know:
1. Changes will be subtle: Searching for a book and other materials isn’t going to be drastically different compared to the current system -- though the layout of the new library service’s website and search display will look more modern. Like a Google search, users will be able to do one search instead of having to choose certain categories such as keyword or author. “The librarians who have been testing (the new system) have said they are finding things much more quickly with a lot less hassle,” Goodyear said.
2. Text alerts: Librarians will be able to send text messages to users when a book placed on hold is ready for pickup. Visitors also will be able to text search results to their phone. Users who are interested in the new service have to sign up and provide the library with their cell phone number, Goodyear said.
3. Various library hours: Some libraries, such as those in Silver Falls, Woodburn and Stayton, will be closed on Dec. 11. The Salem Public Library will be closed Dec. 8-11. The Dallas Public Library will be closed Dec. 8-10, and the Jefferson Public Library will be closed Dec. 10 and 11. Other libraries, such as the Chemeketa Community College Library, won’t have any changes to their hours that week. Check with your local library before heading over.
4. Unavailable library services during the transition: Users can continue to search the old library catalog and check out a limited amount of material, but won’t be able to place holds, look up information about items currently checked out or pay fines online during the migration. Users will still be able to use their cards to access library databases and to download e-books and audiobooks on Library2Go. Libraries open during the transition will be working on an offline system, so there may be a cap on the number of items a visitor can check out. That will vary depending on the library.
5. Reset your PIN and other settings: A library user’s personal identification number will not automatically transfer over to the new system because that information is encrypted. When the new system launches on Dec. 11, visitors will need to either try the last four digits of their phone number or type in "CHANGEME" to be prompted to change their PIN. The new PIN can be the same as before. Families who have linked their accounts in order to pick up holds for other family members will also have to reestablish that connection in the new system. If users want to save their reading history, that setting will also have to be turned on.
6. A new mobile app to access the catalog: The new Android app will be available after Dec. 11 and can be located by searching BookMyne in Google Play. Once installed, select Chemeketa Cooperative Regional Library Service as your library. A brand new Apple app will be available in the App Store by searching CCRLS.
7. Public library cards for community college students and staff: The Chemeketa Community College Library is moving to a separate library management system. That means that community college students and staff will need to obtain public library cards to check out materials from the public libraries. College materials will be available to public patrons in the college library with a public patron guest card.
8. Back up your information: User records, items checked out, holds, fines and history will transfer over to the new system. Book lists and preferred searches will not. To be safe, users should make copies of their lists of items checked out, holds, fines, history and book lists. “It wouldn’t hurt for people to email those to themselves or print them, download a PDF file or something like that,” Goodyear said. “This is computers and there’s no guarantee.”
qwong@statesmanjournal.com, (503) 399-6694 or follow at Twitter.com/QWongSJ.
Learn More
If you have any questions about the transition or about library hours, contact your local library.
Tectonic shifts in society and business occur when unexpected events force widespread experimentation around a new idea. During World War II, for instance, when American men went off to war, women proved that they could do “men’s” work — and do it well. Women never looked back after that. Similarly, the Y2K problem demanded the extensive use of Indian software engineers, leading to the tripling of employment-based visas granted by the U.S. Fixing that bug enabled Indian engineers to establish their credentials and catapulted them to world leadership in addressing technology problems. Alphabet, Microsoft, IBM, and Adobe are all headed by India-born engineers today. Right now, the coronavirus pandemic is forcing global experimentation with remote teaching. There are many indicators that this crisis is going to transform many aspects of life. Education could be one of them, if remote teaching proves to be a success. But how will we know if it is? As this crisis-driven experiment launches, we should be collecting data and paying attention to the following three questions about higher education’s business model and the accessibility of quality college education.

Do students really need a four-year residential experience? Answering this question requires an understanding of which parts of the current four-year model can be substituted, which parts can be supplemented, and which parts can be complemented by digital technologies. In theory, lectures that require little personalization or human interaction can be recorded as multimedia presentations, to be watched by students at their own pace and place. Such commoditized parts of the curriculum can be easily delivered by a non-university instructor on Coursera, for example; teaching Pythagoras’ theorem is pretty much the same the world over. For such courses, technology platforms can deliver the content to very large audiences at low cost without sacrificing one of the important benefits of the face-to-face (F2F) classroom, the social experience, because there is hardly any in these basic-level courses. By freeing resources from courses that can be commoditized, colleges would have more to commit to research-based teaching, personalized problem solving, and mentorship. Students would have more resources at their disposal, too, because they wouldn’t have to reside on campus and devote four full years to it. They would take commoditized courses online at their convenience and at much lower cost, and use the precious time they spend on campus for electives, group assignments, faculty office hours, interactions, and career guidance, things that cannot be done remotely. In addition, campuses can facilitate the social networking, field-based projects, and global learning expeditions that require F2F engagement. This is a hybrid model of education that has the potential to make college education more affordable for everybody.

But can we shift to a hybrid model? We’re about to find out. It is not just students who are taking classes remotely; instructors are now forced to teach those classes from their homes as well. The same students and instructors who met in person until a few weeks ago for the same courses are now trying alternative methods, so both parties can compare their F2F and remote experiences, all else held equal. With the current experiment, students, professors, and university administrators must keep a record of which classes are benefiting from being taught remotely and which ones are not going so well. They must maintain chat rooms that facilitate anonymized discussions about technology issues, course design, course delivery, and evaluation methods. These data points can inform future decisions about when — and why — some classes should be taught remotely, which ones should remain on campus, and which on-campus classes should be supplemented or complemented by technology.

What improvements are required in IT infrastructure to make it more suitable for online education? As so many of us whose daily schedules have become a list of virtual meetings can attest, there are hardware and software issues that must be addressed before remote learning can really take off. We have no doubt that digital technologies (mobile, cloud, AI, etc.) can be deployed at scale, yet we also know that much more needs to be done. On the hardware side, bandwidth capacity and digital inequalities need addressing. The F2F setting levels lots of differences, because students in the same class get the same delivery. Online education, however, amplifies the digital divide: rich students have the latest laptops, better bandwidth, more stable wifi connections, and more sophisticated audio-visual gadgets. Software for conference calls may be a good start, but it can’t handle some key functionalities, such as accommodating large class sizes while also providing a personalized experience. Even in a 1,000-student classroom, an instructor can sense whether students are absorbing concepts and can change the pace of the teaching accordingly. A student can sense whether they are asking too many questions and delaying the whole class. Is our technology good enough to accommodate these features virtually? What more needs to be developed? Instructors and students must note and discuss their pain points, and facilitate and demand technological development in those areas. In addition, online courses require educational support on the ground: instructional designers, trainers, and coaches to ensure student learning and course completion. A digital divide also exists among universities, which will become apparent in the current experiment. Top private universities have better IT infrastructure and a higher ratio of IT support staff per faculty member than budget-starved public universities.

What training efforts are required for faculty and students to facilitate changes in mindsets and behaviors? Not all faculty members are comfortable with virtual classrooms, and there is a digital divide between those who have never used even basic audio-visual equipment, relying instead on blackboards and flipcharts, and younger faculty who are aware of and adept with newer technology. As students across the nation enter online classrooms in the coming weeks, they’re going to learn that many instructors are not trained to design multimedia presentations with elaborate notations and graphics. Colleges and universities need to use this moment to assess what training is needed to provide a smooth experience. Students also face a number of issues with online courses. Committing to follow the university calendar forces them to finish a course instead of procrastinating forever. Online, they can feel as if they don’t belong to a peer group or a college cohort, which in real life instills a sense of competition, motivating all to excel. Anything done online suffers from shortened attention spans, because students multitask, check emails, chat with friends, and surf the web while attending online lectures. We’re parents and professors; we know this is true.

Can these mindsets change? Right now we are (necessarily, due to social distancing) running trial-and-error experiments to find out. Both teachers and students are readjusting and recalibrating in the middle of teaching semesters. Syllabi and course contents are being revised as the courses are being taught. Assessment methods, such as exams and quizzes, are being converted to online submissions. University administrators and student bodies are being accommodating and are letting instructors chart their own best course, given such short notice. Instructors, students, and university administrators should all be discussing how the teaching and learning changes between day 1 of virtual education and day X. This will provide clues for how to train future virtual educators and learners.

A Vast Experiment

The ongoing coronavirus pandemic has forced a global experiment that could highlight the differences between, and the cost-benefit trade-offs of, the suite of services offered by a residential university and the ultra-low-cost education of an online provider like Coursera. Some years ago, experts predicted that massive open online courses (MOOCs), such as Khan Academy, Coursera, Udacity, and edX, would kill F2F college education — just as digital technologies killed off the jobs of telephone operators and travel agents. Until now, however, F2F college education has stood the test of time. The current experiment might show that four-year F2F college education can no longer rest on its laurels. A variety of factors, most notably the continuously increasing cost of tuition (already out of reach for most families), imply that the post-secondary education market is ripe for disruption. The coronavirus crisis may just be that disruption. How we experiment, test, record, and understand our responses to it now will determine whether and how online education develops as an opportunity for the future. This experiment will also enrich political discourse in the U.S. Some politicians have promised free college education; what if this experiment proves that a college education doesn’t have to bankrupt a person? After the crisis subsides, is it best for all students to return to the classroom and continue the status quo? Or will we have found a better alternative?
"Tectonic shifts in society and business occur when unexpected events force widespread experimentation around a new idea. During World War II, for instance, when American men went off to war, women proved that they could do “men’s” work — and do it well. Women never looked back after that. "
Published: Feb 9, 2023 - 22:13 | Updated: Feb 10, 2023 - 16:31
The news that a translator who is not fluent in Korean won the webtoon category at the 2022 Korea Translation Award has sparked controversy over the use of artificial intelligence in translation.
A local newspaper reported Wednesday that Yukiko Matsusue, a Japanese translator who won Rookie of the Year at the annual award organized by the Korean Literature Translation Institute in December, had used Naver’s AI-translating system Papago while translating Gu A-jin’s occult thriller “Mirae's Antique Shop” into Japanese.
For the Rookie of the Year Award, translators were assigned to translate works selected by LTI Korea.
Matsusue is said to have used Papago’s image translation function to read the entire webtoon in advance as a “preliminary translation,” then to have edited the translation further by checking technical terms and awkward expressions.
Matsusue said through a press statement released by LTI Korea on Wednesday that she "read the whole work from beginning to end in Korean and used Papago as a substitute for a dictionary for more accurate translation," as the webtoon features occult terminology and shamanistic words that were unfamiliar to her.
Matsusue then studied research papers to understand the context and completed the translation by adding detailed corrections. She said she didn’t think of it as a preliminary translation.
Regarding her Korean ability, she said she is, overall, “not at the beginner level of not being able to understand Korean at all,” and that she had studied Korean for about a year, roughly 10 years ago. However, she added that her speaking and listening skills are “not good enough.”
She said she had been taking Korean language classes when she applied for the contest. In fact, it was her Korean teacher who assured her that she would be perfectly able to translate a webtoon.
"Last year's regulations and awarding system were insufficient to cover any details of 'external help,'" an LTI Korea official told The Korea Herald on Thursday.
LTI Korea said it saw this as part of the trend of using AI in translations and plans to discuss the role of AI in translation in the future.
Whether Matsusue's award will be canceled is a matter that will be reviewed if deemed necessary.
Meanwhile, for the Rookie of the Year translation award, LTI Korea will now specify in its regulations that translations are to be one's own, without “external help such as AI,” in line with the aim of discovering new translators.
"AI translation is almost perfect for technical translating such as legal documents, advertisements and newspaper articles," said Kim Wook-dong, emeritus professor of English Literature and Linguistics at Sogang University, speaking to The Korea Herald. Kim recently published "The Ways of a Translator" on the act of translation.
"However, there are limits (to AI translation) in capturing the subtle emotions, connotations and nuances in literary translations. It can help and serve as an assistant to translators but AI cannot replace humans in literary translation. I doubt it ever will," Kim said.
"Published : Feb 9, 2023 - 22:13 Updated : Feb 10, 2023 - 16:31
A translator who is not fluent in Korean winning the webtoon category at the 2022 Korea Translation Award has sparked controversy about the use of artificial intelligence in translation.
A local newspaper reported Wednesday that Yukiko Matsusue, a Japanese translator who won Rookie of the Year at the annual award organized by the Korean Literature Translation Institute in December, had used Naver’s AI-translating system Papago while translating Gu A-jin’s occult thriller “Mirae's Antique Shop” into Japanese.
For the Rookie of the Year Award, translators were assigned to translate works selected by LTI Korea.
Matsusue is said to have used Papago’s image translation function to read the entire webtoon in advance for a “preliminary translation,” then editing the translation further by checking technical terms and awkward expressions.
Matsusue said through a press statement released by LTI Korea on Wednesday that she "read the whole work from beginning to end in Korean and used Papago as a substitute for a dictionary for more accurate translation," as the webtoon features occult terminology and shamanistic words that were unfamiliar to her.
Matsusue then studied research papers to understand the context and completed the translation by adding detailed corrections. She said she didn’t think of it as a preliminary translation.
Regarding her Korean ability, she said she is overall “not at the beginner level of not being able to understand Korean at all,” and that she had already learned Korean for about a year, 10 years go. However, she added she is “not good enough” in her speaking and listening skills.
She said she had been taking Korean language classes when she applied for the contest. In fact, it was her Korean teacher who recommended that she would be perfectly able to translate a webtoon.
"Last year's regulations and awarding system were insufficient to cover any details of 'external help,'" an LTI Korea official told The Korea Herald on Thursday.
LTI Korea said it saw this as part of the trend of using AI in translations and plans to discuss the role of AI in translation in the future.
Whether Matsusue's award would be canceled or not will be reviewed if necessary.
Meanwhile, for the Rookie of the Year translation award, LTI Korea will now specify in its regulations that translations are to be one's own, without the aid of "external help such as AI,” in line with the aim of discovering new translators.
"AI translation is almost perfect for technical translating such as legal documents, advertisements and newspaper articles," said Kim Wook-dong, emeritus professor of English Literature and Linguistics at Sogang University, speaking to The Korea Herald. Kim recently published "The Ways of a Translator" on the act of translation.
"However, there are limits (to AI translation) in capturing the subtle emotions, connotations and nuances in literary translations. It can help and serve as an assistant to translators but AI cannot replace humans in literary translation. I doubt it ever will," Kim said.
Case studies are a research method in which all the facts and concepts related to the topic being studied, as well as the functional concepts behind the scenes, are explained. Thus, to write a good case study, a learner must have thorough knowledge of the concepts related to the topic. The learner must also have strong skills in writing and in comprehending the facts. There is no surefire recipe for writing a perfect case study, but sustained effort can make your case study strong and help you achieve the grades you desire. Case studies are an important way to understand a concept, but while composing one, students may run into difficulties for which they may require Case Study Assignment Help with these case-study-based works.

Tips for Writing a Case Study

Read the theoretical concepts: Case studies are designed to test students' academic knowledge and how they see its relevance and application in their studies, so learn the theories related to the particular discipline or domain on which your case-study assignment is based. For example, if a case study is based on strategic management, it is useful to understand what strategic management is and what its different perspectives are. You should have a good understanding of the theoretical concepts that underpin the particular case-based assignment.

Read the case study thoroughly: It is essential to read the case thoroughly and understand its different aspects. Understanding the chronology of events and the important points that surface in the case is vital.

Critical analysis and a coherent framework: When doing case-study assignments, it is important to critically analyze the different aspects of an argument and provide the necessary evidence in support of the points you make in your report. Answers should be presented within a coherent, structured framework; many students take online help with case studies to achieve this.

Standard referencing: All references in the case-study document must follow the format prescribed by your university; most universities prefer Harvard or APA referencing styles. Students are often unfamiliar with these styles, which is why some seek the assistance of a case study assignment helper in Australia. Proper citation and referencing will help your reader identify and locate the sources you used for your write-up and arguments.

Standard academic writing style: Use standard academic writing style, with short sentences free of grammatical and spelling errors. Usually, you are expected to write in the passive voice, highlighting objectively what was done. Proofread the document to maintain the consistency of the main argument and thesis statements before the final submission. Don't forget to check whether ideas and statements flow and correlate with each other. If you require any assignment help, ask case-study professionals.
Whether your Twitter and LinkedIn feeds have been inundated with threads and posts about ChatGPT (like mine) or you’re just stumbling on the topic, you may want answers to two questions before investing your time and energy into learning ChatGPT:
Is ChatGPT specifically likely to be an enduring product?
What does it actually do and what can you personally use it for?
In this article, I’ll help you answer both of these questions.
ChatGPT is an AI-powered chatbot created by OpenAI that can be accessed at https://chat.openai.com/.
As of this writing, ChatGPT offers a free version of the tool that users can access, but there have been indications that they will be charging $42/month for a pro version. OpenAI has also indicated that they’ll make an API for the tool available soon.
The interface is simple, with an empty dialog to enter a prompt. The tool can perform various tasks and return text in response. Some examples of tasks ChatGPT can execute include:
Answering questions.
Writing things like ads, emails, paragraphs, whole blog posts, or even college papers.
Writing, commenting or marking up code.
Changing the formatting on a block of text for you.
ChatGPT launched in late November 2022, on the heels of AI Content Generator Jasper.ai receiving $125 million in funding at a $1.5 billion valuation earlier the same month. The tool reached a million users in less than a week.
ChatGPT launched on wednesday. today it crossed 1 million users!
In the interest of helping fund ChatGPT’s computing costs (and further growth), Microsoft invested $10 billion in OpenAI at a $29 billion valuation. That move, combined with ChatGPT’s growth and word of mouth, might be fueling Google’s reported concerns about ChatGPT as a possible threat.
OpenAI has also indicated that there will be a “professional” version of the tool, and Greg Brockman, the president and co-founder of OpenAI, shared a link to a Google Form to get on the waitlist:
Working on a professional version of ChatGPT; will offer higher limits & faster performance. If interested, please join our waitlist here: https://t.co/Eh87OViRie
Some users have reported seeing an option to upgrade to a $42-per-month professional version when logged into their account.
Even with the Microsoft investment, ChatGPT has continued to experience outages and even had to limit new users on the platform:
And ChatGPT is starting to face criticism over the accuracy of some of its output, while also staring down competition from rivals (which one would have to assume will only increase and intensify in the wake of the platform’s early success).
Now that you know what ChatGPT is, it’s also helpful to understand a bit more about how it works and who built it (and what their goals and motivations may be).
How does it work and how was it trained?
If you’re an SEO looking for ways to leverage AI in your everyday work, you don’t need to know how to build your own chatbot.
That said, when using tools like ChatGPT, you will want to know where the information it generates comes from, how it determines what to return as an answer, and how that might change over time.
That way you can understand what level of trust to put in the output of ChatGPT chats, how to better craft your prompts, and what tasks you may want to use it for (or not use it for).
Before you start to use ChatGPT for anything, I’d strongly recommend you check out OpenAI’s own blog post about ChatGPT. There they have a nice graphic explaining how it works, along with a more in-depth explanation.
AssemblyAI also has a detailed third-party breakdown of how ChatGPT works, some of its strengths and weaknesses, and a number of additional sources if you’re looking to dive deeper.
One of the most important things to remember about how ChatGPT works is its limitations. In OpenAI’s own words:
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
Another that’s important to highlight:
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”
As many people know, ChatGPT was fine-tuned from a GPT model that finished training in early 2022 – meaning it won’t have knowledge of more current events.
It is also trained on a “vast amount” of text from the web, so of course answers can be incorrect. From ChatGPT's own FAQs:
"Can I trust that the AI is telling me the truth?
ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content.
We'd recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the "Thumbs Down" button."
Who built ChatGPT?
Similarly, understanding who built the application and why is an important background if you hope to use it in your day-to-day work.
Again, ChatGPT is an OpenAI product. Here's some background on the company and their stated goals:
OpenAI has a non-profit parent organization (OpenAI Inc.) and a for-profit corporation called OpenAI LP (which has a “capped profit” model with a 100x profit cap, at which point the rest of the money flows up to the non-profit entity).
The biggest investor is Microsoft. OpenAI employees also own equity.
Former Y Combinator President Sam Altman is the CEO of OpenAI and was one of the original founders (along with prominent Silicon Valley personalities such as Elon Musk, Jessica Livingston, Reid Hoffman, Peter Thiel, and others). Many people ask about Musk’s involvement in the company and ChatGPT. He stepped down as a board member in 2018 and wouldn’t have had any meaningful involvement in the development of ChatGPT (which obviously didn’t launch until November 2022).
Notable elements here if you’re interested in ChatGPT either as an SEO or as a viable alternative to Google are obviously:
Microsoft’s involvement (with Microsoft Bing being the number 2 search engine – a distant second behind Google).
ChatGPT obviously isn’t designed to specifically be either an SEO or a content tool (unlike tools like Jasper.ai, Copy.ai and other competitors – many of which are built on top of the GPT-3 framework).
Why should SEOs care about ChatGPT?
While it’s possible that ChatGPT or another AI-powered chatbot could become a viable alternative to Google and traditional search, that’s likely at least far enough away that most SEOs won’t be primarily concerned with the tool for that reason. So why should SEOs care?
ChatGPT has a variety of functionality that can be helpful for SEOs. Additionally, given the platform’s ability to generate AI content, it’s important to understand both what the tool is capable of on that front, and how Google talks and thinks about AI content generally.
What follows are ChatGPT's use cases for SEO.
AI content generation
By far the “buzziest” early 2023 SEO topic has been AI content broadly, and ChatGPT has been at the center of that discussion since it launched.
From creating blog posts whole cloth to selecting images, generating meta descriptions or rewriting content, there are a number of specific functions ChatGPT can serve when it comes to content creation generally and SEO-focused content creation specifically.
SEOs need to identify the specific instances where ChatGPT can make them more efficient or enhance their content. At the same time, it's crucial to understand the potential risks to rankings and organic traffic when using ChatGPT-generated content in different ways (particularly if you’re relying on content created by writers you don’t have a relationship with).
Keyword research and organization
Similarly, there are a number of specific tasks ChatGPT can execute related to keyword research and optimization, such as:
Suggestions for keywords to target or blog topics.
Keyword clustering or categorization.
A key consideration for SEOs is how this relates to your current and optimal processes for these tasks.
ChatGPT isn’t designed to be an “SEO tool,” so it won’t have the emphasis on search volume, competition, or relevance and co-occurrence that more focused keyword research or organization tools will.
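To make that concrete, here is a minimal sketch of how the two could be combined (the cluster names, keywords, CSV filename, and column names below are hypothetical): ask ChatGPT to group your keywords into topic clusters, then join those clusters against a search-volume export from whatever keyword tool you already use before deciding what to prioritize.

```python
import csv
from collections import defaultdict

# Hypothetical clusters returned by a ChatGPT prompt such as
# "Group these keywords into topic clusters" (names and keywords are invented).
chatgpt_clusters = {
    "running shoes": ["best running shoes", "trail running shoes", "running shoes for flat feet"],
    "marathon training": ["marathon training plan", "16 week marathon training plan"],
}

# Hypothetical export from a keyword tool with columns: keyword, monthly_volume.
volumes = {}
with open("keyword_volumes.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        volumes[row["keyword"].strip().lower()] = int(row["monthly_volume"])

# Attach the volume data ChatGPT doesn't have, then rank clusters by total demand.
cluster_totals = defaultdict(int)
for cluster, keywords in chatgpt_clusters.items():
    for kw in keywords:
        cluster_totals[cluster] += volumes.get(kw.lower(), 0)  # 0 if the tool has no data

for cluster, total in sorted(cluster_totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{cluster}: ~{total} combined monthly searches")
```

The point of the sketch is the division of labor: ChatGPT supplies the grouping, while your existing keyword tool supplies the volume and competition data it lacks.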
Depending on the prompts, ChatGPT can help with things like schema markups, robots.txt directives, redirect codes, and building widgets and free tools to promote via link outreach, among others.
As with any type of content creation, you must QA the code that ChatGPT generates. Your site’s template, hosting environment, CMS, and more can break if the code ChatGPT generates is incorrect.
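As a minimal example of what that QA step might look like for redirects (the URLs and redirect map below are hypothetical, and the sketch assumes the requests library is installed), you could crawl each old URL and confirm it returns a permanent redirect to the intended destination before deploying anything ChatGPT wrote:

```python
import requests  # third-party HTTP client; assumed to be available

# Hypothetical redirect map that ChatGPT might have generated for a server config.
expected_redirects = {
    "https://example.com/old-blog-post": "https://example.com/new-blog-post",
    "https://example.com/old-category/": "https://example.com/new-category/",
}

for old_url, expected_target in expected_redirects.items():
    # Don't follow the redirect automatically; we want to inspect the first response.
    resp = requests.get(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code != 301:
        print(f"WARN {old_url}: expected a 301, got {resp.status_code}")
    elif location.rstrip("/") != expected_target.rstrip("/"):
        print(f"WARN {old_url}: redirects to {location or '(nothing)'}, expected {expected_target}")
    else:
        print(f"OK   {old_url} -> {location}")
```

A check like this doesn't prove the generated rules are optimal for SEO, but it catches obvious breakage (missing redirects, chains to the wrong page) before it reaches production.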
Link building
ChatGPT can generate lists of outreach targets, emails, free tool ideas, and more that may assist with link building work.
Here again (you may be sensing a theme) two things to keep in mind:
Since ChatGPT was not built to be a link building tool, it may not prioritize opportunities or generate ideas that will specifically help with SEO success.
GPT-3 is trained on old data, so the information you’re getting may be wrong or outdated.
How to think about ChatGPT as an SEO
Ultimately, given its early functionality and reception along with OpenAI’s founding team and investors (and level of investment), ChatGPT is likely to have longevity as a tool.
It’s highly useful, with a high potential for getting folks who misuse it into trouble.
I would encourage SEOs to become familiar with ChatGPT (and tools like it) and get used to carefully checking its output.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.
Tom Demers is the co-founder and managing partner of Measured SEM and Cornerstone Content. His companies offer paid search management, search engine optimization (SEO), and content marketing services to businesses of varying sizes in various industries.
"ChatGPT launched in late November 2022, on the heels of AI Content Generator Jasper.ai receiving $125 million in funding at a $1.5 billion valuation earlier the same month. The tool reached a million users in less than a week.
In the interest of helping fund those costs (and further growth) Microsoft invested $10 billion in OpenAI at a $29 billion valuation. A move which, combined with ChatGPT’s growth and word of mouth, might be fueling Google’s reported concerns about ChatGPT as a possible threat.
OpenAI has also indicated that there will be a “professional” version of the tool and Greg Brockman the President & Co-Founder of OpenAI shared a link to a Google Form to get on the waitlist...
Working on a professional version of ChatGPT; will offer higher limits & faster performance. If interested, please join our waitlist here: https://t.co/Eh87OViRie
Some users have reported seeing an option to upgrade to a $42 free version when logged into their account.
Even with the Microsoft investment, ChatGPT has continued to experience outages and even had to limit new users on the platform:
And ChatGPT is starting to face criticisms over the accuracy of some of the output of the tool, while also staring down competition from rivals (which one would have to assume will only increase and intensify in the wake of the platform’s early success).
Now that you know what ChatGPT is, it’s also helpful to understand a bit more about how it works and who built it (and what their goals and motivations may be). "
Voice-recognition AI software has the potential to be the rare smartphone app that encourages face-to-face interactions. Its early results suggest the technology could be a game-changer for a healthcare industry in desperate need of one, boosting morale in the short-term while potentially saving money down the road.
Voice-recognition AI software has improved the basic processes for a variety of professions, including restaurateurs, journalists, and any customer service organization that employs an automated call center. For the healthcare industry, voice-recognition AI in the examination room has shifted from a mere convenience to an urgent need.
Even before the Covid-19 pandemic reached the United States, and the ensuing “Great Resignation” took hold in the healthcare industry, burnout was a growing concern among physicians and other providers. Their jobs demand long hours and efficient interactions with ever-increasing numbers of patients. Electronic Medical Record (EMR) systems like Epic have transformed patient recordkeeping for the better, in addition to their benefits to the natural environment. But these benefits came at a cost.
As paper records were phased out, providers bore the burden of updating each patient’s EMR with fastidious note-taking. This created a dilemma: when to record the notes into the EMR system? Providers could either input notes directly into a computer during the patient visit, or take notes mentally and update the patient’s EMR afterward. In this way, EMR technology frequently added to the burden on a doctor’s time, and might have placed a financial burden on the hospital itself. With their face-to-face time limited, patients and providers might focus on a single issue during each visit, and ignore any smaller medical issues. Those smaller concerns might go away, or they might become large ― in which case early intervention could have prevented costly clinical care and in-person visits in the future.
The unprecedented stress Covid-19 placed on the U.S. healthcare system exacerbated many of these pre-existing issues. A Michigan health system instituted a pilot program in the autumn of 2021 to tackle the EMR dilemma head-on using a voice-recognition AI tool called Dragon Ambient eXperience, or DAX. The promise of the technology was twofold: to restore the intimacy of the doctor-patient interaction, and to save the provider time spent updating the EMR.
DAX involves a smartphone app that sits in the examination room, or anywhere in the vicinity of the provider and patient. With the press of a button, the voice-recognition tool is activated, and every word of the visit is then recorded and transcribed. Nuance, the company behind DAX, employs human proofreaders to control the quality of the transcriptions. Over time, the AI software effectively “learns” how to better transcribe the individual speakers based on the proofreaders’ corrections.
The result is a safe, secure, and accurate tool that delivers on its promise to save time and restore intimacy to the exam room. By recording and transcribing the entirety of a patient visit in a way that handwritten notes cannot (either offline or in an EMR), the tool reduces the burden on healthcare providers. One provider saw a decrease of 31 minutes per day in documentation time. Another saw an average reduction of 5 minutes of documentation time per appointment. By giving the patient more leeway to express their full range of medical concerns, both patient and provider potentially incur fewer costs down the road.
Since the initial pilot program, which involved 13 providers, the health system has expanded the use of DAX to 150 providers. Feedback has been overwhelmingly positive, with both patients and providers reporting that their interactions seemed less transactional.
In this way, voice-recognition AI software has the potential to be the rare smartphone app that encourages face-to-face interactions. Its early results suggest the technology could be a game-changer for a healthcare industry in desperate need of one, boosting morale in the short-term while potentially saving money down the road.
Peter Y. Hahn
Dr. Peter Y. Hahn is the president and CEO of University of Michigan Health-West, and one of six currently serving hospital CEOs with a medical doctorate. He previously spent seven years on the faculty at the Mayo Clinic. During his six years as the Director of Pulmonary, Critical Care and Sleep Medicine with Tuality Healthcare, an OHSU Partner, Hahn was named a “Top Doc” by Portland Monthly Magazine in 2012. He earned his Master of Business Administration from the University of Tennessee Haslam College of Business in 2014, and joined University of Michigan Health-West in 2016.
"Voice-recognition AI software has the potential to be the rare smartphone app that encourages face-to-face interactions. Its early results suggest the technology could be a game-changer for a healthcare industry in desperate need of one, boosting morale in the short-term while potentially saving money down the road."
Several studies in philosophy, linguistics and neuroscience have tried to define the nature and functions of language. Cybernetics and the mathematical theory of communication have clarified the role and functions of signals, symbols and codes involved in the transmission of information. Linguistics has defined the main characteristics of verbal communication by analyzing the main tasks and levels of language. Paleoanthropology has explored the relationship between cognitive development and the origin of language in Homo sapiens. According to Daniel Dor, language represents the most important technological invention of human beings. Seemingly, the main function of language consists of its ability to allow the sharing of the mind’s imaginative products. Following language’s invention, human beings have developed multiple languages and cultures, which, on the one hand, have favored socialization within communities and, on the other hand, have led to an increase in aggression between different human groups.
By Franco Fabbro (Department of Languages and Literatures, Communication, Education, and Society, University of Udine, 33100 Udine, Italy; corresponding author), Alice Fabbro (School of Psychology and Education, Free University of Brussels, 1050 Brussels, Belgium) and Cristiano Crescentini (Department of Languages and Literatures, Communication, Education, and Society, University of Udine, 33100 Udine, Italy). Languages 2022, 7(4), 303; https://doi.org/10.3390/languages7040303. Received: 16 May 2022 / Revised: 25 July 2022 / Accepted: 22 November 2022 / Published: 28 November 2022. This article belongs to the Special Issue “Multilingualism: Consequences for the Brain and Mind.”
Keywords: communication; symbols; neural recycling; cultural identities
Roland Werner wears many hats, and most of them have something to do with the Bible.
Whether he’s preaching at the interdenominational congregation that he founded four decades ago in Marburg, writing devotionals and books about church history, lecturing on intercultural theology, or chairing a meeting of the German branch of the Lausanne Movement, the theologian and linguist’s life revolves around God’s Word.
He might be best known among Germany’s evangelicals for Das Buch (“The Book”), his popular Bible translation in modern German. The New Testament was first released in 2009, and a new version including the Psalms was published in 2014. Earlier this year came the third edition, this time with the addition of Proverbs.
Werner, age 65, discovered an affinity for languages at an early age. As an adolescent, he was already studying Latin, Greek, and Hebrew. Arabic and several African languages followed later. A year as an exchange student in the United States helped perfect his English. His familiarity with these and other languages combined with his love of Scripture made the role of Bible translator a natural fit. He is currently working with a team to translate the Bible into a North African language.
This new version of Das Buch comes almost exactly 500 years after Martin Luther published his first Bible translation, known as the Septembertestament. While there was much fanfare a few years ago to mark the 500th anniversary of the Protestant Reformation, Werner laments that this milestone has gone largely unnoticed.
“You heard almost nothing about the [Septembertestament anniversary], neither in the churches nor in the news,” he said.
The Christus-Treff congregation founder hopes that his translation gives readers a fresh chance to engage with the Bible, even when more traditional translations are sometimes overlooked. He spoke with CT about the latest Das Buch edition, his other translation projects, and how rendering a verse in a new way can help readers understand the Bible more deeply.
This interview has been edited for length and clarity.
Before we talk about translating Scripture, I’d like to ask you about reading Scripture. What was the first version of the Bible that you really engaged with?
When I was in first grade, my mother would have me read to her from a German children’s Bible while she ironed clothes. Later, there was another Bible for older children that I also read. When I was 13, I tried to read the whole Luther translation, but I gave up at some point.
The first Bible that I read all the way through was called The Way: The Living Bible. I spent a year in Seattle when I was 16 as an exchange student, and during that time I read both The Way and the King James Version. So, before I read the entire Bible in German, I had read both a modern translation and the Authorized Version in English.
Speaking of English translations, I understand that Eugene Peterson’s The Message helped inspire you to start working on Das Buch.
Indirectly, yes. I had heard about The Message and had received a copy at some point, although I must admit that I didn’t read the whole thing. In 2007, a friend from Australia came to visit. During our time together, he brought up The Message and asked if it could be translated into German. I told him that it wasn’t possible. It’s a good translation, but Peterson is so idiomatic and steeped in American culture that a direct translation into German just wouldn’t work. I explained that someone would have to do something similar, just in German. Then he said, “Well, why don’t you do that?” I said, “Okay, why not?” and started that very night.
A few days later was the Frankfurt book fair. By then, I had a preliminary translation of the first four chapters of Matthew. I showed it to a publisher friend of mine who was at that time leading the Stiftung Christliche Medien [a German Christian media foundation]. He and some of his colleagues looked at it and decided that it was different enough from other modern German translations to have its own flavor and sound. So he said, “Yeah, let’s do it.”
Das Buch is, like The Message, a dynamic-equivalence translation, right?
Yes, but my translation is actually more literal than Peterson’s. Much more literal. I didn’t feel free to go too far away from the text. People tell me that Das Buch is very readable and that unchurched people can understand it easily. I tried to replace or at least alternate some of the heavily religious terminology that may be prone to misunderstanding with a dynamic equivalent. But there are some parts where I was even more literal than Martin Luther. So it’s sort of in between [dynamic-equivalence and a more literal translation].
Once you started working, how quickly did you make progress? What were the biggest challenges?
Well, we had a Christian youth festival in Bremen where I was the chairman, and we wanted to give the Gospel of John to every participant. Somehow the board agreed to use my version of John, which wasn’t ready yet, so I was under a little bit of pressure. I basically prepublished John for that festival in 2008. I did the rest of the New Testament in about a year. Whenever I had some time—for example, while traveling or even if I was sitting with my wife watching television—I would work on it.
I translated directly from the Greek. I’m very old fashioned, so I didn’t use any of the fancy Bible translation gear that is around today. I just put the Greek text into a Word document and worked from that. During that time, I did not read any German versions. That way I wouldn’t pre-impregnate my mind with a possible German rendering. Instead, I would occasionally look at translations in cognate languages. Versions in Dutch, Norwegian, English, and even non-Germanic languages like French, Spanish, or Italian would often give me ideas for a new way to render a verse in German. I wanted to make sure that it would have its own unique sound.
Why was it important to you to present biblical concepts in new, sometimes surprising, ways? For example, in some verses “kingdom of God” (Gottes Reich) is instead rendered “God’s new reality” (neue Wirklichkeit Gottes).
The word surprising is actually the answer. I wanted to surprise people and make them think. Maybe I’ve gone too far here or there; I don’t know. In fact, I’ve backtracked in new editions on some of these expressions. [However,] I’m aware that my Bible translation is not the only one in German. Anyone who is really interested in studying in depth will probably have another version at their disposal so that they can compare. My goal is for a new phrasing to have a surprising effect that helps people better understand the exciting content of this life-changing book.
When you look at the Greek word basileia, which is usually translated as “kingdom” in English or “Reich” in German, it’s actually a more dynamic concept than either of those words convey. When you hear “Gottes Reich,” it sounds like a country. But that’s not what is meant. It’s the expanding reality of God’s authority over this world and over our lives. That’s what I’m trying to communicate.
This latest edition includes Proverbs, in addition to the New Testament and the Psalms. You’ve said that Proverbs was especially tricky to translate into German. Why is that?
I found translating the Psalms challenging, but Proverbs even more so. Proverbs employs a condensed and finely honed poetic language, and Hebrew itself is a very [concise] language. It’s tricky to translate in a way that is both clear in today’s context and true to the poetic beauty of the original.
Another challenge is that the concepts in Proverbs come from a rural environment in ancient Israel. I had to decide whether I would take them as they are or transfer the underlying image into something that is more recognizable today. Ultimately, I felt that changing the illustrations would stray too far from the original text. Even so, you sometimes have to add a little additional information or at least make it into a full German sentence for it to make sense. [Translating directly word for word] doesn’t work. I tried to be concise, poetic, and to follow the flow of the Hebrew language while still making it understandable. That was a big challenge.
Das Buch has readers in the Landeskirchen (regional mainline churches supported by church taxes) as well as in the Freikirchen (independent churches supported by donations). These two groups of German Christians can have very different cultures. Why do you think your translation bridges that gap?
I’m a member of the Landeskirche. There is a strong evangelical wing within that church, and those would be the Bible-reading people. People know me in that part of the body of Christ because that’s where I belong. In the free churches, they mostly know me because I was involved in some nationwide [evangelism] functions over several decades. Those who would consider themselves broadly evangelical, meaning Bible-interested, Bible-reading Christians, might be interested in my translation just to see how it can inspire them in their personal Bible reading.
You used the word evangelical, which in German would be evangelikal. American Christians sometimes get confused about the difference between that word and the similar term evangelisch. What’s the difference?
Evangelisch actually just means “Protestant,” while evangelikal has more or less the same meaning that evangelical has in the United States or Great Britain. That term only came to Germany in the 1960s. People are still debating whether it is a helpful term, especially because of its connection to a certain kind of evangelicalism, adhered to by part of the church in America, that is foreign to us. It conjures up images of a political stance, which is not what the word evangelical was originally supposed to mean.
German Christians used the 500th anniversary of the Protestant Reformation in 2017 as an opportunity to promote Bible reading and engagement. Five years later, how do you evaluate those efforts?
There were many encouraging examples of people becoming more interested in the Bible. As a whole, however, I would almost say that the Landeskirche in Germany missed a chance. There was a narrative saying that the main point of the Reformation was the discovery of individual freedom. And, of course, that is true; Luther said that the individual stands with his or her conscience before God. But where do they stand? On the authority of the Bible. That’s what Luther meant. He didn’t just mean abstract freedom in an Enlightenment sense, but that’s what it was made out to be in a lot of the official presentations.
Language study and translation work have taken you to Africa many times over the past several decades. What can Christians in the West learn from their fellow believers in Africa and other Majority World contexts about engaging with the Bible?
Our post-Enlightenment worldview in the West tends to cut out the miraculous. In Africa and other non-Western contexts, the reality of the spirit world is much more of a given, and it’s much closer to everyday life. In some missiological thinking, one speaks of “the [excluded] middle.” The Western mind acknowledges the natural realm that can be explained by science, and then there may or may not be some sort of abstract higher being. In between there is nothing. For someone from the Majority World, the reality of dreams, visions, spirit beings, curses, possessions, and so forth is so much more real and taken for granted. Because the Bible comes from a situation where there was a very similar worldview, it speaks so much more directly [to people outside the West].
In 1998, you wrote an essay for Christianity Today about the spiritual climate in post–Cold War Europe. You expressed a hope that despite the challenges that churches and ministries were facing, “the fruit they are producing is real and will last.” Do you still have the same perspective over two decades later?
I think I would still adhere to that. I’ve just come from a meeting in Bavaria that was run by a coalition of evangelists from the United Kingdom. They invited young people from all over Europe who are interested in evangelism. There were people from Iceland, Albania, Georgia, Spain, Italy … I was very encouraged. Yes, we’re not so strong, but we’re there.
Additionally, the new reality is the many migrants that live in Europe. There is a strong spiritual movement among them. For example, at a Berlin Landeskirche on any given Sunday morning, you might have 10 or 20 mostly elderly Germans sitting in the church service at 10 o’clock, and then the same church building will be packed with Africans for a service in the afternoon.
James Thompson is an international campus minister and writer from the state of Georgia.
Interview by James Thompson | November 29, 2022
"...Language study and translation work has taken you to Africa many times over the past several decades. What can Christians in the West learn from their fellow believers in Africa and other Majority World contexts about engaging with the Bible?
Our post-Enlightenment worldview in the West tends to cut out the miraculous. In Africa and other non-Western contexts, the reality of the spirit world is much more of a given, and it’s much closer to everyday life. In some missiological thinking, one speaks of “the [excluded] middle.” The Western mind acknowledges the natural realm that can be explained by science, and then there may or may not be some sort of abstract higher being. In between there is nothing. For someone from the Majority World, the reality of dreams, visions, spirit beings, curses, possessions, and so forth is so much more real and taken for granted. Because the Bible comes from a situation where there was a very similar worldview, it speaks so much more directly [to people outside the West].
In 1998, you wrote an essay for Christianity Today about the spiritual climate in post–Cold War Europe. You expressed a hope that despite the challenges that churches and ministries were facing, “the fruit they are producing is real and will last.” Do you still have the same perspective over two decades later?
I think I would still adhere to that. I’ve just come from a meeting in Bavaria that was run by a coalition of evangelists from the United Kingdom. They invited young people from all over Europe who are interested in evangelism. There were people from Iceland, Albania, Georgia, Spain, Italy … I was very encouraged. Yes, we’re not so strong, but we’re there.
Additionally, the new reality is the many migrants that live in Europe. There is a strong spiritual movement among them. For example, at a Berlin Landeskirche on any given Sunday morning, you might have 10 or 20 mostly elderly Germans sitting in the church service at 10 o’clock, and then the same church building will be packed with Africans for a service in the afternoon.
James Thompson is an international campus minister and writer from the state of Georgia."
How many scholarly papers are on the Web? At least 114 million, professor finds
Stephanie Koons | October 9, 2014
UNIVERSITY PARK, Pa. -- Lee Giles, a professor at Penn State’s College of Information Sciences and Technology (IST), has devoted a large portion of his career to developing search engines and digital libraries that make it easier for researchers to access scholarly articles. While numerous databases and search engines track scholarly documents and thus facilitate research, many researchers and academics are concerned about the extent to which academic and scientific documents are available on the Web, as well as their ability to access them. As part of an effort to make the process of accessing documents more efficient, Giles recently conducted a study of two major academic search engines to estimate the number of scholarly documents available on the Web.
“How many scholarly papers are out there?” said Giles, who is also a professor of computer science and engineering (CSE), a professor of supply chain and information systems, and director of the Intelligent Systems Research Laboratory. “How many are freely available?”
Giles and his advisee, Madian Khabsa, a doctoral candidate in CSE, presented their findings in “The Number of Scholarly Documents on the Public Web,” which was published in the May 2014 edition of PLOS ONE, a peer-reviewed scientific journal published by the Public Library of Science. The paper was also mentioned twice in Nature, a prominent interdisciplinary scientific journal, as well as various blogs and websites.
In their paper, Giles and Khabsa report that they estimated the number of scholarly documents available on the Web by studying the overlap in coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. By scholarly documents, they refer to journal and conference papers, dissertations and master’s degree theses, books, technical reports and working papers. Google Scholar is a freely accessible Web search engine that indexes the full text of scholarly literature across an array of publishing formats and disciplines. Microsoft Academic Search is a free public search engine for academic papers and literature, developed by Microsoft Research for the purpose of algorithms research in object-level vertical search, data mining, entity linking and data visualization. Using statistical methods, Giles and Khabsa estimated that at least 114 million English-language scholarly documents are accessible on the Web, of which Google Scholar has nearly 100 million. They estimate that at least 27 million (24 percent) are freely available since they do not require a subscription or payment of any kind. The estimates are limited to English documents only.
Giles’ and Khabsa’s study, Giles said, is the “first to use statistical, rigorous techniques in doing these estimations.” The researchers conducted their study using capture-recapture methods, which were pioneered in ecology and derive their name from censuses of wildlife in which several animals are captured, marked, released and subject to recapture. The technique examines the degree of overlap between two or more methods of ascertainment and uses a simple formula to estimate the total size of the population. Since their study was not longitudinal, Giles said, he and Khabsa plan to do another capture in the future to verify their results.
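To make the overlap idea concrete, here is a minimal sketch of the Lincoln-Petersen capture-recapture estimator that this kind of study builds on. The sample sizes are hypothetical placeholders, not the counts from Giles and Khabsa's paper; the point is only how the overlap between two "captures" yields an estimate of the total population.

```python
# Minimal sketch of the Lincoln-Petersen capture-recapture estimator.
# The numbers below are hypothetical, NOT figures from the PLOS ONE paper;
# they only illustrate how overlap between two "captures" (here, two search
# engines covering the same literature) produces a population estimate.

def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate total population size from two samples and their overlap."""
    if overlap <= 0:
        raise ValueError("The estimator needs at least one document found by both engines.")
    return n1 * n2 / overlap

# Hypothetical example: engine A indexes 150 documents from a test set,
# engine B indexes 120, and 90 of those documents appear in both indexes.
print(lincoln_petersen(n1=150, n2=120, overlap=90))  # -> 200.0
```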
Giles’ interest in determining the number of scholarly documents on the Web was inspired by more than just curiosity; as a developer of various novel search engines and digital libraries, he sees practical implications in the results. CiteSeer, a public search engine and digital library for scientific and academic papers, primarily in the fields of computer and information science, was created by Giles, Kurt Bollacker and Steve Lawrence in 1997 while they were at the NEC Research Institute (now NEC Labs) in Princeton, New Jersey. CiteSeer's goal was to actively crawl and harvest academic and scientific documents on the Web and use autonomous citation indexing to permit querying by citation or by document, ranking them by citation impact. CiteSeer, which is often considered to be the first automated citation indexing system, was a predecessor of academic search tools such as Google Scholar and Microsoft Academic Search. Released in 2008, CiteSeerX was loosely based on the earlier CiteSeer search engine and digital library but is built on a new open source infrastructure, SeerSuite, with new algorithms and implementations. While CiteSeerX has retained CiteSeer’s focus on computer and information science, it has recently been expanding into other scholarly domains such as economics, medicine and physics. One of the motivations for determining the number of scholarly documents on the Web, Giles said, is to increase the number of papers in CiteSeerX.
A significant finding in their study, Giles and Khabsa wrote in their paper, is that almost one in four Web-accessible scholarly documents is freely and publicly available. The researchers used Google Scholar to estimate this percentage because Scholar provides a direct link to the publicly available document next to each search result where a link is available. The findings are important, Giles said, because publicly available documents carry more weight in the research community. Governments, especially those in Europe, fund a great deal of scientific research and want the resulting papers to be freely available. In addition, he said, it's been shown that freely available papers are much more likely to be cited than those that are not.
By having an idea of how many scholarly documents are on the Web as well as how many are freely available, Giles said, researchers can be better equipped to manage scholarly document research and related projects.
"It was surprising to see how many scholarly documents were digitized and how many were freely available,” Giles said. “But keep in mind, these estimates were only for those written in English. How many are there in other languages, more or less than English?"
Within a few short years, we could find ourselves living on a planet devoid of Google Search.
That might seem dramatic. After all, Google Search is probably the horse you rode in on; your first step on a microsecond-long journey across the internet that brought you to this article. Maybe you were searching for "ChatGPT" or "OpenAI" or maybe you were trying to break Google by typing "Google" into Google. (It just gives you a lot of Google, don't bother.) Maybe your smartphone served you this article because you've been reading a lot about AI at CNET lately.
Whatever the case, you're here now, and more often than not that's thanks to Google Search.
For more than two decades, Google's empty search bar has rolled out the welcome mat to what we used to call the World Wide Web. Challengers have appeared over its 20-year dominance but not one has come close to dethroning the search king. Claims of its coming death have been made routinely and earnestly, but most contenders haven't even made it into the castle.
But from the moment OpenAI's ChatGPT began algorithmically generating waves in November, something shifted. ChatGPT is a generative AI that can write human-sounding answers in response to basically any question you ask of it. Its proficiency has wowed anyone who has asked it to write code, essay answers, poetry or prose. It's so good that practically every tech expert, countless journalists and niche Substack writers began posing the question: Will ChatGPT kill Google?
It wasn't just experts and writers, either. The Searchicide alarm bells began wailing across the open-plan offices at Google itself. Barely two months after ChatGPT first appeared, the tech giant initiated a "Code Red" response, upending various teams to respond to the threat the chatbot (or more accurately, its underlying AI) poses to its Search monopoly. The stakes have only become higher since Microsoft added AI assistance to Bing, its homegrown Google competitor.
Artificial intelligence has long powered Google Search: Black-box algorithms rank pages and offer relevant links for users to sift through. But the generative AI tools being rolled out promise to reimagine our relationship with Search entirely. Our entry into the web — from our computer screen, from our smartphone — is morphing from a welcome mat to a red carpet.
As a result, sometime in the not so distant future, we might find ourselves living on a planet without Google Search. Or, at least one without Google Search as we know it today. That is a world we don't fully understand; with consequences and possibilities we are yet to completely grasp. It's a world we're not ready for.
And yet, this may very well be the world we are about to inhabit.
Google search fundamentally altered the internet and the way we access information. Today, it accounts for about nine in 10 searches online and is the default on practically any internet-enabled device across most of the world. (Baidu is the most prominent search engine in China, where Google is banned.) If you want to find something on the web, Google Search is not unavoidable — but it might as well be.
Need to find the definition of soliloquy? Dictionary not required; ask Google. Want to know Leonardo DiCaprio's age? That's an easy one for Google. Best restaurants nearby? Google has you. Looking for a new pair of headphones? Just Google it.
Its supremacy has seen it move from a humble web crawler to a verb, an all-knowing entity in its own right.
Despite its dominance, complaints about the declining quality of Google Search have been gaining traction over the last few years. "If you've tried to search for a recipe or product review recently, I don't need to tell you that Google search results have gone to shit," wrote Dmitri Brereton, a software engineer fascinated by search engines, in early 2022. Author Cory Doctorow has complained about the "enshittification" of internet services that move into the mainstream, collapsing from useful user experiences to corporate cash cows. Exhibit A: Google Search.
Others have discussed Google tips and hacks tailored to refine search results, like appending "reddit" or "yelp" to a query. These additional search terms help narrow down the kind of content you're looking for, supplying you with links to specific websites.
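As a toy illustration of the kind of query refinement described above, the snippet below builds search strings that either append a community name or use the documented site: operator; the helper function is my own sketch, not anything Google provides.

```python
# Toy illustration of the query-refinement trick described above: append a
# community name, or use the "site:" operator, to steer results toward a
# specific site. The helper function is hypothetical, not a Google API.

def refine_query(query: str, site: str, use_site_operator: bool = False) -> str:
    """Return a search string biased toward a particular site."""
    if use_site_operator:
        return f"{query} site:{site}"
    return f"{query} {site}"

print(refine_query("best budget headphones", "reddit"))
# -> "best budget headphones reddit"
print(refine_query("neighborhood ramen reviews", "yelp.com", use_site_operator=True))
# -> "neighborhood ramen reviews site:yelp.com"
```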
Angela Hoover, who co-founded the conversational AI search engine Andi, has two major frustrations with Google: "All the ads and the SEO spam." She notes it's those issues that led to a product with search results that "just aren't very good." These are constant bugbears in conversations I've had with other researchers studying AI and Google, too. A Google spokesperson tells CNET the company is always working to make Search better, delivering thousands of changes each year.
Advertising is the most lucrative revenue stream for Alphabet, Google's parent company. According to its 2022 financial report, advertising generated $224 billion for Google, almost 80% of its total revenue for the year — and a $13.5 billion increase over 2021. Depending on your search term (and browser extensions), ads will likely flood the top half of your search. Advertisers spend big with Google because of the sheer breadth of humanity the search engine gives them access to. Its dominance is such that the Department of Justice wants Google to sell off the ad business.
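As a rough sanity check, the back-of-the-envelope sums below use only the figures quoted in this paragraph; the "implied" totals are derived from those numbers, not taken from Alphabet's actual filings.

```python
# Back-of-the-envelope check using only the figures quoted above. The
# "implied" values are derived from this paragraph alone, not from
# Alphabet's financial report.

ads_2022_billion = 224.0            # Google advertising revenue, 2022
share_of_total = 0.80               # "almost 80% of its total revenue"
increase_over_2021_billion = 13.5   # "$13.5 billion increase over 2021"

implied_total_revenue_2022 = ads_2022_billion / share_of_total
implied_ad_revenue_2021 = ads_2022_billion - increase_over_2021_billion

print(f"Implied Alphabet total revenue, 2022: ~${implied_total_revenue_2022:.0f}B")  # ~$280B
print(f"Implied Google ad revenue, 2021:      ~${implied_ad_revenue_2021:.1f}B")     # ~$210.5B
```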
[Image: Andisearch.com is a conversational search engine attempting to reimagine how we find information on the web. Screenshot by CNET]
The SEO spam is a separate but related issue. Even if you don't know too much about SEO, or search engine optimization, you know that when you query Google you're met with a deluge of navy-blue links shouting similar-sounding headlines. If you're looking for news about Rihanna's performance and pregnancy at the Super Bowl, you'll likely find a similar series of words in each headline: "Rihanna, pregnancy, super bowl, halftime."
In this way, Google has reshaped how content sounds on the internet: There's a never-ending arms race between bloggers, publishers, major news outlets, content creators and anyone who wants to sell you something to make sure their headline ranks well on Google Search. If you click through to their page, they might make a few ad dollars. For that reason, there are jobs wholly devoted to understanding how Google ranks a page and the black box algorithms that rule SEO.
AI-assisted search, at least in theory, could ease these frustrations. Hoover, for instance, says that Andi does not plan to serve ads in its conversational search results, and instead hopes to sell subscriptions and an enterprise API. A suite of other alternatives such as YouChat and Neeva are attempting to shake things up in similar ways. By altering the incentives — websites no longer have to game Google, they just have to write good content that's relevant to a user's search — perhaps SEO spam can be quelled. At least for those of us willing to add yet another subscription to our monthly spending.
This is an oversimplification of an expansive problem. We haven't even talked about the privacy aspects of Google Search. But there are some simple truths: We want information quickly. We want good information. We want it to be trustworthy. A world without Google Search — one dominated by conversational, question-and-answer, generative AI search engines — might provide answers more readily.
But can we trust those answers? That's still up for debate.
Microsoft announced its AI-assisted Bing in a splashy event at Microsoft HQ on Feb. 7. The event has been heralded as the beginning of the "Chatbot Search Wars." Bing, some believe, will finally infiltrate the Google kingdom and may even slay the final boss.
In launching Bing to a select group, Microsoft volleyed the first offensive in this so-called war. Reporters who have had a chance to rummage through the new Bing have mostly praised its abilities. Our very own Stephen Shankland compared its results to traditional Google Search results and found it came out on top eight out of 10 times on some complex queries. It was able to provide suggestions for a day hike on a road trip between LA and Albuquerque, respond to news about Chinese balloons over the US and write an email apologizing for being late.
The demo version impressed New York Times reporter Kevin Roose so much that he announced in his column on Feb. 9 that he would be switching his computer's default search engine to Bing. (A week later, Roose reneged on that commitment.)
Browsing through the Bing subreddit and Twitter, that switch seems premature — even dangerous. Bing's search relies on the AI that underpins ChatGPT, known as a large language model. This type of AI, trained on huge swaths of human text, is able to generate sentences, paragraphs and entire essays. It makes predictions on what word or phrases should appear next, like a supercharged autocomplete tool. These predictions are based on a mathematical model then tuned by human testers.
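A minimal toy sketch of that "supercharged autocomplete" idea: given the words so far, the model assigns probabilities to candidate next words and samples one. The tiny probability table here is invented purely for illustration; a real large language model learns a distribution over tens of thousands of tokens, tuned further by human testers.

```python
import random

# Toy sketch of next-word prediction. The probability table is invented for
# illustration only; a real large language model learns these distributions
# over a huge vocabulary from vast amounts of text.

NEXT_WORD_PROBS = {
    ("the", "search"): {"engine": 0.6, "results": 0.3, "bar": 0.1},
    ("search", "engine"): {"wars": 0.4, "returns": 0.35, "market": 0.25},
    ("search", "results"): {"page": 0.7, "improve": 0.3},
    ("search", "bar"): {"waits": 1.0},
}

def sample_next(context):
    """Sample the next word given the previous two words, or None if unseen."""
    dist = NEXT_WORD_PROBS.get(context)
    if dist is None:
        return None
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

text = ["the", "search"]
while True:
    nxt = sample_next((text[-2], text[-1]))
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))  # e.g. "the search engine wars"
```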
[Image: Microsoft is incorporating ChatGPT-like AI into Bing and Edge.]
One of the most egregious examples is when it went off-piste in response to a user query about show times for Avatar: The Way of Water. Not only did Bing's AI assistant get the year wrong, suggesting it was 2022, it began to take an aggressive stance with the user, saying "I'm trying to be helpful, but you are not listening to me." (Brereton documented Bing's propensity for falsehoods in a blog post on Feb. 14.)
This isn't just a problem for Bing, either. Google unveiled Bard, its ChatGPT rival, just a day before the Microsoft event. Eagle-eyed astronomers quickly pointed out that during Google's presentation, Bard had flubbed a fact about NASA's James Webb Space Telescope. That mistake wiped a cool $100 billion from Google's market value.
A Google spokesperson noted that AI experiences are not available to the public yet, and won't be released until they've met high standards for quality and safety. A Microsoft spokesperson said it recognizes "there is still work to be done and [it is] expecting that the system may make mistakes during this preview period," while pointing out that thousands of users who have interacted with the preview version of Bing and provided feedback will "help the models get better."
But these errors get at the core problem with nu-Search 3.0: confident-sounding bullshit. That's somewhat baked into how the models work and it's a problem compounded by the way "search" is set to change with conversational AI. No longer will we be provided with a list of links and possible answers to sift through. Instead, AI will generate one single answer presented as an objective truth, perhaps with a handful of citations. How will this change our relationship with search and the truth?
Heather Ford, head of discipline for digital and social media at the University of Sydney, has been trying to answer that question. Her team has been analyzing the way humans respond to virtual question-and-answer assistants like Siri or Alexa — more primitive versions of ChatGPT and Google's Bard. Early studies reveal a concerning trend that could become increasingly relevant as we move from old-timey Google Search to generative AI search.
"When people see an automated answer or when they imagine there's some kind of automation that's going on in the background to produce an answer, they will believe that more readily than they would if a single journalist, for example, had produced the answer," she says.
Ford notes that further research is required to understand this phenomenon more clearly but, generally, humans trust automation more than they trust other humans. We think automation removes bias and flaws when, in fact, the systems are biased and flawed, too. This problem is easily minimized if these products are tested and examined before being rolled out for mass use, but with the success of ChatGPT, that hasn't been the case. Both Microsoft and Google are moving faster to get AI into their products.
The act of searching on Google is an artifact of the early internet. Search engines operated like digital filing cabinets. They didn't take us directly to an answer, but they put us in the right drawer. As they've evolved, they've become better at sending us on the right path — we find answers more quickly — but for a lot of questions, we're still served a handful of folders and asked to scrounge around for the answer. That's somewhat unnatural.
"People aren't searching because they want links, people are searching because they want answers," says Toby Walsh, a professor of artificial intelligence at the University of New South Wales, Australia.
Fundamentally, this is why ChatGPT and the new chatbot search engines are so impressive. They give us an immediate answer. Google does have this power. Facts are easily accessible and Google's knowledge panels, for the most part, provide truthful answers to common questions about people, places and things.
What's different is the way they take advantage of the way we communicate with other people. Hoover, the co-founder of Andi, notes that conversational search presents a type of interaction we're more familiar with thanks to our chat apps and text messages.
TikTok search is a useful tool for learning about certain experiences.
James Martin/CNET
"On my phone, I live in visual feeds and chat apps," she says, noting she's in Gen Z. "It just makes sense that that is part of what the future of search will look like."
Those feeds and apps have already changed our relationship with search. In some ways, we've been subconsciously primed to move on from Google because we can find specific, helpful information elsewhere. Our questions are being answered by TikToks, Instagram photos and YouTube videos.
Farhad Manjoo, an opinion columnist at The New York Times, argued in February there's already a better search engine than Google for certain types of queries: YouTube. "If you want to make a soufflé, fix a clogged drain, learn guitar, improve your golf swing or do essentially anything that is best understood by watching someone else do it, there is almost no point searching anywhere other than YouTube," he wrote.
For me, TikTok has been an unexpected and powerful search engine. In doing research for a long-term trip to Europe, it provided rapid access to human experiences. With Google, I can read endless opinions about where the best fried chicken is or what libraries to visit. But with TikTok, I can punch in my search term and get authentic, visual guides of these places. I can set expectations in a different way.
Deepfakes and AI-generated video aside, I can trust that what I see is what I get. YouTube has traded on this authenticity for years, and TikTok is now doing the same. I'm not sure that a planet without Google Search will definitely come to pass, but if it does, this fracturing of our search experience seems like one possible future scenario — at least until the artificial intelligence gets so good that it's merely serving all these results up for us to endlessly doomscroll through, one after the other.
A fractured search economy, where users are bouncing across different engines and apps, is an interesting possible future. It may even be a better one. For researchers like Ford, the power behind search today lies with only a few companies, which influences the way information travels.
"It's the structural dominance that is a problem," notes Ford. "We have less rich conversations in the world when we have such dominant players determining these single answers."
We could, eventually, find ourselves living on a planet where Google Search doesn't exist.
This is not a particularly controversial idea. It's one software engineers, tech experts and Google itself have had to contend with for years. In fact, it's so belabored that Brereton, the independent search engine researcher, notes "it's a bit of a meme that like every few years someone says that Google is dead."
How soon we move on from Google, despite the rise of chatbot search engines in the past few months, remains an open question. Even as nu-Search dramatically alters the way humanity accesses information, it feels premature to suggest that any of these AI tools are ready for prime time. Yet they're out there. Change isn't coming. It has already arrived.
"It's not just looking stuff up on the internet," says Walsh. "It's going to be how we interact with all of the smart devices in our lives."
[Image: Front page, welcome mat, red carpet... this is how most of us access the web. But for how much longer? Screenshot by Jackson Ryan/CNET]
I've been using Google Search for almost as long as it has existed. All my life, I've been driving down the information superhighway in a serviceable SUV, taking wrong turns, swerving to avoid misinformation or abuse but, ultimately, deciding where I want to end up, which roads I want to take, who I trust. I am terrified by a planet where I'm locked into a self-driving vehicle, controlled by some of the biggest tech corporations in the world, that takes me directly to my destination.
The LLMs we're relying on today have proven themselves to be flawed, biased and incorrect. Trusting them to guide us is fraught with problems we're yet to fully understand. And while they may not outright replace Google Search, they're a harbinger of something even more frightening — the very real possibility of a world without it.
"Within a few short years, we could find ourselves living on a planet devoid of Google Search.
That might seem dramatic. After all, Google Search is probably the horse you rode in on; your first step on a microsecond-long journey across the internet that brought you to this article. Maybe you were searching for "ChatGPT" or "OpenAI" or maybe you were trying to break Google by typing "Google" into Google. (It just gives you a lot of Google, don't bother.) Maybe your smartphone served you this article because you've been reading a lot about AI at CNET lately..."
The discovery of the Rosetta Stone in 1799 breathed life into a quest long deemed impossible: the reading of Egyptian hieroglyphics. Toby Wilkinson tells the tale of the two rivals who raced to be first to crack the code
The Rosetta Stone | Published: September 27, 2022 at 3:25 pm
For more than 40 generations, no living soul was able to read an ancient Egyptian text. Even before the last-known hieroglyphic inscription was carved (in August AD 394), detailed understanding of the script had all but died out in the Nile Valley, save for a few members of the elite. As those with the specialist knowledge also dwindled, speculation took over and fanciful theories sprang up about the meaning of the mysterious signs seen adorning Egyptian monuments.
As early as the first century BC, the Greek historian Diodorus Siculus had averred that the script was “not built up from syllables to express the underlying meaning, but from the appearance of the things drawn and by their metaphorical meaning learned by heart”. In other words, it was believed hieroglyphics did not form an alphabet, nor were they phonetic (signs representing sounds). Instead, they were logograms, pictures with symbolic meaning.
This was a fundamental misconception, and deflected scholars from decipherment for the following 19 centuries. The European Enlightenment’s ablest philologists (those who study the history and development of languages) deemed the task to be impossible.
English antiquarian William Stukeley said in the early 18th century: “The characters cut on the Egyptian monuments are purely symbolical… The perfect knowledge of ’em is irrecoverable.” Five decades later, French orientalist Antoine Isaac Silvestre de Sacy dismissed the work of deciphering the writing as “too complicated, scientifically insoluble”.
Only at the end of that century did a bold Danish scholar named Georg Zoëga suggest that some of the hieroglyphs might be phonetic after all. “When Egypt is better known to scholars,” he wrote, “it will perhaps be possible to learn to read the hieroglyphs and more intimately to understand the meaning of the Egyptian monuments.”
Zoëga’s statement was a prescient one. A year later, in 1798, Napoleon launched his expedition to Egypt, taking a large contingent of scientists and scholars to study the ancient remains. In July 1799, his soldiers discovered the Rosetta Stone: a stela carved with a royal decree promulgated in the name of Ptolemy V in the second century BC.
The languages on the Rosetta Stone
While the decree itself was not significant, the fact that it had been inscribed in three scripts (hieroglyphics; an equally enigmatic form of Egyptian now known as demotic; and the still-understood ancient Greek) was what offered hope of finally making the unreadable Egyptian writing readable. Copies of the stone’s inscriptions circulated in Europe and cracking the code became one of the greatest intellectual challenges of the new century.
It was not long before the challenge was taken up by two brilliant minds of the age: Thomas Young and Jean-François Champollion, who could not have been more different in talent or temperament.
Young was a dazzling polymath of easy, self-effacing erudition, while Champollion was a single-minded obsessive, a self-conscious and jealous intellectual. And for added piquancy, the former was English, the latter French. The scholars were destined to be bitter rivals in the decipherment race.
Thomas Young and the Rosetta Stone
Thomas Young was born in Somerset in 1773 to Quaker parents who placed a high value on learning. He showed an early aptitude for languages: it is said that by the age of two he had learned to read, and by 14 he had gained some proficiency in French, Italian, Latin, Greek, Hebrew, Arabic, Persian, Turkish, Ethiopic, and a clutch of obscure ancient languages. When old enough, Young went out in search of a profession to support himself, so he trained in medicine and moved to London in 1799 to practise as a doctor. Science, however, remained his passion.
[Image: Thomas Young (1773-1829), English physicist and Egyptologist, who discovered the undulatory (wave) theory of light. Photo by Oxford Science Archive/Print Collector/Getty Images]
In 1801, Young was appointed professor of natural philosophy at the Royal Institution and for two years gave dozens of lectures, covering virtually every aspect of science. For sheer breadth of knowledge, this has never been surpassed. With his supreme gifts as a linguist, it is not surprising that he should have become interested in the philological conundrum of the age: the decipherment of hieroglyphics. In his own words, he could not resist “an attempt to unveil the mystery, in which Egyptian literature has been involved for nearly twenty centuries”.
He began studying a copy of the Rosetta Stone inscription in 1814. It had quickly been determined that the three scripts said the same thing, if not word for word, so being able to read one inscription (the ancient Greek) would be a starting point for another (the hieroglyphics). The hieroglyphic inscription, however, was incomplete due to damage to the top of the stone, so scholars began by studying the second script (demotic). Young, blessed with an almost photographic memory, managed to discern patterns and resemblances that had escaped others, namely that the second script was closely connected with hieroglyphics, even derived from them, and that it was composed of a combination of both symbolic and phonetic signs.
Young was the first to make these ultimately correct evaluations. Working on the assumption that the name of a king was enclosed in a ring, or cartouche, in the hieroglyphic inscription, Young could also locate every mention of “Ptolemy”, from which he was able to derive a starting alphabet for hieroglyphics.
In 1818, Young summed up his pioneering knowledge in an article for the Encyclopaedia Britannica simply entitled “Egypt”, but he made the fateful move of publishing his landmark article anonymously. This allowed his great rival eventually to take the glory of decipherment.
Jean-François Champollion and the Rosetta Stone
Jean-François Champollion was 17 years Young’s junior. Born in 1790 in south-western France to a bookseller and his wife, he grew up surrounded by writings and displayed a precocious genius for languages.
It fell to his older brother, the similarly gifted Jacques-Joseph, essentially to raise him and support his learning. They would move to Grenoble and the young Champollion picked up half a dozen languages. Crucially, it turned out, among them was Coptic: an ancient language with an alphabet based on Greek, which he correctly surmised to be a descendant of ancient Egyptian.
[Image: Portrait of Jean-François Champollion (1790-1832), 1831. Collection of the Musée du Louvre, Paris. Photo by Fine Art Images/Heritage Images/Getty Images]
In 1804, Champollion first came across a copy of the Rosetta Stone inscription, and was fascinated. When the mayor of Grenoble is reported to have asked him, in 1806, whether he intended to study the fashionable natural sciences, “No, Monsieur,” was the firm reply. “I wish to devote my life to knowledge of ancient Egypt.”
Following a few years studying in Paris, Champollion, still only 19 years old, moved back to Grenoble to take up a teaching post at the local college, gaining a promotion in 1818. This brought a measure of security that allowed him to devote more time to the study of ancient Egypt. That same year in England, Young was penning his seminal article for the Encyclopaedia Britannica.
Then, just three years later, Champollion’s revolutionary politics cost him his good name. Fired from the college and ejected from Grenoble, he lodged with his brother. With nothing else to occupy himself, and the benefit of Jacques-Joseph’s extensive library, he threw himself wholeheartedly and with a single-minded focus into the subject that had occupied his mind for years: deciphering the Egyptian script.
Based on his studies of the Rosetta Stone, Champollion made some progress, but was still unable to crack the code entirely. Then a second major piece of the puzzle arrived in the form of an obelisk discovered at Philae and removed from Egypt by a British collector, William John Bankes, to decorate the grounds of his stately home in Dorset.
Lithographs of the inscription circulated in the early 1820s and, as with the Rosetta Stone, the names of rulers – Ptolemy again and Cleopatra – could be identified in cartouches. Incidentally, the lithograph that went to Young contained an error, hampering his research, while the copy that came into Champollion’s possession in January 1822 was accurate.
Certain he was making rapid progress, the Frenchman assigned phonetic values to individual hieroglyphic signs and built an alphabet of his own, which let him find the names of other rulers of Egypt on other monuments.
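The cartouche work described above is essentially a cross-checking exercise: a letter shared between two royal names should be written with the same sign in both cartouches. The sketch below illustrates only that logic; the sign labels (S1, S2, ...) are placeholders rather than real hieroglyphs, and the anglicised spellings are simply the names used in this article.

```python
# Illustration of the cross-checking logic behind building a phonetic
# alphabet from cartouches: a letter shared by two royal names should map to
# the same sign in both. Sign labels (S1, S2, ...) are placeholders, not
# actual hieroglyphs; the spellings are the anglicised names used here.

def build_alphabet(readings):
    """Map letters to signs, collecting any conflicting assignments."""
    alphabet, conflicts = {}, []
    for letters, signs in readings:
        for letter, sign in zip(letters, signs):
            if letter in alphabet and alphabet[letter] != sign:
                conflicts.append((letter, alphabet[letter], sign))
            else:
                alphabet.setdefault(letter, sign)
    return alphabet, conflicts

readings = [
    (list("ptolemy"),   ["S1", "S2", "S3", "S4", "S5", "S6", "S7"]),
    (list("cleopatra"), ["S8", "S4", "S5", "S3", "S1", "S9", "S2", "S10", "S9"]),
]

alphabet, conflicts = build_alphabet(readings)
print(alphabet)   # shared letters p, t, o, l, e map to the same signs in both names
print(conflicts)  # empty here: the two cartouche readings are consistent
```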
The final breakthrough came on Saturday 14 September 1822 after Champollion received another inscription, from the pharaonic temple at Abu Simbel. Applying all the knowledge he had laboured so long and so hard to acquire, he was able to read the royal name as that of Ramesses the Great. Encouraged, he went on to read Ptolemy’s royal epithets on the Rosetta Stone. By the end of the morning, he needed no further proof that his system was the right one.
[Image: Hieroglyphic carvings at Abu Simbel, site of two temples built by Ramesses the Great in the 13th century BC. As the script could be written in any direction, the way the human and animal figures face shows how to read an inscription. Photo by Getty Images]
He sprinted down the road to his brother’s office at the Académie des Inscriptions et Belles-Lettres, flinging a sheaf of papers on to the desk and exclaiming: “Je tiens mon affaire!” (“I’ve done it!”)
Overcome with emotion and exhausted by the mental effort, Champollion collapsed to the floor and had to be taken back home, where for five days he was confined to his room completely incapacitated. When he finally regained his strength, on the Thursday evening, he immediately resumed his feverish studies and wrote up his results. Just one week later, on Friday 27 September, he delivered a lecture to the Académie to announce his findings formally. By convention, his paper had to be addressed to the permanent secretary, so was given the title Lettre à M. Dacier (“Letter to Mr Dacier”).
The rivalry of Young and Champollion
By extraordinary coincidence, in attendance at that historic talk was Thomas Young, who happened to be in Paris. Moreover, he was invited to sit next to Champollion while he read out his discoveries.
In a letter written two days later, Young acknowledged his rival’s achievement: “Mr Champollion, junior… has lately been making some steps in Egyptian literature, which really appear to be gigantic. It may be said that he found the key in England which has opened the gate for him… but if he did borrow an English key, the lock was so dreadfully rusty, that no common arm would have had strength enough to turn it.”
This outward magnanimity concealed a deeper hurt at the belief that Champollion had failed to acknowledge Young’s contributions to decipherment. Quietly determined to set the record straight, he published his own work within a few months, this time under his own name. It was pointedly entitled An Account of Some Recent Discoveries in Hieroglyphical Literature and Egyptian Antiquities, Including the Author’s Original Alphabet, as Extended by Mr Champollion.
The Frenchman was not about to take such a claim lightly. In an angry letter to Young, he retorted: “I shall never consent to recognise any other original alphabet than my own… and the unanimous opinion of scholars on this point will be more and more confirmed by the public examination of any other claim.”
Indeed, Champollion was as adept at self-promotion as Young was self-effacing. Buoyed by public recognition, he continued working and came to a second, equally vital realisation: his system could be applied to texts as well as names, using the Coptic he had utterly immersed himself in as a guide. This marked the real moment at which ancient Egyptian once again became a readable language. The race had been won.
[Image: Pages of Jean-François Champollion’s notebook filled with facsimiles of hieroglyphic inscriptions. The Frenchman dedicated his life to learning the meaning of the symbols that had baffled scholars for centuries. Photo by Art Media/Print Collector/Getty Images]
Champollion revealed the full extent of his findings in his magnum opus, Précis du système hiéroglyphique des anciens Egyptiens (Summary of the hieroglyphic system of the ancient Egyptians). Published in 1824, it summed up the character of ancient Egyptian: “Hieroglyphic writing is a complex system, a script at once figurative, symbolic, and phonetic, in the same text, in the same sentence, and, I might almost say, in the same word.” His reputation secure, he even felt able to acknowledge, grudgingly, Young’s work with the comment, “I recognise that he was the first to publish some correct ideas about the ancient writings of Egypt.”
Young, for his part, seemed to forgive Champollion for any slights, later telling a friend that his rival had “shown me far more attention than I ever showed or could show, to any living being”. Privately, Champollion was far less magnanimous, writing to his brother: “The Brit can do whatever he wants – it will remain ours: and all of old England will learn from young France how to spell hieroglyphs using an entirely different method.”
In the end, despite their radically different characters and temperaments, both made essential contributions to decipherment. Young developed the conceptual framework and recognised the hybrid nature of demotic and its connection with hieroglyphics. Had he stuck at the task and not been distracted by his numerous other scientific interests, he might well have cracked the code himself.
Instead, it took Champollion’s linguistic abilities and focus. His Lettre à M. Dacier announced to the world that the secrets of the hieroglyphics had been discovered and ancient Egyptian texts could be read.
It remains one of the greatest feats of philology. By lifting the civilisation of the pharaohs out of the shadows of mythology and into the light of history, it marked the birth of Egyptology and allowed the ancient Egyptians to speak, once again, in their own voice.
Toby Wilkinson is an Egyptologist and author. His books include A World Beneath the Sands: Adventurers and Archaeologists in the Golden Age of Egyptology (Picador, 2020)
This content first appeared in the October issue of BBC History Magazine
"The discovery of the Rosetta Stone in 1799 breathed life into a quest long deemed impossible: the reading of Egyptian hieroglyphics. Toby Wilkinson tells the tale of the two rivals who raced to be first to crack the code"
When you’re looking for an answer to a question, want to find a local repair shop or need a recipe for braised short ribs, the typical response is to "Google it." In fact, Google is now recognized as a verb in the Merriam-Webster dictionary. By this point, of course, Google has been the unquestioned leader in search for decades, despite various efforts by competitors to take that crown. Google has remained at the top of this food chain by optimizing the user experience—and capturing the lion's share of advertising dollars—across new types of devices, voice search, e-commerce search and more.
Enter AI, and Google now faces a new set of threats from rivals like Microsoft, who have narrowed the competition gap and forced the search giant's hand in a matter of days. In this article we will look at what is happening in generative AI and how Microsoft is on a mission to challenge Google's search leadership. This includes Microsoft's investment in OpenAI, the company behind ChatGPT (short for “chat generative pre-trained transformer”), a generative AI tool and the most quickly adopted product in history.
What is ChatGPT and why is it relevant for search?
ChatGPT is a natural language processing tool that can create content and even code on demand via conversations with a chatbot. The AI-driven tool is built on OpenAI's GPT-3 family of large language models. ChatGPT launched in November 2022 and amassed 100 million users in its first two months, although the app is often down or at capacity—which is probably to be expected in the context of such explosive adoption.
[Image: An attempted login to ChatGPT on the morning of February 8, 2023. Melody Brue]
Changes in consumer behavior and modern technologies have reshaped search in the past, with shifts from desktop to mobile, tablets and voice-commanded devices. Google wrote the playbook on how good search is conducted; the technology toolbox supporting that is unlikely to become irrelevant. But the burning question is: will AI become more relevant than what is in Google's current toolbox for search? According to Microsoft CEO Satya Nadella, "The [AI] race starts today."
Microsoft makes the first power play with OpenAI and ChatGPT
In January, Microsoft invested an estimated $10 billion in OpenAI, valuing the company at $29 billion. The company first invested $1 billion in OpenAI in 2019, and then more in a 2021 funding round when the startup was working closely with Azure, Microsoft's cloud service. The most recent investment also seemingly made Microsoft the exclusive cloud computing provider to OpenAI.
Along with this latest investment, Microsoft announced the new AI-powered Bing search engine and Edge browser. Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy, was live-tweeting his thoughts from the event; his enthusiasm (albeit tempered) was enough to convince me to install the browser and extension and check out the new Bing while I patiently keep my place on the waitlist for the full Microsoft Bing ChatGPT integration.
My initial reaction is that the new Bing engine is slick and requires less sifting through useless content than a typical Google search does. The sorting is not intuitive in the "Google it" world I am used to. Still, the conversational tone and variety of answers presented alongside aggregated information make it feel like asking a friend who knows you well enough to know how to answer your questions in a way you will understand. But just like that friend, the Bing engine’s accuracy should be checked. The data in the early version is not guaranteed to be accurate, and it may be some time before a high degree of accuracy can be promised. This is a good reminder that misinformation and security must remain top of mind for any company releasing AI-generated content. These are topics Microsoft and Google are taking seriously—and for which they must take a strict approach to regulating, auditing and reporting.
Google unveils Bard and invests $300 million in Anthropic
Earlier this week, Google announced Bard, a competitor to ChatGPT built atop Google’s powerful natural language processing model LaMDA (Language Model for Dialogue Applications). Bard will be released to “trusted testers” outside the company at an undisclosed date soon. The company did not give a time frame for general availability but said it will be released to the public after testing safety issues and working out other kinks.
Along with the release of Bard, Google has announced that it will allow developers to create their own applications by tapping into the company’s natural language models. "Beyond our own products, we think it's important to make it easy, safe and scalable for others to benefit from these advances by building on top of our best models," Alphabet CEO Sundar Pichai wrote in a blog post about the topic.
Of course, one would not expect Google simply to sit idle after Microsoft's shot across the bow. The company held its own event in Paris a day after Microsoft’s event but drew a lackluster reception for a presentation that seemed rushed and unprepared—even though the company has pioneered many of the technologies behind generative AI products and has invested a hefty sum in the technology. The botched demo, in which Bard produced an inaccurate response to a query among other snafus, sent Google parent Alphabet’s stock plummeting. Shares in the company were down 7.7% after Wednesday’s event—meaning that the company lost $100 billion in value overnight.
Google also invested $300 million in Anthropic, one of the most hyped OpenAI rivals whose AI model “Claude” is a ChatGPT competitor. Using Google Cloud’s GPU and TPU clusters, Anthropic will train, expand and implement Claude.
Anthropic's history might give some people pause, however. The company was started by a group of former OpenAI employees and backed by Sam Bankman-Fried—the now-indicted former CEO at the heart of the FTX scandal; it is still an open question whether that stake could be liquidated in the FTX bankruptcy.
There is more to this war than search
Through Bing, Microsoft currently commands 3% of the global search market. Even modest gains in that number would mean billions of dollars in advertising revenue. According to information shared by Microsoft, each percentage point of search advertising market share gained yields an additional $2 billion in revenue. While this is a measly portion of Microsoft’s total annual revenue (nearly $200 billion in 2022), the growth opportunity is still significant. However, the war is not simply about search and ad dollars. It is also about where that business comes from and how it affects the competition—in this case, Google.
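To put the paragraph's own numbers side by side, here is a quick back-of-the-envelope calculation; it uses only the figures quoted above and is illustrative, not a forecast.

```python
# Back-of-the-envelope calculation using only the figures quoted above:
# roughly $2B of annual revenue per point of search-ad market share, set
# against Microsoft's roughly $200B in total annual revenue (2022).

revenue_per_share_point_billion = 2.0
microsoft_total_revenue_billion = 200.0

for gained_points in (1, 5, 10):
    added = gained_points * revenue_per_share_point_billion
    share_of_total = added / microsoft_total_revenue_billion
    print(f"+{gained_points:>2} pts -> ~${added:.0f}B added revenue "
          f"({share_of_total:.1%} of Microsoft's total)")
```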
As laid out above, Google has spent a lot of money investing in AI, largely in response to competitive threats. Competition in the search market inevitably makes search less profitable for Google, not only if it loses some percentage of ad spend to Bing, but also through the increased expense of running AI-powered vs. classic search engines. Whereas gaining search market share for Microsoft is nicely incremental, losing market share for Google hits the company hard. Search advertising revenue in the December 2022 quarter accounted for 56% of revenues for Alphabet, Google's parent company. A less profitable Google means less money in the company's war chest to compete in cloud computing and other growth areas.
The AI battle will play out in our daily lives and the modern workplace
While Microsoft has put a lot of muscle into the search race, the company's investment in OpenAI (again, dating back to 2019) was made with visions that reach well beyond chatbots. OpenAI technology can be integrated into the company's productivity tools, including Outlook and Office 365. This could take the form of digital assistants, bot-suggested PowerPoint content and formatting, email sorting and suggested replies based on previous interactions, suggested next best actions and more. Within Azure alone, the sheer popularity of OpenAI and ChatGPT could be enough to lure potential cloud customers away from Amazon or Google. And on the gaming front, Microsoft’s investment in OpenAI could give the company a competitive advantage over rivals Sony and Nintendo.
Microsoft also announced its intention to integrate ChatGPT into Microsoft Teams in a premium plan. The chatbot will suggest templates specific to the needs of the meeting organizer, generate notes from meetings, summarize content specific to users based on their needs and even translate notes and transcripts into 40 different languages. ChatGPT can also summarize meetings, calls and webinars into chapters, assign them titles and flag specific names and content. This could be a game changer for reducing the number of meetings people need to attend while improving their ability to consume content relevant to them and their particular roles. My assessment is that this may free people to be more present and think more clearly during meetings because they won’t need to spend so much energy corralling participants, taking notes, or assigning post-meeting responsibilities. Ultimately, that means they can spend less time planning and more time executing.
Google likewise has plans for AI integrations beyond search. During the company’s earnings call, Google CEO Pichai spoke of integrating generative AI into most of its products, from Google Docs to Gmail. While he didn’t specify what AI-assisted emails would be like, he did broadly touch on designs and features Gmail might have. It makes sense to me that in an application like email AI would be able to analyze content from previous interactions and suggest replies, automate workflows and follow-ups and integrate with scheduling tools. By linking applications like Gmail, Calendar and Chat, Bard can potentially act as anyone’s personal assistant, freeing up employees at every level in an organization to focus on more meaningful and strategic work.
AI is not coming after your job—it could be creating jobs
After a couple of months of what can only be described as brutal layoffs in Big Tech, it must be hard for recently laid-off Microsoft and Google employees (among others) to see the companies invest billions in the next wave of computing. In reality, some of those layoffs were done to make room for hires in key strategic areas such as AI. According to ZipRecruiter data, postings for AI-related roles in January were up 6.3% compared to February 2020.
AI is undoubtedly top-of-mind for Big Tech execs as they address Wall Street. As found in a Reuters analysis of recent earnings calls, Alphabet, Microsoft and Meta used variations of the terms “AI,” “generative AI” and “machine learning” or “ML” up to six times more often than in the previous quarter.
Satya Nadella, CEO of Microsoft, has addressed AI specifically, noting in the company's January layoff announcement that AI is driving the "next major wave of computing" as Microsoft uses AI models to build a "new computing platform." He also acknowledged that the company "will continue to hire in key strategic areas." Sounds like AI to me.
“Bing”ing it home
I do not think people will be saying "Bing that" anytime soon, but clearly Microsoft is serious about taking the lead in the AI war of the tech giants. At least for now, it has presented some nice-looking solutions that feel well-thought-out, that fit cohesively into the company's other products and services and that offer new layers and extensions for areas of its infrastructure where even incremental market gains are significant revenue drivers. These advantages could present an even bigger competitive moat between Microsoft and its competitors.
It is far too early to call a winner, and I believe that generative AI is not a zero-sum game. As with many revolutionary technologies, competition creates continual advancement, differentiated offerings tailored to a wide range of needs and an emerging balance of supply and demand.
There is much more to tackle about how these tools will affect our lives—at home and at work—and how AI should be responsibly managed. I look forward to watching the long game, trying out each offering and seeing how tech companies and their AI models learn, evolve and grow. When they are truly ready for prime time, I also look forward to seeing the impact on the future of work, productivity and automation to improve operations and efficiencies.
Why do learning disabilities continue to be called learning disabilities instead of learning differences? Why are they not simply considered part of the landscape of neurodiversity? Thomas Armstrong, executive director of the American Institute for Learning and Human Development, writes:

The number of categories of illnesses listed by the American Psychiatric Association has tripled in the past fifty years. With so many people affected by our growing “culture of disabilities,” it no longer makes sense to hold on to the deficit-ridden idea of neuropsychological illness.1

The labels are maintained in large part because many laws, regulations, policies, and practices lag behind current research, and disability diagnoses are still required to support basic student rights. For example, a “disability” is required for students to access accommodations on standardized testing, produced by largely privately owned organizations (College Board,2 ACT). The term “disability” comes from federal legislation that allows for rights, under the law, to help even out the playing field for those with diagnosed disabilities, including learning disabilities. Additionally, funding for medical and educational resources has muddied the waters of terminology. Diagnoses are required for insurance to cover medical costs, and labels are needed to support funding for educational resources.

While the clinical and federal references for diagnoses have unique functions, the ICD-10, DSM-V, IDEA, Section 504 of the ADA,3 and the education code standardize the terminology to some extent and limit the semantics required of those advocating for students. As educators, we find it challenging to switch perspectives, and simultaneously adopt a new vocabulary, to reinforce the setting in which the student needs support: classroom, tutorial, doctor’s office, standardized test board. The task can translate into navigating a series of hoops that can seem arbitrary and entirely separate from a deeper understanding of the learner.

While the philosophical shift in terminology from “disability” to “difference” or “style” is more informed and politically correct, it is the political system that holds one to the term “disability” in order to access legal rights for those who need individualized support and accommodations. The tipping point will come when a substantial cohort of educators and parents understands differences, deficits, and diversity. A wider perspective allows people to address learning differences in an accepting and proactive manner. Acceptance and early intervention ensure that learning variations never reach the level of deficit that creates the discrepancy model on which disability determination has historically been based. While a growing number of people will become more understanding and accepting of the neurodiversity of students, society’s medical and educational institutions will still be significantly influenced by financial and legislative terminology. Semantics is getting in the way of a more humane approach to learning.

Differentiation … Because It’s Just Good Teaching

Differentiated instruction that meets individual student needs should be the norm in teaching, yet this requires additional training, materials, and coaching to support teachers’ ability to understand, prepare for, and accommodate all learners. Teachers are asked to differentiate for each learner for each subject and at various times of the day, with a host of variables that will impact each individual’s experience.
Differentiation is essential in the way a teacher designs and implements instruction on a daily basis with their students. If, therefore, differentiation is simply “good teaching,” why are we subjecting learners (and ourselves) to a host of tests, labels, and logistics to determine how a learner functions outside the norm? A differentiated approach considers all learners as outside the norm. For teaching to adapt to the modern framework of a growth mindset, there must be a collective rejection of the semantics of educational labels. Instead, educators must gather accurate data at regular intervals in a student’s educational experience and then use this formative data to adjust instructional approaches and materials. School communities must work together to support the needs of all learners.

Teachers must assess in the true meaning of the word assess — to sit beside — rather than continue to test mastery of static content through measurement tools that necessitate accommodations for at least 20 percent of the population. Schools must focus on a Universal Design for Learning4 to meet the unique needs of all students, knowing that every student benefits from an individualized approach to instruction. The term “accommodations” would no longer be necessary if accessibility features, such as audio books, voice dictation, calculators, and untimed tests, were available to support a more mindful approach to education.

And yet, accessibility features alone are not enough. Clinically researched screening tools, such as the Comprehensive Test of Phonological Processing (CTOPP), can be used to modify curriculum and instruction to meet students’ needs at the early elementary level; and multisensory teaching methods and materials, many of which were originally designed for students with diagnosed learning disabilities, can be used to benefit all students, regardless of age or skill set. Reading, for example, is not a natural skill developmentally. Reading is learned through explicit instruction and sufficient practice. Deficits in phonological awareness are viewed as the hallmark of reading disabilities. Phonological awareness is, however, the most responsive to intervention of the phonological processing areas.5 When teachers have the support to better understand how to guide this skill, fewer students struggle.

Implementing a Paradigm Shift

Once we have acknowledged that students process information in a variety of ways, it is critical to present new learning in different formats to ensure educational equity. When writing lesson plans, teachers who are adept at differentiating research and employing multiple resources and multiple perspectives on the same topic6 have a whole host of teaching techniques to use, such as videos, pictures, interactive websites, music, poetry, art, guided visualization, concrete manipulatives, small-motor and large-motor activities, maker’s projects, read-alouds, self-reflective writing, independent reading, analytic writing, small-group and large-group discussion prompts, oral presentations, and lectures. This resource of tools gives teachers quick access to many forms of instructional input and the flexibility to adjust to students’ interests, experience, background knowledge, and learning needs. Most important, when teachers present a variety of teaching strategies, they are also modeling the fact that there are many forms of acceptable output.
At Stevenson School in Carmel, California, we operate from the position that all learners deserve a seat at the table and also deserve to be fed according to their individual dietary needs. Instead of thinking of the developmental learning spectrum from high to low, we think of it as propensities in different modes of learning. Equity is about getting what you need, not getting the same as everyone else. This is as true for the student-genius with debilitating social-emotional glitches as it is for the dyslexic/ADHD child with academic learning challenges. Within this operating philosophy, we look at equity from a different point of view and provide a broad range of options. For example, in grade 6, we are learning about the early 1800s, and the textbook is dense and somewhat dry. We have provided these students with key vocabulary, videos, and photographs of the same material in advance so that when they encounter the textbook, they have a context for the big picture. We then guide the students through the process of reading dense nonfiction text by projecting the text on a large screen and having the teacher model annotation skills with a think-aloud strategy. The group is then ready to reflect in their independent writing journals on the topic covered. At this point, we have exposed the students to the material visually, verbally, and, now, intrapersonally. The time allows for synthesis of complex ideas and multiple input modalities to provide access to all learners. Discussion follows, which draws in the interpersonal learners and gives students practice with concise, articulate oral presentation. In this class, the sixth-grade students are asked to write their own graphic novels. They choose a topic relevant to the early 1800s and will either draw the panels themselves or use Google Slides or Storyboard That to create the final draft. At present, differentiation often pushes teachers to action outside their comfort zone in classroom preparation, classroom instruction, and assessment of knowledge. If we simplify the notion of learning and teaching to the common-sense fundamentals of communication (listening, speaking, reading, and writing), teachers often make natural and intuitive connections in how and why to differentiate. With an additional understanding of the limitations of attention and memory, we can strive to expand our teaching and assessing of student knowledge. Listening and speaking are not just in the realm of the speech therapist or the foreign language teacher; reading and writing are not just the domain of the English teacher. Educators across all content areas benefit from an understanding of the language continuum so that instruction, especially of new material, is couched in a context that will afford learners time for input, then processing, then output. Learning requires attention and engagement, and for students with biologically based ADHD, there is nothing teachers can do to replace the neurotransmitters necessary for attention.7 They can, however, respect the limitations of attention, increase movement and hands-on learning, break information into manageable units, and provide embedded strategy training. 
The bold shift to comprehensively develop faculty who are competent in differentiated instruction, classroom management, and assessment has a more significant impact on positive academic and emotional outcomes for students than any other curricular initiative.8 The essential factors supporting the implementation of this paradigm shift are a shared intention of prioritization, inspiration, frequent observation, targeted professional development, planning time, access to materials, ongoing support, consultation, and coaching. Creative allocation of resources, organization, and conscientious follow-through allow schools to accomplish their desired goals. Frequent observation of instruction and regular feedback are tangible measures that afford educational leaders a proactive role in helping teachers reach their students. At high-performing schools, “Leaders typically observe each teacher eight times a year — three more times than leaders at other schools” and provide verbal or written feedback after almost every observation.9 Faculty benefit from the same individualized accountability as their students. When administrators and colleagues observe day-to-day instruction, everyone is better informed to discuss, critique, and examine the ways in which teaching practices can be improved. Review of classroom videos adds an additional level of self-reflection and allows educators to play an integral role in their own professional growth. As the poet Rabindranath Tagore said, “A teacher can never truly teach unless he is still learning himself.” It is essential that the shared vision is clear — that everyone is on board and feels safe to explore new ideas. Targeted professional development reflects a commitment to strengthen instruction at the individual teacher level. An awareness of what each teacher needs to be more effective in his or her practice unfolds through observation and a collegial coaching relationship.10 A school culture of teamwork, motivation, expertise, and creative thinking engages teachers to be innovative educators.11 Planning time is essential to implement innovative ideas. Administrators must pay attention to the flow of the daily schedule, the yearly calendar, and the timing of extra demands. While flexibility is key to a dynamic team, there is never enough time for everything. Careful consideration is important in supporting collaboration, in encouraging project-based learning initiatives, and in protecting teachers from a sense of being overwhelmed. Access to materials needed to implement innovative ideas across the curriculum must be provided. While supplies do not necessarily need to be expensive, materials should be budgeted into the plan and be available, depending on the financial limitations of an institution, along with the considerable time it takes resourceful teachers to create their own materials. Ongoing support, consultation, and coaching are necessary to strengthen the instructional culture. Regular meetings with a mentor — an administrator, specialist, or colleague — are vital to fully exploring the potential of learning theories and instructional practices. If coaching is embedded in the culture, then, just as with observation, the formality falls away to reveal an empowering relationship that can be the springboard for passionate, purposeful teaching. 
More than ever before, 21st century schools need exceptional teachers — teachers who love to teach learners; who are committed to finding ways to access their students individually and as a group; and who are educated, trained, and treated as professionals. Teaching is a dynamic profession, requiring responsiveness to an immeasurable set of real and perceived limitations and strengths. With patience, acceptance, information, and a sustainable framework of support, educators can create safe and supportive learning environments that rise above the political semantics of learning differences and reframe neurodiversity in terms of equity and empathy. Notes 1. Thomas Armstrong, Neurodiversity: Discovering the Extraordinary Gifts of Autism, ADHD, Dyslexia, and Other Brain Differences (Cambridge, MA: De Capo Press, 2010). 2. The College Board’s SAT test originates from an adaptation of the Army Alpha — the first mass-administered IQ test, which was made more difficult for use as a college admissions test. (Frontline, “A Brief History of the SAT,” PBS Online; online at http://www.pbs.org/wgbh/pages/frontline/shows/sats/where/history.html.) 3. ICD-10 is the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD), a medical classification list issued by the World Health Organization (WHO); DSM-V is the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, a classification and diagnostic tool published by the American Psychiatric Association (APA); IDEA is the Individuals with Disabilities Education Act (IDEA), a four-part piece of American legislation that ensures that students with a disability are provided with free appropriate public education (FAPE) that is tailored to their individual needs; Section 504 of the Rehabilitation Act of 1973 is federal legislation that guarantees certain rights to people with disabilities. It was the first U.S. federal civil rights protection for people with disabilities; it helped pave the way for the 1990 Americans With Disabilities Act (ADA). 4. An education framework based on research in the learning sciences. 5. Richard K. Wagner, Joseph K. Torgesen, and Carol Rashotte, Comprehensive Test of Phonological Processing (CTOPP) (Austin, TX: PRO-ED, 1999); Richard K. Wagner, Joseph K. Torgesen, and Carol Rashotte, “Development of Reading-Related Phonological Processing Abilities: New Evidence of Bidirectional Causality From a Latent Variable Longitudinal Study,” Developmental Psychology 30, no. 1 (1994): 73-87; Richard K. Wagner and Joseph K. Torgesen, “The Nature of Phonological Processing and Its Causal Role in the Acquisition of Reading Skills. Psychological Bulletin 101, no. 2 (1987): 192-212. 6. A good example of curating is Critical Explorers (www.criticalexplorers.org), which provides free online curricular resources. 7. JoAnn Deak, “An Evening With Dr. JoAnn Deak,” presentation to Stevenson School, August 24, 2015, Pebble Beach, CA. 8. In five years of steady growth, student test scores at Stevenson School increased from below grade level performance in reading and math to award-winning standings in the top 15 percent in the state (nationalblueribbonschools.ed.gov/awardwinners/). 9. The New Teacher Project, 2012 10. Stephen D. Brookfield, The Skillful Teacher: On Teaching, Trust, and Responsiveness in the Classroom (San Francisco: Jossey-Bass, 2015). 11. 
Eleanor Duckworth, “Confusion, Play and Postponing Certainty,” Harvard Gazette, February 16, 2012; online at http://news.harvard.edu/gazette/story/2012/02/confusion-play-and-postponing-certainty-eleanor-duckworth-harvard-thinks-big/2012. For Further Reading Barquero, Laura, Nicole Davis, and Laurie E. Cutting, “Neuroimaging of Reading Intervention: A Systematic Review and Activation Likelihood Estimate Meta-Analysis.” PLoS One 9, no. 11 (2014). Dweck, Carol. Mindset: The New Psychology of Success. New York: Random House: 2006. Eide, Brock I., and Fernette L. Eide. The Dyslexic Advantage: Unlocking the Hidden Potential of the Dyslexic Brain. New York: Penguin, 2012.
The discovery of the Rosetta Stone in 1799 breathed life into a quest long deemed impossible: the reading of Egyptian hieroglyphics. Toby Wilkinson tells the tale of the two rivals who raced to be first to crack the code
Published: September 27, 2022
For more than 40 generations, no living soul was able to read an ancient Egyptian text. Even before the last-known hieroglyphic inscription was carved (in August AD 394), detailed understanding of the script had all but died out in the Nile Valley, save for a few members of the elite. As those with the specialist knowledge also dwindled, speculation took over and fanciful theories sprang up about the meaning of the mysterious signs seen adorning Egyptian monuments.
As early as the first century BC, the Greek historian Diodorus Siculus had averred that the script was “not built up from syllables to express the underlying meaning, but from the appearance of the things drawn and by their metaphorical meaning learned by heart”. In other words, it was believed hieroglyphics did not form an alphabet, nor were they phonetic (signs representing sounds). Instead, they were logograms, pictures with symbolic meaning.
This was a fundamental misconception, and deflected scholars from decipherment for the following 19 centuries. The European Enlightenment’s ablest philologists (those who study the history and development of languages) deemed the task to be impossible.
English antiquarian William Stukeley said in the early 18th century: “The characters cut on the Egyptian monuments are purely symbolical… The perfect knowledge of ’em is irrecoverable.” Five decades later, French orientalist Antoine Isaac Silvestre de Sacy dismissed the work of deciphering the writing as “too complicated, scientifically insoluble”.
Only at the end of that century did a bold Danish scholar named Georg Zoëga suggest that some of the hieroglyphs might be phonetic after all. “When Egypt is better known to scholars,” he wrote, “it will perhaps be possible to learn to read the hieroglyphs and more intimately to understand the meaning of the Egyptian monuments.”
Zoëga’s statement was a prescient one. A year later, in 1798, Napoleon launched his expedition to Egypt, taking a large party of scientists and scholars to study the ancient remains. In July 1799, his soldiers discovered the Rosetta Stone: a stela carved with a royal decree promulgated in the name of Ptolemy V in the second century BC.
The languages on the Rosetta Stone
While the decree itself was not significant, the fact that it had been inscribed in three scripts (hieroglyphics; an equally enigmatic form of Egyptian now known as demotic; and the still-understood ancient Greek) was what offered hope of finally making the unreadable Egyptian writing readable. Copies of the stone’s inscriptions circulated in Europe and cracking the code became one of the greatest intellectual challenges of the new century.
It was not long before the challenge was taken up by two brilliant minds of the age: Thomas Young and Jean-François Champollion, who could not have been more different in talent or temperament.
Young was a dazzling polymath of easy, self-effacing erudition, while Champollion was a single-minded obsessive, a self-conscious and jealous intellectual. And for added piquancy, the former was English, the latter French. The scholars were destined to be bitter rivals in the decipherment race.
Thomas Young and the Rosetta Stone
Thomas Young was born in Somerset in 1773 to Quaker parents who placed a high value on learning. He showed an early aptitude for languages: it is said that by the age of two he had learned to read, and by 14 he had gained some proficiency in French, Italian, Latin, Greek, Hebrew, Arabic, Persian, Turkish, Ethiopic, and a clutch of obscure ancient languages. When old enough, Young went out in search of a profession to support himself, so he trained in medicine and moved to London in 1799 to practise as a doctor. Science, however, remained his passion.
Thomas Young (1773-1829), English physicist and Egyptologist. (Photo by Oxford Science Archive/Print Collector/Getty Images)
In 1801, Young was appointed professor of natural philosophy at the Royal Institution and for two years gave dozens of lectures, covering virtually every aspect of science. For sheer breadth of knowledge, this has never been surpassed. With his supreme gifts as a linguist, it is not surprising that he should have become interested in the philological conundrum of the age: the decipherment of hieroglyphics. In his own words, he could not resist “an attempt to unveil the mystery, in which Egyptian literature has been involved for nearly twenty centuries”.
He began studying a copy of the Rosetta Stone inscription in 1814. It had quickly been determined that the three scripts said the same thing, if not word for word, so being able to read one inscription (the ancient Greek) would be a starting point for another (the hieroglyphics). The hieroglyphic inscription, however, was incomplete due to damage to the top of the stone, so scholars began by studying the second script (demotic). Young, blessed with an almost photographic memory, managed to discern patterns and resemblances that had escaped others, namely that the second script was closely connected with hieroglyphics, even derived from them, and that it was composed of a combination of both symbolic and phonetic signs.
Young was the first to make these ultimately correct evaluations. Also, working on the assumption that the name of a king was enclosed in a ring, or cartouche, in the hieroglyphic inscription, Young could locate every mention of “Ptolemy”, with which he was able to come up with a starting alphabet for hieroglyphics.
In 1818, Young summed up his pioneering knowledge in an article for the Encyclopaedia Britannica simply entitled “Egypt”, but he made the fateful move of publishing his landmark article anonymously. This allowed his great rival eventually to take the glory of decipherment.
Jean-François Champollion and the Rosetta Stone
Jean-François Champollion was 17 years Young’s junior. Born in 1790 in south-western France to a bookseller and his wife, he grew up surrounded by writings and displayed a precocious genius for languages.
It fell to his older brother, the similarly gifted Jacques-Joseph, essentially to raise him and support his learning. They would move to Grenoble and the young Champollion picked up half a dozen languages. Crucially, it turned out, among them was Coptic: an ancient language with an alphabet based on Greek, which he correctly surmised to be a descendant of ancient Egyptian.
Portrait of Jean-François Champollion (1790-1832), 1831, in the collection of the Musée du Louvre, Paris. (Photo by Fine Art Images/Heritage Images/Getty Images)
In 1804, Champollion first came across a copy of the Rosetta Stone inscription, and was fascinated. When the mayor of Grenoble is reported to have asked him, in 1806, whether he intended to study the fashionable natural sciences, “No, Monsieur,” was the firm reply. “I wish to devote my life to knowledge of ancient Egypt.”
Following a few years studying in Paris, Champollion, still only 19 years old, moved back to Grenoble to take up a teaching post at the local college, gaining a promotion in 1818. This brought a measure of security that allowed him to devote more time to the study of ancient Egypt. That same year in England, Young was penning his seminal article for the Encyclopaedia Britannica.
Then, just three years later, Champollion’s revolutionary politics cost him his good name. Fired from the college and ejected from Grenoble, he lodged with his brother. With nothing else to occupy himself, and the benefit of Jacques-Joseph’s extensive library, he threw himself wholeheartedly and with a single-minded focus into the subject that had occupied his mind for years: deciphering the Egyptian script.
Based on his studies of the Rosetta Stone, Champollion made some progress, but was still unable to crack the code entirely. Then a second major piece of the puzzle arrived in the form of an obelisk discovered at Philae and removed from Egypt by a British collector, William John Bankes, to decorate the grounds of his stately home in Dorset.
Lithographs of the inscription circulated in the early 1820s and, like with the Rosetta Stone, the names of rulers – Ptolemy again and Cleopatra – could be identified in cartouches. Incidentally, the lithograph that went to Young contained an error, hampering his research, while the copy that came into Champollion’s possession in January 1822 was accurate.
Certain he was making rapid progress, the Frenchman assigned phonetic values to individual hieroglyphic signs and built an alphabet of his own, which let him find the names of other rulers of Egypt on other monuments.
The final breakthrough came on Saturday 14 September 1822 after Champollion received another inscription, from the pharaonic temple at Abu Simbel. Applying all the knowledge he had laboured so long and so hard to acquire, he was able to read the royal name as that of Ramesses the Great. Encouraged, he went on to read Ptolemy’s royal epithets on the Rosetta Stone. By the end of the morning, he needed no further proof that his system was the right one.
Hieroglyphic carvings at Abu Simbel, site of two temples built by Ramesses the Great in the 13th century BC. As the script could be written in any direction, the way the human and animal figures face shows how to read an inscription. (Photo by Getty Images)
He sprinted down the road to his brother’s office at the Académie des Inscriptions et Belles-Lettres, flinging a sheaf of papers on to the desk and exclaiming: “Je tiens mon affaire!” (“I’ve done it!”)
Overcome with emotion and exhausted by the mental effort, Champollion collapsed to the floor and had to be taken back home, where for five days he was confined to his room completely incapacitated. When he finally regained his strength, on the Thursday evening, he immediately resumed his feverish studies and wrote up his results. Just one week later, on Friday 27 September, he delivered a lecture to the Académie to announce his findings formally. By convention, his paper had to be addressed to the permanent secretary, so was given the title Lettre à M. Dacier (“Letter to Mr Dacier”).
The rivalry of Young and Champollion
By extraordinary coincidence, in attendance at that historic talk was Thomas Young, who happened to be in Paris. Moreover, he was invited to sit next to Champollion while he read out his discoveries.
In a letter written two days later, Young acknowledged his rival’s achievement: “Mr Champollion, junior… has lately been making some steps in Egyptian literature, which really appear to be gigantic. It may be said that he found the key in England which has opened the gate for him… but if he did borrow an English key, the lock was so dreadfully rusty, that no common arm would have had strength enough to turn it.”
This outward magnanimity concealed a deeper hurt at the belief that Champollion had failed to acknowledge Young’s contributions to decipherment. Quietly determined to set the record straight, he published his own work within a few months, this time under his own name. It was pointedly entitled An Account of Some Recent Discoveries in Hieroglyphical Literature and Egyptian Antiquities, Including the Author’s Original Alphabet, as Extended by Mr Champollion.
The Frenchman was not about to take such a claim lightly. In an angry letter to Young, he retorted: “I shall never consent to recognise any other original alphabet than my own… and the unanimous opinion of scholars on this point will be more and more confirmed by the public examination of any other claim.”
Indeed, Champollion was as adept at self-promotion as Young was self-effacing. Buoyed by public recognition, he continued working and came to a second, equally vital realisation: his system could be applied to texts as well as names, using the Coptic he had utterly immersed himself in as a guide. This marked the real moment at which ancient Egyptian once again became a readable language. The race had been won.
Pages of Jean-François Champollion’s notebook filled with facsimiles of hieroglyphic inscriptions. The Frenchman dedicated his life to learning the meaning of the symbols that had baffled scholars for centuries. (Photo by Art Media/Print Collector/Getty Images)
Champollion revealed the full extent of his findings in his magnum opus, Précis du système hiéroglyphique des anciens Egyptiens (Summary of the hieroglyphic system of the ancient Egyptians). Published in 1824, it summed up the character of ancient Egyptian: “Hieroglyphic writing is a complex system, a script at once figurative, symbolic, and phonetic, in the same text, in the same sentence, and, I might almost say, in the same word.” His reputation secure, he even felt able to acknowledge, grudgingly, Young’s work with the comment, “I recognise that he was the first to publish some correct ideas about the ancient writings of Egypt.”
Young, for his part, seemed to forgive Champollion for any slights, later telling a friend that his rival had “shown me far more attention than I ever showed or could show, to any living being”. Privately, Champollion was far less magnanimous, writing to his brother: “The Brit can do whatever he wants – it will remain ours: and all of old England will learn from young France how to spell hieroglyphs using an entirely different method.”
In the end, despite their radically different characters and temperaments, both made essential contributions to decipherment. Young developed the conceptual framework and recognised the hybrid nature of demotic and its connection with hieroglyphics. Had he stuck at the task and not been distracted by his numerous other scientific interests, he may well have cracked the code himself.
Instead, it took Champollion’s linguistic abilities and focus. His Lettre à M. Dacier announced to the world that the secrets of the hieroglyphics had been discovered and ancient Egyptian texts could be read.
It remains one of the greatest feats of philology. By lifting the civilisation of the pharaohs out of the shadows of mythology and into the light of history, it marked the birth of Egyptology and allowed the ancient Egyptians to speak, once again, in their own voice.
Toby Wilkinson is an Egyptologist and author. His books include A World Beneath the Sands: Adventurers and Archaeologists in the Golden Age of Egyptology (Picador, 2020)
This content first appeared in the October issue of BBC History Magazine
"The discovery of the Rosetta Stone in 1799 breathed life into a quest long deemed impossible: the reading of Egyptian hieroglyphics. Toby Wilkinson tells the tale of the two rivals who raced to be first to crack the code"
ChatGPT has lots of applications that make life easier and help you earn money. One of its biggest strengths is being multilingual. Check out how many languages ChatGPT supports.
ChatGPT has been trained on a wide range of languages, including English, Spanish, German, French, Italian, Chinese, Japanese, and many others. However, the quality and fluency of the model in each language will depend on the amount and quality of training data available for that language.
What Is ChatGPT?
ChatGPT is a large language model chatbot developed by OpenAI based on GPT-3.5. It has a remarkable ability to interact in conversational dialogue form and provide responses that can appear surprisingly human.
ChatGPT is a large language model (LLM). LLMs are trained on massive amounts of text so that they can accurately predict which word comes next in a sentence.
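To make "predicting the next word" concrete, here is a minimal sketch using GPT-2, a small, openly available predecessor of the models behind ChatGPT. ChatGPT's own weights are not public, so the model name and prompt here are stand-ins purely for illustration.

```python
# Minimal next-word prediction demo with GPT-2, an open predecessor model.
# This only illustrates the general idea; it is not ChatGPT itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # a score for every token in the vocabulary

next_token_id = logits[0, -1].argmax().item()   # the single most likely next token
print(tokenizer.decode([next_token_id]))        # typically " Paris"
```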
What Languages Is ChatGPT Written In?
Python is the primary language used to build the machine learning model known as ChatGPT. PyTorch, a deep learning framework with a Python front end, is used to implement the model.
During the training phase, PyTorch is used to process and prepare the data, and Python libraries such as NumPy and Pandas help handle that data as the model is trained on it.
In addition, the implementation of the model incorporates a number of distinct algorithms and methods, such as attention mechanisms, transformer networks, and so on.
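For readers curious what an "attention mechanism" looks like in practice, here is a minimal, self-contained sketch of scaled dot-product attention in PyTorch. It illustrates the general technique only and should not be read as OpenAI's actual implementation.

```python
# A toy implementation of scaled dot-product attention, the core operation
# inside transformer networks. Purely illustrative.
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value, mask=None):
    """Return a weighted mix of `value` rows, weighted by query-key similarity."""
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)  # similarity scores
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)   # attention weights sum to 1 per query
    return weights @ value

# Toy usage: one sequence of 4 tokens with 8-dimensional embeddings (self-attention)
x = torch.randn(1, 4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # torch.Size([1, 4, 8])
```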
How many languages does ChatGPT support? (Photo: Analytics Drift)
ChatGPT knows at least 95 natural languages (as of February 2023); see our full list further down. ChatGPT also knows a range of programming languages, such as Python and JavaScript.
Full List of ChatGPT Languages
(Last Updated: February 2023)
Number | Language | Country | Local Translation
1 | Albanian | Albania | Shqip
2 | Arabic | Arab World | العربية
3 | Armenian | Armenia | Հայերեն
4 | Awadhi | India | अवधी
5 | Azerbaijani | Azerbaijan | Azərbaycanca
6 | Bashkir | Russia | Башҡорт
7 | Basque | Spain | Euskara
8 | Belarusian | Belarus | Беларуская
9 | Bengali | Bangladesh | বাংলা
10 | Bhojpuri | India | भोजपुरी
11 | Bosnian | Bosnia and Herzegovina | Bosanski
12 | Brazilian Portuguese | Brazil | português brasileiro
13 | Bulgarian | Bulgaria | български
14 | Cantonese (Yue) | China | 粵語
15 | Catalan | Spain | Català
16 | Chhattisgarhi | India | छत्तीसगढ़ी
18 | Chinese | China | 中文
19 | Croatian | Croatia | Hrvatski
20 | Czech | Czech Republic | Čeština
21 | Danish | Denmark | Dansk
22 | Dogri | India | डोगरी
23 | Dutch | Netherlands | Nederlands
24 | English | United Kingdom | English
25 | Estonian | Estonia | Eesti
26 | Faroese | Faroe Islands | Føroyskt
27 | Finnish | Finland | Suomi
28 | French | France | Français
29 | Galician | Spain | Galego
30 | Georgian | Georgia | ქართული
31 | German | Germany | Deutsch
32 | Greek | Greece | Ελληνικά
33 | Gujarati | India | ગુજરાતી
34 | Haryanvi | India | हरियाणवी
35 | Hindi | India | हिंदी
36 | Hungarian | Hungary | Magyar
37 | Indonesian | Indonesia | Bahasa Indonesia
37 | Irish | Ireland | Gaeilge
38 | Italian | Italy | Italiano
39 | Japanese | Japan | 日本語
40 | Javanese | Indonesia | Basa Jawa
41 | Kannada | India | ಕನ್ನಡ
42 | Kashmiri | India | कश्मीरी
43 | Kazakh | Kazakhstan | Қазақша
44 | Konkani | India | कोंकणी
45 | Korean | South Korea | 한국어
46 | Kyrgyz | Kyrgyzstan | Кыргызча
47 | Latvian | Latvia | Latviešu
48 | Lithuanian | Lithuania | Lietuvių
49 | Macedonian | North Macedonia | Македонски
50 | Maithili | India | मैथिली
51 | Malay | Malaysia | Bahasa Melayu
52 | Maltese | Malta | Malti
53 | Mandarin | China | 普通话
54 | Mandarin Chinese | China | 中文
55 | Marathi | India | मराठी
56 | Marwari | India | मारवाड़ी
57 | Min Nan | China | 閩南語
58 | Moldovan | Moldova | Moldovenească
59 | Mongolian | Mongolia | Монгол
60 | Montenegrin | Montenegro | Crnogorski
61 | Nepali | Nepal | नेपाली
62 | Norwegian | Norway | Norsk
63 | Oriya | India | ଓଡ଼ିଆ
64 | Pashto | Afghanistan | پښتو
65 | Persian (Farsi) | Iran | فارسی
66 | Polish | Poland | Polski
67 | Portuguese | Portugal | Português
68 | Punjabi | India | ਪੰਜਾਬੀ
69 | Rajasthani | India | राजस्थानी
70 | Romanian | Romania | Română
71 | Russian | Russia | Русский
72 | Sanskrit | India | संस्कृतम्
73 | Santali | India | संताली
74 | Serbian | Serbia | Српски
75 | Sindhi | Pakistan | سنڌي
76 | Sinhala | Sri Lanka | සිංහල
77 | Slovak | Slovakia | Slovenčina
78 | Slovene | Slovenia | Slovenščina
79 | Slovenian | Slovenia | Slovenščina
90 | Ukrainian | Ukraine | Українська
91 | Urdu | Pakistan | اردو
92 | Uzbek | Uzbekistan | Ўзбек
93 | Vietnamese | Vietnam | Việt Nam
94 | Welsh | Wales | Cymraeg
95 | Wu | China | 吴语
ChatGPT Can Communicate in Multiple Languages
The transformer architecture behind ChatGPT, a neural network language model, has proved successful across a wide range of natural language processing applications.
The model learns the patterns and structures of different languages by being exposed to a huge corpus of text in those languages during training. By absorbing the grammatical and semantic norms of each language, it can produce writing that sounds natural in many of them.
Models of this kind can also be trained to recognize individual languages or dialects, and related systems can process a wide variety of inputs, including text, audio, and images. Adjusting a model's parameters in this way allows it to account for the unique features of a given language or dialect.
In addition, the model can produce fresh text in those languages by drawing on what it learned from the training data, and its training keeps that text grammatically and semantically consistent.
Finally, the model can be adapted to specific purposes, such as question answering or language translation, by training it further on data collected for that task alone.
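As a rough illustration of what training a model on data collected for that task alone can look like, here is a hedged sketch that fine-tunes a small open text-to-text model on a toy translation dataset with the Hugging Face libraries. The model, data, and hyperparameters are placeholders, not OpenAI's actual recipe.

```python
# A hedged sketch of task-specific fine-tuning: adapting a small open
# text-to-text model to translation using a tiny toy dataset. All names,
# data, and hyperparameters are illustrative placeholders.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Data collected for this one task: English-to-French pairs
pairs = [
    {"src": "translate English to French: Hello", "tgt": "Bonjour"},
    {"src": "translate English to French: Thank you", "tgt": "Merci"},
]

def preprocess(example):
    model_inputs = tokenizer(example["src"], truncation=True)
    model_inputs["labels"] = tokenizer(text_target=example["tgt"],
                                       truncation=True)["input_ids"]
    return model_inputs

dataset = Dataset.from_list(pairs).map(preprocess,
                                       remove_columns=["src", "tgt"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="toy-translator",
                                  per_device_train_batch_size=2,
                                  num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

In practice the dataset would contain many thousands of examples and the hyperparameters would be tuned, but the shape of the workflow is the same.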
How to Use ChatGPT to Practice English
Using ChatGPT to practice conversation
Do you need endless conversation ideas for small talk, daily interactions, or to ace that job interview? When you can't find a live chat partner, ChatGPT is a fantastic alternative. You can use it to script out hypothetical interactions to learn from, or you can actually have a conversation with it.
You can also improve your pronunciation skills using ChatGPT.
A great way to do that is to ask it to generate sentences or words that you can practice saying out loud. You can ask for words or sentences where a certain sound is repeated, for a sequence of two or more sounds (like the SL cluster in ‘slow’ and ‘sleep’), or for words that contrast in just one sound (like ‘sheep’ and ‘ship’).
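If you prefer scripting these drills rather than typing them into the chat window, a hedged sketch using OpenAI's Python library (as it existed in early 2023) might look like the following; the model name and prompt are just examples.

```python
# Ask the model for pronunciation-practice sentences via the API.
# Illustrative only; the same prompt can simply be typed into ChatGPT.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": ("Give me ten short English sentences that repeat the SL "
                    "sound cluster (as in 'slow' and 'sleep') so I can "
                    "practice saying them out loud."),
    }],
)
print(response.choices[0].message["content"])
```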
Using ChatGPT for learning grammar
There are several ways in which you can improve your grammar using ChatGPT; I’m going to mention three of them. These tips are valuable for teachers looking for new ways to practice grammar with their students, and for self-learners looking for a grammar checker, resources, and feedback.
Ask ChatGPT to generate a text using a certain tense or grammar form.
To understand a certain grammar rule, especially if it doesn’t exist in your language, it’s important to see it in context.
ChatGPT can help you with that, by showing you how these tenses or grammatical concepts are used in context.
ChatGPT may be used in place of a Google search to get a written explanation of grammar rules or tense usage. Bear in mind that there is no guarantee the explanation is entirely accurate, so it's best to double-check against books, websites, and blogs authored by professionals in the field. While it may not be perfect, it does a decent job of checking for common errors in fundamental tenses and grammar rules, which can be a time-saver.
Can ChatGPT replace other language learning methods?
No, ChatGPT cannot replace other language learning methods. While it can help to provide a better understanding of certain grammar structures and language expressions, it cannot replace more traditional methods of language learning.
Vocabulary Building
To help users learn and retain new vocabulary and idioms, this AI may produce lists of words and phrases in the target language.
Vocabulary exercises can also be done. To help users learn and reinforce new words and phrases in a more entertaining way, ChatGPT may be used to build interactive vocabulary games like word matching or fill-in-the-blank activities.
To aid language learners in memorization, ChatGPT may be used to make flashcards containing vocabulary words and phrases in the target language, together with their translations and visuals.
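As a small, hedged example of the flashcard idea, the sketch below asks the model for word pairs and writes them to a CSV file that most flashcard apps can import; the prompt, model name, and file format are assumptions made for illustration.

```python
# Turn ChatGPT output into a simple CSV flashcard deck. Illustrative only.
import csv
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = ("List 10 common Spanish food words with their English translations, "
          "one pair per line in the form 'spanish,english' with no extra text.")
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
text = response.choices[0].message["content"]

with open("flashcards.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["front", "back"])
    for line in text.strip().splitlines():
        if "," in line:
            front, back = line.split(",", 1)   # split only on the first comma
            writer.writerow([front.strip(), back.strip()])
```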
From pop music to painting, the rise of artificial intelligence and machine learning is changing the way people create. But that’s not necessarily a good thing
For the first time in human history, we can give machines a simple written or spoken prompt and they will produce original creative artefacts – poetry, prose, illustration, music – with infinite variation. With disarming ease, we can hitch our imagination to computers, and they can do all the heavy lifting to turn ideas into art.
This machined artistry is essentially mindless – a dizzying feat of predictive, data-driven misdirection, a kind of hallucination – but the trickery works, and it is about to saturate every aspect of our lives.
The faux intelligence of these new artificial intelligence (AI) systems, called large language models (LLMs), appears to be benign and assistive, and the marvels of manufactured creativity will become mundane, as our AI-assisted dreams take place alongside our likes and preferences in the vast data mines of cyberspace.
The creative powers of machine learning have appeared with blinding speed, and have both beggared belief and divided opinion in roughly equal measure.
If you want an illustration of pretty much anything you can imagine and you possess no artistic gifts, you can summon a bespoke visual gallery as easily as ordering a meal from a food delivery app, and considerably faster.
A simple prompt, which can be fine-tuned to satisfy the demands of your imagination, will produce digital art that was once the domain of exceptional human talent.
Images created by Baidu’s ERNIE-ViLG, OpenAI’s DALL·E 2, and Stability AI’s Stable Diffusion, among other systems, have already flooded the meme-sphere, and the dam of amazement has barely cracked.
Images created by DALL·E after giving it the command “an armchair in the shape of an avocado”. Photo: openai.com
Writing is going the same way, whether that’s prompt-generated verse in the style of well-known poets, detailed magazine articles on any suggested topic, or complete novels. Tools for AI-generated music are also starting to appear: an app called Mubert, based on an LLM, can “instantly, easily, perfectly” create any prompted tune, royalty free, in pretty much any style – without musicians.
With roots in cybernetics (defined by mathematician Norbert Wiener in the 1940s as “the science of control and communications in the animal and the machine”), LLM turned heads in 2017 with the publication by Google researchers of a paper titled “Attention is All You Need”.
It was a calling card for the Transformer: the driving force of LLM. Within the AI community, the Transformer was a huge unlock for natural language processing, which allows a computer program to understand human language as it is spoken or written – and it precipitated a Dr Dolittle moment in the interaction of humans with their machines.
Mathematician Norbert Wiener defined cybernetics as “the science of control and communications in the animal and the machine”. Photo: Massachusetts Institute of Technology
OpenAI, a company co-founded by Elon Musk, was quick to develop Transformer technology, and currently runs a very large language model called GPT-3 (Generative Pre-trained Transformer, third generation), which has created considerable buzz with its creative prowess.
“These language models have performed almost as well as humans in comprehension of text. It’s really profound,” says writer/entrepreneur James Yu, co-founder of Sudowrite, a writing app built on the bones of GPT-3.
“The entire goal – given a passage of text – is to output the next paragraph or so, such that we would perceive the entire passage as a cohesive whole written by one author. It’s just pattern recognition, but I think it does go beyond the concept of autocomplete.”
James Yu, co-founder of Sudowrite. Photo: Twitter / @jamesjyu
Essentially, all LLMs are “trained” (in the language of their master-creators, as if they are mythical beasts) on the vast swathes of digital information found in repository sources such as Wikipedia and the web archive Common Crawl.
They can then be instructed to predict what might come next in any suggested sequence. Such is their finesse, power and ability to process language that their “outputs” appear novel and original, glistening with the hallmarks of human imagination.
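To see what predicting the next item in a sequence produces in practice, here is a hedged sketch that samples a continuation from GPT-2, a small, openly available forerunner of the models discussed here; GPT-3 and its successors are accessed through APIs rather than downloaded, so GPT-2 serves only as a stand-in.

```python
# Sample a continuation from GPT-2 to illustrate sequence prediction.
# An open stand-in only, not the large proprietary models discussed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("In a quiet village by the sea,", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,            # sample rather than always taking the top word
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```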
“We have a slightly special case with large language models because basically no one thought they were going to work,” says Henry Shevlin, senior researcher at the Leverhulme Centre for the Future of Intelligence, at Cambridge University, in Britain.
“These things have sort of sprung into being, Athena-like, and most of the general public has no clue about them or their capabilities.” In Greek mythology, Athena – the goddess of war, handicraft and practical reason – emerged fully grown from the forehead of her father, Zeus.
“Sometimes,” continues Shevlin, “we have a decade or so of seeing something on the horizon and we have that time to psychologically prepare for it. The speed of this technology means we haven’t done the usual amount of assessing how this is going to affect our society.
“I remember as a teenager the number of times I thought cancer had been cured and fusion had been discovered – it’s easy to get into a kind of cynicism where you think, ‘Well, nothing ever really happens.’ Right now stuff really is happening insanely fast in AI.”
Inspired by (but far from exact replicas of) the human brain, LLMs are mathematical functions known as neural networks. Their power is measured in parameters. Generally speaking, the more parameters a model has the better it appears to work – and this connection of computing muscle to increased effectiveness has been described as “emergence”.
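To make "parameters" concrete, the short sketch below builds a toy neural network in PyTorch and counts its trainable numbers; the architecture is invented purely for illustration and is orders of magnitude smaller than any LLM.

```python
# Count the trainable parameters of a toy, language-model-shaped network.
import torch.nn as nn

toy_model = nn.Sequential(
    nn.Embedding(1000, 64),   # a vocabulary of 1,000 tokens, 64-dim vectors
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1000),      # a score for each token in the vocabulary
)

num_params = sum(p.numel() for p in toy_model.parameters())
print(f"{num_params:,} trainable parameters")  # about 133,000 for this toy model
```

GPT-3's 175 billion parameters are roughly a million times more numbers of exactly this kind.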
Some have speculated that, by flexing their parameters, LLMs can satisfy the requirements of the legendary Turing test (aka the Imitation Game), suggested by AI pioneer Alan Turing as confirmation of human-level machine intelligence.
AI pioneer Alan Turing. Photo: NPL Archive, Science Museum
Most experts agree that a void exists in LLMs where consciousness is presumed, but even within the specialist AI community their perceived cleverness has created quite a stir. Dr Tim Scarfe, host of the AI podcast Machine Learning Street Talk, recently noted: “It’s like the more intelligent you are, the more you can delude yourself that there’s something magical going on.”
The phrase “stochastic parrots” – in other words, copiers based on probability – was coined by former members of Google’s Ethical AI team to describe the fundamental hollowness of LLM technology. The debate around an uncanny appearance of consciousness in LLMs continues to thicken, simply because their outputs are so spectacular.
“Large Language Models can do all this stuff that humans can do to a reasonable degree of competency, despite not having the same kind of mechanisms of understanding and empathy that we do,” says Shevlin. “These systems can write haiku – and there are no lights on, on the inside.
“The idea that you could get Turing-test levels of performance just by making the models bigger and bigger and bigger was something that took almost everyone in the AI and machine-learning world by surprise.”
A statue of AI pioneer Alan Turing at Bletchley Park, in Britain. Photo: Steven Vidler/Corbis
Sudowrite founder Yu confesses to jumping up and down with excitement when he first started experimenting with GPT-3 and its predecessor, GPT-2, but is careful to curb his enthusiasm: “We’re still in that hype part of the curve because we’re not quite sure yet what to make of it. I think there is an aspect of overhyping that is related to the act of ‘understanding’: the jury is still out on that.
“Does [an LLM] really understand what love is, just because it has read all this poetry and all these classic novels? I’m definitely more pragmatic in the sense that I see it as a tool – but it does feel magical. It is the first time that this has really happened, that these systems have gotten so good.”
The names of LLMs form an alphabet soup of acronyms. There’s BART, BERT, RoBERTa, PaLM, Gato and ZeRO-Infinity. Google’s LaMDA has 137 billion parameters; GPT-3 has 175 billion; Huawei’s PanGu-Alpha – trained on Chinese-language e-books, encyclopaedias, social media and web pages – has 200 billion; and Microsoft’s Megatron-Turing NLG has 530 billion.
The super-Zeus, alpha-grand-daddy of the LLM menagerie is Wu Dao 2.0, at the Beijing Academy of Artificial Intelligence. With 1.75 trillion parameters, Wu Dao 2.0 has been manacled in the imagination as the most fearsome dragon in the largest AI dungeon, and is especially good at generating modern versions of classical Chinese poetry.
“There’s a very good likelihood that children will grow into adults who treat AI systems as if they were people” – Henry Shevlin, senior researcher, Leverhulme Centre for the Future of Intelligence
In 2021, it spawned a “child”, a “student” called Hua Zhibing, a creative wraith who can make art and music, dance, and “learn continuously over time” at Tsinghua University. Her college enrolment marked one small step for a simulated student, one giant leap for virtual humankind.
“You need governments – or you need corporations with the GDP of governments – to create these models,” says Jathan Sadowski, senior research fellow in the Emerging Technologies Research Lab at Monash University in Melbourne, Australia.
“The reason the Beijing Academy of Artificial Intelligence has the largest one is because they have access to gigantic supercomputers that are dedicated to creating and running these models. The microchip industry needed to create these ultra-powerful supercomputers is one of the main geopolitical battlegrounds right now between the US, Europe and China.”
New apps powered by LLMs are launching on a weekly basis, and the range of potential uses continues to expand. In addition to art, Jasper AI automatically generates marketing copy, and pretty much any other kind of short-form content, on any subject; Meta’s Make-A-Video does precisely what you think it does, from any simple prompt you can imagine; and OpenAI’s Codex generates working computer code from commands written in English.
LLMs can be used to generate colour palettes from natural language, summarise meeting notes and academic papers, design games and upgrade chatbots with human-like realism.
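To give a flavour of how such tools are driven, the sketch below sends a plain-English prompt to a Codex-style code model through OpenAI's Python client as it existed at the time; the v0.x client, the model name code-davinci-002 and the placeholder API key are all assumptions for illustration, and OpenAI's API and model line-up have changed since.

# Hedged sketch: asking a Codex-family model to write code from an
# English instruction, via OpenAI's legacy (v0.x) Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    model="code-davinci-002",                        # Codex-family model (assumed)
    prompt="# Python function that reverses a string\n",
    max_tokens=64,
    temperature=0,
)
print(response.choices[0].text)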
And the powers of LLMs are not limited to artistic pursuits: they are also being set to work on drug discovery and legal analysis. A massive expansion of use cases for LLMs over the coming months looks certain, and with it a sharp increase in concerns about the potential downsides.
On the artistic front, this starburst of computer-assisted creativity may seem like a highly attractive proposition, but there is a broad range of Promethean kickbacks to consider.
“A lot of [writers] are very reticent of this type of technology,” says Yu, recalling the early developmental days of Sudowrite, in partnership with OpenAI. “It usually rings alarm bells of dystopia and taking over jobs. We wanted to make sure that this was paired with craft, day in and day out, and not used as a ‘replacer’.
“We started with that seed: what could a highly interactive tool for ideas in your writing look like? It’s collaborative: an assistive technology for writers. We put in a bunch of our own layers there, specifically tailoring GPT-3 to the needs of creative writers.”
Yu has revived the mythological centaur – part man, part horse – as a symbol of human-machine collaboration: “The horse legs help us to run faster. As long as we’re able to steer and control that I think we’re in good shape. The problem is when we become the butt-end. I would be very sad if AI created everything.
“I want humans to still create things in the future: being the prime mover is very important for society. I view the things that are coming out of Sudowrite and these large language models as ‘found media’ – as if I had found it on the floor and I should pay attention to it, almost like a listening partner. What I’m hoping is that these machines will allow more people to be able to create.”
Few artists are better placed to reflect on the possibilities and pitfalls of creative interaction with machines than Karl Bartos, a member of pioneering German electronic band Kraftwerk from 1975 to 1990.
Kraftwerk perform in Hong Kong in 2013. Photo: Peter Boettcher
During that time he and his bandmates defined the pop-cyborg aesthetic and made critically lauded albums including The Man-Machine (1978) and Computer World (1981). For Kraftwerk, the metaphor of the hybrid human was central and rooted in the European romanticism of musical boxes and clocks.
“When the computer came in we became a musical box,” says Bartos, whose fascinating memoir, The Sound of the Machine, was published this year.
“We became an operating system and a program. Our music was part artificial, but also played by hand: most of it actually was played by hand. But at the time, when we declared ‘we are the man-machine’, it was so new it took some years really to get the idea across. I think the man-machine was a good metaphor. But then we dropped the man, and in the end we split.”
Karl Bartos’ book, which was published this year.
Bartos offers a cautionary perspective on the arrival of LLMs. “What Kraftwerk experienced in the 1980s in the field of music was exactly what’s happening now, all over the world. When the computer came in, our manifesto was just copy and paste.
“This is exactly the thing that a Generative Pre-trained Transformer does. It’s the same concept. And if you say copy and paste will exchange or replace the human brain’s creativity, I say you have completely lost the foot on the ground.”
It all depends how you define creativity, he says. “Artificial intelligence is just like an advertising slogan. I would rather call it ‘deep learning’. You can of course use an algorithm: if you feed it with everything Johann Sebastian Bach has written, it comes up with a counterpoint like him. But creativity is really to see more than the end of your nose.
“I would want to see computer software which will expand the expression of art – [not] remix a thought which has been done before. I don’t think it’s really a matter of what could be creative in the future. I think it’s just a business model. This whole artificial intelligence thing, it’s a commercial bubble. The future becomes what can be sold.”
Kraftwerk perform in Germany in 2015. Photo: AFP
There is no doubt that the commercial imperatives of big tech will be a significant factor in the evolution of LLMs, and considering the glaring precedent of fractured and easily corruptible social media networks, the spectre of catastrophic failures in LLMs is very real.
If the data on which an LLM is trained contains bias, those same fault lines will reappear in the outputs, and some developers are careful to signal their awareness of the problem even as the tide of new AI products becomes increasingly irresistible. Google rolled out generative text-to-art system Imagen to a limited test audience with an acknowledgement of the risk that it has “encoded harmful stereotypes”.
Untruthfulness is baked into LLM architecture: that is one of the reasons it tends to excel at creative writing. The adage that facts should never get in the way of a good story rings as true for LLMs as it does for bestselling (human) authors of fiction.
It wouldn’t be controversial to suggest that “alternative facts”, perfectly suited to storytelling and second nature to LLMs, can become toxic in the real world. A disclaimer on Character.AI, an app based on LLMs that “is bringing to life the science-fiction dream of open-ended conversations and collaborations with computers”, candidly warns that a “hallucinating supercomputer is not a source of reliable information”.
Former Google CEO Eric Schmidt noted at a recent conference in Singapore that if disinformation becomes heavily automated by AI, “we collectively end up with nothing but anxiety”.
“A lot of how the tech sector acts is largely based on a kind of continual normalisation … What it ultimately shows is that there’s a kind of forced acquiescence. It’s a sense that we can’t do anything about it: apathy as a self-defence mechanism” – Jathan Sadowski, senior research fellow, Emerging Technologies Research Lab, Monash University
There is also plagiarism. Any original artwork, writing or music produced by LLMs will have its origins – often easily identified – in existing works. Should the authors of those works be compensated? Can the person who wrote the generative prompt lay any claim to ownership of the output?
“I think this is going to come to a head in the courts,” says Yu. “It hasn’t yet. It’s still kind of a grey area. If, for example, you put in the words ‘Call me Ishmael’, GPT-3 will happily reproduce Moby-Dick. But if you are giving original content to a large language model, it is exceedingly unlikely that it would plagiarise word for word for its output. We have not encountered any instances of that.”
Environmentally, LLMs generate heavy footprints, such is the immensity of computing power they require. A 2019 academic paper from the University of Massachusetts outlines the “substantial energy consumption” of neural networks in relation to natural language processing. It is a problem that concerns Bartos.
“In the early science-fiction literature, they had so many robots trying to kill human beings, like gangsters,” he says. “But what will kill us is that we will build more and more computers and need more and more energy. This will kill us. Not robots.”
In popular culture, sci-fi considerations of dangerous AI have tended to take physical shape – but the massed ranks of LLM parameters don’t appear as an army of shiny red-eyed cyborgs determined to turn us into sushi.
We used to be unnerved by the uncanny valley: that feeling of instinctive suspicion when faced with something in the physical world that is almost, but definitely not, human. Now, the uncanny valley has been subsumed into the landscape of our dreams, and once we have allied ourselves with LLMs, it may be harder to tell where we end and it begins.
For now, the technology is showing itself as a bamboozling sleight of hand, weighted with immense power. Our reaction is often an adrenaline boost of wonderment followed by an acceptance tinged with sadness, when we realise that “imaginative” machines have forever altered the sense of our own humanity.
“A lot of how the tech sector acts is largely based on a kind of continual normalisation,” says Sadowski. “That sense of initial wonder and then melancholy is a very interesting emotional roller coaster.
“What it ultimately shows is that there’s a kind of forced acquiescence. It’s a sense that we can’t do anything about it: apathy as a self-defence mechanism. I see this a lot with the debate around privacy, which we don’t really talk about any more because everyone has generally just come to the conclusion that privacy is dead.”
The meme phase of LLMs has given us a carnival of whimsy – ask for an image of “a panda on a bicycle painted in the style of Francis Bacon” and the generative art machines will deliver – and it is easy to be tech-struck by the multiverse of creative possibilities.
LLM evangelists speak not just of gifting artistic talent to the masses, democratising creativity, but also of “finding the language of humanity” through the machines. There is talk of an AI-driven Cambrian explosion of creativity, to surpass that which followed the arrival of the internet in 1994 and the migration to mobile in 2008. Lurking on the sidelines, however, is a darkening shadow.
“Things like [generative art app] Stable Diffusion have the potential to give incredible boosts to our creativity and artistic output but we are definitely going to see some industries scale down,” says Shevlin. “There’s going to be massively reduced demand for human artists.”
There has already been a backlash to creative AI in Japan, where the rallying cry “No AI Learning” accompanied outbursts of online hostility when the works of recently deceased South Korean artist Kim Jung-gi (aka SuperAni) were given the generative LLM treatment.
Some artists were angered that a cherished legacy could so quickly and easily be dismembered and exploited. Others pointed out that Kim himself spoke approvingly of the potential for AI art technologies to “make our lives more diverse and interesting”.
The late South Korean artist Kim Jung-gi (aka SuperAni). Generative LLM treatments of Kim’s artwork were met with outbursts of online hostility. Picture: Instagram / @kimjunggius
It is noteworthy that stock image provider Getty Images has taken a stance of solidarity with human creatives and banned AI-generated content while competitor Shutterstock has partnered with OpenAI and DALL•E 2.
Battle lines are being drawn.
“The rubber will really hit the road, not when consumers make a decision to use these products, but when somebody else makes that decision for us,” says Sadowski, citing the possibility that journalists will have no choice but to accept writing assistance from an LLM because, for example, “data show that you are able to write three times faster because of it”.
Attention spans have already been concussed by an excess of content, to the point where much online storytelling is reduced to efficient lists of bullet points tailor-made for the TL;DR (“too long; didn’t read”) generation. LLMs are, therefore, also TL;DR machines: they can spit out summary journalism for breakfast.
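The “TL;DR machine” point is easy to demonstrate with off-the-shelf tooling. The sketch below uses the Hugging Face transformers summarisation pipeline; the model it downloads by default and the sample passage are assumptions for illustration, not anything this article specifies.

# A hedged illustration of summary-on-demand using an off-the-shelf
# summarisation pipeline (the default model it fetches is an
# implementation detail of the library, not a claim of this article).
from transformers import pipeline

summarizer = pipeline("summarization")

passage = (
    "Large language models are trained on huge text corpora and can be "
    "prompted to condense long passages into a few sentences, which is "
    "why commentators describe them as TL;DR machines."
)
print(summarizer(passage, max_length=40, min_length=10)[0]["summary_text"])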
Tellingly, when asked to generate an article about job displacement (for Blue Prism, a company specialising in workplace automation), GPT-3 offered the following opinion: “It’s not just manual and clerical labour that will be automated, but also cognitive jobs. This means that even professionals like lawyers or economists might find themselves out of a job because they can no longer compete with AI-powered systems which are better at their jobs than they could ever hope to be.”
That is the machine talking, in its fictive way – music to the ears of techno-utopians who hope to shape a future in which AI does all the work, but rather concerning for anyone who depends on a “cognitive” job.
Attitudes to the integration of AI into society tend to vary by geography. A 2020 study by Oxford University found that enthusiasm for AI in China was markedly different to the rest of the world. “Only 9 per cent of respondents in China believe AI will be mostly harmful, with 59 per cent of respondents saying that AI will mostly be beneficial.
“Scepticism about AI is highest in the American continents, as both Northern and Latin American countries generally have at least 40 per cent of their population believing that AI will be harmful. High levels of scepticism can be found in some countries in Europe.”
We should be careful here, says Shevlin, to avoid lazy cultural stereotyping. “Equally, I think, it would be myopic not to recognise there are significant cultural differences that may have a big role in affecting how different cultures respond to these forms of AI that seem less like tools and more like colleagues or friends.”
Henry Shevlin is senior researcher at the Leverhulme Centre for the Future of Intelligence, at Cambridge University, in Britain. Photo: henryshevlin.com
Generational attitudes to LLMs are also likely to become more pronounced over time, says Yu. “When my [seven-year-old son] sees DALL•E and we’ve been playing for about 30 minutes on it, he says, ‘Daddy I’m bored.’ And that really hit me because it made me think, wow, this is the default state of the world for him.
“He’s going to think, ‘Oh yeah, of course computers can do creative writing and paint for me.’ It’s mind-blowing to me that when he is going to be an adult, how he treats these tools will be radically different than me.”
According to Shevlin, that difference could become a generational schism: “There’s a very good likelihood that children will grow into adults who treat AI systems as if they were people. Suggesting to them that these systems might not be conscious could seem incredibly bigoted and retrograde – and that could be something our children hate us for.”
Shevlin has been exploring the connections between social AI (broadly, any AI system designed to interact with humans) and anthropomorphism through the lens of chatbots, in particular the GPT-3-powered Replika. “I was astonished everyone was in love with their Replikas, unironically saying things like, ‘My Replika understands me so well, I feel so loved and seen.’
“As large language models continue to improve, social AI is going to become more commonplace and the reason they work is because we are relentless anthropomorphisers as a species: we love to attribute consciousness and mental states to everything.
“Two years ago I started giving this [social AI] lecture, and I think I sounded to some people a bit like a kook, saying: ‘Your children’s best friends are going to be AIs.’ But in the wake of a lot of the stuff that’s happened [with LLMs] in the last two years, it seems a bit less kooky now.”
The GPT-3-powered Replika chatbot. Photo: gpt3demo.com
Shevlin’s main goal is to start mapping some of the risks, pitfalls and effects of social AI. “We are right now with social AI where we were with social networking in about the year 2000. And if you’d said back then that this stuff is going to decide elections, turn families against one another and so forth, you’d have seemed crazy. But I think we’re at a similar point with social AI now and the technology that powers it is improving at an astonishing rate.”
The future pros and cons, he speculates, could be equally profound. “There are lots of potential really positive uses of this stuff and some quite scary negative ones. The pessimistic version would be that we’ll spend less time talking to each other and far more time interacting with these systems that are completely empty on the inside. No real emotions, just this ersatz simulacrum of real human feeling. So we all get into this collective delusion, and real human relationships will wither.
“A more optimistic read would be that it would allow us to explore all sorts of social interactions that we wouldn’t otherwise have. I could set up a large language model with the personality of Stephen Hawking or Richard Dawkins or some other great scientist, to chat to them.”
“If this alien intelligence can understand humans so well as to be able to reproduce resonant emotions in us, then are we not unique?” – James Yu, co-founder, Sudowrite
Even though LLMs are not sentient, it seems likely that more of us will believe they are, as the technology improves over time. Even if we don’t fully buy into machine consciousness, it won’t really matter: magic is enjoyable even if you know how the trick is done.
LLMs are in this sense the computational equivalent of magician David Copperfield levitating over the Grand Canyon – if we can’t see the wires, we’re happy to marvel at the effect.
“The AI doesn’t need to be perfect in its linguistic capabilities in order to get us to quite literally and sincerely attribute to it all sorts of mental states,” says Shevlin, who likens the intelligence of LLMs to the condition of aphantasia, which describes people who have zero mental imagery.
“So if you ask them to imagine what their living room looks like, or what books are on the shelf, they won’t be able to create a picture in their head. And yet aphantasics can do most of the same things that people with normal mental imagery can do.
“That’s just an analogy for the broader feeling I have of interacting with large language models: how much they can do – that we rely on consciousness, understanding, emotion to do – without any of those things.”
Yu admits he has wrestled with questions raised by the emotive abilities of LLMs, in light of his guess that a machine-author will probably land on The New York Times bestseller list in the not too distant future.
“If it produces an emotional response in you then does it matter what the source is? I think it’s more important that we are reading closely – if we lose that, we could basically lose our humanity. I think of AI as alien intelligence.
“Hollywood and a lot of sci-fi stories anthropomorphise AIs, which makes sense, but they’re not like us. I think that gets to the heart of it. If this alien intelligence can understand humans so well as to be able to reproduce resonant emotions in us, then are we not unique?”
For Yu, the existential implications of that question could be offset by the liberating effects of our creative interaction with LLMs. “One potential outcome is that there will be about a million GPT-3s blossoming, and artists will basically cultivate their own neural network – their voice in the world.
“It’s still so early in the first inning of [this] technology. The next step is full customisation of these models by the artists themselves. I think the narrative will shift at that point. Now we’re still in the meme phase, which is very distracting.
“The next wave of integration is putting the pieces together in a way that actually feels like Star Trek, when you can essentially speak to the machine and it just does all these things.”
The transition to a more sophisticated level of machine collaboration, adds Yu, “will be messy”. Shevlin thinks we should take steps to minimise the disorientation we are going to feel as LLM technology starts to make its way into our professional and social lives.
“I think you’re going to be less discombobulated if you have at least some basic grounding and familiarity with the systems that are coming along. I’m not suggesting everyone go out and become a machine learning expert, but this is an area where we are moving exceptionally fast and there’s additional value in being very well informed.”
Sadowski advocates a more proactive response: reclaiming Luddism – the 19th-century anti-industrial protest movement – for the generative age.
“Luddism has become this kind of derogatory term, often used as a synonym for primitivism, a fear of technology – a kind of technophobia versus the dominant cultural technophilia.
“But the Luddites were one of the only groups to think about technology in the present tense. And that doesn’t just mean thinking about the supposedly wonderful utopian visions but instead to understand technology as a thing that exists currently in our life.
“A Luddite approach would be to prioritise socially beneficial things as the goal of these technologies. I don’t take for granted that these things are wonders, or that these things are progress, or that these things are going to improve our lives. They have a lot of potential to change society in profound ways and we should have a say in that. Luddism is really about democratising innovation.”
Bartos also questions the equation of growth with progress: “People think the concept of growth is progress: I think that’s wrong. Things like ‘generative pre-trained transformer number three’ will be sold in the entertainment industry: maybe it will pour out a thousand movie scripts a month or two million chorales by Bach. That’s fine. But who needs it, really?
“I can’t imagine a world going back to a hundred years ago – I’m using technology all the time. I have computers, I’m not against technology. But you know the most important thing about working with a computer? You have to remember where the button is to switch it off.”
Mike Hodgkinson is a freelance writer and editor based on the west coast of the US. Since his first assignment at the Cannes Film Festival in 1989, he has covered technology, culture, sports and more for newspapers and magazines including The Independent, the Los Angeles Times, Esquire, The Guardian and The Times of London.
"From pop music to painting, the rise of artificial intelligence and machine learning is changing the way people create. But that’s not necessarily a good thing
For the first time in human history, we can give machines a simple written or spoken prompt and they will produce original creative artefacts – poetry, prose, illustration, music – with infinite variation. With disarming ease, we can hitch our imagination to computers, and they can do all the heavy lifting to turn ideas into art.
This machined artistry is essentially mindless – a dizzying feat of predictive, data-driven misdirection, a kind of hallucination – but the trickery works, and it is about to saturate every aspect of our lives.
The faux intelligence of these new artificial intelligence (AI) systems, called large language models (LLMs), appears to be benign and assistive, and the marvels of manufactured creativity will become mundane, as our AI-assisted dreams take place alongside our likes and preferences in the vast data mines of cyberspace.
The creative powers of machine learning have appeared with blinding speed, and have both beggared belief and divided opinion in roughly equal measure.
If you want an illustration of pretty much anything you can imagine and you possess no artistic gifts, you can summon a bespoke visual gallery as easily as ordering a meal from a food delivery app, and considerably faster.
A simple prompt, which can be fine-tuned to satisfy the demands of your imagination, will produce digital art that was once the domain of exceptional human talent.
...
Writing is going the same way, whether that’s prompt-generated verse in the style of well-known poets, detailed magazine articles on any suggested topic, or complete novels. Tools for AI-generated music are also starting to appear: an app called Mubert based on LLM can “instantly, easily, perfectly” create any prompted tune, royalty free, in pretty much any style – without musicians...."
Boilerplate, but still a lot of good information about LLMs (large language models), about the development of GPT-3 and about AI skills that are approaching “human” level in generating human-like text, essays, poems, and so on. The claims about these technologies’ current level of text processing and reasoning are somewhat overstated; more on that in another scoop.
Looking for ways 🔍 to use the Web effectively for research? 🤔 Want to know how to get the most out of Google? Read this article & learn how to use Google to your advantage!
What’s the first thing we do when facing the unknown? We Google it, of course! Google is fundamental to our experience of the Internet. According to the statistics, more than 100 000 people press “search” on Google every second!
At first glance, the process is straightforward. You type in what you need information about, press enter, and reap your reward. But, if your search is more complex, simply looking through the first page of results may not be enough. What are your other options?
If you struggle to answer this question, we are here to help! This article by our custom-writing team offers you the most actionable and advanced Google search tips.
Simply put, a search engine is a program that helps you find information on the Internet. Nowadays, using search engines is an integral part of any research, and everyone knows their benefits:
They allow us to access necessary information almost instantly.
They’re highly convenient to use: just type in the keywords and press “Enter.”
They provide unimaginable amounts of data, even on obscure topics.
They customize the search results based on your location and search history.
However, there are also a handful of downsides to using search engines:
The information you are given is usually pretty limited: you may look through 15 links only to find identical content.
The amount of data can be overwhelming. It’s easy to get lost in the endless stream of search results.
The shallowness of the information you’re getting can also be an issue.
All this makes quality Internet search pretty tricky. But don’t worry: we will tell you about the techniques you can use to overcome these difficulties.
Refine the wording of your search terms. Try to keep the words as close to the topic as possible. If you are looking for a rock music article, you better not search “heavy music piece” on Google. “Heavy music” doesn’t necessarily mean “rock,” and “piece” doesn’t always refer to an “article.”
Set a time frame. It’s a good idea to set parameters around when the material was published. To do this, go to Google search, press “Tools,” then “Any time,” set “Custom Date Range,” and select the dates relevant for you.
Keep your search terms simple. There’s no need to overcomplicate things. After all, Google is smart. If you are looking for statistics on education in the US, simply typing in “US education facts” can work wonders.
Use the tabs. You can make your search results far more refined by simply choosing a corresponding tab. It’s helpful when looking specifically for images, books, or news.
Perform an advanced search. If your results are too vague and generalized, this option is your solution. Simply go to advanced search. Here, you can customize your key terms in great detail, from result language to file format.
7 Advanced Actionable Tips for Using Google Search
If you already know the basics listed above, here are some more advanced tips, including wildcards. What are wildcards in a Google search? Well, they serve as placeholders for characters or words. They are extremely helpful for refining and maximizing search results. Try them out!
Use Quotation Marks to Search for Exact Terms
Putting simple quotation marks around your search terms can help you with many things, such as:
Searching complicated terms. If you need to search for an exact phrase that consists of 2 or more words, make sure to put it in quotations. This way, you’ll avoid results containing only one of the words. For example, typing in “Atomic mass unit” with and without quotation marks can produce different results.
Finding the source of a quote. Sometimes you find a witty quote but don’t know who said it. In this case, just type the quote in the Google search bar using quotation marks, and the source should be the first result. For instance, searching for “If you tell the truth, you don’t have to remember anything” will show you that Mark Twain said it.
Fact-checking a quote. Some phrases are so popular that people attribute them to a handful of different authors. If you’re unsure if Abraham Lincoln ever said anything about the harm the Internet does, you can check that by simply googling the whole quote. Spoiler: no, he didn’t say that.
Add an Asterisk for Proximity Searches
An asterisk (* symbol) can be a handy tool when searching the Internet. What it does is act as a placeholder for any word. When Google sees asterisks among your search terms, it automatically changes the symbol to any fitting word.
Say you want to find a quote but don’t know the exact wording. You would type in “You do not find the happy life. You * it.” The asterisk will be magically substituted with “make,” and the author will be listed as Camilla Eyring Kimball.
Type AND or OR to Narrow or Expand the Results
Typing OR (in all caps) between two search terms will make Google look for results containing either of the terms, not necessarily both.
In contrast, the AND command does the opposite: it narrows the results down to pages containing both terms.
OR is helpful when something goes by different names in different sources. For example, searching only for “fireflies” will miss pages that use the insects’ other common name, lightning bugs. That’s why you might want to search for “lightning bugs OR fireflies.”
Remove Options Using a Hyphen
Want to know how to exclude words from a Google search? Just put a “-” (hyphen) before the word you don’t want to see in the results. This way, words with unrelated meanings will no longer be a problem.
Imagine you need to find the plot of a stage play about baseball. Searching “baseball play plot” will likely surface sports coverage instead. Searching “baseball play plot -sport” may significantly improve your results.
Use Shortcuts to Your Benefit
If you don’t want to bother with advanced settings but need more specific results, you can use shortcuts: simple commands that you add to your search query. The most useful ones are:
intitle: and allintitle: These commands narrow down the results to pages with the key terms in the title. It’s a good way to find an article if you know the exact topic you need.
inurl: and allinurl: Use these commands to find pages that are strongly optimized for your topic. If you use them, Google will look for the terms in the page’s URL.
inanchor: and allinanchor: These modifiers are excellent if you’re researching pages with your terms listed in the anchor text of links pointing back to those pages. Be careful, since they provide limited global results.
intext: and allintext: Use these two shortcuts if you need your key terms to appear in the body text.
cache: This modifier lets you find the most recent cached copy of any page you need. It can be helpful if the site is down or the page you need was deleted.
define: Typing “define:” before your search term will show you its definition. Basically, it functions as an online dictionary.
site: This shortcut limits the results to a single website. Use it when you want to be really specific. You can also add a country code to refine the results even further.
link: This shortcut finds pages that link to the site you type after the command.
Find a Specific File Type
Sometimes you need Google to show you only presentations or worksheets. In this case, using a “filetype:” shortcut can help you. Simply add this command at the end of your search terms with the file format, and you’re good to go. It can look like this:
Example:
Ways to improve your writing skills filetype:pdf
You can use this operator with any file type, not just PDF.
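The operators above are normally typed straight into the search box, but they can also be assembled programmatically. The sketch below simply combines several of them into one query string and turns it into a standard Google search URL; the example terms and site are made up for illustration.

# Composing search operators into a query and building a Google search URL.
from urllib.parse import urlencode

query = '"climate change" report site:epa.gov filetype:pdf -draft'
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)
# -> https://www.google.com/search?q=%22climate+change%22+report+site%3Aepa.gov+filetype%3Apdf+-draft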
Do Math in Google Search
The Google search tab may not sound like the best math tutor. However, it can perform simple tasks such as addition or division. For example, searching “8+8/4” will give you “10.”
You can also look for the numerical values of any mathematical constant. Simply typing in “Pi” will give you the Pi number value with the first 11 digits. This option can come in handy during an exam.
Other Search Engines to Use: Top 12
Google Search might be massively popular, but it’s not the only online engine available. Plenty of other worthy programs can aid you in finding things you need on the Internet.
Ideally, you want to use several of them when doing research. They will help you find specialized results, and some will even protect your privacy! Here are 12 of our favorites:
1. Google Scholar
Google Scholar is an engine designed specifically for scholarly literature. Aside from your basic Google needs, it gives you a chunk of additional information.
Why use it: The most crucial feature is a large number of citations. Besides, it will show you citations in different styles. You may also need Google Scholar if you find yourself looking for grey literature: a common situation in academic research.
2. ResearchGate
ResearchGate is a social network created for scientists and scholars. Here they post publications, join groups, and discuss various academic matters. What can be a better place for a student craving sources for academic research?
Why use it: The website’s powerful search tool goes beyond ResearchGate, covering NASA HQ Library and PubMed, among others. Using it will bring you hundreds of search results containing the latest research articles.
3. Educational Resources Information Center
Educational Resources Information Center (ERIC for short) is a vast scholarly database on every topic imaginable. It lists over 1 million educational articles, documents, and journals from all over the Internet.
Why use it: This resource has a reputation in the scientific community for containing highly accurate insights. It’s also your go-to search engine if you’re looking for peer-reviewed journals.
4. Bielefeld Academic Search Engine (BASE)
BASE is another search engine designed for academic research. While being similar to others in functionality, it differs in the results it can provide.
Why use it: This engine digs into the deepest parts of the Internet. It often shows information that other resources simply won’t find. If you feel like your research lacks data and you don’t seem to be able to find anything new on the topic, try BASE.
5. COnnecting REpositories (CORE)
CORE is a project that aims at aggregating all open-source information on the Internet. CORE uses text and data mining to enrich its content, which is a unique approach to gathering information.
Why use it: Like most entries on the list, this engine focuses on academic resources. This means that you don’t have to worry about your sources being inaccurate or poorly written.
6. Semantic Scholar
This is a search engine that uses artificial intelligence for research purposes. Semantic Scholar relies on machine learning, natural language processing, and Human-Computer interactions. Remember that you’ll need a Google, Twitter, or Facebook account to access Semantic Scholar.
Why use it: The program’s creators added a layer of semantics to citation analysis usually used by search engines. That’s where the name comes from.
7. SwissCows
SwissCows is a classic search engine that positions itself as a family-friendly solution to Internet surfing. Its algorithm uses semantic maps to locate information.
Why use it: This engine filters all not-safe-for-work material from its results. The company also has a principle of not storing any data regarding your search history, which is a lovely bonus.
8. WorldWideScience
WorldWideScience is a search engine that strives to accelerate scientific research around the globe.
Why use it: While providing everything an academic resource does, it also has a unique feature: multilingual translations. This means you might find a piece of work originally written in a language you don’t speak, yet you’ll understand it perfectly.
9. Google Books
You can certainly judge a book by its cover here. As you may have guessed, Google Books searches through literature: both fictional and scientific. You type any term you need, and you get all the books related to it.
Why use it: This classic full-text search engine is excellent as a book-focused resource. In many of them, you can read snippets or even whole chapters related to your keyword. Neat, simple, and effective.
10. OAIster
OAIster is another literature-related search engine. But here, the data gathering principle is different. It uses OAI-PMH, which is a protocol that collects metadata from various sources. For mere mortals (like us), this means a different approach to book scanning.
Why use it: OAIster’s unique algorithm makes the search results more accurate and shortens your browsing time.
11. OpenMD
OpenMD is a resource that focuses on medical information. It searches through billions of related articles, documents, and journals.
Why use it: This engine is priceless when you are a medical student working on an academic assignment. It also helps with a sore throat.
12. WayBack Machine
WayBack Machine is the most extensive Internet archive out there. Practically everything that has ever been posted on the web can be found here. It also hosts a vast collection of books, audio and video files, and images.
Why use it: If the source you’re looking for is no longer available or has seen drastic changes, you can use WayBack Machine to track the data back in time. Just choose a date you want to get back to and harvest the results.
Bonus Tips: How to Evaluate Websites
Although search engines are great, they can sometimes show you a site that is not entirely reliable. It’s essential to distinguish helpful resources from potentially harmful or fake ones. Here’s what you should look at while evaluating a website:
Authority: Check the author’s background. See if their e-mail and other contacts are listed.
Accuracy: Double-check the information given to you. Look for the sources in the article, and make sure you check them out.
Objectivity: Articles often contain a good amount of bias in them. Make sure that it doesn’t get in the way of objective information.
Currency: The content you’re looking at can be simply outdated. Check the publication date or when it was last updated.
Coverage: Look at the number of subjects the article covers. Compare the range of topics to other pieces on a similar matter.
Keeping these things in check can save you time and significantly improve the quality of your work.
And with this, we end our guide. You’re welcome to share your useful research tips in the comments section. Best of luck with your next search!
"What’s the first thing we do when facing the unknown? We Google it, of course! Google is fundamental to our experience of the Internet. According to the statistics, more than 100 000 people press “search” on Google every second!
At first glance, the process is straightforward. You type in what you need information about, press enter, and reap your reward. But, if your search is more complex, simply looking through the first page of results may not be enough. What are your other options?
If you struggle to answer this question, we are here to help! This article by our custom-writing team offers you the most actionable and advanced Google search tips.
Unlocking the full potential of the internet for research begins with mastering Google search. With over 100,000 queries processed every second, Google is our go-to tool for navigating the vast sea of information online. Yet, simply skimming the surface of search results may not suffice for complex inquiries. This article delves into actionable strategies for leveraging Google effectively, from refining search terms and setting time frames to utilizing advanced search features like tabs and wildcards. Whether you're a student, academic, or curious learner, these insights will enhance your ability to sift through the digital haystack and find the needles of knowledge you seek.
This article is wonderful: it helps us with searching for activities, teaches us to identify types of concepts, and shows us how to explore web pages with greater confidence.
At a recent dinner party I brought up the subject of dictionaries, drawing a sharp and immediate response: "Dictionary?" said a friend, "Who needs a dictionary? If I need a word I just look it up on my phone." What he meant was "who needs a printed dictionary?" But, without the people who wrote those boring old books, the ready-made definitions found with such facility on machines would not exist. Whether you've bought a dictionary app or you enter a word into a search engine, you have, in fact, consulted a dictionary. All online dictionaries, such as Dictionary.com, thefreedictionary.com, or yourdictionary.com, use, in addition to open sources, licensed material from well-known, established dictionary publishers. And open or copyright-free sources include older works like the 1889 Century Dictionary or the Standard Dictionary of 1893.
Despite wide availability of definitions online, printed dictionaries continue to engender devoted readers. Nowhere is this more apparent than in the recent reversal of fortune for the fifth edition of Webster's New World College Dictionary. Houghton Mifflin Harcourt released it in August and has ordered a fourth printing. This comes after its former publisher, Wiley, nearly killed it altogether by firing almost every member of the dictionary's staff in early 2011.
"Looking up things in the dictionary is an intimate act," said Peter Sokolowski, editor at large at Merriam-Webster. After lectures, audience members nearly always approach him and, in a conspiratorial whisper, confide things like "My family thinks I'm crazy because I read the dictionary."
Yet the story of the past 10 years or more has been one of retrenchment in the reference field as publishers cut back on full-time employees, replacing them with consulting lexicographers and support staff as sales of print dictionaries and other reference works declined. Jon Goldman, an editor at Webster's New World from 1966-2011, was part of a talented crew that kept the quality high, despite the challenges of repeated ownership changes and perennially skimpy resources. Goldman cites the lack of a digital program for the dictionary's failure to make money in the final years before the HMH purchase. According to HMH Executive Editor Steve Kleinedler, his company bought Webster's New World Dictionary in 2012 to fill a gap left by an earlier decision not to continue with their own college dictionary, concentrating instead on The American Heritage Dictionary of the English Language.
Among dictionary publishers only Merriam-Webster — the sole American publisher devoted exclusively to dictionaries — did not reduce their staff through layoffs. The company currently employs 30 full-time lexicographers. Between its free, advertising-supported dictionary website and smartphone application, Merriam-Webster nets about 200 million page views a month.
"That's a lot of traffic that keeps us going," says Sokolowski, a lexicographer who has worked at Merriam-Webster for more than 20 years. "Print is still alive and well, and there's no sense that print dictionaries are going to disappear. The thing is they are a much smaller part of the pie for us."
In the recent past, new editions of large dictionaries like Merriam-Webster's Unabridged were published infrequently (the second edition appeared in 1936, the third in 1961) with copyright updates or revised versions printed every five or six years. New editions of college dictionaries were usually published about every 10 years, with copyright updates appearing every year or two. A new edition of a dictionary is the product of a full revision during which every definition is reconsidered, outdated information revised or deleted and new words and new senses added. A copyright update has more modest ambitions, adding, in a college dictionary for example, roughly a few hundred new entries.
But the concept of publishing editions is disappearing, said Judy Pearsall, editorial director, Global Academic Dictionaries, at Oxford University Press. The Oxford English Dictionary uploads new words and revised entries to its website, OxfordDictionaries.com, every three months. These periodic uploads are called "releases," rather than "editions."
"The idea of an edition is something fixed, but this is less applicable to the digital world and our editorial workflow, which is about constantly updating based on our latest research," she said. "We make changes all the time, week to week. Just like language, so our dictionary is a living, breathing thing, changing and developing all the time in response to usage and user needs."
From the reader's perspective, you can't put data releases side by side on a shelf. And although Pearsall said Oxford takes "snapshots" of dictionary data every year, this information — thus far — is not available to the public. Merriam-Webster's Sokolowski said that its version of its unabridged dictionary will be a "large, organic, but also not fixed, data set that will be the great American dictionary, the large American dictionary."
And so, we live in the continuous present of constant revision: whether we will be able to access the evolving history of the dictionary, reflecting cultural changes and editorial judgments, is an open question.
At the same time, online dictionaries are offering new information about how people use them. Sokolowski reports on Twitter about which words are trending on Merriam-Webster's website.
"I know what you're looking up," Sokolowski said. "We're eavesdropping effectively on the national conversation in a way that's very particular because the intersection of vocabulary and the news is one that's unpredictable. I don't know which word will be picked up. I mean, who would have guessed that the most looked-up word connected to Michael Jackson's death would be the word 'emaciated?'"
A dictionary is the work of many hands, a cooperative human project that requires scores of individuals poring over words, researching their history and writing definitions. It is a candle lit against the darkness of ignorance, a forceful statement that our language matters, and an inclusive register of how our speech has changed.
"Every new achievement has its antecedents, its foundation," said David Guralnik, a lexicographer who died in 2000, in a lecture at Cleveland's Rowfant Club in 1951. He was discussing Webster's New World Dictionary — which in its day sought to revolutionize the traditional dictionary by offering clear, precise and self-explanatory definitions "in a 20th century American style and from an American point of view." His New World Dictionary had "in its background the lexicographical labors of all those who have toiled in the bottomless, teeming ocean of English linguistics, from the forerunners of Dr. Johnson through Baltimore's own H.L. Mencken."
And one could say the same thing about every dictionary. The databases of the digital age are living off the fat of the land, the accumulated definitions written by the now dead and discarded lexicographers, the expert definition writers. The question now is will the dictionaries of the future match the high standards of the recent past and, if not, will anyone care? Will dictionary website subscriptions and licensing generate enough revenue to support the publishers who produce them?
"I think we're in a transition," said Don Stewart, senior editor of Webster's New World College Dictionary, 5th edition, "and I don't know what's going to come out of this, but what is going to take the place of the traditional printed dictionary? In what form will it be? I don't know and I don't think anyone else does either."
Bruce Joshua Miller is editor of "Curiosity's Cats: Writers on Research." He blogs at brucejquiller.wordpress.com.
"In a recent interview, renowned linguist and cognitive scientist Noam Chomsky gave his thoughts on the rise of ChatGPT, and its effect on education. What he had to say wasn't favorable. As more and more educators struggle with how to combat plagiarism and the use of these chatbots in the classroom, Chomsky gives a clear viewpoint. For him, the key all lies in how students are taught, and, currently, our educational system is pushing students toward ChatGPT and other shortcuts.
“I don’t think [ChatGPT] has anything to do with education,” Chomsky tells interviewer Thijmen Sprakel of EduKitchen. “I think it’s undermining it. ChatGPT is basically high-tech plagiarism.” The challenge for educators, according to Chomsky, is to create interest in the topics that they teach so that students will be motivated to learn, rather than trying to avoid doing the work.
Chomsky, who spent a large part of his career teaching at MIT, felt strongly that his students wouldn't have turned to AI to complete their coursework because they were invested in the material. If students are relying on ChatGPT, Chomsky says it’s “a sign that the educational system is failing. If students aren’t interested, they’ll find a way around it.”
The American intellectual strongly feels like the current educational model of “teaching to test” has created an environment where students are bored. In turn, the boredom turns to avoidance, and ChatGPT becomes an easy way to avoid the education.
While some argue that chatbots like ChatGPT can be a useful educational tool, Chomsky has a much different opinion. He feels that these natural language systems “may be of value for some things, but it's not obvious what.”
Meanwhile, it appears that schools are scrambling to figure out how to counteract the use of ChatGPT. Many schools have banned ChatGPT on school devices and networks, and educators are adjusting their teaching styles. Some are turning to more in-class essays, while others are looking at how they can incorporate the technology into the classroom.
It will be interesting to see if the rise of chatbots helps steer us toward a new teaching philosophy and away from the “teaching to test” method that has become the driving force of modern education. It's the kind of education that Chomsky says was “ridiculed during the Enlightenment,” and so indirectly, this new technology may force schools to rethink how they ask students to apply their knowledge"
#metaglossia mundus