information analyst
knowledge management, EDM / EDMS (electronic document management), workflow, collaboration
Rescooped by michel verstrepen from Educational Technology News

The three challenges of AI regulation

Artificial intelligence has been quietly evolving behind the scenes for some time. When Google auto-completes a search query, or Amazon recommends a book, AI is at work. In November 2022, however, the release of ChatGPT moved AI out of the shadows, repositioning it from a tool for software engineers to a consumer-focused tool that ordinary people can use without any need for technical expertise. Using ChatGPT, users can have a conversation with an AI bot and ask it to design software rather than having to write the code themselves.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Designing for AI: beyond the chatbot

Guidelines and strategies for meaningfully leveraging AI in your applications

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Using AI with Cognitive Apprenticeship Theory, Upscaling and Retooling

The cognitive apprenticeship model emphasizes providing authentic learning experiences that allow learners to engage in complex real-world activities. Generative AI can be used to create realistic simulations and scenarios that provide learners with opportunities to apply their knowledge and skills in context. Earn/learn partnerships can utilize onsite chatbots and intelligent tutoring systems, creating learning environments that provide learners with psychological safety to practice, vocalize and refine their problem-solving skills.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

How to use ChatGPT to enhance research

A guide to three key uses of generative AI tools like ChatGPT in developing and enhancing research

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

AI Detection and the “Humanization” of Writing

With the rise of AI-detection software, writers are trying to sound "more human": why that sucks, and why some AI-detection companies might be acting unethically

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

The Dark Side of ChatGPT: 6 Generative AI Risks to Watch

Gartner has identified six critical areas where the use of large language models such as ChatGPT can present legal or compliance risks that enterprise organizations must be aware of — or face potentially dire consequences. Organizations should consider what guardrails to put in place in order to ensure responsible use of these tools, the research firm advised.

Via EDTECH@UTRGV
Tina Jameson's curator insight, June 8, 2023 6:35 PM
Food for thought - all the more reason to teach our students critical thinking and information literacy skills.
Rescooped by michel verstrepen from Educational Technology News

Early thoughts on regulating generative AI like ChatGPT


Generative artificial intelligence, like ChatGPT, is a misunderstood emerging technology which poses a number of risks in its potential commercial application and malicious use, including its lack of truthfulness and potential for deception.

With OpenAI’s ChatGPT now a constant presence both on social media and in the news, generative artificial intelligence (AI) models have taken hold of the public’s imagination. Policymakers have taken note too, with statements from Members addressing risks and AI-generated text read on the floor of the House of Representatives. While they are still emerging technologies, generative AI models have been around long enough to consider what we know now, and what regulatory interventions might best tackle both legitimate commercial use and malicious use.

WHAT ARE GENERATIVE AI MODELS?

ChatGPT is just one of a new generation of generative models—its fame is a result of how accessible it is to the public, not necessarily its extraordinary function. Other examples include text generation models like DeepMind’s Sparrow and the collaborative open-science model Bloom; image generation models such as StabilityAI’s Stable Diffusion and OpenAI’s DALL-E 2; as well as audio-generating models like Microsoft’s VALL-E and Google’s MusicLM.

While any algorithm can generate output, generative AI systems are typically thought of as those which focus on aesthetically pleasing imagery, compelling text, or coherent audio outputs. These goals differ from those of more traditional AI systems, which often try to estimate a specific number or choose between a set of options. A more traditional AI system might, for example, identify which advertisement would lead to the highest chance that an individual will click on it. Generative AI is different—it is instead doing its best to match aesthetic patterns in its underlying data to create convincing content.

In all forms (e.g., text, imagery, and audio), generative AI is attempting to match the style and appearance of its underlying data. Modern approaches have advanced incredibly fast in this capacity—leading to compelling text in many languages, cohesive imagery in many artistic styles, and synthetic audio that can impersonate individual voices or produce pleasant music.
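
To make the contrast with more traditional predictive systems concrete, the short Python sketch below (an illustration with invented names, not code from any system mentioned in the article) pairs a "traditional" model that merely picks the advertisement most likely to be clicked with a toy bigram generator that imitates word patterns in a tiny corpus.

    # Illustrative sketch only (invented names, not code from any system named in
    # this article): a "traditional" predictive model chooses among fixed options,
    # while a toy generative model imitates word patterns in its training data.
    import random
    from collections import defaultdict

    # Traditional AI: estimate a number or choose between options.
    predicted_ctr = {"ad_sports": 0.031, "ad_travel": 0.044, "ad_finance": 0.027}

    def choose_advertisement(ctr_estimates):
        """Return the ad with the highest predicted click probability."""
        return max(ctr_estimates, key=ctr_estimates.get)

    # Generative AI (toy version): match patterns in the underlying data.
    corpus = ("generative models learn the style of their data and "
              "generative models then produce new text in that style").split()

    def train_bigram_model(words):
        """Record which words tend to follow which in the corpus."""
        table = defaultdict(list)
        for first, second in zip(words, words[1:]):
            table[first].append(second)
        return table

    def generate_text(table, start, length=10):
        """Sample new text that imitates the statistical patterns of the corpus."""
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    print("Traditional model picks:", choose_advertisement(predicted_ctr))
    print("Generative model writes:", generate_text(train_bigram_model(corpus), "generative"))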

Yet, this impressive mimicry is not the same as comprehension. A study of DALL-E 2 found that it could generate images that correctly matched prompts using the word “on” just over one quarter of the time. Other basic spatial connections (such as “under” and “in”) led to even worse results. ChatGPT shows similar problems. As it is merely designed to string words together in a likely order, it still cannot reliably pass basic tests of comprehension. As is well documented by Professor Gary Marcus, ChatGPT may often fail to “count to four… do one-digit arithmetic in the context of [a] simple word problem… figure out the order of events in a story… [and] it couldn’t reason about the physical world.”

Further, text generation models constantly make things up—OpenAI CEO Sam Altman has said as much, noting “it’s a mistake to be relying on [ChatGPT] for anything important right now.” The lesson is that writing convincing, authoritative-sounding text based on everything written on the internet has turned out to be an easier problem to solve than teaching AI to know much about the world. However, this significant shortcoming did not stop Microsoft from rolling out a version of OpenAI’s technology for some users of its search engine.

Still, this sense of authenticity will make generative AI appealing for malicious use where the truth is less important than the message it advances, such as disinformation campaigns and online harassment. It is also why an early commercial application of generative AI is to create marketing content, where the strict accuracy of the writing simply isn’t very important. However, when the media website CNET started using generative models for writing financial articles, where the truth is quite important, the articles were discovered to have many errors.

These two examples offer a glimpse into two separate sources of risk from generative AI— commercial applications and malicious use—which warrant separate consideration, and likely, distinct policy interventions.[1]

HANDLING THE COMMERCIAL RISKS OF GENERATIVE AI

The first category of risks comes from the commercial application of generative AI. Many companies want to use generative AI for various business applications that are far more general than simply generating content. For the most part, generative AI models tend to be especially large and relatively powerful, and so while they may be particularly good at generating text or images, they can be adapted for a wide variety of tasks.[2]

The most prominent example may be Copilot, an adaptation of OpenAI’s GPT-3. Developed by GitHub, Copilot integrates GPT-3 into a more specific tool for generating code, aiming to ease certain programming tasks. Other examples include the expansion of image-generating AI in helping to design video game environments and the company Alpha Cephei, which takes open-source AI models for speech analysis and further develops them into enterprise voice recognition products.

The key concern with collaborative deployment using generative AI is that neither company may sufficiently understand the function of the final AI system.[3]

The original developer builds the generative AI model on its own but cannot see the full extent of how it is used once it is adapted for another purpose. Then a “downstream developer,” which did not participate in the original model development, may adapt the model and integrate its outputs into a broader software system. Neither entity has complete control or a comprehensive view into the whole system. This may increase the likelihood of errors and unexpected behavior, especially since many downstream developers may overestimate the capacity of the generative AI model. This joint development process may be fine for processes where errors are not especially important (e.g., clothing recommendations) or where there is a human reviewing the result (e.g., a writing assistant).
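
As a rough sketch of that division of labor (every name below is hypothetical, not an actual vendor API), a downstream developer who cannot inspect the upstream model can at least decide which requests go straight to users and which are held for human review:

    # Hypothetical sketch of the multi-entity pattern described above: a product
    # team wraps a third-party generative model it did not build, so it adds its
    # own guardrails. base_model_generate() stands in for whatever API the
    # upstream provider actually exposes.
    def base_model_generate(prompt: str) -> str:
        """Placeholder for a call to an upstream generative AI provider."""
        return f"[model output for: {prompt}]"

    HIGH_STAKES_KEYWORDS = {"loan", "hiring", "medical", "legal", "tax"}

    def downstream_answer(prompt: str) -> dict:
        """Call the upstream model, but hold high-stakes requests for human review."""
        draft = base_model_generate(prompt)
        needs_review = any(word in prompt.lower() for word in HIGH_STAKES_KEYWORDS)
        return {
            "draft": draft,
            "released": not needs_review,             # low-stakes output goes straight out
            "queued_for_human_review": needs_review,  # high-stakes output waits for a person
        }

    print(downstream_answer("Suggest an outfit for a rainy day"))
    print(downstream_answer("Decide whether this loan application should be approved"))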

However, if these trends extend into generative AI systems used for impactful socioeconomic decisions, such as educational access, hiring, financial services access, or healthcare, they should be carefully scrutinized by policymakers. The stakes for persons affected by these decisions can be very high, and policymakers should take note that AI systems developed or deployed by multiple entities may pose a higher degree of risk. Already, applications such as KeeperTax, which fine-tunes OpenAI models to evaluate tax statements to find tax-deductible expenses, are raising the stakes. This high-stakes category also includes DoNotPay, a company dubiously claiming to offer automated legal advice based on OpenAI models.

Further, if generative AI developers are uncertain if their models should be used for such impactful applications, they should clearly say so and restrict those questionable usages in their terms of service. In the future, if these applications are allowed, generative AI companies should work proactively to share information with downstream developers, such as operational and testing results, so that they can be used more appropriately. The best-case scenario may be that the developer shares the model itself, enabling the downstream developer to test it without restrictions. A middle-ground approach would be for generative AI developers to expand the available functionality for, and reduce or remove the cost of, thorough AI testing and evaluation.

Information sharing may mitigate the risks of multi-organizational AI development, but it would only be part of the solution. This approach to help downstream developers responsibly leverage generative AI tools only really works if the final system is itself regulated, as will be the case in the EU under the AI Act, and as is advocated for in the U.S.’s Blueprint for an AI Bill of Rights.

MITIGATING MALICIOUS USE OF GENERATIVE AI

The second category of harm arises from the malicious use of generative AI. Generative models can create non-consensual pornography and aid in the process of automating hate speech, targeted harassment, or disinformation. These models have also already started to enable more convincing scams, in one instance helping fraudsters mimic a CEO’s voice in order to obtain a $240,000 wire transfer. Most of these challenges are not new in digital ecosystems, but the proliferation of generative AI is likely to worsen them all.

Since these harms result from malicious use by scammers, anonymous harassers, foreign non-state actors, or hostile governments, it may also be much more challenging to prevent them, compared to commercial harms. However, it might be reasonable to require a certain degree of risk management, especially by commercial operations that deploy and profit from these cutting-edge models.

This might include tech companies that provide these models over API (e.g., OpenAI, Stability AI), through cloud services (e.g., the Amazon, Google, and Microsoft clouds), or possibly even through Software-as-a-Service providers (e.g., Adobe Photoshop). These businesses control several levers that might partially prevent malicious use of their AI models. This includes interventions with the input data, the model architecture, review of model outputs, monitoring users during deployment, and post-hoc detection of generated content.

Manipulating the input data before model development is an impactful way to influence the resulting generative AI, because these models greatly reflect that underlying data. For example, OpenAI uses human reviewers to detect and remove “images depicting graphic violence and sexual content” from the training data for DALL-E 2. The work of these human reviewers was used to build a smaller AI model that was used to detect images that OpenAI didn’t want to include in its training data, thus improving the impact of the human reviewers. The same type of model can also be used at other stages to further prevent malicious use, by checking to see if any images submitted by users, or the images generated by generative AI, might contain graphic violence or sexual content. Generally, the practice of using a combination of human reviewers and AI tools for removing harmful content may be an effective, if not sufficient, intervention.[4]
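
A rough sketch of that layered reuse (an illustration, not OpenAI's actual pipeline; the classifier here is reduced to a trivial keyword check) shows the same filter applied to training data, to user inputs, and to model outputs:

    # Illustrative only: a small content classifier, bootstrapped from human
    # review labels, reused at three points in the pipeline.
    def unsafe_score(item: str) -> float:
        """Stand-in for a learned classifier; returns the probability that an item
        contains content the developer wants to exclude."""
        banned_terms = {"graphic violence", "sexual content"}
        return 1.0 if any(term in item.lower() for term in banned_terms) else 0.0

    THRESHOLD = 0.5

    def filter_training_data(examples):
        """Drop flagged examples before the generative model is trained."""
        return [ex for ex in examples if unsafe_score(ex) < THRESHOLD]

    def screen_user_input(prompt: str) -> bool:
        """Reject prompts the same classifier flags at request time."""
        return unsafe_score(prompt) < THRESHOLD

    def screen_model_output(generated: str) -> bool:
        """Check generated content before it is shown to the user."""
        return unsafe_score(generated) < THRESHOLD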

The development of generative models also may provide an opportunity for intervention, although this research is just emerging. For example, by getting iterative feedback from humans, generative language models can become moderately more truthful, as suggested by new research from DeepMind.[5]
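
The cited research fine-tunes the model itself with human feedback; the toy sketch below only captures the spirit of the idea (human raters score candidate answers and the system learns to prefer the better-rated ones), not the actual training method:

    # Toy illustration, not DeepMind's method: collect human ratings of candidate
    # answers and use the average rating to pick the more truthful-sounding one.
    from collections import defaultdict
    from statistics import mean

    human_ratings = defaultdict(list)  # answer text -> list of human scores (1-5)

    def record_human_feedback(answer: str, score: int) -> None:
        """A human reviewer rates how accurate an answer was."""
        human_ratings[answer].append(score)

    def pick_best_candidate(candidates):
        """Prefer the candidate humans have rated highest so far (default 3.0)."""
        return max(candidates,
                   key=lambda a: mean(human_ratings[a]) if human_ratings[a] else 3.0)

    record_human_feedback("The Eiffel Tower is in Berlin.", 1)
    record_human_feedback("The Eiffel Tower is in Paris.", 5)
    print(pick_best_candidate(["The Eiffel Tower is in Berlin.",
                               "The Eiffel Tower is in Paris."]))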

User monitoring is another tactic that may bear fruit. First, a generative AI company can set transparent limits on user behavior through the Terms of Service. For instance, OpenAI says its tools may not be used to infringe or misappropriate any person’s rights, and further limits some categories of images and text that users are allowed to generate. OpenAI appears to have some system to implement these terms of service, such as by denying obvious requests for harassing comments or statements on famous conspiracy theories. However, one analysis found that ChatGPT responded with misleading claims 80% of the time, when presented with a catalog of misinformation narratives. Going further, generative AI companies could monitor users, using algorithmic tools to flag requests that may suggest malicious or banned use, and then suspend users who become repeat offenders.
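
That monitoring loop might be sketched as follows (the patterns, thresholds, and messages are invented for illustration and do not describe any vendor's real enforcement system):

    # Illustrative sketch of user monitoring: flag requests that look like banned
    # use, count strikes per user, and suspend repeat offenders.
    from collections import Counter

    BANNED_PATTERNS = ("write harassing messages about", "proof of election fraud")
    MAX_STRIKES = 3
    strikes = Counter()
    suspended = set()

    def looks_malicious(prompt: str) -> bool:
        """Stand-in for an algorithmic classifier that flags banned requests."""
        return any(pattern in prompt.lower() for pattern in BANNED_PATTERNS)

    def handle_request(user_id: str, prompt: str) -> str:
        if user_id in suspended:
            return "account suspended"
        if looks_malicious(prompt):
            strikes[user_id] += 1
            if strikes[user_id] >= MAX_STRIKES:
                suspended.add(user_id)
                return "account suspended after repeated violations"
            return "request refused under the terms of service"
        return "request forwarded to the model"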

In a more nascent approach, researchers have proposed using patterns in generated text to identify it later as having come from a generative model, or so-called watermarking. However, it is too early to determine how such a detection might work once there are many available language models, available in different versions, that individual users are allowed to update and adapt. This approach may simply not adapt well as these models become more common.
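
A heavily simplified sketch of how such detection could work (one illustrative scheme, not the specific research proposals): generation is nudged toward a secret subset of words, and the detector measures how strongly a text leans toward that subset. As the paragraph above notes, the check weakens quickly once many differently watermarked, or unwatermarked, models are in circulation.

    # Toy watermark detector: a secret seed deterministically marks roughly half
    # of all words as "green"; ordinary human text should score near 0.5, while
    # text generated with a green-word bias scores noticeably higher.
    import hashlib

    def is_green(word: str, secret: str = "demo-secret") -> bool:
        """Deterministically assign roughly half of all words to the green list."""
        digest = hashlib.sha256((secret + word.lower()).encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text: str) -> float:
        words = text.split()
        return sum(is_green(w) for w in words) / len(words) if words else 0.0

    def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
        """Flag text whose green-word fraction is implausibly high for human writing."""
        return green_fraction(text) >= threshold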

Collectively, these interventions and others might add up to a moderately effective risk management system. However, it is highly unlikely it would be anywhere near perfect, and motivated malicious actors will find ways to circumvent these defenses. In general, the efficacy of these efforts should be considered more like content moderation, where even the best systems only prevent some proportion of banned content.


IT IS STILL THE EARLY DAYS OF GENERATIVE AI POLICY

The challenges posed by generative AI, both through malicious use and commercial use, are in some ways relatively recent, and the best policies are not obvious. It is not even clear that “generative AI” is the right category to focus on, rather than individually focusing on language, imagery, and audio models. Generative AI developers could contribute to the policy discussion by disclosing more specific details on how they develop generative AI, such as through model cards, and also by explaining how they are currently approaching risk management.

It also warrants mention that, while these harms are not trivial, there are more pressing areas in which the U.S. needs AI governance, such as protections from algorithms used in key socioeconomic decisions, developing meaningful online platform policy, and even passing data privacy legislation.

If perhaps not a priority, it is worth considering regulations for commercial developers of the largest AI models, such as generative AI.[6] As discussed, this might include information sharing obligations to reduce commercialization risks, as well as requiring risk management systems to mitigate malicious use. Neither intervention is a panacea, but they are reasonable requirements for these companies which might improve their net social impact.

This combination might represent one path forward for the EU, which was recently considering how to regulate generative models (under the distinct, but related term, “general-purpose AI”) in its proposed AI Act.[7] This would raise many key questions, such as how to enforce these rules and what to do about their considerable international impact. In any case, if the EU or other governments do take this approach, it is worth keeping policies flexible into the future, as there is still much to be learned about how to mitigate risks of generative AI.

Microsoft provides financial support to the Brookings Institution, including to the Artificial Intelligence and Emerging Technology Initiative and Governance Studies program, where Mr. Engler is a Fellow. Google is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.

The author acknowledges the research support of CTI’s Mishaela Robison and Xavier Freeman-Edwards.

Footnotes

1. These are two key categories of harms from the use of generative AI, although they are not the only harms. For instance, harms from the development process include copyright infringement (as Getty has charged against Stability AI), undercompensated employees working on potentially harmful data labeling, and the furthering of the business incentive towards massive data collection to fuel ever larger generative models. (Back to top)

2. In other contexts, generative AI even has different names that emphasize its value for re-use. A report from Stanford’s AI community calls them instead “foundation” models and notes in the first sentence that their defining quality is that they “can be adapted to downstream tasks.” (Back to top)

3. The European Union’s proposed AI Act describes this emerging trend, in which multiple entities collaborate to develop an AI system, as the AI Value Chain. It is too early to know how dominant this trend might be, but an enormous increase in venture capital funding suggests a coming expansion of commercial experimentation. (Back to top)

4. However, these companies need to take responsibility for the health and wellness of those human reviewers, who are performing the single most harmful task in the development of a generative AI system. Recent reporting from Time states that Kenyan workers were paid only $2 an hour to categorize disturbing text, and potentially images, on behalf of OpenAI. (Back to top)

5. This is an important research development, but it remains very unclear to what extent large language models will be able to become more routinely and robustly truthful, and they should not yet be assumed to be able to do so. (Back to top)

6. Note that the focus on commercial developers is intentional, and this would not include the open-sourcing of generative AI models, for reasons discussed elsewhere. (Back to top)

7. The malicious use of generative AI poses challenges that are similar to content moderation on online platforms concerning which content should be allowed or disallowed. This makes it an ill-fitting problem for the EU AI Act, which is primarily about the commercial and government use of AI for decision-making and in products. A provision aimed at mitigating generative AI’s malicious use may be a better fit for an amendment to the Digital Services Act, which also has more relevant enforcement provisions. (Back to top)


Via Charles Tiayon, EDTECH@UTRGV
Charles Tiayon's curator insight, February 23, 2023 12:29 AM

"Generative artificial intelligence, like ChatGPT, is a misunderstood emerging technology which poses a number of risks in its potential commercial application and malicious use, including its lack of truthfulness and potential for deception..."

#metaglossia mundus

 

Rescooped by michel verstrepen from Educational Technology News

Grammarly To Take On ChatGPT; Launches AI Writing Assistant 'GrammarlyGO’

A direct stab at competitors like QuillBot
By James Paul  March 13, 2023 

This must be the most dreaded moment for content creators, and for the clients at the receiving end of shoddy copy. With a ChatGPT-esque AI writing assistant, Grammarly has taken a stab at the generative AI industry, and especially at competitors like QuillBot. Grammarly already has a large user base that swears by the online program for all its proofreading needs. It does the job pretty well for average users, though it stumbles when you write anything too creative for the AI to process. GrammarlyGO might resolve this while giving users the prowess of a drafter honed over years of putting thoughts on paper. GrammarlyGO claims to speed up users' workflows with instant:

  • Drafts
  • Ideas galore
  • Responses
  • Revisions in personalized tone

The Beta is slated to be released sometime next month.

So what technology, exactly, is Grammarly using for its latest offering? Responding to a user's query, the company states that it is blending the best of the latest AI, ML, and NLP technologies. Currently the model harnesses GPT-3, combined with Grammarly’s proprietary AI and ML models, much as Microsoft did with Bing Chat.
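
The layering the company describes, a general-purpose model with proprietary AI and ML on top, might look roughly like the sketch below; every function is a hypothetical placeholder, since neither the article nor Grammarly documents the internals.

    # Speculative sketch only: neither function corresponds to a real Grammarly or
    # OpenAI API; they simply illustrate a base model wrapped by in-house models.
    def general_llm_draft(instruction: str, context: str) -> str:
        """Placeholder for a call to a general-purpose language model."""
        return f"[draft generated for: {instruction}]"

    def proprietary_polish(draft: str, tone: str) -> str:
        """Placeholder for in-house grammar, clarity, and tone models layered on top."""
        return f"[{tone} rewrite of: {draft}]"

    def grammarlygo_style_reply(email_thread: str, user_intent: str, tone: str = "friendly") -> str:
        draft = general_llm_draft(user_intent, context=email_thread)
        return proprietary_polish(draft, tone)

    print(grammarlygo_style_reply("Thread: meeting moved to Friday.", "Confirm the new time politely"))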

 

Raising a privacy concern, a user asked whether GrammarlyGO would learn from previously typed emails. The company stated that it will not read anything typed before its installation; during the beta stage, the model will learn only from user interactions in order to provide personalized responses.

 


Via Charles Tiayon, EDTECH@UTRGV
Charles Tiayon's curator insight, March 13, 2023 9:02 PM
#metaglossia mundus
 
Rescooped by michel verstrepen from Educational Technology News

Making ChatGPT Work For You

Lately, those of us working in education cannot go a day without hearing something about ChatGPT, an artificial intelligence (AI) language model that can engage in natural language conversations with humans. It was developed by the company OpenAI and is based on the GPT-3 architecture, which is a state-of-the-art deep learning algorithm that is capable of generating human-like responses to text-based prompts. Built on a massive corpus of texts, ChatGPT can produce coherent and contextually relevant responses to a wide range of inputs.
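
For readers who want to experiment outside the chat interface, a minimal sketch of calling the model programmatically might look like this (assuming the pre-1.0 openai Python package as it existed in early 2023; the model name and response format are assumptions, not details from the article):

    # Minimal sketch, assuming the pre-1.0 `openai` Python package and an
    # OPENAI_API_KEY environment variable; adjust for the client you actually use.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model family behind ChatGPT at the time
        messages=[
            {"role": "system", "content": "You are a concise teaching assistant."},
            {"role": "user", "content": "Explain photosynthesis in two sentences."},
        ],
    )
    print(response["choices"][0]["message"]["content"])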

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

ChatGPT: Post-ASU+GSV Reflections on Generative AI

The one question I heard over and over again in hallway conversations at ASU+GSV was “Do you think there will be a single presentation that doesn’t mention ChatGPT, Large Language Models (LLMs), and generative AI?”

Nobody I met said “yes.” AI seemed to be the only thing anybody talked about.

And yet the discourse sounded a little bit like GPT-2 trying to explain the uses, strengths, and limitations of GPT-5. It was filled with a lot of empty words, peppered in equal parts with occasional startling insights and ghastly hallucinations. 

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Google's AI experts on the future of artificial intelligence

We may look on our time as the moment civilization was transformed as it was by fire, agriculture and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer. Which is to say, with creativity, truth, error and lies. The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence -- machines that can teach themselves superhuman skills. We explored what's coming next at Google, a leader in this new world. CEO Sundar Pichai told us AI will be as good or as evil as human nature allows. The revolution, he says, is coming faster than you know.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

AI Pause Urged by Musk, Wozniak and Other Tech Leaders

An open letter calls for a six-month break on powerful AI training efforts. The idea is to develop safety and oversight systems and otherwise allow time for consideration of the tech’s rapid development.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Plugins: A Massive Upgrade That Will Change ChatGPT Forever 

Plugins will let ChatGPT have internet access, file uploads, and third-party services. The result? ChatGPT on steroids.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

AI And Machine Learning In Cybersecurity

AI and Machine Learning could help companies protect themselves from the growing number of cyber dangers.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

We gave AI detectors a try: here's what we found

AI detection is generating debates around AI's place in education. Here's what happens when AI content is run through detection tools.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

The trouble with using Google and Microsoft's AI writing assistants

AI is on the verge of becoming a standard feature of word processing. The danger is that it will discourage us from thinking for ourselves.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Generative AI: How It Works, History, and Pros and Cons

In a matter of seconds, this artificial intelligence technology can produce new content in response to a prompt

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Majority of Americans have heard of ChatGPT, but few have tried it

Just 14% of all U.S. adults say they have used ChatGPT for entertainment, to learn something new, or for their work.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

What Is an AI Prompt Engineer?

Getting the best results from a generative AI tool like ChatGPT requires knowing how to tell it what you need. According to some experts, 'AI whispering' is a job skill we may all be forced to master.
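
A small sketch of what "telling it what you need" amounts to in practice (the product name and constraints are invented for illustration): the engineered version adds a role, explicit constraints, and an example of the desired output, which typically steers a generative model far better than the vague version.

    # The same request, first vague, then restructured the way a prompt engineer
    # might write it. No particular vendor API is assumed; either string could be
    # sent to any chat-style generative model.
    vague_prompt = "Write something about our new product."

    engineered_prompt = """You are a marketing copywriter for a small software company.
    Task: write a product announcement for 'Acme Notes' (a hypothetical note-taking app).
    Constraints:
    - exactly 3 sentences
    - plain language, no buzzwords
    - end with a call to action
    Example of the desired tone: "Meet TinyCal. It keeps your week on one screen. Try it free today."
    """

    print(engineered_prompt)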

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Google to incorporate AI chat into its main search engine

Pichai said that Google’s incorporation of AI signals the company’s commitment to competing against the likes of Microsoft-backed OpenAI and others.

By  Ariel Zilber

April 6, 2023 9:52am 

Google is poised to beef up its search engine with flashy artificial intelligence features that will enable it to interact with users in more conversational, human-like ways, according to the tech giant’s boss.

CEO Sundar Pichai said that Google’s incorporation of AI into its ubiquitous search tools signals the company’s commitment to competing against the likes of Microsoft-backed OpenAI and others.

“The opportunity space, if anything, is bigger than before,” Pichai told The Wall Street Journal.

Google has been an industry leader in developing large language models (LLMs) that can generate responses to prompts that resemble those of humans.

The company will now work to use the technology to enhance user experience on its search engine.

“Will people be able to ask questions to Google and engage with LLMs in the context of search? Absolutely,” Pichai told The Journal.

Microsoft recently rolled out its enhanced version of the Bing search engine which is now powered by ChatGPT, the chatbot introduced with great fanfare by the Silicon Valley research firm OpenAI.

The rise of ChatGPT shook the tech world after the AI-powered bot demonstrated advanced conversational abilities that mimicked those of humans.

The technology has been shown to be capable of composing emails, essays, and software code — stoking fears that it could replace people who work in knowledge-based industries.

The generative AI feature is credited with helping Bing exceed 100 million daily active users last month, according to a Microsoft blog post.

After announcing the rollout of its new AI-powered search engine, Microsoft CEO Satya Nadella said the new advances are “going to reshape every software category we know,” including search, much like earlier innovations in personal computers and cloud computing.

Along with adding it to Bing, Microsoft is also integrating the chatbot technology into its Edge browser. 

 

Pichai indicated that he welcomes the competition.

In response to pressure from its competitor, Pichai earlier this year unveiled the experimental AI chatbot Bard, a controversial new service that is available exclusively to a group of “trusted testers” before its universal release later this year.

According to Pichai, Bard “seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models.”

“It draws on information from the web to provide fresh, high-quality responses,” according to Pichai.

Google and other tech giants are in the midst of a transition that includes cost-cutting and paring down payroll due to economic headwinds.


Via Charles Tiayon, EDTECH@UTRGV
Charles Tiayon's curator insight, April 6, 2023 8:28 PM


#metaglossia mundus

Dr. Russ Conrath's curator insight, May 19, 2023 12:12 PM

Google and AI chat to combine. A good idea?

Rescooped by michel verstrepen from Educational Technology News

Why Artificial Intelligence Will Make Tech Workers More Human

Despite the flood of pundits predicting widespread disruption among tech workers at the hands of ChatGPT, tech jobs aren’t going anywhere—but their skills are.


Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

How to Begin a Career in Prompt Engineering

Are you interested in a career in prompt engineering? Here's a guide on how to start your journey towards this exciting field.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

ChatGPT: Who's using the AI tool and why?

ChatGPT has quickly turned into a global phenomenon, with people using it to find information, ask questions, and generate different types of content. But just how popular is the service? How often do most folks use it and for what purposes? Based on a March 2023 survey of 1,024 Americans and 103 AI experts, a new report from WordFinder by YourDictionary attempts to answer those questions.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

Why you shouldn't completely trust ChatGPT's accuracy just yet

If you’re anything like me, you’ve played a lot with ChatGPT since its release. After all, as a writer, it’s essential that I know how good the language model is so that I know exactly what I’m competing with. Mostly, the news hasn’t been promising: ChatGPT is fast and (at least for now) way cheaper than I could ever hope to be. But there’s at least one area where humans are still winning: accuracy.

Via EDTECH@UTRGV
Rescooped by michel verstrepen from Educational Technology News

AI and the Future of Writing Instruction

The Campus Technology Insider podcast explores current trends and issues impacting technology leaders in higher education. Listen in as Editor in Chief Rhea Kelly chats with ed tech experts and practitioners about their work, ideas and experiences.

Via EDTECH@UTRGV