Scooped by Carol Hancox, January 12, 12:58 AM
By grappling with the messy and unpredictable side of existence, machine learning can have impact beyond the digital.
Scooped by Carol Hancox, January 10, 6:16 PM
One of the most important AI copyright legal battles just took a major turn.
Scooped by Carol Hancox, December 23, 2024 7:42 PM
WIRED is following every copyright battle involving the AI industry—and we’ve created some handy visualizations that will be updated as the cases progress.
Scooped by Carol Hancox, December 23, 2024 7:39 PM
A day after Google announced its first model capable of reasoning over problems, OpenAI has upped the stakes with an improved version of its own.
Scooped by Carol Hancox, December 18, 2024 7:26 PM
Embodied says it will try to refund recent purchases of its Moxie robot but makes no promises.
Scooped by Carol Hancox, December 12, 2024 7:43 PM
In this installment of WIRED’s AI advice column, “The Prompt,” we answer questions about giving AI tools proper attribution and teaching future generations how to interact with chatbots.
Scooped by Carol Hancox, December 12, 2024 7:26 PM
The automaker has sunk billions into making a self-driving car service work. Now it says it will focus on “personal” autonomous vehicles instead.
Scooped by Carol Hancox, December 11, 2024 2:07 AM
The opportunity for artificial intelligence to actually do some good has arrived—if it can be redirected toward where it’s needed most.
Scooped by Carol Hancox, December 10, 2024 6:42 PM
David Sacks, a member of the infamous “PayPal Mafia,” will lead a group of advisers tasked with steering AI and crypto policy under the Trump administration.
Scooped by Carol Hancox, December 10, 2024 3:07 AM
Raymond Baxter enjoys a nice cuppa, courtesy of Unimate.
Scooped by Carol Hancox, November 30, 2024 6:31 PM
We Need a New Right to Repair for Artificial Intelligence
A growing movement to allow access to algorithmic workings won’t stop the ubiquitous spread of artificial intelligence, but it could restore public confidence in it.
There’s a growing trend of people and organizations rejecting the unsolicited imposition of AI in their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class action in California against Nvidia for allegedly training its AI platform NeMo on their copyrighted work. Two months later, the A-list actress Scarlett Johansson sent a legal letter to OpenAI when she realized its new ChatGPT voice was “eerily similar” to hers.
This story is from the WIRED World in 2025, our annual trends briefing.
The technology isn’t the problem here. The power dynamic is. People understand that this technology is being built on their data, often without their permission. It’s no wonder that public confidence in AI is declining. A recent Pew Research study shows that more than half of Americans are more concerned than excited about AI, a sentiment echoed by a majority of people from Central and South American, African, and Middle Eastern countries in a World Risk Poll.
In 2025, we will see people demand more control over how AI is used. How will that be achieved? One example is red teaming, a practice borrowed from the military and used in cybersecurity. In a red teaming exercise, external experts are asked to “infiltrate” or break a system. It acts as a test of where your defenses can go wrong, so you can fix them.
Red teaming is used by major AI companies to find issues in their models, but it isn’t yet widespread as a practice for public use. That will change in 2025.
The law firm DLA Piper, for instance, now uses red teaming with lawyers to test directly whether AI systems are in compliance with legal frameworks. My nonprofit, Humane Intelligence, builds red teaming exercises with nontechnical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red teaming exercise that was supported by the White House. In 2025, our red teaming events will draw on the lived experience of regular people to evaluate AI models for Islamophobia, and for their capacity to enable online harassment against women.
Overwhelmingly, when I host one of these exercises, the most common question I’m asked is how we can evolve from identifying problems to fixing problems ourselves. In other words, people want a right to repair.
An AI right to repair might look like this: a user could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company. Third-party groups, like ethical hackers, could create patches or fixes for problems that anyone can access. Or you could hire an independent accredited party to evaluate an AI system and customize it for you.
While this is an abstract idea today, we’re setting the stage for a right to repair to become a reality. Overturning the current, dangerous power dynamic will take some work—we are being pushed to normalize a world in which AI companies simply put new and untested AI models into real-world systems, with regular people as the collateral damage. A right to repair would give every person the ability to control how AI is used in their lives. 2024 was the year the world woke up to the pervasiveness and impact of AI. 2025 is the year we demand our rights.
Scooped by Carol Hancox, November 30, 2024 6:12 PM
Newly published research finds that the flashing lights on police cruisers and ambulances can cause “digital epileptic seizures” in image-based automated driving systems, potentially risking wrecks.
Scooped by Carol Hancox, November 25, 2024 7:03 AM
The tech is being used to automatically control the creatures' access to feeders at sites across the UK.
Scooped by Carol Hancox, January 12, 12:01 AM
The appetite for AI-derived drivel isn’t as strong as many publishers would have you believe, and demand for quality content is growing.
Scooped by Carol Hancox, December 26, 2024 12:12 AM
In this 2024 review, we look at the impact of AI on data storage, what’s needed to support AI during training and inference, and storage suppliers’ responses to the rise of AI.
Scooped by Carol Hancox, December 23, 2024 7:40 PM
The hype is fading, and people are asking what generative artificial intelligence is really good for. So far, no one has a decent answer.
Scooped by Carol Hancox, December 19, 2024 7:02 PM
AI risks arise not from AI acting on its own, but because of what people do with it.
Scooped by Carol Hancox, December 18, 2024 7:23 PM
AI is replacing the humans who pretend to be OnlyFans stars in online amorous messages.
Scooped by Carol Hancox, December 12, 2024 7:35 PM
A new version of Google’s flagship AI model shows how the company sees AI transforming personal computing, web search, and perhaps the way people interact with the physical world.
Scooped by Carol Hancox, December 11, 2024 2:22 AM
Researchers hacked several robots infused with large language models, getting them to behave dangerously—and pointing to a bigger problem ahead.
Scooped by Carol Hancox, December 11, 2024 1:19 AM
People use AI pals for all sorts of reasons. Here’s what happens when you take one on a solo trip to Japan.
Scooped by Carol Hancox, December 10, 2024 6:40 PM
From personal trainers to in-person therapy, increasingly only the wealthy have access to human connection. What are the options for the less advantaged?
Scooped by Carol Hancox, November 30, 2024 6:33 PM
SCOOP: The agency dedicated to protecting new innovations prohibited almost all internal use of GenAI tools, though employees can still participate in controlled experiments.
Scooped by Carol Hancox, November 30, 2024 6:29 PM
A growing movement to allow access to algorithmic workings won’t stop the ubiquitous spread of artificial intelligence, but it could restore public confidence in it.
Scooped by Carol Hancox, November 30, 2024 6:05 PM
Billions of dollars in hardware and exorbitant use costs are squashing AI innovation. LLMs need to get leaner and cheaper if progress is to be made.