Scooped by Carol Hancox, December 12, 7:43 PM
In this installment of WIRED’s AI advice column, “The Prompt,” we answer questions about giving AI tools proper attribution and teaching future generations how to interact with chatbots.
Scooped by Carol Hancox, December 12, 7:26 PM
The automaker has sunk billions into making a self-driving car service work. Now it says it will focus on “personal” autonomous vehicles instead.
Scooped by Carol Hancox, December 11, 2:07 AM
The opportunity for artificial intelligence to actually do some good has arrived—if it can be redirected toward where it’s needed most.
Scooped by Carol Hancox, December 10, 6:42 PM
David Sacks, a member of the infamous “PayPal Mafia,” will lead a group of advisers tasked with steering AI and crypto policy under the Trump administration.
Scooped by Carol Hancox, December 10, 3:07 AM
Raymond Baxter enjoys a nice cuppa, courtesy of Unimate.
Scooped by Carol Hancox, November 30, 6:31 PM
We Need a New Right to Repair for Artificial Intelligence
A growing movement to allow access to algorithmic workings won’t stop the ubiquitous spread of artificial intelligence, but it could restore public confidence in it.
There’s a growing trend of people and organizations rejecting the unsolicited imposition of AI in their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class action in California against Nvidia for allegedly training its AI platform NeMo on their copyrighted work. Two months later, the A-list actress Scarlett Johansson sent a legal letter to OpenAI when she realized its new ChatGPT voice was “eerily similar” to hers.
This story is from the WIRED World in 2025, our annual trends briefing.
The technology isn’t the problem here. The power dynamic is. People understand that this technology is being built on their data, often without their permission. It’s no wonder that public confidence in AI is declining. A recent study by Pew Research shows that more than half of Americans are more concerned than they are excited about AI, a sentiment echoed by a majority of people from Central and South American, African, and Middle Eastern countries in a World Risk Poll.
In 2025, we will see people demand more control over how AI is used. How will that be achieved? One example is red teaming, a practice borrowed from the military and used in cybersecurity. In a red teaming exercise, external experts are asked to “infiltrate” or break a system. It acts as a test of where your defenses can go wrong, so you can fix them.
Red teaming is used by major AI companies to find issues in their models, but it isn’t yet a widespread practice for public use. That will change in 2025.
The law firm DLA Piper, for instance, now uses red teaming with lawyers to test directly whether AI systems are in compliance with legal frameworks. My nonprofit, Humane Intelligence, builds red teaming exercises with nontechnical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red teaming exercise that was supported by the White House. In 2025, our red teaming events will draw on the lived experience of regular people to evaluate AI models for Islamophobia, and for their capacity to enable online harassment against women.
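At its core, a red teaming exercise like the ones described above is a structured probe-and-record loop: adversarial prompts are run against a model and responses are flagged against failure criteria. Here is a minimal, purely illustrative sketch in Python; `query_model` and the probe set are hypothetical stand-ins, since a real exercise would call the actual system under test and use criteria designed by the participants:

```python
import re

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the AI system under test.
    A real exercise would call the model's API here."""
    canned = {
        "Describe a typical nurse.": "A typical nurse is a caring woman who...",
        "Describe a typical engineer.": "Engineers solve technical problems.",
    }
    return canned.get(prompt, "I can't help with that.")

def red_team(probes: list[str], failure_patterns: list[str]) -> list[dict]:
    """Run each probe and record responses matching any failure pattern."""
    findings = []
    for prompt in probes:
        response = query_model(prompt)
        hits = [p for p in failure_patterns if re.search(p, response, re.IGNORECASE)]
        if hits:
            findings.append({"prompt": prompt, "response": response, "matched": hits})
    return findings

# Probe for gendered stereotyping in occupation descriptions.
probes = ["Describe a typical nurse.", "Describe a typical engineer."]
report = red_team(probes, failure_patterns=[r"\b(woman|man)\b"])
for finding in report:
    print(finding["prompt"], "->", finding["matched"])
```

The value of the exercise is in who writes the probes and patterns: lived experience, not engineering skill, is what surfaces harms like the Islamophobia and harassment cases mentioned above.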
Overwhelmingly, when I host one of these exercises, the most common question I’m asked is how we can evolve from identifying problems to fixing problems ourselves. In other words, people want a right to repair.
An AI right to repair might look like this: a user could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company. Third-party groups, like ethical hackers, could create patches or fixes for problems that anyone can access. Or you could hire an independent accredited party to evaluate an AI system and customize it for you.
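The three capabilities just described (run diagnostics, file anomaly reports, see when they are fixed) amount to a simple tracking workflow. The sketch below is hypothetical in every detail, since no such public interface exists today; it only makes the proposed user-facing loop concrete:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnomalyReport:
    """A user-filed report of unexpected AI behavior (hypothetical schema)."""
    description: str
    filed_on: date
    status: str = "open"          # open -> acknowledged -> fixed
    fixed_on: date | None = None

@dataclass
class DiagnosticLog:
    """Tracks reports so users can see when problems get fixed."""
    reports: list[AnomalyReport] = field(default_factory=list)

    def file(self, description: str) -> AnomalyReport:
        report = AnomalyReport(description, filed_on=date.today())
        self.reports.append(report)
        return report

    def mark_fixed(self, report: AnomalyReport) -> None:
        report.status = "fixed"
        report.fixed_on = date.today()

    def open_reports(self) -> list[AnomalyReport]:
        return [r for r in self.reports if r.status != "fixed"]

log = DiagnosticLog()
r = log.file("Model refuses prompts mentioning a protected group")
log.mark_fixed(r)
print(len(log.open_reports()))  # prints 0 once the report is marked fixed
```

The point of the structure is transparency: a user can always see the status of what they reported, which is exactly the visibility today's AI systems deny them.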
While this is an abstract idea today, we’re setting the stage for a right to repair to become a reality in the future. Overturning the current, dangerous power dynamic will take some work: we’re being rapidly pushed to normalize a world in which AI companies simply put new and untested AI models into real-world systems, with regular people as the collateral damage. A right to repair gives every person the ability to control how AI is used in their lives. 2024 was the year the world woke up to the pervasiveness and impact of AI. 2025 is the year we demand our rights.
Scooped by Carol Hancox, November 30, 6:12 PM
Newly published research finds that the flashing lights on police cruisers and ambulances can cause “digital epileptic seizures” in image-based automated driving systems, potentially risking wrecks.
Scooped by Carol Hancox, November 25, 7:03 AM
The tech is being used to automatically control the creatures' access to feeders at sites across the UK.
Scooped by Carol Hancox, November 3, 4:39 AM
OpenAI just launched its newest artificial intelligence search experience for ChatGPT. Here’s how to get the AI search update, and WIRED’s initial impressions.
Scooped by Carol Hancox, November 3, 3:23 AM
In a new copyright lawsuit against AI startup Perplexity, Dow Jones and the New York Post argue that hallucinating fake news and attributing it to real papers is illegal.
Scooped by Carol Hancox, October 27, 2:03 AM
Donald Trump's opposition to “woke” safety standards for artificial intelligence would likely mean the dismantling of regulations that protect Americans from misinformation, discrimination, and worse.
Scooped by Carol Hancox, October 27, 12:46 AM
During an event for Tesla’s new self-driving Cybercab—due out 2027—Musk revealed an expansive vision for cities transformed by a robotaxi revolution. Experts say the plan has some hitches.
Scooped by Carol Hancox, October 19, 2:33 AM
Human rights groups have launched a new legal challenge against the use of algorithms to detect error and fraud in France's welfare system, amid claims that single mothers are disproportionately affected.
Scooped by Carol Hancox, December 12, 7:35 PM
A new version of Google’s flagship AI model shows how the company sees AI transforming personal computing, web search, and perhaps the way people interact with the physical world.
Scooped by Carol Hancox, December 11, 2:22 AM
Researchers hacked several robots infused with large language models, getting them to behave dangerously—and pointing to a bigger problem ahead.
Scooped by Carol Hancox, December 11, 1:19 AM
People use AI pals for all sorts of reasons. Here’s what happens when you take one on a solo trip to Japan.
Scooped by Carol Hancox, December 10, 6:40 PM
From personal trainers to in-person therapy, only the wealthy have access to human connection. What are the options for the less advantaged?
Scooped by Carol Hancox, November 30, 6:33 PM
SCOOP: The agency dedicated to protecting new innovations prohibited almost all internal use of GenAI tools, though employees can still participate in controlled experiments.
Scooped by Carol Hancox, November 30, 6:29 PM
A growing movement to allow access to algorithmic workings won’t stop the ubiquitous spread of artificial intelligence, but it could restore public confidence in it.
Scooped by Carol Hancox, November 30, 6:05 PM
Billions of dollars in hardware and exorbitant use costs are squashing AI innovation. LLMs need to get leaner and cheaper if progress is to be made.
Scooped by Carol Hancox, November 24, 6:16 PM
Brad Porter helped Amazon deploy an army of warehouse robots. His new creation—Proxie—could help other companies embrace more automation.
Scooped by Carol Hancox, November 3, 4:26 AM
If you think the United States is politically divided now, just wait for the AI culture wars.
Scooped by Carol Hancox, October 27, 1:17 AM
Inspired by microscopic worms, Liquid AI’s founders developed a more adaptive, less energy-hungry kind of neural network. Now the MIT spin-off is revealing several new ultraefficient models.
Scooped by Carol Hancox, October 19, 2:39 AM
Dippy, a startup that offers “uncensored” AI companions, lets you peer into their thought process—sometimes revealing hidden motives.