Current:Home > ContactExperts issue a dire warning about AI and encourage limits be imposed -Capitatum
Experts issue a dire warning about AI and encourage limits be imposed
Charles H. Sloan View
Date:2025-04-06 08:11:10
A statement from hundreds of tech leaders carries a stark warning: artificial intelligence (AI) poses an existential threat to humanity. With just 22 words, the statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."
Among the tech leaders, CEOs and scientists who signed the statement that was issued Tuesday is Scott Niekum, an associate professor who heads the Safe, Confident, and Aligned Learning + Robotics (SCALAR) lab at the University of Massachusetts Amherst.
Niekum tells NPR's Leila Fadel on Morning Edition that AI has progressed so fast that the threats are still uncalculated, from near-term impacts on minority populations to longer-term catastrophic outcomes. "We really need to be ready to deal with those problems," Niekum said.
This interview has been edited for length and clarity.
Interview Highlights
Does AI, if left unregulated, spell the end of civilization?
"We don't really know how to accurately communicate to AI systems what we want them to do. So imagine I want to teach a robot how to jump. So I say, "Hey, I'm going to give you a reward for every inch you get off the ground." Maybe the robot decides just to go grab a ladder and climb up it and it's accomplished the goal I set out for it. But in a way that's very different from what I wanted it to do. And that maybe has side effects on the world. Maybe it's scratched something with the ladder. Maybe I didn't want it touching the ladder in the first place. And if you swap out a ladder and a robot for self-driving cars or AI weapon systems or other things, that may take our statements very literally and do things very different from what we wanted.
Why would scientists have unleashed AI without considering the consequences?
There are huge upsides to AI if we can control it. But one of the reasons that we put the statement out is that we feel like the study of safety and regulation of AI and mitigation of the harms, both short-term and long-term, has been understudied compared to the huge gain of capabilities that we've seen...And we need time to catch up and resources to do so.
What are some of the harms already experienced because of AI technology?
A lot of them, unfortunately, as many things do, fall with a higher burden on minority populations. So, for example, facial recognition systems work more poorly on Black people and have led to false arrests. Misinformation has gotten amplified by these systems...But it's a spectrum. And as these systems become more and more capable, the types of risks and the levels of those risks almost certainly are going to continue to increase.
AI is such a broad term. What kind of technology are we talking about?
AI is not just any one thing. It's really a set of technologies that allow us to get computers to do things for us, often by learning from data. This can be things as simple as doing elevator scheduling in a more efficient way, or ambulance versus ambulance figuring out which one to dispatch based on a bunch of data we have about the current state of affairs in the city or of the patients.
It can go all the way to the other end of having extremely general agents. So something like ChatGPT where it operates in the domain of language where you can do so many different things. You can write a short story for somebody, you can give them medical advice. You can generate code that could be used to hack and bring up some of these dangers. And what many companies are interested in building is something called AGI, artificial general intelligence, which colloquially, essentially means that it's an AI system that can do most or all of the tasks that a human can do at least at a human level.
veryGood! (8274)
Related
- Paris Hilton, Nicole Richie return for an 'Encore,' reminisce about 'The Simple Life'
- Minneapolis' LUSH aims to become nation's first nonprofit LGBTQ+ bar, theater
- A Second Wind For Wind Power?
- 'True Detective: Night Country' tweaks the formula with great chemistry
- What to know about Tuesday’s US House primaries to replace Matt Gaetz and Mike Waltz
- NBA All-Star Game highlights: East dazzles in win over West as Damian Lillard wins MVP
- Here are 6 movies to see this spring
- Men's college basketball bubble winners and losers: TCU gets big win, Wake Forest falls short
- Angelina Jolie nearly fainted making Maria Callas movie: 'My body wasn’t strong enough'
- What is Presidents Day and how is it celebrated? What to know about the federal holiday
Ranking
- Trump wants to turn the clock on daylight saving time
- 'Sounded like a bomb': Ann Arbor house explosion injures 1, blast plume seen for miles
- Tech giants pledge crackdown on 2024 election AI deepfakes. Will they keep their promise?
- What to know about the debut of Trump's $399 golden, high-top sneakers
- Rylee Arnold Shares a Long
- Virginia bank delays plans to auction land at resort owned by West Virginia governor’s family
- Russia says it has crushed the last pocket of resistance in Avdiivka to complete the city’s capture
- Lenny Kravitz Details His Inspirational Journey While Accepting Music Icon Award at 2024 PCAs
Recommendation
Former Syrian official arrested in California who oversaw prison charged with torture
Teen arrested after young girl pushed into fire, mother burned rescuing her: Authorities
OpenAI's new text-to-video tool, Sora, has one artificial intelligence expert terrified
2 officers, 1 first responder shot and killed at the scene of a domestic call in Minnesota
South Korea's acting president moves to reassure allies, calm markets after Yoon impeachment
Get Long, Luxurious Lashes with These Top-Rated Falsies, Mascaras, Serums & More
Kelly Osbourne says Ozempic use is 'amazing' after mom Sharon's negative side effects
Ex-YouTube CEO’s son dies at UC Berkeley campus, according to officials, relative