Experts warn that AI is an extinction-level threat, and I hope they stop scaring us

Author: techradar | Time: 2023/06/01 | Views: 6561

The people building the world's most advanced AI have just signed an urgent and brief statement warning that mitigating AI's extinction-level capabilities is now one of our most pressing problems.

No, I'm not exaggerating. The statement comes from the Center for AI Safety, which says it was signed by OpenAI CEO Sam Altman, Google DeepMind's Demis Hassabis, and Turing Award winners Yoshua Bengio and Geoffrey Hinton:

"Mitigating the risk of AI extinction should be a global priority, in addition to other societal-scale risks such as pandemics and nuclear war."

Imagine your car manufacturer yelling at you as you drive out of a parking lot: "The car might kill you, all your friends, and everyone you know!"

In 22 words, the center paints a dire picture of an out-of-control AI that is already plotting our demise.

While none of the co-signatories added any color to their signatures, Altman made his position clear weeks ago when he spoke before Congress about the need to regulate AI. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman told lawmakers. So it's not hard to believe he signed a statement calling for AI risk mitigation. Still, "risk of extinction" is on another level, perhaps even a hysterical one.

If you're wondering what all the fuss is about, you probably haven't had your first conversation with a chatbot yet. No, not the dumb ones that can answer a question or two and then lose the thread. I mean the generative AI bots built on large language models (LLMs), which leverage their training and their ability to predict the next most likely word in a sentence to produce exceptionally realistic responses to almost any query. OpenAI's ChatGPT is the most well-known and popular of the bunch, but Google's Bard and Microsoft's Bing AI aren't far behind.

ChatGPT-like features are also spreading like weeds thanks to OpenAI's dead-simple plugin tools.

And for every "AI develops new cancer treatment in 30 days" story, there's another about an AI doing its best to find a way to destroy humanity.

With artificial intelligence, things are moving fast. It no longer feels like a slow cable-car ride up a steep mountainside, but more like a sled coated in Crisco spray careening down the hill, out of control and flattening everything in its path.

However, the reality of AI in 2023 may lie somewhere in the middle.

We have been afraid for a long time

Here's another headline for you: "Scientists Worry Machines May Outsmart Man." That one is from The New York Times in 2009. At the time, a group of concerned scientists met in Monterey Bay, California, to discuss what they believed were the real risks posed by artificial intelligence. "Their concern is that further advances could create profound social disruptions and even have dangerous consequences," wrote Times journalist John Markoff.

Their main focus was on AI-engineered polymorphic computer viruses that could defy tracking, blocking, and eradication; drones that could kill autonomously; and systems that could simulate empathy when interacting with humans.

They foresaw the possibility of mass unemployment as AIs took on many of our most repetitive and boring jobs, and they were particularly concerned about criminals hijacking systems that could masquerade as humans.

Just as today's alarm bells are being rung by the very people who brought us the most exciting advances in artificial intelligence, the 2009 doomsday discussion was organized by the Association for the Advancement of Artificial Intelligence (AAAI).

I was a little taken aback at the time that so many smart people wanted to hobble this brand-new technology as it progressed from gestating fetus to crawling toddler.

Right now, though, AI is a rowdy teenager: often too smart for its own good, highly engaged, and hallucinating because it doesn't know any better.

Regulate, please

Teenagers need rules, so I agree 100% that AI regulation (preferably at a global level) is needed. However, I don't see how this hysterical talk of "extinction-level" events helps anyone. It's the kind of fearmongering that could short-circuit AI development, sending consumers who simply don't understand AI to storm Dr. Frankenstein's castle and burn it to the ground.

AI is not a monster, and neither are the people who developed it. These concerns are well founded, but the immediate danger just isn't there yet. These systems are still a bit dumb and often get things wrong. We are at greater risk from junk data and information overload than from catastrophic events.

That is a big problem in itself. Consumers now have almost unlimited access to these powerful tools, but what they don't seem to understand is that AI can still get things wrong just as easily as it can discover a once-in-a-lifetime cure.

We cannot ignore the risks, but inflammatory rhetoric will not help us mitigate them. There is a big difference between warning that AI poses an existential threat and saying it could lead to our extinction. One is vague and, honestly, hard for people to imagine, and the other conjures up images of an asteroid hitting Earth and ending us all.

What we need are smarter discussions and real action on regulation: guardrails, not roadblocks. We will only get there if we stop building walls of AI terror.
