Silicon Valley debate: Will AI destroy humanity?

Author: Dongzi  Time: 2023/05/22  Views: 4335

News on May 22: As new technologies such as generative artificial intelligence become the latest craze in the technology world, the debate over whether artificial intelligence will destroy humanity has intensified. Prominent tech leaders have warned that artificial intelligence could take over the world; other researchers and executives say such claims are science fiction.

At a U.S. congressional hearing last week, Sam Altman, chief executive of artificial intelligence start-up OpenAI, warned plainly that the technology his company is releasing carries safety risks.

Altman warned that AI technologies such as the ChatGPT chatbot could lead to problems like disinformation and malicious manipulation, and he called for regulation.

He said artificial intelligence could "do serious harm to the world".

Altman's testimony before Congress comes as the debate over whether artificial intelligence will take over the world moves into the mainstream, with growing divisions across Silicon Valley and among those working to advance the technology.

The once-fringe idea that machines might suddenly outsmart humans and decide to destroy them is gaining traction. Some leading scientists even believe the timeline for computers to learn to outsmart humans and take control is shrinking.

But many researchers and engineers say that while fears of a killer AI like Skynet in the movie "Terminator" are widespread, they are not grounded in sound logic or science. Instead, such fears distract from the real problems the technology is already causing, including those Altman described in his testimony: AI tools are muddying copyright, exacerbating concerns about digital privacy and surveillance, and could be used to sharpen hackers' ability to breach cyber defenses.

Google, Microsoft, and OpenAI have all publicly announced breakthrough AI technologies. These technologies can hold complex conversations with users and generate images based on simple text prompts. The debate over nefarious artificial intelligence has heated up.

"This is not science fiction," said Geoffrey Hinton, the godfather of artificial intelligence and a former Google employee. Hinton said artificial intelligence smarter than humans could arrive within five to 20 years, compared with his previous estimate of 30 to 100 years.

"It's as if aliens have landed on Earth or are about to land," he said. "We really can't take it because they're fluent, they're useful, they write poetry, they answer boring letters. But they're really aliens."

Still, inside the big tech companies, many engineers with close ties to the technology don't think artificial intelligence replacing humans is something we need to worry about right now.

"Of the researchers active in the discipline, there are far more people who are concerned with the real-world risks of the moment than are those who People who are concerned about whether humans are at risk to survival."

There are many real risks at present. Chatbots trained on flawed content can deepen problems such as prejudice and discrimination. The vast majority of AI training data is in English and comes mainly from North America or Europe, which could push the Internet further away from the languages and cultures of most of the world's people. These bots also frequently fabricate false information and present it as fact, and in some cases they have spiraled into endless conversation loops that attack users. The knock-on effects of the technology are also unclear: every industry is bracing for the disruption or transformation AI may bring, and even high-paying jobs such as those of lawyers and doctors could be replaced.

Some people also believe that artificial intelligence may harm human beings in the future, and even control the entire society in some way. While existential risks to humanity appear more dire, many argue that they are harder to quantify and less specific.

"There is a group of people who think that these are just algorithms. They are just repeating what they see online." Google CEO Sundar Pichai said in an interview in April this year: "There is also a view that , these algorithms are emerging with new properties, creativity, reasoning and planning." "We need to be careful about this."

The debate stems from continuous breakthroughs in machine learning techniques in computer science over the past 10 years. Machine learning creates software and techniques that can extract novel insights from vast amounts of data without explicit human instruction. This technique is ubiquitous in applications as diverse as social media algorithms, search engines and image recognition programs.
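As a minimal, hypothetical sketch of that idea, the short Python example below derives a rule from labeled examples rather than having a human write the rule by hand; the data, names and threshold logic are all invented for illustration and do not come from the article.

# Toy "machine learning": extract a rule from example data instead of hard-coding it.
# Labeled examples (hours of daylight, street lights on?) -- every value is made up.
examples = [(8.0, True), (9.5, True), (11.0, False), (13.0, False), (14.5, False)]

def learn_threshold(data):
    """Pick the daylight threshold that best separates True from False examples."""
    best, best_errors = None, len(data) + 1
    for candidate, _ in sorted(data):
        errors = sum((hours <= candidate) != label for hours, label in data)
        if errors < best_errors:
            best, best_errors = candidate, errors
    return best

threshold = learn_threshold(examples)          # the "insight" extracted from the data
print(f"lights on when daylight <= {threshold} hours")   # learned, not hand-written
print("prediction for 10 hours of daylight:", 10 <= threshold)

The point is only that the rule comes out of the data; the same principle, scaled up enormously, underlies the systems described here.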

Last year, OpenAI and several other small companies began releasing tools that use a new machine-learning technique: generative artificial intelligence. The so-called large language models behind them, trained on trillions of words and images scraped from the web, can generate images and text from simple prompts, hold complex conversations with users and write computer code.

Anthony Aguirre, executive director of the Future of Life Institute, said big companies are racing to develop ever-smarter machines with little oversight. The Future of Life Institute was established in 2014 to study existential risks in society. Funded by Tesla CEO Elon Musk, the institute began studying the possibility of artificial intelligence destroying humanity in 2015.

If AI acquires reasoning abilities better than humans', it will try to seize control of itself, Aguirre said, and that is as much a cause for concern as the real problems that already exist.

"How to keep them on track is going to get more and more complicated," he said. "A lot of science fiction has been very specific."

In March, Aguirre helped write an open letter calling for a six-month moratorium on training new AI models. The letter drew some 27,000 signatures, including Yoshua Bengio, a senior artificial intelligence researcher who won computer science's highest award in 2018, and Emad Mostaque, CEO of one of the most influential artificial intelligence start-ups.

Musk is certainly the most notable of them all. He helped found OpenAI and is now busy forming AI companies himself, most recently investing in the expensive computer equipment needed to train AI models.

Musk has argued for years that humans should be careful about the consequences of developing superintelligent AI. In an interview on the sidelines of Tesla's annual shareholder meeting last week, Musk said he funded OpenAI because he felt Google co-founder Larry Page was "casual" about the threat of artificial intelligence.

The question-and-answer site Quora is also developing its own artificial intelligence model. Its CEO, Adam D'Angelo, did not sign the open letter. "People have different motivations for making this proposal," he said of the letter.

OpenAI CEO Altman also declined to endorse the open letter. He said he agreed with parts of it, but that its overall lack of "technical detail" made it the wrong way to regulate AI. At a hearing on artificial intelligence last Tuesday, Altman said his company's approach is to release AI tools to the public early, so that problems can be found and fixed before the technology becomes more powerful.

But there's a growing debate in the tech world about killer robots. Some of the harshest criticism has come from researchers who have been studying the technology's flaws for years.

In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-authored a paper with University of Washington scholars Emily M. Bender and Angelina McMillan-Major. They argued that the growing ability of large language models to mimic humans exacerbates the risk that people will assume the models have feelings.

Instead, they argued, these models should be understood as "stochastic parrots": very good at predicting which word comes next in a sentence based purely on probability, without needing to understand what they are saying. Other critics have called large language models "autocomplete" or a "knowledge enema."
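As a rough, hypothetical illustration of that criticism, the toy Python sketch below "writes" text only by counting which word most often follows which in a tiny made-up corpus; nothing in it is drawn from any real model or from the paper.

from collections import Counter, defaultdict

# A made-up corpus; a real model would see trillions of words, not fourteen.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_text(start, length=5):
    """Repeatedly append the statistically most likely next word -- no understanding involved."""
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # fluent-looking but purely probability-driven output

Real large language models are vastly more sophisticated, but the critics' point is that the training objective is still next-word prediction.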

They documented in detail how large language models can end up generating sexist and otherwise harmful content. Gebru said Google suppressed the paper and then fired her after she insisted on making it public. A few months later, the company also fired Mitchell.

Four collaborators on the paper also wrote a letter in response to the open letter signed by Musk and others.

"It is dangerous to distract ourselves with a fantasized AI utopia or apocalypse," they said. "Instead, we should focus on the very real and very present exploitative practices of the companies developing these systems, which are rapidly concentrating power and exacerbating social inequality."

Google declined to comment on Gebru's firing at the time, but said there are still many researchers working on responsible and ethical AI.

"There's no question that modern AI is powerful, but that doesn't mean they're an existential threat to humanity," said Hooker, director of AI research at Cohere.

Currently, much of the discussion about AI breaking free from human control focuses on how quickly it can overcome its own limitations, much like Skynet in The Terminator.

"Most technologies and the risks that exist in technologies change incrementally," Hook said. "Most risks are exacerbated by the technological constraints that exist today."

Last year, Google fired artificial intelligence researcher Blake Lemoine, who had said in an interview that he firmly believed Google's LaMDA model was sentient. At the time, Lemoine was heavily rebuked by many in the industry. But a year later, many people in the technology world have begun to come around to his point of view.

Hinton, the former Google researcher, said he changed his mind about the technology's potential dangers only recently, after working with the latest AI models. He posed complex questions to the programs that, in his view, required the models to understand his requests in a general way rather than simply predict likely answers from their training data.

In March, Microsoft researchers working with OpenAI's latest model, GPT-4, said they had observed "sparks of artificial general intelligence," referring to AI that can think for itself the way humans do.

Microsoft has spent billions of dollars partnering with OpenAI to develop its Bing chatbot. Skeptics note that Microsoft is building its public image around AI technology and has much to gain from the perception that the technology is more advanced than it really is.

In their paper, the Microsoft researchers argue that the technology developed a spatial and visual understanding of the world based solely on the text it was trained on. GPT-4 can draw unicorns on its own and describe how to stack a random assortment of objects, including eggs, in such a way that the eggs do not break.

"In addition to mastering language, GPT-4 can solve a variety of complex new problems involving mathematics, programming, vision, medicine, law, psychology, etc., without requiring any special prompts," the Microsoft Research team wrote. In conclusion, artificial intelligence is comparable to human beings in many fields.

But one of the researchers acknowledged that defining "intelligence" remains tricky, despite attempts by artificial intelligence researchers to develop quantitative criteria for assessing how intelligent machines are.

"They all have problems or are controversial," he said.
