Interview with Microsoft CEO Nadella: What is the general direction and future of artificial intelligence?

Author: chiming · Date: 2017/09/29 · Reads: 5940

Editor's note: What should the future of artificial intelligence look like? In what direction should it develop? What should we pay attention to as it develops? These are questions that everyone following the development of science and technology is thinking about. Recently, Tim O'Reilly, founder and CEO of O'Reilly Media, published an article on LinkedIn in which he interviewed Microsoft CEO Satya Nadella. In it, they discuss the general direction and future of artificial intelligence and give their own views on the problems that arise as the technology develops. The article was compiled by 36氪.


I met Satya Nadella in April this year to interview him for my new book, "WTF? What's the Future and Why It's Up to Us" (translator's note: released on October 10, 2017). But most of what I want to discuss in this article is not my book, but Satya's memoir, "Hit Refresh: The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone" (translator's note: released on September 26, 2017).

We are both optimistic about the development of technology, and we firmly believe that the challenge posed by artificial intelligence is to make us and our society define what is truly human. The future should be a world where artificial intelligence augments and enhances human capabilities, rather than rendering humans useless. This is the basis of our conversation.

Tim: You mention in the book that the challenge posed by artificial intelligence will be to define an ambitious and inspiring social goal. You write: "In 1961, President Kennedy committed Americans to reaching the moon, a goal chosen in large part because of the enormous technical challenges it posed and the global collaboration it required. Likewise, we need to set a goal for artificial intelligence that is big enough to go beyond anything that can be achieved by improving existing technology." I really like this idea and wonder if you can expand on it.

Satya: If you could say, "Wow, a person with a visual impairment can now see," or "a person with dyslexia can now read," that would be a real breakthrough! And it would be very meaningful for us to be able to say, "AI is finally a truly inclusive technology," especially as we talk about the massive changes AI could bring to our world. Instead, we can guide AI to create true inclusion and full participation in our societies.

A lot of these ideas came from my own experience with my children. I have a child with special needs; he is "locked in." I always thought, "If only he could talk!" So when someone talks about brain-computer connectivity, I think, "Wow, now consider what that could do!"

In a world where we are going to create this technology, we can even say, "This is the ultimate technology, this is the path to superintelligence." The design principles that guide us are everything, and the moral philosophy that guides us is everything. In a sense, the biggest path-dependent decision we have to make will be "who gets to design superintelligence?"

Tim: A chapter in my own book is called "Our Skynet Moment." My point is that one aspect of artificial intelligence we don't talk about enough is that it can be a collective intelligence. We are already building collective intelligence mechanisms in the form of search engines, social media, and financial markets. And we have shown that these artificial intelligences, as we call them, are designed to optimize people's interests.

Satya: That's right. Despite our various cognitive biases, humans have great advantages. As Herbert Simon put it, we satisfice rather than optimize, and that is actually helpful. There is a contentment that is driven by our shared moral foundations. Now, when you say, "This is a problem to be optimized," which is what artificial intelligence does, then we have a problem to solve.

Tim: You also talk about augmenting human capabilities, experience, and intelligence. You say you want to focus on human talents such as creativity, empathy, emotion, physicality, and insight, which can be combined with artificial intelligence.

Satya: If there is an abundance of "artificial intelligence," the scarce commodity is true intelligence. And then you ask, "What is true wisdom?"

True wisdom will be found in the most human of qualities, empathy and compassion, and those will be the most important things. Before we can actually figure out how to embed them in a machine, we have to integrate them with human beings.

We must think about what impact education should have on human beings. With artificial intelligence, maybe you don't really need to learn the Fourier transform. You might practice mindfulness instead, or you might want to develop compassion. That's the point I'm making. Everyone has their own compulsions, and everyone needs to be educated, that's a fact. But have we overemphasized STEM (science, technology, engineering, and mathematics)?

Tim: There's a concept that has been at play throughout my career, which Clayton Christensen (one of the most influential business theorists today) called "the law of conservation of attractive profits." That is, when something that was originally of high value becomes commoditized, the things adjacent to it suddenly become valuable. The first time I heard this was around 2004, when he and I spoke at the same conference. In my talk, I discussed how Microsoft displaced IBM: PC hardware became a commodity, and software became valuable. My point was that open source would commoditize software, and something else would become valuable. Eventually I realized that this "something else" was going to be big data. Clay was talking about his law of conservation of attractive profits, and I realized, "We're talking about the same thing." This law applies completely to artificial intelligence as well. As the mechanical parts of human cognition are commoditized by AI, the parts that are uniquely human will become more valuable.

Satya: This is a very good statement.

Tim: So in the future economy, there are many opportunities based on care and creativity. The creative economy is more than just arts and culture. Every time products become a commodity, we find new ways to make them valuable by combining them with human creativity.

Satya: In fact, it's basically an obsession of ours, right? When I look at the Minecraft generation, or you could even say the Snapchat generation... to me, the most inspiring thing about this generation is that it has this crazy appetite for creation, for story. In Minecraft, boys and girls alike create worlds that exist only in their imaginations.

This is the generation I want to build for. You can see it even in the latest versions of Windows. We took a product that 100 million people use every day, Microsoft Paint, and said, "Let's make an absolutely amazing creative canvas for 3D creation." Because, in a world of augmented reality and mixed reality, that is going to become real.

Tim: You also talked about the ability to process large amounts of data and do pattern recognition faster, and about the kind of creativity that is expressed in the development of artificial intelligence. How do you see AI augmenting the creative process and coming up with creative ways to solve problems?

Satya: Traditional writing tools only help with spelling, grammar, and phrasing. Now we add artificial intelligence to them and they become complete writing aids. We're getting to a point where you can say, "I want to write like Faulkner," and the tool starts to say, "Here's what you might want to do."

I didn't go to writing school; I only went to engineering school. But I dreamed of being a great writer, and I need this artificial intelligence to help me.

I say this because I have seen Xiaoice, our conversational bot in China, and the way people talk with it: the number of topics and questions over a given period. It's really about people needing someone to talk to. People express their deepest thoughts to it. I thought, "Wow, this is where AI can be empathetic."

Tim: We've also discussed before the idea of AI becoming the third runtime. Let's talk about that idea and how it will happen.

Satya: When we last discussed this, I said: "The PC or mobile operating system was the first runtime; the second is the browser and the web; and the third is the personal assistant that knows you and the world, understands your context, and is helping you."

And then we said, "Okay, if it can think, perceive, and ultimately act at the same time, those three runtimes are there." This assistant essentially embodies an expansion of your mind and an enhancement of your perception: voice is one channel, vision is another, and it can act on your behalf directly. That is the ultimate runtime, right?

That's why in my book, I try to write about these three things:

  • 1) What is the ultimate computing experience? The ultimate computing experience is the real world integrated with computing. This is mixed reality.

  • 2) This will require AI to achieve breakthroughs in perception, cognition and action.

  • 3) Then I digressed slightly and said, "If you want to create that kind of computing power, you have to break the shackles of the von Neumann architecture and classical computing."

Can we talk about quantum computing? That's the third thing I wrote about.

Tim: I'm a little surprised that quantum computing takes up so much space in your book.

Satya: The progress is real; I mean, we're already deep into it. But the question is how quickly it will become a reality, and I have no idea. But it's no longer a question of "Wow, is this even possible?"

Tim: A very important sentence in your book is: "We want not just intelligent machines, but machines that understand." A big concern about artificial intelligence at the moment is that we won't really understand how these systems draw their conclusions. Of course, this applies to humans as well. Because we tend to focus only on the rational side of the human mind, we don't realize how much of what we do is completely incomprehensible to us, too. So you could say this is the last cry of the rationalists.

Satya: Certainly. But I have two ways of managing the risk. One is accountability: "What happens when you're the designer, you create an artificial intelligence, and you need to be responsible for it?" The whole experience with Tay really had a big impact on our thinking. In fact, it was kind of crazy, because so many people were targeting it and feeding it training data that could hijack the algorithm. Whose responsibility is that? I dare say it is ours. This is our responsibility.

Tim: Well, clearly this is the Facebook and fake-news problem. I've had very interesting conversations with security researchers about how to fool self-driving cars: how can you hack them by creating something in the visual field that they will misinterpret? In the real world there will be all kinds of adversarial interference, so we also need to design more robust artificial intelligence.

Satya: When people say, "Oh, humans, these DNNs (deep neural networks) are magic; they just do things we don't even understand"... I agree there is some truth behind that statement, but when it comes to the choice of the number of layers and the weights, humans make many decisions throughout the training process. We shouldn't be so quick to disown the algorithms we create, and we should be held accountable for the decisions they make. I guess I would say, "Okay, what are the ethics of programming artificial intelligence?"
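
Nadella's point here is that a deep neural network embodies a stack of explicit human decisions. Purely as an illustrative sketch (none of it from the interview or from Microsoft), the short PyTorch snippet below marks where those decisions are made; every name and number in it is an assumption chosen only for illustration.

    # Illustrative only: the designer, not the machine, makes each of these choices.
    import torch
    import torch.nn as nn

    model = nn.Sequential(       # human choice: the architecture itself
        nn.Linear(784, 256),     # human choice: number of layers and their widths
        nn.ReLU(),               # human choice: activation function
        nn.Linear(256, 10),
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # human choice: optimizer and learning rate
    loss_fn = nn.CrossEntropyLoss()                           # human choice: the objective being optimized

    # human choice: the training data itself (the lesson of Tay)
    for x, y in []:  # placeholder; a real data loader would go here
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

Each commented line is a point where, in Nadella's terms, accountability attaches to the designer rather than to the algorithm.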

Note: The interview has been abridged.

Original link:https://www.linkedin.com/pulse/conversation-satya-nadella-his-new-book-hit-refresh-tim-o-reilly/

Produced by the compilation team. Editor: Hao Pengcheng.
