Kai-Fu Lee: I disagree with Elon Musk’s “AI will destroy humanity”


Frankly speaking, I do not agree with Elon Musk’s view that “AI will destroy humanity”. My position rests on five specific points:

  1. AI is just a tool and far from superintelligence. Based on my 37 years of experience in AI research, development, and investment, outrageous statements like “superintelligence” and “the extinction of humanity” have no practical basis in engineering.

  2. Think tanks and scientists should discuss AI safety issues and the changes AI is bringing to society, but opinion leaders in the tech community should not mislead the public at this stage by telling them that AI will control or destroy mankind. Making such statements is irresponsible. Since most people have only a limited knowledge of AI, such claims can cause mass panic that has no basis in reality.

  3. AI can be a means of creating major wealth, and may even gradually resolve humanity’s basic needs for food and clothing. Everyone will also gain more time and freedom to do the things we enjoy most.

  4. The real problem AI will create is unemployment, which in turn may cause depression, a loss of purpose, and even widening inequality between rich and poor. The occupational structure society has grown accustomed to will change drastically.

  5. AI presents us with a guaranteed, major opportunity, and over time it will bring problems of its own. “Superintelligence”, by contrast, has an extremely low, near-zero probability at this point, and it is unreasonable to exaggerate it. Doing so will cause people to fear AI or even want to suppress it, when instead they should embrace the opportunity and work on solving the real problems.


What Are the Big-shots in the Field of AI Saying?

The following ideas include some of my own as well as the consensus of many experts in the field. If we were to ask leading AI experts, most of them would certainly agree with Zuckerberg and myself, for example the renowned roboticist Rodney Brooks.

Rodney Brooks’s opinion is: “Many people are saying that AI poses a threat to mankind, including people like Stephen Hawking and Sir Martin Rees. This opinion is very common among those who are not directly working in the AI field. It is understandable that people outside the field have a hard time grasping these issues, since all they can do is reason from secondhand reports. Seeing AI’s achievement in one field, they lump everything together and assume that AI has superintelligent capabilities in many other fields. In reality, modern AI is only the optimization of data in a certain limited field, and can only be described as a powerful recognition engine. We have no idea how to even begin building an all-purpose, all-powerful artificial intelligence.”

In addition, Rodney Brooks also noted that Musk wants to manage AI through legislation. But since the general AI one would need to manage does not exist, how can laws be written to manage it?

Actually, during a panel discussion with three other A. M. Turing Award winners at the ACM Turing 50th Celebration Conference held in Shanghai in May of this year, we all shared similar opinions. We all agreed that humans today do not know how to create “superintelligence”, and we all rejected the idea that quantum computing can replace the human brain.

Of course, in addition to Musk, the supporters of the “AI doomsday” view include Stephen Hawking. But although Hawking is an outstanding physicist, that does not mean his opinions in the area of AI are equally authoritative.

The well-renowned AI researcher Michael Jordan put it plainly: anyone who has questions about physics can go to Professor Hawking for answers, but for questions about AI, there are other specialists better suited to help.

When Andrew Ng heard about the opinion of Hawking and Musk regarding the “Super AI”, he could not hold back from covering his face with his hands.

Andrew Ng said, “As someone who actually builds AI products, I cannot see any possibility of AI developing into some evil power in the future.”

Even when listening to ideas such as the “theory of the singularity”, Andrew Ng just rolls his eyes and says, “When I hear people talk about singularities, my eyes naturally roll back in disbelief.”

Another famous big-shot in the field of AI is Geoff Hinton. He has said plainly that he does not expect to see any concrete progress toward any kind of “super AI” within the next 50 years.

So, among all the people I know who have conducted significant computer and AI research, most support Zuckerberg’s opinion.

Actually, many Silicon Valley executives and A. M. Turing Award-winning scientists have for some time expressed, off stage, their disagreement with the theories of Musk and Hawking. I also expressed this “non-endorsement” in articles I wrote for the New York Times and Wired, although I did not directly denounce them.

I once made the following statement in a Wired article, “We are bombarded with dire predictions by a number of self-appointed futurists about “superintelligence,” “singularity,” “cyborgs,” and the unprovable claim that “we live in a video game.” These dystopian warnings are infectious, because they come from famous people—and perhaps because they are reinforced by the familiar plots of science fiction.”

Practically speaking, when it comes to Musk’s series of opinions and viewpoints, I hope everyone can distinguish between “science fiction” and “science”.

To put it more simply, in my opinion, all of those fantastical predictions have no foundation in practical engineering. The “fiction” element of science fiction is mainly fiction, not science. The real crisis we presently face is not whether something like this will occur in the future, but how to resolve the coming problem of job displacement.

During this debate, I think the way Zuckerberg directly refuted and criticized Musk shows that he really did believe Musk’s viewpoint was radical, especially since Musk, when speaking to high-ranking US politicians, spoke very emphatically about AI destroying the human race.

Prior to this, Michael Jordan made the following critical statement: “AI scientists must conduct their work with absolute honesty, whether dealing with their patrons, sponsors, employees, peers, colleagues, the public or themselves. When we deal honestly with AI, the far-fetched predictions about it (which influence public opinion and even political policy) made by people who are dishonest or are wishful thinkers do much to make the lives of practical and honest scientists like us difficult.”


Will Robots One Day Be Able to Do Every Kind of Job? I Disagree

I also do not agree with Musk’s view that “robots will one day be able to do every kind of job”. I think this is an overgeneralization. On the one hand, I agree that AI will be able to replace work that is simple and repetitive. On the other hand, we will create new kinds of jobs for humans which AI cannot do.

What kinds of jobs can AI not do? We will discover that AI cannot complete work that requires creativity or the ability to interact socially. At most 10% of jobs require creativity, while 90% of jobs are service-oriented and require social interaction. (For further information, see the 2013 Oxford study: http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf)

As a result, we must encourage more young people to pursue the road of creativity, whether through inventions in science and engineering or innovations in the literary arts. AI can only optimize what has already been created; only human talent can create.

In addition, there are numerous opportunities in the service industry, especially in work that requires relatively strong social interaction, the services that provide “loving care”. This includes existing occupations: enthusiastic tour guides, attentive etiquette staff, humorous bartenders, passionate hairdressers, and highly creative sushi chefs. We will also create new kinds of occupations: door-to-door services such as health-food chefs, live-in seasonal service providers, and kind caregivers for the elderly who can take your parents to doctor’s visits. There are also volunteer-oriented roles: assistants at blood banks, teachers at orphanages, instructors at summer camps, and organizers of interactive support groups for people with addictions.

We need to change society’s thinking and education in order to help people in the service industry achieve greater satisfaction. If service work helps other people in various ways, caring for them, resolving their problems and worries, and brightening their lives, will it not also meet the worker’s need for self-fulfillment? (After performing a few of these kinds of services, wouldn’t you feel happy and proud of yourself?)

Even if fewer than 90% of people end up doing some kind of service-oriented work, suppose only 50% do: the remainder can retire early, busy themselves with their hobbies, or pass the time surfing the internet. Only very few people will suffer from depression or other problems.

In this way, I think society will be able to become stable, don’t you?


Musk Should Listen More to AI Experts’ Opinions

Finally, speaking frankly, I don’t believe this debate will bring any immediate change to Musk’s opinions, since he seriously believes in “super AI”, so much so that even the author who shaped his ideas changing his mind has not swayed him. To all appearances, it does not look like he will change his position. Since Musk is surrounded by OpenAI scientists who research super AI, they will certainly reinforce each other in promoting the idea of doomsday. In the same way, since Zuckerberg is surrounded by Yann LeCun (director of the Facebook Artificial Intelligence Lab), who is directly involved in the relevant work, and by AI experts who create industrial value, he will certainly maintain an optimistic attitude toward AI.

Elon Musk’s foresight and strategic thinking have been admirable in the past, and his original intention in launching OpenAI, opening up AI as a way to keep it in check, is something I can accept. It is the overly pessimistic and apocalyptic arguments about AI that he continually shares which I seriously disagree with.

Relatively speaking, on the topic of “AI and the future of mankind” I think we should listen more to Zuckerberg’s perspective, because his team is actually using AI, solving problems and creating value. Yann LeCun, who works under him, leads a first-rate team of AI research talent. They know very well what present-day AI can and cannot do, and they can predict what it will be able to do in the future.

It is said that Musk’s idea of “super AI” was inspired by Nick Bostrom, director of the Future of Humanity Institute at Oxford University and author of the book Superintelligence. But recently, this man who influenced Musk’s thinking publicly “defected”, saying he now believes the tragic scenario of superintelligence he described in his book cannot really happen in real life.

Even earlier, someone told me that the reason Musk is so certain about the danger of super AI is that when DeepMind was searching for buyers, the company promoted the idea that super artificial intelligence is dangerous. It therefore required the buyer to set up an ethics committee to control DeepMind’s future development, so as to keep it from destroying the human race. As one of the potential buyers at the time, Musk was greatly impacted by this idea.

So, regarding the relationship between AI and humans, my opinion is that Musk should listen more to the people whose actual work involves AI. He could exchange ideas with leading AI experts and A. M. Turing Award winners, and hear why they do not support his ideas of “superintelligence” and “doomsday”.

Musk has certainly produced many great innovations over the years, and has succeeded at almost everything he has done. But none of this means he can speak on the topic of “superintelligence” without thinking it through more carefully.


This article originally appeared in Kai-Fu Lee’s blog and was translated by Pandaily.

