IT'S HARD TO think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerges, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns. The person he wanted to talk to most about them? Entrepreneur and MIT Media Lab director Joi Ito. So I sat down with them in the White House to sort through the hope, the hype, and the fear around AI. That and maybe just one quick question about Star Trek. —Scott Dadich


Scott Dadich: Thank you both for being here. How's your day been so far, Mr. President?

Barack Obama: Busy. Productive. You know, a couple of international crises here and there.

Dadich: I want to center our conversation on artificial intelligence, which has gone from science fiction to a reality that's changing our lives. When was the moment you knew that the age of real AI was upon us?

November 2016.

Obama: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don't notice; and part of the reason is because the way we think about AI is colored by popular culture. There's a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we're not all that useful, and then either they're drugging us to keep us fat and happy or we're in the Matrix. My impression, based on talking to my top science advisers, is that we're still a reasonably long way away from that. It's worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We've been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we're gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.

Joi Ito: This may upset some of my students at MIT, but one of my concerns is that it's been a predominantly male gang of kids, mostly white, who are building the core computer science around AI, and they're more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn't have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Obama: Right.

Ito: But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence1. Because the question is, how do we build societal values into AI?


Extended intelligence is using machine learning to extend the abilities of human intelligence.

Obama: When we had lunch a while back, Joi used the example of self-driving cars. The technology is essentially here. We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we're going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It's a moral decision, and who's setting up those rules?


The car trolley problem is a 2016 MIT Media Lab study in which respondents weighed certain lose-lose situations facing a driverless car. E.g., is it better for five passengers to die so that five pedestrians can live, or is it better for the passengers to live while the pedestrians die?

Ito: When we did the car trolley problem2, we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car. [Laughs.]

Dadich: As we start to get into these ethical questions, what is the role of government?

Obama: The way I've been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there's a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad base of values. Otherwise, we may find that it's disadvantaging certain people or certain groups.



Temple Grandin is a professor at Colorado State University who is autistic and often speaks on the subject.

Ito: I don't know if you've heard of the neurodiversity movement, but Temple Grandin3 talks about this a lot. She says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today.

Obama: They might be on the spectrum.

Ito: Right, on the spectrum. And if we were able to eliminate autism and make everyone neuro-normal, I bet a whole slew of MIT kids would not be the way they are. One of the problems, whether we're talking about autism or just diversity broadly, is when we allow the market to decide. Even though you probably wouldn't want Einstein as your kid, saying "OK, I just want a normal kid" is not gonna lead to maximum societal benefit.

Obama: That goes to the larger issue that we wrestle with all the time around AI. Part of what makes us human are the kinks. They're the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it's static. And part of what makes us who we are, and part of what makes us alive, is that we're dynamic and we're surprised. One of the challenges that we'll have to think about is, where and when is it appropriate for us to have things work exactly the way they're supposed to, without surprises?