The Risks & Promise of AI, "60 Minutes" & ELO Forums Focus

Geoffrey Hinton, an AI pioneer, issues a measured warning: “It may be we look back and see this [time] as a kind of turning point when humanity had to make the decision about whether to develop these things [AI] further and what to do to protect themselves if they did. I think my main message is there's enormous uncertainty about what's going to happen next. These things [AI-based machines] do understand. And because they understand, we need to think hard about what's going to happen next. And we just don't know.”

AI was in the news again, this time on the widely watched CBS program 60 Minutes. A segment that aired on October 8 was titled "Godfather of Artificial Intelligence" and featured Geoffrey Hinton discussing the promise and risks of advanced AI with correspondent Scott Pelley.

Geoffrey Hinton has been called "the Godfather of AI." He is a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible. Hinton believes that AI will do enormous good, but he warns that AI systems may be more intelligent than people know and there's a chance the machines could take over. 

Artificial Intelligence (AI) is a game-changer. The frenzy around AI is akin to the excitement that greeted the advent of the internet, and its rise will change everyone’s lives. Yet people are increasingly uncertain about where the technology is headed. Who will be the winners and losers in this tech shake-up?

This raises many questions. How do we understand AI? What ethical issues does it pose, and what values should guide key decisions? What are the perils? People may lose their jobs. And what is the upside? The rise of AI has created a gold rush of commercial opportunities. The objective of the AI focus at the ELO Forums is to help leaders understand AI better. The ELO Forums in Winnipeg, Vancouver, and Toronto will each have a session focused on AI.

The first session participant is Prof. John Lennox, Professor Emeritus, University of Oxford, one of the leading evangelical Christian apologists of our day, who will speak on “AI, Man, and God." The second is Dr. Andy Steiger, Founder and President, Apologetics Canada (in Winnipeg and Toronto; in Vancouver, it will be Steve Kim, also with Apologetics Canada), who will speak on “Artificial Intelligence: What Are the Facts and Fiction?" The third is John Carbrey, an experienced high-tech entrepreneur and founder of FutureSight Ventures, who will speak on “Artificial Intelligence: Is There a Gold Rush of Commercial Opportunities?”

Here are some extracts from the interview of Geoffrey Hinton (GH) by Scott Pelley (SP):

SP: Does humanity know what it's doing?

GH: No. I think we're moving into a period when, for the first time ever, we may have things more intelligent than us. 

SP: You believe they can understand?

GH: Yes.

SP: You believe they are intelligent?

GH: Yes.

SP: You believe these systems have experiences of their own and can make decisions based on those experiences?

GH: In the same sense as people do, yes.

SP: Are they conscious?

GH: I think they probably don't have much self-awareness at present. So, in that sense, I don't think they're conscious.

SP: Will they have self-awareness, consciousness?

GH: Oh, yes.

SP: Yes?

GH: Oh, yes. I think they will, in time. 

SP: And so human beings will be the second most intelligent beings on the planet?

GH: Yeah. That's a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that's something we need to seriously worry about.

SP: What do you say to someone who might argue, "If the systems become malevolent, just turn them off?"

GH: They will be able to manipulate people, right? And these will be very good at convincing people because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances, they'll know all that stuff. They'll know how to do it.

GH: An obvious area where there's huge benefits is health care. AI is already comparable with radiologists at understanding what's going on in medical images. It's going to be very good at designing drugs. It already is designing drugs. So that's an area where it's almost entirely going to do good. I like that area.

SP: The risks are what?

GH: Well, the risks are having a whole class of people who are unemployed and not valued much because what they-- what they used to do is now done by machines.

[Narration] Other immediate risks Hinton worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots.

SP: What is a path forward that ensures safety?

GH: I don't know. I can't see a path that guarantees safety. We're entering a period of great uncertainty where we're dealing with things we've never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can't afford to get it wrong with these things. 

SP: Can't afford to get it wrong, why?

GH: Well, because they might take over.

SP: Take over from humanity?

GH: Yes. That's a possibility.

SP: Why would they want to?

GH: I'm not saying it will happen. If we could stop them ever wanting to, that would be great. But it's not clear we can stop them ever wanting to.


To learn more about Artificial Intelligence and its impact on humanity, join us at the ELO Forums this year in Winnipeg, Vancouver, and Toronto: buy your tickets.