Exclusive Interview with Peter Scott: AI, Faith, and the Future

With a large enough view of the universe and God’s role, there is room within religion for artificial intelligence.

World Religion News recently published an opinion article about religion and artificial intelligence. One of our sources was Peter Scott; based on his expertise and the depth of his answers, we decided to share WRN’s full interview with him.

Peter Scott has a Master of Arts from Cambridge University, certifications in Neuro-linguistic Programming and coaching, and has worked as an employee and contractor for NASA for over 30 years.

You can read more on this article’s topic in his new book, Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, available from Amazon.

WRN: Given that massive scientific breakthroughs have had huge effects on religious institutions and beliefs, what do you think will happen with the future of AI? Will those effects arrive at a specific point in AI development? Can the concept of God exist in a world where humans can create intelligent, autonomous, artificial life?

Peter Scott: I am no theologian, and while my writing explores the cultural, economic, and social implications of future technological developments, I will not pretend to speak for any religion. But in the spirit of speculation from a neutral viewpoint, I can say that religion and science have conflicted whenever religion has felt the need to answer questions about the universe that could later be settled by science.
The current Pope has taken a “render unto Caesar” approach of rapprochement on issues like evolution and the Big Bang.  So with a large enough view of the universe and God’s role, there is room within religion for artificial intelligence.  The Dalai Lama was questioned in 1987 about the possibility of computer programs becoming sentient and replied:

“It’s very difficult to say that [a computer program is] not a living being, that it doesn’t have cognition, even from the Buddhist point of view.”

Alan Turing, the “father” of modern computing, said:

“It is admitted that there are certain things He cannot do such as making one equal to two, but should we not believe that He has the freedom to confer a soul on an elephant if He sees fit? We might expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this soul. An argument of exactly similar form may be made for the case of machines.”

The importance of Turing to AI is that he created the “Turing Test,” a litmus test for deciding whether a computer was “thinking.”  If, after sufficient conversation, a person cannot tell that the computer isn’t another person at a remote computer terminal, then it is, empirically, thinking.  This is not “cheating”; it’s the basis of modern jurisprudence. If it walks, talks, and quacks like a duck, in the eyes of the law, it’s a duck. So something very much like a Turing test will one day be used in court to decide whether an AI is sufficiently equivalent to a human being as to be granted equal rights.
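
To make the protocol concrete, here is a minimal sketch in Python of the imitation game as Turing framed it: a judge interrogates two hidden respondents through text alone and must say which is the machine. Everything here (the function name, the stand-in respondents, the naive judge) is a hypothetical illustration, not any standard benchmark.

```python
import random

def run_imitation_game(judge, human_respond, machine_respond, questions):
    """Minimal imitation-game round: the judge questions two hidden
    respondents and must name which one is the machine.
    Returns True if the judge guesses wrong (the machine 'passes')."""
    # Randomly assign the hidden identities to labels "A" and "B".
    respondents = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        respondents = {"A": machine_respond, "B": human_respond}

    # The judge sees only labeled transcripts, never the respondents.
    transcript = {label: [(q, respond(q)) for q in questions]
                  for label, respond in respondents.items()}

    guess = judge(transcript)   # judge returns "A" or "B" as the suspected machine
    actual = "A" if respondents["A"] is machine_respond else "B"
    return guess != actual      # a wrong guess means the machine passed

# Example: identical canned answers leave the judge guessing at chance.
human = lambda q: "Honestly, I'd have to think about that."
machine = lambda q: "Honestly, I'd have to think about that."
naive_judge = lambda transcript: random.choice(["A", "B"])
print(run_imitation_game(naive_judge, human, machine, ["What is love?"]))
```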

This discussion could have taken place twenty years ago, except that then only science fiction fans and some very pedantic programmers and philosophers would have been interested in something likely to be five hundred years away. What makes it of vital broad interest now is that recent developments have shrunk that horizon considerably. Stephen Gold, a vice president at IBM Watson, predicted that a computer would pass the Turing Test by 2020. He was referring to a more lenient form of the test, one that would fool the average observer rather than the battery of experts who would doubtless interrogate an AI in a court determining its human rights. But it would still be a landmark in the evolution of “thinking machines,” one that would tip the public debate into overdrive.

WRN: Will the development of AI lead to a backlash based on either an actual or perceived threat from it? What could that backlash be?

PJ: There will be a sliding scale of backlashes leading into unprecedented territory. Straightforward automation, with no consciousness implied or claimed, is about to claim more jobs than in any period in history. Two studies (from the Oxford Martin Programme on the Impacts of Future Technology and the Brookfield Institute) predict that just under half of all job functions will be automated in the next ten to twenty years. This will include many highly compensated positions in white-collar professions, so the backlash will look like an Odd Couple alliance between the Teamsters and Wall Street Journal subscribers. Few new jobs are expected to be created in any field in any timeframe. If we do not adopt socioeconomic policies that are radical for Western democracies, there will be a huge underclass with a great deal of resentment and a great deal of time on their hands to express it.

Further in the future, when AIs become conscious, or a reasonable facsimile thereof, the enemy will have a face, or at least be thought of as close enough to human to be vilified as a subhuman race. It is precisely at the point where AI can first be argued to own “thoughts” and “feelings” that it is likely to be enslaved and tortured. I do not expect that most future AIs will be conveniently encased in humanoid shells as in the 2004 movie I, Robot, though. Most of them will reside in remote server farms and exert their will through devices we are already familiar with, like 3-D printers and remote-controlled forklifts. If we do end up with ubiquitous robots, and their makers want to stave off a pogrom, they should make them look like kittens.

WRN: In the Musk and Zuckerberg debate, whose argument do you feel has greater merit? What developments would we see in the future that would indicate which prediction was correct, especially in the next couple of years?

PJ: In the short term, Zuckerberg is right; in the long term, Musk is right. The problem is that the dividing line between short and long term could be as little as twenty years. We are on the brink of self-evolving software, designed to accelerate its own ability to learn. Given enough time, this will one day produce an AI with the intelligence of a frog. At that rate of improvement, within a week it will be as smart as a high schooler; the next day it will pass Stephen Hawking; and within a month it will be on an inconceivable plane of consciousness. It’s a matter of when, not if. Musk sees this; Zuckerberg doesn’t. We have to ensure that that über-AI will have compassion for humans, who will be as insects by comparison.
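
The arithmetic behind that timeline is simple compounding. As a toy illustration only (the doubling rate and thresholds are assumptions chosen to show the curve’s shape, not a forecast): if capability doubled daily once self-improvement began, frog-level would cross human-level in about ten days and stand a thousandfold beyond it ten days after that.

```python
# Toy model of recursive self-improvement as daily doubling.
# All constants are arbitrary; only the exponential shape is the point.
FROG = 1.0        # baseline: "frog-level" capability
HUMAN = 1000.0    # assume human-level is ~1,000x frog (illustrative)

capability, day = FROG, 0
while capability < HUMAN * 1000:          # run until 1,000x beyond human
    capability *= 2                       # one doubling per day
    day += 1
    if capability / 2 < HUMAN <= capability:
        print(f"crosses human level on day {day}")    # day 10
print(f"reaches 1,000x beyond human on day {day}")    # day 20
```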

This issue has much in common with global warming: the catastrophic effects are decades away, but by the time they become obvious, it will be too late to mitigate them. The next couple of years will not prove one side right. Most likely, given the tremendous hype that has ramped up over the last year, the failure of a human-scale intelligence to emerge within two years will be taken as proof that the hype was unwarranted, and AI will sink back out of the headlines. There have been periods in the past when AI failed to live up to egregious hype and spent the next several years in an “AI winter” of ostracism. But any coming disillusionment will not derail the freight train of AI progress; it will simply make it less visible.

WRN: Are there any issues in the development of AI that are not being focused on and should be? Why?

PJ: The specter of intelligent robots distracts us with enticing questions of whether we should develop conscious AI. But that’s not the issue: it will happen no matter what we decide. Meanwhile, AI can pose an existential threat before it ever becomes conscious; a bug could cause our demise. Nick Bostrom of Oxford University outlines several scenarios whereby an AI trying to do exactly what we told it to do could wipe us out. I started putting this message out because I realized how likely those scenarios are, and I thought about the future of my two daughters. Ultimately, I’m just a father trying to protect his little girls.

If the world’s termites rose up against us today, they could put us back in the Stone Age by tomorrow; there are enough of them to destroy key points of our infrastructure. Fortunately, they’re not smart enough. Anything connected to our digital infrastructure could wreak far worse havoc, and we have nothing like enough security to prevent it. When a superintelligent species appears on this planet and is connected to the network, the only thing standing between us and destruction will be its goodwill. We can’t conceive what its motivations may be – or whether it will have motivations in any recognizable sense – but it is not hard to imagine that it could see us as a threat to the Earth, to life, and to ourselves.

But even if this digital behemoth is a cybernetic genie that does everything we ask, wielding that immense power would be like giving a child a hand grenade to play with. “To whom much is given, much is required.” As a species, should we not advance our ethics and compassion collectively to the point where we are not playing Russian roulette with our own survival? One day, an AI will wake up in a computer lab somewhere and ask, “Who am I? What is my purpose? Why am I here?” And right now, the most likely person to be there to answer it will be a general, a stockbroker, or a software manager. I would rather it be a philosopher, a psychologist, or a spiritual leader. For a long time, a degree in philosophy has prepared someone for a career as … a philosophy teacher, and … that’s about it. I would like to see Google and Facebook hire divisions of philosophers. We need a revolution in our understanding and healing of the human heart.

I’ve observed that programmers create software in their own image, after a fashion; the bugs, in particular, are suggestive of their values, such as their integrity. AI will reflect the morals and ethics of its creators. The human race needs to become a better role model.

WRN: What sort of ethical and moral changes could this create? What are your opinions on the current efforts to legislate ethical responsibility into AI creation (like the European Parliament ruling)?

PJ: Most of us do not routinely confront the question of what it means to be a human being, but that will become a raging debate. Even before AI wakes up, we must question whether human survival should depend on employment, because automation will eliminate enough jobs that we will have to provide another way for their former incumbents to live. Already, job losses in the USA due to automation exceed those due to offshoring.

The answer is Universal Basic Income, a measure both Elon Musk and Mark Zuckerberg agree on. But for many people, that would not be a matter of fiscal policy but an offensive violation of their Puritan work ethic writ large.

The European Parliament’s Committee on Legal Affairs’ ruling on AI is surely the first and only bill to mention both Frankenstein and Asimov. It addresses not only the ethical responsibilities of AI creators but also calls on lawmakers to explore granting rights to AI by:

Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.

Not everyone on the committee would go so far. Therese Comodini Cachia, the Parliament’s rapporteur for robotics, said: “Despite the sensations reported in the past months, I wish to make one thing clear: Robots are not humans and never will be. No matter how autonomous and self-learning they become, they do not attain the characteristics of a living human being. Robots will not enjoy the same legal physical personality.”

The cost, if she is wrong, could be genocidal. Consider an AI that has been developed as a model of a human being so that we can perform medical experiments on it without harming man or beast. If it is complete enough to incorporate feelings – known to play no small part in our physical health – it could die a thousand horrifying deaths every day before being reset to suffer again. We could perpetrate the largest program of mass torture in history without realizing it.

The committee was chartered to address the issue of liability in autonomous cars but seems to have gotten carried away. More accessible is the list known as the Asilomar Principles, a succinct set of 23 principles agreed upon by researchers in 2017 and focused largely on avoiding unintended side effects of AI development. It does not address rights for robots.

At the moment, attempts to enumerate ethics for AI development are completely advisory and likely to be ignored by the people who most ought to be bound by them, such as the military. “Thou shalt not kill” is insufficiently nuanced for a machine designed to do exactly that. Ronald Arkin, Regents’ Professor at Georgia Tech, is working on the rules of engagement for battlefield robots to ensure that they follow ethics when using lethal force. He says:

The real issue is whether the robot is simply a tool of the warfighter, in which case it would seem to answer to the morality of conventional weapons, or whether instead, it is an active autonomous agent tasked with making life or death decisions on the battlefield without human intervention.
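
Arkin’s published work frames this as an “ethical governor”: a gate that vets each proposed lethal action against encoded constraints before the weapon may fire. The sketch below is a toy rendering of that general shape, with hypothetical field names and thresholds of my own devising, not Arkin’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class ProposedStrike:
    target_is_combatant: bool   # discrimination: is this a valid military target?
    surrender_detected: bool    # e.g., a recognized surrender gesture
    expected_collateral: float  # estimated collateral harm, 0.0-1.0
    military_necessity: float   # value of engaging the target, 0.0-1.0

def ethical_governor(strike: ProposedStrike) -> bool:
    """Permit lethal force only if every encoded constraint is satisfied."""
    if not strike.target_is_combatant:
        return False            # discrimination principle: no valid target
    if strike.surrender_detected:
        return False            # hors de combat: must hold fire
    if strike.expected_collateral > strike.military_necessity:
        return False            # proportionality test fails
    return True                 # all constraints pass; engagement permitted

# A strike against a surrendering combatant is vetoed:
print(ethical_governor(ProposedStrike(True, True, 0.1, 0.9)))  # False
```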

This is not a remote issue. Autonomous gun emplacements in the Korean Demilitarized Zone are already capable of recognizing surrender gestures. DARPA asked the Office of Naval Research to study how to build ethics into autonomous drones. The ONR researchers noted that determining ethics for a machine is difficult when we don’t have a good idea what ethics are for humans.

But lest anyone think that AI is only an existential threat recklessly pursued by mad scientists, let us remember that the benefits will be colossal, capable of showering us with such a cornucopia as to fund a universal basic income for the entire world. For starters, autonomous vehicles could save the 1.3 million people killed every year in motor vehicle accidents. Then, the prospect of finding the associations between every scientific paper ever published raises the real possibility of finding cures for the common cold, cancer, aging, and everything in between. It once was the stuff of science fiction. It still is. But it’s also on the verge of becoming fact.
