Nearly 60 percent of respondents to this week’s RBJ Daily Report Snap Poll believe artificial intelligence will eventually threaten humanity.
In June 2014, Stephen Hawking and three other leading scientists wrote an open letter that said success in creating artificial intelligence “would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
Concerns about runaway artificial intelligence are not new, but they have grown with the rapid development of smart machines and applications—from Apple’s Siri and self-driving cars to data mining and drones. What if a superintelligent machine could design even smarter machines that humans cannot control?
“With artificial intelligence, we are summoning the demon,” says Elon Musk, the founder of Tesla and SpaceX, who has pledged $10 million for research on reducing the threat from artificial intelligence.
Skeptics contend a half-century of research has failed to produce true artificial intelligence. Any threat would be in the distant future, they say.
Sixty percent of Snap Poll respondents say they are concerned, with 22 percent saying they are very concerned. This compares with 11 percent who are not at all concerned.
Increasingly smart and powerful computers have prompted other concerns as well. One is the risk of “superstupidity,” or system failure on a large scale—for example, a power grid shutdown or a stock market “flash crash” more severe than the one that occurred in 2010.
To 93 percent of RBJ Snap Poll respondents, this is a very real concern; more than half of all respondents (55 percent) say they are very concerned.
Another concern is autonomous weaponized machines—robots capable of killing while not controlled by human operators. (Hawking, Musk and others believe this is “feasible within years, not decades.”)
More than three-quarters of respondents agree with Hawking and Musk. Of the 78 percent who are concerned, more than a third say they are very concerned.
There is almost universal agreement among scientists that the potential benefits of artificial intelligence are enormous, but in Hawking’s words: “Our future is a race between the growing power of technology and the wisdom with which we use it.”
Christine Parker, assistant professor of biology at Finger Lakes Community College, will discuss the current state of genetic research and how close humanity has come to creating artificial intelligence April 1 at the college. Her talk is titled “Brain Science, Artificial Intelligence and Eugenics: Redefining What it Means to be Human.”
Nearly 400 readers participated in this week’s poll, conducted March 14 and 15.
In your view, will artificial intelligence eventually threaten humanity?
Looking ahead over the next 25 years, how concerned are you about each of these potential risks posed by increasingly smart and powerful computers?
Artificial intelligence that humans cannot control
Very concerned: 22%
Somewhat concerned: 38%
Not very concerned: 28%
Not at all concerned: 11%
Large-scale system failure caused by computer error
Very concerned: 55%
Somewhat concerned: 38%
Not very concerned: 5%
Not at all concerned: 2%
Autonomous weaponized machines
Very concerned: 36%
Somewhat concerned: 42%
Not very concerned: 19%
Not at all concerned: 4%
The problem has already started, with humans losing abilities to machines. Writing, spelling, social skills, morals: all of these seem to be slipping because of games and virtual reality. What happens when humans are interfacing with artificial intelligence? Humans are flawed; are their creations made in their own image? It is not hard to imagine a machine that can outthink a human. Imagine if a factory that makes self-aware machines begins to make its own modifications. Sounds exciting.
Artificial intelligence may in fact save humanity if it is based on logic and is not infused with emotion. One only has to look at the bloody toll that "real" intelligence has inflicted on humanity throughout history to come to such a conclusion. Wars over political or religious differences, gang violence over drugs or turf, etc. have killed millions! I am more fearful of "real" intelligence, thank you very much!
—George Thomas, Ogden
The government will find a way to regulate it, so we should have no worries, right?
Anything programmed to be smarter than a human is a threat to humanity. There are emotions and other traits in humans that cannot be replicated in a machine.
We need to work better at educating the American people. Do not let them rely on machines to make decisions.
Among the things I worry about in my life, this one is low on my totem pole. The current political climate where people seem more interested in destroying others rather than working together for the greater good (providing for fellow citizens of the U.S. and the world) is a much bigger and more immediate concern for me. Please vote for the candidates who exhibit the ability to see the best in others and wish to build rather than destroy.
—Nancy J. Peterson, ProNexus LLC
Intelligence without a conscience and sense of humanity is a danger. AI is that threat.
The danger is not in the powerful technology itself; we have faced annihilation from technology before. The real question is whether enough people will stay cognizant, independent and autonomous rather than blithely submit to it and let it control them. Hawking is right to be concerned about wisdom, which seems to be diminishing of late.
—D. Giambattista, Fairport
Flashbacks to “RoboCop” and “Knight Rider.”
The question is more about money than technology. Corrupt, well-funded exploitation of technology is a deep concern. We already have more than 90 percent of the world’s wealth controlled by a tiny fraction of the world’s population. Wealth buys choices, and if bad choices are made regarding the application of technology, then bad results are inevitable. This is a question much like gun control—if profit comes before safety, then we’ll have a repeat of the American murder debacle, but that doesn’t have to be the outcome. I suppose I’m optimistic about technology and autonomous weaponry and more worried about good old human bigotry and greed.
—Wayne Donner, Rush
It’s only a matter of time until the grid goes down in a big way, by either failure or hacking or other forms of terrorism. When it happens, it’ll get ugly real quick out there.
While many people fear AI, I believe that in the right applications, with checks and balances built in, it can help us. Imagine how much safer our roads will be with driverless cars that cut down on accidents and allow others, such as the disabled, the elderly, and even people coming home late from parties or bars, to get safely where they are going. I have the pleasure of working with some really smart data scientists at Xerox who are building intelligence into computers across industries from health care and transportation to retail. The possibilities are amazing.
—Laurie Riedman, Riedman Communications
Every generation since the beginning of time has brought serious concerns with progress. Remember the concerns after the nuclear bombing of Japan. Today, those cities are as robust, advanced and fast-growing as any in the world!
—J.A. DePaolis, Penfield
Yes, artificial intelligence is something to be at least a bit concerned about. But the unfortunate reality is that human intelligence (or lack of it) has been threatening humanity for quite some time and continues to do so.
—Steve Hooper, Health Economics Group Inc.
The more we use machines for jobs that people perform, the fewer jobs for people. More poverty and unemployment. Then where would the economy be? I want a teller at the bank, a cashier at the stores and my vehicle made by people. “Unhumanizing” society with machines.
I worry more about our ability to educate the next generation of workers than about machines taking over. Many of the improvements we shall see in the quality of products and services will come not from robots running the show but from advanced sensing devices and data analysis. In a manufacturing environment, for example, such technology will better identify quality issues on production lines or monitor supply networks. It will increase the demand for industrial engineers and people managing the supply chain.
Businesses will use these advances to evolve their offerings consistent with the concept of "shared value" advanced by Harvard's Michael Porter. For example, a manufacturer that sells industrial equipment like generators or turbines could offer its products as a service, installing and then remotely monitoring them to identify repair and maintenance issues before a breakdown occurs. Maintenance and upgrades would be the responsibility of the manufacturer rather than the customer.
The ability to interpret digital data and manage machinery to optimize availability and cost would be among the new skills required of technicians. Existing jobs such as supply chain coordinator, industrial engineer, robot coordinator, simulation expert, data analyst, accountant and salesperson will demand new skills to accomplish them, and the careers of the next generation will require those skills. A traditional 20th-century college education, particularly the now overabundant undergraduate business degree, will not serve us well. Where and how will these new skills be acquired? How will the workers of the future know what skills to learn?
—John Calia, Fairport
I’m not afraid of AI (artificial intelligence) as long as I know where the switches and the plugs are and we still have actors like Tom Cruise, Tommy Lee Jones and Will Smith.
—Clifford Jacobson M.D., Vanguard Psychiatric Services PC
3/18/2016 (c) 2016 Rochester Business Journal. To obtain permission to reprint this article, call 585-546-8303 or email firstname.lastname@example.org.