by Loren G. Edelstein | January 10, 2018
Machines are learning far faster than humans can, so where does that leave us? That was one of the thought-provoking questions raised by Nicholas Thompson, editor-in-chief of Wired magazine, during his session at the Professional Convention Management Association's Convening Leaders conference, "The Optimistic Technologist: Keeping the Digital Revolution Human-Centric."
 
Rapid technological advances will bring about dramatic changes in the workforce, he noted, but we should not fear progress. For example, self-driving cars will eliminate the need for traditional drivers, resulting in sweeping job losses. Yet people who have been trained to drive trucks could be retrained to guide self-driving vehicles through their deliveries remotely, using virtual-reality headsets.
 
"New jobs will be created," said Thompson. "The need for trucking and shipments will increase. Maybe that will create more and better jobs." He added that an even more optimistic view emerges in the big picture: "Look at the history of humanity. Technology always makes things better. The story of humans is one of unadulterated progress through technology."
 
In one far-reaching scenario, Thompson noted, author and futurist Ray Kurzweil predicts humans will achieve immortality by 2045. "We will be able to upload our consciousness into the cloud and share it with other people," he said.
 
"How should we think about super intelligence?" Thompson asked. "Isaac Asimov has said, 'Science gathers knowledge faster than society gathers wisdom.' In other words, humans get slightly smarter as time goes on, but technology gets thousands of times smarter. We are about to create things that are much smarter than humans. And that is complicated."
 
Thompson warned, "The risk of superintelligence is that you have a machine that has infinite intelligence, but you haven't programmed in morals. There are people who think the thing we really have to worry about is market manipulation by algorithms we don't understand. Ultimately, the risk is that we are going to build machines we don't really understand, and they're going to do all kinds of crazy things."