Why Companies Are Using Artificial Intelligence for the Wrong Things

Swiss Futurist Gerd Leonhard—leading keynote speaker, author and CEO of The Futures Agency in Zürich—tells In The Future that executives need to understand the obvious, but often overlooked, challenges of AI.

This conversation has been edited and condensed for clarity and brevity.

Companies are scrambling to get a foothold in Artificial Intelligence—a growing industry worth billions that for many executives is equal parts promising, worrisome and confusing. Leading Futurist Gerd Leonhard spoke to In The Future about what executives should embrace, and where they should apply caution, when it comes to using AI.

Leonhard has published a well-received book on the future, “Technology vs. Humanity,” and his major clients include banks, technology companies and governments from around the globe. He believes in what he’s dubbed “realopia”: future-planning from a place that’s firmly between today’s reality and mere utopia. It’s the careful art of being proactive while also remaining cautious.

This approach has also shaped his view of AI: although we should move toward the future with openness and hope, we need to be adequately prepared for the consequences and externalities that exponential technological progress can bring.

What do executives need to know about using AI?

It’s very important to realize that technology is a big deal for becoming more efficient and increasing margins. On the other hand, if you just do everything technology allows you to do, you become a commodity, because everyone will use technology to become more efficient. For example, many large, global companies want to get rid of as many people as possible and are looking to use intelligent software to replace them. That is a very short-sighted undertaking, because in their current state these applications are nothing like humans, and the value of a company usually derives from its human aspect rather than from the technology it uses. In many industries, such as banking, legal or consulting, it’s not that easy to really replace people and just let machines do it—it feels soulless and devoid of humanity to the customer. More often than not, it is the combination of man and machine, HI and AI, that creates the biggest value.

Then where can humanity and technology work better in tandem?

I think anything that doesn’t require human-level understanding or human-to-human relationships will be done by smart machines. Driving a car is a great example. Ultimately we can let go of driving—it’s not a human attribute, and it was a temporary thing: it came and it went, and we’re not going to be worse off because we no longer drive ourselves. Other things—complicated medical diagnoses beyond just radiology, for example, or predictions about your children based on DNA findings—we shouldn’t automate entirely. They have human components that machines don’t understand.

What’s the litmus test for making that judgment?

I think when something is less based on human attributes—like many call-center routines—we can automate it, because there’s little need for empathy or compassion when rebooking a flight or something like that. We always have to ask whether something supports human flourishing, or whether we’re automating something we shouldn’t. In my opinion, we shouldn’t automate voting, legal rights, the justice system or important medical decisions.

With that in mind, is AI even a useful tool for us?

First of all, we have to distinguish between AI and what’s called IA, or Intelligence Augmentation: using smart technologies to augment and assist humans. I think what we see today is really 98% IA—basically just fancy software. It’s software that can learn and can think in a machine-like way, and that can be totally unlike human intelligence, which involves things like social, emotional and kinaesthetic intelligence. After all, intelligence is not simply created by having more computing power or processing speed. That would be too easy.

And how do you see that playing out?

Over the next five years, I think the low-hanging fruit will be using machines for narrow but high-volume data processing—looking at a trillion data streams is way beyond humans. There’s also the augmentation, making things more efficient—for example, analytics and prediction. It’s making things smart. You take old businesses like shipping and logistics and put them into the “smart converter,” as I sometimes call it. Out come better results, better planning, less environmental strain and more efficient business—but it doesn’t do much more beyond that.

Do you see any big warning signs?

One example is human resource analytics, where a company uses machines to read people and see what they’re up to. The machine says, well, this person is not going to be very useful. It’s what I call the TripAdvisor problem—it can be very helpful and a good data source, but at least half of the time it’s wrong.

Where do companies draw the line?

If you trust data too much, that’s what I call “machine thinking”—you start thinking like a machine. But humans don’t operate on a 0-1 level; for us there are a lot of other options between zero and one. We like to say, “Well, it depends.” We change our minds. We have imagination. I tell companies to automate the routines, because routines are often just friction. If you can automate those, I think that’s a good move—but not if it forces humans to oversimplify things that are more complex by nature.

Do we need to be worried about AI right now?

I don’t think true AI, in the sense of machines that can “think” independently or act even remotely like humans, is really working yet. Right now, most machines are still too slow—quantum computing will change that—and we don’t have enough good data to feed them. There are many shortcomings. What’s currently being sold as AI is really just software on steroids. But in five to 10 years that will change, and that’s when it can be potentially dangerous for us, when we might not be able to control it.

So is AI that one life-changing disruption it’s been built up to be?

AI is that one thing. Making machines smart has the biggest impact, and it has vast potential. But anything like this has huge implications for security, safety and social contracts. Right now, what I call “mission control for humanity” is Silicon Valley and maybe China. That’s where they decide what happens. It’s difficult for everyone else, because we may not agree with the scope of data protection they choose. The power of technology is exponential—and in 10 years it will be infinite. By the time that comes around, we need to have clear rules about what we want; otherwise technology will become self-contained. When technology becomes that capable, it’s essentially a black box: we can no longer control it and we can no longer understand it, and we have to get ready for that time.

What’s the best advice you can give executives at this point?

The biggest gains are in IA: using technology to be faster and more efficient, and to come up with new business ideas. Always keep the human in the loop—it’s essential right now. We should embrace technology, but not become technology.