24-02-2021 09:00

Better AI Requires Human Intelligence

At the recent Stockholm Fintech Week, the session on AI in Insurtech focused heavily on how not to work with artificial intelligence. It seems there are so many misconceptions, empty buzzwords and broken promises that the field’s credibility must first be defended before it can be explored.
[Image: human carrying robot]

“AI can be fantastic, but it’s not for everyone, and it’s not for every use case,” said Henrik Allert, Vice President at Itello, an Insurtech (short for insurance technology) that provides digital solutions for the pension and life insurance industry.

The panel of speakers, made up of representatives from banking, consulting and academia as well as Insurtech startups, were quick to point out that the presence of AI is a real marker of maturity within a company. As data grows richer and companies become more data-centric, AI becomes more feasible and more value-adding.

Still, Henrik Allert reminds us, “Many people in the industry are using the term AI very freely.”

AI can be fantastic, but it’s not for everyone, and it’s not for every use case

Henrik Allert, Vice President at Itello

“Fake AI” vs the human brain

Part of the problem is definitional. AI is not just traditional computing on steroids. Real AI (as opposed to the automated processes that are often, and incorrectly, given the title) is about approximating how humans think, learn and communicate.

Humans possess “natural intelligence”, meaning the ability to learn, adapt and actually get smarter. The human brain is stunningly complex, processing information both nonlinearly and in parallel. For certain tasks, such as recognising a face or making sense of an unfamiliar situation, this lets humans store and retrieve information and reach decisions far more efficiently than standard computing.

When we talk about AI in a commercial context, we’re talking about a system that attempts to mirror this “natural intelligence”. Machine learning, for example, uses algorithms such as neural networks to solve problems without being explicitly programmed for them. True AI gets smarter over time, while traditional computing (even highly advanced examples, like robotics) simply performs what it is programmed to do.
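To make the distinction concrete, here is a minimal sketch in Python (using scikit-learn and made-up policy-renewal data; the features and the threshold are illustrative assumptions, not anyone’s real model) contrasting an explicitly programmed rule with a model that learns its rule from examples:

```python
# A hand-written rule vs a learned model, on toy insurance data.
from sklearn.linear_model import LogisticRegression

# Hypothetical data: [age, years as customer] -> renewed policy (1) or not (0)
X = [[25, 1], [34, 3], [47, 10], [52, 15], [29, 2], [61, 20]]
y = [0, 0, 1, 1, 0, 1]

def traditional_rule(age, tenure):
    # Explicitly programmed: behaviour never changes unless a human edits it.
    return 1 if tenure >= 5 else 0

model = LogisticRegression().fit(X, y)   # infers its own rule from the examples

print(traditional_rule(40, 4))           # always the same fixed answer
print(model.predict([[40, 4]])[0])       # an answer derived from what was learned
```

The hand-written rule stays fixed until a human rewrites it; retraining the model on fresh data changes its behaviour with no change to the code, which is the sense in which it “gets smarter over time”.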

AI is only as good as the people behind it

The first step in developing an AI solution is choosing a problem to solve. Still, AI is not a panacea, and the biggest competitive advantage a company can have often has nothing to do with its technical capabilities.

“I think it is an interesting situation now where everybody has more or less access to the same technology,” said Mattias Fras, Head of AI Strategy & Innovation at Nordea. “But how do you leverage that technology? And how do you scale your people to make the most out of it? We are competing on the rate of learning now more than anything else.”

Learning in this case is not just limited to developing new and better use cases for AI applications. Learning also comes in the form of asking the right questions.

“One competitive advantage of an incumbent company will be who is better at picking the right problems to solve,” said Mattias. “And that is more about the people, the culture and the problem-solving skills than it is about the technology.”

In this light, understanding the technical workings of AI is secondary to creative thinking, something that remains firmly in the realm of natural, human intelligence.

One competitive advantage of an incumbent company will be who is better at picking the right problems to solve. And that is more about the people, the culture and the problem-solving skills than it is about the technology.

Mattias Fras, Head of AI Strategy and Innovation at Nordea

How AI forces us to confront the ethics of our work

In late 2019, Apple’s foray into AI made headlines for all the wrong reasons. The newly launched Apple Card, which used an algorithm to set applicants’ credit limits, was accused of gender bias when it offered significantly higher limits and more favourable interest rates to men, even when they had worse credit histories than women applicants.

Adding fuel to the controversy, when this discrepancy was pointed out — by Apple co-founder Steve Wozniak, among others — no one at Apple or Goldman Sachs, the issuing financial institution, could explain why. The algorithm, it seems, was too complicated, too opaque, for the humans working there to understand.

Apple and Goldman Sachs both claimed the algorithm couldn’t be biased because it didn’t officially factor in gender. But a model doesn’t need to see a protected attribute to discriminate by it: other inputs that correlate with gender can act as proxies and carry the bias in through the back door. The effect, in any case, was clear, and consumers’ faith in the application process was understandably shaken.
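To see how this can happen, consider a hypothetical, self-contained sketch on synthetic data (emphatically not a description of Apple’s or Goldman Sachs’s actual system). The model is never given gender, yet its decisions skew by gender because one of its inputs correlates with it:

```python
# Proxy bias on synthetic data: the protected attribute is excluded from the
# model's inputs, but a correlated feature smuggles it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)             # 0 = women, 1 = men (never given to the model)
proxy = gender + rng.normal(0, 0.5, n)     # e.g. a spending pattern correlated with gender
credit = rng.normal(650, 50, n)            # credit score, independent of gender here

# Historical decisions that (unfairly) rewarded the proxy as well as creditworthiness
approved = ((credit - 650) / 50 + proxy + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([credit, proxy])       # note: gender itself is not a feature
model = LogisticRegression(max_iter=1000).fit(X, approved)

decisions = model.predict(X)
print("approval rate, women:", decisions[gender == 0].mean())
print("approval rate, men:  ", decisions[gender == 1].mean())
```

Run on this synthetic data, the two approval rates diverge sharply even though gender was never “officially” a factor, which is exactly why “we didn’t use gender” is not, on its own, a defence.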

“It’s all about trust,” said Mattias. “It’s about really knowing what you’re doing. Really understanding how these underlying systems work in the context of specific applications such as KYC or credit scores. Are you able to explain well enough why that output was the case?”

It’s all about trust. It’s about really knowing what you’re doing. Really understanding how these underlying systems work in the context of specific applications such as KYC or credit scores. Are you able to explain well enough why that output was the case?

Mattias Fras, Head of AI Strategy and Innovation at Nordea

The European Commission, in its Ethics Guidelines for Trustworthy Artificial Intelligence, lists transparency and accountability as critical features of ethical AI. Systems must be auditable as well as free of “unfair bias.”
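The Guidelines do not prescribe a particular test, but one simple and widely used audit check, sketched below in plain Python, is the disparate-impact ratio: compare positive-decision rates between a protected group and a reference group (a common rule of thumb treats ratios below roughly 0.8 as a red flag). The decisions and group labels here are made up for illustration:

```python
def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of positive-decision rates: protected group over reference group.
    Assumes both groups are present and the reference rate is nonzero."""
    def rate(g):
        member_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(member_outcomes) / len(member_outcomes)
    return rate(protected) / rate(reference)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]           # 1 = approved, 0 = declined
groups    = ["w", "w", "m", "m", "w", "m", "w", "m"]
print(disparate_impact(decisions, groups, protected="w", reference="m"))  # 0.33...
```

A check like this is only a starting point: auditability in the Guidelines’ sense also means being able to trace and explain individual decisions, not just aggregate rates.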

“You need to have some control mechanisms to make sure you don’t violate the ethical standards of your company as well as compliance and business requirements,” Mattias continued. “And just because we can do it, should we do it, given our core values and ethical standards? I think there will be more and more focus on that and more people working on those kinds of problems.”

The paradox of AI

Here is the central paradox of AI: tasks that human beings find very difficult and time-consuming become almost effortless for AI-driven processes. Think of a hospital that uses a machine learning algorithm to analyse tissue samples to help make more accurate diagnoses. On the other hand, tasks that humans find intuitive or even instinctive remain highly problematic for machines. Think of the ability to speculate about the future.

Within Insurtech, and financial technology as a whole, success with AI requires more than just technical competence. It demands creativity and critical scrutiny of the data sets we use, thoughtfulness about the intended and unintended consequences of our creations and, above all, a human touch.

“I think the future for the insurance industry going forward — to be successful in this very data-driven landscape, you need to have a very open mindset to be able to tap into other datasets to really be competitive and successful in this,” said Henrik Allert. “And that is a sort of cultural and business mindset aspect that is going to be really important going forward.”