Not-so-smart AI can be quite useful

Reinoud Kaasschieter
Data & AI Masters

--

In October 2021, a completion of Beethoven’s unfinished Symphony №10 was premiered. This time the piece was not finished by a composer, but by an AI program. Even though the makers boasted that computers are finally creative, the result turned out to be rather meagre. “Bloodless, boring, a desecration of majesty,” judged the Dutch daily newspaper Trouw.

Drawing: Creative Commons CC BY-SA 4.0 by Joooojoooo-ooo, via Wikimedia

Time and again we are told how computers and intelligent machines will take over the world, surpass humanity and perhaps even turn evil. Or how computers will solve the world’s big problems, such as climate change and epidemics. But there are also repeated articles describing how artificial intelligence (AI) does not deliver what it promises.

The question we therefore have to ask at this point is: Is Artificial Intelligence as intelligent as we expect? And my follow-up question: Is that a bad thing?

What do we expect from intelligent machines? That they are creative, do unexpected things, surprise us with something new, pick out the best bits for us? But anyone who knows how, for example, Machine Learning works, knows that this is not possible. If you let a blank system learn from large amounts of data, the algorithm will extract the average from all that data.

“One of the mistakes we’ve made as nerds and computer scientists is that we’ve called it artificial intelligence.” (Jim Stolze, writer and entrepreneur)

Because, whether we like it or not, the average is usually what is most common in our society. The exceptional drowns in the sea of data. Where our brain is still somewhat capable of spotting the exceptional, the deviation, many machines are barely able to do so. Machines don’t know what to do with exceptions unless they’ve been explicitly told how to handle them.
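To make that concrete, here is a minimal sketch in Python. The data, the 99-to-1 class split and the model choice are all assumptions made purely for illustration: a classifier trained on data where the exceptions form only one percent of the cases can look very accurate overall while missing most of the very exceptions we care about.

```python
# Minimal sketch: a model trained on imbalanced data favours the common case.
# All numbers and the model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 990 "common" cases and 10 "exceptions" that partly overlap with them.
common = rng.normal(loc=0.0, scale=1.0, size=(990, 2))
exceptions = rng.normal(loc=1.5, scale=0.5, size=(10, 2))

X = np.vstack([common, exceptions])
y = np.array([0] * 990 + [1] * 10)  # 0 = common, 1 = exception

model = LogisticRegression().fit(X, y)

# Overall accuracy looks impressive because the common cases dominate it,
# while most of the exceptions tend to be classified as "common".
print("overall accuracy:    ", model.score(X, y))
print("recall on exceptions:", model.score(exceptions, np.ones(10)))
```

The overall score says very little about how the rare cases fare; that is exactly the blind spot described above.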

“A” in AI as in “automation”?

Perhaps we should put less emphasis on the intelligence in Artificial Intelligence. If AI’s goal of properly mimicking or surpassing human intelligence isn’t achievable right now, shouldn’t we start thinking about AI differently? And that’s already happening. Successful AI implementations do not treat AI so much as a super-smart tool, but mainly as a tool to automate processes. For example, very labor-intensive processes, as in Robotic Process Automation (RPA). Or processes that cannot be done by people at all, such as high-frequency trading on the stock exchange.

If we see AI primarily as a method of automation, we can use AI more realistically: to do more things faster and cheaper. And maybe, but not necessarily, better than humans. Because, as I already described, AI is especially good at processing common cases, not at handling exceptional ones. Automation focuses on this bulk of common cases, which is where the most return can be achieved. Exceptions do not need to be automated; they can continue to be handled manually. Many organizations have set up their processes this way, even before Artificial Intelligence and Machine Learning could be applied.
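That division of labor can be sketched in a few lines. This is a hypothetical illustration, not a recipe: the data, the model and the 90% confidence threshold are all assumptions; the point is only that the confident bulk is automated while uncertain cases go to a person.

```python
# Minimal sketch: automate the confident bulk, route uncertain cases to a human.
# The data, the model and the threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def route_case(features, threshold=0.9):
    """Return 'automated' for confident predictions, 'manual review' otherwise."""
    confidence = model.predict_proba([features])[0].max()
    return "automated" if confidence >= threshold else "manual review"

decisions = [route_case(x) for x in X]
print(f"{decisions.count('automated') / len(decisions):.0%} handled automatically,"
      f" the rest goes to a person")
```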

In an article from 2017, Tom Rikert states that only with automation do you get a return on investment (ROI) that makes your AI project profitable. AI projects are complex and expensive, so they must provide sufficient added value. And that is only possible when AI takes a lot of work off your hands.

“You can’t simply sprinkle some AI on top of a problem to expect a good outcome. The misconception is often that you can, but any good modelling is built on the foundations of knowing the domain and what the technology is capable of.” (Simen Norrheim Larsen, application consultant at Capgemini)

“This level of near-total automation requires a ton of trust, hence the value of domain experts to set it up and AI learning to capture data, patterns, and insights at scale,” says Rikert. So in addition to excellent data quality, we also need a lot of expertise to understand this data and to use it efficiently, and above all ethically. The AI software is not going to do that for you by itself.

Automation can also cause ethical problems

It is precisely this ethical, or rather unethical, application of Artificial Intelligence that causes the problems that make the news: AI discriminates against population groups, disadvantages individuals and makes serious errors of judgment. There are ethically irresponsible applications, but there are also applications that are ethically sound. Often the cause lies not so much in the AI software itself, but in the data used to train the AI. If there are biases in that data, implicit or explicit, the AI software will pick them up mercilessly. That’s why at Capgemini we see the ethics of AI as part of data ethics.

“Ethics programming is going to be a very difficult task because there is no perfect moral algorithm yet.” (Max Herold, consultant at the Dutch Government)

And because in automation we want algorithms to work autonomously, we have to be extra alert to errors. It then makes little difference whether the algorithm is intelligent, learning or “stupid”. Wrong is wrong. The difference is that with so-called “dumb” or programmed algorithms it is easier to trace the source of the error than with AI models. With complicated Machine Learning systems, it is hard to find out where a wrong turn has been taken within the model. The models do not lend themselves to debugging; they are opaque: a black box.
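A small and entirely hypothetical sketch of that difference: the programmed rule below can report exactly why it rejected a case, while a model trained on the same decision only returns a label, with the reasoning buried inside it. The loan-check example, the numbers and the names are invented for illustration.

```python
# Minimal sketch: a programmed rule is traceable, a trained model is not.
# The loan-check example and all numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_decision(income, debt):
    """Every branch is explicit, so a wrong outcome can be traced to a rule."""
    if debt > 0.5 * income:
        return False, "rejected: debt exceeds half of income"
    return True, "approved: debt within limit"

# The learned counterpart: same inputs, but the "why" is hidden in the model.
rng = np.random.default_rng(1)
X = rng.uniform(0, 100_000, size=(500, 2))      # columns: income, debt
y = (X[:, 1] <= 0.5 * X[:, 0]).astype(int)      # labels generated by the same rule
model = RandomForestClassifier(random_state=1).fit(X, y)

print(rule_based_decision(60_000, 40_000))      # (False, 'rejected: ...')
print(model.predict([[60_000, 40_000]]))        # just a label, no explanation
```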

Do I need learning systems?

Because Artificial Intelligence can be opaque, it is difficult to explain how and why an AI model reached its decision. Before we start applying Artificial Intelligence, we must therefore first check whether learning systems are even necessary in the situation where we want to use them. Can’t we fathom the complexity of the problem ourselves, rather than leaving that to AI? Can we simplify the situation so that simpler solutions suffice? Do we genuinely understand what the situation is about, or do we simply lack knowledge, and would more research bring that insight? Perhaps modesty is in order here. The simpler the solution, the better and the faster the results.

Artificial Intelligence can help automate processes better. When applied carefully and ethically, there are great benefits to be gained. Apply AI consciously, know what you are applying, and do not use it just because it is hip and modern and may yield something in the long run. The financial benefits, too, come mainly from applying AI to bulk processing. The application of AI within automation processes may not be the most groundbreaking use, but it is realistic. And that realism is desperately needed to use Artificial Intelligence usefully, profitably and ethically.

The Dutch version of this article has been published on the Capgemini website.

--

Reinoud Kaasschieter
Data & AI Masters

I’m an expert in Ethics and Artificial Intelligence, and in Information Management, at Capgemini Netherlands.