Ethics of AI is ethics of automation

Reinoud Kaasschieter
3 min read · Dec 2, 2021

In a 2017 article, Tom Rikert argues that Artificial Intelligence (AI) pays off when it is used to automate processes to a great extent. AI that tries to imitate human thought processes is interesting from an academic point of view but does not yet yield much for companies. Simple AI that increases productivity is better than complicated intelligence that can only be used sporadically.

[Image: CC0 Public Domain]

Suppliers also emphasize the automation of processes in their marketing. From Robotic Process Automation to Automated Decision-Making, all these applications are aimed at taking work off people's hands. Within those processes, Artificial Intelligence is nothing more than a tool with which you can automate.

Through mechanisation and automation, companies produce more at lower cost. Automation even allows companies to offer new products and services that were previously impossible to create. But automation also has downsides. Doesn't automation lead to impersonal services, where the human dimension and compassion are lacking? Before we look at the ethics of AI tooling, shouldn't we first study the ethics of automation?

Many ethical considerations surrounding automation are about work. Does automation reduce employment, or does it create extra work? Do tasks become more tedious, or can we focus on more rewarding things? Such ethical questions have been asked since the beginning of the industrial revolution. They are still important, but are they sufficient?

With the advent of Artificial Intelligence, new ethical issues in automation are being discovered. Discrimination, bias, exclusion, prejudice, and other problems keep popping up. Individuals and groups are excluded from or disadvantaged by computer systems, without any empathy for their backgrounds and histories. That makes people feel powerless against algorithmic decisions. These are serious issues that require all our attention. But are these concerns typical of Artificial Intelligence, or is there more to it?

The recent affairs surrounding social security allowances in the Netherlands show that ethical issues can arise out of automation itself. “Stupid” algorithms can also discriminate against social and economic groups. Algorithms make badly informed decisions, causing serious problems for those affected. And because this happens automatically, we don't know how it works out for individuals and minority groups. When human beings have conversations, they can listen to counterarguments and objections. Automatic systems can hear but cannot listen. Somewhere in the design process, someone has not given sufficient thought to the consequences of the decisions a system can and will make, or the designer has ignored its possible future behaviour. The consequences are far-reaching, because automatic systems can make a lot of decisions in a very short time.

It is well known that Artificial Intelligence can make unethical decisions. But stupid algorithms can also make unethical decisions. And when these decisions can affect large groups of people at lightning speed, harm is already done.

That is why we need to expand the ethics discussion around Artificial Intelligence to an ethics discussion around automating processes. We need to start using the same criteria and principles by which we assess Artificial Intelligence to evaluate automation projects. Only when an automated system is ethically sound can we start thinking about the ethical application of a possible component in that system: Artificial Intelligence.

This article has been previously published on the website of ag connect (in Dutch).


Reinoud Kaasschieter

I’m an expert in the fields of Ethics and Artificial Intelligence and of Information Management at Capgemini Netherlands.