AI ethics is more than a framework
Discussions around Artificial Intelligence (AI) and ethics are gaining pace this year. A lot has been done at universities, technology companies and consulting firms. Many large tech firms have published frameworks on how your organization can build customer trust. These frameworks outline how to embed ethics within your AI projects and applications. But is following these frameworks enough to become truly ethical around AI? I’m afraid not.
AI frameworks are a good starting point. But like all frameworks, they’re never complete. (The makers of the frameworks never claimed they were.) So, what extra steps should you take to become truly ethical? Here are four points I think are important.
1. AI ethics is more than a haphazard thing
Well, you can pursue a successful career in IT without much knowledge of the history of IT. ENIAC, the IBM System/360, Turing, Von Neumann, the Macintosh: all names that are fun to know. And useful when you see that some hypes are just refurbished old ideas. But probably not need-to-know for your current day-to-day job in IT. In ethics, however, things are different. Knowledge of the history of moral philosophy is essential: to understand what the current ethical positions around AI mean, and to know the pros and cons of each of those positions.
“It is not always easy to do the right thing, to know what the right thing to do would be, or even to recognise that a question of right or wrong is at stake. For centuries philosophers have thought and debated about what is morally just” (Sogeti)
Basically, you must know what virtue ethics, consequentialism (or utilitarianism) and deontology are within the scope of normative ethics in Western philosophy. I’m afraid getting acquainted with ethics requires study. Leveraging the thoughts and systems the great philosophers have given to the world helps you do better at AI ethics.
Without this background, discussions around ethics and morals reflect personal views only. These may differ substantially between the participants in an ethical discussion. Using ethical principles helps to streamline that discussion and leverage what’s already been discussed by others. Ethics offers standards of practice and can set expectations.
There are plenty of sites and books introducing you to ethics. The whitepaper “Value Sensitive Architecture” by Sogeti starts with a concise and excellent overview of normative ethics for IT systems.
2. AI ethics is more than risk management
In the short history of AI in the real world (outside the laboratories), some severe mistakes have been made. And are still being made. These mistakes don’t only have negative consequences for the people involved. The good name of the producers of the AI software and services can be damaged too. Or worse: AI failures can hurt profitability and sometimes even threaten the existence of companies. I’m not going to reiterate the failures and accidents of the past, but I’m quite sure those mishaps have made organizations more careful about the consequences of applying AI.
“[…] Managers must carefully consider both potential positive and negative outcomes, opportunities, and challenges associated with the use of these [AI] tools.” (IBM)
Hence the tendency for organizations to see AI as a risky venture. Business risks need to be managed. From a consequentialist approach to ethics, negative consequences should be minimized. For this, we have risk management. So far so good. But looking at AI ethics as risks only is too narrow.
From a moral perspective, minimizing risks is one of the duties of the management of an organization. But whereas risk management focuses on the negatives of an AI solution, the positives of an AI solution are valuable too. These should also be part of the equation. Some guidelines, like the “Ethics guidelines for trustworthy AI” from the EU, also formulate positives, like benefits for society and the environment. Even the much-discussed topic of fairness in AI has a positive side: unfair AI can lead to discrimination, but fair AI will lead to a more inclusive society. We should look at the positive outcomes from an ethical standpoint too. AI should contribute to the common good.
3. AI ethics is more than compliance
Risk management is a tricky business. Hence the need for guidelines for AI ethics. Adhering to guidelines decreases the risk of ethical faults in AI implementations. These guidelines and principles give the necessary frameworks to build ethical AI. But one should never think that complying with all guidelines guarantees ethical AI. Ethics is more than compliance alone.
“I already see too many organizations that are looking to get approval or an accreditation as an ethical AI organization as some kind of business advantage” (Nigel Willson)
For starters, complying with ethical guidelines is good. These are rules that we should comply with. When these rules can be regarded as universal rules, like those of the EU and UNESCO, we’re getting somewhere. But as an organisation, we shouldn’t comply only for the sake of it, because it must be done due to laws or regulations. That’s not a good starting point. There should be a personal moral obligation or duty to do good. We should have the intention to be an organization that wants to do good in all its activities, not in AI alone.
Nigel Willson, Founding Partner at awakenAI, puts it this way: “We need the frameworks and we need them as guard rails, and we need the legislation because we have to have protection. But for me an organization that’s looking to be an ethical AI organization needs to do it because they want to, not because they have to. Ethical AI in practice comes from the heart and action.”
4. AI Ethics is more than an afterthought
It’s my firm belief that AI ethics is not the cherry on the cake. Or something that has to be tested after the solution has been made. AI ethics is in the cake, it’s an important ingredient of the cake. Some would even say it’s the most important ingredient, because AI ethics are there for the common good.
But that’s nothing new. Business ethics have been around for a while. I’m convinced this discipline can help frame AI ethics within a business context. The AI used in an organization should at least adhere to the same values as the business itself. “[Values] can be critical in determining how a company deals with certain situations and how it handles internal and external issues. Values help business leaders stay aware of temptations and prevent lapses as the business grows,” states the website of Embroker.
“But ethics needs to be proactive and prepare for what could go wrong, not what has gone wrong already. […] But as these systems become more powerful and get used in more high-stakes domains, the risks will get bigger.” (Jess Whittlestone)
Methods like Value Sensitive Design build on the idea that any technological design, including AI, should be based on values, foremost ethical values. Those values are held by the stakeholders, in a broad sense of the word. The values should be researched and used as requirements. This should all be done before the project even starts.
This, however, is no guarantee that the final system is ethical either. But it is a method of putting the ethical values held by so many of those involved to positive use. When the AI system materialises those values, the world will become somewhat better. Guidelines can always be used to check for unforeseen negative consequences. At the very least, these negatives should be outweighed by the positives. But this weighing is not easy; it’s not a simple calculation. “Regard AI [ethics] as a negotiation between utility and humanity,” says David De Cremer, professor at the National University of Singapore.
To let AI serve the common good, AI researchers and practitioners have drawn up the following recommendations:
1. Study the various faces of knowledge and non-knowledge, be wary of framing.
2. Identify and involve stakeholders throughout.
3. Keep looking for individual and common effects; don’t forget the socio-political.
4. Employ proportionality thinking: suitable & necessary, balance of interests?
5. Focus on whole systems instead of parts (e.g. algorithms).
6. Consider feedback loops and causal dynamics.
7. Draw on the state of the art: e.g. anonymisation, discrimination-aware methods.
8. Ask questions, early and again.
9. Be ready to be wrong and embrace learning!
AI ethics is hot, but not too hot to handle appropriately. AI ethics is embedded in business ethics and in ethics in general. Leveraging what has been found there helps to streamline and structure the discussion, and guidelines give guard rails. But we should keep thinking critically about the positive and negative consequences of AI for people, organisations, society and the environment.
More importantly, ethical thinking and action should be present throughout the whole development and use of AI. From the conception onwards, an ethical discussion should be conducted with the stakeholders. All the people affected should be included, even if they don’t directly participate in the project itself.
The good news is that this has been done before. It’s not all new. Methods like Value Sensitive Design help you to structure your project and create AI systems that are not only beneficial for you, but also serve the common good.
Graphics: CC0 Public Domain — Foundry via Pixabay