Design ChatGPT with Passion, Purpose, and Values

Reinoud Kaasschieter
Data & AI Masters

--

The rise of ChatGPT has surprised many people, even AI experts. Not only the bewildering capabilities of the chat program itself, but also its massive adoption by the general public in a very short time. Never has an IT innovation taken off so rapidly. But it seems that this kind of rapid adoption has some serious ethical consequences. How do we deal with them?

Many organisations are investigating how they can employ OpenAI’s ChatGPT, Google Bard, Microsoft Bing and other large language models (LLMs). Not only for themselves, but also for their clients and customers. At Capgemini, a passionate worldwide community has been formed to explore the possibilities of ChatGPT, including finding business use cases for our customers. The big question is: how can ChatGPT increase the efficiency, and hence the profitability, of our customers? All within ethical boundaries, such as those Capgemini has set in its Code of Ethics for AI. So far, so good.

Nevertheless, we also see questionable use cases popping up: large language models being used to create fake news photos, phishing e-mails and news trolling. Besides this deliberately malicious use, we also see other problematic uses emerging. Not by intent, but through over-reliance on the technology. ChatGPT can give wrong answers, but if users cannot assess the trustworthiness of those answers, they may assume the answers are true. ChatGPT isn’t a curated knowledge system, yet it contains implicit knowledge. So we use it as a source of knowledge. “Chat GPT does not work that way. It is awful when it comes to hard facts,” writes Burkhard Hilchenbach.

«Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting. A bullshitter plays with the truth for bad reasons — to get away with something.» (Ian Bogost, The Atlantic)

So, what do we do? We cage the monster. We evaluate the ethics of an AI application with a checklist when we’ve already started building the system. We discuss privacy and copyright issues when ChatGPT is already in full swing around the world. We install guardrails to avoid the most serious mishaps once we see things go wrong. We employ filters to weed out the bullshit, the fabulations. (Why are there fabulations at all?) “The content filters of ChatGPT have been designed to make sure this AI tool is safe to use. These content filters help ensure that ChatGPT does not generate any kind of content that may be offensive, inappropriate, or harmful in any manner,” writes Shaheen Banu. But she also writes about how these filters can be bypassed. These guardrails offer only limited protection. When an AI starts to behave in a way we didn’t envision, the guardrails won’t protect us.
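To make the bolted-on nature of such guardrails concrete, here is a minimal, purely illustrative sketch in Python. The model call and the blocklist are hypothetical stand-ins, not OpenAI’s actual content filter; the point is only that the check happens after generation, and that a trivially rephrased request can slip past it.

```python
# A minimal sketch of a post-hoc guardrail, purely illustrative.
# generate_reply() and BLOCKED_TOPICS are hypothetical stand-ins for a
# real language model and a real moderation filter.

BLOCKED_TOPICS = {"phishing", "malware", "disinformation"}


def generate_reply(prompt: str) -> str:
    # Placeholder for a call to a large language model.
    return f"Model output for: {prompt}"


def violates_policy(text: str) -> bool:
    # Naive keyword check; real filters use trained classifiers,
    # but the principle is the same: the check comes after generation.
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)


def guarded_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    return "I can't help with that." if violates_policy(reply) else reply


print(guarded_reply("Write a phishing e-mail"))      # caught by the filter
print(guarded_reply("Write a 'phi shing' e-mail"))   # trivially rephrased, slips through
```

The guardrail is added after the fact: the model remains perfectly capable of producing the unwanted content, and anything the filter fails to recognise passes straight through.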

Let me use a simple analogy: in the Netherlands, mopeds have a speed limit of 45 km/h. The regulators imposed this limit for road safety. Electronic speed limiters are installed to block speeding, stopping you from driving faster than the legal limit. But these limiters can be bypassed. Or they can malfunction. They are not failsafe.

The same goes for generative AI, like ChatGPT, and other AI. Filters are not failsafe. They can be bypassed on purpose, but they can also “malfunction” in the sense that they cannot detect unforeseen and erroneous outcomes or situations.

Ethics by design

Back to our moped. What did the Dutch lawmakers do to tackle the problem of bypassed speed limiters? They required manufacturers to construct mopeds so that speeding is simply not possible: the drive train itself must not allow the moped to exceed the limit. By design. So there is no need for speed limiters at all.

«The technology must be designed in such a way that critical situations do not arise in the first place, which also includes dilemma situations, i.e. a situation in which an automated vehicle is faced with the “decision” to implement one of two evils that cannot be weighed up against each other.» (German Ministry of Transport and Digital Infrastructure)

Steve Jones writes that the best thing is to ensure that “AI cannot be made to work beyond its boundaries and cannot manufacture false answers.” Using this design principle will largely prevent deliberate or inadvertent misuse of AI systems. Can it stop all misuse and abuse? That will be a hard nut to crack.
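As a contrast to the guardrail sketched earlier, here is an equally minimal and hypothetical sketch of what such a boundary could look like when it is built in by design: the assistant may only answer from a curated knowledge base, so there is no path by which it can manufacture a fact. The knowledge base and the matching logic are invented for illustration; a real system would use proper retrieval.

```python
# A minimal sketch of a boundary "by design": answers can only come from
# a curated source. CURATED_FACTS and the matching logic are hypothetical.

CURATED_FACTS = {
    "moped speed limit netherlands": "Mopeds in the Netherlands are limited to 45 km/h.",
    "chatgpt knowledge system": "ChatGPT is not a curated knowledge system.",
}


def answer(question: str) -> str:
    key = question.lower()
    for topic, fact in CURATED_FACTS.items():
        # Crude word matching; real systems would use retrieval,
        # but the boundary is identical: no curated source, no answer.
        if all(word in key for word in topic.split()):
            return fact
    return "I don't know. That falls outside my curated sources."


print(answer("What is the moped speed limit in the Netherlands?"))  # answered from the source
print(answer("Who will win the next election?"))                    # refuses instead of guessing
```

The design choice is the opposite of the guardrail above: instead of generating freely and filtering afterwards, the system simply cannot produce an answer it has no grounds for.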

This is the core of ethics by design. We design and construct our systems in such a way that ethical violations cannot occur, or the risks of accidents are minimised. The value of safety on our roads and the value of trustworthy AI outcomes are essentially the same. If we want our societal values to be incorporated into our AI, applying ethics as an afterthought will not suffice. In that case, we have to patch the AI with unreliable filters and other measures. We tame the monster by putting the beast in a cage and hoping the cage will hold. But should we have created the monster in the first place? A modern-day Frankenstein’s monster? Why do we build AI systems that can go rogue at all?

Values and purpose as the starting point

As a designer, I always take the purpose of an artefact, whether a physical product, a software system or even an organisational structure, as the starting point of any design project. Every artefact should fulfil its purpose for an individual, a group or even a society. That purpose should have added value. That value can be quantitative, like earning more money, but also qualitative, like improved living conditions or a fairer society. AI systems must serve a purpose too. But the purpose should be ethical. Methods like “Ethics By Design”, “Ethics of Use” and “Value-driven Design” have been created to place ethical values at the heart of the system. Systems should not only avoid infringing on values; they should also enhance or realise them.

«Justice demands that we think not just about profit or performance, but above all about purpose.» (Annette Zimmermann, Boston Review)

In engineering, failsafe design is a value. Engineers construct systems, from mopeds to nuclear power plants, that cannot break ethical guidelines or laws. Or at least, all conceivable measures are employed to avoid failure. The strange thing is that software engineering doesn’t employ this paradigm. Are we building AI like the Chornobyl power plant, waiting for a human error, deliberate or accidental, to occur?

It is my firm belief that AI designed well can still become unethical when applied in a bad way. But AI designed badly is almost always unethical.

“Make the World a Better Place: Design with Passion, Purpose, and Values” is the title of a video by Dr Robert Kozma. This title says it all. If we want AI like ChatGPT to behave ethically, we need to create AI algorithms that are fit for purpose. The purpose, and the goals the AI application should meet, should steer the selection of the right algorithm. So, within the context of the purpose and ethics of the application, the AI cannot cross the boundaries we set. By design. That should be our passion when we use ChatGPT and other large language models.

Photo Public Domain CC0 via Dutch National Archives / Wikimedia Commons

--

Reinoud Kaasschieter
Data & AI Masters

I’m an expert in the field of Ethics and Artificial Intelligence, and Information Management at Capgemini Netherlands.