Three articles in Computer Weekly

The online magazine ComputerWeekly.com invited me to contribute a series of three essays, summarizing key insights from my book, Ethics for people who work in tech. The essays are available below, as of January 2024:

  1. Ethics as a process of reflection and deliberation
  2. Alternative perspectives: relational and virtue ethics in tech
  3. Ethical perspectives on ChatGPT

1. Ethics as a process of reflection and deliberation

Published 8 August 2023: In the first of three essays, Marc Steen outlines a three-step process for how organisations can practically integrate ethics into their IT projects and how different ethical perspectives can inform tech development.

Let me start with two questions. Do you think ethics is important in the development and application of algorithms or artificial intelligence (AI) systems? And do you find it easy to integrate ethics into your projects, when you develop or apply algorithms or AI systems?

I have asked these two questions on multiple occasions. Almost all people raise their hands after the first question. Almost all hands go down after the second question. We find ethics important. But we have a hard time integrating ethics into our projects.

There are many reasons why you would want to integrate ethics into your projects. Critically, because technology is never neutral.

Algorithms are based on data, and in the processes of collecting and analysing these data, and turning them into a model, all kinds of choices are made, usually implicitly: which data are collected (and which excluded), which labels are used (based on which assumptions). And all these choices create bias.

If the training data consist mainly of lighter-skinned faces, the algorithm will have trouble with darker-skinned faces. The notorious example is Google placing the tag “gorillas” under a photo of two black teenagers, a problem that (to my knowledge) they still have not fixed properly.
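To make this concrete, below is a minimal, hypothetical sketch (my illustration, not from the original essay): a toy classifier trained on synthetic data in which one group is heavily over-represented. The group names, numbers and the use of scikit-learn are all assumptions for illustration; the point is simply that the under-represented group ends up with a much higher error rate.

```python
# Hypothetical illustration: imbalanced training data leads to unequal error rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true label depends on the features plus a group-specific
    # shift, so a model fitted mostly on group A generalises poorly to group B.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test samples per group: the error rate for group B is far higher.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=1.5)
print("error rate, group A:", round(1 - model.score(Xa_test, ya_test), 3))
print("error rate, group B:", round(1 - model.score(Xb_test, yb_test), 3))
```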

Responsibility

Since technology is not neutral, we – IT professionals, developers, decision-makers, researchers, designers or consultants – have a responsibility. We contribute to putting something into the world that will have effects in the world. Ethics can help to deal with that responsibility; to take responsibility.

Sometimes people do not like the word “ethics”. They envision a judge wagging or pointing their finger. Or they view ethics as a barrier against innovation.

I understand ethics radically differently. Not as a judge or a barrier. Rather, as a steering wheel. If your IT project is a vehicle, then ethics is the steering wheel. Ethics can help you to keep your project in the right lane, avoid going off the road, and take the correct turns, so that you bring your project to the right destination, without collisions.

A process of reflection and deliberation

You can integrate ethics into your projects by organising a process of ethical reflection and deliberation. You can organise a three-step process for that:

  1. Put the issues or risks on the table – things that you are concerned about, things that might go wrong.
  2. Organise conversations to look at those issues or risks from different angles – you can do this in your project team, but also with people from outside your organisation.
  3. Make decisions, preferably in an iterative manner – you take measures, try them out, evaluate outcomes, and adjust accordingly.

A key benefit of such a process is that you can be accountable; you have looked at issues, discussed them with various people, and have taken measures. Practically, you can organise such a process in a relatively lightweight manner, e.g., a two-hour workshop with your project team. Or you can integrate ethical reflection and deliberation in your project, e.g., as a recurring agenda item in your monthly project meetings, and involve various outside experts on a regular basis.

If you want to work with such a process, you will also need some framework for ethical reflection and deliberation. Below, we will discuss two ethical perspectives that you can use to look at your project: consequentialism and duty ethics.

Consequentialism

Consequentialism is about identifying and assessing pluses and minuses. Imagine that you put this system into the world – what advantages and disadvantages will it bring about, in society, in people’s daily lives? What would be its added value? Or its overall costs? You can compare different design options or alternative solutions with each other. You then choose the option with more or bigger advantages, and with fewer or smaller disadvantages.

This perspective, of assessing pluses and minuses, is often appealing. However, you may encounter complications.

Let us look at self-driving cars. What is, overall, the added value of self-driving cars? What problem do they solve? Are they safer? Can drivers rest while driving? Such questions can help you explore entirely different options, like public transport, which is safer, and where people can rest during transit. As a thought experiment, your project, and its assumptions and starting points, can be up for discussion.

Another question is: Where do you draw the boundaries of the system you analyse? Which pluses and minuses do you include? And which do you leave out? You will probably count the benefits for the owner of the self-driving car. But do you count the costs and risks for cyclists and pedestrians? These are questions about system boundaries.

Now, self-driving cars are often put in the context of a so-called smart city, as part of a larger network, connected to all sorts of infrastructure, like traffic signs.

That would enable, e.g., ambulances to get priority at intersections. As a variation, you can imagine some premium service that comes with high-end self-driving cars that give them access to exclusive rush-hour lanes.

Do you then also look at the disadvantages for other road users or residents? You can extend the boundaries of your analysis and look at the human and environmental costs that went into producing such cars.

Moreover, there are questions about the distribution of pluses and minuses. How will benefits and costs be distributed between different groups of people – car drivers, cyclists, pedestrians, children? And, if we look at the supply chain, we would need to take into account the costs that come with the extraction of rare minerals and the conditions of workers in other countries.

Duty ethics

Another perspective we can use is duty ethics. It is concerned with duties and rights.

Let us take another example: security cameras. You can imagine a project with a municipality as a client. They have a duty to promote public safety. To fulfil that duty, they install cameras in public spaces. This raises questions about citizens’ rights to privacy.

So, the duties of one party relate to the rights of another party. This municipality then needs to combine two duties: to provide a safe living environment and to respect citizens’ rights to privacy.

People often perceive a conflict here. As if you need to choose between safety and privacy. But you do not have to. You can work with technologies that combine safety and privacy, e.g., through data minimisation or privacy-enhancing technologies. Jaap-Henk Hoepman wrote a book about that: Privacy is hard and seven other myths.

Finally, a rather silly example, to inspire creative solutions. Imagine that you are in the 1970s and you want to go camping. You can choose between a spacious tent that is very heavy, or a lightweight tent that is very small. There was a conflict between volume and weight, until light, waterproof fabrics and strong, flexible poles were invented. Now you can combine a large volume with a small weight. You can look for creative combinations of safety and privacy. Or of security and usability, e.g., in cyber security.
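To make such a creative combination of safety and privacy, via data minimisation, slightly more tangible, here is a deliberately simplified, hypothetical sketch (mine, not from Hoepman’s book): a camera that processes footage locally and passes on only an aggregate count, never the images themselves. The detector function is assumed; the design point is that what leaves the device is the minimum the safety task needs.

```python
# Hypothetical sketch of data minimisation for public-space cameras:
# count people locally, discard the raw frames, report only the aggregate.
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class CrowdReport:
    location: str
    hour: int
    peak_count: int  # the only data that leaves the camera

def summarise(frames: Iterable[Any],
              detect_people: Callable[[Any], int],
              location: str, hour: int) -> CrowdReport:
    # 'detect_people' stands in for an assumed on-device detector that returns
    # the number of people in a frame; no images or identities are stored or sent.
    counts = [detect_people(frame) for frame in frames]
    return CrowdReport(location, hour, max(counts, default=0))
```

In this sketch, the municipality still gets the signal it needs for its safety duty (how busy a square is, and when), while faces and movements never leave the device.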

In practice

In practice, different ethical perspectives are intertwined, and you want to use them in parallel. You want to analyse pluses and minuses, and take duties and rights into account.

Let us look at one more example: an algorithm to detect fraud. Pros: the algorithm can flag possible cases of fraud and potentially make fraud detection more efficient and effective. Cons: the cost to build and maintain this algorithm. I added maintenance because you will need to evaluate such an algorithm periodically and adjust it if necessary, to prevent it from derailing.

Other drawbacks: false positive errors; in our case, these would be flags for cases that, upon further investigation, turn out not to be fraud. This can cause enormous harm to people who were wrongly suspected of fraud, as thousands of Dutch parents experienced. In addition, the organisation will need to make enormous efforts to repair these false positive errors.
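A rough back-of-the-envelope calculation can make this drawback concrete. The numbers below are invented for illustration (they are not from the essay or from the Dutch case), but they show a general pattern: when actual fraud is rare, even a reasonably accurate detector produces mostly false flags.

```python
# Illustrative numbers only: low prevalence means most flags are false positives.
prevalence = 0.01            # assume 1% of cases are actually fraud
sensitivity = 0.90           # the detector flags 90% of real fraud
false_positive_rate = 0.05   # and wrongly flags 5% of honest cases

cases = 100_000
fraud = cases * prevalence                  # 1,000 fraudulent cases
honest = cases - fraud                      # 99,000 honest cases

true_flags = fraud * sensitivity            # 900 correctly flagged
false_flags = honest * false_positive_rate  # 4,950 people wrongly suspected

share_false = false_flags / (true_flags + false_flags)
print(f"Share of flags that are false positives: {share_false:.0%}")  # about 85%
```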

Moreover, human rights are at play in duty ethics. That was the case with SyRI, the Dutch System for Risk Indication that was banned by the District Court of The Hague in 2020. This was a similar algorithm, used to detect fraud with social services and benefits.

On the one hand, the government has a duty to spend public money carefully and to combat fraud. On the other hand, citizens have a right to respect for private and family life (Article 8 of the European Convention on Human Rights). The judge weighed these against each other and ruled that the use of SyRI violated this human right.

There are more ethical perspectives than these two. In a subsequent installment, we will discuss relational ethics and virtue ethics.

2. Alternative perspectives: relational and virtue ethics in tech

Published 22 August 2023: In the second of three essays, Marc Steen explores the benefits of grounding ethical considerations in an understanding of social and power dynamics, and how relational and virtue ethics can help.

If you are involved in collecting and analyzing data, or in developing or applying algorithms and AI applications, you probably want to do so responsibly. You can turn to documents that list and discuss ethical principles, such as: preventing harm, human dignity, human autonomy, fairness, equality, transparency and explicability. Great principles! But they can remain rather abstract. Possibly, you are looking for practical methods to integrate ethics into your projects.

In a previous installment, I presented ethics as a steering wheel, and ethics as a process. You can use ethics as a steering wheel for your project: to stay in the right lane, take the correct turns, and avoid collisions. And you can organize a process of ethical reflection and deliberation: you put possible issues on the table; organize conversations about these issues; and make decisions, based on those conversations. I also discussed two ethical perspectives as frameworks. With consequentialism, you can assess the potential pluses and minuses of the results (‘consequences’) of your project. You can work to maximize the pluses and minimize the minuses, or choose options with more or larger pluses over options with fewer or smaller minuses. With duty ethics, you can focus on the various duties and rights that are at play in your project. E.g., on the one hand, a city with a duty to promote safety and that therefore installs cameras in public places, and, on the other hand, citizens with rights to privacy. Your challenge is then to combine such duties and rights.

European Enlightenment

These two perspectives were developed during the European Enlightenment: consequentialism by Jeremy Bentham (‘utilitarianism’) and duty ethics by Immanuel Kant (‘Kantianism’). Thus, key assumptions and ambitions of the Enlightenment were embedded in these perspectives. They looked at people as separate individuals, independent of others, and their outlook on the world and on people was objective and calculating. This has been our default, ‘normal’ outlook ever since. But it is only one possible way of looking at the world and at other people, and certainly not the only way.

Below, I will discuss two other perspectives: relational ethics and virtue ethics. The emergence of relational ethics (as ethics of care, in the 1980s) and the revival of virtue ethics (since the 1970s, e.g., in professional ethics) can be understood as a reaction or addition to consequentialism and duty ethics. Moreover, I’d like to propose that relational ethics and virtue ethics are very useful indeed for the development and application of algorithms and AI applications. Relational ethics can help to understand how technologies affect interactions between people; how people treat each other (differently) through technology. And virtue ethics can help to understand how technologies can help—or hinder—people to cultivate specific virtues, such as justice, courage, self-control, or honesty.

Relational ethics

By way of example, let us use a relational ethics perspective to look at Augmented Reality (AR) glasses. You can think back to Google Glass, introduced in 2013 and out of production since March 2023, or to the recently unveiled Apple Vision Pro, or a future, more lightweight version of it. They offer the wearer a combination of a view of the real world with projections of virtual worlds. Now, suppose that we are outside, on the street, and I wear such glasses and look in your direction. You will wonder whether I am filming you, and you will probably not like that. Most people would disapprove of wearing such glasses, certainly in the vicinity of a children’s playground. Or suppose we are talking to each other. You will want to know whether I am paying attention to you, or looking at something else, much like what we already have with smartphones. Wearing AR glasses can make me look at people as objects, and less as people: ‘Nice looking; I’ll take a picture’ or ‘Boring person; I’d rather watch a movie’. Dystopian future? Far-fetched? Possibly. But we did have the Glasshole experience, ten years ago.

A relational ethics perspective typically includes an analysis of power: how is power distributed, and how does power shift through the use of technology? The photos or films that you make with your AR glasses probably go into a cloud of Google, Meta, Apple or Amazon. And because you clicked ‘OK’, that company can use your photos and films for lots of purposes, e.g., to train their AI systems. Subsequently, they can use these AI systems to personalize ads and sponsored content and project these into your eyes. These companies exercise power over users. Of course, they already do that via smartphones. But AR glasses will probably be even more intrusive, especially if you wear them all day, which will probably require that they first become less bulky.

We can also look at possible positive effects. Through AR, people can, for example, receive support to overcome fears, learn about people in other cultures, or collaborate in professional contexts. AR will probably bring both desirable and undesirable effects. A relational ethics perspective can help to develop and apply technologies in such ways that people treat each other humanely, not as objects. Moreover, it can help to take a critical look at business models and the distribution of power.

Virtue ethics

Lastly, virtue ethics. From a Western perspective, this tradition starts with Aristotle in Athens. (Other traditions, e.g., Buddhism and Confucianism, also have virtue ethics.) First, we need to get a potential misunderstanding out of the way. Some associate virtue ethics with mediocrity and with individual behavior. Both are incorrect. Virtue ethics is concerned with excellence, with finding an excellent ‘mean’ in each specific situation. If you see somebody beating up another person and you are physically strong, it would be courageous to intervene; it would be cowardly to stay out of it. If, however, you are not physically strong, it would be courageous to keep away from them and call 911; it would be rash to interfere. Courage, then, is the appropriate ‘mean’ between cowardice and rashness, and depends on the person and the situation. Moreover, virtue ethics is not about individual behavior. It is concerned with organizing a society in which people can live well together.

Shannon Vallor has given virtue ethics a wonderful update in her book Technology and the virtues. She proposes to turn to virtue ethics if we want to discuss and shape emerging technologies, where pluses and minuses, and duties and rights, are not yet clear. Virtue ethics then offers a framework to explore how such technologies can help people to cultivate relevant ‘technomoral’ virtues.

Let us look at a social media app through the perspective of virtue ethics. Usually, such an app nudges people to use the app often and long. With notifications, colors, and beeps. Automatic previews of related content. This undermines people’s self-control. It prevents people from cultivating the virtue of self-control. Although your plan is to just check your email, you end up spending 30 minutes or more on Facebook or YouTube. Many social media apps also corrode honesty. They are designed to promote so-called engagement. They present half-truths and fake news and promote rage and polarization. Suppose that you work on such an app. Can you do something differently? Can you develop an app that helps people to cultivate self-control and honesty? Maybe. If you also change the underlying business model. For example, you can develop an app that people pay for and that asks: What do you want to achieve and how many minutes do you want to spend? After the set number of minutes you get a notification: Done? Maybe do something else now? And for honesty: Are you sure you want to share this? Are you sure it is truthful? Or a reminder like this: Your message contains strong language. Maybe take a deep breath and exhale slowly. Now, how do you want to proceed?
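By way of illustration, here is a minimal, hypothetical sketch of the two features just described: a session timer that asks for a goal and a time budget, and a gentle prompt before sharing a message. It is my own toy example (the function names and the word list are invented), not an actual app design.

```python
# Toy sketch of self-control and honesty nudges, as described above (hypothetical).
import time

def start_session() -> None:
    goal = input("What do you want to achieve this session? ")
    minutes = int(input("How many minutes do you want to spend? "))
    print(f"OK, working on: {goal}")
    time.sleep(minutes * 60)  # a real app would use a background timer, not a blocking sleep
    print("Done? Maybe do something else now?")

STRONG_WORDS = ("idiot", "liar", "hate")  # invented, minimal word list

def check_before_sharing(message: str) -> bool:
    # A gentle honesty prompt before posting.
    if any(word in message.lower() for word in STRONG_WORDS):
        print("Your message contains strong language. Maybe take a deep breath and exhale slowly.")
    answer = input("Are you sure you want to share this, and that it is truthful? (y/n) ")
    return answer.strip().lower() == "y"
```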

Get started with virtues

Virtue ethics is a very suitable perspective for professionals. What virtues do you, as a professional, need in your work, in your projects? Justice, if you are working on an algorithm and want to prevent the propagation of bias and discrimination. Courage, if you want to take the floor and express your concerns about unwanted side effects of the project. The beauty of virtue ethics is that you can start right away and get better with practice. You can choose a virtue to develop: justice, courage, self-control, curiosity, creativity, diversity. And then select opportunities to act differently from how you normally would: you voice your concern about fairness, you ask an open question, you tolerate a feeling of uncertainty, you invite that other colleague to the meeting. In addition, you can look at people whom you admire for their virtues, and learn from them, possibly modelling their behaviors.

3. Ethical perspectives on ChatGPT

Published 4 September 2023: In the final of three essays, Marc Steen uses ChatGPT as a case study for how to use different ethical perspectives, and practical steps people can take to start incorporating ethics into their projects.

In previous essays, we discussed ethics as a process of reflection and deliberation; an iterative and participatory process in which you look at your project from four different ethical perspectives: consequentialism (pluses and minuses), duty ethics (duties and rights), relational ethics (interactions and power), and virtue ethics (virtues and living together). We will test-drive these perspectives, using ChatGPT as a case study. After that, I offer several suggestions to get started yourself.

ChatGPT on the table

A good analysis starts with clarifying: what are we talking about? The more specific and practical, the better. If the application you work on is not yet ready, you can create a sketch and put that on the table. It does not need to be finished. Even better, because then you can take what you learn into further development. For our case study, we can look at how journalists can use ChatGPT in their work. We can also steer development and use towards values like human autonomy, prevention of harm, a fair distribution of benefits and burdens, and transparency and accountability.

Consequentialism: pluses and minuses

With consequentialism, you imagine the pluses and minuses of your project’s outputs in the real world, in society, in everyday life. On the plus side: journalists, and others, can use ChatGPT to work more efficiently. On the minus side: this efficiency can lead to others losing their jobs. On the plus side, ChatGPT can help people improve their vocabulary and grammar. On the minus side, people can use ChatGPT to very quickly and very cheaply produce tons of disinformation and thereby undermine journalism and democracy. It is increasingly difficult to spot fake news, especially when it comes together with synthetic photos or videos. Experts expect that by 2026 no less than 90% of online content will be created or modified with AI.

We can also look at the costs for the environment and for society: mining the materials that go into the chips on which ChatGPT runs, and the energy spent training ChatGPT. We can also look at the distribution of pluses and minuses. For ChatGPT, people in low-wage countries cleaned up the data to train it, often under poor working conditions. Without proper regulation, all the pluses go to a small number of companies, while the costs go to the environment and to people on the other side of the world.

Now, as a professional, if you work on such an application, you can work to develop or deploy it in ways that minimize disadvantages and maximize advantages.

Duty ethics: duties and rights

With duty ethics, we look at duties and rights. And when they conflict, we seek a balance. Typically, developers have duties and users have rights. This overlaps with a legal view, which is very topical: in June, the European Parliament adopted its negotiating position on the AI Act, which also regulates large language models, e.g., with regard to transparency. For ChatGPT, fairness and non-discrimination are also critical. We know that bias in data can lead to discrimination in algorithms; Cathy O’Neil and Safiya Umoja Noble wrote about that. From a similar concern, Emily Bender, Timnit Gebru et al., in their Stochastic Parrots paper, called for more careful compiling of datasets. There is a duty of fairness and care on the part of the developers, because of users’ rights to fairness and non-discrimination. We can also look at why ChatGPT produces texts that run rather smoothly. That is because it is based on tons of texts written by people. That can infringe on authors’ copyrights. Just recently, two authors filed a lawsuit against OpenAI, the company behind ChatGPT, about their copyrights. Another key concern in duty ethics is human dignity. What happens to human dignity, e.g., if an organization uses ChatGPT to communicate with you, instead of another human being talking with you?

In short, duty ethics can help to steer your project between the guardrails of legislation, avoid bias and discrimination, respect copyright, and develop applications that empower people.

Relational ethics: interactions and power

Through the lens of relational ethics, we can look at the influence of technology on how people interact and communicate with each other. When I think of ChatGPT, I think of ELIZA, the chatbot that Joseph Weizenbaum programmed in the 1960s. He was unpleasantly surprised when people attributed all kinds of intelligence and empathy to ELIZA, even after he explained that it was only software. People easily project human qualities onto objects. Blake Lemoine, now ex-Google, believed that LaMDA was conscious.

But the reverse is also possible. Our usage of chatbots can erode human qualities. If you use ChatGPT indiscriminately, you will get mediocre texts and that can erode communication. In addition, ChatGPT can create texts that flow smoothly. But it has no understanding of our physical world. No common sense. And hardly any idea of truth. It produced this sentence: ‘The idea of eating glass may seem alarming to some, but it actually has several unique benefits that make it worth considering as a dietary addition’. Clearly, indiscriminate use of ChatGPT can have serious risks, e.g., in healthcare.

In addition, we can ask questions about power and about the distribution of power. In a world where many people search for information online, the corporations that own and deploy LLMs like ChatGPT have disproportionate power.

Virtue ethics: living well together

Virtue ethics has roots in Aristotle’s Athens. The purpose of virtue ethics is to enable people to cultivate virtues so that we can live well together. Technologies play a role in this. A particular application can help, or hinder, people to cultivate a particular virtue. Spending all day on social media corrodes your capacity for self-control. Self-control is a classical virtue. The goal is to find, in each specific situation, an appropriate mean for a specific virtue. Take the example of courage. If you are strong and you see someone attack another person, it is courageous to intervene. To stay out of it would be cowardice. But if you are not strong, it would be rash to interfere. Staying at a distance and calling 112 is then courageous.

Now, what could you do as a developer? You can add features that allow people to develop certain virtues. Take social media, as an example. Often, there is disinformation on social media, and that undermines honesty and citizenship. When online media are full of disinformation, it is difficult to determine what is or is not true, and that subverts any dialogue. You can help to create an app that promotes honesty and citizenship. With features that call on people to check facts and to be curious about other people’s concerns, to engage in dialogue—instead of making people scroll through fake news and fuelling polarization.

Get started with integrating ethics into your projects

Now, say that you want to get started with such ethical reflection and deliberation in your projects. You can best integrate that into working methods that people already know and use.

For example, you can integrate ethical aspects into your Human-Centred Design methods. You present a sketch or prototype and facilitate a discussion, also about ethical aspects. Suppose that you work on an algorithm that helps mortgage advisors determine what kind of mortgage someone can get. You can ask them what effects the use of such an algorithm could have on their job satisfaction, on their customers’ experiences, or on their autonomy. With their answers, you can adjust further development.

It can also be useful to investigate how diverse stakeholders view the application you work on. What values do they find important? You can, e.g., organize a meeting with a supplier, a technical expert, and someone who knows how the application is used ‘in the field’. Asking questions and probing further is important here. Suppose that two people are talking about an SUV and both think safety is important. One thinks from the perspective of the owner of such an SUV. The other thinks about the safety of cyclists and pedestrians. The challenge is to facilitate dialogues on such sensitive topics.

Finally, you can look at the composition of your project team. A diverse team can look at a problem from different angles and can combine different types of knowledge into creative solutions. This can prevent tunnel vision and blind spots. In the tradition of Technology Assessment, you can explore what could go wrong with the technology you work on. If this is done in a timely manner, measures can be taken to prevent problems and deal with risks. People with different (‘deviant’) perspectives can ask useful questions. That gives a more complete picture—sometimes a complex picture; but hey, the world is complex.