Ethics in digitisation

Interview with business ethicist Dr. Thorsten Busch

“Technology is never neutral”


Digitisation is transforming society. Exactly how, however, is not predetermined – because technology itself is by no means value-neutral, says business ethicist Dr. Thorsten Busch.


Text: Peter Sennhauser, Images: ©Hannes Thalmann, 04 December 2017




Dr. Busch: What is ethics?


Ethics is the study of morality. It centres on the question: what should I do? It searches for the right reasons for actions – from the point of view of the individual and of society.


Is there an ethics of digitisation?


Yes – my Twitter account is called that (grins). But you can indeed look at digitisation through an ethical lens. For example, there is the “Californian ideology”, the ethos of Silicon Valley: there, the people who make the technology, and thereby drive digitisation, live according to their own moral ideas, which they prescribe for the rest of us.

Personal information

Dr. Thorsten Busch is a Senior Research Fellow at the Institute for Business Ethics at the University of St. Gallen and an affiliated faculty member at the Technoculture, Art & Games Research Centre at Concordia University, Montreal. He works on topics such as digital business ethics, internet research, corporate citizenship in the IT industry, politics and IT, the ethics of social networks and digital sustainability.


«Technical solutions always incorporate the values of the people who invented them as well as those of the people who finance them.»


When people talk of digital transformation, they talk of technology – how can technology reflect a morality? Isn’t it neutral?


Unfortunately, that is a stubborn myth – and precisely the one I am trying to dispel. Technical solutions always incorporate the values of the people who invented them, as well as those of the people who finance them. The venture capitalists in Silicon Valley shape the morality of a start-up through their financing, just as its founders do.

On top of that, there are plenty of prejudices and distortions: we humans seem almost inevitably to believe that the machine knows more than we do. There are long-standing research examples of this, such as the spell-check function in Word, which users trust blindly even when it clearly suggests mistakes.


But these days, technology develops almost autonomously: with “machine learning”, which consists of a learning algorithm and large amounts of data. What can go wrong ethically here?


Two things. First, you can feed the machine “bad data”. In its learning phase, the machine is like a child that automatically absorbs and adopts the value systems of its environment. If you then take the messages sent by hundreds of thousands of Americans on Twitter or Facebook, which are full of sexism and racism, as training material, this has a clear effect...


...such as Microsoft’s chatbot “Tay”, which users trained within a few hours to spout racist, sexist slogans.


Exactly: the algorithm projects what it has learnt from old data into the future. It adopts the morality of its training material.

The second risk zone is the algorithm itself, because when programming it, we very much do determine how the software handles data. An irresponsibly programmed algorithm can cause damage.
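To make the first risk zone concrete, here is a minimal sketch – with invented sentences and hypothetical labels, not related to Tay or any real system – of how a standard text classifier simply reproduces whatever bias its training labels contain:

```python
# A toy illustration (invented data, hypothetical labels) of how a
# classifier adopts the "morality" of its training material.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Biased training material: one group only ever appears with
# positive labels, the other only with negative ones.
texts = [
    "engineers solve hard problems",
    "engineers are brilliant",
    "nurses solve hard problems",
    "nurses are brilliant",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The learned bias is projected onto new, unseen sentences.
print(model.predict(["engineers are skilled"]))  # -> ['positive']
print(model.predict(["nurses are skilled"]))     # -> ['negative']
```

The model has no values of its own; swap the labels around and it will “learn” the opposite prejudice just as readily.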


But the technology itself is not “evil”?


No, that isn’t a prerequisite – a one-sided perspective on the part of the decision-makers before the technology is deployed is enough. That’s why academics such as Kate Crawford criticise the fact that digitisation is being driven by young, rich, white men. This group plays a dominant role, if not the only role, in testing each new application.





Can you clarify that with an example?


In Boston, an app was launched to detect potholes in roads: it registered vibrations on regular journeys, such as the commute to work, and reported the GPS position to the infrastructure service. However, instead of leading to the repair of the worst potholes affecting the most people, it led to the repair of potholes on the commuting routes of the relatively well-off. That’s because the prerequisites for taking part in the programme were owning a smartphone and a car and having a regular job.
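The selection bias at work here is easy to simulate. The following toy sketch – all numbers are invented – shows how reports end up mirroring smartphone-and-car ownership rather than actual road damage:

```python
# A toy simulation (invented numbers) of the selection bias in the
# Boston pothole example: damage is the same everywhere, but reports
# only come from residents who own a smartphone and drive to work.
import random

random.seed(0)

# Hypothetical smartphone-and-car ownership rate per neighbourhood.
ownership = {"affluent": 0.9, "low_income": 0.2}

reports = {n: 0 for n in ownership}
for neighbourhood, rate in ownership.items():
    for _ in range(1000):            # same number of potholes hit...
        if random.random() < rate:   # ...but only owners can report
            reports[neighbourhood] += 1

# Repairs prioritised by report count mirror ownership, not damage.
print(reports)  # e.g. {'affluent': ~900, 'low_income': ~200}
```

Although both neighbourhoods hit the same number of potholes, the affluent one generates several times as many reports – and therefore gets the repairs.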


So you say it yourself: morality, the subject matter of ethics, is shaped by culture. This means that ethical behaviour is not a global, objective standard...


...No, ethics is a perspective on the morality of a society. You can assess processes through the lens of ethics, just as you can look at them through the lens of an economist or lawyer. At the same time, morality must be more than the consensus of a group.


So there is an overarching standard?


Yes: if carrying weapons is considered acceptable in the interior of the USA but frowned upon in coastal areas, ethics first examines how many people think the prevailing situation is good. But then it also examines the normative quality of the arguments: the basis of a societal debate is a solid, reasoned case backing up the claim that it makes sense to carry weapons.


Traditionally, Europe has a risk mentality in dealing with innovations, whereas the USA has more of an opportunity mentality. Do nations play a role?


This very clearly plays a role, and it has its roots in history: in a country such as Germany, which in the 20th century experienced two dictatorships that used surveillance as part of their machinery of power, people pay very close attention to what gets stored and monitored, and by whom.


Fittingly, in your dissertation five years ago, you compared the large internet companies to nations. Do companies develop their own individual morality?


All the companies in Silicon Valley have a mission statement and their own moral values. At Facebook, for example, those joining the company receive a small book containing the “Company Values”: the founding myth and the beliefs and values that are considered important.





But companies like to claim that they don’t decide anything and are only responding to demand.


Steve Jobs often said that customers didn’t know what they wanted, so Apple had to show them. Business creates needs and then creates the corresponding products, such as the iPhone. And in this way, companies also help to shape our culture.


In cautious Europe, there is a tension between individual interests, in the form of data protection, and the collective benefits of digitisation. Can it not also be unethical to refrain from applying a useful technology because of occasional negative effects?


That is precisely what society has to negotiate through discussion and political processes: where a technology’s value to the individual takes precedence, and where its value to society weighs more heavily.


...So business has to wait until politics prescribes a democratically endorsed value system for it?


No! That is exactly the wrong way round, because regulation always lags behind. On the contrary, companies must open up much more to their stakeholders, take part in the discussion and give due weight to objections, criticisms and doubts, for example those voiced by NGOs.


If we apply your demands to Switzerland, what should companies do here, in a functioning democracy? Should they anticipate regulatory decision-making?


There are three levels of technological decision-making. The micro level is personal decisions: do I install Facebook Messenger or Threema on my smartphone? You can consciously become a customer of a company which not only upholds data protection but, so to speak, makes it its business model.

Then there is the macro level: politics. Here we hope that politicians will understand the challenges and ultimately enshrine values in regulation.

Between the two lies the meso level, where companies operate: they influence politics on the one hand and are influenced by it on the other. The same applies in their relationship with consumers, who have needs of their own, but in whom business also seeks to awaken new ones.

These three groups must find their way to a dialogue.





Companies should initiate discussions with their customers?


...Yes, and with other companies! Within an industry, companies can talk to each other and jointly agree on a code of what is done and what is collectively ruled out for ethical reasons. This happens in every sector.


The Enlightenment and Industrial Revolution produced human rights, which might be called the ethical inheritance of a knowledge and technological revolution. Can something similar be expected to come from digitisation?


Online gaming communities were already discussing players’ right to have a say in programming twenty years ago. I definitely believe that something like basic digital rights can help us. The European General Data Protection Regulation, the GDPR, perhaps comes closest to this at present, but it doesn’t cover all areas of the topic. A social dialogue about informational self-determination is still needed.


«If you want digitisation, you have to accept all the side-effects.»


...Right up to the question of the point at which devices with artificial intelligence have a consciousness and must therefore have rights?


...Yes, even though this discussion is currently still being conducted very much under the banner of science fiction, and mainly by Hollywood and rich white men from Silicon Valley rather than by academics, let alone politicians.


What would you immediately change in connection with digitisation?


The way in which we talk about technology. I would move away from promises of salvation and demand a more differentiated view: of the benefits for society as a whole and for the individual, and of the risks.

At the moment, I have the feeling that technology is only talked about in absolutes: if you want digitisation, you have to accept all the side-effects. But we can also shape technology differently; we have approaches such as human-centred design, participatory design and value-sensitive design. These are processes that allow more humane and ethical standards to be applied to technological development.


Like the organic and slow food movements or fair trade?


Definitely. It took ten or twenty years for these movements to enter the mainstream, and with technology we simply don’t think in such long time frames. But it can’t hurt to raise our demands now and point to an alternative: the digital ecological version, so to speak.


Digital Manifesto

As a member of digitalswitzerland, Swisscom is also a co-creator of the “Digital Manifesto”.




