“Robot Ethics” Is Not an Oxymoron

13 Nov 2017

Robots are changing the way we live, work, and interact with one another. If we want to ensure that new technologies work for the good of society, it’s important to consider their ethical implications before they are implemented.

STI Experts

A university professor of philosophy of media and technology is just the right specialist to address the ethical issues raised by robots. Mark Coeckelbergh, of the University of Vienna and a contributor to STI’s Technology and the Good Society experts meeting, discusses them in this interview, on the heels of a UNESCO session titled “Robots: Ethical or Unethical?”


Photo © Philip Provily

You gave a talk at UNESCO last week, at an event about robot ethics. What is UNESCO doing in this area?

UNESCO does work on this in the context of COMEST (the World Commission on the Ethics of Scientific Knowledge and Technology), which has just released a report on robot ethics. I think it is very important that international and supranational organizations work and develop policy in this area. It’s a good moment now for policy makers to pick this up; there is more public attention on issues concerning artificial intelligence and robotics. But the hype can also be a problem.

In what way is the hype problematic?

I argued in my talk that prominent voices in the current debate, such as Elon Musk and the transhumanists, unnecessarily and unhelpfully create a climate of fear and focus mainly on science fiction. Against this alarmism, I called for more attention to current and near-future issues and to more concrete ethical problems: problems that will affect people in their daily lives, in sectors like communication, transport, and health care.

What ethical problems do you identify?

There are some issues that all information technologies raise, such as privacy and security. Addiction, too: as we all know, these phones and algorithms are terribly addictive. They are designed to be. They’re part of business models in an attention economy: the more we click, the more profit companies make. But I also pointed to topics specific to robots, artificial intelligence, and automation technology, such as agency and autonomy. When robots take over some tasks, do we lose agency? Will they take our jobs? Or can we collaborate with robots in a way that is not problematic? And who is responsible when a self-driving car causes an accident? What is the moral status of robots? Is it wrong to kick them? This is something I’ve worked on a lot over the past ten years. Another one: does automation technology create distance, for instance in the financial world between traders and investors and the people affected by their decisions? And how should we deal with vulnerable users, such as children and some categories of elderly people, for example in health care robotics? What are the potential gender issues, for instance when robots are given a synthetic voice and a human-like shape?

Lots of questions.

There are many questions and no simple answers. Philosophers can help to analyze the issues.

And then inform policy makers who may regulate robotics?

Sure, but the problem is that policy makers – and philosophers – are often too late with their laws and principles: the technology is already in use. That’s why I argued for pro-active ethics, for building reflection on ethical and social issues into the very process of technology development, the process of innovation. That’s good for ethics and good for business. Think about the Volkswagen case: if people in business and tech don’t think about ethics, things can go very wrong, for the company and for society. Corporations big and small that are now creating the technology of tomorrow need to collaborate with ethics experts, social scientists, and others to make sure their work actually contributes to a better society. Technical and business experts are not enough. So I try to give a positive message, rather than being against technology or against business.

Is the message often negative?

Traditionally, many philosophers critical of science, technology, and society have projected dystopian futures about what happens when advanced technology takes over. And still today there is often the feeling that technology is threatening, or the prediction that it is going to take over. But I think it’s important to be constructive, to remain open to the new opportunities technological innovation can create for human experience and human values. Philosophy can help here, but so can others, such as artists, who can help us explore new possibilities.

What does this mean for organizations like UNESCO?

Since the problems of the new technologies are global by nature, we also need global solutions. We need a global policy for robotics and artificial intelligence. One of the things I like about UNESCO is that it also listens to the humanities and social sciences, and connects to people in the art and culture sector. It thus has a rather unique position, with a lot of potential for bringing people together to think about technology in new and creative ways. And international organizations such as UNESCO can also show us non-Western perspectives. Maybe in some contexts low-tech problems are more important, such as clean water, electricity, and food. The use and meaning of technology also depend to some extent on culture. We need perspectives that show us diversity between societies and between regions of the world. Otherwise, our thinking about technology and society remains rather parochial. Thinking about robotics, for example, should not be a hobby of well-off people in Europe and the U.S. It should be related to social and political issues, such as war and peace and socio-economic problems. In February I’m organizing a big conference, “Robophilosophy 2018,” which I want to focus on political, economic, and social issues. Think about the future of work: will automation require a basic income? Think about human relationships: are sex robots OK or problematic from a gender studies perspective? Robotics is not just about machines and our psychology. It is also about what kind of society and what kind of world we want to live in.

You seem to be very busy these days. What’s next?

In Lisbon I’m going to talk about artificial creativity and alterity. Are humans the only ones who can be creative? What happens when a robot performs? Is it a performance? What does it mean if humans and robots do something together? And can robots be quasi-others, a kind of partner in a social world? Why not? Actually, I’m giving three talks there; then I will fly to Brussels, where the Belgian parliament has asked me to brief them on the ethics of autonomous weapons.

Also an important topic…

Yes, the topic is also getting attention at the UN. It’s a typical example of a kind of technology that creates global problems and needs global solutions. But not only from governmental organizations; I also argued that non-governmental institutions can and should play a role. We should not always assume that states can solve everything. I urged policy makers in international organizations and governments not only to regulate top-down but also to support bottom-up initiatives: actors in civil society, citizens who take the initiative, and of course scientists as well.

Philosophers, too?

Yes, they can contribute. By talking to people from various sectors and backgrounds, here in Vienna and also at UNESCO, I try to bring together the humanities, science, and art. Disciplinarity is so nineteenth-century. We also have to rethink the university: it is no longer justified to maintain fixed borders; academics need to connect with the wider society. They need to take up their responsibility. We need more intellectuals and mediators. If we, as humanity, want to tackle the big problems of today, like climate change and the societal transformations caused by new technologies, we need to explore new ways of pooling social intelligence, and we need new procedures and institutions.

What should policy makers do? What do you think UNESCO should do?

At the end of my UNESCO talk I challenged the member states’ representatives to think about what they can do, and stressed the urgency of creating a vision and policy together. UNESCO can help to bring people together and to develop common policies; it can raise awareness and initiate new processes in this direction. But states, regions, and also non-governmental and more local political actors all have to give input and think, in their specific contexts, about the implications of technology. When it comes to the possibilities and consequences of technology, everyone is a stakeholder. Everyone should feel involved, get involved, and think about it.


Visit Professor Coeckelbergh's website: coeckelbergh.wordpress.com