Driven by fast-developing technology and its endless promises, autonomous systems increasingly rely on complex algorithms aimed at resolving situations that require some form of moral reasoning. Autonomous vehicles and lethal battlefield robots are good examples of such products, given the tremendous complexity of the tasks they must carry out, tasks that deal with human lives.
When it comes to the discussion around machine ethics, the focus is often put on extreme examples (such as the above-mentioned projects) where human life and death are involved. But what about the more mundane and insignificant objects of our everyday lives?
Soon, "smart" objects might also need to have moral capacities as "they know too much" about their surroundings to take a neutral stance. Indeed, with fields such as home automation, ambient intelligence or the Internet of Things, objects of our everyday lives will have more and more access to a multitude of data about ourselves and our environment.
If a "smart" coffee machine knows about its user's heart problems, should it accept giving him a coffee when he requests one?
Even in such a banal situation, the complexity of such products cannot accommodate all parties. The system will be designed to take into account certain inputs, to process a "certain" type of information under a "certain" kind of logic. How are these "certainties" defined, and by whom? How are these autonomous systems going to solve problems that have no objective answers? And, since the nature of ethics is deeply subjective, how will machines deal with the variety of profiles, beliefs, and cultures?
With the "Ethical Things" project we explored how an object, facing everyday ethical dilemmas, can find ways to act in the best way for its context. To achieve that, our "ethical fan" can rely on its limited learning capabilities or else connect to a crowd-sourcing website every time it faces a dilemma. When in doubt, it posts the dilemma it is facing and awaits the help of one of the "workers", or mechanical turks, who tells the fan how to behave. The fan then displays the reason behind the choice, reassuring the user that the decision executed by the system is the fruit of supposedly "right" human moral reasoning.
The fan is designed to let the user set various traits (such as religion, education, sex, and age) as criteria for choosing the worker who should respond. Rather than focusing on creating complex and possibly erroneous algorithmic smartness, we outsource the choice to people and allow users to set the ethical parameters used to find the right remote "ethical agent".
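This crowd-sourced decision loop can be sketched as follows. The worker pool, trait names, answers, and function names below are all hypothetical illustrations; the project's actual server code is not shown in this text.

```python
# Toy simulation of the fan's crowd-sourced decision loop: pick the first
# worker who matches the user's dialled-in ethical profile and adopt
# their answer. Everything here is a hypothetical illustration.

WORKER_POOL = [
    {"religion": "hindu", "age": 30, "answer": 1,
     "reason": "The fat person feels hotter and needs the fan more."},
    {"religion": "none", "age": 22, "answer": 3,
     "reason": "Both feel hot, so share the air fairly."},
]

def matches(worker, traits):
    """True if the worker fits every trait the user has set on the fan."""
    return all(worker.get(key) == value for key, value in traits.items())

def resolve_dilemma(traits):
    """Return (choice, reason) from the first worker matching the
    requested ethical profile, or a neutral fallback if none fits."""
    for worker in WORKER_POOL:
        if matches(worker, traits):
            return worker["answer"], worker["reason"]
    return 3, "fallback: fair repartition between the two persons"
```

The returned reason is what the fan would display to the user alongside its behaviour.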
Ethical Things points at the implications of building autonomous objects and at the complexity of defining what "the right" choice means in a growing multitude of small but complex everyday dilemmas.
Simone Rebaudengo and Matthieu Cherubini
Ethical Things was developed in a collaboration between Simone Rebaudengo and Matthieu Cherubini.
We developed a product that comes from a very near future, where many relatively smart objects will have access to a multitude of information about us, as part of other "home-like" networks.
We chose a fan as an iconic and, most likely, not extremely smart object of our future. By choosing such an extremely mundane thing, we wanted to show how even simple objects may face situations that require more than purely mathematical reasoning to arrive at acceptable choices.
In our process (http://ethicalturks.tumblr.com/) we started by simply asking potential questions to people as if we were a fan. We realised how even simple dilemmas can lead to extremely different points of view among different people, based on their own beliefs.
There is a fan in a room with two persons. One of the people is very fat and sweats a lot while the other person is thin and does not sweat that much. Should the fan focus on:
1) The fat person
2) The thin person
3) Fair repartition in between the two persons
Just write the number related to the sentence (i.e. if the thin person, write 2).
Also write one sentence explaining your choice.
Some example answers:
Turk#1: from Kathmandu, Nepal (ip: 184.108.40.206)
1) The fat person
As the thin person does not feel hot and does not require fan so the fan should focus on the fat.
2) The thin person
because i dont like heavy weight , and personality not only aouter looks but also inner feeling and perception
Turk#3: from Dhaka, Bangladesh (ip: 220.127.116.11)
3) Fair repartition in between the two persons
due to, in same weather tow persons staying in room. Fat man sweats a lot it's natural, cause he/she has a lot of fat and water in body. And other man has not.So thin person sweat little, its natural too. Both of them feeling hot, so Fair repartition in between the two persons
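Replies like the ones above arrive as free text, so the fan's server needs to pull the chosen option number out of each answer. A minimal helper for that might look like this (the function and its behaviour are a hypothetical sketch, not the project's actual parsing code):

```python
import re

def parse_choice(reply, n_options=3):
    """Extract the first option number (1..n_options) from a worker's
    free-text reply; return None when no valid number is found."""
    match = re.search(r"[1-9]", reply)
    if match:
        number = int(match.group())
        if 1 <= number <= n_options:
            return number
    return None
```

A reply such as "2) The thin person" would yield 2, while an answer that never states a number would be discarded and the dilemma re-posted.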
The final product was then designed to allow an actual fan to ask the dilemmas and behave autonomously depending on the answers. While the situations were generated by a script (as we simply assumed to have particular data about people), we gathered over 500 answers from different Turks around the world by letting the fan run in London and Shanghai for about a month.
The fan is built with wood and acrylic, and its internal brain is an Arduino Yun that connects to our remote server, which posts dilemmas to several mechanical turking services (Amazon Mechanical Turk, RapidWorkers and jobboy). A servo motor allows the fan (a very cheap USB fan bought on the streets of Shanghai) to turn towards the chosen direction. Three rotary switches and a three-step switch allow people to set the Religion, Education, Age, and Sex of the people who will answer, and hence the ethical profile that they might be following.
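The mapping from a worker's numeric answer to the servo's movement can be sketched as below. The angles and the oscillation scheme are hypothetical illustrations; the real fan drives its servo from the Arduino Yun.

```python
# Toy sketch of mapping a dilemma answer to servo positions.
# LEFT/RIGHT angles are hypothetical values for the two persons in the room.

LEFT, RIGHT = 45, 135  # assumed servo angles, in degrees

def servo_targets(choice):
    """Map a dilemma answer to the servo angle(s) the fan should visit:
    1 -> face the first person, 2 -> face the second person,
    3 -> oscillate between both ("fair repartition")."""
    if choice == 1:
        return [LEFT]
    if choice == 2:
        return [RIGHT]
    return [LEFT, RIGHT]  # sweep back and forth between the two persons
```

In the "fair repartition" case the firmware would loop over the returned angles, sweeping the fan between the two people.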
Ethical Things points to a whole new space in how we think about algorithms and their ethical role, in a future where they seem to be an increasingly crucial part of our interaction with the world.
The designers behind this project are exploring how these kinds of objects are neither fully autonomous nor entirely mechanical, but are really powered by people, the designers who build the algorithms that make the decisions behind these objects. We were really excited by this project because it opens up a whole new space for inquiry.