Stewart is a tactile interface designed for a fully autonomous car. Self-driving cars offer obvious benefits such as faster travel and enhanced safety. However, they also eliminate a sense of freedom, expression, and control while driving.
Stewart's objective is to accommodate a healthy relationship between man and machine, achieved through an intuitive and expressive form of interaction. Stewart provides you with constant updates about the car's behavior and its intentions. If you don't agree with the car's next course of action, you can manipulate Stewart to change it. Stewart will learn from you just as you can learn from Stewart, hopefully resulting in a mutually trusting relationship.
Interaction through Stewart will bring about a haptic discussion about what the car's next move will be. Who will win this discussion? Who knows best?
Emotional or rational
Stewart is a tactile interface designed to overcome human resistance to adopting the fully autonomous car. Stewart aims to accommodate a healthy relationship between man and machine by enabling intuitive and expressive forms of interaction with an otherwise autonomous vehicle.
Stewart tries to make the driving experience personal again by enabling 'dialogue', or even discussion, with an autonomous car. This dialogue can have multiple levels of intensity, from simply changing direction to deliberately breaking the law. Changing direction or adjusting speed should be easy for the human driver to influence; how much dialogue this provokes depends on how strongly the action affects parameters like arrival time and fuel consumption. However, if the human driver deliberately intends to break the law, for instance by running a traffic light or steering toward a person, the car will (re)take control and communicate this through an intense haptic discussion. When a decision has to be made fast, the car will always make it for the driver. Any decision the car takes is rational, and essentially better thought through, which increases safety during the journey.
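To make this arbitration between driver and car a little more concrete, here is a minimal sketch in Python of how such a negotiation could be structured. Everything in it (the DriverSuggestion fields, decide_response, the thresholds) is a hypothetical illustration of the behavior described above, not the actual Stewart implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Response(Enum):
    COMPLY = auto()     # car accepts the suggestion with barely any dialogue
    NEGOTIATE = auto()  # car pushes back haptically, proportional to the impact
    OVERRIDE = auto()   # car (re)takes control: illegal, dangerous, or time-critical


@dataclass
class DriverSuggestion:
    # Hypothetical fields describing a suggested manoeuvre.
    delay_minutes: float      # estimated effect on arrival time
    extra_fuel_pct: float     # estimated effect on fuel consumption
    breaks_law: bool          # e.g. running a red light
    endangers_people: bool    # e.g. steering toward a pedestrian
    seconds_to_decide: float  # time remaining before the car must act


def decide_response(s: DriverSuggestion) -> tuple[Response, float]:
    """Return the car's response and a haptic intensity between 0 and 1."""
    # Illegal or dangerous intentions: the car retakes control and
    # signals this through an intense haptic discussion.
    if s.breaks_law or s.endangers_people:
        return Response.OVERRIDE, 1.0

    # Time-critical situations: the car always decides for the driver.
    if s.seconds_to_decide < 2.0:
        return Response.OVERRIDE, 0.8

    # Otherwise the amount of dialogue scales with the impact on arrival
    # time and fuel consumption (the weights are arbitrary placeholders).
    impact = s.delay_minutes / 15.0 + s.extra_fuel_pct / 20.0
    if impact < 0.2:
        return Response.COMPLY, impact           # barely any dialogue
    return Response.NEGOTIATE, min(impact, 1.0)  # proportional push-back


if __name__ == "__main__":
    # A small detour: low impact, so the car complies with little dialogue.
    detour = DriverSuggestion(delay_minutes=2, extra_fuel_pct=1,
                              breaks_law=False, endangers_people=False,
                              seconds_to_decide=30)
    print(decide_response(detour))
```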
Stewart will always communicate the car's intentions and current actions to the driver. Stewart's communication is focused on building and maintaining trust. The driver can choose to be involved in the car's decisions through Stewart at all times, but also has the option to ignore Stewart and do something else. This is somewhat similar to wearing a watch: it is there when you need to know the time but unobtrusive when you don't.
So why would you want to control a car that drives itself? Learning to trust a (new) technology takes time. A feeling of control can help to build a mutually trustful relationship. Humans are unpredictable creatures that tend to change their minds frequently. For example, while driving you may want to make a detour or need a coffee break. These changes of plan can easily be communicated to the car through Stewart.
To make things absolutely clear: Stewart is an intermediary between a self-driving car and its driver, there to discuss the next move or action. It is in no way intended to control the car as in traditional driving. Through Stewart you may suggest a certain move, such as a right or left turn, but it is the car that will actually decide if and when to make that turn. This puts emotion back into driving, within the margins of what is safe.
Maybe... in the future...
The switch from the regular car to the autonomous car is going to be huge. The cultural meaning of the car will change. I believe the self-driving car has a lot to offer. However, I don't think an overnight transition is the best way of introducing this new, disruptive technology. Hopefully, Stewart's concept will contribute to a discussion about what a self-driving car should be like, for now...
A vision of self-driving cars
The autonomous car seems to be an inevitable innovation. Every car brand is working on self-driving cars, and R&D centers for autonomous cars are scattered across Silicon Valley. Volvo already claims it will have its first street-legal autonomous car in the showroom by 2017. This new iteration of the motorcar is probably going to be just as disruptive as the first car itself.
In the late nineteenth century, Karl Benz, founder of what would become Mercedes-Benz and the first person to legally operate an automobile on a public road, predicted a limited market for automobiles because of a lack of good drivers [1]. Since that maiden trip in 1885, the car has had a major social and cultural impact on our lives. Especially in the USA, the car represents freedom, character, and liberty. Cars have shaped society for over 100 years, and switching to self-driving cars is going to change the streets as we know them. But still the question arises: do we really want to give up the steering wheel? And, for that matter, the gearshift?
So, what defines an autonomous car? Basically, it is a car that drives itself from A to B and, generally, it drives far better than any human driver could. It can “think” faster, read information faster, and react faster when emergencies occur; it will never get lost; and, above all, it is devoid of emotions. A machine will never panic and will always solve things in a rational, calculated manner. In short: an autonomous car offers faster travel and higher safety levels. Studies show that even if only 5% of all traffic were autonomous, these cars could work together to form ‘trains’ that would travel much faster than we do now. Autonomous cars will ultimately communicate with each other to make roads safer.
If freedom, character, and liberty are the values a car stands for, what will the autonomous car give us? Will we lose our autonomy by allowing ourselves to be driven around? Or will we gain freedom as transportation becomes easier, faster, and safer for everybody? I’m pretty sure we will eventually prefer to give away control and let machines make decisions for us in return for a world that functions more efficiently.
Whether “more efficient” is also better depends on one’s perception. Machines that make rational decisions for us will save us time and energy, which are good things. I’m convinced this is the essence of personal transportation in the future.
However, the transition will be, and has to be, slow. To allow somebody (or something...) to decide for you, you first have to trust that person or device. We are talking about the dynamics of the relationship, and the levels of trust, between people and technology.
I am convinced that designs like Stewart can help us understand technology as well as improve our understanding of our own behavior. In that context, I very much like Donald Norman’s view on self-driving cars. Norman is the author of ‘The Design of Future Things’ and a consultant to BMW. He compares driving an autonomous car to riding a horse that already knows where to go: you either tightly guide the horse or you let the horse guide you. The same should be true of autonomous cars. Once we understand that we can trust technology, we may, eventually, also come to care about it.
Design is functional if it helps people understand technology by exposing values they can relate to and which, for better or worse, can be manipulated to reflect their preferences. In this way, machines can learn from people and people can learn from machines, eventually leading to a level of quality that reflects mutual care and respect.
We loved this piece both for what it is today - a captivating, beautifully executed, functional piece of hardware - and for the questions it forced us to ask about the future of interaction design as artificial intelligence and smart devices continue to evolve.
The idea of creating systems that act as mediators between human intelligence and machine intelligence is incredibly compelling, and we found Stewart to be a thoughtful, well-executed piece of speculative design.