Today's technology lets people interact with devices at all times, but it also makes them gaze down and lose touch with their surroundings while doing so.
This project explored alternatives to the current way of providing guidance while walking by designing in context rather than designing for context. This led to several iterations of prototypes that were tested with people in that context.
The findings are showcased in the form of a multimodal guidance system called UP, which provides reassurance every step of the way without requiring people to look down at a screen.
This project was my interaction design master's thesis at Umea Institute of Design. The aim was to explore beyond current interaction types and instead focus on the context in which devices are used and how we might design for it. I was intrigued by the way people interact with their devices while in motion and how it affects their daily lives. To narrow the scope, I placed the project in the context of guidance in an urban environment and asked: "How might we guide people while maintaining their ability to be aware of their surroundings?"
I approached the project through a human-centred design lens, which led me to conduct 18 in-depth interviews with a range of people (visually impaired and blind people, millennials, teenagers, Gen Z, mobility experts, etc.). This approach helped me understand that the greatest value technology brings to people on the go is guidance.
Immersive experiences, such as being guided blindfolded through Tate Modern, being guided by a guide dog, and testing existing digital navigation systems, gave me first-hand insight and led to several ideas that were later designed and tested in context with people. To conduct these tests, I used several prototyping techniques, verifying, discarding, or building on each idea through Wizard-of-Oz-style testing with people in the context of being guided while walking.
The project findings were showcased in the form of a multimodal guidance system (UP) that provides reassurance every step of the way without requiring people to look down at a screen. UP guides people with visual cues or haptic feedback while they are in motion, and lets them recall the guiding information with a simple tap gesture. The information is delivered in a way that eliminates the need to stop and interact to obtain route information, allowing people to stay focused on their surroundings while walking.