Giving seniors independence through conversational user interfaces in driverless cars.


Class | Service Design

Role | Design Lead and Researcher

Tools used | Sketch, Adobe Photoshop, Pen & Paper.

Duration | 9 weeks

Team size | 5


How can we design a preferred future for senior citizens in the age of driverless cars?


The Problem


Past a certain age, many seniors are no longer legally allowed to drive, and so lose a major part of their freedom.

Driving becomes such a risky affair for seniors because of:

  1. Loss of hearing with age
  2. Loss of clear eyesight with age
  3. Loss of memory with age
  4. Slower response time and joint stiffness

Most senior citizens find it extremely hard to keep up with rapidly advancing technology, and therefore struggle to adapt to new solutions. Autonomous cars offer a lot of value to individuals who are not independently mobile.



Our Solution

We designed Eva, a conversational interface built specifically for seniors. Eva is tailored to be part of an autonomous car owned by a senior.


Eva Features

volume sensiivity.png
smart drop and pick up.png

Why a Conversational User Interface for the Elderly?



Why a Conversational User Interface for Autonomous Vehicles?

Passive passenger.png
Loss of control.png

With the CUI's speech-based interaction and the growing need for AVs to provide a human touch in the absence of a driver, we saw a perfect match of problem and solution.

Venn diagram_2.png

How did we arrive at the solution?


Competitive Analysis

Our research began with an investigation of autonomous vehicles and existing conversational agents. Besides simple voice commands, there hadn’t yet been a CUI within an AV that was tailored to the unique problems in the AV rider experience: specifically, reassurance in the absence of control.


Feature Analysis

It was important for us to analyze the features used by existing CUIs, and how users receive those features, to understand what works best. Listed below are some of the most interesting insights from the study that we kept in mind while designing our own system.

Feature Analysis_1.png


In an autonomous vehicle, where the user is relieved of the typical responsibility of driving, should the conversational interface take on a less passive role, potentially as an entertainer, conversationalist, or source of content?


Why Scenarios?

In any conversation, context is everything. Building scenarios is an easy way to recreate that context.

Unlike a traditional interface, a CUI gives designers no visuals or on-screen elements to work with, which makes designing a solution challenging. Scenarios solve this problem by letting designers immerse themselves fully in the situation and imagine a natural dialogue flow.



Experience Prototyping

An experience prototype is a simulation of a scenario that anticipates how it will play out through the specific touchpoints involved. It allows designers to show and test a solution through the active participation of users.


Why Experience Prototyping?

The experience prototyping session helped expose a few possible further points of exploration for our design.

Image 6.jpg
Image 7.jpg

Important takeaways

Many of our interactions fell into the category of command-and-response; we were encouraged to think of ways to make the interaction more conversational. In response to this, we expanded our palette of possible interactions, thinking about other features or user intents that might lend themselves to more back-and-forth.

  1. Voice activation vs. Gesture activation
  2. There is no natural conversation between the user and the CUI; the CUI only repeats the user's commands.
  3. How will the CUI know which entrance/exit she'll take? How does it know when her errand is complete, especially if she is no longer buying more items?

Designing Eva

Designing a CUI is very different from designing a visual interface. Affordance, feedback, feedforward: everything that informs the user about how to use the system now has to be conceived as intangible, sound-based interaction. We designed this around a few key interactions:

  1. Pre-attentive prompts

  2. Recognition of Utterances

  3. Prediction of intention

  4. Feedback Response

  5. Session Termination

  6. Error Recovery

Dialogue Flow.png
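The key interactions above can be sketched as a single turn of a dialogue loop. This is a minimal, hypothetical illustration; every function name and phrase here is invented for the sketch and is not taken from the actual Eva system.

```python
# Illustrative intent table: recognized utterances mapped to feedback responses.
INTENTS = {
    "take me home": "Sure, heading home now.",  # feedback response
    "open the trunk": "Opening the trunk.",
    "goodbye": "Goodbye!",                      # session termination
}

def eva_turn(utterance: str) -> str:
    """One turn of the loop: recognize an utterance, predict the intent,
    and return a feedback response, or recover from an error."""
    recognized = utterance.lower().strip()      # recognition of utterances
    reply = INTENTS.get(recognized)             # prediction of intention
    if reply is None:
        # Error + recovery: apologize and prompt the rider to try again.
        return "Sorry, I didn't catch that. Could you say it again?"
    return reply
```

In a real system, recognition and intent prediction would be handled by a speech platform rather than a lookup table; the sketch only shows how the stages connect.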

Aspects kept in mind while designing Eva

Effective Values for Seniors


The CUI should make the senior feel as though they are in control, to reinforce the feeling of independence we aspire to give our riders.


The CUI should avoid odd, ambiguous, or "trendy" phrases that could confuse a senior.


The CUI may need to repeat itself frequently or ask for clarification. It should be apologetic and patient in these instances.


The CUI should respect that the senior may default to pleasantries and non-intentful verbiage in conversation, and should reciprocate.


The CUI should have energy but not act in a juvenile manner.



Since CUIs lack any form of visual interface, it was hard to provide feedback to the user without making the system constantly "talk" to announce its state. We designed earcons, a set of pleasant system sounds that convey the state of the system to the user.

Error earcon when the passenger's shop request is misunderstood.

Startup earcon to indicate when Eva is "awake" and wants to start talking.

Trunk open earcon.

Volume level increasing earcon.
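The earcon set amounts to a simple mapping from system states to sounds. The sketch below is purely illustrative; the state names and filenames are assumptions, not part of the actual build.

```python
# Hypothetical mapping of system states to earcon sound files.
EARCONS = {
    "error": "earcon_error.wav",        # shop request misunderstood
    "startup": "earcon_startup.wav",    # Eva is awake and ready to talk
    "trunk_open": "earcon_trunk.wav",
    "volume_up": "earcon_volume.wav",
}

def earcon_for(state: str) -> str:
    # Unknown states fall back to the error sound rather than silence,
    # so the rider always hears *some* acknowledgement.
    return EARCONS.get(state, EARCONS["error"])
```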



The diagramming process helped us refine the dialogue into a realistic interplay of intents and entities that could ultimately be mapped to a functional CUI.

It also exposed the types of data the system back end would need to store to achieve realistic, context-aware conversations. For example, the system might store information about a specific rider's physical impairments when deciding whether to drop them off or pick them up at the entrance of the store.
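As a rough sketch of the kind of rider profile such a back end might store, consider the example below. The fields and the pickup rule are assumptions made for illustration, not a specification of the real system.

```python
from dataclasses import dataclass

@dataclass
class RiderProfile:
    """Hypothetical per-rider data the back end could keep for context."""
    name: str
    mobility_impaired: bool
    hearing_impaired: bool  # could also drive earcon volume defaults

def pickup_point(rider: RiderProfile) -> str:
    # A mobility-impaired rider is dropped off and picked up at the
    # store entrance; otherwise the car can wait in a regular spot.
    return "store entrance" if rider.mobility_impaired else "parking lot"
```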


Next Steps

While we did the research needed to narrow the scope of the project to autonomous vehicles and to choose the right users, a much greater amount of work will be required to actually get the CUI up and running. We would like to conduct focused usability tests to see how recognition errors affect users' behaviour, and how the design will need to evolve in response.