October 11, 2019

I hear voices: Explorations of multidevice experiences with conversational assistants

Multiscreen experiences are powerful; we now divide our time among devices based on context. At the same time, conversational assistants have evolved to be quite usable. But we're just beginning to see how one assistant might work across an ecosystem of devices. Karen Kaushansky explores the future of designing with voice across multiple devices.

Talk Title I hear voices: Explorations of multidevice experiences with conversational assistants
Speakers Karen Kaushansky (Zoox)
Conference O’Reilly Design Conference
Conf Tag Design the Future
Location San Francisco, California
Date January 20-22, 2016
URL Talk Page

Multiscreen experiences are powerful. Starting a task on one device and finishing it on another, or having devices work together to create a unique experience, are just some of the ways we divide our time among devices based on context. (Apple’s Handoff and Amazon’s Whispersync are two notable examples.) Conversational assistants such as Siri, Alexa, and Cortana have evolved over the past few years; their ability to hold natural conversations with users and deliver real value keeps improving. But what happens when one of these conversational agents exists on your phone, in your watch, in your car, and in your living room? When you say “Hey Siri,” do all your devices answer? Karen Kaushansky reviews the different approaches to designing conversational agents and explains why design matters. After examining the shortcomings of today’s assistants, Karen explores new paradigms for using speech recognition across devices and starts to define multidevice experiences with conversational assistants.