February 19, 2020

634 words 3 mins read

Executive Briefing: How the growth of voice-based AI stands to blur the lines of big data

Voice-based AI continues to gain popularity among customers, businesses, and brands, but it's important to understand that, while it puts a slew of new data at our disposal, the technology is still in its infancy. Andreas Kaltenbrunner examines three ways voice assistants will make big data analytics more complex and the various steps you can take to manage this in your company.

Talk Title: Executive Briefing: How the growth of voice-based AI stands to blur the lines of big data
Speakers: Andreas Kaltenbrunner (NTENT)
Conference: O’Reilly Artificial Intelligence Conference
Conf Tag: Put AI to Work
Location: London, United Kingdom
Date: October 15-17, 2019
URL: Talk Page
Slides: Talk Slides
Video:

As voice-based AI continues to gain popularity as a way for customers, businesses, and brands to operate more efficiently, it's important to understand that, while it puts a slew of new data at our disposal, the technology is still in its infancy. Big data that stems from user interaction is usually personal and accurate, comes from a single device, and most of the time has a clear context. Like any newborn, a voice assistant cannot yet communicate reliably or make full sense of its surroundings, so mistakes are inevitable. Andreas Kaltenbrunner examines three significant ways voice-based virtual assistants will make big data analytics more complex and explores the steps you can take to manage the added complexity in your company.

First, usage data is noisier, because understanding voice means understanding each individual speaker. Accents, variations in tone of voice, slang, filler utterances, and similar factors all lower the quality of the speech-to-text conversion.

Second, virtual assistants are present in different social contexts (family, work, social gatherings) where multiple people are speaking and multiple virtual assistants may be processing the same interactions at the same time. Assistants must be able to recognize different users, which makes usage data more complex and leads to increased redundancy as the same conversations are captured several times.

Third, several conversations happening at once add further complexity. The technology must recognize overlapping conversations and split them in a meaningful way, which makes conversation data more difficult to interpret and the correct context harder to determine. Adding to the challenge, the big companies haven't agreed on a standard for direct communication between voice assistants, so many logged conversations will be between assistants (e.g., Google talking to Alexa) rather than humans, limiting what those logs can teach you about real human behavior.

Andreas offers steps you can take to manage the added complexity:

- Don't cut corners to save money on the quality of speech-to-text conversion, particularly in the languages your best customers use.
- Devise a protocol in which the assistant learns to recognize each user, at least those who will use it regularly; this may require the user to repeat some key phrases after a few interactions. Recognizing users also prevents unauthorized people from requesting unwanted actions (e.g., your child buying something delivered to their best friend) and helps distinguish human voices from nonhuman ones, such as other assistants (a minimal sketch of such an enrollment check follows this summary).
- Devise a strategy in which assistants of the same brand directly agree on who logs what, or at least agree to mark which conversations might be redundant (a small redundancy-flagging sketch also follows).
- Work within your industry to agree on standards for direct communication between assistants and standard ways to share anonymous usage data that may alleviate the complexity (e.g., like the IoT Alliance).

Like any good assistant, the purpose of a voice-based virtual assistant is to help businesses operate more efficiently and generate more revenue. But innovation demands education; the technology must go through an inevitable learning process. While its initial effects stand to mess a bit with big data, when the complexity is managed correctly the reward will surely outweigh the effort.
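To make the user-recognition step more concrete, here is a minimal Python sketch of an enrollment-and-verification flow. It assumes some upstream model has already turned each utterance into a fixed-length voice embedding (shown here as a plain list of floats), and every name in it (VoiceProfileStore, enroll, identify, authorize_purchase, the 0.8 similarity threshold) is illustrative rather than part of any real assistant SDK.

```python
# Sketch only: per-user enrollment and verification for a voice assistant,
# assuming an upstream model produces a fixed-length embedding per utterance.
from __future__ import annotations

from dataclasses import dataclass, field
from math import sqrt

Embedding = list[float]


def cosine_similarity(a: Embedding, b: Embedding) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


@dataclass
class VoiceProfileStore:
    """Keeps one averaged embedding per enrolled user."""
    profiles: dict[str, Embedding] = field(default_factory=dict)
    threshold: float = 0.8  # assumed similarity cutoff; tune per deployment

    def enroll(self, user_id: str, key_phrase_embeddings: list[Embedding]) -> None:
        # Average the embeddings of the key phrases the user repeated during
        # setup; this average becomes the user's reference profile.
        dims = len(key_phrase_embeddings[0])
        avg = [sum(e[i] for e in key_phrase_embeddings) / len(key_phrase_embeddings)
               for i in range(dims)]
        self.profiles[user_id] = avg

    def identify(self, utterance_embedding: Embedding) -> str | None:
        # Return the best-matching enrolled user, or None for unknown voices
        # (guests, children who never enrolled, or another assistant speaking).
        best_user, best_score = None, 0.0
        for user_id, profile in self.profiles.items():
            score = cosine_similarity(utterance_embedding, profile)
            if score > best_score:
                best_user, best_score = user_id, score
        return best_user if best_score >= self.threshold else None


def authorize_purchase(store: VoiceProfileStore,
                       utterance_embedding: Embedding,
                       allowed_buyers: set[str]) -> bool:
    """Only a verified, explicitly allowed user may trigger a purchase."""
    speaker = store.identify(utterance_embedding)
    return speaker is not None and speaker in allowed_buyers


# Usage: enroll a parent from three repeated key phrases, then gate a purchase.
store = VoiceProfileStore()
store.enroll("parent", [[0.90, 0.10, 0.20], [0.88, 0.12, 0.18], [0.92, 0.09, 0.22]])
print(authorize_purchase(store, [0.90, 0.10, 0.20], allowed_buyers={"parent"}))  # True
print(authorize_purchase(store, [0.10, 0.90, 0.40], allowed_buyers={"parent"}))  # False
```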
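And for the logging step, a small sketch of how redundant captures might be flagged, assuming each assistant uploads records with a device id, a timestamp, and a transcript to shared storage. The transcript fingerprint and the two-minute window are illustrative choices, not any agreed standard between vendors.

```python
# Sketch only: flag conversation logs that look like duplicate captures of the
# same conversation by two assistants of the same brand.
import hashlib
from datetime import datetime, timedelta


def fingerprint(transcript: str) -> str:
    # Normalize aggressively so near-identical captures of the same
    # conversation from two devices collide on the same fingerprint.
    normalized = " ".join(transcript.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()


def mark_redundant(records: list[dict],
                   window: timedelta = timedelta(minutes=2)) -> list[dict]:
    """Flag records whose fingerprint was already seen within the time window."""
    seen: dict[str, datetime] = {}
    out = []
    for rec in sorted(records, key=lambda r: r["timestamp"]):
        fp = fingerprint(rec["transcript"])
        prev = seen.get(fp)
        rec = {**rec, "redundant": prev is not None and rec["timestamp"] - prev <= window}
        if not rec["redundant"]:
            seen[fp] = rec["timestamp"]
        out.append(rec)
    return out


# Usage: two devices in the same room hear the same request seconds apart.
logs = [
    {"device_id": "kitchen", "timestamp": datetime(2019, 10, 15, 9, 0, 0),
     "transcript": "Add milk to the shopping list"},
    {"device_id": "living-room", "timestamp": datetime(2019, 10, 15, 9, 0, 3),
     "transcript": "add milk to the shopping list"},
]
for rec in mark_redundant(logs):
    print(rec["device_id"], rec["redundant"])  # kitchen False, living-room True
```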
