January 25, 2020

245 words 2 mins read

Deep learning on audio in Azure to detect sounds in real time


In this auditory world, the human brain effortlessly processes and reacts to a wide variety of sounds. While many of us take this ability for granted, over 360 million people worldwide are deaf or hard of hearing. Swetha Machanavajhala and Xiaoyong Zhu explain how to make the auditory world more inclusive, and meet the growing demand in other sectors, by applying deep learning on audio in Azure.

Talk Title Deep learning on audio in Azure to detect sounds in real time
Speakers Swetha Machanavajhala (Microsoft), Xiaoyong Zhu (Microsoft)
Conference Strata Data Conference
Conf Tag Make Data Work
Location New York, New York
Date September 11-13, 2018
URL Talk Page
Slides Talk Slides

There is great demand for machine learning and artificial intelligence applications in the audio domain, including home surveillance (detecting breaking glass and alarm events), security (detecting explosions and gunshots), self-driving cars (improving safety through sound event detection), predictive maintenance (predicting machine failures from vibrations in the manufacturing sector), emphasizing emotions in real-time translation, and music synthesis. Swetha Machanavajhala and Xiaoyong Zhu explain how to make the auditory world inclusive and meet this demand by applying deep learning on audio in Azure. Swetha and Xiaoyong detail how to train a deep learning model on Microsoft Azure for sound event detection using an urban sounds dataset and offer an overview of working with audio data, along with references to Data Science Virtual Machine (DSVM) notebooks.
