December 30, 2019

224 words 2 mins read

Going deep: A study in migrating existing analytics to deep learning

In the wake of the financial crisis, Thomson Reuters released a novel credit risk model that assesses the default risk of publicly traded companies by quantitatively analyzing text. Six years later, the company is updating it to use deep learning. Ryan Roser discusses the benefits and trade-offs of transitioning existing analytics to deep learning.

Talk Title: Going deep: A study in migrating existing analytics to deep learning
Speakers: Ryan Roser (Refinitiv)
Conference: O’Reilly Open Source Convention
Conf Tag: Put open source to work
Location: Portland, Oregon
Date: July 16-19, 2018
URL: Talk Page
Slides: Talk Slides
Video:

The San Francisco Innovation Lab at Thomson Reuters creates quantitative models for investors. Following the financial crisis, it developed the first commercially available credit risk model to measure corporate financial health by systematically evaluating the language used in news, conference call transcripts, financial filings, and analyst research. Six years later, the lab is updating that model to take advantage of deep learning. While deep learning promises breakthroughs in modeling complex data and powering new products, when does it make sense to migrate existing analytics to deep learning, what advantages does it offer in practice, and what are the trade-offs? Ryan Roser answers these questions and describes the process of updating an existing quantitative model to use deep learning, the kind of change sketched below. Topics include:
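As a rough illustration of the kind of migration the talk examines, the sketch below swaps a linear bag-of-words text scorer for a small neural network trained on the same features. This is a minimal, hypothetical example: the toy sentences, labels, and model choices are all assumptions for the sketch, not Refinitiv's actual analytics.

```python
# Hypothetical sketch (not Refinitiv's model): contrast a classic linear
# text scorer with a small neural network on the same bag-of-words features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented stand-ins for filing/transcript language; 1 = distressed tone.
texts = [
    "liquidity remains strong and covenants are comfortably met",
    "management warns of a possible covenant breach and going-concern doubt",
    "the company reaffirmed full-year guidance on robust cash flow",
    "auditors flagged substantial doubt about its ability to continue operating",
]
labels = [0, 1, 0, 1]

# Existing analytic: TF-IDF n-grams feeding a linear model.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
baseline.fit(texts, labels)

# Migration stand-in: the same features feeding a small neural network,
# so old and new models can be compared on identical inputs.
neural = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
neural.fit(texts, labels)

sample = ["rising default risk as cash reserves dwindle"]
print("baseline:", baseline.predict_proba(sample))
print("neural:  ", neural.predict_proba(sample))
```

Holding the feature pipeline fixed while swapping the estimator is one low-risk way to compare the incumbent model and its deep learning replacement on identical inputs before committing to the migration.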
