January 27, 2020


Model as a service for real-time decisioning

Hosting models and productionizing them is a pain point. ML models used for real-time processing require a defined workflow that gives data scientists the agility to make seamless, self-service deployments to production. Niraj Tank and Sumit Daryani detail open source technologies for building a generic service-based approach to serving ML decisioning and achieving operational excellence.

Talk Title Model as a service for real-time decisioning
Speakers Niraj Tank (Capital One), Sumit Daryani (Capital One)
Conference O’Reilly Open Source Software Conference
Conf Tag Fueling innovative software
Location Portland, Oregon
Date July 15-18, 2019
URL Talk Page
Slides Talk Slides

Hosting models and productionizing them is a pain point. Let's fix that. Imagine a stream processing platform that leverages ML models and requires real-time decisions. Most solutions tightly couple the ML model to the use case, which may not be the most efficient way for a data scientist to update or roll back a model. With model as a service, disrupting the flow and relying on engineering teams to deploy, test, and promote models is a thing of the past. It's time to build a decoupled service-based architecture that upholds engineering best practices and delivers gains in model management and deployment. Other benefits include empowering data scientists by supporting patterns such as A/B testing, multi-armed bandits, and ensemble modeling.

Niraj Tank and Sumit Daryani demonstrate their work with a reference architecture implementation for building the set of microservices, and lay down, step by step, the critical aspects of building a well-managed ML model deployment pipeline that requires validation, versioning, auditing, and model risk governance. They discuss the benefits of breaking up a monolithic ML use case with a service-based approach consisting of features, models, and rules. Join in to gain insights into the technology behind the scenes that accepts a raw serialized model built with popular libraries like H2O, scikit-learn, or TensorFlow, or even a plain Python source model, and serves it via REST/gRPC, which makes it easy for business applications and services that need predictions to integrate with the models.
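The deployment workflow described above, where data scientists promote or roll back serialized model versions without an engineering redeploy, can be sketched as a minimal in-memory registry. This is an illustrative toy, not the speakers' implementation: the `ModelRegistry` and `ThresholdModel` names are hypothetical, and a real service would back the registry with durable storage and expose `predict` over REST or gRPC.

```python
import pickle


class ModelRegistry:
    """Toy registry: versioned serialized models with promote/rollback."""

    def __init__(self):
        self._versions = {}  # version id -> serialized model bytes
        self._history = []   # promotion order; last entry is the active version

    def register(self, version, model):
        # Persist a serialized copy, as a deployment pipeline would.
        self._versions[version] = pickle.dumps(model)

    def promote(self, version):
        if version not in self._versions:
            raise KeyError(f"unknown model version: {version}")
        self._history.append(version)

    def rollback(self):
        # Revert to the previously promoted version without redeploying.
        if len(self._history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._history.pop()

    @property
    def active_version(self):
        return self._history[-1] if self._history else None

    def predict(self, features):
        # Deserialize the active model and score the request.
        model = pickle.loads(self._versions[self._history[-1]])
        return model.predict(features)


class ThresholdModel:
    """Stand-in for a real model: anything with a predict() method works."""

    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, features):
        return [1 if x > self.threshold else 0 for x in features]


registry = ModelRegistry()
registry.register("v1", ThresholdModel(0.5))
registry.register("v2", ThresholdModel(0.8))
registry.promote("v1")
registry.promote("v2")  # v2 becomes active
registry.rollback()     # back to v1, self-service, no engineering involved
print(registry.active_version)       # v1
print(registry.predict([0.6, 0.4]))  # [1, 0]
```

Keeping promotion history separate from the model artifacts is what makes rollback cheap here; the same split also supports A/B testing, since a router could pick among several promoted versions per request.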
