Automating ML model training and deployments via metadata-driven data, infrastructure, feature engineering, and model management
Mumin Ransom gives an overview of the data management and privacy challenges around automating ML model (re)deployments and stream-based inference at scale.
| Talk Title | Automating ML model training and deployments via metadata-driven data, infrastructure, feature engineering, and model management |
| --- | --- |
| Speakers | Mumin Ransom (Comcast), Nick Pinckernell (Comcast) |
| Conference | Strata Data Conference |
| Conf Tag | Make Data Work |
| Location | New York, New York |
| Date | September 24-26, 2019 |
Comcast developed a framework for operationalizing ML models. It covers the full ML lifecycle, from data ingestion, feature engineering, model training, and model deployment to model evaluation at runtime, and processes roughly 3 billion predictions per day. The system supports both proactive inference (triggered by event combinations on a stream) and reactive inference (on demand). Mumin Ransom explores how Comcast solved the "feature store" problem, namely managing a historical feature store for model training alongside an online feature store of current features to support model inference in proactive (on event arrival) or reactive (on REST endpoint invocation) mode.

Automating and horizontally scaling the platform to train and operationalize ML models that produce billions of predictions per day is complex. The challenges Comcast faced included bottlenecks and technology limits in data management, feature engineering, and rapid model (re)training and (re)deployment. The metadata-driven data, infrastructure, feature-engineering, model-training, and inference pipelines support processes such as consistent feature engineering, applied both to streaming data for model inference and to data at rest for training and validation.

Mumin details how Comcast handles feature-rich raw data that contains potentially sensitive information, including PII. A solution such as this must allow ML model developers to access this information for feature engineering while still ensuring that customer privacy is protected. Mumin also outlines how the framework manages this using a combination of methods such as encryption, removal, anonymization, and aggregation to protect privacy without compromising model efficacy.
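The dual feature store described above can be illustrated with a toy sketch: an append-only historical log serves offline model training, while a latest-value map serves proactive or reactive inference. This is a minimal illustration of the concept, not Comcast's implementation; all names here are hypothetical.

```python
from datetime import datetime, timezone

class FeatureStore:
    """Toy dual feature store: a historical log for model training and
    an online latest-value view for inference. Illustrative only."""

    def __init__(self):
        self.history = []   # append-only: (entity_id, feature_name, value, ts)
        self.online = {}    # (entity_id, feature_name) -> latest value

    def write(self, entity_id, name, value, ts=None):
        ts = ts or datetime.now(timezone.utc)
        self.history.append((entity_id, name, value, ts))  # kept for training
        self.online[(entity_id, name)] = value             # overwritten for serving

    def training_rows(self, name):
        """All historical values of a feature, for offline training/validation."""
        return [(e, v, t) for e, n, v, t in self.history if n == name]

    def current(self, entity_id, name):
        """Latest value, served to proactive or reactive inference."""
        return self.online.get((entity_id, name))
```

Writing the same feature twice leaves both rows in the historical store (so training sees the full timeline) while the online store keeps only the most recent value for low-latency lookups.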
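The privacy methods mentioned (encryption, removal, anonymization, aggregation) can be sketched generically: drop fields never needed for modeling, and replace direct identifiers with salted hashes so records remain joinable for feature engineering without exposing the raw values. This is a hedged illustration of the general techniques, not Comcast's actual pipeline; the field names and salt handling are hypothetical.

```python
import hashlib

SALT = b"example-salt"  # hypothetical; in practice a managed secret, not a constant

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted hash: records can still be
    joined on the hash for feature engineering, but the raw PII is not exposed."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def mask_record(record: dict,
                pii_fields=("email", "account_id"),
                drop_fields=("ssn",)) -> dict:
    """Apply removal and pseudonymization per field; pass other features through."""
    out = {}
    for key, value in record.items():
        if key in drop_fields:
            continue                            # removal: never needed for modeling
        elif key in pii_fields:
            out[key] = pseudonymize(str(value)) # anonymization via salted hash
        else:
            out[key] = value                    # non-sensitive feature passes through
    return out
```

Because the hash is deterministic for a given salt, the same customer maps to the same pseudonym across datasets, which preserves joins for feature engineering while protecting the underlying identifier.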