October 28, 2019

313 words 2 mins read

Atom smashing using machine learning at CERN

Talk Title Atom smashing using machine learning at CERN
Speakers Siddha Ganju (NVIDIA)
Conference Strata + Hadoop World
Conf Tag Big Data Expo
Location San Jose, California
Date March 29-31, 2016
URL Talk Page
Slides Talk Slides
Video

Siddha Ganju explains how CERN uses machine-learning models to predict which datasets will become popular over time. This helps replicate the most heavily accessed datasets, improving the efficiency of physics analysis in CMS. Analyzing this data yields useful information about the underlying physical processes. Reproducibility is essential so that any process can be simulated at different times. Some processes are more popular than others and therefore need to be made easily accessible. Users access this data from replicas stored in specified places, but creating numerous replicas of every dataset is not feasible, so predicting which datasets might become popular is necessary.

Siddha explains how CERN framed the problem as binary classification: each dataset is labeled popular (1 / TRUE) or unpopular (0 / FALSE). She illustrates this with toy data, since the actual data cannot be disclosed. Once datasets were labeled, CERN still had to decide which machine-learning algorithm suits the task best. Three algorithms were employed: naive Bayes, stochastic gradient descent, and random forest. These models were combined into an ensemble, and each was evaluated by its true positive, true negative, false positive, and false negative counts. Siddha details how this process enables better data analysis, leading to parallel, real-time processing of the distributed data that is abundantly available in CMS.
