Tuesday
16 Jul/24
14:30 - 15:30 (Europe/Zurich)

Machine Learning for Trigger and Data Acquisition

Where:  

31/3-004 at CERN

Abstract

The LHC produces billions of collisions per second during operation, equivalent to Pb/s of raw data for the largest experiments, CMS and ATLAS. It would be impossible to read out, process, or store all of this data. A multi-tier trigger system is therefore used to select only a tiny fraction of the most interesting collisions for further analysis, with high efficiency and a low false-positive rate. Because of the throughput and latency constraints, the first level of this trigger must typically make a selection decision within a few microseconds. Meanwhile, machine learning (ML) is a rich and exciting field of research, constantly producing new and more powerful techniques, and hardware developers are supporting its growth with faster, more parallel processors and devices designed specifically for ML. In light of this, deploying ML in the real-time processing for trigger and data acquisition is becoming increasingly feasible and relevant. As the LHC is upgraded to around a factor of five higher instantaneous luminosity over the next decade, this ‘fast ML’ at the edge will undoubtedly be required to reduce and filter the vast amounts of data.

This lecture begins with a recap of neural networks (NNs) and the tools and frameworks used to implement them, before diving into a multitude of examples where ML is already being used and/or developed at the LHC experiments for ultra-low-latency event selection, fast reconstruction, anomaly detection, and data reduction/filtering. The implementation of “real-time” machine learning inference on GPU and FPGA devices will be explored, and the latest tools and tricks for optimisation in this space, such as high-level synthesis, quantisation, and knowledge distillation, will be discussed.
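As a rough illustration of the kind of workflow the abstract alludes to, the sketch below trains a small quantised network with QKeras and converts it to an FPGA firmware project with hls4ml. It is a minimal sketch only: the layer sizes, bit widths, output directory, and FPGA part number are illustrative assumptions, not details taken from the lecture.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Activation
    from qkeras import QDense, QActivation, quantized_bits, quantized_relu
    import hls4ml

    # A small fully connected classifier with 6-bit weights and activations
    # (layer sizes and bit widths are illustrative assumptions).
    model = Sequential([
        QDense(32, input_shape=(16,),
               kernel_quantizer=quantized_bits(6, 0, alpha=1),
               bias_quantizer=quantized_bits(6, 0, alpha=1)),
        QActivation(quantized_relu(6)),
        QDense(5,
               kernel_quantizer=quantized_bits(6, 0, alpha=1),
               bias_quantizer=quantized_bits(6, 0, alpha=1)),
        Activation('softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    # Quantisation-aware training then proceeds with the usual Keras fit()
    # on your own data, e.g. model.fit(X_train, y_train, epochs=10).

    # Convert the trained model into a high-level-synthesis project for an FPGA.
    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir='hls_prj',            # illustrative output directory
        part='xcvu13p-flga2577-2-e')     # illustrative FPGA part number
    hls_model.compile()                  # C simulation of the generated firmware
    # hls_model.build(synth=True)        # runs high-level synthesis (needs the Xilinx HLS tools)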

Bio

Thomas is an applied physicist in the CMS group at CERN, where he applies machine learning (ML) solutions to ultra-low-latency (microsecond) data processing on field-programmable gate arrays (FPGAs). Thomas has worked on the CMS experiment since 2013, specialising in data acquisition, fast particle selection, and event reconstruction in FPGAs. He is the CERN Openlab CTO for AI and Edge devices, and has worked on multiple Openlab projects with Micron Technology on real-time event selection using deep-learning accelerators and on Compute Express Link (CXL)-based shared memory.

Thomas joined the CERN EP-CMD group in 2019. In 2018, Thomas graduated with his PhD in particle physics from Imperial College London, UK, where he developed a novel FPGA-based online particle track finder for CMS at the High Luminosity LHC, earning him the CMS Thesis Award for that year. Prior to that, he obtained his Master's and Bachelor's degrees in physics with theoretical physics from the same institution.