LHC Season 2: CERN computing ready for data torrent

The CERN Data Centre has been upgraded to process the huge amounts of data generated by collisions during Run 2 of the LHC, which starts this week

Racks of servers at the CERN Data Centre (Image: CERN)

This week, the experiments at the Large Hadron Collider (LHC) will start taking data at the new energy frontier of 13 teraelectronvolts (TeV) - nearly double the collision energy of the LHC's first three-year run. These collisions, which will occur up to 1 billion times every second, will send showers of particles through the detectors.

With every second of run-time, gigabytes of data will come pouring into the CERN Data Centre to be stored, sorted and shared with physicists worldwide. To cope with this massive influx of Run 2 data, the CERN computing teams focused on three areas: speed, capacity and reliability.

"During Run 1, we were storing 1 gigabyte-per-second, with the occasional peak of 6 gigabytes-per-second," says Alberto Pace, who leads the Data and Storage Services group within the IT Department. "For Run 2, what was once our "peak" will now be considered average, and we believe we could even go up to 10 gigabytes-per-second if needed."

At CERN, most of the data is archived on magnetic tape using the CERN Advanced Storage system (CASTOR); the rest is stored in the EOS disk pool, a system optimized for fast analysis access by many concurrent users. Magnetic tape may seem an old-fashioned technology, but it is in fact a robust storage medium, able to hold huge volumes of data, which makes it ideal for long-term preservation. The computing teams have improved the CASTOR software so that CERN's tape drives and libraries are used more efficiently, without lag times or delays, increasing the rate at which data can be moved to tape and read back.
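The division of labour between disk and tape can be pictured with a small toy model. The sketch below is purely illustrative: the class, method names and migration threshold are hypothetical and bear no relation to the real CASTOR or EOS interfaces.

```python
from dataclasses import dataclass, field
import time

# Toy model of a two-tier store: a fast disk pool for data under active
# analysis, and a tape archive for long-term preservation. All names and
# thresholds here are hypothetical - this is NOT the CASTOR or EOS API.

@dataclass
class TieredStore:
    disk: dict = field(default_factory=dict)   # hot tier: fast, limited
    tape: dict = field(default_factory=dict)   # cold tier: vast, slower
    max_age_s: float = 3600.0                  # demo threshold, not CERN's

    def write(self, name: str, payload: bytes) -> None:
        # New data lands on disk first, so analysts can read it quickly.
        self.disk[name] = (time.time(), payload)

    def migrate(self) -> None:
        # Data older than the threshold moves to tape, and the disk copy
        # is dropped to free space. Real systems use far richer policies.
        now = time.time()
        for name in [n for n, (t, _) in self.disk.items()
                     if now - t > self.max_age_s]:
            _, payload = self.disk.pop(name)
            self.tape[name] = payload

    def read(self, name: str) -> bytes:
        if name in self.disk:                  # fast path: disk pool
            return self.disk[name][1]
        return self.tape[name]                 # slower "recall" from tape
```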

Reducing the risk of data loss - and the heavy storage burden that comes with it - was another challenge to address for Run 2. The computing teams introduced a data 'chunking' option in the EOS disk storage system, which splits data into segments and enables recently acquired data to be kept on disk for quick access. "This allowed our total online data capacity to be increased significantly," Pace continues. "We have 140 petabytes of raw disk space available for Run 2 data, divided between the CERN Data Centre and the Wigner Data Centre in Budapest, Hungary. This translates to about 60 petabytes of storage, including back-up files."
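Those two figures imply an overall redundancy factor, a small piece of arithmetic that follows directly from the numbers quoted above:

```python
raw_pb = 140      # raw disk space quoted for Run 2, in petabytes
usable_pb = 60    # usable storage quoted, including back-up files

overhead = raw_pb / usable_pb
print(f"Implied redundancy factor: ~{overhead:.1f}x")
# ~2.3x: each usable petabyte is backed by roughly 2.3 petabytes of raw
# disk, covering replicas/parity chunks and back-up copies.
```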

140 petabytes (which is equal to 140 million gigabytes) is a very large number indeed - equivalent to over a millennium of full HD-quality movies.
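That comparison holds up to rough arithmetic. The bitrate below is an assumed Blu-ray-class figure for full-HD video, not a number from the article:

```python
# Rough check of the "millennium of full HD movies" comparison.
# Assumption (ours, not the article's): full-HD video at a
# Blu-ray-class bitrate of ~16 GB per hour (~35 Mbit/s).

TOTAL_GB = 140e6          # 140 petabytes expressed in gigabytes
GB_PER_HOUR = 16          # assumed full-HD bitrate

hours = TOTAL_GB / GB_PER_HOUR
years = hours / (24 * 365)
print(f"{hours:.2e} hours ~ {years:,.0f} years of video")
# -> ~8.75e6 hours, roughly 1,000 years: over a millennium.
```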

In addition to the regular "replication" approach - whereby a duplicate copy of all data is kept - experiments now have the option to scatter data across multiple disks. This "chunking" approach breaks the data into pieces; reconstruction algorithms ensure that content is not lost even if multiple disks fail. This not only reduces the probability of data loss, but also halves the space needed for back-up storage. Finally, the EOS system has been further improved, with the goal of more than 99.5% availability for the duration of Run 2.
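The principle behind chunking with reconstruction can be shown with the simplest possible scheme: a single XOR parity chunk, which lets any one lost chunk be rebuilt from the survivors. Production systems use more powerful erasure codes that survive multiple simultaneous disk failures; the sketch below illustrates the idea only and is not EOS's actual implementation.

```python
from functools import reduce

# Minimal sketch of chunking-with-parity. A block is split into data
# chunks spread over different disks, plus one XOR parity chunk. Any
# single lost chunk can then be rebuilt. Production erasure codes
# (e.g. Reed-Solomon) extend this to survive multiple failures.

def split(block: bytes, n: int) -> list[bytes]:
    """Split a block into n equal-size data chunks (zero-padded)."""
    size = -(-len(block) // n)  # ceiling division
    block = block.ljust(n * size, b"\0")
    return [block[i * size:(i + 1) * size] for i in range(n)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block: bytes, n: int = 4) -> list[bytes]:
    chunks = split(block, n)
    parity = reduce(xor, chunks)        # XOR of all data chunks
    return chunks + [parity]            # n + 1 chunks, one per disk

def rebuild(chunks: list[bytes | None]) -> list[bytes]:
    """Recover a single missing chunk by XOR-ing all surviving ones."""
    missing = chunks.index(None)
    survivors = [c for c in chunks if c is not None]
    chunks[missing] = reduce(xor, survivors)
    return chunks

# Demo: lose one "disk" and reconstruct its contents.
stored = encode(b"collision event data", n=4)
stored[2] = None                        # simulate a failed disk
recovered = rebuild(stored)
assert b"".join(recovered[:4]).rstrip(b"\0") == b"collision event data"
```

Note the storage economics: with n data chunks plus one parity chunk, the overhead is only 1/n of the data size, compared with the 100% overhead of keeping a full duplicate copy - which is how chunking can halve the space needed for back-up storage.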

From faster storage to new data-protection schemes, CERN is well prepared for the challenges of Run 2.