The Grid: Software, middleware, hardware


Find out how physics software, middleware, hardware and networking all contribute to the Worldwide LHC Computing Grid

The four main component layers of the Worldwide LHC Computing Grid (WLCG) are physics software, middleware, hardware and networking.

Physics software

WLCG computer centres are made up of multi-petabyte storage systems and computing clusters with thousands of nodes connected by high-speed networks. These centres need software tools that go beyond what is commercially available to satisfy the changing demands of the high-energy-physics community.

The physics software on the Grid includes programs such as ROOT, a set of object-oriented core libraries used by all the LHC experiments; POOL, a framework that provides storage for event data; and other software for modelling the generation, propagation and interactions of elementary particles. Grid projects supply much of the remaining software, which manages data distribution and access, job submission, and user authentication and authorization; this software is known collectively as "middleware".
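
To give a concrete flavour of this layer, the short C++ sketch below uses ROOT to book a histogram, fill it with pseudo-random values and write it to a file. The file name, histogram name and distribution are purely illustrative and are not taken from any experiment's real workflow.

    // Minimal ROOT sketch: book a histogram, fill it with pseudo-random
    // values and write it to a file in ROOT's own binary format.
    #include "TFile.h"
    #include "TH1F.h"
    #include "TRandom3.h"

    int main() {
        TFile out("example.root", "RECREATE");   // illustrative file name

        // 100 bins between 0 and 10 (arbitrary units)
        TH1F h("h_example", "Example distribution;value;entries", 100, 0.0, 10.0);

        // Fill with Gaussian-distributed pseudo-data
        TRandom3 rng(42);
        for (int i = 0; i < 10000; ++i) {
            h.Fill(rng.Gaus(5.0, 1.0));
        }

        h.Write();    // persist the histogram
        out.Close();
        return 0;
    }

Compiled against the ROOT libraries, a few lines like these are typical building blocks of the much larger analysis programs that the experiments run across the Grid.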

Middleware

Although the Grid depends on the computer and communications networks of the underlying internet, novel software allows users to access computers distributed across the network. This software is called "middleware" because it sits between the operating systems of the computers and the physics-applications software that solves a user's particular problem.

The most important middleware stacks in the WLCG are the European Middleware Initiative, which combines the key middleware providers ARC, gLite, UNICORE and dCache; the Globus Toolkit, developed by the Globus Alliance; OMII from the Open Middleware Infrastructure Institute; and the Virtual Data Toolkit.

Hardware

Each Grid centre manages a large collection of computers and storage systems. Installing and regularly upgrading the necessary software manually is labour intensive, so management systems such as Quattor (developed at CERN) automate these services. They ensure that the correct software is installed from the operating system all the way to the experiment-specific physics libraries, and make this information available to the overall Grid scheduling system, which decides which centres are available to run a particular job.

Each of the 11 Tier 1 centres also maintains disk and tape servers, which need to be upgraded regularly. These centres use specialized storage tools – such as the dCache system developed at the Deutsches Elektronen-Synchrotron (DESY) laboratory in Germany, the ENSTORE system at Fermilab in the US, or the CERN Advanced STORage manager (CASTOR) – to allow access to data for simulation and analysis independent of the medium (tape or disk) on which the information is stored.
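
As an illustration of what such medium-independent access looks like from the user's side, the sketch below opens a data file through ROOT by URL; whether the bytes come from local disk or are staged back from tape by a system such as dCache or CASTOR is hidden behind the storage protocol. The server name and path here are hypothetical.

    // Sketch of medium-independent data access: the same TFile::Open call
    // works for a local file or for one served by a mass-storage system
    // over a protocol such as xrootd. Server and path are hypothetical.
    #include "TFile.h"
    #include <cstdio>
    #include <memory>

    int main() {
        std::unique_ptr<TFile> f(
            TFile::Open("root://storage.example.org//data/run123/events.root"));

        if (!f || f->IsZombie()) {
            std::printf("could not open file\n");
            return 1;
        }
        f->ls();   // list the file's contents
        return 0;
    }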

Networking

The Grid file-transfer service, developed by the Enabling Grids for E-sciencE (EGEE) project, manages the exchange of information between WLCG centres. The file-transfer service has been tailored to support the special needs of grid computing, including authentication and confidentiality features, reliability and fault tolerance, and third-party and partial-file transfer.

Optical-fibre links working at 10 gigabits per second connect CERN to each of the Tier 1 centres around the world. This dedicated high-bandwidth network is called the LHC Optical Private Network (LHCOPN).
