
The computing grid is becoming a reality

Experiments in particle physics at the new accelerator at CERN will yield a wealth of information. To handle these huge volumes of data, a particle physics computing grid is being developed.

The detector of the ATLAS experiment, one of the experiments that will be conducted at the LHC accelerator. Credit: ATLAS

Particle physicists from Great Britain will this week present the largest functioning computing grid in the world. The LCG, the computing grid of the LHC accelerator, comprises over 6,000 computers at 78 sites around the world and is the first permanent global grid for scientific purposes. The UK is a significant contributor to the LCG, with more than 1,000 computers at 12 sites. At the annual UK e-Science All Hands Meeting in Nottingham in 2004, particle physicists will explain to biologists, chemists and computer scientists how they achieved this feat.

Experiments in particle physics at the LHC accelerator, currently under construction at the CERN laboratory in Geneva, will produce about 15 petabytes of data per year, that is, 15 million billion bytes. To cope with this enormous flow of information, particle physicists around the world are building a computing grid. By 2007 the grid will have computing power equal to that of 100,000 of today's fastest computers, working together as a kind of "virtual supercomputer" that can be expanded and developed as needs grow. When the LHC experiments begin in 2007, they are expected to reveal new physical processes that shaped our universe, and to shed light on mysteries such as the origin of the mass of particles.

Grid computing has been an important goal for information technology developers and scientists for over five years. It allows scientists to access data and computing power from around the world without needing to know where the computers are located. The analysis of particle physics experiments could be done on conventional supercomputers, but these are expensive, and demand for them is extremely high. A grid, by contrast, is built from thousands of cheap units, and more units can be added as demand grows. Like the Internet before it, the grid has the potential to affect the computing of all of us.
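The idea described above can be illustrated with a minimal sketch: many independent analysis jobs are farmed out to a pool of cheap workers, and capacity grows simply by adding workers. This is only a toy illustration, not the actual LCG middleware; the function and worker count here are invented for the example.

```python
# Toy sketch of grid-style computing: independent jobs spread across
# cheap workers. Real grid middleware handles scheduling, data transfer
# and failures; this only shows the "many cheap units" principle.
from concurrent.futures import ThreadPoolExecutor

def analyze_event(event_id: int) -> int:
    # Stand-in for an expensive physics analysis of one event.
    return event_id * event_id

N_WORKERS = 4  # adding workers raises capacity, like adding grid nodes
events = range(10)
with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    results = list(pool.map(analyze_event, events))
print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because the jobs are independent, doubling the number of workers (or grid sites) roughly doubles throughput without changing the analysis code.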

GridPP, the UK particle physicists' grid project, was initiated by the Particle Physics and Astronomy Research Council in 2000. On September 1st of this year, the project reached its halfway point with the official end of the first phase and the start of GridPP2. Dr. Dave Britton, the manager of the GridPP project, says: "The goal of the first phase of the project was to create a prototype grid, and we achieved this goal with great success. Having proven that the grid can work, we are now focusing on developing a large, stable and user-friendly grid in conjunction with other international projects. Such a grid will allow scientists to tackle much harder problems than is possible today."

Dr. Jeremy Coles from the Rutherford Appleton Laboratory is the production manager of the grid, responsible for its day-to-day operation. Coles will speak about the grid in Nottingham. He emphasizes that "there are many challenges to deal with in scaling up the grid. Besides the technical issues involved in building a stable, managed grid, we must also address broader issues, especially encouraging collaboration between different groups of users."

At the conference in Nottingham, participants will be able to see the particle physics grid in action. The GridPP team has developed a map that shows computing jobs moving through the LCG in real time, as they are routed to the most suitable sites on the grid, run their software and return results (link to the map at the bottom of the article). Dr. Dave Colling from Imperial College London, whose team built the map, said: "People who have never seen a computing grid in action may have difficulty imagining what it does. Our map is a simple way to show how the grid can give scientists access to resources around the world, right from their desktop. The map is also useful for experts, who can easily see how the grid is operating."
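The routing the map visualizes — each job sent to the most suitable site — can be sketched with a simple rule such as "pick the least-loaded site". This is only an assumed toy policy for illustration; the real LCG scheduler weighs many more factors, and the site names and loads below are invented.

```python
# Toy illustration of job routing (not the real LCG scheduler):
# each job goes to the site currently running the fewest jobs.
def route_job(job: str, site_loads: dict) -> str:
    """Pick the least-loaded site and assign the job to it."""
    best = min(site_loads, key=site_loads.get)
    site_loads[best] += 1  # the chosen site is now running one more job
    return best

sites = {"RAL": 2, "Imperial": 0, "CERN": 5}  # invented example loads
print(route_job("analysis-job-001", sites))  # → Imperial (lowest load)
print(sites["Imperial"])                     # → 1 (its load has risen)
```

Under this rule, a stream of jobs spreads itself across sites automatically, which is exactly the behaviour the real-time map makes visible.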

Professor Tony Doyle, who heads the GridPP project, explained: "This is a tremendous achievement for particle physics and for e-science. We now have a real international grid running more than 5,000 computing jobs at any given moment. Our next goal is to increase the computing power tenfold, so that we will have 10,000 computers in the UK alone, ready for the LHC in 2007."

Translation: Dikla Oren

The LCG map

Press release
