The Lawrence Livermore lab will run a Linux-based supercomputer

Four Linux clusters, built by Appro - a supplier of high-performance servers - will deliver a combined processing rate of 100 teraflops at the Lawrence Livermore National Laboratory in the US. The clusters will assist in climate research, astrophysics and the management of the US nuclear weapons inventory.

Sharon Gaudin, InformationWeek
 
The Lawrence Livermore National Laboratory in the US is joining the trend toward supercomputers based on Linux clusters, and has already started operating the first of four such clusters. The cluster assists in climate research, astrophysics and nuclear weapons inventory management. For the past four years the laboratory has relied on a computer that operates at 11 teraflops, but that is no longer enough, and it was decided to move to four new clusters built by Appro - a supplier of high-performance servers, storage arrays and workstations. Together, the four clusters will provide a processing rate of 100 teraflops (a teraflop is one trillion floating-point operations per second).

"The demand for high-performance computing among researchers is only increasing," said Don Johnston of the Livermore Laboratory. "We are trying to meet the demand and make it easier for researchers to enjoy the power of supercomputers." To shorten the long queue for supercomputer time, the laboratory opted for a group of clusters: each of the four can focus on a particular problem, increasing the number of researchers who can use the machines at the same time.

Appro has completed construction of the first of the four clusters, dubbed Rhea, featuring 576 AMD Opteron 8000-series processors. At its peak, the first cluster's processing rate reaches 22 teraflops. Construction of the remaining three clusters is planned for completion in the first quarter of 2007. Once all four are finished, the cluster group will comprise 2,592 nodes, each with four dual-core processors (eight cores per node), and will provide a processing rate of 100 teraflops, making it the laboratory's third-largest supercomputer.
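The article's figures allow a quick back-of-the-envelope check of what the aggregate rating implies per core. The sketch below assumes the quoted 100 teraflops is a theoretical peak spread across all 2,592 nodes (the article does not state per-core clock rates, so the per-core figure is derived, not reported):

```python
# Back-of-the-envelope check of the article's cluster figures.
# Assumption: 100 teraflops is the aggregate peak over all nodes.
nodes = 2592
cores_per_node = 8            # four dual-core Opterons per node
total_teraflops = 100.0

total_cores = nodes * cores_per_node
# Convert teraflops (1e12 FLOP/s) to gigaflops (1e9 FLOP/s) per core.
gflops_per_core = total_teraflops * 1e12 / total_cores / 1e9
print(f"{total_cores} cores, ~{gflops_per_core:.1f} GFLOPS per core at peak")
# → 20736 cores, ~4.8 GFLOPS per core at peak
```

Roughly 4.8 GFLOPS per core is in the plausible range for dual-core Opterons of that era, which suggests the 100-teraflop figure is indeed a theoretical peak rather than a sustained benchmark result.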

"The move reflects the shift toward computing clusters that has been evident in recent years," said Charles King, chief analyst at Pund-IT Research. "Clusters are more flexible and can be operated independently and in different configurations." According to King, whereas in 2001 only about thirty clusters appeared on the list of the world's 500 largest supercomputers, by June of this year the list already included 350. "Today, computing clusters are the dominant technology in the world of supercomputers," he says.

Addison Snell, an analyst at IDC, agrees with King and adds that Appro builds its clusters from off-the-shelf components to keep prices down. Whereas in the past only a handful of well-funded laboratories and a few huge companies could afford to operate supercomputers, today many more organizations can consider purchasing one. "A significant part of the growth in high-performance computing comes from new users in the market. Computing clusters have made a decisive contribution to broadening the use of supercomputers," says Snell.

According to Johnston, the basic price of the four clusters is about 15 million dollars. "The great power of the clusters makes it possible to perform more complex simulations, which could not be performed in the past, and advances the research."
 
