
The Israeli technology that helps NASA study the solar eclipse

An interview with Gilad Sheiner of the Israeli company Mellanox about the connectivity technology that will help NASA, and other research bodies around the world, transfer the huge amounts of data collected during the total solar eclipse in the United States.

Some of the world's leading climate and space research institutes, including NASA, will use Mellanox's connectivity technology to transmit the vast amount of data collected during the total solar eclipse. Photo: Luc Viatour / Wikimedia.

What types of data does NASA collect?

This information is in NASA's possession, and we are not authorized to discuss it on their behalf. In general, the data is meant to enable learning more about the universe and about how events such as the upcoming solar eclipse [the interview was conducted before the eclipse] affect the Earth and its environment.

What special information were they able to discover thanks to this collaboration?

As mentioned, this information is in NASA's possession and we cannot elaborate on the subject.

How is that data analyzed?

The data that NASA collects is analyzed on large supercomputers. Mellanox has worked with NASA for many years and provides the highest-performance connectivity solutions for NASA's supercomputers. Using Mellanox's InfiniBand connectivity solutions, NASA can transmit data at higher speeds and analyze it faster. This allows NASA to understand more deeply the phenomena occurring throughout the universe, to explore space more efficiently, to learn more about climate phenomena on our planet, and even to design better spacecraft.
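For illustration only, here is a minimal sketch of how a compute cluster might spread a large observation dataset across nodes with MPI, which on Mellanox hardware typically runs over InfiniBand. This is not NASA's actual pipeline; the dataset and the "analysis" (a simple mean) are placeholders.

```python
# A toy sketch of cluster-wide data analysis over MPI (often carried
# over InfiniBand). Requires mpi4py and NumPy.
# Run with e.g.: mpirun -np 4 python analyze.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

CHUNK = 1_000_000  # samples per node; illustrative size

if rank == 0:
    # The root node holds the full (here: synthetic) sensor dataset.
    data = np.random.rand(size * CHUNK)
else:
    data = None

# Scatter equal chunks to all nodes over the interconnect.
local = np.empty(CHUNK, dtype=np.float64)
comm.Scatter(data, local, root=0)

# Each node analyzes its own slice independently.
local_mean = local.mean()

# Combine the partial results back on the root node.
total = comm.reduce(local_mean, op=MPI.SUM, root=0)
if rank == 0:
    print("global mean:", total / size)
```

The point of the sketch is that the interconnect sits on the critical path twice, at the scatter and at the reduce, so faster connectivity directly shortens the end-to-end analysis time.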

High-performance computing - what is the state of the field, and what are the trends and important developments?

In the world of high-performance computing (or supercomputing), there is a constant and ever-increasing demand for better performance. Technology providers have worked tirelessly to keep pace with that demand, delivering with each new generation systems that are faster, more reliable and more efficient. In the last twenty years we have seen a million-fold increase in supercomputer performance, and three main revolutions in the underlying technologies made that increase possible.

The first revolution occurred when the industry moved from a symmetric multiprocessing (SMP) architecture to computer clusters. This change leveraged high-speed networks to build cost-effective computer clusters from off-the-shelf components that can be scaled as needed.

The second revolution occurred when performance could no longer be improved by raising the processor frequency. This led to a shift from single-core to multi-core processors, which multiplied performance but required scalable connectivity to serve the many processor cores within a single server.
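As a toy illustration of that shift, the sketch below splits one workload across all the cores of a single server instead of relying on a faster single core. The workload (a sum of squares) and all names are illustrative.

```python
# A minimal sketch of the multi-core era: divide one job across every
# available core. Standard library only.
import multiprocessing as mp

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    cores = mp.cpu_count()
    step = n // cores
    chunks = [(i * step, (i + 1) * step if i < cores - 1 else n)
              for i in range(cores)]
    with mp.Pool(cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"sum of squares below {n}: {total} ({cores} cores)")
```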

The third and most recent revolution in supercomputing (HPC) was the shift from processor-centric data center architectures to data-centric architectures. This change is needed to enable faster, real-time analysis of the growing amounts of data we collect. The industry has recognized that the processor has exhausted its scalability potential, and it now offers the smart network as a new co-processor that shares the responsibility for handling and accelerating application workloads. By implementing data-analysis algorithms in the smart network, we can dramatically improve the performance of data centers and applications.

Smart connectivity solutions are based on an architecture that offloads the execution of all network functions from the processor to the network, freeing processor cycles so the processor can focus on computation and increasing overall system efficiency. Thanks to these efforts, connectivity will increasingly include data algorithms that are managed and executed within the network, allowing users to run algorithms on data as it passes through the system's interconnect rather than waiting for the data to reach the processor. The new generation of connectivity will provide In-Network Computing and In-Network Memory, the leading approach to achieving the performance and scalability required for exascale systems.
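To make this concrete, below is a minimal sketch (not Mellanox's implementation) of the kind of collective operation that In-Network Computing targets: a global reduction. The application simply calls allreduce; on hardware with in-network aggregation, such as Mellanox's SHARP offload, the summation can execute inside the switches instead of on the server CPUs, transparently to this code.

```python
# A toy allreduce: every node contributes a vector and receives the
# element-wise sum over all nodes. Whether the sum is computed on the
# CPUs or offloaded into the network fabric is invisible at this level.
# Run with e.g.: mpirun -np 8 python allreduce.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Each node's local partial result (synthetic here: its own rank).
local = np.full(4, comm.Get_rank(), dtype=np.float64)
result = np.empty_like(local)

# One collective call; the interconnect performs the aggregation.
comm.Allreduce(local, result, op=MPI.SUM)

if comm.Get_rank() == 0:
    print("reduced vector:", result)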

How did this long-established supercomputing technology become essential for real-time data analysis? How did you adapt it to the requirements of the current era?

This is not old technology. Supercomputing technology is developing at a rapid pace to meet the world's growing demands for higher performance and scalability. The new ability to analyze data anywhere allows data-centric data centers to cope with the growing amounts of data and process them more quickly and efficiently. This idea affects not only the design of supercomputer systems, but also data centers and even the Internet of Things (IoT).

The importance of the field to IoT

As for the Internet of Things, people initially thought of collecting all the data and sending it to the data centers, but it later became clear that this was not feasible, and the fog computing idea was born. It is essentially the same data-centric architecture: bring more computing to where the data is, and be able to analyze data anywhere.
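A minimal sketch of that fog idea, assuming a simple edge node: aggregate raw sensor readings locally and forward only a compact summary upstream. All names and the transport stub are hypothetical.

```python
# Edge-side aggregation: send a few bytes of summary instead of every
# raw sample. Standard library only.
import json
import statistics

def summarize_window(samples):
    """Reduce a window of raw sensor readings to a small summary."""
    return {
        "count": len(samples),
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
    }

def send_upstream(payload):
    # Placeholder for the real transport (MQTT, HTTP, etc.).
    print("sending to data center:", json.dumps(payload))

raw_window = [21.4, 21.6, 22.0, 35.9, 21.5]  # e.g. one second of readings
send_upstream(summarize_window(raw_window))
```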

Why is CPU speed essential for real-time data processing?

The speed of individual processors has remained static for years, which in effect drove the transition to the multi-core era. An essential element in real-time data analysis is bringing the data to a huge number of processors, together with the ability to analyze it anywhere, including within the interconnect itself. The network has become a computer.

Do you work with other research institutes?

Mellanox works with many supercomputer centers all over the world, helping them conduct research and develop new solutions in the fields of climate, biosciences, energy research, national security and more.
