
DARPA finances the development of a new type of processor - a graph analysis processor

A completely new type of processor called HIVE, which is not based on the von Neumann architecture, is being funded by DARPA, the US Defense Advanced Research Projects Agency, to the tune of $80 million over four and a half years.

Source: DARPA.

Chip manufacturers Intel and Qualcomm are participating in the project, along with a national laboratory, a university, and Department of Defense contractor Northrop Grumman. The Pacific Northwest National Laboratory (Richland, Washington) and Georgia Tech are developing software tools for the processor, and Northrop Grumman will build a center in Baltimore that will identify the Department of Defense's graph analysis needs and transfer them to what is known as the world's first Graph Analysis Processor (GAP).

“When we look at the architectures of computers today, they use the same [John] von Neumann architecture that was invented in the 1940s. CPUs and GPUs have become parallel, but each of the cores is still a von Neumann processor,” Trung Tran, Program Manager in the Microsystems Technology Office (MTO) at DARPA, told EE Times in an exclusive interview.

"HIVE is not von Neumann because of the sparseness of its data and its ability to simultaneously perform different processes in different areas of memory," said Tran. "This non-von Neumann approach allows one large map that many processors can access at the same time, with each of them using its own scratchpad memory while simultaneously performing scatter and gather operations in global memory."
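
To make the scatter/gather idea concrete, here is a minimal software sketch, purely illustrative and not DARPA's or Intel's design: each hypothetical worker gathers values from a shared "global memory" array into its own private scratchpad and only later scatters the accumulated results back. All names in it (worker_pass, global_vals, and so on) are invented for the example.

```python
# Illustrative sketch only: a software analogy of the scatter/gather pattern
# described above, assuming a shared "global memory" array and a private
# "scratchpad" per worker. Names are hypothetical.
import numpy as np

def worker_pass(edges, global_vals):
    """Gather values for the edges assigned to this worker, combine them
    in a private scratchpad, and return updates to scatter back later."""
    scratchpad = {}                              # worker-private memory
    for src, dst in edges:                       # gather: random 8-byte reads
        scratchpad[dst] = scratchpad.get(dst, 0.0) + float(global_vals[src])
    return scratchpad

# A tiny sparse graph: edges point at arbitrary places in a large vertex space.
global_vals = np.ones(1_000_000)                 # "global memory", one float64 per vertex
edges = [(3, 17), (999_123, 17), (42, 500_000)]
updates = worker_pass(edges, global_vals)

out = np.zeros_like(global_vals)
for dst, val in updates.items():                 # scatter phase into global memory
    out[dst] += val
print(updates)                                   # {17: 2.0, 500000: 1.0}
```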

Graph analysis processors do not exist today, but in theory they differ from CPUs and graphics processors in important ways. First, they are optimized for processing sparse graph primitives. Because the items they process are located sparsely in global memory, they also need a new memory architecture that can access randomly located memory addresses at very high speeds (up to terabytes per second).

Current memory chips reach their highest speeds only when accessing locations in long sequences (to fill their caches), and even those speeds are in the much slower gigabytes-per-second range. HIVE processors, on the other hand, will fetch eight bytes of random data from global memory at full speed and then process it independently using their own scratchpad memory. The architecture specification also includes the ability to scale up to as many HIVE processors as a given graph algorithm requires.
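
The contrast between sequential streaming and sparse random access can also be sketched in software (an illustration with made-up indices and names, not the HIVE memory design): in a CSR-style adjacency list, each vertex's neighbor IDs point at essentially arbitrary 8-byte slots of a large array, which is exactly the pattern that sequential-optimized DRAM and caches handle poorly.

```python
# A minimal sketch (not the HIVE design) of why sparse-graph work defeats
# cache-friendly, sequential memory access. All indices are invented.
import numpy as np

n = 1_000_000
vertex_data = np.random.rand(n)          # one 8-byte float64 value per vertex

# CSR-style adjacency for three example vertices.
indptr  = np.array([0, 2, 5, 6])         # where each neighbor list starts and ends
indices = np.array([712_345, 9, 433_210, 2, 999_999, 550_001])

def neighbor_sum(v):
    # Each lookup jumps to an unrelated address: a random 8-byte gather,
    # not the long sequential stream that fills caches efficiently.
    return float(vertex_data[indices[indptr[v]:indptr[v + 1]]].sum())

print([neighbor_sum(v) for v in range(3)])
```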

"Of all the data collected today, only about twenty percent is useful - that's why it's sparse - and for that reason our eight-byte granularity is much more effective for big data problems," said Tran.

Together, the new arithmetic processing unit optimized for graph analysis and the chips with the new memory architecture will, according to DARPA's specifications, use a thousand times less electricity than current supercomputers. The participants, particularly Intel and Qualcomm, will also retain the rights to commercialize the processor and memory architectures they invent in creating HIVE.

The graph analysis processor is necessary, according to DARPA, for big data problems, which usually involve "many-to-many" relationships rather than the "many-to-one" or "one-to-one" relationships that today's processors are optimized for. A military example, according to DARPA, could be the first digital signs of a cyber attack. A civilian example, according to Intel, could be mapping all the people who buy from Amazon to all the items each of them bought (a people-to-products relationship that is clearly many-to-many).
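
The Amazon example can be pictured as a sparse bipartite graph. The toy sketch below, with invented buyer and product names, shows how a many-to-many relation naturally becomes an edge list with two adjacency maps rather than a single one-to-one lookup.

```python
# Toy sketch of a many-to-many relation as a sparse bipartite graph.
# Buyer and product names are invented for illustration.
purchases = [                      # (buyer, product) edge list
    ("alice", "book"), ("alice", "kettle"),
    ("bob", "book"), ("carol", "kettle"), ("carol", "headphones"),
]

buyer_to_products, product_to_buyers = {}, {}
for buyer, product in purchases:
    buyer_to_products.setdefault(buyer, set()).add(product)
    product_to_buyers.setdefault(product, set()).add(buyer)

# "Who else bought something alice bought?" is one hop each way in the graph.
related = {b for p in buyer_to_products["alice"]
             for b in product_to_buyers[p]} - {"alice"}
print(related)                     # {'bob', 'carol'} (order may vary)
```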

"From my point of view, the next big problem that needs to be solved is big data, which is currently analyzed using regression that is not effective for relationships between data that are very sparse," Tran said. "We found that the central processor and the graphics processor leave a big gap between the size of the problem and the richness of the results, while the graph theory fits perfectly and we also foresee a developing commercial market for it."

5 Comments

  1. As far as I understand, the Hidan newspaper is supposed to present and explain innovative, ground-breaking research from around the world to the general Hebrew-speaking audience. I, too, with a doctorate in chemistry and a master's degree in business administration, consider myself part of that "broad audience".
    Nevertheless, from this article that was translated into Hebrew (and I assume it is Hebrew, judging by the script and the connecting words that appear in it), I did not understand a thing about the general idea or why it is so innovative and important.
    If this is a newspaper for the general public, please translate it not just by throwing the entire article into Google Translate, but also by rendering into Hebrew the terms that only professionals know, even if this lengthens the article a little and makes it "less scientific".

  2. The development here is an example of scientific evolution. HIVE already exists in software as a distributed computation algorithm in the PHP language, and perhaps as part of the HADOOP protocol, which is hot today in big data analysis.
    HADOOP is the name of the toy elephant belonging to the child of the GOOGLE programmer who invented the protocol; his intention was to show that the protocol is as powerful as an elephant and as pleasant to use as a baby elephant. I am currently trying to raise at work a not-too-large sum of $X,000 for such software. What exists today runs distributed over 12 processors, perhaps even up to 1,000 processors, and Intel has come to the conclusion that this can be concentrated in a single chip. So apart from being a technological leap, this is in my opinion another step in the development of cognitive intelligence: once perception and sensing are accelerated by a factor of ten, it becomes possible to build new abstract levels of insight in software that were not possible before.
    And there is another sign here. Intel, which was used to copying and developing everything by itself, turns to DARPA for funding of $80 million.
    An amount it would have spent on its own in the past. They have learned that in risky development it is possible to share funding, and that in order to compete with the giants, GOOGLE, which does everything itself, and IBM, which receives DARPA funding for its two architectures, a quantum computer and a cognitive computer, they have to be there too. To this day Intel has had bad experiences acquiring companies from outside; it does not know how to grow them. So it was with DSP and with OPLUS, good companies that declined as soon as they were acquired by Intel. The ARM processor, developed by the part of DSP that did not merge with Intel, is today the core technology in cell phone processors. I wonder how it will go with Mobileye. Apart from that, the article gave me insight into another software algorithm that can do similar things, this time using mathematics rather than a computer science protocol, and that is for my own use.

  3. Avi. We missed hearing your voice.
    Another topic: it is possible to address the sparsity problem in the algorithm itself. Today there is a mathematical theory for this. Michael Elad, who has appeared here with Vardan Papyan, is a good mathematician in that field as well, as is Yonina Eldar, and others.

  4. It looks like this solution will solve the von Neumann bottleneck problem and boost the performance of home processors as well, by eliminating the memory access bottleneck.
