
The next twenty years of microchips

Integrated-circuit designers are pushing every boundary in the drive to make chips smaller, faster and cheaper

The Jacquard loom on display at the Museum of Science and Industry in Manchester was one of the first devices that could be programmed.
By the editors of Scientific American

In 1975, electronics pioneer Gordon Moore formulated his famous prediction that the complexity of integrated circuits would double every two years. Advances in manufacturing would let the transistors on a chip shrink further and further, so that electrical signals would travel ever shorter distances to process information. For the electronics industry and for consumers, this "Moore's Law," as it was later called, meant that computerized devices would steadily become smaller, faster and cheaper. Thanks to ceaseless innovation in semiconductor design and manufacturing, chips have followed a path very close to Moore's prediction for 35 years.

However, engineers knew that at some point they would hit a roadblock. Transistors would eventually be only tens of atoms across, and at that scale the basic laws of physics set limits. Even before reaching this barrier, two practical problems were likely to arise: the cost of placing transistors so small and so densely while maintaining a high yield (the number of usable chips versus defective ones) could become prohibitive, and the heat generated by switching the tangle of transistors on and off could become intense enough to damage the components themselves.

Indeed, engineers have been running into these obstacles in recent years. The main reason personal computers are now marketed with great fanfare as having "dual-core" chips, that is, two small processors on a single chip, is that packing the necessary number of transistors into one ever-faster processor and cooling it have become too problematic. Instead, computer designers place two or more processor cores side by side and program them to process information simultaneously.

It seems that Moore's Law is approaching its end: the transistors on the chip are running out of room. If so, how will engineers continue to create more powerful chips? Two possibilities are moving to alternative architectures and refining nanomaterials so that they can be assembled atom by atom. A third option is to create entirely new ways of processing information, including quantum and biological computing. In the following pages we take a look at a selection of developments, most of them now at the prototype stage, that may continue computing's "smaller, faster and cheaper" trend, which has served us so faithfully, for the next two decades.

Size: crossing the line

The smallest commercial transistors produced today are only 32 nanometers wide - about 96 silicon atoms. The industry recognizes that it will be very difficult to create features smaller than 22 nanometers inside a chip using conventional lithography techniques, which have been perfected over decades.

One option, which involves circuit elements of similar size but offers greater computing power, is known as the "crossbar design." Instead of producing all the transistors on one plane (like cars stuck in traffic on the lanes of a silicon highway), the crossbar approach uses a set of parallel nanowires on one plane that passes over a second set of wires perpendicular to it (like two roads crossing at right angles), with a buffer layer one molecule thick sandwiched between the two sets. The many junctions between the wires can act as switches called "memristors" and represent 0 or 1 (bits, or binary digits) just as transistors do. But memristors can also store information, and the combination of these capabilities allows a variety of computing operations. In essence, a single memristor can do the work of 10 to 15 transistors.
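
To make the memristor idea concrete, here is a minimal numerical sketch of the widely cited linear ion-drift memristor model (after Strukov et al., 2008). The parameter values and the helper function are illustrative assumptions for this article, not HP's actual device figures.

```python
# Linear ion-drift memristor model (after Strukov et al., 2008).
# All parameter values are illustrative assumptions, not HP device data.
R_ON, R_OFF = 100.0, 16_000.0  # resistance when fully doped / undoped (ohms)
D = 10e-9                      # device thickness (meters)
MU_V = 1e-14                   # dopant mobility (m^2 / (V * s))

def apply_voltage(v, steps=1000, dt=1e-3, x0=0.5):
    """Drive the junction with a constant voltage; return its final resistance."""
    x = x0                                   # x = w/D, doped fraction in [0, 1]
    for _ in range(steps):
        m = R_ON * x + R_OFF * (1.0 - x)     # memristance M(x)
        i = v / m                            # Ohm's law
        x += MU_V * (R_ON / D**2) * i * dt   # linear drift of the doped region
        x = min(max(x, 0.0), 1.0)            # keep x in its physical range
    return R_ON * x + R_OFF * (1.0 - x)

print(apply_voltage(+1.0))  # low resistance: the junction now stores a 1
print(apply_voltage(-1.0))  # high resistance: the junction now stores a 0
```

The key point the model captures is that the resistance set by past current persists when the power is removed, which is why a single junction can both switch and remember.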

Hewlett-Packard (HP) Laboratories has built prototype crossbar designs with titanium and platinum wires 30 nanometers wide, using materials and processes similar to those already perfected in the semiconductor industry. The company's researchers believe the wires can be shrunk down to 8 nanometers. Several other research groups are building crossbars from silicon, titanium and silver sulfide.

Heat: refrigerators or wind

When a billion transistors operate on a single chip, disposing of the heat generated by switching them on and off becomes a real challenge. A personal computer has room for a fan, but even a fan cannot fully cool the chips: it can handle only about 100 watts of heat emitted from each chip. Designers are therefore developing new ways of cooling. The MacBook Air laptop has a sleek case made of heat-conducting aluminum that doubles as a heat sink. In the Apple Power Mac G5 personal computer, fluid flows through microscopic channels etched into the underside of the processor chip.
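
To see why heat is such a hard limit, consider the standard dynamic-power relation P = N · a · C · V² · f. The sketch below evaluates it with round numbers; every value is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope dynamic switching power, P = N * a * C * V**2 * f.
# Every number below is an illustrative assumption, not measured data.
N = 1e9     # transistors on the chip
a = 0.1     # activity factor: fraction of transistors switching per cycle
C = 1e-15   # effective capacitance per transistor (farads)
V = 1.0     # supply voltage (volts)
f = 3e9     # clock frequency (hertz)

power = N * a * C * V**2 * f
print(f"Dynamic power: {power:.0f} W")  # about 300 W with these assumptions
```

Even with modest assumptions, a billion transistors switching at gigahertz rates can produce far more heat than the roughly 100 watts a fan can remove.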

Combining electronics and liquids is difficult, however, and in smaller, more portable gadgets such as smartphones there is simply no room for plumbing - or fans. An Intel research group created a superlattice from thin sheets of bismuth telluride and integrated it into a chip's packaging. This thermoelectric material converts temperature differences into electricity and thereby cools the chip itself.

The startup company Ventiva, building on research conducted at Purdue University, produces a tiny solid-state "fan" with no moving parts. The device creates a breeze using an effect known as ionic wind - a phenomenon also exploited in silent home air purifiers. Electrical wires carry currents along a slightly concave lattice and create a micron-scale plasma. The ions formed in this gas-like mixture push air molecules from the wires toward a nearby plate, creating a breeze. Such a fan produces greater airflow than conventional mechanical fans and is much smaller. Other inventors make fans with Stirling engines, which create wind without consuming electricity (although they are still relatively bulky); these fans are driven by the temperature differences between hot and cool areas on the chip.

Architecture: multiple cores

Smaller transistors can switch more quickly between the off and on states that represent 0 and 1, making the entire chip faster. But the clock rate - the number of processing cycles the chip executes per second - plateaued between 3 and 4 gigahertz when chips hit the heat ceiling. The pursuit of better performance within the heat and speed limits led designers to place two processors, or cores, on the same chip. Each core runs at the same speed as earlier processors, but because the two work in parallel they can process more data in the same amount of time, and they consume less electricity, so less heat is generated. The newest PCs now boast quad-core processors, such as Intel's Core i7 or AMD's Phenom X4.
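
The benefit of extra cores is bounded, though, by how much of a program can actually run in parallel. Amdahl's law, a standard result not stated in the article, makes the point with a few lines of arithmetic:

```python
# Amdahl's law: the speedup from n cores when a fraction p of the
# work can be parallelized (the rest stays serial).
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 16, 64):
    print(f"{n:2d} cores: {speedup(0.90, n):.2f}x")
# 2 cores: 1.82x, 4 cores: 3.08x, 16 cores: 6.40x, 64 cores: 8.77x
```

With 90 percent of the work parallelizable, even 64 cores deliver less than a ninefold speedup, which is why the programming techniques discussed next matter so much.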

The world's most powerful supercomputers contain thousands of cores, but in consumer products, making efficient use of even a few cores requires new programming techniques that can divide the data and the processing and coordinate the tasks. The basics of parallel processing were worked out for supercomputers in the 1980s and 1990s, so the challenge is to create languages and tools that software developers can use to write applications for the consumer market. Microsoft's research division, for example, created the F# programming language. Erlang, an earlier language created at the Swedish company Ericsson, has inspired newer languages including Clojure and Scala. Institutions such as the University of Illinois are also working on parallel programming for multi-core processors.
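
As a rough illustration of the divide-and-coordinate pattern these languages aim to make convenient, here is a sketch using Python's standard library (chosen only for brevity; the article's own examples are F#, Erlang, Clojure and Scala):

```python
# Split a task across four worker processes and combine the results.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each core computes a sum of squares over its share of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]        # divide the data
    with Pool(processes=4) as pool:                # one worker per core
        total = sum(pool.map(partial_sum, chunks)) # coordinate the results
    print(total)
```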

If these approaches can be perfected, desktop and mobile devices will be able to contain dozens (or more) of parallel processors, each of which will have fewer transistors than today's chips, but as a group they will work faster.

Thinner materials: nanotubes and self-assembly

A decade ago, experts of various stripes began to tout nanotechnology as the solution to all kinds of challenges in medicine, energy and, of course, integrated circuits. Its adherents note that it was the semiconductor industry itself, by making ever smaller transistors, that brought the field of nanotechnology into the world.

The field has raised even higher expectations, however: that engineers will be able to design molecules at will using nanotechnological methods. Transistors made from carbon nanotubes, for example, could be much smaller than those in use today. Indeed, engineers at IBM succeeded in creating a standard CMOS circuit whose conducting channel is made of carbon nanotubes instead of silicon. Joerg Appenzeller of the same team, now at Purdue University, is designing new transistors, much smaller than CMOS devices, that could take advantage of the tiny nanotube base.

Arranging molecules, let alone atoms, may be problematic, especially given the enormous quantities that must be arranged during chip production. One solution could be self-assembling molecules: mix them together, expose them to heat, light or centrifugal force, and they will assemble themselves into a predictable pattern.

IBM has demonstrated the creation of memory circuits from polymers linked by chemical bonds. When the molecules are spread on a silicon surface and heated, they stretch and form a honeycomb-like structure with holes only 20 nanometers across. This pattern can then be etched into the silicon to create a memory chip at the same scale.

Faster transistors: thin graphene

The idea behind the constant miniaturization of transistors is to shorten the distance electrical signals must travel within the chip, thereby speeding up information processing. One particular material, however - graphene - can act faster thanks to its natural structure.

Most logic chips that process information use field-effect transistors built with CMOS technology. Such a transistor can be pictured as a narrow rectangular layer cake, with a layer of aluminum (or, more recently, polysilicon) on top, an insulating oxide layer in the middle and a layer of semiconducting silicon below. Graphene, a form of carbon isolated only recently, is a flat sheet of hexagons resembling a honeycomb, just one atom thick. Ordinarily, graphene sheets stack on top of one another to form the mineral graphite, the familiar form of carbon used in pencils. In its pure crystalline form, graphene conducts electrons faster than any other material at room temperature, and much faster than the silicon in field-effect transistors. Moreover, the charge carriers lose very little energy to scattering, or collisions with atoms in the lattice, so less waste heat is generated. Scientists first managed to isolate graphene only in 2004, so the work is still in its infancy, but researchers are convinced they will be able to create graphene transistors 10 nanometers wide and only one atom high. It may even be possible to etch a large number of circuits onto a single tiny sheet of graphene.

Optical computing: fast as light

Because revolutionary alternatives to silicon chips are at such early stages, circuits suitable for commercial use may be a decade or more away. Moore's Law, however, will likely have exhausted itself by then, so work on other forms of computing is proceeding at full speed.

In optical computing, photons rather than electrons carry the information, and they do so much faster - at the speed of light. Unfortunately, light is much harder to control than electrons. Progress in building optical switches, of the kind placed along fiber-optic communication lines, has helped optical computing as well. Ironically, one of the most advanced developments is intended to create an optical link between traditional processors in multi-core chips. As the cores process information simultaneously, a huge amount of data must move between them, and the electrical wires connecting them can become a bottleneck. Photonic links can improve the flow: researchers at Hewlett-Packard Laboratories are testing models that may transmit two orders of magnitude more information.

Other groups are working on optical links to replace the slower copper wires that currently connect the processor chip to other components inside the computer, such as memory chips and DVD drives. Engineers at Intel and the University of California, Santa Barbara, have built optical "data pipes" from indium phosphide and silicon using standard semiconductor manufacturing processes. All-optical computing chips, however, will require some fundamental breakthroughs.

Molecular computing: organic logic

In molecular computing, molecules rather than transistors represent the 0 and 1 bits. When the molecule is biological, such as DNA, the category is known as "biological computing" [see "Biological computing: living chips"]. For clarity, when engineers talk about computing with non-biological molecules, they may call it "molecular logic" or "molecular electronics."

A classic transistor has three parts arranged like the letter Y: source, gate and drain. Applying a voltage to the gate (the leg of the Y) causes electrons to flow between the source and the drain, creating a 1 or a 0. In theory, branched molecules can pass a signal in a similar manner. A decade ago, researchers at Yale and Rice universities created molecular switches using benzene as the building block.

Molecules can be tiny, so circuits built from them could be much smaller than silicon circuits. One difficulty, however, is finding a way to produce complex circuits. Researchers hope that one answer will be self-organization. In October 2009, a team at the University of Pennsylvania turned zinc and cadmium sulfide crystals into metal-semiconductor superlattice circuits using only chemical reactions that encouraged self-assembly.

Quantum computing: superposition of 0 and 1

Circuit components made of single atoms, electrons or even photons would be the smallest possible. At such dimensions, the interactions between components are governed by quantum mechanics - the laws that explain atomic behavior. Quantum computers could be unbelievably dense and fast, but in practice, manufacturing them and managing the quantum effects they create are enormous challenges.

Atoms and electrons have properties that can exist in different states at the same time, and these can form a quantum bit, or qubit. Several research approaches to handling qubits are being tested. One, known as spintronics, uses electrons whose magnetic moments point in one of two directions; think of a ball spinning one way or the other (and thereby representing 1 or 0). But the two states can also coexist in a single electron, creating a unique quantum state known as a superposition of 0 and 1. With superposition states, the amount of information a row of electrons can represent grows exponentially compared with a row of silicon transistors, which can take on only ordinary bit states. Scientists at the University of California, Santa Barbara, have created several different logic gates by manipulating electrons in tiny cavities etched into diamond.
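
A toy state-vector calculation shows where that exponential growth comes from. This is a generic textbook illustration, not a model of the Santa Barbara experiment:

```python
import numpy as np

# One qubit in an equal superposition of |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# Describing n qubits requires 2**n complex amplitudes.
n = 10
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)   # tensor product adds one qubit at a time

print(state.size)                  # 1024 amplitudes for just 10 qubits
print(np.sum(np.abs(state) ** 2))  # the probabilities still sum to 1
```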

In another approach, being studied at the University of Maryland and the U.S. National Institute of Standards and Technology (NIST), a row of ions floats between charged plates, and lasers flip the magnetic orientation of each ion (that is, its qubit). The state an ion takes on can also be read out by identifying the different types of photons it emits.

Beyond the advantage of superposition, quantum components can also be "entangled": their information states become linked across many qubits, providing powerful new ways to process information and transfer it from place to place.
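
Entanglement can also be illustrated with a tiny state-vector sketch: a Hadamard gate followed by a CNOT leaves two qubits in a Bell state, where only the correlated outcomes 00 and 11 remain possible. Again, this is a generic textbook example, not any specific hardware:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # flips qubit 2 if qubit 1 is 1

ket0 = np.array([1, 0])
state = np.kron(H @ ket0, ket0)  # qubit 1 in superposition, qubit 2 in |0>
bell = CNOT @ state
print(bell)  # [0.707 0 0 0.707]: only |00> and |11> have any amplitude
```

Measuring either qubit of such a pair instantly fixes the other, however far apart the two are, which is the resource behind the new ways of moving information from place to place.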

Biological computing: living chips

Biological computing replaces transistors with structures normally found in living organisms. Particularly interesting are DNA and RNA molecules, which in effect store the "software" that runs the life of our cells. The vision is tempting: a chip the size of a pinky fingernail may indeed contain a billion transistors, but a processor of the same size could hold billions of DNA strands. These strands would simultaneously process different parts of a computing task and then join together to represent the solution. Besides having several orders of magnitude more components, a biological chip would thus also be capable of massively parallel processing.

Early biological circuits process information by making and breaking connections between strands. Researchers are now developing "genetic computer programs" that can live and reproduce inside cells. The challenge is to find ways to program collections of biological components to behave in desired ways. Such computers may eventually end up in our bloodstreams rather than on our desktops. Researchers at the Weizmann Institute of Science in Rehovot have created a simple processor from DNA and are now trying to make its components work inside a living cell and communicate with the environment around it.

7 Comments

  1. A very interesting article that briefly reviews each of the future technologies for computing solutions.
    However, the article demands a lot of technical knowledge from the reader, and there was room to add several pictures illustrating the different structures, which would make it easier for the average reader.

  2. Very interesting article
    I would also love to read about the future of the solar industry.

  3. Nonsense, nonsense, but when there is nowhere to go forward, they will start putting big money on anything that might hint at a possible solution.

  4. A fascinating article, which proves that science is of great and decisive importance in marching humanity forward.

  5. Nonsense, they have been talking about quantum and biological computing for 20 years, it's all talk, you never know when there will be a breakthrough and where it will come from.

  6. Anticipating what will happen with the future of chips seems to me to be a prophecy given to fools

  7. Stunning.
    The most comprehensive article of its kind on the future of computing, despite its brevity.
    Although it is written much less engagingly than Roy's articles, it is still of a very high standard.
    It would be nice to read a similar article about solar energy.
