
SanDisk begins selling the world's first X4 technology memory chips that include technologies developed in Israel

Tel Aviv University also contributed to the development of the new technology, through an exclusive license to SanDisk of advanced error correction technology developed at Ramot

Photo: SanDisk

SanDisk's X4 technology, whose key components were developed in Israel, enables the production of memory cards with significantly larger storage capacities than those currently available in the industry. The first cards on the market based on the technology are SDHC and Memory Stick PRO Duo cards.

SanDisk (NASDAQ: SNDK), the global leader in the flash memory card market, announced that it has begun commercial distribution of flash memory cards based on X4 technology, which stores 4 bits in each memory cell, a first for mass production in the flash industry. The announcement answers the industry's growing need for large-volume digital storage in extremely small packages: the technology lets memory card manufacturers increase storage capacity while keeping prices low, meeting the demands of memory-hungry applications such as music, movies, photos, games and GPS. The first cards to hit the market in September are SDHC and Memory Stick PRO Duo cards with capacities of 8 and 16 gigabytes.
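For a sense of scale: 4 bits per cell means each cell must be programmed and read back as one of 2^4 = 16 distinguishable charge levels, versus 4 levels for the 2-bit-per-cell chips that dominate the industry. A minimal, hypothetical Python sketch (not SanDisk's actual controller logic; the function name is invented) of how a byte stream maps onto such cells:

```python
# Hypothetical sketch: packing bytes into multi-bit flash cell "levels".
# Real X4 controllers do far more (signal processing, wear leveling, ECC);
# this only shows why 4 bits/cell halves the cell count versus 2 bits/cell.

def bytes_to_cell_levels(data: bytes, bits_per_cell: int = 4) -> list:
    mask = (1 << bits_per_cell) - 1          # 0b1111 for 4 bits per cell
    levels = []
    for byte in data:
        for shift in range(8 - bits_per_cell, -1, -bits_per_cell):
            levels.append((byte >> shift) & mask)
    return levels

payload = b"X4"
print(bytes_to_cell_levels(payload, 4))  # 4 cells, each one of 16 levels
print(bytes_to_cell_levels(payload, 2))  # 8 cells needed at 2 bits/cell
```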

The new chips are manufactured on a 43 nm process. SanDisk's X4 controller technology, developed by SanDisk in Israel, handles the chip's signal processing and complex management requirements while maintaining the performance level currently expected of flash components. Tel Aviv University also contributed to the development, through an exclusive license to SanDisk of advanced error correction technology developed at Ramot, Tel Aviv University's technology commercialization company.

"The development and commercialization of the X4 technology is a significant milestone for the flash industry," says Sanjay Mahurtra, president of SanDisk. "Our challenge in development was not only to maintain the lowest possible cost from the very use of technology, but also to meet the reliability and performance requirements accepted in the NAND industry. Our development and engineering teams took advantage of the extensive experience they have with 2- and 3-bit chips per cell, and developed new and powerful algorithms that guarantee reliable and trouble-free operation."

According to Joseph Unsworth, research director at Gartner, the integration of 4-bits-per-cell technology into consumer products is undoubtedly a significant milestone for the flash industry and demonstrates SanDisk's standing in an industry where 2 bits per cell is the standard.

9 comments

  1. Correcting errors is not just adding a digital signature and checking it. There are codes, such as Reed-Solomon, that send additional information along with the original, so that if an error occurs (up to a certain level, i.e. a certain number of bits), the original can be recovered from the additional information.
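Comment 1 is right that extra information enables recovery, not just detection. As a toy illustration (not the code SanDisk actually licensed), the classic Hamming(7,4) code stores 4 data bits with 3 parity bits, and any single flipped bit among the 7 can be located and repaired; Reed-Solomon applies the same principle to multi-bit symbols rather than single bits:

```python
# Minimal Hamming(7,4) sketch: 3 parity bits protect 4 data bits, so any
# single bit flip among the 7 stored bits can be located and corrected.

def hamming74_encode(d):             # d = list of 4 data bits
    d1, d2, d3, d4 = d               # parity bits sit at positions 1, 2, 4
    p1 = d1 ^ d2 ^ d4                # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):             # c = list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck parity group 1
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck parity group 2
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck parity group 3
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[5] ^= 1                       # simulate one flipped cell
assert hamming74_decode(stored) == word
```

Note that this works regardless of what the data bits are, which also answers comment 6 below: the redundancy is computed from the data, so even fully random content can be repaired.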

  2. If you don't want to use a system that has errors, then I suggest you stop using the internet……
    Within the TCP/IP stack there is a layer responsible for correcting errors caused by the fact that the information passes through unreliable intermediaries (see the checksum sketch after this comment).

    It's the same with phones, cell phones and cable broadcasts..

    Error correction mechanisms are based on all kinds of proven mathematical principles that this message is too short to elaborate on.

    Regarding flash:
    Each flash cell is programmed to one of several distinct voltage levels, each indicating a different value; X4 means 4 bits, i.e. 16 levels, per cell.
    The problem is that you need a very reliable system and a very accurate controller to control such a thing.
    In X2 things are of course simpler, because you don't need such a precise controller.

    In any case, there are already prototypes of X16 🙂 and yes, they too will have errors and error correction mechanisms, and they will only enter the market when their reliability reaches a sufficient level.
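To make the TCP/IP point above concrete, here is a minimal sketch of the 16-bit Internet checksum (RFC 1071) that IP, TCP and UDP carry; the payload is invented, and in real protocols the checksum sits at a fixed header offset rather than being appended at the end:

```python
# Sketch of the 16-bit Internet checksum (RFC 1071). A receiver that
# recomputes it over data-plus-checksum and gets 0 assumes the packet is
# clean; anything else marks corruption and triggers a resend upstream.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

segment = b"hello over an unreliable net"         # even length, so no pad
cksum = internet_checksum(segment)
verify = internet_checksum(segment + cksum.to_bytes(2, "big"))
print(hex(cksum), verify)                         # verify == 0 when clean
```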

  3. But I just don't like the very strong emphasis placed on this mechanism, which is what makes the larger capacities possible. If so many errors are created in this system that a special mechanism is required to correct them just to make it function, it means that something in this whole system is simply fucked up, sorry for the expression.

  4. It's like the check digit in an ID number:
    it is not part of the number itself, but if someone gets one of the digits wrong, the calculated check digit differs from the specified one (see the sketch below)...

    One of the ways to guarantee transmitted packets is to send the transmission several times. With large volumes, the time cost becomes burdensome...
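As a concrete instance of the check-digit idea, here is a minimal sketch of the Luhn algorithm used by payment card numbers (Israeli ID numbers use a very similar scheme); the sample digits are arbitrary:

```python
# Luhn check digit: computed from the other digits, so mistyping any single
# digit makes the recomputed check digit disagree with the stored one.

def luhn_check_digit(digits: str) -> int:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

number = "7992739871"
check = luhn_check_digit(number)
print(check)                     # 3 -> full number is 79927398713
# A one-digit typo is caught: the recomputed check digit no longer matches.
print(luhn_check_digit("7992739971") == check)   # False
```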

  5. Error control is a mechanism that exists at almost every point where information passes from place to place. The more efficient this mechanism is, and the faster and better the algorithms it is based on, the faster and more reliable the transfer of information becomes (communication, storage devices, etc.).
    As a rule (and very simplistically), the mechanism calculates a mathematical value (a kind of digital signature) over a block of information and appends the result to what is sent. The receiving side performs the same calculation, generates the signature and compares it to the one it received; if there is a difference, the packet is sent again (a sketch of this loop appears after this comment).
    As mentioned, error control is something that exists, and it is likely that while this page was loading into your browser, tens or hundreds of such checks were carried out on your computer at the various levels of information processing.
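A minimal sketch of the compute-compare-resend loop described above, using Python's standard zlib.crc32 as the "signature"; the link model and function names are invented for illustration. Strictly speaking this is error detection plus retransmission (ARQ) rather than correcting data in place:

```python
import random
import zlib

# ARQ sketch: the sender attaches a CRC32 "signature"; the receiver
# recomputes it and, on a mismatch, asks for the packet again.

def send_until_clean(payload: bytes, link, max_tries: int = 10) -> bytes:
    for _ in range(max_tries):
        received, tag = link(payload, zlib.crc32(payload))
        if zlib.crc32(received) == tag:     # signature matches: accept
            return received
    raise RuntimeError("link too unreliable")

def noisy_link(data: bytes, tag: int):
    """Invented link model: flips one bit in half of all packets."""
    corrupted = bytearray(data)
    if random.random() < 0.5:
        corrupted[random.randrange(len(corrupted))] ^= 0x01
    return bytes(corrupted), tag            # the tag itself arrives intact

random.seed(1)
print(send_until_clean(b"packet payload", noisy_link))
```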


  6. The subject of "advanced error correction" is emphasized so strongly here.... and to me this is very suspicious. In my opinion, in a reliable system errors should not occur at all; an error correction mechanism, no matter how advanced, is simply a kind of band-aid covering a problem that should not have appeared in the system in the first place.

    What if the file I'm trying to save on the disk-on-key is a binary data file that my software generated completely at random, and errors occurred while copying to the memory chip? How is this intelligent error correction mechanism supposed to guess that a mistake was made, if the copied data is random to begin with and there is no correlation between one value and the next? If one or two bits accidentally flipped from 0 to 1 or vice versa, how can the system know the data is wrong?

    I don't know about you, but I would prefer a system that doesn't create errors that need correcting in the first place.

    It just doesn't sound believable to me.

  7. Ramot is interesting, but behind the organization there is surely a chubby professor. Is it Litsyn or Snyders?

    Does anyone have any idea what the advantage of this technology is in terms of size and performance?
