Comprehensive coverage

Ideas that will change the world / Scientific American

Nine new technologies that will bring about change / the editors

Lung on a chip. Image courtesy of Wyss Institute for Biologically Inspired Engineering

Revolutions sometimes start with the simplest ideas. When a young inventor named Steve Jobs wanted to put computing power in the hands of "people who have no experience with computers and who are not really interested in gaining such experience," he moved us from the clumsy technology of mainframe computers and command lines to the elegant ease of the Macintosh and, later, the iPhone. His idea helped change our relationship with technology forever.

What other simple but revolutionary ideas are waiting in the labs for the right moment to make it big? We found nine of them, and in the following pages we explain what they are and how they might shake up the existing order: computers that work like brains, batteries that can be filled at the gas station, currency without borders and more. This collection is our tribute to the power of the simple idea.

medicine

Continuous personal monitoring of your health / Elizabeth Svoboda

Your smartphone will be able to monitor your vital signs and alert you in real time when the first sign of a problem appears

Most people go to the doctor when they feel chest pain or a suspicious lump, but such signs of illness often appear too late. Catching symptoms early requires continuous monitoring, which is exactly what a cell phone can provide. Medical monitoring systems that take advantage of the continuous flow of data from cell phones could help eliminate the dangerous delay between the onset of symptoms and diagnosis. Mobile devices could also help the health care system identify and treat problems before they become too severe, and too expensive, to treat effectively. In theory, such continuous alert systems could cut medical expenses for the treatment of chronic diseases by 75% and extend overall life expectancy by preventing millions of emergencies.

The online app stores are full of health apps that are little more than gimmicks, but among them are a few exceptional systems that promise to help users manage chronic diseases or spot medical alarm bells. AliveCor's iPhone ECG is a plastic case for the cell phone that is expected to be approved by the US Food and Drug Administration in early 2012. The case has two metal electrodes on the back that record the heart's activity when the user holds the device in both hands or presses it to the chest. This electrocardiography (ECG) data can be transmitted in real time to patients, their families and their doctors, alerting them to heart rhythm irregularities. "Not only does it provide early warning," says the device's developer, biomedical engineer David Albert, "but it also does so without the costs associated with conventional ECG machines."

The French company Withings has developed a blood pressure monitor that also works with an iPhone. The user puts on the slim white cuff, and within 30 seconds the reading appears on the screen; if it is abnormal, a warning appears as well. WellDoc's FDA-approved DiabetesManager application lets diabetics enter a variety of real-time data into their cell phone, such as blood glucose level, carbohydrate intake and medications taken. The software analyzes the data and recommends what to do to maintain healthy sugar levels (for example, take insulin or eat something). A trial whose results were published in September 2011 showed that DiabetesManager users achieved better long-term glucose control than patients who did not use the application.
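DiabetesManager's actual decision logic is proprietary; the sketch below only illustrates the kind of threshold-based feedback such an app might apply. The function name and cutoff values are invented for illustration and are in no way medical guidance.

```python
def glucose_advice(mg_dl: float) -> str:
    """Map a blood glucose reading (mg/dL) to a coarse recommendation.
    Thresholds here are illustrative placeholders, not medical advice."""
    if mg_dl < 70:
        return "low: eat fast-acting carbohydrates"
    if mg_dl <= 180:
        return "in range: no action needed"
    return "high: follow your insulin plan"

# A stream of readings, as a monitoring app might receive them
for reading in (65, 110, 240):
    print(reading, "->", glucose_advice(reading))
```

A real system would combine many such signals (carbohydrate intake, medication timing) before recommending anything, which is what makes the software analysis valuable.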

Until now these new systems have operated separately from one another, and many are still under development. Even so, experts say these wireless devices mark the beginning of an era in which mobile health monitoring systems will work in concert, giving consumers and their doctors a comprehensive, data-driven picture of overall health. "Technically, you can press a button [on your phone] and say 'I want to see my vital signs in real time,'" says Eric Topol, director of the Scripps Translational Science Institute.

The big obstacle is sensor technology: traditional blood glucose monitoring requires piercing the skin, and few people would agree to wear a blood pressure cuff or a taped-on electrode everywhere, all the time. More convenient alternatives are on the way, however. Japanese scientists recently created injectable phosphorescent fibers that monitor blood glucose levels. According to Topol, a future array of nanoparticle-based sensors communicating with smartphones could reliably monitor vital signs and, even more important, detect telltale signs of disease, such as antibodies, earlier. The sensors could, for example, detect markers of malignant tumors, send an immediate alert to the mobile device and give patients the option of starting preventive chemotherapy before the cancer takes root. What is more, the simpler mobile health monitoring becomes, the more likely consumers are to use it. A 2010 survey showed that 40% of Americans are willing to pay a monthly subscription fee for a mobile device that sends blood pressure, glucose or heart rate data to their doctors.

Paul Sonnier, vice president of the Wireless-Life Sciences Alliance, says early detection of health problems will become simpler still when mobile health monitoring is combined with genetic analysis. If, for example, a user carries a gene that raises his risk of developing diabetes or cancer early in life, he could wear a hidden sensor that would notify him through his cell phone of any abnormal development. "They will have an embedded nanosensor that will warn even before the insulin-secreting pancreatic cells are attacked or before the first cancer cell appears," says Topol. If mobile health monitoring systems fulfill their potential, they could serve as a watchman who never sleeps, protecting people before they even know they are in danger.

computerization

A chip that thinks like a brain / Christopher Mims

Neural computers will excel at the tasks that ordinary computers cannot manage

Dharmendra S. Modha is apparently the only chip designer in the world whose team also includes a psychiatrist, and not with the aim of keeping the engineers sane. He and his colleagues, from five universities and a similar number of IBM laboratories, are developing a chip based on how nerve cells work.

They call their research "cognitive computing," and its first products, two chips each containing 256 artificial neurons, were unveiled in August 2011. So far the chips are capable of little more than beating visitors at a game of Pong or navigating a simple maze, but the ultimate goal is more ambitious: to pack the neural computing power of the human brain into a small silicon package. The SyNAPSE program, funded by the US Defense Advanced Research Projects Agency (DARPA), aims to build a processor containing 10 billion neurons and 100 trillion synapses, on the order of one hemisphere of the human brain. The scientists believe the processor will occupy about two liters and draw about as much electricity as ten 100-watt lightbulbs.

Despite appearances, Modha insists that he is not trying to create a mind. Instead his team is trying to create an alternative to the architecture common to almost all computers since the day the computer was invented. Conventional chips must funnel instructions and data through a single narrow channel, which limits their maximum speed. In Modha's alternative, each artificial neuron has its own channel, and massive parallel-processing capability is built into the chip from the start. "We are building a universal platform," says Modha, "a technological platform that can serve as a basis for a wide variety of applications."
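To see why neuron-style processing parallelizes so naturally, here is a minimal sketch, not IBM's design, of a population of leaky integrate-and-fire neurons, a standard textbook model: each neuron integrates its own input independently of the others, so no shared instruction channel is needed and the whole population can be updated at once.

```python
def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """One synchronous update of a population of leaky integrate-and-fire
    neurons. Each neuron only touches its own state, which is what makes
    the update trivially parallel across neurons."""
    new_v, spikes = [], []
    for vj, ij in zip(v, i_in):
        vj = leak * vj + ij        # leak a little, then add the input current
        if vj >= threshold:        # membrane potential crossed threshold:
            spikes.append(1)       # the neuron fires...
            vj = 0.0               # ...and resets
        else:
            spikes.append(0)
        new_v.append(vj)
    return new_v, spikes

# Three neurons with different starting potentials, identical input
v, spikes = lif_step([0.0, 0.5, 1.0], [0.2, 0.2, 0.2])
```

In hardware, each such neuron becomes its own tiny circuit with its own channel; there is no central bottleneck to saturate.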

"If the approach is successful, it will be the culmination of 30 years of research on computational neural networks," says Don Edwards, a neuroscientist at Georgia State University. Even IBM's competitors are impressed. "Neural-like processing has the potential to solve problems that are difficult, and some would say impossible, to deal with using conventional systems," says Barry Bolding, vice president of Seattle-based supercomputer maker Cray.

Modha emphasizes that cognitive computing architectures will not replace conventional computers but complement them. They will preprocess information from the real, noisy world and convert it into symbols that ordinary computers can handle. Modha's chips will excel, for example, at pattern recognition, such as picking out a person's face in a crowd, and will pass that person's identity along to a conventional computer.

If all this sounds a little too much like the beginning of the machines' rebellion, you can perhaps take solace in the fact that these chips will be bad at math. "Just as it is difficult to represent the brain in today's computers," says Modha, "so the operations of addition and subtraction, at which computers excel, will be performed very inefficiently in a brain-like network. Neither system can replace the other."

money

A wallet in your skin / Christopher Mims

Who needs cell-phone payment systems when a wave of the hand is enough?

When students in Florida's Pinellas County schools load their lunch trays in the cafeteria and carry them to the register, they simply wave a hand and sit down to eat with their friends. The district's schools have installed palm sensors, each about 6.5 square centimeters in size, at the registers; the sensors identify each student by the pattern of blood vessels in his or her palm. Buying a meal requires no cash and no card: the hand itself is the wallet.

The system, Fujitsu's PalmSecure, shortens the time students spend in line (waiting times have been cut in half since the program began), an important consideration in a school where lunch period lasts only half an hour. A health care system in the Carolinas that operates more than 30 hospitals uses the technology to identify its 1.8 million patients even when they are unconscious. The system also serves as an additional means of identity verification for transactions at Japan's Bank of Tokyo-Mitsubishi UFJ.

There are many physical characteristics by which a machine can identify a person, but only a few are unique enough, and accessible enough, to allow such simple use. Fingerprints and facial features are not as unique as we have been led to believe and can produce false matches; they are also easy to fake. Irises are indeed unique, but photographing them requires staring into a device without blinking for several seconds, a process that can easily go wrong and that feels invasive. In contrast, the three-dimensional arrangement of the blood vessels in the palm varies greatly from person to person and is easy to read using harmless near-infrared light. So why are we still paying with credit cards?
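One simple way to picture how such a match might work, purely as an illustration (Fujitsu's actual algorithm is not public): reduce each enrolled palm to a binary feature template and accept a live scan whose Hamming distance to the stored template falls below a noise threshold.

```python
def hamming(a: str, b: str) -> int:
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def matches(stored: str, scanned: str, max_fraction: float = 0.1) -> bool:
    """Accept the scan if it differs from the enrolled template by no
    more than a small fraction of its bits (tolerating sensor noise)."""
    return hamming(stored, scanned) <= max_fraction * len(stored)

enrolled  = "1011001110100101"  # toy 16-bit template for one palm
good_scan = "1011001110100111"  # same palm, one bit of sensor noise
bad_scan  = "0100110001011010"  # a different palm entirely
```

Real vein templates are far longer, which is what drives the false-match rate low enough for payments and patient identification.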

Security expert Bruce Schneier argues that the only barrier to such a "digital wallet" is that banks and technology companies are in no hurry to adopt it. "A credit card is just a pointer to a database," Schneier says. "It comes in a convenient rectangular shape, but it doesn't have to. The barriers to entering the market are not about security, because security is a secondary consideration."

As soon as a large retail chain or government agency deploys such a system (imagine boarding a bus and paying with a wave of your hand), it could take hold everywhere. The financial industry already absorbs a considerable amount of fraud and misidentification, and a shift to biometrics would probably not change that. But it would make paying as easy as waving a hand.

computerization

Self-aware computers / Francie Diep

If humans need to manage their time, why shouldn't computers? New software will keep them running smoothly

Jim Holt's smartphone isn't that smart. It has a map application that Holt uses to find restaurants, but when he finishes searching, the application keeps consuming so much power and memory that Holt, an engineer at Freescale Semiconductor, can't even do something as simple as send a text message.

Holt's phone is a representative example of a general problem with computer systems today: one part of the system does not know what the other parts are doing. Each application devours whatever resources it can, and the operating system is too obtuse to notice that the one application the user actually cares about at the moment has been pushed to the sidelines. The problem exists not only in smartphones but also in personal computers and supercomputers, and it will only grow more complicated as more systems come to rely on multicore processors. The future of computing will not live up to the expectations raised by its glorious past unless a computer's various components learn to share information about their abilities and needs.

Holt and his colleagues in Project Angstrom, a research consortium led by the Massachusetts Institute of Technology (MIT), have an answer: a "self-aware" computer. In ordinary computers, the hardware, the software and the operating system (which mediates between the two) do not know exactly what the others are doing, even though they all run inside the same machine. The operating system does not know, for example, that a video player application is struggling, although the viewer will certainly notice the stuttering picture.

In 2011 a team at MIT created research software called HeartBeats, which monitors the state of every application on the computer. It can tell, for example, that a video program is running at a sluggish 15 frames per second rather than the optimal 30.

Ultimately the idea is to create operating systems that can identify applications running too slowly and weigh possible fixes. If the battery is full, for example, more computing power can be directed to the application. If not, the operating system can instruct the application to switch to a more frugal mode of operation at the expense of quality. The system learns from experience: if a fault recurs, it can fix it faster. A self-aware computer could manage complex requests such as "run these three programs, but give priority to the first" or "save as much energy as possible, as long as it does not hurt the quality of the movie I'm watching."

The next step is to design a supervisory operating system that can adjust the amount of resources each piece of software receives. If the movie plays too slowly, the system allocates more power to it. If, on the other hand, it is running at 40 frames per second, the computer can redirect some of those resources elsewhere, because to the human eye movies look no better at that rate than at 30 frames per second. "We can save 40% of the electricity we consume with today's methods," says Henry Hoffmann, a computer science PhD student at MIT who is working on the software.
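The resource-adjustment loop described above can be sketched as a simple feedback rule. The function, the step size and the bounds here are invented for illustration and are not taken from the MIT software.

```python
def adjust_share(fps_observed, fps_target=30.0, share=0.5,
                 step=0.1, min_share=0.1, max_share=1.0):
    """One round of heartbeat-driven feedback: grant the application a
    larger share of resources when it runs below its target frame rate,
    and reclaim resources when it runs above it."""
    if fps_observed < fps_target:
        share = min(max_share, share + step)   # too slow: give it more
    elif fps_observed > fps_target:
        share = max(min_share, share - step)   # too fast: take some back
    return share

share = adjust_share(15, share=0.5)    # sluggish video gets a bigger share
share = adjust_share(40, share=share)  # over-target video gives some back
```

A real scheduler would also weigh battery state and competing applications, as the paragraph above describes, but the feedback skeleton is the same.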

According to Anant Agarwal, the project's chief scientist, self-aware systems will not only make computers smarter, they may be essential to managing the more complex computers of the future. Over the past decade engineers have added more and more basic calculation units, called "cores," to computers. Most computers today have two or four cores, but in the future the number will climb into the tens or even thousands. That will make dividing calculation tasks among the cores, something programmers now do by hand, nearly impossible. A self-aware system would lift this burden from programmers and automatically determine how software uses the cores.

The ability to manage so many cores could lift calculation speeds to a whole new level and sustain the long-running trend of ever faster computers. "As we have more cores, we need systems that have some level of self-awareness," says John Villasenor, a professor of electrical engineering at the University of California, Los Angeles, who is not involved in the Angstrom project. "I believe that in the coming years we will see some of that."

money

Cross-border currency / Morgen Peck

The first digital currency will eliminate the need for a middleman and preserve the user's anonymity

Imagine walking up to a falafel stand, ordering half a portion, tossing a few shekels on the counter, and hearing the vendor say: "Excellent. Now all I need is your name, an address to send the invoice to, a phone number, your mother's maiden name and your bank account number." Most customers would balk at such demands. Yet that is exactly how we pay for the products and services we buy online.

No payment passes from buyer to merchant online in a way as simple, direct and anonymous as cash. Instead we entrust transactions to financial entities such as credit-card companies (which keep a percentage of the sale for themselves, and your personal details as well). All this may change with the adoption of Bitcoin, a completely digital currency as liquid and anonymous as cash. "It's like taking a dollar bill, shoving it into the computer and sending it to its destination over the Internet," says Gavin Andresen, one of the leaders of the Bitcoin network.

Bitcoins are bits: strings of code that can be transferred from user to user over a peer-to-peer (P2P) network. Normally a string of bits can be copied over and over, ad infinitum, a property that would render any currency worthless, but a bitcoin can be spent only once. Strong encryption keeps Bitcoin safe from would-be thieves, and the P2P network eliminates the need for a central gatekeeper such as Visa or PayPal. The system puts authority in the hands of users rather than financial middlemen.

Bitcoin adapts ideas from well-known cryptographic software. The software assigns each user two unique codes: a private key, stored only on the user's computer, and a public address visible to everyone. The key and the address are mathematically related, but deriving the key from the address is practically impossible. If I have 50 bitcoins that I want to transfer to a friend, the software combines my key with my friend's address. Other people on the network use the relationship between my public address and my private key to verify that I own the bitcoins I intend to spend, and then they record the transfer by racing to solve a computational puzzle designed to prevent the same bitcoin from being spent twice. The computer that finishes the calculation first occasionally wins a few new bitcoins, a reward that recruits a diverse crowd of users to maintain the system.
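The structure of such a transfer can be sketched in a few lines. Real Bitcoin uses ECDSA key pairs and proof-of-work mining; in this toy version an HMAC over the previous transaction ID and the recipient's address stands in for the digital signature, purely to show how each transfer chains to the one before it. All names and values here are invented.

```python
import hashlib
import hmac

def transfer(prev_txid: str, sender_key: bytes, recipient_addr: str) -> dict:
    """Build a transfer record: the previous transaction's ID plus the
    recipient's address, 'signed' with the sender's private key."""
    payload = (prev_txid + recipient_addr).encode()
    sig = hmac.new(sender_key, payload, hashlib.sha256).hexdigest()
    txid = hashlib.sha256(payload + sig.encode()).hexdigest()
    return {"prev": prev_txid, "to": recipient_addr, "sig": sig, "txid": txid}

def verify(tx: dict, sender_key: bytes) -> bool:
    """Check that the record was produced by the holder of sender_key."""
    payload = (tx["prev"] + tx["to"]).encode()
    expected = hmac.new(sender_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tx["sig"])

alice_key = b"alice-private-key"   # stands in for a real private key
tx = transfer("genesis", alice_key, "bob-public-address")
```

The crucial property this preserves from the real system: anyone holding the wrong key produces a different signature, so a forged transfer fails verification.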

The first recorded bitcoin purchase was a pizza, bought for 10,000 bitcoins in early 2010. Since then the bitcoin-dollar exchange rate has risen and fallen like the notes of a jazz melody. Because of the currency's volatility, few online merchants accept payment in bitcoin. For now the Bitcoin community is small but enthusiastic, much like the early users of the Internet.

materials engineering

Tiny mining / Sarah Fecht

Bacteria extract metals and clean up the leftover mess as well

Metal mining is a craft that has not changed much since the Bronze Age: to extract a valuable metal from ore, the ore must be heated together with some added material, such as coal. But this method demands a great deal of energy, making it too expensive when the metal concentration in the ore is relatively low.

Miners are increasingly turning to bacteria that can extract metal from low-grade ores cheaply and at ambient temperature. A mining company can use bacteria to recover up to 85% of the metal stored in ores whose metal content is below 1%, simply by seeding a heap of mine waste with the microbes and trickling dilute acid through it. Inside the heap, Acidithiobacillus or Leptospirillum bacteria derive their energy by oxidizing iron and sulfur. As they feed, they excrete iron ions and sulfuric acid, active agents that break down the rocky material and liberate the valuable metal.

Biological techniques are also used to clean the acidic liquids that leak out of old mines, recovering still more valuable metal in the process. Bacteria such as Desulfovibrio and Desulfotomaculum neutralize the acids and produce sulfide ions that bind to copper, nickel and other metals and pull them out of solution.

In recent years biological mining has expanded at an unprecedented rate as high-grade ore becomes increasingly rare. Almost 20% of the world's copper now comes from biological mining, and according to mining consultant Corale Brierley, the production rate has doubled since the mid-1990s. "Materials that the mining companies used to throw away are now considered ores," Brierley says.

The next step is to turn these microscopic sanitation workers loose on mine waste itself. David Barrie Johnson of Bangor University in Wales, who studies biological ways to treat acidic mine drainage, estimates that it will take another 20 years before bacterial mine cleanup begins to pay for itself. "In a world that is steadily reducing its dependence on carbon, we must look for more natural and more energy-efficient ways of doing things," Johnson says. "That's the long-term goal, and things are starting to move nicely in that direction."

agriculture

Field crops that do not need to be replanted / Christopher Mims

Perennial crops can stabilize the soil and increase yields. They may even be able to help fight climate change

Before the invention of agriculture, most of the planet's land area was covered with plants that live year after year. These perennial plants were slowly replaced by grain crops that must be re-sown every year. Today scientists are examining the possibility of turning the wheel back: creating perennial versions of common crops such as corn and wheat. If they succeed, farmland in some of the poorest places in the world could produce significantly higher yields. Such plants might also absorb some of the excess carbon in Earth's atmosphere.

Agricultural scientists have dreamed of replacing annual crops with their perennial brethren for decades, but according to agroecologist Jerry Glover, the genetic technology needed to do so has come into use only in the past ten or fifteen years. Perennials have many advantages over crops that must be replanted year after year: their deeper roots prevent erosion, helping the soil retain essential minerals such as phosphorus, and they need less fertilizer and less water. Industrial single-crop cultivation (sowing the same crop on the same land every year) releases carbon into the atmosphere, whereas a perennial field, which requires no tilling, can fix carbon.

Some farmers are already harvesting much larger crops after planting rows of a perennial legume known as pigeon pea between the rows of corn, their main crop. The legume is a much needed source of protein for these subsistence farmers, but it can also increase the amount of water stored in the soil and double the soil's carbon and nitrogen content without reducing the yield of a given plot.

Taking perennials to the next level, that is, growing them on the scale of ordinary field crops, will require a considerable scientific effort. Ed Buckler, a plant geneticist at Cornell University who plans to develop a perennial corn variety, believes it will take another five years to identify the genes responsible for the trait and another decade to breed a viable variety. "Even if we use the most advanced technologies, developing perennial corn will probably take about 20 years," Glover says.

Scientists are accelerating the development of perennials with advanced genetic-characterization technology. They can now quickly analyze the genomes of plants with desirable traits and look for connections between genes and those traits. After the first generation of plants sets seed, the researchers sequence DNA from the seedlings to identify, among thousands of seeds, the handful that retain the desired traits, instead of waiting years for the plants to mature.

Once perennial alternatives to annual field crops exist, their introduction into the fields could significantly affect the world's overall carbon balance. The secret is in their roots, which would fix, in every cubic meter of topsoil, an amount of carbon equivalent to 1% of the mass of that cube of dirt. According to calculations by Douglas Kell, chief executive of the UK's Biotechnology and Biological Sciences Research Council, if 2% of the world's annual crops were converted to perennials every year, the resulting drawdown might curb the rise of carbon dioxide in the atmosphere. Converting all the world's fields to perennial crops would sequester carbon dioxide in an amount equivalent to 118 parts per million, enough, in other words, to return the concentration of greenhouse gases in the atmosphere to its preindustrial level.
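A back-of-envelope version of that root-carbon arithmetic, using the article's 1% figure plus rough assumptions of ours for soil density, topsoil depth and global cropland area, lands in the same order of magnitude as the numbers quoted:

```python
# The 1% root-carbon figure comes from the article; everything else below
# is an illustrative assumption, so treat the result as a rough check only.
SOIL_DENSITY_KG_M3 = 1300    # typical bulk density of topsoil (assumed)
TOPSOIL_DEPTH_M = 1.0        # depth of the carbon-storing layer (assumed)
CROPLAND_M2 = 1.4e13         # ~1.4 billion hectares of cropland (assumed)
GT_C_PER_PPM_CO2 = 2.13      # approx. Gt of carbon per ppm of CO2

carbon_per_m3 = 0.01 * SOIL_DENSITY_KG_M3               # kg C per cubic meter
total_c_gt = carbon_per_m3 * TOPSOIL_DEPTH_M * CROPLAND_M2 / 1e12
ppm_offset = total_c_gt / GT_C_PER_PPM_CO2

print(round(total_c_gt), "Gt C, equivalent to about",
      round(ppm_offset), "ppm of CO2")
```

With different soil parameters the answer shifts; Kell's 118 ppm figure presumably rests on his own assumptions about soil depth and land area, but the estimate confirms that whole-field conversion would move atmospheric CO2 by tens of parts per million.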

energy

Liquid fuel for electric cars / Christopher Mims

A new type of battery could enable the replacement of fossil fuel with a nanotechnological crude

Better batteries are the key to building an electric car that can travel several hundred kilometers on a single charge, but technological progress has been maddeningly gradual, with no breakthroughs in sight. A new way of organizing the innards of the modern battery, however, may make it possible to double the amount of energy it stores.

The idea occurred to Yet-Ming Chiang, a professor at the Massachusetts Institute of Technology (MIT), while he was on sabbatical at A123 Systems, a battery maker he co-founded in 2001. What if there were a way to combine the virtues of so-called flow batteries, in which a liquid electrolyte flows through an electrochemical cell, with the energy density of the modern lithium-ion batteries already used in electrical appliances?

Flow batteries have low energy density (a measure of how much energy a battery can store). Their one great advantage is that they are easy to scale up: simply build larger tanks for the energy-storing liquid.
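That scaling argument is just linear arithmetic, sketched below. The energy-per-liter value is an illustrative assumption, not a measured figure for any real electrolyte.

```python
def stored_energy_kwh(tank_liters: float, wh_per_liter: float = 20.0) -> float:
    """Energy held in a flow battery's electrolyte tank. It grows linearly
    with tank volume, independent of the electrochemical cell stack, which
    is why scaling up only means building a bigger tank."""
    return tank_liters * wh_per_liter / 1000.0

small = stored_energy_kwh(100)    # a modest tank
large = stored_energy_kwh(1000)   # ten times the tank, ten times the energy
```

The cell stack sets the battery's power (how fast energy moves in and out); the tank sets its energy. Decoupling the two is the design freedom Chiang wanted to keep.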

Chiang and his colleagues built a prototype battery with an energy density similar to that of a standard lithium-ion battery but with a liquid storage medium, like a flow battery. The liquid, which Chiang calls "Cambridge crude," is a black slurry of nanoparticles and grains of energy-storing compounds.

If you could examine Cambridge crude under an electron microscope, you would see particles the size of dust grains made of the same materials that form the positive and negative electrodes of a lithium-ion battery: lithium cobalt oxide for the positive electrode, for example, and graphite for the negative.

Floating in the liquid between these relatively large particles are carbon nanoparticles, the invention's secret sauce. The nanoparticles assemble into a spongelike network of "liquid wires" that connect to the larger grains, where the ions and electrons are stored. The result is a liquid that can flow while its nanoscopic scaffolding maintains open pathways for electrons to travel between the grains of the energy-storing medium.

"This electrical composite is truly unique," Chiang says. "I don't know of anything like it." The fact that the battery's active material can flow opens some intriguing possibilities, including the idea that cars powered by such batteries could pull into a filling station and recharge by pumping in fresh Cambridge crude. One of Chiang's partners in the project, W. Craig Carter of MIT, raised the idea that users could swap tanks full of electrolyte at the station, as people do with gas cylinders, instead of plugging the battery into an outlet to charge.

Filling and draining these batteries with charged electrolyte is not, however, the first commercial use Chiang is pursuing. He, Carter and entrepreneur Throop Wilder have founded a company, 24M Technologies, to bring the team's work to market. Carter and Chiang will not reveal what the company's first product will be, but they emphasize how well the batteries suit grid-storage applications. Even a relatively small reservoir can greatly improve the utilization of an intermittent energy source such as wind or solar power, Chiang explains. Batteries built on his model at power-plant scale would have an energy density ten times that of the batteries ordinarily used for such storage, making them more compact and perhaps even cheaper.

But Cambridge crude is still far from market. "The skeptics might say that this new model poses more challenges to overcome than advantages the eventual solution would offer," says an expert who heads a large university research program on energy storage and who agreed to comment only on condition of anonymity, for fear of offending colleagues. The extra machinery needed to pump the liquid into and out of the battery cells adds unwanted mass to the system. "The weight and volume of the pumps, the storage vessels and the pipes, and the additional weight and volume needed to hold the electrolyte and the carbon additives, all of these could make [this technology heavier than] the leading technologies." The batteries may also prove less stable than ordinary lithium-ion batteries over time and over many charge-discharge cycles.

A more fundamental issue is the batteries' long charge time, two to four times longer than that of conventional batteries, Carter says, which complicates their use in cars, where energy must move quickly. One way around the difficulty is to add an ordinary battery or an ultracapacitor to the system; these devices can discharge their stored energy within seconds and can cover rapid power swings, such as those during braking and acceleration.
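The division of labor in such a hybrid can be sketched as a simple routing rule. The 50-kilowatt cutoff and the device names below are invented for illustration; a real controller would blend the two sources continuously.

```python
def route_power(demand_kw: float, transient: bool) -> str:
    """Decide which device serves a power request: brief high-power spikes
    go to the fast ultracapacitor, sustained loads to the slower
    semi-solid flow battery."""
    if transient and demand_kw > 50:   # e.g. hard acceleration or braking
        return "ultracapacitor"
    return "semi-solid battery"

burst = route_power(120, transient=True)    # regenerative-braking burst
cruise = route_power(20, transient=False)   # steady highway cruising
```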

Still, the new model holds promise. A system that stores energy in "particulate liquids" can accommodate almost any battery chemistry, says Yury Gogotsi, a materials engineer at Drexel University, so it could catalyze further innovation in the field. "It opens up a new way of designing batteries."

medicine

Nanoscale germicides / Elizabeth Svoboda

Tiny knives may be an important weapon in the fight against superbugs

Drug-resistant tuberculosis bacteria are sweeping across Europe, the World Health Organization warns. Suitable treatments are scarce: first-line antibiotics do not work on these evolved strains, and about 50% of those who contract the disease will die from it. The situation is similarly dismal in the war against other drug-resistant bacteria, such as MRSA, a staphylococcus strain that causes infections killing about 19,000 people in the US every year.

The possible salvation is a nanotechnological knife. Scientists at IBM's Almaden Research Laboratory have invented a nanoparticle capable of completely destroying bacterial cells by puncturing their membranes.

The particle's shell carries a positive electric charge, so it adheres to the negatively charged membranes of bacteria. "The particle comes, attaches, flips inside out and drills into the membrane," says Jim Hedrick, a materials scientist at IBM working on the project in collaboration with scientists from Singapore's Institute for Bioengineering and Nanotechnology. Once the bacterium's membrane is punctured, it deflates and shrinks like a punctured balloon. The nanoparticles are not harmful to humans; they do not touch red blood cells, for example, because the electrical charge of human cell membranes differs from that of bacterial membranes. When the nanostructures finish their work, enzymes break them down and they are excreted from the body.

Hedrick hopes that human trials will begin within the next few years. If the approach proves effective, doctors could spray a nanoparticle-laden gel or lotion on a patient's skin to prevent MRSA infection, and medical staff could inject the particles into the bloodstream to stop drug-resistant microorganisms that tend to spread through the body, such as the streptococcus bacteria that can cause sepsis and death. Even if the treatment succeeds, we will have to overcome the discomfort of imagining nanometer-scale drills circulating in our blood vessels. But defeating the most malicious bacteria on the planet was never going to be easy.

Break the walls of science and technology

Can collaboration between scientists and engineers on one side, and science fiction writers and artists on the other, help build a better future?

Only 10 years ago, almost anyone setting off on a long trip abroad would pack rolls of photographic film, document the trip sparingly so as not to go broke developing the photos, arrange the prints in an album, exhaust their friends with viewings (or, even worse, slide shows), store everything on a shelf and revisit the pictures, if at all, only years later.

The technological crystal ball presents a challenge to scientists and artists alike

In this issue (April-May 2012) we present, as every year, a selection of "ideas that will change the world" collected by the editors of Scientific American. Had we published such an article 10 years ago and described in it a new development, a tiny camera embedded in a mobile phone (a 1990s innovation that began to gain momentum at the start of the previous decade), you would probably have said it was a nice idea, but it is doubtful you would have understood why it would change the world. It would have taken a highly developed imagination and considerable audacity to claim that within 10 years no one would buy photographic film; that our entire world would be documented intensively; that images would be transmitted within seconds from anywhere to anywhere, by anyone; and that this would upend legal evidence and even accelerate political revolutions.

Who can guess what effects the innovations presented in this issue will have? It is very convenient to identify, quickly and reliably, each student in the school lunch line with a wave of the hand, but how will that affect privacy in a world where the concept is already losing its meaning? What effect will continuous monitoring of our vital signs have on a human culture in which the medicalization of everyday life keeps growing? How will adding 5 to 10 years to our life expectancy affect us? Who among us has an imagination developed enough to predict how these technological innovations will combine and shape our lives?

When you read the futuristic articles in front of you, you will see that they are painted in the hopeful pink of their developers - scientists and engineers who want to make our world better, faster, more efficient, healthier and greener. But will these hoped-for results be achieved? Who can look into the technological crystal ball and warn us of the medical, ethical, legal, environmental and cultural dangers?

The answer can come from a familiar but, in our opinion, underappreciated source: art, and more specifically science fiction in literature and film. We suggest watching one of the most successful and thought-provoking examples of recent years, the new British TV series "Black Mirror," which deals with possible near futures. The second episode, for example, describes a closed world in which every action is monitored and scored, from brushing your teeth in the morning to watching advertisements, and the accumulated points serve as money. Something similar was written here a year ago, in the previous edition of ideas that will change the world: "In the not-too-distant future, when you stand in front of the bathroom mirror brushing your teeth, you may see before you, next to the morning news headlines, a scoreboard ranking how much carbon your house emits compared with your neighbors' houses. The electric toothbrush will beep to let you know that brushing twice a day for the past six months has earned you enough points for a 10% discount on your next visit to the dentist." [The Game of Life, John Poulos, Scientific American Israel, April-May 2011]

It seems as if the creators of the series read the articles in Scientific American carefully and chose to hold up a black mirror against the (too?) rosy mirror of the people of technology, engineering and science.

And this counterweight is, in our opinion, necessary. We would like to see closer collaboration between artists and creators on the one hand and engineers and scientists on the other, working together on designing the future, a collaboration that could begin even in schools and universities. Why shouldn't film students who also study a science subject (and there are some) combine the fields and present their own version of "Black Mirror" as a final project? And why shouldn't doctoral students in physics or graduate students in electrical engineering try creative writing workshops and exercise their imaginations in fiction?

And perhaps one of our readers will take up the gauntlet and send us a story inspired by one of the articles in this issue? We are waiting.

