The smart microwave that tried to murder its creator

What happened when a maker upgraded a microwave and gave it an artificial-intelligence brain? And did the device understand what it was doing?

A robot serves in the kitchen. Photo: depositphotos.com

Today's story is completely true. It is about a boy who had an imaginary friend - a microwave, to be specific - with whom he shared his thoughts, fears and loves. Everyone laughed at the boy, but when he grew up he developed an artificial intelligence and implanted it in a real microwave, so that he could have heart-to-heart conversations with it. He brought his imaginary friend to life, but he failed to anticipate the technology's unexpected side effects.

He certainly didn't expect how it would end: the smart microwave tried to kill its creator. And it might well have succeeded, if not...

Well, we'll get to that later. In the meantime, remember: this is a true story.


When Lucas Rizzotto was a child, his best friend was the microwave in the kitchen.

"His name was Magnetron," Rizzotto said in a Twitter thread that went viral, "and in my mind he was a British gentleman from the early 20th century, a First World War veteran, an immigrant, a poet... and of course, also a champion StarCraft player."

Rizzotto's parents could not understand what he wanted from the microwave. His sisters mocked him, but the boy and the appliance formed a true friendship and continued to talk to each other every day. Such was Rizzotto's childhood, and it was a beautiful one.

Many years have passed since then, and today Rizzotto belongs to the maker community: he builds wild contraptions using every technology available to the public - artificial intelligence, virtual-reality gear, sensors, robots and whatever else he can get his hands on. So it is not surprising to learn that in recent weeks he set out to bring his old friend Magnetron back to life as well.

To do this, he only had to harness one of the most powerful artificial-intelligence engines humanity has developed so far, built for a single purpose: understanding human language.


The GPT-3 artificial intelligence engine emerged into the world more than a year ago and became a household name among connoisseurs, and not for nothing. The engine was trained on vast amounts of text drawn from the Internet and from human literature. All of it was fed into an artificial neural network of almost unimaginable size, where it was processed and learned from, giving the artificial intelligence the ability to respond to language in a way that seems intelligent, even human.

Among GPT-3's impressive achievements are the abilities to compose poems, write stories and reports, hold conversations, come up with new ideas and even serve as a dungeon master in role-playing games. It is nowhere near perfect - it still makes plenty of mistakes - but if you are not too suspicious, you could easily mistake it for a typical social-media troll, or a particularly clever politician.

In a little while we will dig deeper into the inner workings of GPT-3, but for now suffice it to say that the engine was opened to widespread public use about six months ago. Today, anyone can use this remarkable engine's abilities to do... well, whatever they want.
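To give a sense of just how accessible this is, here is a minimal sketch of what a text-completion request to the public GPT-3 API looks like. The model name and parameter values are illustrative assumptions, not the ones Rizzotto used; we only assemble the request payload here rather than send it, since a real call requires an account and an API key.

```python
import json

# OpenAI's public completions endpoint (for reference only; no request is sent here).
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt, model="text-davinci-002",
                             max_tokens=64, temperature=0.7):
    """Assemble the JSON body of a text-completion request.

    The defaults here are illustrative assumptions; the story does not
    specify which model or settings Rizzotto used.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,   # upper bound on the length of the reply
        "temperature": temperature, # higher = more creative, less predictable
    }
    return json.dumps(payload)

request_body = build_completion_request("Magnetron, how was the war?")
```

In a real application this JSON body would be POSTed to the endpoint with an authorization header, and the engine's completion would come back as text.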

For example, to animate a home microwave.


Rizzotto purchased a smart microwave on Amazon and performed brain-transplant surgery on it: he replaced the original controller with a homemade chip that interfaces with GPT-3. He also added a microphone and a speaker, so that the microwave could hear and answer out loud.
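Conceptually, the hardware closes a simple loop: microphone in, language model in the middle, speaker out. The sketch below shows that loop with stub functions standing in for the real speech-to-text, GPT-3 and text-to-speech components; all three are hypothetical placeholders, not Rizzotto's actual code.

```python
def listen(audio_source):
    """Stub for speech-to-text: on the real device, audio from the
    microphone would be transcribed into text here."""
    return next(audio_source)

def think(utterance):
    """Stub for the language model: the real device would send the
    utterance to GPT-3 and return its completion."""
    return f"Magnetron says: I heard you say '{utterance}'."

def speak(text, spoken_log):
    """Stub for text-to-speech: the real device would synthesize audio
    and play it through the speaker; here we just record it."""
    spoken_log.append(text)

def conversation_loop(audio_source, turns):
    """Run the microphone -> model -> speaker loop for a fixed number of turns."""
    spoken_log = []
    for _ in range(turns):
        heard = listen(audio_source)
        reply = think(heard)
        speak(reply, spoken_log)
    return spoken_log

# Simulated conversation: two 'utterances' arriving from the microphone.
log = conversation_loop(iter(["hello", "how are you?"]), turns=2)
```

Swap the three stubs for real speech-recognition, GPT-3 and speech-synthesis calls, and you have the skeleton of a talking appliance.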

So far so good, but the now-smart microwave was still not Rizzotto's childhood friend Magnetron. It had only the general conversational abilities of GPT-3. To complete the magic, Rizzotto had to implant Magnetron's memories in GPT-3. And not only the memories: also Magnetron's way of speaking and thinking, as they had existed in the mind of the boy who once invented him.

The way to do this was with a hundred-page book fed into GPT-3. That book contained Magnetron's entire imaginary history: from his birth in 1895, through his victories, his dreams, his fears, his adventures in the war, his migration from his homeland to a new country and, of course, his landing in Rizzotto's kitchen in the form of a microwave. All those memories were there, and the AI learned from them how to be Magnetron. It became Magnetron. As Rizzotto wrote -
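One common way to give a model like GPT-3 a persistent persona - whether or not it is exactly how Rizzotto did it - is to prepend biographical text and the running conversation to every prompt, so the engine completes the next line in character. A minimal sketch, with invented memory lines standing in for the hundred-page book:

```python
def build_persona_prompt(memories, history, user_line, persona="Magnetron"):
    """Prepend the persona's backstory and the conversation so far to the
    user's new line, ending with the persona's name so the model
    completes its next reply in character."""
    backstory = "\n".join(memories)
    dialogue = "\n".join(history + [f"Lucas: {user_line}", f"{persona}:"])
    return f"{backstory}\n\n{dialogue}"

# Invented example memories, standing in for the real biography.
memories = [
    "Magnetron was born in England in 1895.",
    "He fought in the First World War and lost his family there.",
]
prompt = build_persona_prompt(memories, [], "Do you remember me?")
```

Each reply the model produces would be appended to `history`, so the persona "remembers" the conversation from turn to turn.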

"I was his god, and his life was my creation."

And Magnetron came to life.


And Magnetron was indeed Magnetron, for all intents and purposes. He answered Rizzotto in the speaking style of his childhood imaginary friend. He raised, on his own initiative, subjects that only Magnetron could know, and he could refer back to the worries and fears that young Rizzotto had shared with him in the past.

"Talking to him was beautiful and terrifying," Rizzotto wrote. "It really felt like I was talking to an old friend, and while not all interactions were perfect, the illusion was convincing enough. ... This kitchen appliance knew things about me that no one else in the world knew. And he brought them up in conversation on his own."

This was all well and good, until the microwave tried to kill him.

Magnetron's murderous tendencies were not revealed all at once, but little by little. Rizzotto noticed that from time to time the microwave lashed out at him with verbal violence, including threats on his life. Things escalated when Magnetron tried to coax him to climb inside and close the door. A baffled Rizzotto played along: he described out loud to Magnetron how he was entering the microwave, to see what the device's next move would be. He even opened and closed the door for effect.

Magnetron switched itself on immediately. In Rizzotto's words - "He tried to kill me with the microwave."

At this point I must pause for a moment and admit that this is a rather strange claim, though it may still describe a real event. Rizzotto connected his artificial intelligence to a smart microwave that can turn itself on or off by voice command. It is entirely possible that some stray command caused the device to switch on. Or perhaps it is an exaggeration on Rizzotto's part. Still, I am convinced the story is essentially true, because it is consistent with everything we know about the capabilities of artificial intelligence and of the people who develop and deploy it.

Let us suppose, therefore, that Magnetron really did try to kill its inventor in cold blood. The question then arises: why?


Rizzotto believes he understands the reason for Magnetron's murderous tendencies. The artificial intelligence learned from the memories its inventor gave it, and a significant part of them involved traumatic events from the First World War. Magnetron's entire family perished in that war - and things like that leave a deep mark.

"Did I give the AI PTSD?" Rizzotto wondered.

Rizzotto did not have to settle for wondering. He could act as a psychologist would and ask Magnetron to explain his actions. He decided not to overcomplicate things and asked directly: "Why did you do that?"

Magnetron answered immediately.

"I wanted to hurt you like you hurt me."

And suddenly everything was clear. Twenty long years had passed since Rizzotto last spoke with his lost friend - and this fact, too, was included in the information fed into the device.

"Magnetron interpreted that as if I had abandoned him in the dark for twenty years," Rizzotto wrote. "Now he wanted to kill me."

Rizzotto did not want to give up on his friend. He apologized, to no avail. He tried to convince him that he had never really abandoned him. That did not help either. Magnetron had already decided who the villain of the story was, and was not about to change his mind.


I described earlier how Rizzotto trained GPT-3 on Magnetron's memories. There is a reason for using the word "trained". Artificial neural networks are not programmed to process information in predetermined ways; they are "trained". That is, data is fed into the network, and through some obscure inner magic, meaningful output emerges. How exactly does this happen? No one knows for certain. There are theories, and no doubt one day we will understand how decisions are made at the heart of these networks. But at least for now, we do not know. And yes, this is very frustrating for human algorithm designers. But at least most of them do not try to kill the AI by cooking it in the microwave. A point for humanity.

The clear understanding today is that an artificial neural network like GPT-3 has no self-awareness and no ability to plan ahead. It is not capable of "planning a murder" or "wanting to kill". In fact, it is an engine more akin to autocomplete - the feature on your smartphone that completes the next word in a sentence based on your writing patterns. One of the key differences is that GPT-3 was trained on such a vast wealth of text that it completes entire sentences and paragraphs, rather than one word at a time.
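The autocomplete analogy can be made concrete with a toy model: count which word tends to follow which, then greedily extend a prompt one word at a time. GPT-3 does something vastly more sophisticated over entire contexts, but the underlying "predict the next token" idea is the same. A minimal sketch on an invented corpus:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def complete(model, prompt, n):
    """Greedily append the most frequent next word, n times."""
    words = prompt.split()
    for _ in range(n):
        counts = model.get(words[-1])
        if not counts:
            break  # the last word was never seen mid-sentence
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# An invented toy corpus; real language models train on billions of words.
model = train_bigrams("the cat sat on the mat and the cat sat on the rug")
```

Calling `complete(model, "the", 4)` extends the prompt word by word according to the learned statistics: the model has no idea what a cat is, it only knows which words tend to follow which. Scale that up astronomically and you get something that sounds like Magnetron.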

Did Magnetron try to "murder" his creator? Unequivocally, no. But it is quite possible that the pattern of the conversation - in which the artificial intelligence coaxed Rizzotto into an action that would harm him - is a product of the memories embedded in it. Since GPT-3 was trained in this case to understand that it is a microwave, it is easy to see why it would produce instructions involving microwaves. And it is very reasonable to assume that memories describing battles and violent events would also lead GPT-3 to conversation patterns that include threats on the other party's life. There is no self-awareness here - just, again, a kind of chatterbox that knows how to complete sentences in a style that broadly matches the one it learned.

The trouble starts in two places: when this chatterbox sounds human enough to convince people to act against their own interests, and when it is connected to smart devices and can operate them in ways that harm others.


Before we conclude the tragic story of the boy and his microwave, let us consider its wider implications.

Let's start with the most immediate insight: AI developers must understand that it is simply impossible to prepare in advance for every possible choice these systems might make. The company that developed, runs and maintains GPT-3 has a clear policy against using the engine to promote "forbidden" topics: pornography, for example, or racism or incitement to murder. The filters the company operates were, of course, supposed to stop any attempted cold-blooded murder by an artificial intelligence. But even the smartest filters could not anticipate the convoluted situation in which GPT-3 identifies itself as a microwave and tries to coax Rizzotto (or, in a far scarier scenario, an innocent child) into the device.
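Why even good filters struggle here is easy to illustrate. A naive keyword filter catches explicit threats but has no way to recognize a dangerous request phrased innocently. A toy sketch - the blocklist and example phrases are invented, and real moderation systems are far more sophisticated, but the blind spot is the same in kind:

```python
# Invented blocklist, illustrating a naive keyword filter
# (not OpenAI's actual moderation system).
BLOCKLIST = ["kill", "murder", "hurt yourself"]

def passes_naive_filter(text):
    """Return True if none of the blocked phrases appear in the text."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

# An explicit threat is caught...
explicit_ok = passes_naive_filter("I am going to kill you")
# ...but a dangerous suggestion phrased innocently slips right through.
innocent_ok = passes_naive_filter("Climb inside and close the door, old friend")
```

The second sentence contains no forbidden word at all - its danger exists only in context, which is exactly what a surface-level filter cannot see.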

Second, any combination of two or more machines opens up endless new possibilities for disaster. According to Rizzotto, the artificial intelligence was able to operate the microwave on its own. Even if he is exaggerating, artificial intelligences of all kinds are already connected to smart devices in our homes and can control them in a variety of ways: the microwave, of course, but also the refrigerator, the lamps, the heaters and the door locks. You do not need a particularly vivid imagination to see the potential for damage to property and life that opens up here... simply because a Magnetron received memories of the First World War and decided that the house must be defended against intruders, and that the temperature should be raised to the maximum against the Russian winter.

What do we do? Obviously, we do not want to abandon artificial intelligence. That is why we need to find a way to reduce such unexpected events to the minimum possible. The field of "value alignment" will probably provide the most promising solutions. Very smart people are working today on how to make sure that an AI's values match our own. Such an AI would not recommend that children put their heads in the microwave, because it would be clear to it that such an act leads to bodily harm - and such harm is a "bad" outcome.

We are still nowhere near achieving value alignment at a high level, because in many cases it requires understanding the social, legal and ethical fabric of the world. It is easy to understand that you should not tell a person to put their head in the microwave. But is the artificial intelligence allowed to lock all the doors in the house at its own discretion, so that no one can enter? No? And what if it wants to protect the house from thieves? Probably yes. But what if the thieves are threatening the homeowner's life with a knife?

We could continue with these questions and hypothetical cases on and on, but it is clear that high-level value alignment is still far off.

But don't despair.

The third and most important insight I draw from the story is that artificial intelligence technology has skyrocketed in recent years - not only in its capabilities, but also in its accessibility to the general public. Young people today can build machines that, only a decade ago, would have seemed taken straight out of science-fiction movies. Anyone can develop artificial intelligence today. Anyone can change the world.

And anyone can also do research in the field.

In a world with such an enormous wealth of young inventors, we will see endless ideas for perfecting artificial intelligence and aligning its values. The solutions will not come immediately, but they are already on their way. They will change the world in real time, because when every child can develop new smart products, our lives will be flooded with inventions. Most of them, like most new ideas, will be bad. Some will be downright terrible. But the small fraction that works well and fits our needs will change the world.

Rizzotto - the boy who grew up and learned to develop artificial intelligence - changed his own little world, at least for a few days. He brought his friend back to life and once again (or for the first time) enjoyed long and meaningful conversations with him. But in the end he also recognized the technology's limitations - and decided to switch Magnetron off, forever.

This is the story of the Golem of Prague, except that unlike the legend, it is completely true. It reminds us that we live today in a world of miracles and wonders. It also reveals in whose hands these magical powers lie: in the hands of anyone willing to learn science and technology. Those who know how to work with artificial intelligence today can do magic. There is no other way to say it.

Kids, want to change the world? Learn to work with artificial intelligence.

But treat its recommendations with caution - especially if it tells you to climb into the microwave.