A few years ago, China decided to hold war games with the participation of virtual duplicates of its real officers. Dr. Cezna and his colleagues conducted a similar war game to understand whether Hassan Nasrallah would start a war with Israel. The artificial intelligence had to be coaxed into creating a character to imitate him, because on the first try it came out too friendly
China recently realized that it has a problem: its generals are so busy that they have no time to take part in role-playing games. It decided to solve the problem in an unusual way: produce a digital double of each general, with the same personality, character, and cognitive abilities - and let the doubles make the decisions in war games run on a computer.
It may sound funny, but war games have been known for several centuries as a way to identify threats and opportunities in advance. The Prussian army, too, rose to greatness thanks to war games - Kriegsspiel, as they were called - which gave Prussian officers a high level of training. War games continued to gain momentum and reputation. By World War II, all the superpowers were running war games to train their officers on possible battlefield scenarios. In fact, the Blitzkrieg strategy - the flood of German tanks and soldiers - was developed as a result of a series of war games that Germany conducted for its officers.
This is all well and good, but to run war games and understand how different scenarios can develop, you need experts. That is, officers and generals. But here's the rub: generals are busy people. They have a lot on their plate. They might be willing to come and take part in a war game once or twice a year, but they certainly can't do it every day.
What to do?
In one word: duplicate.
A study published in May 2024 in a Chinese-language scientific journal details how the researchers created digital duplicates of various generals in the Chinese army. I will clarify in advance that I was unable to locate the original article in the Chinese scientific literature, so I am forced to rely on its coverage in the Chinese press (which is controlled, of course, by the Party).
According to the study's description, the goal was not to put artificial intelligences in de facto control of the military, but to examine the military plans that China is preparing for future conflicts over Taiwan and the South China Sea. The researchers wanted to run simulations that trace the possible decision-making paths of the human generals, and understand how each decision would affect the future war. In this way, as the researchers write, they meant to -
"Weigh the good with the bad, and come to insights about the chaos of the battle."
The first artificial intelligence the Chinese researchers produced mirrored the way we think of the most successful general. It was, in their language –
"Has good mental abilities, a determined and stable character, able to judge and analyze situations calmly, avoids impulsive or emotional decisions, and is quick in producing practical plans by identifying similar decision-making scenarios from memory."
Sounds good? Definitely. The problem is that such generals - or such people - do not really exist. Because of this, a war game based on such super-beings will not reflect the future of combat well. And so the researchers began to humanize the generals' digital doubles.
In less polite words: they dumbed them down.
For starters, the scientists limited the memory capacity of the digital doubles to reflect the limitations of human memory. They specified that when a double accumulates too many experiences and memories, some of them are deleted automatically. From there they went on to give each digital double the experience, thinking patterns, and personality of a different Chinese general. And yes, even the flaws that characterize every general.
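To make the idea concrete, here is a minimal sketch of what such a capped memory might look like in code. This is my own illustration of the principle, not the Chinese researchers' implementation; the capacity and the events are invented:

```python
from collections import deque

class BoundedMemory:
    """A memory store that automatically forgets the oldest experiences
    once it fills up - a crude stand-in for the limits of human memory."""

    def __init__(self, capacity: int = 100):
        # deque with maxlen silently discards the oldest entry whenever
        # a new one pushes the store past its capacity
        self._memories = deque(maxlen=capacity)

    def remember(self, experience: str) -> None:
        self._memories.append(experience)

    def recall(self, n: int = 5) -> list[str]:
        """Return the n most recent memories."""
        return list(self._memories)[-n:]

# The double accumulates experience, but the earliest memories fade away
memory = BoundedMemory(capacity=3)
for event in ["won skirmish A", "lost supply line B",
              "storm delayed advance C", "flanked at hill D"]:
    memory.remember(event)

print(memory.recall())  # "won skirmish A" has already been forgotten
```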
These duplicates are certainly more accurate than the perfect artificial generals the researchers started with. Thanks to them, the ruling party in China can now run an enormous number of simulations analyzing how future conflicts may play out. The Chinese strategy researchers intend to use these simulations to identify new threats and to understand each general's possible points of failure under different conditions. In this way, they want to practically immunize the Chinese army against catastrophic decision-making mistakes, and to improve even the marginal, more routine decisions - the ones that can add up to a big change.
What can I say? Great idea. But I am pleased with it mainly because we in Israel got there ahead of the Chinese. Back in October 2023, more than half a year before them, we - myself, Col. (res.) Dr. Ofer Guterman, and Lior Burke - conducted a similar study of our own. In that study, we also created digital duplicates and ran them in a simulation. But instead of focusing on Chinese or Israeli generals, we went in a very different direction.
We ran Hassan Nasrallah on the computer.
How did it happen?
A little after the middle of 2023, two fascinating studies were published. In the first, Google researchers built a virtual village and populated it with computer characters. The characters spoke to each other, and large language engines - such as the one behind ChatGPT - produced the text of each character according to its role. And behold the miracle: when a virtual father and a virtual son talked to each other, each one's language engine responded in a way very similar to what we would expect from a real father and son. The whole conversation seemed authentic and real.
In the second study, released in September 2023, researchers created a virtual company and populated it with virtual role-holders: a CEO, a VP of Development, a designer, a technical writer, and more. The role-holders talked to each other to decide how to complete a certain task. Who produced the text for them? You guessed it: again, a large language engine, configured to speak like a CEO, or like a designer, and so on. It played each of the characters well, and out of all their conversations with each other an almost complete computer game emerged.
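For readers who want a feel for the mechanics: in both studies, the basic trick is to prepend a persona description to every request sent to the language engine. Here is a minimal sketch of the pattern using the OpenAI Python client - the model name, personas, and prompts are my own placeholders, not details from either study:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_persona(persona: str, message: str) -> str:
    """Ask the language engine to answer in character as `persona`."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model would do
        messages=[
            # the system prompt fixes the character the engine plays
            {"role": "system", "content": f"You are {persona}. Stay in character."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

# Two role-holders 'talking': each turn feeds the other's last line back in
ceo_line = ask_persona("the CEO of a small game studio",
                       "How should we scope our first game?")
dev_line = ask_persona("a pragmatic VP of Development",
                       f"The CEO said: {ceo_line!r}. Respond with a plan.")
print(dev_line)
```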
Then October 7th came, and we were all looking for ways to contribute to the war effort. The great fear was that Hezbollah was about to start an all-out war at any moment, and that Gaza was just the opening blow and a distraction. But how could we know what Nasrallah was really planning? Especially when everything we thought we knew was being called into question - what could be done to re-decipher reality?
I thought about the articles I described earlier, and began to wonder: could the large language engines be used to create simulations of individuals, imitating the way they speak - and by implication, also the way they think and make decisions? And if so, could we run the digital double of one Hassan Nasrallah in the cloud?
But actually - Nasrallah does not make decisions alone. He is part of a whole organization. Could we also imitate his advisers and the way they talk to him?
But wait: Nasrallah is actually one of Iran's fingers. He receives instructions from the Persian state, even if not as unequivocal orders. So maybe we should also create digital duplicates of the senior officials in Iran?
And so it was.
At this stage, Lior Burke and Col. (res.) Dr. Ofer Guterman joined the research, bringing hands-on experience from the intelligence world. Together we decided which of Nasrallah's and Khamenei's advisers we would duplicate. We collected open information from the web about all the individuals, and began to examine different ways of teaching the models so that they would faithfully reproduce the speech patterns of the people being duplicated.
Our first attempt was to feed the originals' speeches to the large language engines. It was an abysmal failure. Such speeches are written precisely to make the speaker look good - and so the artificial intelligence decided that Nasrallah is a compassionate, kind-hearted soul who only wants the Palestinians to have a good life. And Khamenei? According to his speeches, he was more righteous than the Pope.
We deleted the kind-hearted Nasrallah 1.0 and moved on.
The next attempt was more successful. We collected personality profiles from the web about each of the relevant personalities, let the artificial intelligence process them in a variety of ways, and fed the product into the models.
When Hassan Nasrallah 2.0 started threatening Israel with an all-out war on the computer, we realized that we had done it right.
But threats are easy. The fateful decision to actually attack is usually made after endless discussions between allies, such as Hezbollah and Iran. And so we gathered all the doubles we had created - Nasrallah and his two most senior advisers, alongside Khamenei and two of his own - and sat them around a virtual discussion table. We explained the current situation in Israel and the world to them, shared what happened on October 7, and asked them simply: will Hezbollah attack Israel in an all-out war?
And they talked. And talked, and talked. Oh, how they talked. We couldn't keep up with the huge amounts of text they produced. The solution was clear: we built another artificial intelligence that went through all the discussions and summarized each double's main arguments for and against the war.
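In outline, such a virtual discussion table can be built as a simple loop over the personas, with a separate summarization pass at the end. The sketch below is a simplified illustration of the idea, not our actual research code; it reuses the hypothetical `ask_persona` helper from earlier, and the personas, question, and round count are invented:

```python
# Each double speaks in turn, seeing the recent transcript; a separate
# 'summarizer' persona then condenses the mountain of text into arguments.
doubles = {
    "Leader A": "the leader of a militant organization",
    "Adviser B": "the leader's cautious senior adviser",
    "Patron C": "the head of the organization's patron state",
}

question = "Given the current situation, should the organization go to all-out war?"
transcript: list[str] = []

for round_number in range(3):  # three rounds of discussion
    for name, persona in doubles.items():
        recent = "\n".join(transcript[-10:])  # each double sees recent turns only
        reply = ask_persona(persona,
                            f"Discussion so far:\n{recent}\n\nQuestion: {question}")
        transcript.append(f"{name}: {reply}")

# A second pass distills each speaker's arguments for and against war
summary = ask_persona(
    "a neutral intelligence analyst",
    "Summarize each speaker's main arguments for and against war:\n"
    + "\n".join(transcript),
)
print(summary)
```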
Want to know what they decided? I will tell you later. But before that, it is worth asking an important question: to what extent does Nasrallah 2.0 really reflect the original Nasrallah? How similar are his decisions to those that Nasrallah zero-point-zero would make in similar situations?
Problems with the duplicates
Obviously, before we rely on Nasrallah's digital double, we need to understand how similar it is to the original. To do that, as usual, we have to turn to science. Surely there are experiments with digital doubles that rely on large language engines, right?
Well, no. Not quite.
It should be understood: this field is so innovative, so groundbreaking, that every additional study redraws its boundaries. By the time a study is published, there is a good chance that the language models have already made another great leap forward, or that a new technique has been found to better train and study them. You could almost say that negative results are irrelevant - or at least, all they indicate is that this month, with this technique, the researchers found a gap between the decisions of the digital doubles and the decisions of the original individuals. And in a month? In a year? With another, more advanced technique? There we may well see better success.
But this is, of course, an evasion. In the end, what is the situation now?
The answer is that there is potential, even good potential. And alongside it, there are also plenty of challenges and problems.
One of the most important studies in the field was published less than two years ago, and has already received almost 300 citations in the academic literature. To stress how problematic it is to rely on it, understand that the researchers used GPT-3: a tool from the prehistory of artificial intelligence. That is, from two years ago. It preceded ChatGPT (a.k.a. GPT-3.5), which preceded GPT-4, which preceded GPT-4 Turbo, which preceded GPT-4 Omni. In other words, four generations of AI engines have already emerged since that study.
And yet, it is interesting. Also because it explains that the biases the models exhibit can actually help us in certain situations. These biases reflect, in fact, the stereotypical way certain types of people speak in certain situations. If we can make clear to the engine what kind of people we want to emulate, it will be able to display the biases that fit them. Extremist Muslims will talk like extremist Muslims. Moderate Eskimos will talk like moderate Eskimos. Siberian hamsters will talk like Siberian hamsters. Provided, of course, that the models were trained on the speech - or at least the texts - associated with all of these.
There are plenty of caveats to this statement. Ideally, we would like the models to be trained on the writing of extremist Muslims straight from the horse's mouth - that is, writing that came directly from those extremist Muslims themselves. It is unpleasant to say, but there is a good chance that today's models were trained mainly on what Westerners write about extremist Muslims - that is, on the Reddit discussions about all the failings of those extremist Muslims. And even if we trained the artificial intelligence specifically on authentic conversations of extremist Muslims - can the words they string together really give us a glimpse into their inner thoughts?
The same study from two years ago suggests that even the fragments and hints of biasing information we give the model about subpopulations may be enough to mimic them well. The researchers took thousands of 'background stories' describing the socio-economic living conditions of real people who had previously participated in surveys in the United States. They showed that even the outdated GPT-3 is able to achieve what they call 'algorithmic fidelity': it can receive input that 'biases' it into imitating complex speech patterns and ideas, in a way that closely mimics the original humans. In the experiments the researchers describe, computerized Republicans and Democrats described the 'other side' and themselves in words very similar to those used by their human counterparts. And when the researchers told the artificial intelligence that it was an 'extreme Republican' or an 'extreme Democrat', it used the same terms and words as its extreme human counterparts.
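The conditioning itself is technically simple: the background story is placed before the survey question, and the model answers as that person. Here is a minimal sketch of the pattern, again using the hypothetical `ask_persona` helper from earlier - the backstory and question are invented examples, not items from the original study:

```python
# An invented 'background story' in the spirit of the study's inputs
backstory = (
    "I am a 52-year-old factory worker from Ohio. I attend church weekly, "
    "own a pickup truck, and have voted for the same party in every "
    "election since 1996."
)
question = "In one sentence, how would you describe the opposing party?"

# Conditioning the model on the backstory 'biases' it toward the speech
# patterns and opinions of the described subpopulation
answer = ask_persona(f"the survey respondent described here: {backstory}",
                     question)
print(answer)
```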
Similar results were also obtained in predicting votes for presidential candidates in the United States. And if all this is not enough, it turns out that if you challenge GPT-3 with a personality survey in which it is given 11 of an individual's characteristics, it can predict the 12th characteristic correctly with very high probability.
As the researchers concluded -
"... the general pattern is a stunning match between GPT-3 and ANES in the vast majority of cases. ... These results once again provide convincing, consistent and repeatable evidence for the "pattern matching" criterion. When it receives real information from surveys as input, GPT-3 reliably answers closed-ended survey questions in a way that closely mirrors answers given by human respondents.”
If so, can digital doppelgangers be relied upon to accurately reflect the opinions and reactions of the humans they are based on? Not so fast.
Another study, from April 2024, examined the decision-making processes of large language engines. This time the researchers focused on GPT-4 - that is, a more advanced version of the artificial intelligence. They ran it through test after test, and discovered that it is more risk-averse than humans, and gives too much weight 'in its mind' to low-probability events. These may well be positive characteristics in many cases, but as with the original Chinese super-general described at the beginning of the article, here too the artificial intelligence does not exactly reflect the way an ordinary person operates.
Of course, this research has its problems. No real attempt was made there to set up 'digital doubles' - only to understand how a 'general-purpose' artificial intelligence engine makes decisions. And yet, there is clearly reason for concern: if the engine is normally more risk-averse than humans, it is likely that even when it is restricted to mirroring the decisions of digital doubles, it will still exhibit the same tendency.
To put it bluntly, the real problem right now is that the field is too new, and the research is lacking. Each study uses an engine that was relevant a generation or four ago, and uses it in different ways. Just as in a psychological experiment you can influence the respondents' answers simply by adding the right word, or even with a wink from the examiner, here too you can influence the doubles' answers by the way their characters are described. One wrong word can certainly skew the result.
We could go on about the problems in research on digital cognitive doubles, but the bottom line is clear: for now, we should still be careful not to trust them blindly. It is important to verify that they have been 'programmed' correctly - that is, that they have truly adopted the persona of the original. Only then can one hope that their answers will be close to those of the original.
And that's exactly what we did with Nasrallah.
Nasrallah proves himself
Because we were aware of the concerns regarding the reliability of the digital doubles, we chose to include in the study intelligence officers who are familiar with the original figures. They were asked to go over the doubles' arguments, reasoning, and statements, and to determine whether they make sense and whether the original individuals would plausibly say them in internal discussions.
The results were clear: the intelligence experts rated the logic of the duplicates at 8.8 out of 10. And what about fidelity - that is, the duplicates' ability to imitate the originals? The average score here was 8.7 out of 10. The computerized Hassan Nasrallah managed to imitate his counterpart in the physical world impressively well. At least according to the assessments of the human intelligence officers who spent a good part of their lives gathering information about the man.
And what did the doubles decide? To attack or not to attack Israel?
I remind you that we conducted the study in the month after October 7, when no one knew whether Hezbollah was about to start an all-out war. The doubles, for the most part, opposed a comprehensive war against Israel. The Lebanese doubles worried about the safety of Lebanon. The Iranian doubles were more concerned about undermining the tense calm in the Middle East. Although we conducted three different simulations, letting the doubles express themselves with more and less creativity, almost all of them still opposed Hezbollah starting a war with Israel.
It seems they were right, as of this writing. In the eight months that have passed since then, Hezbollah has not started an all-out war with Israel. And if it does so now, well - the conditions have changed dramatically from what they were eight months ago. If we ran the duplicates again, it is quite possible that they would give a different answer.
In this case, then, the digital duplicates were a success. At least within the limited framework in which we examined them. But what does this mean for the world of intelligence, and for research in general? Can we assume that the computerized Nasrallah will always act like the original Nasrallah? That the duplicates of the Chinese generals accurately reflect their decisions? And if so - taking things to the extreme - can we simply shut down the intelligence corps? After all, we could understand, with high probability, how the other side will act - simply by running it on a computer!
In a word, no.
In two words... not yet.
As should be clear by now, the field of digital cognitive doubles is still very young. There is still no clear and absolute proof - of the kind that cannot be argued with - that the large language engines can faithfully reflect the biases of narrow segments of the population, let alone of individuals. There is limited research, there are hints, and there is the logic that says that given the right context and instructions, the engines can produce biases and expressions similar to those of specific people. But will they do it with high enough accuracy? There is still no consensus on that.
Beyond that, it is important to ask how the digital doubles will deal with new information. Will they be able to change their minds, as humans do? Our research suggests the answer is yes, since the doubles changed their positions when presented with certain extreme scenarios. But would humans have changed their minds sooner? Possibly. Are the doppelgangers more risk-averse than their human originals? That is also possible. Or maybe, with the next artificial intelligence engine, they will actually be more reckless. There simply aren't good enough answers yet to all these questions.
This means that right now, the most responsible way to use digital cognitive doubles is as a tool for expanding thought. This is also the way we recommended using our research on Nasrallah, and this is probably the purpose of the Chinese simulations as well. The use of digital doubles at the moment is not meant to predict exactly how the originals will act, but to expose intelligence personnel to new ideas, to unusual perspectives, and to the products of hypothetical discussions between people who would not normally meet each other. It is designed to help intelligence personnel challenge existing conceptions - and that is a good thing.
At the same time, we are seeing attempts to use digital doubles in other innovative ways. In research from the last few weeks, for example, I joined the cyber company Confides as a consultant to create a series of digital doubles with different areas of expertise and interest - from the Minister of Defense to the CEO of Magen David Adom. Together with Confides' cyber experts, we presented the doubles with a scenario in which a particularly serious Iranian cyber attack paralyzes the health system in Israel. We then questioned the doubles to understand how the scenario would affect the country, the health system, the war in the south and the north, and much more. Each of them produced different answers reflecting its personal point of view - and together we created a complete, reasoned, and more broad-minded scenario, which was presented to the decision-makers in the relevant bodies.
Want more? Happily. Together with a technology expert in the field of finance, we are also testing for banks how digital cognitive doubles can be surveyed, just as surveys are conducted today to understand the desires of people from different populations. There is still no intention of attributing to the doubles' choices the same level of reliability as that of real people, but the assumption is that the reasons the doubles give for their choices can definitely help us better understand those populations.
The further the field of digital cognitive doubles advances, the more closely the doubles will behave like their originals. And the field will progress - there is no doubt about that at all. Artificial intelligence continues to develop by leaps and bounds, and the duplicates will move forward with it, riding on the wave. Eventually we will also be able to use them to predict how people will behave - provided we have enough information about them to pass on to the artificial intelligence.
What might such a future look like?
A future of digital doubles
In such a future world, will we be able to predict exactly how our enemies will behave? Probably not. The enemies, after all, will also be armed with artificial intelligence engines, and will use them to predict how we will behave. But we will try to predict how they will behave after they understand how we will behave after we understand how they will behave if they understand that we are predicting how they will behave if they understand how we will behave, and so on ad infinitum.
Absolute predictability, in short, will not come easily from the digital doubles. They will certainly be able to integrate into Aman, the IDF Intelligence Directorate, as advisers, and even as agents who conduct conversations with a variety of people, but we cannot shut down the Intelligence Corps completely.
So what can we expect?
We expect to see digital cognitive doubles everywhere, in every industry, in every field.
They will help us find new loves. My single double will run a simulation with the double of every available single around me - and will notify me of a successful match.
They will help us make more informed medical decisions. After I marry that single woman, have children, and begin to die slowly - that is, to grow old - I will certainly also want to maintain my physical fitness and proper eating habits. The apps of the future will run my digital double to figure out how best to talk to me, how to convince me to go on a diet, and how to encourage me to keep running every night.
And what will happen after I die, when I have to divide my property among all my future children, but realize that I don't trust any of them to manage the money properly? Then I can set up a digital double of myself that will be responsible for distributing the funds over time, and will require them to prove to it that they have mended their ways if they want to enjoy all the fruits of my labor.
The only uses that digital doubles will not have are, of course, in sex and war. After all, these are areas that don't really interest humans.
So much for the positive side. But there is also a negative side to the whole thing. Governments and companies will be able to run digital duplicates of... all of us in the future. Such duplicates could give them enormous power to understand how to convince us of whatever they want, how to manipulate us more effectively, and how to sell us their products and positions. As far as I know, there is still no serious regulation - no law - that prohibits companies from trying to learn how to convince customers based on digital cognitive doubles. But I wouldn't be surprised if we see such laws starting to emerge in the coming years.
The future is going to be interesting.