
The future of autonomous systems - and the future of futurism

Dr. Roey Tsezana developed an artificial intelligence tool that mimics the research process of futurists - specifically, his own. It receives as its main input the field it is supposed to research and the target year for which it is supposed to provide forecasts

The article was published in Elta's innovation magazine, which was commissioned, led and edited by Mellie Merton, Elta's innovation manager. Thanks to Mellie for the invitation to write the magazine's cover story, and to everyone involved in the work: Mira Mittelman Kleiman, Roital Moore, Shlomit Lika and Hadas Levy Studio

The future of futurism. The image was prepared using DALL-E and is not a scientific image

Introduction: The beginning

I am sitting in front of a flickering screen, reading for the first time a new report I wrote about the future of autonomous military systems. It's a strange feeling, reading for the first time a report I wrote myself. The words are not exactly my words, the wording is different from what I would expect, but the method of analysis and the way of thinking are clear: this is the way I would do the research. The logic is very similar to the one I would use in the analysis of the findings. And the scenarios at the end of the report are similar in nature to those I develop in every research paper on the future.

In many important ways, though, I did write this report. I outlined the way the findings would be collected, the logic behind the analysis of each item of information, the breaking apart and re-fusing of concepts (paradigms) in the field of autonomous systems, and the way the scenarios would be told.

And the artificial intelligence did all the rest.

In this article I will describe how this research was carried out, the thinking behind it, and the initial results. We will also try to understand the broader meaning of using autonomous systems - not only the military systems on which our research focused, but the systems that will serve us in every role, and especially in the study of the future.

But let's start with the big question: can artificial intelligence even do research?


Chapter One: Can Artificial Intelligence Do Research?

Last year, a lawyer asked ChatGPT to write a document for the court, citing legal precedents from previous years. Not surprisingly, the AI produced a document that was impressively written but full of gross factual errors. It invented six precedents out of nowhere, and only the judge's familiarity with the material stopped him from being tempted to believe the document presented to him. The firm was fined $5,000 and claimed that:

"We made a mistake that was not malicious, by believing that technology could invent cases just like that."

This incident, which has surely repeated itself in many places and in many variations, demonstrates an important point: artificial intelligence can do research, yes, but it will be bad research - not to mention completely false.

At least if you don't know how to use it correctly.

The branch of artificial intelligence that has made the most significant leap in recent years is generative AI. Since 2020, large language models like ChatGPT have leaped forward in their ability to produce text that looks as if a logical and eloquent person wrote it. This is, of course, only an illusion. In fact, large language models are nothing more than "stochastic parrots": statistical machines that string words and letters together in a way that reflects the frequency with which those words appear next to each other in a huge number of texts collected from the Internet, from literature and from any other possible source.

This means that generative artificial intelligence can produce writing that at first glance seems logical, but on closer examination turns out to contain "hallucinations." This is not surprising. Artificial intelligence today is not gifted with sound logic or a real understanding of the world. It cannot browse the Internet to gather new information unless we give it explicit permission to do so - and build the appropriate tools it can use to exceed the limits of its knowledge. All it knows how to do is recycle the words it has already been through, in a new arrangement that creates meanings which sometimes seem logical to us - and sometimes are nonsense.

Of course, some would rightly say that this description also fits much of human writing.

This may all change with the release of more advanced AI models, such as the promised Strawberry from OpenAI, the company that also developed the original GPT. At today's rate of change, it is not impossible that by the time you finish reading this article, everything will have changed again. However, as of writing these words (at least until the end of the sentence), it is clear to everyone that one should be careful not to automatically believe the large language models.

But already today they can be improved. And almost miraculously, a significant part of the improvement can come not necessarily from the top artificial intelligence developers or from computer science researchers, but from the people who simply know how to talk to artificial intelligence better.


Between computing and decomposition

At the beginning of 2024, Cognition AI unveiled its exciting new product: Devin. Devin is an artificial intelligence engine capable of writing high-quality code. Relatively speaking. The other leading AI models of the time, such as GPT-4, managed to solve the benchmark's coding tasks only 1.7 percent of the time (according to the SWE-bench test). Devin, on the other hand, succeeded in 13.9 percent of the cases.

What is Devin's secret? Did Cognition AI develop a new and unusual large language model? Did it invest billions of dollars in better training?

The truth is probably much simpler. While it is impossible to know for sure, the most likely answer is that the company took a large language model like GPT and taught it to think in steps. When Devin is given a large and complex problem, it breaks it down into a number of simple steps, each of which it can perform relatively easily. The output of each step is passed to the next step, and sometimes several steps further on, to provide the model with better context for the kind of answers required.

Let's take a practical example from website development. If an artificial intelligence like Devin receives such a request, it will presumably break it down into a series of steps: understanding the theme of the website, choosing the color palette, creating icons for the buttons, creating the images that will appear on the site, deciding on the most suitable font, dividing the site into its different pages, writing the text for each page, and so on and so forth.
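To make the idea of decomposition concrete, here is a minimal sketch of prompt chaining in Python. It is an illustration only: the step list is invented for this example, and call_llm is a placeholder for whichever language-model API one actually uses.

```python
# A minimal sketch of prompt chaining: a complex request is broken into steps,
# and the output of each step is fed into the prompt of the next one.
# call_llm() is a stand-in for whatever language-model API is actually used.

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a language model and return its answer."""
    return f"(model output for: {prompt[:50]}...)"

def build_website(request: str) -> dict:
    steps = [
        "Describe the theme and purpose of the website requested below.",
        "Given the theme, propose a color palette and a suitable font.",
        "Given the theme and palette, list the pages the site needs and what each contains.",
        "For each page listed above, draft its main text.",
    ]
    context = f"User request: {request}"
    outputs = {}
    for i, step in enumerate(steps, start=1):
        # Each sub-prompt includes the accumulated context from earlier steps,
        # so the model faces a narrow, easier sub-task each time.
        prompt = f"{context}\n\nStep {i}: {step}"
        outputs[f"step_{i}"] = call_llm(prompt)
        context += f"\n\nStep {i} result:\n{outputs[f'step_{i}']}"
    return outputs
```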

Why does such a division lead to better results on the part of artificial intelligence?

"Imagine that you are asked to solve a difficult math puzzle, and you have to immediately answer the first answer that comes to your mind." Explained Leopold Aschenbrenner, an artificial intelligence expert who worked at OpenAI until the beginning of 2024. “It seems clear that you will have a hard time doing this, except for the simplest puzzles. But until recently, this is how large language models solved mathematical problems. Instead, most of us solve the problem step-by-step in a notebook, and can solve much more difficult problems that way."

According to Aschenbrenner, by breaking down complex problems into a series of concrete steps, artificial intelligences can improve tenfold in their abilities in certain areas - for example, in writing code. This is the technique Aschenbrenner calls "unhobbling," and he believes it is one of the main factors that will boost the capabilities of artificial intelligence in the coming years. Thanks to this technique, along with a parallel increase in computing power and in information collection and analysis, Aschenbrenner and many others believe that by 2030 artificial intelligence will reach a level of writing and analysis that easily competes with that of expert researchers - in any field.

Will we really reach this level in the coming years? I am wary of making predictions, but a number of serious experts believe it is entirely possible, and even likely. Yet even if we do not reach this level soon, one fact is already clear: just as the decomposition technique can be used to produce artificial intelligence that writes better code, the same technique can certainly be used today to produce better analyses of the future.

And that's exactly what we did.


Chapter Two: Artificial intelligence explores the future

In mid-2001, Linton Wells, a senior official in the US Department of Defense, wrote an internal memo in which he reviewed the last hundred years of international relations.

“If you had been a security policy-maker in the world's greatest power in 1900,” Wells begins, “you would have been a Brit, looking warily at your age-old enemy, France.

By 1910, you would be allied with France and your enemy would be Germany.

By 1920, World War I would have been fought and won, and you would be engaged in a naval arms race with your erstwhile allies, the United States and Japan. …

By 1930, standard defense planning was based on the assumption that there would be no war for ten years.

Nine years later, World War II had begun."

Wells continues moving through the decades in the same sharp, candid way, showing how in each decade the prevailing perception changed anew. He concludes with a powerful statement:

"All of this is to say that I am not sure what 2010 will look like, but I am convinced that it will be very different from what we expect, so we must plan accordingly."

Wells touches in his memo on a truth we sometimes like to ignore: we simply do not have the ability to provide accurate predictions over such time frames. Too many changes can occur and pile on top of one another, completely subverting our current perceptions and assumptions. The changes can come gradually, or they can arrive all at once and sharply overturn everything we thought we knew. Russia experienced such a surprise in its war with Ukraine, where it learned (along with the whole world) the power of drone technology, which is responsible for roughly two-thirds of the Russian tanks destroyed in that war. We ourselves learned how devastating such a surprise can be on October 7th.

When futurists look ahead to the long term, they focus less on prediction and more on "exploring futures." We try to understand how different technological and social trends may intersect, compete, undermine and reinforce one another to create a different playing field in the future. We let our imaginations roam, knowing that when looking at the long term we must think differently, because what was is not what will be, and any thinking that relies too heavily on what we know and trust can only describe the present, not the future.

Can artificial intelligence do all this?

To give an initial, tentative answer to this question, we created "Hozeh-Tron" (from the Hebrew hozeh, a seer).

Hozeh-Tron mimics the research process of futurists - specifically, my own. It receives as its main input the field it is supposed to research and the target year for which it is supposed to provide forecasts. It then kicks into action and runs a series of prompts, one after the other. The output of each prompt provides an additional step, and together the steps form a complete staircase that helps us understand the future.
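Below is a rough sketch of how such a prompt staircase might be wired together. It is not Hozeh-Tron's actual code; the stage names and prompts are illustrative assumptions based only on the stages described in this article, and call_llm again stands in for a real language-model API.

```python
# Illustrative skeleton of a futures-research prompt chain.
# Each stage is a prompt template; every answer is appended to a shared context
# that later stages can see, forming the "staircase" described in the text.

STAGES = [
    ("inventory",    "List the main categories of tools used today in the field of {domain}."),
    ("capabilities", "For each tool above, list the capabilities by which it is measured today, "
                     "with rough quantitative estimates for the most advanced systems."),
    ("challenges",   "Describe the main challenges these tools face today."),
    ("solutions",    "Propose existing and futuristic solutions to these challenges, "
                     "and rate each solution's innovation level from 1 to 5."),
    ("cross_impact", "Estimate how each selected solution could change each capability by {year}."),
    ("scenarios",    "Write short first-person scenarios of an operator using these tools in {year}."),
]

def call_llm(prompt: str) -> str:
    # Placeholder for a real language-model API call.
    return f"(model output for: {prompt[:50]}...)"

def run_chain(domain: str, year: int) -> dict:
    context = ""
    results = {}
    for name, template in STAGES:
        prompt = context + "\n\n" + template.format(domain=domain, year=year)
        results[name] = call_llm(prompt)
        context += f"\n\n[{name}]\n{results[name]}"  # each step builds on the previous ones
    return results

report = run_chain("autonomous military systems", 2050)
```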

In the first stage, Hozeh-Tron identifies the most common tools in the field. For "autonomous military systems", for example, it identifies the existence of UAVs, autonomous ground vehicles, autonomous submarines and more. In the second stage, it characterizes the capabilities by which those tools are measured today: UAVs, for example, are evaluated by their flight range, load-carrying capacity, resistance to cyber attacks, and more. To each of these capabilities it tries to attach quantitative estimates for the most advanced tools, based on the general knowledge it possesses.

Already at this point it should be made clear that Hozeh-Tron is not free of hallucinations. Every piece of information it provides should be carefully examined. However, by designing the right prompts and dividing the work into small, focused tasks, the danger of hallucinations can be greatly reduced. A large language model that is asked directly about the exact capabilities of a UAV, for example, and is required to rely only on reliable sources of information, will return a concrete answer grounded in the reports from the Internet on which it was trained.
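For illustration, this is the kind of narrow, structured prompt such a stage might use; the exact wording and the JSON fields are my own assumptions, not Hozeh-Tron's actual prompts.

```python
# Sketch of a narrowly scoped prompt meant to reduce (not eliminate) hallucinations:
# one tool, one capability, structured output, and explicit permission to answer "unknown".

CAPABILITY_PROMPT = """Question: what is the typical maximum flight endurance of the most
advanced military UAVs in service today?

Rules:
- Rely only on widely reported, verifiable figures.
- If you are not confident, answer "unknown" instead of guessing.
- Answer in JSON with the fields "value", "units", "confidence" (low/medium/high)
  and "basis" (one sentence on the kind of source the figure comes from).
"""
```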

After Hozeh-Tron has characterized the current state-of-the-art autonomous military vehicles and their capabilities, it is armed with the knowledge of what exists today. Now it has to start looking to the future. To this end, it is asked, in a series of different prompts, to describe the challenges these tools face today and to propose both existing solutions that can address those challenges and futuristic, innovative ones (for example, quantum computing or biomimetics). Each of the solutions is rated according to its level of innovation - an easy task for large language models, since there are certain words they associate with innovation and the future.

Why do we need the innovation scores? To give the user an additional degree of freedom. The user can define in advance the level of innovation required of Hozeh-Tron, which then takes only the solutions at that level of innovation and cross-references them with the information from the previous stages. That is, it asks itself to what extent each solution can directly affect each of the capabilities of the tools identified in the first stages of the research. Again, it is asked to bring quantitative data based on existing knowledge. If a report on the web described electric batteries as expected to increase their energy capacity tenfold over the next twenty years, Hozeh-Tron would most likely know this - and include the figure in its assessment. From this it may conclude - in a leap that is not necessarily logical - that the UAVs of the future will also be able to stay in the air ten times longer.
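A minimal sketch of this cross-referencing step might look like the following; the data structures and the estimate_effect callback are assumptions for illustration, with the estimate ideally backed by the same kind of narrow, source-bound prompt shown earlier.

```python
# Sketch of the cross-impact step: keep only solutions at the user's chosen
# innovation level, then estimate how each one affects each capability.

def cross_impact(solutions, capabilities, innovation_level, estimate_effect):
    """solutions: list of dicts with "name" and "innovation" (1-5).
    capabilities: list of capability names.
    estimate_effect: callable (solution_name, capability) -> float multiplier,
    e.g. 10.0 if a report claims battery capacity will grow tenfold."""
    selected = [s for s in solutions if s["innovation"] == innovation_level]
    matrix = {}
    for s in selected:
        for cap in capabilities:
            matrix[(s["name"], cap)] = estimate_effect(s["name"], cap)
    return matrix
```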

Here we see again the limitations of generative artificial intelligence. It has no common sense. It composes words and terms without necessarily dwelling on the logic of the composition. And yet, when we look to the distant future, this very ability may be enough to help us develop new thoughts. Precisely the "hand-waving" that we contemptuously attribute to any thinking not grounded in data and clear logic can help us think about the future without being too bound to the present.

At the end of the first part of the process, we receive a preliminary assessment of the theoretical capabilities of military autonomous systems in 2050, with an explanation of how each technology leads to the expected improvement. But how will we use these improved tools? What new concepts will guide the work around them?

Here too, Hozeh-Tron has something to offer us. In fact, some would say this is where it really shines, since large language models are gifted with tireless imaginations. They can come up with endless ideas for us, if only we ask the right questions. After all, what are imagination and creativity if not the semi-random connection of different concepts, subject to certain constraints that are intuitively clear to us as humans?

Hozeh-Tron receives the list of concepts (the tools and their new capabilities) and constraints from the previous stages of the research, and then begins to imagine the world anew. It proposes new uses for the autonomous military tools - and at this point the user has to limit it explicitly to military uses, or it will expand on its own to propose uses in a variety of other fields. In the next prompt, it goes on to identify how the new capabilities and uses can undermine and reshape the concepts that seem obvious to us today regarding autonomous military tools. From there, it continues to build narrative scenarios: short pieces of text in which a protagonist tells how he uses those tools in 2050.
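The scenario stage can be pictured as one more prompt that injects everything the earlier stages produced; the template below is an illustrative guess at its shape, not the actual prompt.

```python
# Sketch of the scenario-writing stage: earlier outputs are pasted into a single
# storytelling prompt that is constrained to the tools and concepts already derived.

SCENARIO_PROMPT = """It is the year {year}. You are {protagonist}, an operator of
autonomous military systems.

Using ONLY the tools, capabilities and new uses listed below, write a short
first-person account (200-300 words) of one working day. Do not introduce
technologies that are not on the list.

Tools and capabilities:
{capabilities_summary}

New uses and emerging concepts:
{concepts_summary}
"""
```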

And here, finally, Hozeh-Tron's work ends. Ends - and begins again. Because this entire series of prompts is contained in one system and can be activated at the click of a button. The user can enter any domain they want, and Hozeh-Tron will repeat exactly the same steps. Within a few minutes, and at a cost of a few shekels, it will return all the material that previously required a combined team of futurists, technology researchers and science fiction writers (for the narrative scenarios) to produce.

Despite the enthusiasm you have probably noticed flickering beneath the words of this article, I do not want to overstate Hozeh-Tron's capabilities. It cannot really replace an entire research team. And yet, as someone who has taken part in and led such futures-research teams, I can say with full confidence that with the right guidelines and with critical, careful editing, a Hozeh-Tron can significantly reduce the work such a team has to do. At the same time, its ability to reanalyze existing concepts and propose new scenarios helps open the mind to new directions that are not necessarily obvious.

In the next part of the article I will review some of Hozeh-Tron's answers about the new concepts we may hold in 2050 regarding autonomous military tools. Then we will try to answer the last and perhaps most important question of all, especially for those of us who make a living from such research work: what is the future of futurism in a world of generative artificial intelligence?


Technological strategies for future management and innovation

Chapter Three: Hozeh-Tron's Predictions

We ran Hozeh-Tron with a request for "modesty" - that is, that it not choose technologies that are 'too' innovative. As a result, it left technologies such as quantum computing, blockchain and self-healing materials out of the picture. Instead, it focused on the expected improvement in technological fields that already exist today: energy storage, the fusion and cross-referencing of information arriving from different sensors, artificial intelligence systems for analyzing and explaining information, and much more. After selecting these technologies, it produced an explanation of their expected effects on UAVs and unmanned ground vehicles in 2050.

So, given that unmanned vehicles will gain the capabilities Hozeh-Tron identified, how will they be used in 2050?

Hozeh-Tron proposed that UAVs could perform several types of missions in the future:

  1. Continuous monitoring and surveillance: UAVs will be able to continuously monitor large areas for extended periods. The UAVs will be equipped with high-resolution sensors and analysis tools that will be supported by artificial intelligence to identify and track enemy movements, and provide real-time intelligence to command centers. For example, a fleet of UAVs will be able to monitor the entire border area, identify and report any unauthorized crossing or suspicious activity.
  2. Autonomous supply chains: Thanks to an improved load-carrying capacity and improved energy storage and management capabilities, UAVs will be able to autonomously transport supplies to the frontline forces. The UAVs will be able to carry medical equipment, ammunition, and even food, ensuring that soldiers in remote or hostile environments receive essential resources without endangering human life. This means that during a prolonged conflict, UAVs will be able to maintain a continuous supply line to the front.
  3. Deception and bait operations: The UAVs will demonstrate improved situational awareness and advanced decision-making ability, so that they can be used to carry out sophisticated deception operations. The UAVs will be able, for example, to imitate flight patterns and signatures of manned aircraft, and draw enemy fire and attention away from real targets. For example, during an airstrike, decoy drones could be launched to confuse the enemy's radar systems, allowing the real strike force to approach undetected.
  4. Coordination of precise attacks: UAVs will be able to coordinate and carry out complex attack missions, thanks to their precise attack capabilities and advanced autonomy and navigation capabilities. The UAVs will be able to identify and attack high-value targets autonomously with minimal human intervention. For example, a squadron of UAVs can be programmed to simultaneously attack multiple enemy installations, overwhelm their defenses and maximize the impact of the attack.

Let's face it: none of these predictions is groundbreaking on its own. But breakthroughs often come from combining innovative technologies from parallel fields that have not yet been brought into the military domain. When we asked Hozeh-Tron to add a technology not usually associated with autonomous military tools - "artificial intelligence for conversation management" - it agreed to step out of its (and our) thinking box and suggest more interesting and 'weird' uses. That is, the kind that have a better chance of actually reflecting the distant future.

"Conversational artificial intelligence interfaces will allow commanders to communicate with the swarm in real time, request specific data or guide the swarm to new locations." Jose-Tron wrote to me (I translated from English). "For example, a commander could ask, 'Show me the enemy artillery positions,' and the swarm would autonomously locate and relay that information back."

It went on to describe a future that seemed more interesting and colorful. "Conversational artificial intelligence will allow field units to request specific tracking data or transmit messages through the satellite network. For example, a ground unit could request, 'Provide a live feed of the enemy supply route,' and the UAV would transmit real-time video back to the unit. ... Conversational artificial intelligence will enable real-time dialogue between the UAVs and their human operators, and will allow dynamic adjustments to targeting based on changing battlefield conditions. For example, an operator will be able to say, 'Attack the enemy's communications center,' and the drones will autonomously adjust their flight paths and attack with high precision."

When Hozeh-Tron was offered other technological areas expected to improve significantly by 2050, such as "weather manipulation capability", it allowed itself to propose even more creative ideas.

"UAVs equipped with advanced sensor and tracking systems could be used to monitor and manipulate weather patterns to create tactical advantages on the battlefield." Wrote. "For example, UAVs will be able to disperse cloud seeding materials to cause rain or fog, thereby reducing visibility to enemy forces and providing cover for ground operations. The enhanced endurance and range capabilities, supported by advanced energy and power management, ensure that these UAVs can operate for extended periods, and cover large areas to achieve the desired weather effects. ... Using the precision attack capability, UAVs will be able to deliver payloads that affect weather with great precision. For example, UAVs will be able to deploy micro-drones that release chemicals to create local weather phenomena, such as hail storms or lightning, targeting enemy positions or disrupting supply lines. The improved situational awareness and decision-making capabilities ensure that these strikes are carried out with high precision, while minimizing collateral damage."

These are outputs we would expect to see in human-written reports - but they were produced automatically by Hozeh-Tron. And although it is extremely important that we remain critical and skeptical, I remind you that Hozeh-Tron did not produce them by sheer 'hand-waving', but by following a series of logical steps that we defined for it. The outputs of each stage fed the outputs of the stages that followed.

And what about the new concepts? How will the current perceptions about autonomous military tools be undermined in 2050? What concepts will they be replaced with?

Hozeh-Tron identified several current concepts about autonomous tools and proposed replacing them with new ones. The first concept, in its account, is that autonomous military systems depend on strong communication networks. This will be replaced by a new concept: autonomous military systems will operate in a state of "decentralized autonomy". The new concept rests on improvements in situational awareness, decision-making and autonomous navigation, which will reduce the dependence of autonomous vehicles on constant communication with command centers.

The second concept that will be undermined, according to Hozeh-Tron, concerns the ethical and legal consequences of using autonomous military systems. Hozeh-Tron explained that the ethical and legal frameworks governing the use of autonomous tools will need to be re-evaluated. In its words: "The ability of these systems to identify and attack targets autonomously, as well as to perform complex tasks such as dismantling improvised explosive devices, raises new questions about responsibility and about delegating life-and-death decisions to machines."

The third concept, which is already being undermined these days, is that "autonomous military systems have significant development and maintenance costs". In its place, Hozeh-Tron proposed a new concept, according to which "the significant initial investment in autonomous military systems is balanced by their long-term operational efficiency and the reduction of human risk. Advanced artificial intelligence, energy management, and autonomous capabilities improve the durability and effectiveness of these systems, leading to cost savings over time and justifying the financial outlay."

From here it went on to write short narrative scenarios. For lack of space, I will not list them all here. Nevertheless, here is a quote from one of the stories, told by one of the future operators of these tools in 2050:

"The ethical and legal implications of these systems bother me a lot. When a swarm of UAVs is activated for a precision attack, the responsibility to ensure compliance with humanitarian laws and to minimize harm to civilians is enormous. The international guidelines and agreements we adhere to are constantly evolving, but the pressure to make the right decision in real time is unrelenting. It's a sobering reminder of the complexities and responsibilities that come with mastering these powerful tools.”


Room for improvement

As we have seen, Hozeh-Tron can provide interesting and coherent observations, diagnoses and ideas about the future. Is it accurate? To answer that, we would need to know what will actually happen in the future. But it can already be said that the answers it provides are not inferior in quality to what we would find in the executive summaries of many reports in the industry.

But is the logic behind them valid? Such summaries, after all, are the product of in-depth research. Did Hozeh-Tron conduct such research itself?

Here the answer is more complex. Hozeh-Tron did conduct research, but not in the usual way. It followed the logic outlined for it by a certified futurist (the writer of these lines), but at each step of the process it took a certain liberty in producing its answer. When it was asked, for example, to quantitatively assess the capabilities of today's most sophisticated UAVs, it did not "go to the Internet" or "read reports" as human researchers would have done. It simply relied on its ability to string together words, characters and numbers based on its training on much of the relevant information available on the Internet.

Every large language model undergoes such general training by the companies in the market. Precisely because the training is "general", it can also lead to hallucinations and delusions. If the web had been flooded with rumors about drones causing cancer at the time the large language model was first trained, there is a good chance that Hozeh-Tron would report to us, with absolute certainty, on the link between drones and cancerous tumors.

Fortunately, we do not have to rely only on the general-purpose large language models available on the market. More sophisticated versions of tools like Hozeh-Tron can be based on open language models that are further trained only on the most reliable, up-to-date and accurate reports and books about autonomous military systems. Such versions will also be able, in case of doubt, to send a 'probe' out to the Internet, locate reports that came out in recent days, and read them critically in order to provide more relevant answers. All of these techniques are already in use today in a variety of artificial intelligence applications; there is no reason not to turn them to the benefit of futures research.
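Such a 'probe' can be as simple as a retrieval step bolted onto the chain. The sketch below assumes a search_reports function as a stand-in for whatever search engine or document index is available; it illustrates the idea rather than any specific product's API.

```python
# Sketch of grounding one research question on retrieved documents instead of
# the model's memory. search_reports() is a stand-in for any search/index API.

def search_reports(query: str, k: int = 3) -> list:
    """Placeholder: return up to k relevant report excerpts for the query."""
    return []

def grounded_answer(question: str, call_llm) -> str:
    excerpts = search_reports(question)
    sources = "\n\n".join(f"[Source {i + 1}]\n{text}" for i, text in enumerate(excerpts))
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite the source number for every figure you give. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```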

Future-exploring artificial intelligences, therefore, are not "around the corner". They are already here. And as is the way of artificial intelligences in the last decade, they will only continue to improve in leaps and bounds every month and every year.

All of this brings us to the last question: what will the work of the futurist look like, in a world where artificial intelligence can do most of his work for him?


Chapter Four: The future of the futurist's work

As I wrote at the beginning of the article, the expectation today is that by the end of the decade, artificial intelligences will reach a level at which they can compete with expert researchers in any field. It is difficult - if not impossible - to understand in advance what human work will look like in such a future. Will humans even have jobs left? To what extent will we be willing to hand the reins to artificial intelligence, and how appropriate and correct is it for us to do so?

Entire books (one of them by the writer of these lines) have been written in a Sisyphean attempt to answer these questions, with only partial success. Given the limited space in the magazine, we will therefore try to answer a simpler question: what will the futurist's work look like in the near future - that is, in the next five years?

To answer the question, it is necessary to distinguish between three avenues of the futurist's work: the collection of information, the analysis and processing of the information, and the communication of the information to the decision makers.

Precisely when it comes to collecting information, it is quite possible that the futurist's work will not change significantly. The reason is that futurists already use search engines and artificial intelligence today to locate relevant information on the web. Beyond that, expert interviews, workshops and round tables remain very important in any research work, and in all of these we can still expect the active and direct involvement of the people conducting the research.

In the second area, information analysis and processing, the futurist's work will become that of a manager. The successful futurist will manage the artificial intelligence that conducts the research for him. He will dictate to it the logic, the required steps, the constraints and the types of information it should include in its analysis. To do this, he will have to understand the artificial intelligence in depth, and will have to go through its outputs himself to make sure they meet the standard he defined.

Last but not least, a critical part of the futurist's work is to communicate the research results to decision makers in a way that leads to understanding and acceptance on their part. To this end, the futurist will use artificial intelligence to produce clearer and more readable reports, more interesting narrative scenarios, and even images and videos that reflect how the future may look. But in the end, when it is necessary to stand in front of the decision makers and explain how the research was conducted and what logic lies behind the insights, there will still be no substitute for the human futurist in the coming years.

The one thing that can be said for sure at this point is that every researcher - in any field - needs to start becoming proficient in artificial intelligence. We all have a wealth of knowledge about research processes, which can be imparted to artificial intelligence to produce higher-quality and more reliable research.

Yes, this means that our way of working as researchers will change. Yes, we will have to learn new tools, new skills, new ways of thinking.

We will have to, in short, live the future.
