How ancient road signs and modern artificial intelligence meet on the road and in the world of work

The autonomous car stands and surveys its surroundings. The engine is running. It inches forward, and stops. It reverses a few inches, and stops again. If we didn't know it was autonomous – a machine without a driver, and certainly without emotions – we would almost say it was afraid of something. Of some hidden danger that only it can perceive.
Only upon closer inspection can one see the reason for the car's apparent apprehension: someone has drawn a magic circle of sea salt around it. This is an ancient and well-known tradition. More than five thousand years ago, wonder-workers in Sumer and Assyria scattered flour or salt on the ground to stop monsters and the forces of darkness.
Today, the same technique is used to stop autonomous vehicles.
That, at least, is the story currently circulating on social media. The truth, as usual, is simpler – but it opens a window to an even more interesting world. One in which we have to work with forces we don’t fully understand, and are forced to speak to them in a language they understand, rather than having them bend to our will.
The story behind the magic circle
Artist James Bridle created the salt magic circle in 2017, as part of an art project exploring the 'psychology' of artificial intelligence. He knew that autonomous vehicles had been 'trained' to obey the traffic laws of the relevant country, which state that a driver may cross a solid line on the road, but only when that solid line is accompanied by a dashed line on the side closest to the driver. And if the dashed line appears on the side farthest from the driver? Then crossing it would break the law. And so, when the autonomous vehicle finds itself surrounded by a circle with a solid line facing it and a dashed line running along the outer edge, it stays put. It is imprisoned, trapped.
Of course, a person unfamiliar with these traffic laws would not understand why the vehicle responds to this seemingly arbitrary circle. Why does an outer dashed line stop the vehicle, while an inner dashed line would let it travel? Just like that. Magic.
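To see why the circle works, here is a minimal sketch of the crossing rule described above. The function and parameter names are hypothetical, chosen purely for illustration – the actual decision logic inside any given vehicle is proprietary and far more complex:

```python
# A minimal sketch of the lane-marking rule described above.
# Names are hypothetical; real autonomous-driving stacks implement
# this logic very differently.

def may_cross(marking: str, dashed_side: str = "none") -> bool:
    """Decide whether a driver may legally cross a lane marking.

    marking:     "dashed", "solid", or "double" (solid paired with dashed)
    dashed_side: for a double line, which side the dashed half is on,
                 relative to the driver: "near" or "far"
    """
    if marking == "dashed":
        return True                    # a plain dashed line may be crossed
    if marking == "double":
        return dashed_side == "near"   # crossable only from the dashed side
    return False                       # a plain solid line may not be crossed

# Bridle's salt circle, seen from inside: the dashed half is on the far side.
print(may_cross("double", dashed_side="far"))    # False -- the car is trapped
print(may_cross("double", dashed_side="near"))   # True  -- it could drive out
```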
This story may sound funny, but we should be aware of it because we are all starting to work with 'magical' technology today. That is, one that we do not fully understand. And the consequences, as we begin to discover, could explode in our faces.
Artificial intelligence has fired the HR department
A story published last year in the Economic Times illustrates the danger of relying unwisely on artificial intelligence. A manager at an unnamed company couldn't figure out why the HR department was unable to find a suitable candidate for a job opening. He waited a month, two months, three months – and finally decided to take action: he submitted his own CV, under a fake name, as the perfect candidate.
And it was rejected too. Automatically.
“[They] didn’t even look at my resume,” he recounted painfully.
The reason? As you might guess – artificial intelligence. The HR department at that company had chosen to rely on artificial intelligence to conduct the initial screening for them. Unfortunately, they made a small error in defining the required skills, so candidates with exactly the right skills were filtered out at the very first stage of the process.
The mistake, it should be noted, was a stupid one. Very stupid. The manager wanted employees with experience in Angular, a framework for developing web applications. The HR department, however, stipulated that candidates should have experience in AngularJS – an earlier version of the same framework, which very few people still use. The AI, devoid of logic and understanding, did not 'think' twice before disqualifying candidate after candidate, month after month. The HR people, who were supposed to apply common sense, never bothered to do so. It is hard to find another explanation for the omission: the human employees trusted the AI and went off to play tag on the beach.
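To make the failure concrete, here is a toy sketch of how an exact-match skill filter configured with "AngularJS" silently rejects every candidate who lists "Angular". This is not the company's actual system – its details were never published – only an illustration of the general mechanism:

```python
# A toy illustration of an exact-match screening filter.
# NOT the company's actual system; it only shows how one misconfigured
# keyword can reject every qualified candidate.

REQUIRED_SKILLS = {"angularjs"}  # what HR configured: the legacy framework


def passes_screening(resume_skills: list[str]) -> bool:
    """Pass a candidate only if every required skill appears verbatim."""
    skills = {skill.lower() for skill in resume_skills}
    return REQUIRED_SKILLS <= skills  # subset test: all requirements present


# A candidate with exactly the experience the manager wanted:
print(passes_screening(["Angular", "TypeScript", "REST APIs"]))  # False
# Only a verbatim "AngularJS" mention would get through:
print(passes_screening(["AngularJS", "JavaScript"]))             # True
```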
Then the manager discovered what had happened – and the humans paid the price.
“Half of the HR department was laid off in the weeks that followed,” the manager shared. And while it’s clear that employees need to be trained to work with and alongside AI, it’s hard to think of a more fitting dismissal. The humans in this case completely gave up their independent judgment and chose to hand over full responsibility to AI, without questioning it for a moment.
The trouble is that cases like this are likely to occur more and more.
Managers happily adopt AI
A new OECD report released in February 2025 surveyed how managers use artificial intelligence, and especially how they use it to make decisions. It turns out this is common practice: ninety percent of all managers surveyed in the United States use artificial intelligence in their work. In Europe, the figure drops to 'only' 79 percent. Ninety percent of managers in the United States, by the way, use automated tools to conduct employee evaluations. More than half of managers monitor the content and tone of participants in conversations, meetings, and emails.
On the surface, AI makes management easier and more efficient. Most managers (almost ninety percent in the United States!) said it improves their decision-making and, in the process, increases their job satisfaction. The downside is that almost two-thirds of managers are unsure how far they can trust AI not to make mistakes. In the United States, fifty-eight percent of managers have personally encountered cases where they believed AI did not treat employees properly or take their mental and physical health into account.
One of the most worrying conclusions from the survey is that most managers agree that using new AI tools requires them to acquire certain new skills. These include, for example, digital skills, advanced data analysis capabilities, and problem-solving abilities. Without these skills, the manager is at the mercy of AI.
A manager who is not armed with these capabilities, yet is handed advanced AI tools, is like a child with a loaded gun, or a sorcerer's apprentice with his first enchanted broom. He does not know why the AI reacts one way to a solid line and another way to a dashed line. He uses the tool he is given and hopes for the best. In the short term, that may work. In the medium and long term – it is a recipe for disaster.
So what do we do?
On the bright side, many companies are realizing that they are handing their managers technology that works like magic – and that magic can be as harmful as it is helpful. The OECD survey shows that almost all companies – 89 percent of them – have developed mechanisms to promote the positive, safe and reliable use of AI tools.
True, in many cases those "mechanisms" are merely guidelines, or general instructions for using artificial intelligence. That is not enough on its own, but it is a start. Some companies conduct routine audits to ensure that artificial intelligence is being used correctly. In 35 percent of companies one can also find an "ethics director or committee" to make sure the AI does not tempt managers to the dark side of the Force. And in more than half of the companies, there are mechanisms that allow employees to report or complain about the misuse of artificial intelligence.
And that's all well and good. Really. These mechanisms will reduce the risk of AI making bad decisions – and, more importantly, the risk that some manager will rely on it without thinking.
But what happens when the human managers themselves leave the picture?
The coming of the autonomous organizations
In the coming years, we can expect the emergence of a new type of organization: the autonomous organization. Such an organization will be operated largely by artificial intelligence. The CEO? Artificial intelligence. The VPs? Artificial intelligence. The middle managers? Artificial intelligence. And the workers on the ground? Well, maybe there will still be humans there. Managed, of course, by artificial intelligence.
Such organizations will be able to compete with existing companies, sometimes with a success that is almost embarrassing for the humans. Their CEOs will make better strategic decisions than human CEOs. Their middle managers will be more attentive, more sensitive, and more aware of the emotional state of human employees than any human manager. And it goes without saying that the cost of operating these artificial intelligences will be far lower than a manager's usual salary.
This all sounds well and good, but it's hard not to wonder: Who can guarantee that those artificial managers will make decisions that will serve human society as a whole?
Some argue that the purpose of a commercial company in a capitalist market is only to enrich the pockets of investors and shareholders. Let's face it: there is some truth to this. Commercial companies sometimes act against the broader public interest, just to put more money in the pockets of managers, owners and investors. It is likely that artificial CEOs will act in this way in many cases. And if they are truly more efficient and successful than their human counterparts, then they could cause great damage, without a trace of human oversight.
How do we deal with this future problem? Good question. It's hard to find a solution in advance for problems that aren't here yet. Suffice it to say that AI thinkers are trying to find answers to these questions, but no one is yet sure what will work and what won't.
In the meantime, we will continue to monitor what is happening in the field of artificial intelligence, and we will adhere to one principle: never take our eyes off it for a moment. Not to assume that it is magic we have no chance of understanding, but to keep demanding that human managers work with it with caution and constant vigilance.
And if they don't know how to do it?
In that case, it might be best not to put a loaded gun in their hands.