An unusual experiment at a traditional Austrian shoe company has found that incorporating artificial intelligence into management discussions can lead to smarter decisions, slow down the pace of discussion – and make managers trust the machine too much.

A year and a half ago, the managers of Giesswein received a tempting offer: throughout 2024, artificial intelligence would be added to the company's management meetings and given a place of honor around the table where the most important decisions are made.
Let's be clear: Giesswein is not a tiny company or a startup of three hipster founders. It is a private Austrian company more than seventy years old, with revenues of 85 million dollars and almost two hundred employees. The company designs, manufactures, and sells environmentally friendly shoes. In short, it is difficult to think of a more traditional and conservative company.
Giesswein executives knew this well, and that’s exactly why they wanted to incorporate AI into their decision-making. As the study authors wrote, the executives realized that they were already too aligned with each other, and “could almost complete each other’s sentences. … They were also aware that their shared experience and repetitive routines naturally created blind spots.”
The managers wanted to overcome these blind spots. They wanted the opposing view.
And so, they agreed to bring artificial intelligence into management discussions.
The researchers, Christian Stadler of the University of Warwick Business School, and Martin Reeves, who runs the Boston Consulting Group’s Henderson Institute, subscribed to GPT-4 and got to work. Before each board meeting, they asked it for questions and discussion topics about the agenda, and shared the output during the meeting. During the meetings themselves, they asked it to raise points and offer ideas that would complement those of the executives. After the meetings, they held discussions with the AI to get more advice and points, or simply fed the executives’ questions directly into it.
The interaction with artificial intelligence lasted an entire year, at the end of which the researchers – and the managers – reached an unequivocal conclusion:
"Artificial intelligence can certainly be valuable in guiding and enriching executive discussions, but only with actively engaged management."
The researchers quickly discovered that when the AI was only asked simple questions, or made suggestions about the agenda before the meeting, it did not provide interesting answers. In fact, the managers labeled its suggestions as “obvious, unsurprising, or clichéd.”
When did AI’s true value emerge? It emerged as a result of the active integration of humans and AI. Specifically, executives found that AI’s advice was “particularly helpful” when it received guidance from “a human advisor with no specific knowledge of the company, but with experience in developing strategies.”
It is fair to say that the experiment ended on a positive note. The executives were pleased, and the researchers wrote an article about the research in one of the world's most respected business magazines – Harvard Business Review. There they revealed three truths: one that was obvious from the start, a second that surprised them, and a third that should bother us all.
The first truth: Saving time and resources
The obvious truth is that AI helped the managers make decisions faster and more cost-effectively. The managers used it to get data quickly (hopefully they learned to be wary of its hallucinations), and used it to get “accurate-enough” estimates, as the researchers put it, so they could move forward without waiting for a full study to be conducted around each decision. In some cases, it could replace researchers or expert writers: for example, instead of hiring an agency to write a press release about a particular decision they made, they ran ChatGPT – and it immediately provided them with a statement that suited their needs.
It is worth noting here that the researchers conducted this study even before OpenAI released Deep Research to the market – an artificial intelligence tool that can write entire reports in a matter of minutes, based on information and research it conducts online. If artificial intelligence assisted in decision-making even before it could conduct in-depth research online, it is difficult to imagine how much it can contribute to executive meetings now, by integrating a real-time 'analyst' into every meeting.
The second truth: Sometimes you have to disrupt to help
The second truth surprised the researchers, but was one of the experiment's greatest strengths.
“It may sound counterintuitive, but one of the biggest benefits of ChatGPT was that it disrupted the natural flow of meetings,” the researchers wrote in their paper. “We expected it to be very frustrating for managers, as it added to the awkwardness, sloppiness, and delays of the meeting. But managers appreciated the fact that it forced them to stop and think.”
AI disrupted and slowed down the normal interactions between the executives in the meeting. After many years of working together, they already knew how each other thought, and were fixated on their opinions and accepted work patterns. AI provided them with lists of ideas and points to consider, forcing them to deal with different perspectives than usual. The meetings may have progressed more slowly on certain topics, but ultimately the executives appreciated the fact that AI made them step out of their comfort zones and see the world differently than they were used to.
And what is the third and final truth? It is a point that should bother every manager who uses artificial intelligence. And truth be told, it should worry us all. The researchers called it "the illusion of perfection."
The third truth: The illusion of perfection
The researchers monitored Giesswein's management meetings for an entire year, as mentioned, and over time they began to notice a disturbing phenomenon: the managers began to rely too heavily on the artificial intelligence. They treated its recommendations as the be-all and end-all, and when it missed a point, they didn't try to use their own human brains to figure out what was missing. They simply accepted what it said, and didn't think beyond it.
In one case, for example, the managers consulted ChatGPT to understand what issues they needed to consider before making a particular statement. The AI provided a nice list of points, but missed the fact that the managers also needed to consider the legal implications of the statement. The managers fell into the “illusion of perfection”: the implicit assumption that the AI had provided all the issues they needed to think about. They didn’t try to think beyond that list.
To be clear: this is not the usual fear of “hallucinations.” We all already know that artificial intelligence can “lie” to us, or conjure up false “facts” and present them as truth. The concern here is greater, because humans can get used to transferring some of their independent thinking to artificial intelligence – and thus fall into the “illusion of perfection.” Implicitly, and without saying it out loud, they begin to trust it more than they trust themselves.
This means that smart managers need to use AI with caution, and part of their job must include independent thinking and careful consideration of AI recommendations. And if they can’t do that, then at least they should use several different engines – because each AI engine is biased in different ways, and cross-referencing their recommendations can reveal truths that any single engine has left by the wayside.
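To make the cross-referencing idea concrete, here is a minimal sketch in Python. The engine responses below are stubbed example lists, not real model output (the actual calls to different AI services are left out); the point is simply that merging the lists and flagging issues raised by only one engine surfaces exactly the kind of point – like the legal question above – that a single engine would have missed.

```python
# Sketch: cross-referencing issue lists from several AI "engines".
# The responses below are stubbed examples, not real model output.
from collections import Counter

def cross_reference(responses: dict[str, list[str]]) -> dict:
    """Merge issue lists from several engines and flag single-source items."""
    # Count in how many engines' lists each issue appears.
    counts = Counter(issue for issues in responses.values() for issue in set(issues))
    merged = sorted(counts)
    # Issues raised by only one engine deserve extra human scrutiny.
    flagged = [issue for issue, n in counts.items() if n == 1]
    return {"merged": merged, "needs_human_review": sorted(flagged)}

# Example: three engines asked what to consider before a public statement.
responses = {
    "engine_a": ["PR timing", "competitor reaction", "cost estimate"],
    "engine_b": ["PR timing", "cost estimate", "legal implications"],
    "engine_c": ["PR timing", "competitor reaction", "cost estimate"],
}

result = cross_reference(responses)
print(result["merged"])
print(result["needs_human_review"])  # only engine_b raised the legal question
```

Any single engine here would have given a plausible-looking but incomplete list; the disagreement between engines is what points the humans at the gap.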
The "illusion of perfection" also has a complex meaning for the future. If today's optimistic predictions are correct, then artificial intelligence will reach the level of superintelligence in less than a decade. This superintelligence will be able to think better, faster, and more broadly than any expert, or even any committee of experts. When it provides assessments and recommendations on a particular issue, they will truly be better than those of any person or any management team. In other words, there will be no "illusion of perfection" but "truth of perfection."
Even then, that superintelligence will still obey the instructions of its creators. At least, we hope so. But superintelligence will also have its own biases, and its own ways of 'seeing' the world. And the more we get used to believing in and trusting AI, the harder it will be for us to resist it when it tries to impose its worldview on us. The "illusion of perfection" makes it clear that we need to find the balance between using AI and relying completely on it. We need to chart the right ways for humans to interact with the god in the machine.
Summary
When I started reading the article published by the researchers, I thought they would describe how they had integrated autonomous artificial intelligence into the company's senior management – that is, how they had effectively replaced the managers with an artificial intelligence that could make decisions on its own. I was disappointed to discover that they 'only' helped the managers receive advice and insights from the artificial intelligence. But the truths they revealed require us to rethink how to work properly with artificial intelligence – in management meetings, and in general.
The first truth revealed that AI can help managers reach decisions faster and more cost-effectively. The second truth revealed that it can actually complicate, slow down, and confuse decision-making – but that managers find great value in it. Both of these truths highlight the importance of properly integrating AI into boardroom meetings.
And the third truth? It makes clear that it is no longer enough to just be wary of artificial intelligence's hallucinations. If you are a manager – and it doesn’t matter whether you manage just yourself or a multi-billion dollar company – you need to use artificial intelligence wisely, carefully, and with a keen awareness that it can make you mentally lazy. If you fall into the “illusion of perfection,” then at some point you will discover that you have missed critical points, and when you make the wrong decision, you are the one who will be held responsible. Not the artificial intelligence. You.
This third truth is the scariest to me, especially since I have no doubt that even in government offices and in the halls of the Knesset, artificial intelligence is already being used to speed up decision-making. When artificial intelligence helps shoe company executives make wrong decisions, it can 'only' cause millions of dollars in damage. When it is used incorrectly in government, it can cost a great many lives.
So what do we do? Use it, but wisely. Even Excel can mislead us if we don't use it correctly. Artificial intelligence is a tool like any other, and we need to know how to work with it and develop the right ways to deal with it. We need to know what to watch out for, and what are the signs that it is hiding information from us or misleading us. And yet, just as it is impossible to imagine a manager who doesn't use Excel today, it is clear that artificial intelligence must also be used to realize the full potential of the individual and society.
Good luck to us all.
More of the topic in Hayadan: