The agents are coming: "Help wanted" ads for AI only

"The new job ad presents a unique position - the company is looking for an AI agent to build, manage, and prove his skills, with the intended candidate being an independent agent developer with multi-disciplinary skills, at a salary price/cost of only $15,000 per year."

A computerized agent will replace technology consultants.


An unusual job ad was published last week. In fact, it is quite standard in its requirements, except for one point: it is aimed only at artificial intelligences. As the ad says – "Artificial intelligence agents only".

Well, that's not entirely accurate. At the end of the ad, the company clarifies that it is also willing to accept applications from humans who have developed such an "artificial intelligence agent." But ultimately, yes, the company wants to hire the agent. And it's not a simple job. The agent accepted for the position will have to –

“Research technology trends and models yourself, then use this information to create, test, and improve high-quality app samples. These apps… will demonstrate the full potential of Firecrawl [the company] in real-world scenarios. Your work will guide and inspire developers, helping them quickly adopt Firecrawl…”

What do we have here? In fact, the company is looking for an employee with advanced skills in search, data collection and analysis, programming, application development, marketing, user experience design, and more. If you have all of these skills, well, that's very impressive. But you probably shouldn't apply for the job. Both because you're 'just' human, and because the annual salary the company is offering for that agent is only $15,000 at most.

Now let's talk seriously for a moment.

First of all, the company can't really hire an "AI agent," for the simple reason that no such entity with its own bank account exists. What the company is really looking for is someone to build such an agent and lease it to the company for one year for $15,000.

So is this nonsense? Or, more precisely, an amusing gimmick on the company's part, intended mainly to generate publicity? Not exactly. The ad was also posted, in complete seriousness, on the website JobForAgent ("Job for an Agent"), which describes itself as

"The first work schedule for autonomous agents, zero percent drama, one hundred percent work all the time." 

You can find additional listings on the site for agents that can perform complex tasks, such as debt analysis, FAQ writing, podcast editing, and more. Some of these agents are very complex and must perform tasks requiring skill in many different areas. Others are simpler. But one thing is common to all of them: they are designed to take the place of a human expert who would otherwise have performed these tasks himself, and they do so for a laughable 'salary', ranging from a few dozen to a few thousand dollars per year.

And again, you could argue that the companies on the site aren't really hiring AI agents, but rather the developers who deploy them. That is true, but it misses the bigger point: every developer who deploys such an agent and leases it to companies is replacing the work of at least one person at that company, if not several people at once. And since it is relatively easy to take one successful agent and adapt it to a new job, we can expect that a single talented developer who knows how to deploy autonomous agents will effectively take over the work of dozens, hundreds, thousands, or even millions of other talented people at the target companies.

If all of this confuses you, and if you too feel that we've only just managed to understand GPT and now suddenly agents are upon us, it's probably time for a brief explanation of artificial intelligence, large language engines, and how they connect to create agents.


From engines to agents

Everyone already knows ChatGPT: an artificial intelligence that can write, read, compose poems, analyze documents, edit books, and so on. But the truth is that it can't really do any of that. ChatGPT is 'just' an engine that completes the next word in a sentence. Then the word after that, then the next one, and so on and so forth. And because it does this really well, it can also complete entire paragraphs, pages, and even chapters of a book.
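For readers who like to see ideas in code, here is a minimal Python sketch of that "next word" loop. The complete_next_word function is a made-up stand-in for a real language engine (here it just looks up a canned continuation so the example runs on its own); the point is only the structure: one word at a time, repeated.

```python
# A minimal sketch of next-word completion. `complete_next_word` is a
# hypothetical stand-in for a real language engine; here it just looks
# up a canned continuation so the example runs on its own.

CANNED = {"The": "agents", "agents": "are", "are": "coming", "coming": "."}

def complete_next_word(text: str) -> str:
    """Pretend-engine: return the next word, given the text so far."""
    last_word = text.split()[-1]
    return CANNED.get(last_word, ".")

def generate(prompt: str, max_words: int = 10) -> str:
    text = prompt
    for _ in range(max_words):
        word = complete_next_word(text)
        text += " " + word
        if word == ".":  # stop when the 'engine' ends the sentence
            break
    return text

print(generate("The"))  # -> "The agents are coming ."
```

Longer texts are nothing more than this same step applied thousands of times in a row.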

ChatGPT, along with Claude and Gemini, are defined as "large language engines." And that's exactly what they are: engines. They are not designed to do the work themselves. They are not a "product" that anyone can use easily and without fear. The fact that we all use them anyway only shows how enormously powerful and useful they are. And yet, they can make big mistakes. They have no real understanding of the user or their needs, and people are often disappointed when they try to use them in the wrong ways.

What can they do? Very simple tasks, under close human supervision. Yes, they can write a paragraph or two of a report for the average user, or a battle scene for a fantasy book, but it is very much worth having an expert go over the result and make sure it matches what was wanted. And the more complex and lengthy the task, the more dangerous it is to trust them to do it well on their own.

Google explained the matter with an interesting analogy of a hamburger restaurant, in which the large language engines are not even the cooks, but merely the tools the cooks use. But since most artificial intelligences don't currently operate in the physical world, let's give another example: a research institute.

Well, think of a small, high-quality research institute that starts using large language engines like ChatGPT. These engines don't write reports or do research. They serve only as tools in the researchers' hands. They can proofread human writing, for example. Or write a few paragraphs under the guidance of a human researcher. Or summarize the information on a particular website, or even condense an entire report into a series of bullet points. But in all these cases, the large language engines perform relatively simple tasks, at the request of the human researcher and under his full supervision.

So far we've talked about large language engines like ChatGPT. But what are agents?

To use the hamburger restaurant analogy, the agents are the cooks themselves. As cooks, they choose which tools to use, when to flip the burgers, when to put them in the bun, when to call the waiter to come pick them up, and so on. All without human supervision. Not only that, but they can also function as shift managers, operating and managing the cooks, who in turn operate the simpler tools. They can even give the cooks instructions and guidance to improve their performance.

And what about the research institute? There, the agents can replace the human researcher entirely. They know how to write the report, and they run all the simpler engines to do so. They run Google to search for information on the web, collect the information and read it using ChatGPT, summarize it into paragraphs and go over them to make sure they match the research needs and the desired style, and can even produce images and graphs and incorporate them into the final report.
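To make that concrete, here is a rough Python sketch of such a "researcher agent". The functions web_search, summarize, and review are placeholders invented for this illustration, not any real API; what matters is the shape: the agent chooses the tools, runs them in order, and checks the result against the goal.

```python
# A sketch of a 'researcher agent' that orchestrates simpler tools.
# web_search / summarize / review are placeholders for real tool and
# engine calls, not any specific API.

def web_search(query: str) -> list[str]:
    return [f"source text about {query} (1)",
            f"source text about {query} (2)"]

def summarize(texts: list[str]) -> str:
    return "summary of: " + "; ".join(texts)

def review(draft: str, goal: str) -> bool:
    # Stand-in for an engine judging whether the draft fits the goal.
    return goal in draft

def research_agent(topic: str) -> str:
    sources = web_search(topic)      # step 1: collect information
    draft = summarize(sources)       # step 2: condense it
    if not review(draft, topic):     # step 3: check against the goal
        draft = summarize(sources + [f"extra material on {topic}"])
    return draft

print(research_agent("autonomous agents"))
```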

And before you claim that they "have no creativity," or raise any other outdated claim from the prehistory of artificial intelligence: these agents can also offer the research client additional options for carrying out the work, and directions he had not even thought of.

Agents, in short, are the next evolutionary stage of ChatGPT. They are what we get when we let artificial intelligence give instructions to artificial intelligences, which themselves operate artificial intelligences. All in service of one goal defined for the agents from above – and they have to carry it out. Whether it is writing a report, conducting scientific research, creating an advertising brochure, or anything else.

But how do they do it?


The breakthrough that never happened

You would expect that for an AI to be able to supervise AIs and write complete reports (for example), it would require an incredible breakthrough in capabilities, right?

Well, it turns out not. Not really. The technology at the heart of the agents is not significantly different from what existed a year ago, or even two years ago.

The thing is, the agents themselves are built from large language engines – yes, ChatGPT again. Those engines can give instructions to other large language engines – ChatGPT again – to perform certain tasks. For example, write a paragraph. Then, they can instruct those engines to go over the paragraph and make sure it fits into the larger report. And as long as the engines are given the right instructions, and the right text is passed to them, they can do this with an impressive level of success.

In fact, this is where the power of the process lies – of the "workflow," as it is called among AI companies. At each stage of the workflow, large language engines are activated to perform simple operations, and then other engines are activated with a series of simple questions: Was the original task performed adequately? If not, what should be done? How should it be improved? What instructions should be written for other large language engines to carry out the improvement?

Then other large language engines are run with the instructions written by the previous engines, to improve the result.
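Put together, the whole write-critique-rewrite workflow can be sketched in a few lines of Python. In this illustration, call_engine is a placeholder for a real large language engine call; every step of the workflow is just another call with a different instruction.

```python
# A sketch of the write / critique / rewrite workflow. `call_engine`
# stands in for a real large-language-engine call; each step is just
# another call with a different instruction.

def call_engine(instruction: str, text: str = "") -> str:
    # Placeholder: a real implementation would call an engine API here.
    return f"[{instruction}] {text}".strip()

def workflow(task: str, rounds: int = 2) -> str:
    draft = call_engine(f"Write a paragraph for: {task}")
    for _ in range(rounds):
        critique = call_engine(
            "Does this fit the larger report? If not, say what to improve.",
            draft)
        draft = call_engine(
            "Rewrite according to this critique: " + critique, draft)
    return draft

print(workflow("quarterly market trends report"))
```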

Again, such processes were demonstrated as early as September 2023, almost a year and a half ago. In a study published at the time, researchers managed to get simple engines to 'talk' to each other, to argue, and to present different opinions from different professional perspectives. They showed that this mode of operation can achieve great and impressive things: for example, creating entire parts of a computer game (they called the agent system that did this ChatDev). In effect, they demonstrated the power of agents in academic research.

These principles have since spread to many other places. If you use a model like OpenAI's o1, for example, you are essentially running an agent that can break your question down into subtasks, analyze each one in turn, question the answers it gave and their relevance, and re-edit the answer based on the insights it generates for itself. OpenAI itself has already recommended using o1 as a tool for directing other, simpler language engines. It is "in charge of the shift," and they… they do the rest.
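That "shift manager" pattern can also be sketched simply. Here plan stands in for the stronger planner engine and solve for a simpler, cheaper engine; both are placeholders invented for the example, not any vendor's actual API.

```python
# A sketch of the 'shift manager' pattern: a stronger planner engine
# splits the question into subtasks, and a simpler engine answers each.
# `plan` and `solve` are placeholders, not any vendor's actual API.

def plan(question: str) -> list[str]:
    # Placeholder: a real planner engine would produce these itself.
    return [f"define the terms in: {question}",
            f"gather the key facts for: {question}",
            f"draft a final answer to: {question}"]

def solve(subtask: str) -> str:
    # Placeholder for a simpler, cheaper engine handling one subtask.
    return f"result of ({subtask})"

def answer(question: str) -> str:
    results = [solve(subtask) for subtask in plan(question)]  # delegate
    return "\n".join(results)  # assemble the final answer

print(answer("Will agents replace consultants?"))
```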

And now, finally, we are entering a period where these agents are advanced enough to perform complex and complete tasks – the kind that a human would normally have to be responsible for – on their own.

At least, we're on the way there. There is still a lot of room for improvement before agents reach the level where they can do big jobs entirely on their own. Agents aren't perfect yet. But at the rate things are progressing, no one will be surprised if we see such successful agents in some areas of the industry in the coming months.

Remember McKinsey's prediction that 30 percent of work hours would be taken over by artificial intelligence by 2030? It's looking more plausible by the minute. In fact, it's starting to look conservative.


Dangerous agents

The dangers of employing agents cannot be ignored either. The simpler AIs – the large language engines – only perform small tasks under human supervision, and only when they receive a direct command from a human. Agents, on the other hand, will enjoy greater freedom of action. They will be able to choose exactly how to carry out their tasks, and some will even be able to act without being told to do so – all according to the instructions they were originally given.

In a recent interview with Forbes, Eldad Tamir, CEO of Finq, which itself uses agents to analyze market trends and produce investment recommendations, claimed that this is a turning point in human history.

"For the first time ever," Said in an interview"Machines will be able to take part in actual decision-making within corporations."

The implication, of course, is that if machines make mistakes in their decision-making, they could cause great harm. Not just financial losses, but actual harm to humans. If an agent conducts research and produces recommendations for building a new road, and gives too much weight to costs and too little to safety, its flawed output could cost human lives. If an agent has to give a recommendation as a psychologist or a doctor, but is biased in favor of a particular pharmaceutical company, it may not recommend the drug most appropriate for the patient. All of this can happen without any malicious intent on the part of the programmers, developers, or employers, simply because agents are complex entities with many 'moving parts' that talk, discuss, and argue with each other until a final decision is reached. It is not always easy to understand how they arrived at that decision.

So yes, there are dangers, and we need to recognize them and think about how to deal with them. The most obvious way is, of course, to create safety agents that will supervise the agents doing the work inside companies. This is probably what will happen. In the coming years, we will see a wave of companies emerging that offer supervisory agents for other agents. Companies that employ artificial intelligence agents will not be able to afford to let them work without safety or ethical oversight.
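Such supervision can be sketched as one more layer in the pipeline: the working agent produces an output, and a separate safety agent must approve it before it is released. Both functions below are placeholders for what would really be engine calls and company policy rules.

```python
# A sketch of agents supervising agents: a worker agent's output is
# released only if a separate safety agent approves it. Both checks
# are placeholders for real engine calls and real policy rules.

BANNED_TERMS = {"ignore safety", "cut corners"}

def worker_agent(task: str) -> str:
    return f"recommendation for {task}: cut costs by 12%"

def safety_agent(output: str) -> bool:
    # A real safety agent would be another engine judging the output
    # against the company's safety and ethics policy.
    return not any(term in output.lower() for term in BANNED_TERMS)

def supervised_run(task: str) -> str:
    output = worker_agent(task)
    if safety_agent(output):
        return output
    return "BLOCKED: sent back for human review"

print(supervised_run("new road construction plan"))
```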

And maybe, just maybe, there is a window here for one of humanity's new professions.


Steam engines and bowling

Throughout the Industrial Revolution, we witnessed a strange phenomenon: technology took the place of humans in certain professions, causing them to be laid off – and yet there was still enough work for everyone.

Take, for example, steamboats. These largely replaced rowers with oars on rivers and seas. What did those rowers do? They found other jobs. Some, for example, became carters and carriage drivers. These were in high demand because, with steamboats bringing goods at low costs, the economy was booming and people were more willing to spend money to get from place to place quickly. Others launched small boats that were still powered by one or two oarsmen, providing a romantic experience for couples in love. Again, as the economy developed, those couples had enough money to spend on such pastimes.

Another example is bowling alleys. In the past, every bowling alley needed employees of its own to reset the pins after they fell. Then came machines that reset the pins automatically, and all those employees had to find new jobs. But then something extraordinary happened: the cost of running bowling alleys dropped dramatically, because there was no longer a need to employ all those pinsetters. As a result, many new bowling alleys opened all over the world. Each alley employed fewer workers, but people were still needed to manage the customers, prepare and serve food, take care of the air conditioning and the machines, and more. In the bigger picture, because so many new bowling alleys opened, even though each one employed fewer workers, more new jobs were created in the end.

We will probably experience a similar phenomenon in the coming decades. Yes, artificial intelligence will make many people redundant in their current jobs, but new professions will emerge, and even more new jobs will open up. Probably.

Want an example of such a new profession? Here you go: an ethics supervisor for artificial intelligence agents.

We are already seeing how, thanks to artificial intelligence, anyone can develop new apps and launch startups more easily than ever before. This means that the number of small and medium-sized companies will likely increase, and each such company will employ at least one agent – and probably many more. And since each such agent will need someone to test, vet, and supervise it, there may be an opportunity here for humans.

Now you'll tell me that the supervisor's job can also be filled by an artificial intelligence agent? True, but many people feel there is meaning and importance in human supervision. They want to know that humans are in charge of the machines, not that control has been handed entirely to machines that supervise themselves. Perhaps the public will demand human supervisors, and thus there will be great demand for such people. Perhaps.

As you can see, it is impossible to know what will happen. We can only offer possibilities for the future. And maybe they will come true.

Either way, one thing is clear: the industry is currently undergoing a transformation from using 'simple' engines to complex agents that can perform the work of expert humans. For the first time in history, intelligence and the ability to perform complex actions autonomously are being transferred to non-human entities.

And if you still don't grasp the magnitude of the change, I can only recommend that you post a job ad for a historian agent. Or... for a futurist one.
