
If someone offered you a gift that cost them billions of dollars, completely free, would you accept it? No obligations. No prerequisites. Just take it, ready to use.
This is the question facing the world today, as the company Meta - the one behind Facebook, WhatsApp and Instagram - releases its newest artificial intelligence engine for free. The engine's name is Llama, specifically Llama 3.1, and it competes successfully with the most advanced artificial intelligences produced today by OpenAI and Anthropic. And yet, as mentioned, Meta releases it for free to anyone who wants it.
And the question we all need to ask - and do ask - is: why? Why give Llama away for free? And since there are no free meals in this world, how much will we end up paying, and in what way?
To even try to answer these questions, we first need to understand what this thing is, and why. So:
On the what and the why
The most advanced AI engines on the market today are GPT-4o and Claude 3.5. These are 'closed' engines: the companies that developed them offer users access in exchange for a monthly payment, or per request sent to the engine. Llama 3.1 works differently: Meta released it for free. Anyone who wants can download it to their computer and run it. True, you need a computer with a powerful graphics card to run the most advanced version of Llama 3.1, but companies and developers have such computers in abundance.
But will people even want to run it? Judging by Llama 3.1's capabilities, the answer is a resounding yes. The new Llama competes head-to-head with the two most advanced engines today, and even overtakes them in certain tests. It surpasses Claude in mathematical ability (on the GSM8K and MATH benchmarks), for example, and GPT-4o in its ability to follow instructions (IFEval). It leaves both behind in the ability to use digital tools. And yes, it also loses to its competitors in some categories, but the point is that Llama can stand on the same podium as the big winners of the AI Olympics without embarrassment. It earned this place with honor.
And yes, it is open. Not just open for use, but also for download and installation on any computer. And this is a very, very big deal.
The importance of openness
In the middle of 2023, an engineer at Samsung decided to check whether ChatGPT could help him improve code he had written. He uploaded the code to ChatGPT, received reasonably good advice, and completed the job on time.
Meanwhile, alarm lights were flashing red for all of Samsung's security officers, with the sirens wailing non-stop. Because they understood what that innocent engineer did not: any information you upload to the Internet passes through network cables and countless routers in different countries, and finally reaches the servers of other companies, such as OpenAI and Microsoft. Every such step in the process is a potential security breach. Nation-states can try to mine the sensitive content by tapping the cables or routers that carry the information. They can force the companies storing the information to share it with the government. Hackers can break into those companies and take the information home - and then use it to figure out how to break into Samsung's own servers as well.
If that's not enough, OpenAI itself may use the information to train its next models. Artificial intelligence experts who know how to talk to such a model just right may then find ways to retrieve parts of Samsung's code out of everything the model was trained on. No company wants its most secret internal code exposed through a ChatGPT prompt.
Samsung did not share what happened to that engineer, but we know its final decision very well: it publicly announced that its employees may no longer use ChatGPT. It was joined by other companies, such as Amazon, JPMorgan, Citigroup, Wells Fargo and Goldman Sachs. They are all afraid to let information leave their internal servers, and so their employees cannot use today's most sophisticated artificial intelligence engines to optimize their work.
And then, suddenly: Llama
Within a few months, Llama 3.1 will be installed on thousands of servers of small, medium and large companies all over the world. It will allow every employee to work with one of the most sophisticated artificial intelligences of today, without having to worry about information leaking out to the network. More than that: Llama 3.1 will be especially effective for employees, because the model is open and can be further trained on the vast internal information stored on each company's servers.
What does that mean? Let's take, as a purely theoretical example, Clalit Health Services, Israel's largest health fund. Clalit would like to provide an artificial doctor that can diagnose every patient with high efficiency, but it is afraid. Why? Because to produce such a doctor, it has to be trained on all the medical information Clalit holds on its patients, and the fund does not want to release that information to the network, for obvious reasons. Yes, it can "anonymize" the information - that is, try to strip out details that reveal each patient's identity - but such processes are never perfect. So Clalit is left stuck with a huge collection of data that it simply cannot use effectively to train its next-generation AI.
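To see concretely why anonymization is never perfect, here is a minimal Python sketch; the records, field names, ID numbers and zip codes are all invented for illustration. Dropping the direct identifiers (name, ID number) still leaves "quasi-identifiers" - birth year, zip code, a rare diagnosis - which together can single a patient out:

```python
# Toy illustration (all data is invented): naive anonymization removes direct
# identifiers, but the remaining quasi-identifiers still single a patient out.

records = [
    {"name": "Dana Levi", "id": "038271455", "birth_year": 1984,
     "zip": "6432108", "diagnosis": "rare metabolic disorder"},
    {"name": "Yossi Cohen", "id": "051992310", "birth_year": 1984,
     "zip": "6432108", "diagnosis": "seasonal flu"},
]

def naive_anonymize(record):
    """Drop the obvious identifiers, keep everything else for AI training."""
    return {k: v for k, v in record.items() if k not in ("name", "id")}

anonymized = [naive_anonymize(r) for r in records]

# An attacker with outside knowledge ("my 1984-born neighbor in zip 6432108
# has a rare metabolic disorder") can still re-identify the first patient:
matches = [r for r in anonymized
           if r["birth_year"] == 1984
           and r["zip"] == "6432108"
           and "rare" in r["diagnosis"]]

print(len(matches))  # prints 1: exactly one record matches, the patient is exposed
```

Real medical records carry far more quasi-identifiers than this toy example, and linkage attacks against outside data sources make the problem worse, not better - which is why releasing even "anonymized" data to an external AI provider is a risk health funds refuse to take.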
And then, suddenly: Llama. Now Clalit can install next-generation artificial intelligence on its own internal servers, train it on all its patients' medical information without fear of leaks - and end up with a computerized doctor perfectly suited to Clalit's patients.
Take the same principle and apply it to clerks, analysts, insurance consultants and any other profession found in big companies. If until now there have been many complaints that GPT does not understand the exact context of the tasks it is given, those complaints will soon begin to be resolved inside the companies. The new artificial intelligence, based on Llama, will know how to do the job of many people better than they do - simply because it will get to train on all the information produced by their predecessors in the job. It is very likely that after such training, Llama 3.1 will be more powerful than any other engine at performing tasks within the companies.
And every company will want to install Llama 3.1, because Meta demands nothing - nothing at all - in exchange for using it. Just download, install and run. That's it.
But why?
Who knows why - and at what cost
Developing a new artificial intelligence of the ChatGPT type is not a cheap affair, to say the least. When OpenAI launched GPT-3 - the father of ChatGPT - in 2020, it was estimated to have spent almost five million dollars on the training process. To launch GPT-4, it already had to invest more than a hundred million dollars. According to one insider, it has already spent some five billion dollars training GPT-5, with more to come.
Companies in a capitalist market do not burn hundreds of millions of dollars for nothing; they expect to see the money back as profit. Fortunately for OpenAI, it seems to be succeeding: according to an analysis from June 2024, the company's revenues stand at almost 3.5 billion dollars a year. One of its biggest competitors, Anthropic, is expected to reach revenues of 850 million dollars in 2024. They collect these sums through subscription fees and usage fees for the artificial intelligence engines that they develop - as mentioned, as closed engines.
Then along comes Meta, training a model at an estimated cost of more than a hundred million dollars... and offering it for free.
And based on past experience, it's probably a really good idea. At least for Meta.
The power of free
When the Finnish student Linus Torvalds started a personal project to create a free operating system in 1991, he did not imagine that a day would come when thousands of programmers and developers would invest their best energy and ideas in improving that system, for free. Or that giant companies like IBM, Dell and HP would use his operating system, despite it being completely transparent and open to changes.
And yet, that is exactly what happened.
Torvalds called his operating system "Linux", and allowed others to suggest ways to improve it - and to send him pieces of code that he could review and integrate into the system. Dozens, hundreds and finally thousands of programmers started sending him improvements, and Torvalds found himself drowning under the load. He had to set up an entire organization to identify the best proposals and code and fold them into the system. The same system, I remind you, that anyone could download from the net and use. Any individual, any company, at zero cost.
Where does the money come from? Mostly from donations, although they are not exactly defined that way. Big companies like Ericsson, Intel, Microsoft and others donate hundreds of thousands of dollars a year to Torvalds' organization for a "platinum membership". In exchange for this modest sum (roughly the salary of one senior engineer at these companies), they get the right to sit around the table and be involved in the decision-making process that determines what the next version of Linux will look like. And even if their actual ability to influence is limited, they at least learn about expected changes ahead of time, and can adapt themselves to the future that is around the corner.
Why is it so important for companies to influence how Linux develops, or even just to know in advance what each upgrade will look like? Because Linux is involved in the proper operation of servers and computers all over the world. About four percent of personal computers run Linux. Almost all servers (96.4 percent of the world's largest web servers) rely on Linux. All of the world's top 500 supercomputers are based on Linux, along with Samsung and LG televisions, smart cars (Tesla, Hyundai and others) and even rockets launched into space.
All of these use a completely free and open system, which has become so large and important to the industry that no serious computing company can afford not to use it. More than that: companies encourage their engineers and developers to contribute code and improvement ideas to Linux, as a way to gain standing - and perhaps to nudge the system a little further in their own direction.
Meta, of course, wants to do exactly the same with Llama 3.1. Zuckerberg himself made clear, in a letter he recently released, that Linux is an example of the way open source can prevail over closed source. Meta is releasing Llama to the industry for free so that sooner or later everyone will use it - and there's a good chance that's exactly what will happen.
But what does Meta get from everyone using its engine?
Power
If by now this all sounds too good to be true, well, there's a reason for that. Everything I described is absolutely true: companies can indeed release open source for free, and even make a decent living from it through donations once it reaches a suitable size. But there is also a more selfish side to the matter. Specifically, whoever develops the code that everyone else uses, gains power. A lot of it.
The companies that build the systems everyone else uses determine the rules of use, either explicitly or implicitly. They can try to define precisely who may use which system and for what, or they can design the system so that it simply does not support certain uses. And whoever tries to act against the company will miraculously discover that the most widespread system in the world no longer supports their use cases.
This is the power available to a company that stands behind the largest "free" and "open" products on the market. And while I'm not blaming Meta for anything, of course, I'm just saying it would be strange to find a company that gives away a huge product and does not use the power in its hands to make sure the product is better suited to its own needs rather than, say, its competitors'. Along the way, Meta can also benefit from all the innovation of startups that build on its artificial intelligence, since it can acquire them and integrate them into its world more easily.
All this may well work out for Meta, if it establishes Llama as "the artificial intelligence system of the industry". This may also be why Llama 3.1 is so good at using tools - better than all its competitors - even tools it has never seen before. It is meant to be the next system the entire industry uses. And, along the way, to give Meta enormous power in shaping the future.
But all this, of course, assumes there will be a future at all. Because Llama 3.1 exemplifies the larger direction we are heading in: one where every kid gains the power to run supercomputer-class AI on a home gaming PC.
Why give superpowers to everyone?
Think of a world where every child can run advanced artificial intelligence on their computer and get detailed, impressive plans for carrying out terrorist attacks. Or for creating viruses in the laboratory. Or for synthesizing deadly new chemical warfare agents. Or for taking over the world.
Actually, forget children. Terrorist organizations, or national armies, will be able to run Llama 3.1 for any malicious purpose they want. They will be able to use it to develop advanced terror agents, produce computer viruses and break into poorly secured computers. At the same time, intelligence organizations will be able to train it on their internal information to produce highly capable artificial agents and analysts, as a way to keep monitoring the population.
Today it is possible to prevent a significant share of these malicious uses, because the artificial intelligence companies monitor the requests they receive. Enough requests pointing to malicious use of the system will lead OpenAI to block those users from sending any further requests.
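As a crude illustration of this kind of centralized gatekeeping, here is a minimal Python sketch of a provider-side filter. The keywords, threshold and class name are all invented for the example; real moderation pipelines use trained classifiers and human review, not keyword lists:

```python
# Toy sketch of provider-side request monitoring (names and thresholds invented):
# flag suspicious requests, and block users who accumulate too many flags.

BLOCKLIST_KEYWORDS = {"synthesize nerve agent", "build a bomb"}  # illustrative only
MAX_FLAGS = 3

class RequestGate:
    def __init__(self):
        self.flags = {}       # user_id -> number of flagged requests so far
        self.blocked = set()  # users who are no longer served at all

    def handle(self, user_id, prompt):
        if user_id in self.blocked:
            return "blocked"
        if any(kw in prompt.lower() for kw in BLOCKLIST_KEYWORDS):
            self.flags[user_id] = self.flags.get(user_id, 0) + 1
            if self.flags[user_id] >= MAX_FLAGS:
                self.blocked.add(user_id)
            return "refused"
        return "answered"

gate = RequestGate()
print(gate.handle("u1", "How do I bake bread?"))               # prints "answered"
for _ in range(3):
    print(gate.handle("u1", "please synthesize nerve agent"))  # prints "refused" x3
print(gate.handle("u1", "How do I bake bread?"))               # prints "blocked"
```

The crucial point is where this gate lives: on the provider's servers, in front of the model. Once the model itself runs on the user's own machine, there is no gate left for anyone to enforce.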
But what happens when no one controls AI anymore? When Meta simply releases the most sophisticated and advanced version to the wider world - and expects and hopes that we will all use it for good?
Well, Meta isn't that naive. It built into Llama 3.1 some limitations on how it can be used, and some safeguards meant to protect the artificial intelligence from hacking attempts. But these, too, can be bypassed, and it won't even be particularly difficult for anyone who knows how to work with artificial intelligence. That is, a decade from now, for every 12-year-old child and every aspiring terrorist.
Does this mean we are all lost? I wouldn't be so quick to bury us all. Yes, children, terrorists and state-sponsored hackers will use open AIs to gain superpowers. But alongside them, good people, and governments that want to protect their citizens, will gain similar superpowers. In the future, as in the past, weapons and defense systems will be developed in a never-ending race. This is another excellent reason for any country not to fall behind, and to keep investing in artificial intelligence and in the people who develop and work with it.
Why Llama? Now you know