
What you can learn from ChatGPT about being human

Perhaps the success of ChatGPT requires us to update our concepts of what "thought" is, what "creativity" is, and what "speech" is. Does ChatGPT think and talk like us, or is it an imitation of the real thing? What does it say about thought if an AI model can think, and what does it say about us if thought can be imitated so convincingly?

Figure: Dotan Reis, using DALL-E 2

In recent years, artificial intelligence models (hereafter AI) have repeatedly proven that they can do quite a few things we thought were our exclusive property as humans. ChatGPT, the most famous of them, developed by OpenAI, is able, given a query from the user, to produce an original text that demonstrates not only an understanding of the query and its context, and extensive knowledge, but considerable creative ability. Many see this as a threat to their occupational future and to society as a whole, and probably rightly so. But another facet that does not, in my opinion, get enough attention is the shadow that ChatGPT and its kind cast on our perception of ourselves as human beings. ChatGPT's success may require us to update our notions of what "thought" is, what "creativity" is, and what "speech" is. Does ChatGPT think and talk like us, or is it an imitation of the real thing? What does it say about thought if an AI model can think, and what does it say about us if thought can be imitated so convincingly?

ChatGPT and human language

Ludwig Wittgenstein (1889-1951), one of the greatest philosophers of the 20th century, especially in the context of language, promoted the idea of "meaning as use." He opposed the notion that language is only a verbal expression of a thought that preceded it: that first we knew something, and only then did we express it in language; that first came the meaning, and then the word that mediates it was invented. Although he himself held a similar view at the beginning of his career (like most of his contemporaries), he eventually came to the conclusion that such views only confuse us. According to him, the meaning of words comes from the way they are used. More precisely, the uses of a word are its meaning. ChatGPT certainly knows how to use words, and this is what makes it an intriguing case for testing this concept.

The ChatGPT training process raises questions about the relationship between language and reality. The model is able to produce coherent and meaningful language based on statistical patterns and relationships between words, without necessarily having an understanding of objects in the real world, or of concepts that the language refers to. This emphasizes the idea that the relationship between language and reality is complex and not always simple and direct.

The quote above was written by ChatGPT itself (in my translation), and refers, with characteristic ambiguity, to the tension between use and understanding. As humans, we have a direct acquaintance with the world. We can talk about a "chair" and also sit on the chair, pick it up, look at it, build it or destroy it; we can imagine chairs, contemplate them and feel things about them. ChatGPT can only talk about chairs, but somehow, even though it has none of the other experiences we have with chairs, it can talk about them in a perfectly coherent way that does not betray its inexperience with the real world at all. Indeed, the relationship between language and reality is "not always simple and direct" (in the original: straightforward).

ChatGPT is an AI model based, in the abstract, on learning statistical relationships between words. The model is built so that, given a text, it knows how to guess the next word in it. Before it is trained, it guesses gibberish; but during the training process it "reads" huge amounts of text and runs a learning algorithm that adjusts it according to the words that appear in the text. Thus, with experience, it learns to return words that are likely to appear next in the text given to it. In other words, given a certain text (or user query), ChatGPT returns the word that is statistically most likely to come next. It then repeats this to produce another word, and another, until it reaches a stopping point. And all this while, at no point in the learning process, is there anyone to help it see the connection between words and objects in the world. No one places a chair in front of it, or lets it sit on one, helping it attach meaning to the word.
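The idea of "guess the next word from statistics alone" can be made concrete with a toy sketch. The following is only an illustration with a made-up three-sentence corpus: real models like ChatGPT use neural networks over sub-word tokens rather than raw word counts, but the objective - predict the likely next word, then repeat - is the same in spirit.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
corpus = [
    "the cat sat on the chair",
    "the dog sat on the mat",
    "the cat saw the dog",
]

# "Training": count, for every word, which words follow it in the texts.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely word to follow `word`, if any."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

def generate(start, max_len=6):
    """Generate text one word at a time, exactly as described above."""
    out = [start]
    for _ in range(max_len):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Note that the model never "meets" a chair; it only ever sees which words tend to neighbor "chair" in text, and that alone lets it emit coherent-looking continuations.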

Figure: Dotan Reis, using DALL-E 2

In a large part of his thought, Wittgenstein tries to separate the practice from what stands behind it, which is inaccessible. ChatGPT is the ultimate test case for his approach, in that its practice - writing - is very similar to ours, while we know that what stands behind it is not human. If meaning is indeed in use, then there is no difference between a human utterance and an AI utterance.

In one of the most important parts of his book Philosophical Investigations (Magnes Press, 2013), Wittgenstein deals with "following a rule" (sections 138-242). He argues there that while we sometimes think we can encompass something in thought - for example a poem, a certain action, or the concept of infinity - in fact we cannot (nor need to) hold the whole thing at once. What we have is the ability to move forward step by step. We know, after humming the first verse, what the first line of the chorus is. We know how to take any number and add 1 to it. We know, given a conversation with someone, how to say the next sentence. This is what it means to know a song, to understand infinity, and to know how to hold a conversation. Thus, when we understand a certain subject, when we know something, the thing we know is the rule we follow.

Wittgenstein's claims imply, as I understand them, that what characterizes us is our dispositions. We have a certain mental apparatus that, given different situations, produces different outputs. If this apparatus knows how to add 1 to every number, the person understands the concept of "infinity." If it knows how to continue the song, the person knows the song. If it writes the next line in the proof, the person understands math. But Wittgenstein did not imagine us as automatons. We also have a mental world, inner experiences and emotions; they are just not part of the analysis of how we use language.

The philosophical problem posed by ChatGPT stems precisely from the fact that it does not have an inner world like ours, and yet it speaks like us. It manages to follow the same rules as us, to use the same language as us. If we accept Wittgenstein's definition of meaning as use, there is no longer any such thing as an "imitation" of language. Whoever knows how to use words also understands them. And if ChatGPT understands the words without ever having touched the things themselves - without having seen a chair, sat on it, felt it, or banged a finger on it - how much better is our understanding than its, if at all? If it is possible to know everything about the world just by learning semantic relationships between words, what is the value of all the experiences we have accumulated?

Figure: Dotan Reis, using DALL-E 2

Be a ChatGPT

ChatGPT allows us a clearer understanding of "meaning as use" and the implications of this idea. If understanding a language is knowing how to speak (or write), this changes what we understand about ourselves and our ability to think. We have certain dispositions, some conscious and some not, and with their help we create language while speaking or writing. Our inner world has no clear role in this process. This does not mean it is not important for other things: emotions help us process information, we have feelings that guide us about which information to accept and which to resist, and so on. All of these help shape us. But given a "query," we put them aside and just start producing speech.

It is interesting to observe that our language-learning process does not presuppose understanding. When children learn their first words, they learn them not through understanding meaning but through use (Wittgenstein refers to language learning in Philosophical Investigations, sections 1-21). They learn that in certain situations certain things are said. We tell them: "Say 'mama.' Mama." We ask them to complete sentences: "one, two, three, hands on the ". We teach them to associate a situation with a word: "say bye." And this continues for many years. We say "bon appétit" without pondering the "meaning" of the phrase. The same is true of the words "Amen," "Congratulations," "Good luck." One could say that we teach children to follow rules. They learn, statistically, what the next word to be said should be.

Of course, children's learning is much harder than ChatGPT's: their world of stimuli is far more complex and rich, and includes not only text. They need to understand not only what needs to be achieved now, but also how to express it, to whom, and when to speak. For this they do need to know what a "chair" looks and feels like. They need a hand pointing them to the object while they hear its name. But often it works even without that, and when it comes to abstract concepts there is not even anything to point to. In other words, we start by grasping the usage, and the mental link comes later, or in some cases never comes, or remains vague.

Perhaps our ability to use language has less to do with understanding than we are used to thinking. Maybe when we learn to speak, we are just learning to speak. Our feelings, emotions and character have other roles. Speaking is just an action. When we start this action, we do not always know how it will end and where we will arrive. Sometimes it is clear to us what the continuation of the thought is, and sometimes there is uncertainty (which is certainly also true of ChatGPT: sometimes there is one word with high probability, and sometimes several with low probability). Sometimes our speech flows and sometimes we get stuck in the middle, or have to go back and correct ourselves. Sometimes we reach conclusions that surprise us. Sometimes we find that the "query" dictates the conclusions we reach - something that also happens to ChatGPT.

Perhaps our use of language is merely following a rule. It is a rule defined by who we are and by our experiences, so it is ours, in a sense. But the result is not really ours. Perhaps it is more correct to think of speech as a habit: the habit is ours, and when its results are not good, we can, and should, consider changing it.

Figure: Dotan Reis, using DALL-E 2

Think, step by step

When ChatGPT "thinks," it can "generate harmful instructions or biased content" and "generate misinformation," as the site warns us. People, too, of course, produce harmful instructions and biased and incorrect content - in social networks, in comment sections, and in debates we badly want to win. Maybe the difference between us and ChatGPT is not that big. We receive some input from the world and start thinking in language. What makes us believe that our thoughts are necessarily the "right" thoughts? How do you know whether your response to this article is correct, or helpful? By what criteria?

In the past I wrote here about flat-earth believers, who have a certain method of acquiring knowledge, with the help of which they came to the conclusion that the earth is flat. I tried to argue that if they examined the method itself, and not just the arguments, they could easily see why such a method has no chance of success. But that is easier said than done, and probably no flat-earther was convinced.

It is not necessarily easy or self-evident to find an objective criterion for judging our thoughts, and it is a problem that preoccupies, among others, philosophers who try to define what truth is, psychologists who try to find sound ways of thinking, political thinkers who try to develop ways to bridge social polarization, and more. This is one of the things ChatGPT cannot (yet) help us with, but it can inspire us to be less possessive of our opinions and allow ourselves to see different perspectives. Then we will have to decide for ourselves which ones are better. This is (still) our sole responsibility as humans, and perhaps therein (still) lies the advantage of the human over the machine.


  • In this article I have taken Wittgenstein's philosophy as a given. His view is quite dominant today, but (as with any philosophical view) it has opponents with different opinions. For a good introduction to Wittgenstein's philosophy, and to the philosophy of language in general: Contemporary Analytic Philosophy / Karl Munitz.
  • Juan Luis Gastaldi is a historian of science who, among other things, deals with the epistemological implications of AI for the world of mathematics; his lecture on the subject was an important inspiration for this post.
  • The quote from ChatGPT in the article came as an answer to the query "Explain how chatGPT was trained" followed by "Relate that to philosophy of language and meaning". Did you manage to get different and interesting results out of it?

8 comments

  1. A bird feather flying in the wind
    Found a friend in a little bird
    that amuses herself with her perception
    teaching her to fly and sing,

    A song of animals and nature
    a love song
    A song of freedom and courage
    A song about a feather and a bird,
    The bird's feather was alone
    in the wide and great world
    But when she saw the little one
    She smiled at her with friendship and a happy heart.

    Written by Microsoft's ChatGPT, which I asked to write me a short poem containing the words "bird feather."

  2. Shmulik,
    That's right, my claim is really only relevant to written communication, which is the only thing GPT knows, and it is communication that does not correspond directly with things in the world (like the communication between us right now).
    What you claim about the chair is that GPT uses the word "chair" in a different way, and if that is really true, then according to Wittgenstein, too, it understands the word in a (slightly) different way; but I am not sure it is true that it would not know how to say that a tin can and a branch are things you can sit on.

  3. I'm not a professor, and I don't know how you define "creativity", but why not?

  4. Is there a place, Professor Dotan Reiss, to talk about "creativity" (human or of an AI system) other than in the context of "inventiveness"?

  5. Fascinating article.
    I have two comments that may also allude to the gap between natural and artificial intelligence.
    1. Wittgenstein did speak of meaning as use, but he did not mean the use of language in isolation from reality. On the contrary, in order to understand the meaning, one must observe the use of language within the situation, in the everyday reality in which the particular use is made. Language is therefore closely related to the way of life. The statistics do not reflect the enormous wealth of situations in reality and it is perhaps easy to understand this from the following example: on the question of what should be done if a fire breaks out in the room I am in - the GPT will list the possible steps in a good way.
    But if a fire does break out in the room, the GPT will not shout loudly "get out of the room quickly!" As any person with a little "common sense" would do.
    2. In the concepts of everyday language there is ambiguity in the sense that it is difficult to find a precise definition for the concept so that it contains all the objects that belong to the concept and only them (a.k.a. a group in group theory).
    You can see this, for example, with the term "chair".
    Is a chair something that has 4 legs, a seat and a backrest or maybe everything you can sit on? None of the definitions are inclusive and exhaustive.
    This ambiguity in the language allows the flexibility of use according to different and new reality situations.
    Thus a tin, a branch of a tree or a suitcase can be a chair if necessary. The GPT, for example, will have difficulty offering a branch of a tree as a chair if statistically this use is rare or non-existent.
    Both comments indicate a gap between the 'understanding' of AI and the 'understanding' of natural intelligence at least when it comes to the relationship between language and reality. This could also explain why the GPT produces false answers so easily.

  6. God forbid I should underestimate Wittgenstein. He believed that philosophy should mark out the boundaries of the debates in the various sciences of life. For him, philosophy is not a body of doctrines but an activity grounded in logic. In the end he came to the conclusion that
    "What can be shown cannot be said" (see Wittgenstein's Tractatus Logico-Philosophicus, sections 4.005 - 4.112). However, my argument is that there is no need to demarcate the field of thinking into what is acceptable and what is not, but rather to break through the framework and expose its fallacies, and this is done through things that go beyond the "normal" kind - for example fat tails - much like the thinkers and scientists who were also the first philosophers of quantum mechanics: Heisenberg, Einstein and Bohr (as far as I know, none of them studied in a philosophy department).

  7. 1 - Let's say there is such a person, Israel Israeli, and on five different websites it is written that he is a rapist and a murderer. If you were asked who Israel Israeli is, would you not say he is a rapist and a murderer?
    2 - It seems to me that you are slightly underestimating Wittgenstein if you think he would have thrown his theory in the trash had he been aware that language could be encoded in 0s and 1s. I don't see that as something of great philosophical significance.

  8. It's a shame to mix oranges and apples. In his time, Wittgenstein was not aware of the binary language of the computer on which ChatGPT is based, i.e. 0 or 1. Artificial intelligence is based on statistics, and statistics are very easy to distort. Let's assume there is a man named Israel Israeli, an upright man, pure as snow. I can open five different websites in different countries and write that this Israeli is a rapist and a murderer. The result will be that if ChatGPT is asked who Mr. Israeli is, the answer will be that the man is a rapist and a murderer. Equally important, and even more so, is randomness. In a binary language there is no pure randomness, only pseudo-randomness, and here too, as with statistics, the randomness can be distorted by finding its weak points. The lesson was learned from the hundreds of billions wiped out of the financial markets by models based on statistics and pseudo-randomness, usually due to fat-tail events (heavy-tailed distributions) or high kurtosis.
