A linguistic anthropologist argues that the line between humans and machines, in terms of how we communicate with one another, is blurrier than most people would like to admit, and that this blurring accounts for much of the discourse swirling around ChatGPT.
By Brendan O'Connor, Associate Professor, School of Transborder Studies, Arizona State University

ChatGPT is a hot topic at my university, where faculty members worry about academic integrity while administrators urge us to "embrace the benefits" of this "new frontier." It's a classic example of what my colleague Punya Mishra calls the "doom-hype cycle" around new technologies. Likewise, media coverage of human-AI interaction, whether panicked or breathless, tends to emphasize its novelty.
In one sense, this moment really is new. Interactions with ChatGPT can feel unprecedented, as when a technology journalist couldn't get the chatbot to stop professing its love for him. In my view, however, the line between humans and machines, in terms of how we communicate with one another, is blurrier than most people would like to admit, and this blurring accounts for much of the discourse swirling around ChatGPT.
When I'm asked to tick a box to confirm I'm not a robot, I don't give it a second thought: of course I'm not a robot. On the other hand, when my email software suggests a word or phrase to complete my sentence, or when my phone guesses the next word I'm about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the software hadn't suggested it? Am I part robot? These large language models were trained on vast amounts of "natural" human language. Does that make the robots part human?
AI chatbots are new, but public debates over language change are not. As a linguistic anthropologist, I find the human reactions to ChatGPT the most interesting thing about it. A close look at those reactions reveals the beliefs about language that underlie people's ambivalent, uneasy, still-evolving relationship with AI interlocutors.
ChatGPT and its ilk hold up a mirror to human language. Humans are both highly original and unoriginal when it comes to language. Chatbots reflect this, revealing tendencies and patterns that are already present in our interactions with other humans.
Creative or recycled?
Recently, the renowned linguist Noam Chomsky and his colleagues argued that chatbots are "stuck in a prehuman or nonhuman phase of cognitive evolution" because they can only describe and predict, not explain. Rather than drawing on an infinite capacity to generate new phrases, they compensate with enormous amounts of input, which allows them to predict which words to use with high accuracy.
This is consistent with Chomsky's landmark insight that human language could not be produced merely through children's imitation of adult speakers. Human language capacity had to be generative, because children do not receive enough input to account for everything they say, much of which they could never have heard before. That is the only way to explain why humans, unlike other animals with sophisticated communication systems, have a theoretically infinite capacity to create new expressions.
Noam Chomsky developed the generative theory of language acquisition.
But there is a problem with this argument. Although humans are endlessly capable of creating new strings of language, they generally don't. People constantly recycle bits of language they have encountered before and shape their speech in ways that respond, consciously or unconsciously, to the speech of others, present or absent.
As Mikhail Bakhtin, a Chomsky-like figure for linguistic anthropologists, put it, "thought itself," along with our language, "is born and shaped in the process of interaction and struggle with the thoughts of others." Our words "taste" of the contexts in which we and others have encountered them before, so we are constantly wrestling to make them our own.
Even plagiarism is less straightforward than it seems. The notion of stealing someone else's words presupposes that communication always takes place between people who come up with their own original ideas and expressions. People may like to think of themselves this way, but the reality shows otherwise in nearly every interaction: when I repeat something to my daughter that my father used to say; when a president delivers a speech drafted by someone else, expressing the views of an outside interest group; or when a therapist interacts with her client according to principles her teachers trained her to attend to.
In any given interaction, the framing of production (speaking or writing) and reception (listening, or reading and understanding) varies in terms of what is said, how it is said, who says it and who is responsible in each case.
What AI reveals about humans
The prevailing view of human language sees communication primarily as something that happens between people who invent new expressions from scratch. That assumption breaks down when Woebot, an AI therapy app, is trained to interact with human clients by analyzing conversations from human-to-human therapy sessions. It breaks down when one of my favorite songwriters, Colin Meloy of The Decemberists, asked ChatGPT to write lyrics and chords in his own style. Meloy found the resulting song "incredibly mediocre" and lacking in intuition, yet also recognizably in the style of a Decemberists song.
Yet as Meloy pointed out, chord progressions, themes and rhymes in human-written pop songs tend to mirror other pop songs, just as politicians' speeches draw freely on previous generations of politicians and activists, whose speeches were already brimming with expressions from the Bible. Pop songs and political speeches are especially vivid illustrations of a more general phenomenon. When anyone speaks or writes, how much is newly generated, à la Chomsky? How much is recycled, à la Bakhtin? Are we part robot? Are the robots part human?
People like Chomsky, who say that chatbots are nothing like human speakers, are right. But so are those like Bakhtin, who point out that we are never really in control of our words, at least not as much as we imagine ourselves to be. In that light, ChatGPT forces us to revisit an age-old question: how much of our language is really our own?
One response
Maybe we are more similar to a random person sitting next to us than we are to the child we once were in nursery school all those years ago?
We are shaped by impulses and emotions that are influenced by the interplay of genetic expression with the physical and social environment. On top of that, our social layer is shaped as well, with our different languages a central part of it. Our impulses and emotions are in constant, close interaction, sometimes overlapping, with the part of us formed by the accumulated knowledge of thousands of years of human society.
The new kind of AI imitates a person's outer shell without the system of impulses and desires (the engines, so to speak) that forms a kind of backbone of human thinking. The outward boundaries between us and AI will likely narrow in the future, but as long as these are systems that do not feel, there will remain a difference between us and them: they will be simulations of consciousness. Even a person, a system with emotions such as empathy for others, can shut off parts of their emotions, sometimes under social influence; but an AI system has no emotional component to switch on in the first place. We can only explain to it the external effects of emotions. To really understand what emotions are, to feel empathy for the suffering and sadness of another consciousness, is something that must be experienced from the inside. It is like a system that analyzes light spectra and returns a number but does not see colors the way conscious systems, such as people, do.