"System 0", which will support and improve our cognitive abilities in the future, is the ongoing revolution described in the journal "Nature Human Behavior" by a multidisciplinary group of scientists
The relationship between humans and artificial intelligence is giving rise to a new thinking system - a new cognitive scheme, external to the human brain yet capable of enhancing its thinking abilities. The researchers call it "System 0", and it operates alongside the two established models of human thought: System 1, characterized by intuitive, fast and automatic thinking, and System 2, which represents slower, more analytical and deliberate thinking. System 0, however, adds a further layer of complexity, radically changing the cognitive space in which we operate. This may be a huge step forward in the evolution of the human capacity to think and make decisions, but we will have to make sure this progress is used to strengthen our cognitive autonomy, not to undermine it.
This is described in the prestigious scientific journal "Nature Human Behaviour", in an article entitled "The case for human-AI interaction as system 0 thinking", written by a team of researchers led by Professor Giuseppe Riva, director of the Humane Technology Lab at Università Cattolica's Milan campus and of the Applied Technology for Neuropsychology Lab at the Istituto Auxologico Italiano IRCCS, Milan, and Professor Mario Ubbiali of Università Cattolica's Brescia campus. The study was conducted in collaboration with Massimo Chiriatti of Lenovo's Infrastructure Solutions Group in Milan, Professor Marianna Ganapini of the Department of Philosophy at Union College, Schenectady, New York, and Professor Enrico Panai of the Faculty of Foreign Languages and Linguistics at Università Cattolica, Milan.
A new external way of thinking
Just as an external storage drive lets us save data that is not held on our own computer and connect to it from anywhere, artificial intelligence, with its vast data-processing capabilities, can act as an external circuit to the human brain, capable of enhancing it. This is where the idea of System 0 comes from: in effect, an "external" mode of thinking that relies on the capabilities of artificial intelligence.
By managing vast amounts of data, artificial intelligence can process information and provide suggestions or decisions based on complex algorithms. Unlike intuitive or analytical thinking, however, System 0 assigns no internal meaning to the information it processes. In other words, artificial intelligence can calculate, predict and generate responses without truly "understanding" the content of the data it is working with.
Humans, therefore, must themselves interpret and give meaning to the results that artificial intelligence produces. It is like having an assistant that gathers, filters and organizes information efficiently, but still requires our intervention to make informed decisions. This cognitive support provides important input, but ultimate control must remain with humans.
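To make this division of labor concrete, here is a minimal, purely illustrative sketch in Python; it is not from the paper, and all names in it (ai_rank_options, human_decide) are hypothetical. It shows the pattern the researchers describe: the machine filters and ranks options without attaching meaning to them, while the human inspects the suggestions and keeps the final decision.

```python
# Illustrative "System 0" division of labor (hypothetical names):
# the AI component filters and proposes; the human interprets
# the proposals and takes responsibility for the final choice.

def ai_rank_options(options: list[str]) -> list[str]:
    """Stand-in for an AI system: it can score and order
    information, but it attaches no meaning to the result."""
    return sorted(options, key=len)  # a trivial, meaning-blind heuristic

def human_decide(ranked: list[str]) -> str:
    """The human step: critically review the machine's
    suggestions and make the informed decision."""
    print("AI suggestions (ranked):")
    for i, option in enumerate(ranked, start=1):
        print(f"  {i}. {option}")
    choice = int(input("Your choice (number): "))
    return ranked[choice - 1]

if __name__ == "__main__":
    suggestions = ai_rank_options(
        ["take the long route", "wait an hour", "leave now"]
    )
    print("Final (human) decision:", human_decide(suggestions))
```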
The risks of System 0: loss of autonomy and blind trust
"The risk," emphasize Professors Riva and Obiali, "is to rely excessively on system 0 without exercising critical thinking. If we passively accept the solutions provided by artificial intelligence, we may lose the ability to think for ourselves and develop innovative ideas. In a world that is becoming more and more automated, it is important that humans continue to ask questions and confront the results produced by artificial intelligence," they emphasize.
Furthermore, transparency of, and trust in, artificial intelligence systems pose another central dilemma: how can we be sure that these systems are free of bias and distortion, and that they provide accurate and reliable information? "The growing trend of using synthetic, artificially created data may damage our perception of reality and negatively affect our decision-making processes," the professors warn.
They also point out that artificial intelligence may disrupt the human capacity for introspection - the uniquely human thought process in which we reflect on our own thoughts and feelings. As artificial intelligence advances, we may come to rely on intelligent systems to analyze our behaviors and mental states for us. The questions that arise are: to what extent can we really understand ourselves through an artificial intelligence's analysis, and can artificial intelligence reproduce the complexity of subjective experience?
Despite these questions, the professors point out, System 0 also offers enormous opportunities. Thanks to its ability to process complex data quickly and efficiently, artificial intelligence can support humanity in solving problems that lie beyond our natural cognitive abilities. Whether in solving complex scientific problems, analyzing huge databases or managing complicated social systems, artificial intelligence may become an indispensable partner.
To realize the potential of System 0, the authors of the article argue that ethical and responsible guidelines for its use must be developed urgently. "Transparency, accountability and digital literacy are key elements that will enable people to interact critically with AI," they write. "Educating the public to navigate this new cognitive environment will be critical to avoiding the risks of over-reliance on these systems."
The future of human thinking
They conclude that, if we are not careful, System 0 could interfere with human thinking in the future. "It is essential that we remain aware of, and critical about, how we use it; the true potential of System 0 will depend on our ability to steer it in the right direction."
One response
The sin of arrogance
Human arrogance in trying to control biology so that it "harms others but does not harm us" - biological and even chemical warfare - has been disproven again and again: in the end, the harm sometimes reaches the initiator of the war as well. Yet we keep trying to use it.
(Here "sometimes" effectively means "always in the end": even if, in any single attempt, the weapon succeeds in hitting only the intended targets and sparing "our group", repeat the attempt enough times and, statistically, the probability of it eventually hitting us too reaches 100%.)