
By using AI wisely, we can come to know its true nature – and the secrets of our own minds.

Future historians may consider 2023 a milestone in the advent of artificial intelligence (AI). But whether the future it heralds proves utopian, apocalyptic or somewhere in between, no one yet knows. In February, ChatGPT set a record as the fastest app to reach 100 million users. Next came AI models from Google, Amazon, Meta and other big tech companies, which collectively are set to transform education, health care and many other knowledge-intensive sectors. In May, however, a statement signed by leading researchers warned of AI’s potential for harm, declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. And in November, responding to growing concern about AI risks, 27 countries (including the UK, US, India, China and the EU) convened at the AI Safety Summit at Bletchley Park in England, pledging to cooperate so that AI is developed safely and for the benefit of all. To achieve this, researchers are focusing on AI alignment – that is, how to ensure that AI models are consistent with human values, preferences and goals. But there is a problem, the so-called dark secret of AI: large-scale models are so complex that they are like a black box, impossible for anyone to fully understand.

AI’s black box problem

Although transparency and explainability of AI systems are important research goals, such efforts seem unlikely to keep up with the frenetic pace of innovation. The black-box metaphor helps explain why people’s beliefs about AI vary so widely around the world: predictions range from fantasy to extinction, and many even believe that artificial general intelligence (AGI) will soon achieve consciousness. This uncertainty compounds the problem. AI alignment must work in both directions: we must ensure not only that AI models are consistent with human intentions, but also that our assumptions about AI are accurate. This is because we are remarkably adept at creating a future consistent with our beliefs, even when we are unaware of them. This expectancy effect, or self-fulfilling prophecy, is well known in psychology. And research has shown that manipulating users’ beliefs not only changes how they interact with AI but also affects how well the AI adapts to the user. In other words, how our beliefs (conscious or unconscious) influence AI can increase the likelihood of any outcome, including catastrophic ones.

AI, computation, logic and arithmetic

We need to probe more deeply to understand the basis of AI – like Alice in Wonderland, to go down the rabbit hole and see where it takes us. First of all, what is AI? It runs on computers, and so it is a form of automated computation. From its origins in the McCulloch–Pitts neuron – an artificial neuron defined mathematically by neurophysiologist Warren McCulloch and logician Walter Pitts in 1943, and the forerunner of the perceptron – AI has been linked with cognitive science, neuroscience and computer science. This convergence of mind, brain and machine has given rise to the widely held belief that, because AI is computation by a machine, natural intelligence (the mind) must be computation by the brain. But what is computation? In the late 19th century, the mathematicians Richard Dedekind and Giuseppe Peano proposed a set of principles that defined arithmetic in terms of logic, and inspired efforts to place all of mathematics on a secure formal foundation. Although the logician Kurt Gödel later proved that this goal was unattainable, his work was the starting point for the mathematician (and code-breaker) Alan Turing. His Turing machine, an abstract device capable of universal computation, is the foundation of computer science.
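To make the link between logic and neural computation concrete, here is a minimal sketch of a McCulloch–Pitts-style threshold neuron. The weights and thresholds are illustrative choices, not values from the 1943 paper; the point is simply that a binary threshold unit can realise logical operations such as AND and OR, which is what connects artificial neurons to logic and, ultimately, to computation.

```python
# A minimal sketch of a McCulloch-Pitts-style artificial neuron.
# Weights and thresholds below are illustrative assumptions, chosen to show
# how a threshold unit over binary inputs can realise logical operations.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Both inputs must be active to reach the threshold of 2.
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    # A single active input is enough to reach the threshold of 1.
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```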

The deep structure of perception

Computation, therefore, rests on mathematical ideas that grew out of attempts to define arithmetic in terms of logic. But our knowledge of arithmetic predates logic. If we want to understand the basis of AI, we have to go further still and ask where arithmetic comes from. My colleagues and I have recently shown that arithmetic is based on the deep structure of perception. This structure is like a pair of colored glasses that shapes our perception in particular ways, so that our experience of the world is organized and manageable. Arithmetic consists of a set of elements (numbers) and operations (addition, multiplication) that combine pairs of elements to give another element. We asked: out of all the possibilities, why are numbers the elements, and why are addition and multiplication the operations? We showed by mathematical proof that when the deep structure of perception is used to constrain the possibilities, arithmetic is the result. In other words, when our minds look at the abstract world through the same colored glasses that shape our experience of the physical world, they see numbers and arithmetic. Since arithmetic is the foundation of mathematics, it follows that mathematics is a reflection of the mind – an expression in symbols of its primal nature and creativity. Although the deep structure of perception is also found in other organisms and is therefore a product of evolution, only humans have invented mathematics. It is our most intimate creation and, by enabling the development of AI, perhaps our most consequential achievement.
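As a purely illustrative sketch of what "elements and operations" means here – this is not the proof referred to above – the following shows arithmetic built in the Dedekind–Peano spirit: numbers are generated from zero by a successor step, and addition and multiplication each take a pair of numbers and return another number.

```python
# An illustrative sketch of arithmetic as elements plus operations, in the
# Dedekind-Peano spirit. Python integers stand in for the abstract numbers;
# the recursive definitions are the point, not the representation.

def successor(n):
    return n + 1  # stand-in for the abstract successor step S(n)

def add(a, b):
    # a + 0 = a;  a + S(b) = S(a + b)
    return a if b == 0 else successor(add(a, b - 1))

def multiply(a, b):
    # a * 0 = 0;  a * S(b) = (a * b) + a
    return 0 if b == 0 else add(multiply(a, b - 1), a)

if __name__ == "__main__":
    print(add(3, 4))       # 7
    print(multiply(3, 4))  # 12
```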

Promethean fire

What are the implications for AI? First, AI has no mind of its own and will never become sentient. The idea that we could transcend our biological nature and achieve immortality by uploading our minds to the cloud is mere fantasy. Yet if the structure of mind on which AI is based is shared by all of humanity (and possibly by other living beings), it may become possible to overcome the limitations of our individual minds. Because computation is universal, we are free to simulate and create whatever outcomes we choose in our increasingly virtual and physical worlds. In this way, AI really is our Promethean fire – as in the Greek myth, a gift stolen from the gods and given to humanity. As a global civilization, we are possibly at a turning point. AI will not become sentient and decide to kill us all. But we are entirely capable of sabotaging ourselves with it, because expectation can create reality. Efforts to ensure AI alignment and safety are vitally important, but they may not be enough if we lack awareness and collective wisdom. Like Alice, we have to wake up from the dream and recognize the reality and power of our own minds.
