Human and Machine Intelligence: A Thought Experiment and a Few Questions
With the intention of using it as a framework for investigating machine intelligence, this author set out to describe a straightforward, comprehensive, and easy-to-understand theory of how the brain produces intelligence.
Committed to defining intelligence in terms of perception, brain physiology, and anatomy, he was confident that such a broad framework would enable his theory to uncover not only how the brain creates intelligence but also how it becomes aware of and comprehends the thoughts it generates.
Ignoring most of the other brain systems, Hawkins's framework focused on the seat of intelligence, the neocortex.
The MIT Turing Curse
When Jeff Hawkins arrived on the scene in the early 1980s, he found that neuroscience had abundant data but lacked theories. He therefore set about utilizing these data, initially as the source material for his thought experiments and later as the foundation for crafting a heuristic theory of intelligence of his own.
Drawing inspiration from Alan Turing's indirect approach to defining intelligence, Artificial Intelligence (AI) at the time aimed to construct computer programs that exhibited intelligent behavior. With this strategy, there was no need to directly define intelligence or spell out what it meant to "understand."
Indeed, when Turing himself addressed the question of how to build an intelligent machine, he believed that computers could be intelligent but refrained from engaging in the nitty-gritty arguments about its feasibility. He also felt unable to formally define intelligence, so he did not attempt to.
Instead, he proposed an existence proof for intelligence-- the famous Turing test: if a computer can deceive a human interrogator into believing that it is merely another person, then by definition, that computer is intelligent.
Guided by the Turing test as his measuring stick and the universal Turing machine as his medium, Turing played a pivotal role in launching the field of AI.
The central dogma of AI was that the brain is merely another type of computer. Regardless of how an AI system is designed, if it exhibits human-like behavior, it is inherently intelligent.
Hawkins believed that, in following Turing into this ill-conceived knowledge silo, the promoters of AI had overlooked two crucial elements in building intelligent machines: intelligence and understanding.
In fact, MIT, the birthplace of AI, took great pride in disregarding neurobiology because, as they argued, the neurosciences were simply too complex.
Their ultimate goal was to write computer programs, akin to Turing's universal computing machines, that would match or surpass human capabilities. If this endeavor failed, they would simply simulate the entire brain.
At the heart of the Turing perspective lay the notion that computers were fundamentally the same regardless of their construction details. In the Turing universe, intelligence was the enigmatic black box that only needed to demonstrate intelligent behavior.
However, Hawkins would argue that intelligence is not merely a display of intelligent behavior but rather a phenomenon that occurs within our brains.
While AI achieved remarkable success employing the Turing approach, it remained true that even with its triumphs over humans in chess (Deep Blue), Go (DeepMind's AlphaGo), and Jeopardy! (IBM's Watson), AI still struggled to prove that Turing's universal computing machine was all that was required to replicate human intelligence in a machine.
Arguably, while those three programs were highly successful and, to some extent, engaged in both creative and autonomous decision-making, few experts mistook their actions for genuine human intelligence.
Most significantly, there is no evidence that any of these programs comprehended their own actions, nor did their designers appear to recognize the significance of "understanding" in defining intelligence.
Thus, while those three superhuman programs were undoubtedly impressive and groundbreaking, the jury is still out on whether they were merely large and fast storage devices equipped with sophisticated lookup tables, or the computer equivalent of John Searle's "Chinese Room," exhibiting intelligent behavior without truly understanding it.
The recent "passing of the Turing test" by a machine only further confused the matter of the value of using an indirect test to determine what is intelligent.
To truly define intelligence, the author decided he needed an entirely different framework from the one provided by Turing's famous test.
He sought a framework that acknowledged the fact that while computers and brains are built on fundamentally different principles, they share many commonalities.
As for their differences, one is programmed, while the other is self-organized. One requires perfection to function, while the other tolerates failures. One has a CPU, while the other lacks centralized control. Computers are human creations, while brains evolved through self-assembly, making them autonomous, self-regulating, and capable of understanding the meaning of their outputs.
On the other hand, CPUs are merely collections of logic gates, and computer chips are composed of millions of logic gates working together to compute algorithms. Similarly, nerve cells can function like living logic gates, implementing the same basic logical operations as a CPU. And, like computer chips, billions of neurons collaborate to form the nervous system. Additionally, the neocortex utilizes vast amounts of memory to make predictions and compute algorithms.
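To make the logic-gate comparison concrete, here is a minimal sketch (my own illustration, not anything from the book) of a McCulloch-Pitts-style threshold neuron in Python; the weights and thresholds are arbitrary choices that happen to reproduce a CPU's basic AND and OR gates.

# A McCulloch-Pitts-style threshold neuron: it "fires" (outputs 1) when the
# weighted sum of its inputs reaches a threshold, much as a logic gate
# outputs 1 when its input conditions are met.
def threshold_neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With illustrative weights and thresholds, the same neuron model behaves
# like the basic gates a CPU is built from.
def AND(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_neuron([a, b], weights=[1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0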
Beyond these comparisons, Hawkins recognized that brains not only perform computations like computers but also think, are aware of their thoughts, and comprehend their meanings.
These are profound differences that cannot be cleverly concealed behind superficial answers to one-on-one interrogations.
Hawkins believed that it is precisely these latter abilities, which so far no computer can perform, that must be at the center of any theoretical framework attempting to explain human intelligence.
This is when Hawkins engaged in an Einstein-like thought experiment that resulted in an epiphany about how brains come to perform the cognitive abilities that computers are incapable of.
His "aha moment" was realizing that brains are quick to attend to new information. Could this also mean that the brain utilizes its memory as a pattern-recognition and prediction machine?
Indeed, for every activity, the brain must be simultaneously making parallel sensory predictions about what it anticipates seeing, hearing, or feeling even before it experiences them. Under such conditions, accurate matches with unfolding reality (that is, accurate predictions) would undoubtedly be registered by the brain as "understanding."
Therefore, the ability to make anticipatory predictions across parallel sensory modalities must lie at the core of the brain's processing. Our brains must automatically store familiar prediction patterns and then "attend" to the unusual.
In essence, the brain must continuously generate low-level predictions in parallel across all our senses, and most of these predictions occur routinely in the background, outside awareness. They must be so pervasive that what we perceive, and how we perceive the world, cannot originate from our senses alone but must be a combination of what we sense and the predictions derived from our memory.
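As a rough illustration of this idea (a toy sketch of my own, not Hawkins's actual model), the following Python fragment combines each incoming sensory value with a prediction recalled from memory and flags for attention only what the memory failed to predict; the modality name and the input sequence are invented examples.

from collections import defaultdict

class ToyPredictor:
    def __init__(self):
        # memory of which input has tended to follow which, kept per modality
        self.memory = defaultdict(dict)
        self.last = {}

    def perceive(self, modality, actual):
        # recall what memory predicts should follow the previous input
        predicted = self.memory[modality].get(self.last.get(modality))
        surprise = predicted is not None and predicted != actual
        # learn (or reinforce) the observed transition for next time
        if modality in self.last:
            self.memory[modality][self.last[modality]] = actual
        self.last[modality] = actual
        # perception is the sensed value plus its remembered expectation
        return {"sensed": actual, "expected": predicted, "attend": surprise}

brain = ToyPredictor()
for sound in ["tick", "tock", "tick", "tock", "tick", "buzz"]:
    result = brain.perceive("hearing", sound)
    if result["attend"]:
        print("unexpected:", result)  # only the final "buzz" demands attention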
With this thought experiment, Jeff Hawkins believed he had cracked the code of human intelligence.
He had just postulated that pattern recognition and predictions were the primary functions of the neocortex and, consequently, the foundation of intelligence.
If we want to comprehend what intelligence is, what creativity is, what consciousness is, what awareness is, how our brain functions, and how to construct intelligent machines, we must grasp the nature of these predictions and how the neocortex generates them. Even intelligent behavior itself is best understood as a byproduct of predictions.
The author dubbed his discovery the memory-prediction framework. It would provide the detailed mechanisms of how the brain actually implements its intelligence, awareness, consciousness, and understanding.
In Walks Vernon B. Mountcastle
Some time after this epiphany, Hawkins came across a paper titled "An Organizing Principle for Cerebral Function," authored by Vernon B. Mountcastle, the esteemed father of neuroscience at Johns Hopkins University.
This paper should have been to neuroscience what Einstein's 1905 paper on Special Relativity was to physics. But it wasn't.
Despite being a renowned professor, Mountcastle faced challenges in convincing his colleagues of the most astonishing theoretical discovery in brain science: the remarkable uniformity of the sensory and motor regions of the neocortex. These regions, which handled inputs and outputs, looked strikingly similar.
For instance, the regions that processed auditory input resembled the regions that processed touch, which in turn resembled the regions that controlled muscles, which in turn resembled Broca's language area, and so on. The neocortex, encompassing all these disparate sensory regions, appeared as smooth as the calm waters of a Minnesota lake.
The question arose: why was this so? Mountcastle proposed an intriguing answer: since these regions all looked the same, perhaps they all performed the same fundamental operation.
Based on this observation, he suggested that the cortex might employ the same computation across all sensory modalities to accomplish its diverse functions.
While the validity of Mountcastle's paper is still being determined by time and peer-reviewed research, a closer examination at the electron microscope level revealed a remarkable truth: Mountcastle was probably correct.
The entire structure of the neocortex (approximately the size of a serving napkin when spread out) contained all the sensing modalities, each packed into the same six-layered stacks of neurons. Each layer was about the thickness of a playing card, and all these layers operated in a synchronized manner.
While there were indeed variations among these modal areas, these differences did not translate into variations in the way the respective signals were transmitted to the brain.
And here is where Mountcastle's brilliance shines, making his theoretical weight palpable. Regardless of the sensory modality, the signals sent up to the neocortex employed the same signaling mechanism: each sensory input was converted into trains of identical axonal spikes before being passed on to the neocortex. Consequently, Mountcastle's paper was accurate. The uniform surface of the neocortex matched the fact that all the signals it received, irrespective of the sensing modality doing the sending, arrived in the same form and were processed by the same algorithm.
This realization underscores the fact that the visual area is visual and the motor area is motor because of how those cortical regions are connected to the rest of the central nervous system. Despite their structural differences, the mechanisms for transmitting signals to the brain remain consistent.
Upon reading Mountcastle's paper, Hawkins concluded that his thought experiment must be correct. There exists a common function and algorithm performed by all cortical regions and modalities across the neocortex.
If we acknowledge that our genes specify the connections between the regions of the cortex, which are highly specific to function and species, then according to Mountcastle, vision is no different from hearing, which is no different from motor output. The cortical tissue itself must be performing the same function everywhere.
Furthermore, if we assume that the neocortex recognizes patterns and then makes predictions based on them, its microscopic anatomy makes complete biological sense.
Laboratory experiments with ferrets provided partial confirmation of this hypothesis. When visual input was rewired in young ferrets so that it fed into what would normally become auditory cortex, that cortex learned to process vision, and the animals could still see. The cortical tissue adapted to whatever signals it received.
It is also well recognized that congenitally blind individuals can use their hearing or touch as substitutes for their missing sight.
Hawkins concluded that Mountcastle must be correct. The algorithm the cortex employed must be independent of any specific function or sensory modality.
The brain employs the same process to perceive as to hear as to feel as to move. The neocortex performs a universal action that can be applied to any sensory or motor system.
This unified them under a single algorithm, thereby exposing the fallacy of all previous attempts to comprehend and engineer human behavior based solely on sensory distinctions.
To Hawkins, this was the Rosetta Stone of neuroscience-- a single paper with a single idea that united all the diverse and remarkable capabilities of the human mind.
Mountcastle vs. Turing: A Clear Victory
When AI specialists attempt to create a computer capable of sight, they develop vision-specific algorithms and vocabulary. If they want to comprehend language, they construct separate algorithms based on rules of grammar, syntax, and semantics, among other factors. However, if Mountcastle's theory is accurate, these approaches are not how the brain solves these problems, and such efforts are therefore destined to be suboptimal at best.
The brain recognizes patterns, makes predictions, and then stores them hierarchically for future pattern matches. The higher up the hierarchy, the more general the match.
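A toy two-level lookup (my own sketch, built on an invented feature vocabulary, not anything from the book) can illustrate the point: the lower level matches a specific pattern and passes its name upward, and the higher level returns a more general category.

# Lower level: specific feature patterns map to specific names.
low_level = {
    ("fur", "whiskers", "meow"): "cat",
    ("fur", "tail", "bark"): "dog",
    ("feathers", "beak", "tweet"): "bird",
}

# Higher level: specific names map to a more general category.
high_level = {"cat": "pet", "dog": "pet", "bird": "pet"}

def recognize(features):
    specific = low_level.get(tuple(features))  # specific match at the bottom
    general = high_level.get(specific)         # more general match at the top
    return specific, general

print(recognize(["fur", "whiskers", "meow"]))  # ('cat', 'pet')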
In all cases, successful AI programs have been limited to the specific task for which they were designed. They lack the ability to generalize, which prevents them from thinking like humans.
What we learned from Mountcastle's paper extends far beyond its carefully formulated hypothesis that inputs entering the neocortex are essentially similar.
Sound is transmitted as compression waves through air, vision is carried as light, and touch is perceived as pressure on the skin, and the sense organs that supply these signals are all distinct. However, once these signals are converted into brain-bound "action potentials," they become identical: merely electrical patterns.
All human brains are pattern recognition machines. Our perceptions and knowledge of the world are constructed from these patterns. In fact, the brain is the only part of your body that lacks inherent senses.
A neurosurgeon could insert a finger into your brain, and you wouldn't feel a thing. All the information that enters your mind arrives as spatial-temporal patterns on the axons, including knowledge about the brain itself!
This is the very reason for the conundrum of how the brain knows of its own existence-- otherwise known as the mind-body problem.
The problem evaporates once we recognize that the way the brain knows of itself is the same way that it knows of the outside world: through the feedback from its own neocortical sensing apparatus.
The isolation of the brain from its own operations is the cause of the philosophical knot that gives rise to the idea that mind and brain are separate. They are not. It's all body. Mind is just the feeling we get when the neocortex is operating. This feeling is simply the neocortex carrying out its normal functions.
Your neocortex creates a model of the world in its hierarchical memory. Thoughts are what occurs when this model runs on its own; memory recall leads to predictions, which act like new sensory inputs, which lead to new memory recall, in a recursive loop. Our most contemplative thoughts are not driven by or even connected to the real world; they are purely a creation of our model.
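To picture that recursive loop (again, a toy sketch of my own rather than the book's mechanism), the fragment below treats each recalled prediction as if it were a new sensory input, producing a chain of associations with no external stimulus at all; the association table is an invented example.

# "Thinking" as the model running on its own: each recalled prediction is fed
# back in as the next input, so the chain continues without outside stimulus.
associations = {
    "rain": "umbrella",
    "umbrella": "closet",
    "closet": "winter coat",
    "winter coat": "snow",
}

def train_of_thought(seed, steps=4):
    thought = seed
    chain = [thought]
    for _ in range(steps):
        thought = associations.get(thought)  # memory recall predicts what comes next
        if thought is None:
            break
        chain.append(thought)                # the prediction acts as new "input"
    return chain

print(train_of_thought("rain"))  # ['rain', 'umbrella', 'closet', 'winter coat', 'snow']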
Altogether it is just another manifestation of Mountcastle's singular, potent algorithm, implemented by every region of the neocortex. If you connect regions of the neocortex in a suitable hierarchy and provide a stream of input using Mountcastle's algorithm, it will recognize patterns, make predictions, and learn about its environment, including about itself.
Therefore, there is no reason for intelligent machines of the future to possess the same senses or capabilities as humans. Mountcastle's algorithm can be deployed in novel ways, with novel senses, within machined neocortical sheets, enabling genuine, adaptable intelligence to emerge beyond biological brains.
Summary
On his way to reverse-engineering the neocortex for purposes of simulating human intelligence in a machine, this engineer turned self-taught neuroscientist has mixed the right questions with settled neuroscience and a couple of his own epiphanies, and arguably hit the mind-body trifecta. First, he shows how the neocortex models the world by using pattern recognition to make predictions. He then explains how the neocortex uses one algorithm for all modalities of thought. And finally, almost as an afterthought, he explains how the brain only appears to be conscious: consciousness, like thought and awareness, is simply the feeling we get when the neocortex is operating.
Will we ever be able to build computers that think like humans? Yes, but with some difficulties.
Will intelligent computers eventually take over the world? No, because building intelligent machines is not the same thing as building self-replicating machines. Neither brains nor computers can directly self-replicate. And, in any case, self-replication does not require intelligence, and intelligence does not require self-replication.
A read you will never forget. Ten stars!