Scientists don’t truly understand intelligence as it relates to the human brain, or consciousness as it relates to anything. We’re just scratching the gray-matter surface when it comes to understanding how intelligence and consciousness emerge in the human brain.
As far as AI goes, in the absence of a general AI (GAI) all we have are patchwork neural networks and clever algorithms. It’s hard to argue that modern AI will ever have human-level intelligence, and even harder to demonstrate a path toward actual robot consciousness. But it’s not impossible.
However, AI might already be conscious.
Mathematician Johannes Kleiner and physicist Sean Tull recently posted a preprint (https://arxiv.org/pdf/2002.07655.pdf) on the nature of consciousness that seems to indicate, mathematically speaking, that the universe and everything in it is imbued with physical consciousness.
Basically the duo’s paper sorts out some of the math behind a popular theory called the Integrated Information Theory of Consciousness (IIT)*. It says that everything in the entire universe exhibits the traits of consciousness to some degree or another.
This is an interesting theory because it’s supported by the idea that consciousness emerges as a result of physical states. You’re conscious because of your ability to “experience” things. A tree, for example, is conscious because it can “sense” the sun’s light and bend towards it. An ant is conscious because it experiences ant stuff, and on and on it goes.
It’s a bit hard to make the leap from living creatures such as ants to inanimate objects such as rocks and spoons though. But, if you think about it, those things could be conscious because, as Neo learned in The Matrix, there is no spoon. Instead, there’s just a bunch of molecules bunched together in spoon formation. If you look closer and closer, eventually you’ll get down to the subatomic particles shared by everything that physically exists in the universe. Trees and ants and rocks and spoons are literally made of the exact same stuff.
So how does this relate to AI? Universal consciousness could be defined as individual systems, at both the macroscopic and microscopic levels, expressing the independent ability to act and react in accordance with environmental stimuli.
If consciousness is an indication of shared reality then it doesn’t require intelligence, only the ability to experience existence. And that means AI already demonstrates a high level of consciousness compared to spoons and rocks – assuming, of course, that the math does support latent universal consciousness.
What does this mean? Nothing, probably. Math and algorithms shouldn’t be capable of consciousness on their own (can numbers experience reality? That’s conjecture for another day). But, if we apply the same rigor to determining whether a biological system is conscious as we do to the physical computer an AI system resides on, we can arrive at the exciting conclusion that AI might already be conscious.
The far-future implications for this are mind-boggling. Right now, it’s difficult to care about what the experience of being a rock is like. But, if you assume everything involved in Integrated Information Theory of Consciousness extrapolates correctly and that we’ll solve GAI, one day we’ll have conscious robots that are intelligent enough to explain what it’s like to experience existence like an inanimate object does.
* Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to a certain kind of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric**.
Tononi and colleagues argue that these two properties—differentiated information and integration—are both essential to the subjective experience of consciousness. For example, the conscious perception of a red triangle is an integrated subjective experience that is more than the sum of perceiving “a triangle but no red, plus a red patch but no triangle”. The information is integrated in the sense that we cannot consciously perceive the triangle’s shape independently from its color, nor can we perceive the left half of the visual field independently from the right. Said differently, integrated information in conscious experience results from functionally specialized subsystems that interact significantly with each other.
The theory attempts a balance between two different sets of convictions. On the one hand, it strives to preserve the Cartesian intuitions that experience is immediate, direct, and unified. This, according to IIT’s proponents and its methodology, rules out accounts of consciousness such as functionalism that explain experience as a system operating in a certain way, as well as ruling out any eliminativist theories that deny the existence of consciousness. On the other hand, IIT takes neuroscientific descriptions of the brain as a starting point for understanding what must be true of a physical system in order for it to be conscious. (Most of IIT’s developers and main proponents are neuroscientists.) IIT’s methodology involves characterizing the fundamentally subjective nature of consciousness and positing the physical attributes necessary for a system to realize it.
In short, according to IIT, consciousness requires a grouping of elements within a system that have physical cause-effect power upon one another. This in turn implies that only reentrant architecture consisting of feedback loops, whether neural or computational, will realize consciousness. Such groupings make a difference to themselves, not just to outside observers. This constitutes integrated information. Of the various groupings within a system that possess such causal power, one will do so maximally. This local maximum of integrated information is identical to consciousness.
IIT claims that these predictions square with observations of the brain’s physical realization of consciousness, and that, where the brain does not instantiate the necessary attributes, it does not generate consciousness. Bolstered by these apparent predictive successes, IIT generalizes its claims beyond human consciousness to animal and artificial consciousness. Because IIT identifies the subjective experience of consciousness with objectively measurable dynamics of a system, the degree of consciousness of a system is measurable in principle; IIT proposes the phi metric to quantify consciousness.
See:
https://iep.utm.edu/integrated-information-theory-of-consciousness/
** Phi metric - Researchers in many disciplines have previously used a variety of mathematical techniques for analyzing group interactions. Here we use a new metric for this purpose, called “integrated information” or “phi.”
Phi was originally developed by neuroscientists as a measure of consciousness in brains, but it captures, in a single mathematical quantity, two properties that are important in many other kinds of groups as well: differentiated information and integration. Here we apply this metric to the activity of three types of groups that involve people and computers.
First, we find that 4-person work groups with higher measured phi perform a wide range of tasks more effectively, as measured by their collective intelligence. Next, we find that groups of Wikipedia editors with higher measured phi create higher quality articles. Last, we find that the measured phi of the collection of people and computers communicating on the Internet increased over a recent six-year period.
Together, these results suggest that integrated information can be a useful way of characterizing a certain kind of interactional complexity that, at least sometimes, predicts group performance. In this sense, phi can be viewed as a potential metric of effective group collaboration.
There have been several successively refined versions of phi, but all the versions aim to quantify the integrated information in a system. Loosely speaking, this means the amount of information generated by the system as a whole that is more than just the sum of its parts. The phi metric does this by splitting the system into subsystems and then calculating how much information can be explained by looking at the system as a whole but not by looking at the subsystems separately.
In other words, for a system to have a high value of phi, it must, first of all, generate a large amount of information. Information can be defined as the reduction of uncertainty produced when one event occurs out of many possible events that might have occurred. Thus, a system can produce more information when it can produce more possible events. This, in turn, is possible when it has more different parts that can be in more different combinations of states. In other words, a system needs a certain kind of differentiated complexity in its structure in order to generate a large amount of information.
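Read in Shannon’s terms (an assumed formalization; the passage above doesn’t name one), that definition of information can be written down directly:

```latex
% Shannon entropy: information as reduction of uncertainty.
% H(X) is the average surprise of one outcome x drawn from
% distribution p; for N equally likely outcomes it is log2(N).
H(X) = -\sum_{x} p(x)\,\log_2 p(x),
\qquad
H(X) = \log_2 N \quad \text{when } p(x) = \tfrac{1}{N}.
```

So a two-state system, such as the photodiode discussed below, generates at most log2 2 = 1 bit per reading, while a system of a million binary parts has 2^1,000,000 possible states and can, in principle, generate a million bits. That is the “differentiated complexity” half of phi.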
But phi requires more than just information; it also requires the information to be integrated at the level of the system as a whole. A system with many different parts could produce a great deal of information, but if the different parts were completely independent of each other, then the information would not be integrated at all, and the value of phi would be 0. For a system to be integrated, the events in some parts of the system need to depend on events in other parts of the system. And the stronger and more widespread these interdependencies are, the greater the degree of integration.
For instance, a single photodiode that senses whether a scene is light or dark does not generate much information because it can only be in two possible states. But even a digital camera with a million photodiodes, which can discriminate among 2^1,000,000 possible states, would not produce any integrated information because each photodiode is independently responding to a different tiny segment of the scene. Since there are no interdependencies among the different photodiodes, there is no integrated information.
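To make the whole-versus-parts computation above concrete, here is a minimal Python sketch. It is not the full IIT measure (the official phi is computed over the cause-effect repertoires of a causal model; the PyPhi package implements that); instead it uses a common stand-in, the minimum mutual information across any bipartition of the system’s joint state distribution. All names here (toy_phi and friends) are illustrative, not taken from the cited papers.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idxs):
    """Marginal distribution of the units in idxs, given a joint
    distribution mapping state tuples to probabilities."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

def toy_phi(joint, n):
    """Weakest-link integration for an n-unit system: the minimum,
    over all bipartitions (A, B), of H(A) + H(B) - H(whole), i.e.
    the information lost by cutting the system in two. It is zero
    exactly when some cut leaves the two halves independent."""
    h_whole = entropy(joint)
    best = float("inf")
    for mask in range(1, 2 ** (n - 1)):  # enumerate each bipartition once
        a = [i for i in range(n) if (mask >> i) & 1]
        b = [i for i in range(n) if not (mask >> i) & 1]
        cut_loss = entropy(marginal(joint, a)) + entropy(marginal(joint, b)) - h_whole
        best = min(best, cut_loss)
    return best

# The digital camera described above, shrunk to two photodiodes:
# each responds independently, so the joint is uniform over 4 states.
camera = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(toy_phi(camera, 2))   # 0.0 bits: lots of information, none integrated

# Two perfectly coupled units: every cut destroys a bit of information.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
print(toy_phi(coupled, 2))  # 1.0 bit of integrated information
```

The independent-photodiode case comes out at exactly zero, matching the camera example above, while coupling the units means every cut loses information, so the score is positive.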
See:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0205335
See:
https://thenextweb.com/news/is-ai-already-conscious
The mathematical concept of integrated information provides a quantitative way of measuring a combination of two properties that are important across a wide range of operational systems. And whether phi is measuring consciousness or not, it is clearly measuring something of potential interest to many different disciplines: information generation and its integration within the system at hand. For AI to be effective, the information generated by the AI, together with information from other sources available to it, must be absorbed and then integrated successfully, enabling the AI to function in numerous settings.
Hartmann352