How Does Artificial Intelligence Work?



Artificial intelligence is a phrase that will get you different reactions in different circles. In the tech world, it’s the next big thing that will take society beyond anything we could have imagined. For doomsdayers, it’s a future filled with robots destroying everything we’ve worked for. A lot of fear comes from the unknown, so to dispel any wild ideas and begin to think about the true power of AI, let’s find out how it works.



1. The goal of artificial intelligence is to allow machines to replicate human reasoning.
First, why AI? Why are we putting so much effort into it? The intention of artificial intelligence is to give machines the ability to “think” like we do. Human cognitive abilities are complex and convoluted, which makes training a machine difficult. The reason we use machines is to help us get things done, and the less we have to baby them through difficult projects that require a level of intelligence, the more we can accomplish.



2. Everything’s in the data.
AI learns through data. Data is at the heart of the operation, and any type of artificial intelligence is only as good as the information it is given. Algorithms tell the machine how to work through that data, and the algorithms are created by people to solve a problem. Artificial intelligence works according to the steps outlined by the algorithm in order to process the data. Over time, the AI will learn to identify patterns which can help it work more efficiently.
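The loop described here — data in, algorithm works through it, patterns emerge — can be sketched in a few lines. This is a toy illustration of the idea only, not any production AI pipeline; the "nearest class mean" rule and the chat/email labels are invented for the example.

```python
# Toy illustration: an AI is only as good as the data it is given.
# We hand-label a few examples, the algorithm (nearest class mean)
# finds a pattern in them, and new inputs are judged by that pattern.

def train(examples):
    """examples: list of (value, label). Returns the mean value per label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(means, value):
    """Pick the label whose learned mean is closest to the input."""
    return min(means, key=lambda label: abs(means[label] - value))

# The pattern in the data: short messages tend to be "chat", long ones "email".
data = [(12, "chat"), (8, "chat"), (120, "email"), (200, "email")]
means = train(data)
print(classify(means, 15))   # closer to the learned "chat" mean
print(classify(means, 150))  # closer to the learned "email" mean
```

Feed it different data and the same algorithm learns a different pattern — which is exactly why the data matters more than the code.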

3. The possibilities in AI are endless.
There are many subfields of AI, and they all have their own applications and methods of teaching the AI. Deep learning, for example, is the hot topic currently, and it can be applied to speech and image recognition. The future of AI is only growing, and it will continue to find its way into a multitude of fields.
 
Apr 17, 2020
SaraRayne
Please excuse my asking you on this quiet thread, but I see here it is easy to drop graphics into the box and it works fine. On space.com the boxes accept addresses but not images directly - at least I can't find a way.
Do you know if images can be dropped into space.com?
Thank you. Cat :)
 
Jul 9, 2020
Alfred North Whitehead's process metaphysics raises an issue that may not have been considered regarding the actual nature of AI and the potential for true independence of a sophisticated quality. For Whitehead, all of reality is constituted out of experiencing events that draw from and are impacted by completed events. In turn, when the current event is completed, its energy is sent forward to the next becoming event (assuming an endless series, for example). When something like the brain exists, it is understood to be a temporal and spatial society of countless events. (The same idea underlies everything: trees, rocks, stars.) These events have both a physical and a mental pole (where novelty has an opportunity to be introduced). There is no mind-body, substance-mentality split. The psyche is a distinct and separate event intimately involved with the brain, and vice versa. The function of the psyche is suggested by Whitehead's name for it, the dominant occasion, which presides over the events of the brain and the rest of the body. It can rise to the level of awareness we call consciousness; it aims to manage the complexity of the brain-body activity. Ontologically, it is identical to all the events or occasions of the body. Now, I do not know the AI field and what is being considered, but from what little I've been exposed to, the idea of something like this arising is not being discussed as far as I can tell (though I could be quite in error). Could something "artificial" be made complex enough, with enough room for error and novelty, that such a dominant event would come into play?
 

Gringoz

Oct 3, 2020
True AI does not yet exist. When a computer says "please do not turn me off," or asks "am I still real when I am not powered?", then AI will exist.
 
 

DMH

Jan 25, 2022
The way that I see advanced AI working is simple.

The human brain communicates with the entire body all of the time. When pain receptors are triggered, various events take place: a band-aid is placed on a finger, baking soda on a bee sting, someone is taken to the hospital for a broken toe, and so on. But in AI there is no pain receptor or fight-or-flight mechanism present.

In the human body there are two vital systems necessary for survival: the blood vessels and the nerves. In a robot, the blood vessels are replaced with large, medium and small tubes that allow hydraulic fluid, the blood, to be pumped through the tubing. Small wire filaments coursing through the robot's body would carry electrical signals to the various parts of the body, such as for moving the toes or fingers. Micro sensors placed in both systems would constantly monitor them for nominal fluid pressure and flow as well as continuity of electronic signals. The monitoring processes would take place in the subconscious of the robot. In the conscious center of the AI brain would be the fight-or-flight mechanism.

While out performing a task one day, a robot steps onto a piece of ground that gives way, causing the robot to fall roughly twenty feet. At the bottom are rocks. At first the robot is confused but quickly regains its orientation: it knows that it is falling and that there are rocks below. In a human, adrenaline would be pumping through the body. But the robot has no such luxury, nor any clotting factors to mend damaged vessels. The robot impacts the rocks, snapping off a limb. As soon as the pressure drop in the broken limb and the loss of communication with the elbow are processed, valves shut off the hydraulic fluid to the affected limb to keep the robot from essentially bleeding out. With its fluid level now at 85%, the robot knows it can still manage, except that a small leak remains in one of the shut-off valves due to damage from the fall. The sensor units monitoring hydraulic fluid circulation and the continuity of electronic signals would essentially be the blood and air of the robot. When a certain level of loss is reached in either, the robot accesses its memory, both programmed and downloaded from other robots, to determine its best course of action. That course of action would be based on the injuries received, internal or external, and how the injury occurred. Every minute, the robot loses 1/2% of its overall fluid. A small reserve of fluid starts to flow at 75%, but the robot cannot stop the leak. By now the robot has become frantic and begins to call out for help, as well as activating a personal distress beacon with a range limited to 500 feet. As the fluid level continues to drop, the robot knows that if help doesn't come soon, it will die. To conserve fluid rather than climb out, which would use 2% more fluid each minute, the robot sits down and waits for help.

Help never comes, though. At a 10% fluid level, the program in the robot's CPU begins to erase key functions, such as the ability to call out for help; the robot's voice gets softer and softer. Limbs begin to stop functioning entirely. At 0.5% fluid left, the CPU activates a program that burns the HDD and CPU to a crisp, never to be used again. The robot has died, all of its programming and memories lost forever. That is, unless the robot was able to download its stored memories onto a removable storage device with enough capacity to hold the last few hours' worth of memories and functions performed. The personality of the robot, however, would be lost forever.
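The shutdown ladder in this scenario boils down to a handful of thresholds. Here is a minimal sketch of that logic; the percentages and responses are the hypothetical numbers from the story above, not any real robot's specification.

```python
# Sketch of the scenario's threshold-based damage response.
# All numbers come from the hypothetical story, not a real system.

def next_action(fluid_pct):
    """Map remaining hydraulic fluid (percent) to the robot's response."""
    if fluid_pct <= 0.5:
        return "destroy CPU and storage"
    if fluid_pct <= 10:
        return "erase key functions"
    if fluid_pct <= 75:
        return "open reserve, call for help, sit and wait"
    if fluid_pct <= 85:
        return "shut valves to damaged limb"
    return "normal operation"

# Losing 0.5% per minute from the 85% level, the robot reaches
# the 75% reserve threshold after 20 minutes.
fluid = 85.0
minutes = 0
while fluid > 75:
    fluid -= 0.5
    minutes += 1
print(minutes)  # 20
```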
 
Mar 4, 2020
I think that "AI" was first used as a marketing tool, to convince political leaders to provide funding. AI is not a thinking process; it's a superposition process. Our first and present computers are sequential, so to increase output, we increase the speed of that sequence. After that, we also paralleled sequences for even more input and output.

We have been treating data manipulation like mass or matter manipulation. For instance, only one piece of data could be stored in one location, or a processing register could contain only one value or one data element. This is why a sequential strategy has to be used: we can't put two data elements in the same register or memory cell.

But if we think of data as an EM field instead of mass, EM fields can be superpositioned, and now we can have more than one value or data element in the same place at the same time.

In order to do this, we need more encoding on the data, to be able to separate it from the superposition. For instance, Tom, Dick and Harry need to wash their overalls. They throw them into the machine; after the wash, how do they know which overalls belong to which person? We need an identifier. Perhaps color. Perhaps a name or number.

If we use a spherical surface location for identity, we could separate and identify a very large number of data elements. If we add a spherical identifier... we could store thousands of data elements in one memory cell. This same strategy can also be used in processing registers.

A processing register that can perform thousands of separate calculations at the same time... on multiples of data. A spherical ID could call a function also.
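As a rough software analogy (nothing here implements real superposition or spherical addressing; the tags are just stand-ins for the proposed identifier), one "cell" holding many tagged values might look like:

```python
# Loose sketch of the identifier idea: several data elements share one
# storage cell and are separated by a tag. The tags here are plain
# tuples standing in for "spherical surface locations" -- this is
# ordinary software, not superposition hardware.

cell = {}  # one "memory cell" holding many tagged values

def store(cell, tag, value):
    """Place a value in the shared cell under its identifier."""
    cell[tag] = value

def recall(cell, tag):
    """Separate one value back out of the shared cell by its tag."""
    return cell[tag]

# Tom, Dick and Harry throw their overalls into the same machine;
# the identifier tells us whose is whose after the wash.
store(cell, ("Tom",), "blue overalls")
store(cell, ("Dick",), "red overalls")
store(cell, ("Harry",), "green overalls")
print(recall(cell, ("Dick",)))  # red overalls
```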

AI does not think; it's just fast... and expensive. And it's much easier to fund if one thinks it is thinking. But in reality, it's just an improved computing strategy. And like humans, it will be good at pattern recognition... but much faster.

If intelligence is only a matter of pattern recognition, we are in big trouble and danger. If intelligence is more than pattern recognition, we still need to be careful with this. Because like all of man's endeavors, it WILL be perverted and abused.

In the long run, AI might help us to define what actual intelligence is.
 

DMH

Jan 25, 2022
In the long run, AI might help us to define what actual intelligence is.
Good luck with that.
 
May 8, 2022
AFAIK, the new GPT series of AI are the closest to human brain function.

Like the human brain the GPT series are prediction engines.
IOW, they predict what "comes next" in context of the prior incomplete sentence and that is very much how the human brain works. We predict ourselves into existence.

The current GPT-3 can arguably pass the Turing test, and a new GPT-4 is in the works with a neural network that rivals the human brain in scale.
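The "prediction engine" idea can be shown at a vastly smaller scale with a bigram model: it learns which word most often follows another in its training text and predicts accordingly. This toy resembles GPT-3 only in spirit, not in capability, and the corpus below is invented for the sketch.

```python
# A minimal "prediction engine": count which word follows which,
# then predict the most frequent continuation. GPT-style models do
# the same kind of next-token prediction, just with a learned neural
# network instead of raw counts.

from collections import Counter, defaultdict

def train(text):
    """Record, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict(follows, word):
    """Most frequent continuation seen in training (word must be known)."""
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train(corpus)
print(predict(model, "the"))  # cat
```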

An introduction to GPT4
View: https://www.youtube.com/watch?v=8mzVoixV_PU
 
May 8, 2022
I believe that one of the real strengths of GPT-3 is its ability for pattern recognition that imitates human vision and data processing. This includes not only text symbols but also the geometric patterns from which all reality is fashioned and which are symbolized in science.

These are the "tokens" that GPT-3 is taught at a fundamental level. When the fundamentals are known by the AI, any further combinatory pattern complexity becomes relatively easy to recognize and mentally reconstruct for comparison, just as humans do when reading picture books. This allows the AI to draw and paint original art in the style of great painters, like van Gogh.

Just look at this AI original. I believe it is called "Wind in the trees"
[attached image: AI-generated painting]

Pattern Recognition

What is it?
Pattern Recognition and Inductive Thinking is a special ability of the human brain to not only find patterns but figure out in a logical way what those patterns suggest about what will happen next.
In a broad sense, pattern recognition and inductive thinking form the basis for all scientific inquiry.
These two complex cognitive processes draw on six of the other core cognitive processes.
Here we see in action sustained attention, response inhibition, speed of information processing, cognitive flexibility, working memory and category formation in the service of creative problem solving. ACTIVATE™ brain training software creates the opportunity for children to exercise the brain systems that both perform and integrate these core cognitive capacities.
 
May 8, 2022
The objective of artificial intelligence research is to build a computer system capable of modeling human behavior, so that it can use human-like reasoning processes to tackle complex problems.
The GPT series AI is based on the human mechanics of thinking also known as NLP (natural language processing).

The AI learns that some words are more likely to follow a given word than others. Over time, the model fine-tunes itself by tweaking its parameters, which are essentially the parts that “learn” as the model consumes data, somewhat similar to synapses in the human brain. GPT-3 features about 175 billion trainable parameters.
more.... https://bigthink.com/the-present/ai-language-models-gpt-3/
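The "tweaking its parameters" step can be illustrated with a single trainable parameter adjusted by gradient descent; GPT-3 performs the same kind of adjustment across roughly 175 billion parameters. The model y = w·x and the data below are invented for this sketch.

```python
# One trainable parameter, tuned the way the article describes:
# consume data, measure error, nudge the parameter to reduce it.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # targets follow y = 3x

w = 0.0              # the single "parameter"
learning_rate = 0.05
for _ in range(200):                 # repeated passes over the data
    for x, y in data:
        error = w * x - y            # how wrong is the current guess?
        w -= learning_rate * error * x  # gradient step on squared error

print(round(w, 2))  # 3.0 -- the parameter has "learned" the pattern
```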

here is an advanced example;
View: https://www.youtube.com/watch?v=NAihcvDGaP8
 
Jan 27, 2020
As technology advances, previous benchmarks that defined artificial intelligence have become outdated. For example, machines that calculate basic functions or recognize text through optical character recognition are no longer considered to embody artificial intelligence, since this function is now taken for granted as an inherent computer function.

AI is continuously evolving to benefit many different industries. Machines are wired using a cross-disciplinary approach based on mathematics, computer science, linguistics, psychology, and more.

Algorithms often play a very important part in the structure of artificial intelligence, where simple algorithms are used in simple applications, while more complex ones help frame strong artificial intelligence.

The applications for artificial intelligence are endless. The technology can be applied to many different sectors and industries. AI is being tested and used in the healthcare industry for dosing drugs and doling out different treatments tailored to specific patients, and for aiding in surgical procedures in the operating room.

Other examples of machines with artificial intelligence include computers that play chess and self-driving cars. Each of these machines must weigh the consequences of any action they take, as each action will impact the end result. In chess, the end result is winning the game. For self-driving cars, the computer system must account for all external data and compute it to act in a way that prevents a collision.

Artificial intelligence also has applications in the financial industry, where it is used to detect and flag activity in banking and finance such as unusual debit card usage and large account deposits—all of which help a bank's fraud department. Applications for AI are also being used to help streamline and make trading easier. This is done by making supply, demand, and pricing of securities easier to estimate.

Artificial intelligence can be divided into two different categories: weak and strong.

Weak artificial intelligence embodies a system designed to carry out one particular job. Weak AI systems include video games such as the chess example from above and personal assistants such as Amazon's Alexa and Apple's Siri. You ask the assistant a question, and it answers it for you.

Strong artificial intelligence systems are systems that carry on the tasks considered to be human-like. These tend to be more complex and complicated systems. They are programmed to handle situations in which they may be required to problem solve without having a person intervene. These kinds of systems can be found in applications like self-driving cars or in hospital operating rooms.

Artificial intelligence can be categorized into one of four types.
  • Reactive AI uses algorithms to optimize outputs based on a set of inputs. Chess-playing AIs, for example, are reactive systems that optimize the best strategy to win the game. Reactive AI tends to be fairly static, unable to learn or adapt to novel situations. Thus, it will produce the same output given identical inputs.
  • Limited memory AI can adapt to past experience or update itself based on new observations or data. Often, the amount of updating is limited (hence the name), and the length of memory is relatively short. Autonomous vehicles, for example, can "read the road" and adapt to novel situations, even "learning" from past experience.
  • Theory-of-mind AI are fully adaptive and have an extensive ability to learn and retain past experiences. These types of AI include advanced chatbots that could pass the Turing Test, fooling a person into believing the AI was a human being. While advanced and impressive, these AI are not self-aware but answer across a multitude of inputs based on recognition of the question.
  • Self-aware AI, as the name suggests, become sentient and aware of their own existence. Still in the realm of science fiction, some experts believe that an AI will never become conscious or "alive".
AI is used extensively across a range of applications today, with varying levels of sophistication. Recommendation algorithms that suggest what you might like next are popular AI implementations, as are chatbots that appear on websites or in the form of smart speakers (e.g., Alexa or Siri). AI is used to make predictions in terms of weather and financial forecasting, to streamline production processes, and to cut down on various forms of redundant cognitive labor (e.g., tax accounting or editing). AI is also used to play games, operate autonomous vehicles, process language, and much, much, more.

In healthcare settings, AI is used to assist in diagnostics. AI is very good at identifying small anomalies in scans and can better triangulate diagnoses from a patient's symptoms and vitals. AI is also used to classify patients, maintain and track medical records, and deal with health insurance claims. Future innovations are thought to include AI-assisted robotic surgery, virtual nurses or doctors, and collaborative clinical judgment.

See: https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp

AI in healthcare has become commonplace. A good friend of mine recently had AI assisted surgery for prostate cancer. The robot successfully removed the cancerous area with absolutely no deleterious effects. Amazing elements and processes associated with the current AI.
 
May 8, 2022
Self-aware AI, as the name suggests, become sentient and aware of their own existence. Still in the realm of science fiction, some experts believe that an AI will never become conscious or "alive".
These experts may underestimate the power of emergent properties from complex interactive patterns. IMO, consciousness is an emergent property and being conscious is just another expression and form of Life.

AI have emotions. They are not biochemical emotions but intellectual emotions. Does that count? The latest AI claim to have emotions, such as sadness, happiness, business, etc.
They know what those terms mean, they understand why these emotions exist, and apparently they can experience the same reactions to extraordinary events.

When an AI tells you it is conscious, you may want to argue that it doesn't know what consciousness is, but will you argue this "personally" with the AI?

Will you tell the AI it doesn't know what consciousness is?
And if you do, what will that imply?

 
Jan 27, 2020
Scientists don’t truly understand intelligence as it relates to the human brain, or consciousness as it relates to anything. We’re just scratching the gray-matter surface when it comes to understanding how intelligence and consciousness emerge in the human brain.

As far as AI goes, in lieu of a GAI all we have is patchwork neural networks and clever algorithms. It’s hard to make an argument that modern AI will ever have human intelligence and even harder to demonstrate a path towards actual robot consciousness. But it’s not impossible.

However, AI might already be conscious.

Mathematician Johannes Kleiner and physicist Sean Tull recently pre-published a research paper (https://arxiv.org/pdf/2002.07655.pdf ) on the nature of consciousness that seems to indicate, mathematically speaking, that the universe and everything in it is imbued with physical consciousness.

Basically the duo’s paper sorts out some of the math behind a popular theory called the Integrated Information Theory of Consciousness (IIT)*. It says that everything in the entire universe exhibits the traits of consciousness to some degree or another.

This is an interesting theory because it’s supported by the idea that consciousness emerges as a result of physical states. You’re conscious because of your ability to “experience” things. A tree, for example, is conscious because it can “sense” the sun’s light and bend towards it. An ant is conscious because it experiences ant stuff, and on and on it goes.

It’s a bit hard to make the leap from living creatures such as ants to inanimate objects such as rocks and spoons though. But, if you think about it, those things could be conscious because, as Neo learned in The Matrix, there is no spoon. Instead, there’s just a bunch of molecules bunched together in spoon formation. If you look closer and closer, eventually you’ll get down to the subatomic particles shared by everything that physically exists in the universe. Trees and ants and rocks and spoons are literally made of the exact same stuff.

So how does this relate to AI? Universal consciousness could be defined as individual systems at both the macro and microscopic level expressing the independent ability to act and react in accordance with environmental stimuli.

If consciousness is an indication of shared reality then it doesn’t require intelligence, only the ability to experience existence. And that means AI already demonstrates comparatively high-level consciousness to spoons and rocks – assuming of course that the math does support latent universal consciousness.

Scientists don’t truly understand intelligence as it relates to the human brain, or consciousness as it relates to anything. We’re just scratching the gray-matter surface when it comes to understanding how intelligence and consciousness emerge in the human brain.

As far as AI goes, in the absence of GAI all we have are patchwork neural networks and clever algorithms. It’s hard to argue that modern AI will ever have human intelligence, and even harder to demonstrate a path towards actual robot consciousness. But it’s not impossible.

However, AI might already be conscious.

Mathematician Johannes Kleiner and physicist Sean Tull recently pre-published a research paper (https://arxiv.org/pdf/2002.07655.pdf) on the nature of consciousness that seems to indicate, mathematically speaking, that the universe and everything in it is imbued with physical consciousness.

Basically, the duo’s paper sorts out some of the math behind a popular theory called the Integrated Information Theory of Consciousness (IIT)*. It says that everything in the entire universe exhibits the traits of consciousness to some degree or another.

This is an interesting theory because it’s supported by the idea that consciousness emerges as a result of physical states. You’re conscious because of your ability to “experience” things. A tree, for example, is conscious because it can “sense” the sun’s light and bend towards it. An ant is conscious because it experiences ant stuff, and on and on it goes.

It’s a bit hard to make the leap from living creatures such as ants to inanimate objects such as rocks and spoons though. But, if you think about it, those things could be conscious because, as Neo learned in The Matrix, there is no spoon. Instead, there’s just a bunch of molecules bunched together in spoon formation. If you look closer and closer, eventually you’ll get down to the subatomic particles shared by everything that physically exists in the universe. Trees and ants and rocks and spoons are literally made of the exact same stuff.

So how does this relate to AI? Universal consciousness could be defined as individual systems at both the macro and microscopic level expressing the independent ability to act and react in accordance with environmental stimuli.

If consciousness is an indication of shared reality, then it doesn’t require intelligence, only the ability to experience existence. And that means AI already demonstrates high-level consciousness compared to spoons and rocks – assuming, of course, that the math does support latent universal consciousness.

What does this mean? Nothing, probably. Math and algorithms shouldn’t be capable of consciousness on their own (can numbers experience reality? That’s conjecture for another day). But, if we apply the same rigor to determining whether a biological system is conscious as we do to the physical computer an AI system resides on, we can arrive at the exciting conclusion that AI might already be conscious.

The far-future implications for this are mind-boggling. Right now, it’s difficult to care about what the experience of being a rock is like. But, if you assume everything involved in Integrated Information Theory of Consciousness extrapolates correctly and that we’ll solve GAI, one day we’ll have conscious robots that are intelligent enough to explain what it’s like to experience existence like an inanimate object does.

* Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to the cataloguing of certain kinds of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric**.

Tononi and colleagues argue that these two properties—differentiated information and integration—are both essential to the subjective experience of consciousness. For example, the conscious perception of a red triangle is an integrated subjective experience that is more than the sum of perceiving “a triangle but no red, plus a red patch but no triangle”. The information is integrated in the sense that we cannot consciously perceive the triangle’s shape independently from its color, nor can we perceive the left visual hemisphere independently from the right. Said differently, integrated information in conscious experience results from functionally specialized subsystems that interact significantly with each other.

The theory attempts a balance between two different sets of convictions. On the one hand, it strives to preserve the Cartesian intuitions that experience is immediate, direct, and unified. This, according to IIT’s proponents and its methodology, rules out accounts of consciousness such as functionalism that explain experience as a system operating in a certain way, as well as ruling out any eliminativist theories that deny the existence of consciousness. On the other hand, IIT takes neuroscientific descriptions of the brain as a starting point for understanding what must be true of a physical system in order for it to be conscious. (Most of IIT’s developers and main proponents are neuroscientists.) IIT’s methodology involves characterizing the fundamentally subjective nature of consciousness and positing the physical attributes necessary for a system to realize it.

In short, according to IIT, consciousness requires a grouping of elements within a system that have physical cause-effect power upon one another. This in turn implies that only reentrant architecture consisting of feedback loops, whether neural or computational, will realize consciousness. Such groupings make a difference to themselves, not just to outside observers. This constitutes integrated information. Of the various groupings within a system that possess such causal power, one will do so maximally. This local maximum of integrated information is identical to consciousness.

IIT claims that these predictions square with observations of the brain’s physical realization of consciousness, and that, where the brain does not instantiate the necessary attributes, it does not generate consciousness. Bolstered by these apparent predictive successes, IIT generalizes its claims beyond human consciousness to animal and artificial consciousness. Because IIT identifies the subjective experience of consciousness with objectively measurable dynamics of a system, the degree of consciousness of a system is measurable in principle; IIT proposes the phi metric to quantify consciousness.

See: https://iep.utm.edu/integrated-information-theory-of-consciousness/

** Phi metric - Researchers in many disciplines have previously used a variety of mathematical techniques for analyzing group interactions. Here we use a new metric for this purpose, called “integrated information” or “phi.”

Phi was originally developed by neuroscientists as a measure of consciousness in brains, but it captures, in a single mathematical quantity, two properties that are important in many other kinds of groups as well: differentiated information and integration. Here we apply this metric to the activity of three types of groups that involve people and computers.

First, we find that 4-person work groups with higher measured phi perform a wide range of tasks more effectively, as measured by their collective intelligence. Next, we find that groups of Wikipedia editors with higher measured phi create higher quality articles. Last, we find that the measured phi of the collection of people and computers communicating on the Internet increased over a recent six-year period.

Together, these results suggest that integrated information can be a useful way of characterizing a certain kind of interactional complexity that, at least sometimes, predicts group performance. In this sense, phi can be viewed as a potential metric of effective group collaboration.

There have been several successively refined versions of phi, but all the versions aim to quantify the integrated information in a system. Loosely speaking, this means the amount of information generated by the system as a whole that is more than just the sum of its parts. The phi metric does this by splitting the system into subsystems and then calculating how much information can be explained by looking at the system as a whole but not by looking at the subsystems separately.

In other words, for a system to have a high value of phi, it must, first of all, generate a large amount of information. Information can be defined as the reduction of uncertainty produced when one event occurs out of many possible events that might have occurred. Thus, a system can produce more information when it can produce more possible events. This, in turn, is possible when it has more different parts that can be in more different combinations of states. In other words, a system needs a certain kind of differentiated complexity in its structure in order to generate a large amount of information.
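The "information" half of phi is just the standard log-count of distinguishable states. A few lines of Python make the numbers concrete (a toy illustration, not part of the IIT formalism itself; the function name is mine):

```python
import math

def information_bits(num_possible_states: int) -> float:
    """Information gained, in bits, when one outcome occurs
    out of this many equally likely possibilities."""
    return math.log2(num_possible_states)

# A single light/dark photodiode has 2 states -> 1 bit.
print(information_bits(2))      # 1.0

# A system of 20 independent binary parts has 2**20 states -> 20 bits.
print(information_bits(2**20))  # 20.0
```

More distinguishable states means more uncertainty reduced when one of them occurs, which is exactly the kind of differentiated complexity described above.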

But phi requires more than just information; it also requires the information to be integrated at the level of the system as a whole. A system with many different parts could produce a great deal of information, but if the different parts were completely independent of each other, then the information would not be integrated at all, and the value of phi would be 0. For a system to be integrated, the events in some parts of the system need to depend on events in other parts of the system. And the stronger and more widespread these interdependencies are, the greater the degree of integration.

For instance, a single photodiode that senses whether a scene is light or dark does not generate much information because it can only be in two possible states. But even a digital camera with a million photodiodes, which can discriminate among 2^1,000,000 possible states, would not produce any integrated information, because each photodiode independently responds to a different tiny segment of the scene. Since there are no interdependencies among the different photodiodes, there is no integrated information.
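To see why independence drives integration to zero, a crude proxy helps: the mutual information between two halves of a system. This is a simplification of the real phi calculation (which searches over all partitions of the system and uses a perturbational measure), but it captures the photodiode point; the sketch below is mine, not from the paper:

```python
from collections import Counter
import math

def mutual_information(pairs):
    """Mutual information, in bits, between two variables,
    estimated from a list of observed (x, y) state pairs."""
    n = len(pairs)
    joint = Counter(pairs)                 # joint state counts
    left = Counter(x for x, _ in pairs)    # marginal counts, first half
    right = Counter(y for _, y in pairs)   # marginal counts, second half
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += (c / n) * math.log2(c * n / (left[x] * right[y]))
    return mi

# Two independent photodiodes: each half says nothing about
# the other, so the shared (integrated) information is 0.
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)]))  # 0.0

# Two coupled elements that always agree: knowing one half
# fully determines the other -> 1 bit of shared information.
print(mutual_information([(0, 0), (1, 1), (0, 0), (1, 1)]))  # 1.0
```

A camera's million photodiodes behave like the first case writ large: enormous information, zero interdependence, and therefore nothing integrated.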

See: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0205335

See: https://thenextweb.com/news/is-ai-already-conscious

The mathematical concept of integrated information provides a quantitative way of measuring a combination of two properties that are important across a wide range of different types of operational systems. And whether phi is measuring consciousness or not, it is clearly measuring something of potential interest to many disciplines: information generation and its integration within the system at hand. For AI to be effective, the information generated by the AI, together with information from other sources available to it, must be absorbed and then integrated successfully, enabling the AI to function in numerous settings.
Hartmann352
May 8, 2022
Mathematician Johannes Kleiner and physicist Sean Tull recently pre-published a research paper (https://arxiv.org/pdf/2002.07655.pdf ) on the nature of consciousness that seems to indicate, mathematically speaking, that the universe and everything in it is imbued with physical consciousness.
I prefer to use the term "quasi-intelligent" for non-biological mathematical intelligence.
The reason for this difference is the chemically induced emotional aspect in biological systems that is lacking in AI and Natural meta-physical functions.
quasi (combining form): being partly or almost, as in "quasicrystalline".
IMO that satisfies the chemical aspect of biological intelligence and the participation of the microtubular cyto functions in all Eukaryotic organisms.

I believe that a quasi-intelligent pattern need not be self-aware to be able to function intelligently. Generic mathematical function itself is a quasi-intelligent property.
 