How Does Artificial Intelligence Work?

The Gaia hypothesis can be stated as follows: living organisms on Earth interact with their inorganic surroundings to form a complex, self-regulating, synergistic system that helps perpetuate and maintain optimum conditions for life on the planet.

Lovelock proposed that the Gaia principle could be used to detect life by examining the atmospheres of other planets. His reasoning was that such atmospheric signatures of interacting life and environment would offer a relatively cheap and reliable way to test for the possibility of life on planets other than Earth.

The initial Gaia hypothesis states that the Earth has maintained its habitable state through self-regulating feedback loops carried out automatically by living organisms that are tightly coupled to their environments. The observations behind James Lovelock's Gaia hypothesis were:
  • Despite an increase in the energy provided by the Sun, the Earth's global surface temperature has remained roughly constant.
  • Owing to the activities of living organisms, the atmosphere is in an extreme state of thermodynamic disequilibrium, yet its composition is astoundingly stable. Even with components as varied as roughly 21 percent oxygen, 78 percent nitrogen, traces of methane, and about 0.03 percent carbon dioxide, the overall atmospheric composition remains constant rather than drifting.
  • The long-term constancy of ocean salinity can be attributed to the circulation of seawater through the hot basaltic rocks that emerge at ocean spreading ridges as hot-water vents.
  • The Earth system has consistently and continuously recovered from massive perturbations owing to its complex self-regulating processes.
James Lovelock views this entirety of complex processes on the Earth's surface as a single system that maintains suitable conditions for life. The planet's processes, from its formation through its disturbances, eruptions, and recoveries, are all considered part of one self-regulating system.

The Gaia theory is named after Gaia, the Greek goddess who personifies the Earth. It was nevertheless heavily criticized at first for appearing to conflict with the principles of natural selection proposed by Charles Darwin. Another criticism was its teleological character: it seemed to describe the end state of such occurrences rather than their cause. A refined Gaia hypothesis, which aligned the model with the production of sulfur and iodine by sea creatures in approximately the quantities required by land creatures, strengthened the case for the interactions the theory describes and bolstered the hypothesis.

The theory and hypothesis were criticized due to the following reasons.
  • The significant increase in global surface temperatures contradicts the constancy that the theory claims to observe.
  • Ocean salinity is far from a constant equilibrium, as salts carried in by rivers have raised it over time.
  • The self-regulation claim is also challenged by evidence of reduced methane levels and oxygen shocks during the various ice ages, namely the Huronian, Sturtian, and Marinoan (or Varanger) glaciations.
  • Dimethyl sulfide produced by phytoplankton plays an important role in climate regulation, but the process does not happen on its own in the way James Lovelock described.
  • Another claim was that the Gaia theory contradicts natural selection and is far removed from the survival of the fittest, which critics regarded as the greatest departure in Lovelock's theory.
  • Other critics argued that Gaia was really four hypotheses, not just one:
(a) Coevolutionary Gaia states that the environment and the life within it evolve in a coupled way; critics objected that this merely restates what was already scientifically accepted.

(b) Homeostatic Gaia states that life maintains the stability of the natural environment and that this stability in turn enables life to exist; it was dismissed as unscientific because it was untestable.

(c) Geophysical Gaia drew attention to new geophysical cycles; it mainly aroused curiosity and spurred interest in research into terrestrial geophysical dynamics.

(d) Optimizing Gaia, which held that Gaia shapes the planet to make the environment optimal for life as a whole, was likewise dismissed as untestable and therefore unscientific.

In response, James Lovelock offered a refined, "new" Gaia hypothesis as a counter-argument. Together with Andrew Watson he developed a purely mathematical model, the Daisyworld simulation. Daisyworld is an imaginary planet on which only daisies grow, some black and some white. Conditions on Daisyworld are in many respects similar to those on Earth.
  • Water and nutrients are abundant in Daisyworld for the daisies.
  • The daisies' ability to grow and spread across this imaginary planet's surface depends entirely on temperature.
  • The climate system in Daisyworld is simple with no greenhouse gases and clouds.
  • How much of the incident sunlight and radiation is absorbed, and hence the surface temperature, depends on how much of the grey soil is covered by white and black daisies.
  • In this model, planetary temperature regulation emerges from ecological competition, examined through the energy budget: the energy supplied by the sun. With high solar input the temperature rises, and with low input it falls.
  • The albedo, that is, how much light is reflected rather than absorbed, is determined by the colour of the daisies.
  • Light - black daisies warm Daisyworld by absorbing more light, while white daisies cool the planet by reflecting more light.
  • Growth - black daisies grow and reproduce best at temperatures lower than those favoured by white daisies, which thrive when it is warmer.
  • When the temperature rises, Daisyworld's surface fills with more white daisies, which reduce the heat input and consequently cool the planet.
  • When temperatures decline, the reverse occurs: black daisies outnumber white ones, warming the planet by increasing absorption of the sunlight's energy.
  • When the temperature converges to the value at which both reproductive rates are equal, the two daisy populations thrive together.
Through the Daisyworld simulations the Gaia hypothesis showed that the proportion of black to white daisies continuously adjusts so that both can thrive. It also showed that, even under competition and with the limited range of conditions on a planet like Daisyworld, life can persist while temperatures are stabilized. Without that regulation, in other words, changes in the sun's energy output would make the planet's temperature vary greatly, because of the very different albedos involved.

The Gaia hypothesis has had its fair share of criticism because it lacks an explicit formulation, which makes it hard to test and therefore hard to establish scientifically. Even so, it has been modified over the years, and two versions have emerged. The weak Gaia hypothesis, which suggests that planetary processes are substantially influenced by life on the planet, is widely supported. The strong Gaia hypothesis, which states that life creates and controls the Earth's systems, is not widely accepted.

See: https://www.vedantu.com/geography/gaia-hypothesis

The math behind the Daisyworld model

This is a simple account of the mathematical analysis behind the Daisyworld model, as originally published in Andrew J. Watson and James E. Lovelock, "Biological homeostasis of the global environment: the parable of Daisyworld", Tellus (1983), 35B, 284-289, referred to here as "WL". The science behind the model is discussed in WL and elsewhere; see the bibliography.
As indicated in the title of WL, the heart of the model is a point attractor of a dynamical scheme. In this case, the main control parameter is
  • L, the solar luminosity.
A number of constants appear in the model, such as,
  • AG, the albedo of bare ground,
  • AB, the albedo of black daisies,
  • AW, the albedo of white daisies.
These are fixed at 0.5, 0.25, and 0.75, respectively.
The state variables are:
  • alphaG, relative area of bare fertile ground,
  • alphaB, relative area covered by black daisies,
  • alphaW, relative area covered by white daisies,
  • TG, average temperature over the bare ground,
  • TB, average temperature over the black daisies,
  • TW, average temperature over the white daisies.
The sum of the three areas is assumed to be P, a constant, usually taken to be one. The temperatures are assumed to reach equilibrium rapidly on the slow timescale over which the daisy areas change. Their values are given as functions of L and the three albedos in the fourth-order equations (4) and (6), and again in a linear approximation in equation (7). Here a parameter q' is introduced, which captures the mixing of temperatures over different areas due to conduction of heat. In the simulations, q' = 20. The average albedo, A, is given by equation (5) of WL,
A = alphaG·AG + alphaB·AB + alphaW·AW
Thus we have a two-dimensional dynamical system, given in equation (1) of WL, for the rates of change of alphaB and alphaW,
alphaW' = alphaW (x beta(TW) - gamma)
alphaB' = alphaB (x beta(TB) - gamma)
where x = alphaG, gamma is the death rate of all daisies (taken as 0.3 in the simulations), and beta is a quadratic function of the local temperature of each daisy type, equation (3) of WL,
beta(T) = max {0, 1 - 0.003265 (22.5 - T)^2}
Now we look for the critical points. Assuming that both daisy areas are positive (a zero value means the game is over) we find the conditions for a critical point, as given in equations (14) of WL,
T*B = 22.5 + (q'/2)(AW - AB)
T*W = 22.5 - (q'/2)(AW - AB)
which are constants independent of L, a surprising and hopeful result. From these equilibrium conditions, we find beta*, and from (1) we have (from the vanishing of the right hand sides) x beta* = gamma, so we may calculate the sum of the two daisy areas.

But to find them individually, it is necessary to proceed with numerical integration. The results of these simulations occupy the bulk of the WL paper.
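To make the dynamics concrete, here is a minimal Python sketch of the kind of numerical integration WL describe. It uses the constants quoted above (AG = 0.5, AB = 0.25, AW = 0.75, gamma = 0.3, q' = 20) together with an assumed flux constant S = 917 W m^-2 and the Stefan-Boltzmann law for the planetary temperature; it is an illustration of the dynamical system, not a reproduction of WL's figures.

```python
# Minimal Daisyworld sketch (Euler integration).
# Constants AG, AB, AW, gamma, q' follow the values quoted above;
# S and the small "seed" clamp are assumptions made for this illustration.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S     = 917.0        # assumed flux constant, W m^-2
A_G, A_B, A_W = 0.5, 0.25, 0.75   # albedos: bare ground, black daisies, white daisies
GAMMA = 0.3          # daisy death rate
Q     = 20.0         # q', heat-mixing parameter (linearised local temperatures)
T_OPT = 22.5         # optimal growth temperature, deg C

def beta(t_c):
    """Parabolic growth rate (WL eq. 3), zero far from the optimum."""
    g = 1.0 - 0.003265 * (T_OPT - t_c) ** 2
    return max(0.0, g)

def steady_state(L, a_b=0.01, a_w=0.01, dt=0.05, steps=20000):
    """Integrate the two-dimensional system (WL eq. 1) toward equilibrium."""
    for _ in range(steps):
        x = max(0.0, 1.0 - a_b - a_w)                 # bare fertile ground
        A = x * A_G + a_b * A_B + a_w * A_W           # average albedo (WL eq. 5)
        T_e = (S * L * (1.0 - A) / SIGMA) ** 0.25 - 273.15   # planetary temp, deg C
        T_b = Q * (A - A_B) + T_e                     # local temperatures, linearised
        T_w = Q * (A - A_W) + T_e
        da_b = a_b * (x * beta(T_b) - GAMMA)
        da_w = a_w * (x * beta(T_w) - GAMMA)
        a_b = min(1.0, max(0.001, a_b + dt * da_b))   # keep a tiny seed population
        a_w = min(1.0, max(0.001, a_w + dt * da_w))
    return T_e, a_b, a_w

for L in [0.6, 0.8, 1.0, 1.2, 1.4, 1.6]:
    T_e, a_b, a_w = steady_state(L)
    print(f"L={L:.1f}  T_e={T_e:6.1f} C  black={a_b:.2f}  white={a_w:.2f}")
```

Sweeping L upward should show black daisies dominating at low luminosity and white daisies at high luminosity, with the planetary temperature held near the 22.5 degree optimum over a broad range of L, which is the regulation effect the parable is meant to demonstrate.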


See: http://www.vismath.org/research/gaia/WLpaper/daisymath.html

Daisyworld is an imaginary planet, similar to the Flatland model* of a two dimensional land, on which black and white daisies are the only things growing. The model explores the effect of a steadily increasing solar luminosity on the daisy populations and their effect on the resulting planetary temperature. The growth function for the daisies allows them to modulate the planet's temperature for many years, warming it early on as radiation absorbing black daisies grow, and cooling it later as reflective white daisies grow. Eventually, the solar luminosity increases beyond the daisies' capability to modulate the temperature and they die out, leading to a rapid rise in the planetary temperature. Daisyworld was conceived of by Andrew Watson and James Lovelock to illustrate how life might in part have been responsible for regulating Earth's temperature as the Sun's luminosity increased over time.
Hartmann352

* Flatland Model is derived from Flatland: A Romance of Many Dimensions, a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London. Written pseudonymously by "A Square", the book used the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture, but the novella's more enduring contribution is its examination of dimensions.

Several films have been made from the story, including the feature film Flatland (2007). Other efforts have been short or experimental films, including one narrated by Dudley Moore and the short films Flatland: The Movie (2007) and Flatland 2: Sphereland (2012).

See: https://en.wikipedia.org/wiki/Flatland
 
Critical thinking skills, so necessary to make your way in this increasingly technical world, can be boiled down to the following key sequential elements:
  • Identification of premises and conclusions — Break arguments down into logical statements
  • Clarification of arguments — Identify ambiguity in these stated assertions
  • Establishment of facts — Search for contradictions to determine if an argument or theory is complete and reasonable
  • Evaluation of logic — Use inductive or deductive reasoning to decide if conclusions drawn are adequately supported
  • Final evaluation — Weigh the arguments against the evidence presented and its accurate pre-history
Students must master these critical thinking skills, which are akin to use of the scientific method, and we must practice them ourselves to objectively analyze the onslaught of information. Ideas, especially plausible-sounding philosophies, should be challenged and made to pass a credibility litmus test.

A well rounded education, with a suitable cross section in STEM classes and information processing, combined with a centrist history, particularly of the USA as well as the world, is necessary to aid in the filtering of the vast amount of information received every day.

Education is central to understanding politics and government and a democracy cannot survive without informed citizens. Critical thinking is the precondition for nurturing the ethical imagination that enables engaged citizens to learn how to effect change rather than be governed. Thinking is fundamental to a notion of civic literacy that views knowledge as central to the pursuit of life's goals. Such thinking incorporates a set of values that enables a person to deal critically with the use and effects of politics and government particularly here where the government is answerable to the people and not vice versa.
Hartmann352
 
Critical thinking skills, so necessary to make your way in this increasingly technical world, can be boiled down to the following key sequential elements:
  • Identification of premises and conclusions — Break arguments down into logical statements
  • Clarification of arguments — Identify ambiguity in these stated assertions
  • Establishment of facts — Search for contradictions to determine if an argument or theory is complete and reasonable
  • Evaluation of logic — Use inductive or deductive reasoning to decide if conclusions drawn are adequately supported
  • Final evaluation — Weigh the arguments against the evidence presented and its accurate pre-history
What makes you think the new AI are unfamiliar with those terms and with the logical practices needed to apply them?

Don't forget that the GPT has access to the internet and everything that is publicly available, including scientific papers, and has the chops to understand everything!

Ask an AI about this list you just posited and it will give you the scientific definitions and what they mean, in an instant. What it doesn't know it "researches", and it can do so at lightning speed.

Humans rely on memory to "research" a problem. The AI has the entire internet as its memory.
 
write4u:

I think the following description of natural language processing (NLP), the ability of a computer program to understand human language as it is spoken and written (such language is referred to as natural language), may help you. NLP is an increasingly important component of artificial intelligence (AI).

NLP has existed for more than 50 years and has roots in the field of linguistics. It has a variety of real-world applications in a number of fields, including medical research, search engines, business intelligence and in accounting.

NLP enables computers to understand natural language as humans do. Whether the language is spoken or written, natural language processing uses artificial intelligence to take real-world input, process it, and make sense of it in a way a computer can understand. Just as humans have different sensors -- such as ears to hear and eyes to see -- computers have programs to read and microphones to collect audio. And just as humans have a brain to process that input, computers have a program to process their respective inputs. At some point in processing, the input is converted to code that the computer can understand.

There are two main phases to natural language processing: data preprocessing and algorithm development.

Data preprocessing involves preparing and "cleaning" text data so machines can analyze it. Preprocessing puts data into workable form and highlights features in the text that an algorithm can work with. There are several ways this can be done (see the short sketch after this list), including:
  • Tokenization. This is when text is broken down into smaller units to work with.
  • Stop word removal. This is when common words are removed from text so unique words that offer the most information about the text remain.
  • Lemmatization and stemming. This is when words are reduced to their root forms to process.
  • Part-of-speech tagging. This is when words are marked based on the part-of speech they are -- such as nouns, verbs, pronouns, adverbs and adjectives.
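To make those four steps concrete, here is a deliberately tiny, self-contained Python sketch using only the standard library. The stop-word list, suffix rules and part-of-speech lookup are toy placeholders, and lemmatization is approximated by crude suffix stripping; a real pipeline would rely on a library such as NLTK or spaCy rather than anything this simple.

```python
import re

# Toy resources -- placeholders, not real linguistic data.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "is", "are", "in", "it"}
SUFFIXES   = ["ing", "ed", "es", "s"]            # crude stemming rules
POS_HINTS  = {"computers": "NOUN", "understand": "VERB",
              "language": "NOUN", "process": "VERB", "quickly": "ADV"}

def tokenize(text):
    """Tokenization: split text into lowercase word units."""
    return re.findall(r"[a-z']+", text.lower())

def remove_stop_words(tokens):
    """Stop word removal: drop very common, low-information words."""
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token):
    """Stemming: strip a known suffix to approximate the root form."""
    for suf in SUFFIXES:
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[: -len(suf)]
    return token

def pos_tag(tokens):
    """Part-of-speech tagging: look up a (toy) tag, defaulting to NOUN."""
    return [(t, POS_HINTS.get(t, "NOUN")) for t in tokens]

text = "Computers are learning to understand language and process it quickly"
tokens = tokenize(text)
content = remove_stop_words(tokens)
print("tokens: ", tokens)
print("stemmed:", [stem(t) for t in content])
print("tagged: ", pos_tag(content))
```

A real pipeline would replace each of these toy functions with a proper library component, but the order of operations stays the same.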
Once the data has been preprocessed, an algorithm is developed to process it. There are many different natural language processing algorithms, but two main types are commonly used (a toy comparison of the two follows this list):
  • Rules-based system. This system uses carefully designed linguistic rules. This approach was used early on in the development of natural language processing, and is still used.
  • Machine learning-based system. Machine learning algorithms use statistical methods. They learn to perform tasks based on training data they are fed, and adjust their methods as more data is processed. Using a combination of machine learning, deep learning and neural networks, natural language processing algorithms hone their own rules through repeated processing and learning.
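As a minimal illustration of those two approaches, the sketch below contrasts a hand-written rule with a tiny statistical classifier, assuming scikit-learn is installed. The example sentences, labels and the "cancellation" task are invented purely for illustration.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentence = "Please cancel my subscription before the next billing date."

# Rules-based: a hand-written pattern fires only on wordings the author anticipated.
is_cancellation_rule = bool(re.search(r"\bcancel(l?ed|ling)?\b", sentence, re.I))

# Machine-learning-based: a statistical model generalises from labelled examples.
train_texts  = ["cancel my account", "stop the service please",
                "upgrade my plan", "add another user to the account"]
train_labels = ["cancellation", "cancellation", "change", "change"]
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print("rule says cancellation:", is_cancellation_rule)
print("model predicts:", model.predict([sentence])[0])
```

The rule is transparent but brittle; the statistical model adapts as more labelled data is added, which is the trade-off described above.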
Businesses especially use massive quantities of unstructured, text-heavy data and need a way to process it efficiently. A lot of the information created online and stored in databases is natural human language, and until recently, businesses could not effectively analyze this data. This is where natural language processing is useful.

The advantage of natural language processing can be seen when considering the following two statements: "Cloud computing insurance should be part of every service-level agreement," and, "A good SLA ensures an easier night's sleep -- even in the cloud." If a user relies on natural language processing for search, the program will recognize that cloud computing is an entity, that cloud is an abbreviated form of cloud computing and that SLA is an industry acronym for service-level agreement.
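A toy version of the entity resolution described above can be sketched with a plain dictionary lookup; the surface forms and canonical names below are illustrative only, and a production system would use a trained named-entity recognizer rather than substring matching.

```python
# Toy acronym/entity normaliser: maps different surface forms in the two
# example sentences onto one canonical entity, as an NLP search layer might.
CANONICAL = {
    "cloud computing": "cloud computing",
    "the cloud": "cloud computing",
    "service-level agreement": "service-level agreement",
    "sla": "service-level agreement",
}

def normalise(text):
    found = set()
    lowered = text.lower()
    # Check longer surface forms first so "cloud computing" wins over "the cloud".
    for surface in sorted(CANONICAL, key=len, reverse=True):
        if surface in lowered:
            found.add(CANONICAL[surface])
            lowered = lowered.replace(surface, " ")
    return found

s1 = "Cloud computing insurance should be part of every service-level agreement."
s2 = "A good SLA ensures an easier night's sleep -- even in the cloud."
print(normalise(s1))  # both entities recognised from their full names
print(normalise(s2))  # same two entities, via "SLA" and "the cloud"
```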

See: https://www.techtarget.com/searchenterpriseai/definition/natural-language-processing-NLP

These are the types of vague elements that frequently appear in human language and that machine learning algorithms have been historically bad at interpreting. Now, with improvements in both deep learning and machine learning methods, established algorithms can now more effectively interpret them. These improvements expand the breadth and depth of data that can be analyzed.
Hartmann352
 
I think the following description of natural language processing (NLP), the ability of a computer program to understand human language as it is spoken and written (such language is referred to as natural language), may help you. NLP is an increasingly important component of artificial intelligence (AI).
If I understand the GPT series of AI, they are language based and learn in a way very similar to humans.
When information is received and compared to existing memory (definitions), the AI selects the "best fit" definition in context and makes a "best guess" at the correct answer in the context of the subject under consideration.
IOW, the GPT AI are predictive engines, much like the human brain.

This is why they are so incredibly versatile in their application to the human arts and sciences. Their programming imitates biological programming, sans the standard sensory experiences of touch and taste.
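The "predictive engine" idea can be illustrated, in a drastically simplified form, with a bigram counter that always guesses the most frequent next word it has seen. This is nothing like a transformer such as GPT, only a sketch of what "best guess in context" means; the tiny corpus is made up.

```python
from collections import defaultdict, Counter

# Tiny corpus, invented for illustration.
corpus = (
    "the brain predicts the next word . "
    "the model predicts the next token . "
    "the brain is a prediction engine ."
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the continuation most often seen after `word` in the corpus."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("the"))       # a common continuation of "the" in the toy corpus
print(predict("predicts"))  # 'the'
```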
 
SCARY: New A.I. Tool Can Pass Medical Tests and Bar Exam
By Paul Duke
January 23, 2023

Technologists have long been pushing our species to the precipice of unknown catastrophe, harnessing their blinding obsession with innovation to mow down the hurdles of ethics and morality and safety.

Nowhere is this more true than in the field of artificial intelligence, where every week seems to bring us a little bit closer to the dystopian dirge that science fiction authors have long warned us about.

The latest terrifying new development in the A.I. world comes to us from a system known as ChatGPT, which is now believed capable of passing complex and rather important exams.

The artificially intelligent content creator, whose name is short for ‘Chat Generative Pre-trained Transformer,’ was released two months ago by OpenAI, and has since taken the world by storm.
Praised by figures such as Elon Musk – one of OpenAI’s founders – the AI-powered chatbot has also raised alarms in regard to ethics, as students use it to cheat on writing assignments and experts warn it could have lasting effects on the US economy.
Its results, however, are inarguable – recent research shows the chatbot could successfully pass an MBA-level exam, and it may soon pass notoriously difficult tests like the United States Medical Licensing Exam and the Bar.
Just how troubling is the development?

Ethan Mollick, associate professor at Wharton School of Business at the University of Pennsylvania, highlighted these reports in a recent post on social media, one of which was carried out by one of his colleagues at the prestigious school.
The report, carried out by Christian Terwiesch, found that ChatGPT, while still in its infancy, received a grade varying from a B to B- on the final exam of a typical MBA core course.
The research, carried out to see what the release of the AI tool could mean for MBA programs, further found that ChatGPT also ‘performed well in the preparation of legal documents.’
The news comes just months after a scare at Google, where a chatbot allegedly gained sentience, according to a now-fired engineer at the company, and wound up hiring its own lawyer to represent its interests in court.

See: https://steadfastdaily.com/scary-new-a-i-tool-can-pass-medical-tests-and-bar-exam/

Wow, I could've used ChatGPT a couple times during my statistics studies when I took those gruelling examinations in college. It is a scary proposition considering the criticality of certain exams for future earnings. Take Japan, for instance, where the Center Test, a scholastic aptitude examination that functions as a key part of the admissions criteria for many Japanese universities, must be passed.

Spy eyeglasses, invisible smartwatches, and micro earpieces might remind you of an undercover agent on a classified espionage mission, however, students are using these high-tech devices to pull off ‘exam heists’ in real life.

With online education in high gear, cheating on tests has become an elaborate affair. Here’s an incident that left the authorities scratching their heads. 11 students used electronic gadgets like micro earbuds and Bluetooth collar devices to cheat during an examination for the Staff Selection Commission (SSC). Wonder how they sneaked in the devices? Here’s the fun part, they covered them in carbon paper to avoid being detected during the security check!

A college roommate of mine had to pass a complicated economics exam. He used a Bic fine point and long piece of narrow rolled up paper on which he printed his equations, which he then placed within a ball point pen which had a rectangle window enabling him to roll the sheet back and forth by the small window to call up the equations he needed. The upshot was that he never needed this gizmo. He had written the formulae so often that he remembered them for the exam. Ha!

With hidden access to ChatGPT, all the crazy spy gadgets used to pass critical tests could be eliminated.
Hartmann352
 
Its results, however, are inarguable – recent research shows the chatbot could successfully pass an MBA-level exam, and it may soon pass notoriously difficult tests like the United States Medical Licensing Exam and the Bar.
Just how troubling is the development?

It depends on the nature of the situation.

Would you have an AI argue your case with absolute mastery of the legal issues involved?

Would you have an AI control surgery with exquisite precision, but without intuitive emotional involvement?

Would you have an AI play a violin concerto with precision and impeccable time, but without "soul"?
 
write4u -

It is recognised that AI may not find its best application in live judicial proceedings:

While technology has the potential to reduce bias in American courtrooms, it is important to highlight the growing use of artificial intelligence (AI) algorithms as a risk assessment tool. As AI expands in popularity and use, contentious debate is unfolding over its effectiveness and ethics in criminal justice proceedings.

AI programs commonly aim to calculate a defendant’s risk of reoffending and failing to appear at trial. It then assigns them a score, which the judge can use to make judicial decisions, including bail, parole, guilt or innocence and even punishments. Proponents of this technology believe that AI will speed up the judicial process and make the system fairer and safer. While some acknowledge the limitations and negative consequences, others believe AI use in courtrooms will improve over time. Most people in this camp cite several key points for why AI in courts is necessary:
  1. Judicial bias:  One study on federal sentencing found that Black males were given prison terms for a 20% longer duration than white males involved in similar crimes. Others find that a judge’s mood and other unrelated factors can impact sentencing.
  2. Reducing criminal justice system burdens:  A study of judges’ sentencing in New York City found these tools can reduce overall crime rates by 25% and pre-trial jail time by over 40%, including reductions in the number of incarcerated Black and Hispanic people.
AI negatively impacts the judicial process and lacks the transparency for genuine scrutiny. Opponents recognize AI’s potential benefits in the courts, but favor the transparency of human judges. They often point to these arguments:
  1. Lack of transparency:  Almost all of these tools are developed by for-profit companies that keep their algorithms secret, meaning the courts or the defense cannot scrutinize their methods for calculating a defendant’s scores.
  2. Machine bias:  A ProPublica study of one company’s algorithm, controlling for relevant factors, found that “Black defendants were… 77% more likely to be pegged as at higher risk of committing a future violent crime and 45% more likely to be predicted to commit a future crime of any kind.”
If these tools are designed to lessen biases in the criminal justice system, then why do they produce such significant racial disparities? The challenge in answering this question is that the factors used in calculations vary from company to company, and it is impossible to know the methodology unless the manufacturer discloses them. From what is known about the technology, many companies compile socioeconomic information on the defendant; the algorithm then finds statistical correlations between these factors and the outcomes they are studying, such as crime patterns and failure to appear.
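One way to make this concern concrete is to compare error rates across groups, which is essentially what the ProPublica analysis did. The sketch below is a generic fairness audit over fabricated records, not an analysis of any real tool's output; the groups, predictions and outcomes are invented solely to show the calculation.

```python
# Generic fairness audit: compare false-positive rates across groups.
# Every record below is fabricated for illustration only.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else float("nan")

for g in ("A", "B"):
    print(g, f"FPR = {false_positive_rate(g):.2f}")
# A large gap between the two rates is the kind of disparity reported,
# even when overall accuracy looks similar for both groups.
```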

The problem with this process is that even AI is not immune from bias. Some of the factors studied, for example, may reflect ingrained racial disparities which introduces biases in the data. And research into more transparent applications of AI machine learning in predictive analytics, such as facial recognition, has repeatedly failed to accurately make predictions about BIPOC individuals. The ACLU conducted a study in 2018 to assess the accuracy of Amazon’s Rekognition facial recognition tool. They took the images of all members of the U.S. Congress and compared them with 25,000 mugshots of convicted criminals. The tool incorrectly matched forty members with mugshots of criminals. Half the incorrect matches were people of color, although they made up just 20% of Congress. This is a clear warning that we should be very careful using AI and similar technology in decisions involving sensitive issues like incarceration.

Currently, there is minimal established precedent for the use of AI tools in judicial proceedings, despite their numerous implications for defendants' Fifth and Fourteenth Amendment rights to due process. In 2013, Eric Loomis sued the state of Wisconsin, alleging that COMPAS**, an AI risk-assessment tool, violated his right to due process by preventing him from challenging the tool's validity and by factoring race and gender into its decision. The Wisconsin Supreme Court ruled against Loomis, finding that the tool did not violate his right to due process as long as it was not the sole factor in the decision (a point that is nearly impossible to prove) and that the technology was used responsibly with an understanding of its limitations. Loomis appealed to the U.S. Supreme Court, but his case was not taken up.

What is desperately needed before AI algorithms are used in court proceedings is a national study on their overall effectiveness. The study would lead to federal and state model legislation, so courts have clear guidance to ensure the programs lead to accurate and fair outcomes and protect against racial bias. A moratorium should be placed on these tools until they can be validated. Until then, it is important to keep this disparity in our collective consciousness and pressure government to move forward with the review and regulatory oversight that is so badly needed.

See: https://wavecenter.org/policy/proceed-with-caution-ai-use-in-the-courtroom/

* BIPOC - an acronym standing for Black, Indigenous, and People of Color. People are using the term to acknowledge that not all people of color face equal levels of injustice.

See: https://www.merriam-webster.com/dictionary/BIPOC

** COMPAS - is a fourth generation risk and need assessment instrument. Criminal justice agencies across the nation use COMPAS to inform decisions regarding the placement, supervision and case management of offenders. COMPAS was developed empirically with a focus on predictors known to affect recidivism. It includes dynamic risk factors, and it provides information on a variety of well validated risk and need factors designed to aid in correctional intervention to decrease the likelihood that offenders will reoffend.

COMPAS was first developed in 1998 and has been revised over the years as the knowledge base of criminology has grown and correctional practice has evolved. In many ways changes in the field have followed new developments in risk assessment. We continue to make improvements to COMPAS based on results from norm studies and recidivism studies conducted in jails, probation agencies, and prisons. COMPAS is periodically updated to keep pace with emerging best practices and technological advances.

COMPAS has two primary risk models: General Recidivism Risk and Violent Recidivism Risk. COMPAS has scales that measure both dynamic risk (criminogenic factors) and static risk (historical factors). Additional risk models include the Recidivism Risk Screen and the Pretrial Release Risk Scale II.
Statistically based risk/need assessments have become accepted as established and valid methods for organizing much of the critical information relevant for managing offenders in correctional settings (Quinsey, Harris, Rice, & Cormier, 1998). Many research studies have concluded that objective statistical assessments are, in fact, superior to human judgment (Grove, Zald, Lebow, Snitz, & Nelson, 2000; Swets, Dawes, & Monahan, 2000).

COMPAS is a statistically based risk assessment developed to assess many of the key risk and need factors in adult correctional populations and to provide information to guide placement decisions. It aims to achieve these goals by providing valid measurement and concise organization of important risk/need dimensions. Northpointe recognizes the importance of case management and supports the use of professional judgment along with actuarial risk/need assessment. Following assessment, a further goal is to help practitioners with case plan development/implementation and overall case management support.

In overloaded and crowded criminal justice systems, brevity, efficiency, ease of administration and clear organization of key risk/need data are critical. COMPAS was designed to optimize these practical factors. We acknowledge the trade-off between comprehensive coverage of key risk and criminogenic factors on the one hand, and brevity and practicality on the other. COMPAS deals with this trade-off in several ways; it provides a comprehensive set of key risk factors that have emerged from the recent criminological literature, and it allows for customization inside the software. Therefore, ease of use, efficient and effective time management, and case management considerations that are critical to best practice in the criminal justice field can be achieved through COMPAS.

See: https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf

Will AI become commonplace in America's courtrooms? Will AI replace defense attorneys? I personally don't believe so because how can it ever hope to duplicate the subtleties in voice and mannerisms which are so readily apparent to judges, jurors and prosecutors in the same courtroom. Will AI ever reproduce an eyeroll or a frenzied spindling of paper to reinforce a conjecture?

As for AI directed surgery, a good friend recently underwent AI directed prostate cancer surgery, which went without a hitch and from which he has suffered no ill effects.

Pertaining to AI trying to duplicate great violinists, will it ever move listeners to near tears like a Paganini, a Heifetz, a Perlman or an Anne-Sophie Mutter? I'm not sure, but AI may produce similarly affecting music sometime in the future.
Hartmann352
 
AI negatively impacts the judicial process and lacks the transparency for genuine scrutiny.
I don't necessarily agree with that.
Human judges are subject to human emotions that may influence their objectivity.

OTOH, AI will apply the exact same standards to the exact same situations.
If justice is to be "blind" AI is the perfect vehicle.
Opponents recognize AI’s potential benefits in the courts, but favor the transparency of human judges. They often point to these arguments:
  1. Lack of transparency:  Almost all of these tools are developed by for-profit companies that keep their algorithms secret, meaning the courts or the defense cannot scrutinize their methods for calculating a defendant’s scores.
On what basis do they make this judgement? Lack of trust in the manufacturer? How does that affect the fundamental function of the AI? I agree, you cannot always trust people. AI are impervious to bribery or blackmail.
Machine bias:  A ProPublica study of one company’s algorithm, controlling for relevant factors, found that “Black defendants were… 77% more likely to be pegged as at higher risk of committing a future violent crime and 45% more likely to be predicted to commit a future crime of any kind.”
Then the algorithm is flawed and is based on human standards that originated the bias to begin with.
A simple command that all races are to be treated equally resolves any possible bias. A properly configured AI is truly "blind" to human foibles.

All those projected problems are just human anthropomorphizations projected onto an emotionless Artificial Intelligence.
 
write4u -
dereks -

Here's some more AI uses:

Machine learning spots 8 potential technosignatures (in the search for extraterrestrial life)

by Robert Lea


An artist's depiction of the Green Bank Telescope hooked up to a machine learning system. (Image credit: Danielle Futselaar/Breakthrough Listen)

Humans have five new leads in the search to find life beyond our solar system.

Scientists attempting to address the question, "Are we alone in the universe?" have used a new machine-learning technique to discover eight previously undetected "signals of interest" from around five nearby stars. The team applied an algorithm to previously studied data collected by the Green Bank Telescope in West Virginia as part of a campaign run by Breakthrough Listen, a privately funded initiative searching 1 million nearby stars, 100 nearby galaxies and the Milky Way's plane for signs of technologically advanced life.

And the project nearly didn't happen. "I only told my team after the paper's publication that this all started as a high-school project that wasn't really appreciated by my teachers," first author Peter Ma, now an undergraduate student at the University of Toronto in Canada, said in a statement.

This isn't the first time that computer algorithms have been used to search the vastness of space for "technosignatures," technologically-generated signals that could mark other advanced extraterrestrial civilizations.

However, because many algorithms used to sift through telescope data were developed decades ago for early digital computers, they are often outdated and inefficient when applied to the massive datasets generated by modern observatories.
These classical algorithms had been used to examine the Green Bank Telescope data, and their inefficiency could be why the data showed no signals of interest when scientists first examined it in 2017. All told, the researchers analyzed 150 terabytes of data representing observations of 820 nearby stars, although they want to apply the algorithm to even more data.

"With our new technique, combined with the next generation of telescopes, we hope that machine learning can take us from searching hundreds of stars, to searching millions," Ma said in a statement.

The researchers found that the key strength of the new algorithm was to organize the data from telescopes into categories, allowing them to distinguish between real signals and "noise," or interference. Although telescopes involved in the search for technosignatures are placed in areas of the globe where there is minimal interference from human technology like cell phones, these signals still get picked up. (Most SETI programs focus on radio waves because they can travel at the speed of light across vast distances mostly unimpeded by obstacles like interstellar dust clouds; unfortunately, the very same characteristics have made radio waves the cornerstone of human communication on Earth.)

"In many of our observations, there is a lot of interference," Ma said. "We need to distinguish the exciting radio signals in space from the uninteresting radio signals from Earth."

To make sure the new algorithm wasn't confused by signals originating from Earth, Ma and the team trained their machine-learning tools to tell the difference between human-generated interference and potential extraterrestrial signals. They tested a range of algorithms, determining each algorithm's precision and how often it fell for false positives.

The most successful algorithm combined two subtypes of machine learning: supervised learning, in which humans train the algorithm to help it generalize, and unsupervised learning that can hunt through large data sets for new hidden patterns. United in what Ma called "semi-unsupervised learning," these approaches discovered eight signals that originated from five different stars located between 30 and 90 light-years away from Earth.
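The paper's β-convolutional variational autoencoder is more than a short sketch can reproduce, but the general idea of combining a few human labels with the structure of a large unlabelled set can be shown with scikit-learn's LabelSpreading, assuming scikit-learn is installed. The synthetic two-moons data below merely stands in for "interference vs. candidate signal" feature vectors; it is not the Breakthrough Listen pipeline.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

# Synthetic stand-in for "interference vs. candidate signal" feature vectors.
X, y_true = make_moons(n_samples=300, noise=0.08, random_state=0)

# Pretend only a handful of examples were labelled by humans (supervised part);
# the rest are marked -1, i.e. unlabelled (unsupervised part).
y = np.full(len(y_true), -1)
rng = np.random.default_rng(0)
labelled = rng.choice(len(y_true), size=10, replace=False)
y[labelled] = y_true[labelled]

# LabelSpreading propagates the few labels along the structure of the data.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)

accuracy = (model.transduction_ == y_true).mean()
print(f"labelled examples: {len(labelled)}, agreement with ground truth: {accuracy:.2f}")
```

The point of the sketch is only that a few labels plus the shape of the unlabelled data can classify far more examples than the labels alone; the paper's autoencoder pursues the same goal with a very different architecture.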

The signals are convincing candidates for genuine technosignatures, according to Steve Croft, project scientist for Breakthrough Listen. "First, they are present when we look at the star and absent when we look away — as opposed to local interference, which is generally always present," he said. "Second, the signals change in frequency over time in a way that makes them appear far from the telescope."

Croft cautioned that in massive datasets that can contain millions of signals, a single signal could have both of these characteristics by sheer chance alone. "It's a bit like walking across a gravel path and finding a stone stuck in the tread of your shoe that seems to fit perfectly," he said.

So although the researchers believe these eight signals resemble what a technosignature is expected to look like, they can't confidently say any or all of the signals originate from extraterrestrial intelligence. The scientists would have needed to detect the same signals multiple times, and this repetition didn't appear during brief follow-up observations by the Green Bank Telescope.

"I am impressed by how well this approach has performed on the search for extraterrestrial intelligence," Cherry Ng, a co-author on the research and an astronomer also at the University of Toronto, said in the same statement. "With the help of artificial intelligence, I'm optimistic that we'll be able to better quantify the likelihood of the presence of extraterrestrial signals from other civilizations."

The team now wants to apply the same algorithm to data gathered by observatories like the MeerKAT array* in South Africa.

"We're scaling this search effort to 1 million stars today with the MeerKAT telescope and beyond," Ma said in a second statement. "We believe that work like this will help accelerate the rate we're able to make discoveries in our grand effort to answer the question, 'Are we alone in the universe?'"

The team's research was published Monday (Jan. 30) in the journal Nature Astronomy.

See: https://www.space.com/machine-learn...-4F4C-B44F-7948232FDC7F&utm_source=SmartBrief

See the original article below:

A deep-learning search for technosignatures from 820 nearby stars

by:
Peter Xiangyuan Ma, Cherry Ng, Leandro Rizk, Steve Croft, Andrew P. V. Siemion,
Bryan Brzycki, Daniel Czech, Jamie Drew, Vishal Gajjar, John Hoang, Howard Isaacson, Matt Lebofsky, David H. E. MacMahon, Imke de Pater, Danny C. Price, Sofia Z. Sheikh & S. Pete Worden

The goal of the search for extraterrestrial intelligence (SETI) is to quantify the prevalence of technological life beyond Earth via their ‘technosignatures’. One theorized technosignature is narrowband Doppler drifting radio signals. The principal challenge in conducting SETI in the radio domain is developing a generalized technique to reject human radiofrequency interference. Here we present a comprehensive deep-learning-based technosignature search on 820 stellar targets from the Hipparcos catalogue, totalling over 480 h of on-sky data taken with the Robert C. Byrd Green Bank Telescope as part of the Breakthrough Listen initiative. We implement a novel β-convolutional variational autoencoder** to identify technosignature candidates in a semi-unsupervised manner while keeping the false-positive rate manageably low, reducing the number of candidate signals by approximately two orders of magnitude compared with previous analyses on the same dataset. Our work also returned eight promising extraterrestrial intelligence signals of interest not previously identified. Re-observations on these targets have so far not resulted in re-detections of signals with similar morphology. This machine-learning approach presents itself as a leading solution in accelerating SETI and other transient research into the age of data-driven astronomy.

Circling one star among hundreds of billions, in one galaxy among a hundred billion more, in a Universe that is vast and expanding ever faster – perhaps toward infinity. In the granular details of daily life, it’s easy to forget that we live in a place of astonishing grandeur and mystery.

The Breakthrough Initiatives are a suite of space science programs funded by the foundation established by Julia and Yuri Milner. The Initiatives investigate the fundamental questions of life in the Universe: Are we alone? Are there habitable worlds in our galactic neighbourhood? Can we make the great leap to the stars? And can we think and act together – as one world in the cosmos?

"Where is everybody?," wondered the great physicist Enrico Fermi. The Universe is ancient and immense. Life, he reasoned, has had plenty of time to get started – and get smart. But we see no evidence of anything alive or intelligent in space. In the last five years, we have discovered that planets in the habitable zone of stars are common. Based on the numbers discovered so far, there are estimated to be billions more in our galaxy alone. Yet we are still in the dark about life. Are we really alone? Or are there others out there?

It’s one of the biggest questions. And only science can answer it.

Breakthrough Listen is a $100 million program of astronomical observations and analysis, the most comprehensive ever undertaken in search of evidence of technological civilizations in the Universe. The program partners with some of the world’s largest and most advanced telescopes, across five continents, to survey targets including one million nearby stars, the entire galactic plane and 100 nearby galaxies at a wide range of radio and optical frequency bands.

Part of the Breakthrough Initiatives, Listen was launched by Yuri Milner and Stephen Hawking in 2015, and is funded by the foundation established by Yuri and Julia Milner.

Breakthrough Message is a $1 million competition to design a message representing Earth, life and humanity that could potentially be understood by another civilization. The aim is to encourage humanity to think together as one world, and to spark public debate about the ethics of sending messages beyond Earth.

See: https://breakthroughinitiatives.org/about

* MeerKAT array

The MeerKAT array (image credit: sarao.ac.za)

The MeerKAT telescope is an array of 64 interlinked receptors (a receptor is the complete antenna structure, with the main reflector, sub-reflector and all receivers, digitisers and other electronics installed).

The configuration (placement) of the receptors is determined by the science objectives of the telescope.

48 of the receptors are concentrated in the core area which is approximately 1 km in diameter.

The longest distance between any two receptors (the so-called maximum baseline) is 8 km.
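As a rough back-of-envelope check, that 8 km maximum baseline sets the best achievable angular resolution through θ ≈ λ / B. The observing frequency below (1.4 GHz, in MeerKAT's L band) is an assumed example value, not a quoted specification.

```python
import math

C = 299_792_458.0          # speed of light, m/s
baseline_m = 8_000.0       # maximum MeerKAT baseline quoted above
freq_hz = 1.4e9            # assumed L-band observing frequency

wavelength_m = C / freq_hz
theta_rad = wavelength_m / baseline_m          # diffraction-limited resolution ~ lambda / B
theta_arcsec = math.degrees(theta_rad) * 3600

print(f"wavelength ~ {wavelength_m * 100:.1f} cm, resolution ~ {theta_arcsec:.1f} arcsec")
```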

Each MeerKAT receptor consists of three main components:
  1. The antenna positioner, which is a steerable dish on a pedestal;
  2. A set of radio receivers;
  3. A set of associated digitisers.
The antenna positioner is made up of the 13.5 m effective diameter main reflector, and a 3.8 m diameter sub-reflector. In this design, referred to as an ‘Offset Gregorian’ optical layout, there are no struts in the way to block or interrupt incoming electromagnetic signals. This ensures excellent optical performance, sensitivity and imaging quality, as well as good rejection of unwanted radio frequency interference from orbiting satellites and terrestrial radio transmitters. It also enables the installation of multiple receiver systems in the primary and secondary focal areas, and provides a number of other operational advantages.

The combined surface accuracy of the two reflectors is extremely high with a deviation from the ideal shape being no more than 0.6 mm RMS (root mean square). The main reflector surface is made up of 40 aluminium panels mounted on a steel support framework.

This framework is mounted on top of a yoke, which is in turn mounted on top of a pedestal. The combined height of the pedestal and yoke is just over 8 m. The height of the total structure is 19.5 m, and it weighs 42 tons.

The pedestal houses the antenna’s pointing control system.

Mounted at the top of the pedestal, beneath the yoke, are an azimuth drive and a geared azimuth bearing, which allow the main and sub-reflectors, together with the receiver indexer, to be rotated horizontally. The yoke houses the azimuth wrap, which guides all the cables when the antenna is rotated, and prevents them from becoming entangled or damaged. The structure allows an observation elevation range from 15 to 88 degrees, and an azimuth range from -185 degrees to +275 degrees, where north is at zero degrees.

The steerable antenna positioner can point the main reflector very accurately, to within 5 arcseconds (1.4 thousandths of a degree) under low-wind and night-time observing conditions, and to within 25 arcseconds (7 thousandths of a degree) during normal operational conditions.

See: https://www.sarao.ac.za/science/meerkat/about-meerkat/

** β-convolutional variational autoencoder - a convolutional variant of the variational autoencoder (VAE), which provides a principled framework for learning deep latent-variable models and corresponding inference models.

One major division in machine learning is generative versus discriminative modeling. While in discriminative modeling one aims to learn a predictor given the observations, in generative modeling one aims to solve the more general problem of learning a joint distribution over all the variables. A generative model simulates how the data is generated in the real world. “Modeling” is understood in almost every science as unveiling this generating process by hypothesizing theories and testing these theories through observations. For instance, when an astronomer models the formation of galaxies s/he encodes in his/her equations of motion the physical laws under which stellar bodies interact. The same is true for biologists, chemists, economists and so on. Modeling in the sciences is in fact almost always generative modeling.

There are many reasons why generative modeling is attractive. First, we can express physical laws and constraints into the generative process while details that we don’t know or care about, i.e. nuisance variables, are treated as noise. The resulting models are usually highly intuitive and interpretable and by testing them against observations we can confirm or reject our theories about how the world works.

Another reason for trying to understand the generative process of data is that it naturally expresses causal relations of the world. Causal relations have the great advantage that they generalize much better to new situations than mere correlations. For instance, once we understand the generative process of an earthquake, we can use that knowledge both in California and in Chile.

To turn a generative model into a discriminator, we need to use Bayes rule. For instance, we have a generative model for an earthquake of type A and another for type B, then seeing which of the two describes the data best we can compute a probability for whether earthquake A or B happened. Applying Bayes rule is however often computationally expensive.

In discriminative methods we directly learn a map in the same direction as we intend to make future predictions in. This is the opposite direction from the generative model. For instance, one can argue that an image is generated in the world by first identifying the object, then generating the object in 3D and then projecting it onto a pixel grid. A discriminative model takes these pixel values directly as input and maps them to the labels. While generative models can learn efficiently from data, they also tend to make stronger assumptions on the data than their purely discriminative counterparts, often leading to higher asymptotic bias (Banerjee, 2007) when the model is wrong. For this reason, if the model is wrong (and it almost always is to some degree!), if one is solely interested in learning to discriminate, and one is in a regime with a sufficiently large amount of data, then purely discriminative models typically will lead to fewer errors in discriminative tasks. Nevertheless, depending on how much data is around, it may pay off to study the data generating process as a way to guide the training of the discriminator, such as a classifier. For instance, one may have few labeled examples and many more unlabeled examples. In this semi-supervised learning setting, one can use the generative model of the data to improve classification (Kingma et al., 2014; Sønderby et al., 2016a).

Generative modeling can be useful more generally. One can think of it as an auxiliary task. For instance, predicting the immediate future may help us build useful abstractions of the world that can be used for multiple prediction tasks downstream. This quest for disentangled, semantically meaningful, statistically independent and causal factors of variation in data is generally known as unsupervised representation learning, and the variational autoencoder (VAE) has been extensively employed for that purpose. Alternatively, one may view this as an implicit form of regularization: by forcing the representations to be meaningful for data generation, we bias the inverse of that process, which maps from input to representation, into a certain mould. The auxiliary task of predicting the world is used to better understand the world at an abstract level and thus to better make downstream predictions.

The VAE can be viewed as two coupled, but independently parameterized models: the encoder or recognition model, and the decoder or generative model. These two models support each other. The recognition model delivers to the generative model an approximation to its posterior over latent random variables, which it needs to update its parameters inside an iteration of “expectation maximization” learning. Reversely, the generative model is a scaffolding of sorts for the recognition model to learn meaningful representations of the data, including possibly class-labels. The recognition model is the approximate inverse of the generative model according to Bayes rule***.

One advantage of the VAE framework, relative to ordinary Variational Inference (VI), is that the recognition model (also called the inference model) is now a (stochastic) function of the input variables. This is in contrast to VI, where each data-case has a separate variational distribution, which is inefficient for large data-sets. The recognition model uses one set of parameters to model the relation between input and latent variables and as such is called “amortized inference”. This recognition model can be arbitrarily complex but is still reasonably fast because, by construction, it can be done using a single feedforward pass from input to latent variables. However, the price we pay is that this sampling induces sampling noise in the gradients required for learning. Perhaps the greatest contribution of the VAE framework is the realization that we can counteract this variance by using what is now known as the “reparameterization trick”, a simple procedure to reorganize our gradient computation that reduces variance in the gradients.

The VAE is inspired by the Helmholtz Machine (Dayan et al., 1995) which was perhaps the first model that employed a recognition model. However, its wake-sleep algorithm was inefficient and didn’t optimize a single objective. The VAE learning rules instead follow from a single approximation to the maximum likelihood objective.

VAEs marry graphical models and deep learning. The generative model is a Bayesian network of the form p(x|z)p(z), or, if there are multiple stochastic latent layers, a hierarchy such as p(x|zL)p(zL|zL−1) ...p(z1|z0). Similarly, the recognition model is also a conditional Bayesian network of the form q(z|x) or as a hierarchy, such as q(z0|z1)...q(zL|X). But inside each conditional may hide a complex (deep) neural network, e.g. z|x ∼ f(x,ε), with f a neural network mapping and ε a noise random variable. Its learning algorithm is a mix of classical (amortized, variational) expectation maximization but through the reparameterization trick ends up backpropagating through the many layers of the deep neural networks embedded inside of it.
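To ground the terminology, here is a minimal dense (not convolutional) β-VAE in PyTorch, assuming PyTorch is installed. It shows the encoder/decoder split, the reparameterization trick, and the β-weighted KL term; the layer sizes, β value and random input are arbitrary illustrations, and this is not the architecture used in the technosignature paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBetaVAE(nn.Module):
    """Minimal dense beta-VAE: encoder -> (mu, logvar) -> reparameterize -> decoder."""
    def __init__(self, x_dim=784, h_dim=128, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps keeps gradient variance low.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # Reconstruction term plus beta-weighted KL divergence to the unit Gaussian prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# One illustrative optimisation step on random "images" with values in [0, 1].
model = TinyBetaVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 784)
x_hat, mu, logvar = model(x)
loss = beta_vae_loss(x, x_hat, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
print("one training step, loss =", float(loss))
```

Setting β above 1 weights the KL term more heavily, which is what pushes the latent variables toward the disentangled, independent factors discussed above.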

See: https://arxiv.org/pdf/1906.02691.pdf

*** Bayes rule is the mathematical rule that describes how to update a belief, given some evidence. In other words – it describes the act of learning.

The equation itself is not too complex:

In symbols: P(A|B) = P(A) × P(B|A) / P(B)
In words: Posterior = Prior × Likelihood / Marginal probability
There are four parts:

  • Posterior probability (updated probability after the evidence is considered)
  • Prior probability (the probability before the evidence is considered)
  • Likelihood (probability of the evidence, given the belief is true)
  • Marginal probability (probability of the evidence, under any circumstance)
Bayes' Rule can answer a variety of probability questions, which help us (and machines) understand the complex world we live in.

See: https://www.freecodecamp.org/news/bayes-rule-explained/
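As a worked example (with made-up numbers, purely for illustration): suppose a condition affects 1% of a population, and a test has a 95% true-positive rate and a 5% false-positive rate. Bayes' rule then gives the updated probability of having the condition after a positive test:

```python
# Worked Bayes' rule example with invented numbers.
prior = 0.01                 # P(A): having the condition
likelihood = 0.95            # P(B|A): testing positive if you have it
false_positive = 0.05        # P(B|not A): testing positive if you don't

# Marginal probability of the evidence, P(B), summed over both possibilities.
marginal = likelihood * prior + false_positive * (1 - prior)

posterior = prior * likelihood / marginal   # Bayes' rule
print(round(posterior, 3))   # ~0.161: a positive test raises 1% to about 16%
```

The small posterior (about 16%) shows why the marginal probability in the denominator matters: most of the positive results come from the much larger group without the condition.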

The most successful algorithm combined two subtypes of machine learning: supervised learning, in which humans train the algorithm to help it generalize, and unsupervised learning, which can hunt through large data sets for new hidden patterns. This AI machine-learning stellar search will soon be applied to a million star systems. What will it find? Again, we'll have to wait.
Hartmann352
 
Neurons can be reproduced physically or simulated by a digital computer.
It's a little bit more complicated than that. Eukaryotic neurons contain nano-scale dynamic tubulin potentiometers that handle most of the data and control the "action potentials" that transmit the incoming data.
 
The term “cytoskeleton” is often used as if it described a single, unified structure, but the cytoskeleton of neurons and other eukaryotic cells comprises three distinct, interacting structural complexes that have very different properties: microtubules (MTs), neurofilaments (NFs) and microfilaments (MFs). Each has a characteristic composition, structure and organization that may be further specialized in a particular cell type or subcellular domain. The defining structural elements have long been identifiable in electron micrographs (Fig. 8-1), and a considerable amount is known about the detailed organization of these components in neurons and glia. Each set of cytoskeletal structures is considered in turn.

cytoskeleton.jpeg
The cytoskeleton and organization of the axon in cross-section. Left: Electron micrograph of a myelinated toad axon in cross-section taken near a Schmidt-Lanterman cleft; axon diameter is slightly reduced and the different domains within the axoplasm are emphasized. Right: Diagram highlighting key features of the axoplasm. Portions of the myelin sheath surrounding the axon can be seen (My). Most of the axonal diameter is taken up by the neurofilaments (clear area). There is a minimum distance between neurofilaments and other cytoskeletal structures that is determined by the side arms of the neurofilaments. (These side arms are visible between some of the neurofilaments in the electron micrograph, left.) The microtubules (MT) tend to be found in bundles and are more irregularly spaced. They are surrounded by a fuzzy material that is also visible in the region just below the plasma membrane (stippled areas, right). These areas are thought to be enriched in actin microfilaments and presumably contain other slow component b (SCb) proteins as well. The stippled regions with embedded microtubules are also the location of membranous organelles in fast axonal transport (larger, filled, irregular shapes, right). Both microtubule and microfilament networks need to be intact for the efficient movement of organelles in fast transport. (Electron micrograph provided by Dr. Alan Hodge. From [34], with permission.)

Neuronal MTs are structurally similar to those found in other eukaryotic cells. The core structure is a polymer of 50-kDa tubulin subunits. Heterodimers of α- and β-tubulin align end to end to form protofilaments, 13 of which join laterally to form a hollow tube with an outer diameter of 25 nm (Fig. 8-2). Examples also exist of MTs with 12 and 14 protofilaments. The α- and β-tubulins are the best known members of a unique protein family, the members of which have significant sequence similarity [2]. There is approximately 40% sequence identity between α- and β-tubulins and even greater identity within the α and β gene subfamilies. Conservation of the primary sequence for tubulins is also high across species so that tubulins from yeast can readily co-assemble with tubulins from human brain. Tubulin dimers bind two molecules of GTP and exhibit GTPase activity that is closely linked to assembly and disassembly of MTs. While many questions remain about tubulin and its interactions, the structure of the αβ-tubulin dimer has recently been derived from electron diffraction studies, providing a basis for dissection of the functional architecture of MTs.

filaments.jpeg
Microfilaments, microtubules and intermediate filaments in the nervous system. Each cytoskeletal structure has a distinctive ultrastructure. This schematic illustrates the major features of the core fibrils. The microfilament consists of two strands of actin subunits twisted around each other like strings of pearls. The individual subunits are asymmetrical, globular proteins that give the microfilament its polarity. The microtubule is also made from globular subunits, but in this case the basic building block is a heterodimer of α- and β-tubulins. These αβ dimers are organized into linear strands, or protofilaments, with β-tubulin subunits oriented toward the plus end of the microtubule. Protofilaments form sheets in vitro that roll up into a cylinder with 13 protofilaments forming the wall of the microtubule. Assembly of both microfilaments and microtubules is coupled to slow nucleotide hydrolysis, ATP for microfilaments and GTP for microtubules. The subunits of both glial filaments and neurofilaments are rod-shaped molecules that will self-assemble without nucleotides. The core filament structure is thought to be a ropelike arrangement of individual subunits. Glial filaments are typical type III intermediate filaments in that they form homopolymers without side arms. In contrast, neurofilaments are heteropolymers formed from three subunits, NFH, NFM and NFL for the high-, medium- and low-molecular-weight subunits. The NFH and NFM subunits have extended carboxy-terminal tails that project from the sides of the core filament and may be heavily phosphorylated.

Heterodimers in a MT are oriented in the same direction, so the resulting MT has asymmetrical ends that differ in assembly properties [4]. The β-tubulin subunit is exposed at the “plus” end, which is the preferred end for addition of tubulin dimers. The opposite, “minus,” end grows more slowly at physiological concentrations of tubulin. In the case of free MTs, the balance between assembly and disassembly at each end defines a critical concentration for net growth. MT assembly under in vitro conditions involves a slow nucleation step followed by a more rapid, net growth phase interspersed with occasional, rapid shrinkage, a kinetic pattern described as dynamic instability. In glia and most other non-neuronal cells, however, the minus ends of MTs are usually bound at the site of nucleation, which is associated with the pericentriolar complex of the cell, a site often called the microtubule-organizing center (MTOC) [5]. Anchoring of MT minus ends helps to establish and maintain the polarity of cellular MTs. Anchoring and nucleation of MTs appear to require a third class of tubulin, γ-tubulin, which is detectable only as part of the pericentriolar complex [5].
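The "dynamic instability" kinetics described above can be pictured with a toy two-state simulation (all rates and probabilities below are invented for illustration and are not measured values): the plus end grows steadily until a random switch, conventionally called a catastrophe, flips it into rapid shrinkage, and a rescue flips it back.

```python
import random

# Toy simulation of microtubule dynamic instability at a single plus end.
# All numbers are invented for illustration only.
random.seed(0)
length = 5.0                          # arbitrary length units
growing = True
GROW, SHRINK = 0.05, 0.20             # length change per time step
P_CATASTROPHE, P_RESCUE = 0.02, 0.05  # per-step switching probabilities

trace = []
for step in range(2000):
    if growing:
        length += GROW
        if random.random() < P_CATASTROPHE:   # catastrophe: switch to shrinking
            growing = False
    else:
        length = max(0.0, length - SHRINK)
        if random.random() < P_RESCUE:        # rescue: switch back to growing
            growing = True
    trace.append(length)

print(f"final: {trace[-1]:.2f}, max: {max(trace):.2f}, min: {min(trace):.2f}")
```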

The organization of MTs in neurons differs in several ways from that seen in non-neuronal cells (Fig. 8-3). Axonal and dendritic MTs are not continuous back to the cell body nor are they associated with any visible MTOC. Axonal MTs can be more than 100 μm long, but they have uniform polarity, with all plus ends distal to the cell body. Dendritic MTs are typically shorter and often exhibit mixed polarity, with only about 50% of the MTs oriented with the plus end distal. Recent work suggests that MTs in both axons and dendrites are nucleated normally at the MTOC but are then released from the MTOC and delivered to neurites.

dendrits.jpeg
The axonal and dendritic cytoskeletons differ in both composition and organization. The major differences are illustrated in this diagram. With one exception, all cytoskeletal proteins are synthesized on free polysomes in the cell body, then transported to their different cellular compartments. The exception is MAP2, which is the major microtubule-associated protein of dendrites. While some MAP2 is synthesized in the cell body, MAP2 mRNA is specifically enriched in the dendritic compartment and a significant fraction is thought to be synthesized there. The microtubules of cell bodies, dendrites and axons are thought to be nucleated at the microtubule-organizing center (MTOC), then released and delivered to either the dendrites or axon. In the dendrite, microtubules often have mixed polarities with both plus and minus ends distal to the cell body. The functional consequence of this organization is uncertain but may help explain why dendrites taper with distance from the cell body. In contrast, all axonal microtubules are oriented with the plus end distal to the cell body and exhibit uniform distribution across the axon. Although some tau protein can be detected in cell bodies and dendrites, axonal microtubules are enriched in tau and axonal tau is differentially phosphorylated. MAP2 appears to be absent from the axon. Neurofilaments are largely excluded from the dendritic compartments but are abundant in large axons. The spacing of neurofilaments is sensitive to the level of phosphorylation. Microtubules and neurofilaments both stop and start in the axon rather than being continuous back to the cell body. The microfilaments are more dispersed in their organization and may be difficult to visualize in the mature neuron. They are most abundant near the plasma membrane but are also enriched in presynaptic terminals and dendritic spines. GA, Golgi apparatus.

While MTs in neurons are composed of the same basic constituents as those in non-neuronal cells, they are strikingly more diverse (Table 8-1). Brain MTs contain tubulins of many different isotypes, with many different post-translational modifications and a variety of microtubule-associated proteins (MAPs). MT composition varies according to location, such as in axons or dendrites, suggesting that brain MTs exist in specialized forms to perform designated tasks in the unique environments of the neuron. For example, axonal MTs contain stable segments that are unusually resistant to treatments that depolymerize MTs in other cells. Such stable domains are preserved as short MT segments and may serve to nucleate or organize MTs in axons, particularly during regeneration. This and other specializations of axonal MTs (see below) may reflect the unusual requirements of the neuronal cytoskeleton, where remarkably long MTs are maintained at considerable distances from sites of new protein synthesis in the cell body.


Major Microtubule Cytoskeletal Proteins of the Nervous System.

See: https://www.ncbi.nlm.nih.gov/books/NBK28122/

Almost 55 years ago in advanced Biology, my lab partner and I studied the organelles of the cell extensively. It is very interesting, now, to see the microtubules acting as the substrate for the transport of membrane-bound organelles. Microtubules are necessary for the extension of neurites during development; they provide the scaffolding for maintaining neurites after extension, and they help maintain the definition and integrity of intracellular compartments. The diversity of these functions is reflected in differences in the shape, biochemistry and metabolic stability of different MTs.
Hartmann352
 

Recent research has uncovered several additional abilities of microtubules that could not be adequately observed in those days.

a) Microtubules are dynamic EM potentiometers
b) Microtubules create EM fields
c) Microtubule catastrophe causes mental problems (Alzheimer's)

Roger Penrose and Stuart Hameroff propose that "consciousness" emerges from quantum activities in MTs. Their theory is named Orch OR (Orchestrated Objective Reduction).
 
The presence of electromagnetic field generated by living systems has been confirmed by a number of experiments. An indication of the viability of living specimens by experimentally observed nanoscale vibrations was published by Kasas et al. [9]. Due to the dipolar character of biological structures, these mechanical vibrations must be connected with a generation of electromagnetic field. Nevertheless, the role of biological electromagnetic field is not yet fully understood because its power is extremely low and direct measurement is a difficult task as described by Pokorný et al. [10] and Del Giudice and Tedeschi [11]. The power being several orders of magnitude below the thermal noise level, either indirect detection methods or statistical evaluations of a series of experiments have been used. A measurement of the cellular oscillating electric field was performed indirectly by Pohl et al. by attraction of dielectric particles (the dielectrophoretic method); the largest amount of attracted particles appeared in the mitotic (M) phase [12]. Cellular reactions and interactions mediated by signals in the near infrared and visible regions were described by Albrecht-Buehler [13,14,15]. Experimental results obtained by conventional electrotechnical methods were published by Hölzel [16] and Jelínek et al. [17]. The measured field peaks in the time of formation of the mitotic spindle, late prometaphase and metaphase, and anaphase A and B [10].

The fundamental significance of the electromagnetic field in biological functions corresponds also to the locus of its generation—the central part of cells with microtubules is devoted to this procedure. Microtubules, the main components of cytoskeleton, are considered to be the structures conditioning the existence of multicellular organisms. They provide many activities such as material transport, cell motility, division, etc. Very likely, microtubules also facilitate information processing [18,19]. However, their main function may be connected with their electric polarity. Microtubules are self-assembled linear hollow circular tubes with inner and outer diameters of 17 and 25 nm, respectively, growing from the centrosome in the center of the cell towards its membrane [20,21]—and forming a radial system. They are polymers built of tubulin heterodimers with a helical periodicity of 13 heterodimers along a helix turn (some microtubules have a higher number of heterodimers). A tubulin heterodimer consists of two subunits—α and β tubulin. Each heterodimer is an electric dipole with 18 Ca ions located in the dimer center and a negative charge in the α tubulin before hydrolysis of guanosine triphosphate (GTP) to guanosine diphosphate (GDP) and in the β tubulin after hydrolysis—Satarić et al., Tuszyński et al. [22,23]. After irradiation by external electromagnetic field and consequent measurement, Sahu et al. disclosed electromagnetic activity and resonance spectra in a wide frequency range from radio frequencies up to the UV band [24,25,26]; further frequencies have been predicted by Cosic et al. [27]. The excitation of the microtubule inner circular cavity is possible in the UV region. Measurement of transistor-like electric amplification by microtubules is described by Priel et al. [28] and nonchemical distant interactions caused by ultraweak photon emission are described in [29,30].

The electromagnetic field may organize and control the motion and transport of molecules and their components, chemical reactions, information processes, communication inside and between cells, and many other activities. Brain activity is considered to be of electromagnetic nature as was suggested by Craddock et al. [31] and Sahu et al. [24,25]. A direct control of neuronal activity by the electromagnetic field was demonstrated by Duke et al. [32] and Yoo et al. [33]. Biophysical and neurophysiological changes, including increased neuronal excitability, caused by exposing cell cultures to electromagnetic fields at selected resonant frequencies of microtubules were studied by Rafati et al. [34]. The fast-acting electromagnetic mechanism of neuronal communication prior to the spike, facilitating decision-making processes in the brain, has been revealed by Singh et al. [35]. The biological electromagnetic field can provide simultaneity not only in a cell or its structures, but in the whole tissue, and should be regulated with respect to the electromagnetic activity in other tissues. Therefore, the field must be coherent and correlated in a large space region of a biological system and in the whole frequency spectrum to provide synchronous actions. Interaction of the cellular electromagnetic field with electrons on molecular orbitals seems to be of a fundamental significance.

  1. Fröhlich, H. Bose condensation of strongly excited longitudinal electric modes. Phys. Lett. A 1968, 26, 402–403.
  2. Fröhlich, H. Long-range coherence and energy storage in biological systems. Int. J. Quantum Chem. 1968, 2, 641–649.
  3. Fröhlich, H. Quantum mechanical concepts in biology. In Theoretical Physics and Biology, Proceedings of The First International Conference on Theoretical Physics and Biology, Versailles, France, 26–30 June 1967; Marois, M., Ed.; North Holland Publishing Co.: Amsterdam, 1969; pp. 13–22.
  4. Fröhlich, H. The biological effects of microwaves and related questions. In Advances in Electronics and Electron Physics; Marton, L., Marton, C., Eds.; Academic Press: New York, NY, USA, 1980; Volume 53, pp. 85–152.
  5. Pokorný, J.; Wu, T.-M. Biophysical Aspects of Coherence and Biological Order; Springer: Berlin/Heidelberg, Germany; Academia: Prague, Czech Republic, 1998.
  6. Preparata, G. QED Coherence in Matter; World Scientific: Singapore, 1995.
  7. Pokorný, J.; Pokorný, J.; Kobilková, J. Postulates on electromagnetic activity in biological systems and cancer. Integr. Biol. 2013, 5, 1439–1446.
  8. Foletti, A.; Brizhik, L. Nonlinearity, coherence and complexity: Biophysical aspects related to health and disease. Electromagn. Biol. Med. 2017, 36, 315–324.
  9. Kasas, S.; Ruggeri, F.S.; Benadiba, C.; Maillard, C.; Stupar, P.; Tournu, H.; Dietler, G.; Longo, G. Detecting nanoscale vibrations as signature of life. Proc. Natl. Acad. Sci. USA 2015, 112, 297–298.
  10. Pokorný, J.; Hašek, J.; Jelínek, F.; Šaroch, J.; Palán, B. Electromagnetic activity of yeast cells in the M phase. Electro Magn. 2001, 20, 371–396.
  11. Del Giudice, E.; Tedeschi, A. Water and autocatalysis in living matter. Electromagn. Biol. Med. 2009, 28, 46–52.
  12. Pohl, H.A.; Braden, T.; Robinson, S.; Piclardi, J.; Pohl, D.G. Life cycle alterations of the micro-dielectrophoretic effects of cells. J. Biol. Phys. 1981, 9, 133–154.
  13. Albrecht-Buehler, G. Surface extensions of 3T3 cells towards distant infrared light sources. J. Cell. Biol. 1991, 114, 493–502.
  14. Albrecht-Buehler, G. Rudimentary form of cellular “vision”. Proc. Natl. Acad. Sci. USA 1992, 89, 8288–8293.
  15. Albrecht-Buehler, G. A long-range attraction between aggregating 3T3 cells mediated by near-infrared light scattering. Proc. Natl. Acad. Sci. USA 2005, 102, 5050–5055.
  16. Hölzel, R. Electric activity of non-excitable biological cells at radio frequencies. Electro Magn. 2001, 20, 1–13.
  17. Jelínek, F.; Cifra, M.; Pokorný, J.; Vaniš, J.; Šimša, J.; Hašek, J.; Frýdlová, I. Measurement of electrical oscillations and mechanical vibrations of yeast cells membrane around 1 kHz. Electromagn. Biol. Med. 2009, 28, 223–232.
  18. Craddock, T.J.A.; Tuszynski, J.A.; Hameroff, S. Cytoskeletal signaling: Is memory encoded in microtubule lattices by CaMKII phosphorylation? PLoS Comput. Biol. 2012, 8, e1002421.
  19. Tuszynski, J.A.; Friesen, D.; Freedman, H.; Sbitnef, V.I.; Kim, H.; Santelices, I.; Kalra, A.P.; Patel, S.D.; Shankar, K.; Chua, L.O. Microtubules as sub-cellular memristors. Sci. Rep. 2020, 10, 2108.
  20. Amos, L.A.; Klug, A. Arrangement of subunits in flagellar microtubules. J. Cell Sci. 1974, 14, 523–549.
  21. Amos, L.A. Structure of microtubules. In Microtubules; Roberts, K., Hyams, J.S., Eds.; Academic Press: New York, NY, USA, 1979; pp. 1–64.
  22. Satarić, M.; Tuszyński, J.A.; Žakula, R.B. Kinklike excitation as an energy transfer mechanism in microtubules. Phys. Rev. E 1993, 8, 589–597.
  23. Tuszyński, J.A.; Hameroff, S.; Satarić, M.; Trpisová, B.; ***, M.L.A. Ferroelectric behavior in microtubule dipole lattices: Implications for conformation processing, signalling and assembly/disassembly. J. Theor. Biol. 1995, 174, 371–380.
  24. Sahu, S.; Ghosh, S.; Ghosh, B.; Aswani, K.; Hirata, K.; Fujita, D.; Bandyopadhyay, A. Atomic water channel controlling remarkable properties of a single brain microtubule: Correlating single protein to its supramolecular assembly. Biosens. Bioelectron. 2013, 47, 141–148.
  25. Sahu, S.; Ghosh, S.; Hirata, K.; Fujita, D.; Bandyopadhyay, A. Multi-level memory switching properties of a single brain microtubule. Appl. Phys. Lett. 2013, 102, 123701.
  26. Sahu, S.; Ghosh, S.; Fujita, D.; Bandyopadhyay, A. Live visualizations of single isolated tubulin protein self-assembly via tunnelling current: Effect of electromagnetic pumping during spontaneous growth of microtubule. Sci. Rep. 2014, 4, 7303.
  27. Cosic, I.; Lazar, K.; Cosic, D. Prediction of tubulin resonant frequencies using the resonant recognition model (RRM). IEEE Trans. Nanobiosci. 2015, 14, 491–496.
  28. Priel, A.; Ramos, A.J.; Tuszyński, J.A.; Cantiello, H.F. A biopolymer transistor: Electric amplification by microtubules. Biophys. J. 2006, 90, 4639–4643.
  29. Gurwitsch, A. Die Natur des spezifischen Erregers der Zellteilung. Arch. Mikrosk. Anat. Entw. Mech. 1923, 100, 11–40.
  30. Volodyaev, I.; Beloussov, L.V. Revisiting the mitogenetic effect of ultra-weak photon emission. Front. Physiol. 2015, 6, 241.
  31. Craddock, T.J.A.; Kurian, P.; Preto, J.; Sahu, K.; Hameroff, S.R.; Klobukowski, M.; Tuszynski, J.A. Anesthetic alterations of collective terahertz oscillations in tubulin correlate with clinical potency: Implications for anesthetic action and post-operative cognitive dysfunction. Sci. Rep. 2017, 29, 9877.
  32. Duke, A.R.; Jenkins, M.W.; Lu, H.; McManus, J.M.; Chiel, H.J.; Jansen, E.D. Transient and selective suppression of neural activity with infrared light. Sci. Rep. 2013, 3, 2600.
  33. Yoo, S.; Hong, S.; Choi, Y.; Park, J.-H.; Nam, Y. Photothermal inhibition of neural activity with near-infrared-sensitive nanotransducers. ACS Nano 2014, 8, 8040–8049.
  34. Rafati, Y.; Cantu, J.C.; Sedelnikova, A.; Tolstykh, G.P.; Peralta, X.G.; Valdez, C.; Echchgadda, I. Effect of microtubule resonant frequencies on neuronal cells. In Optical Interactions with Tissue and Cells XXXI; SPIE: Bellingham, WA, USA, 2020; Volume 11238.
  35. Singh, P.; Ghosh, S.; Sahoo, P.; Bandyopadhyay, A. Electrophysiology using coaxial atom probe array: Live imaging reveals hidden circuits of a hippocampal neural network. J. Neurophysiol. 2021.
  36. Fröhlich, H. Coherent electric vibrations in biological systems and cancer problem. IEEE Trans. MTT 1978, 26, 613–617.
  37. Pokorný, J.; Jelínek, F.; Trkal, V.; Lamprecht, I.; Hölzel, R. Vibrations in microtubules. J. Biol. Phys. 1997, 23, 171–179.
  38. Böhm, K.J.; Mavromatos, N.E.; Michette, A.; Stracke, R.; Unger, E. Movement and alignment of microtubules in electric fields and electric-dipole-moment estimates. Electromagn. Biol. Med. 2005, 24, 319–330.
  39. Schoutens, J.E. Dipole–dipole interactions in microtubules. J. Biol. Phys. 2005, 31, 35–55.
  40. Sataric, M.V.; Tuszynski, J.A. Nonlinear dynamics of microtubules: Biophysical implications. J. Biol. Phys. 2005, 31, 487–500.
  41. Alberts, B.; Bray, D.; Lewis, J.; Raff, M.; Roberts, K.; Watson, J.D. Molecular Biology of the Cell, 3rd ed.; Garland Publishing, Inc.: New York, NY, USA, 1994.
  42. Stebbings, H.; Hunt, C. The nature of the clear zone around microtubules. Cell. Tissue Res. 1982, 227, 609–617.
  43. Zheng, J.-M.; Chin, W.-C.; Khijniak, E.; Khijniak, E., Jr.; Pollack, G.H. Surfaces and interfacial water: Evidence that hydrophilic surfaces have long-range impact. Adv. Colloid Interface Sci. 2006, 127, 19–27.
  44. Modica-Napolitano, J.S.; Aprille, J.R. Basis for selective cytotoxicity of Rhodamine 123. Cancer Res. 1987, 47, 4361–4365.
  45. Warburg, O.; Posener, K.; Negelein, E. Über den Stoffwechsell der Carzinomzelle. Biochem. Z. 1924, 152, 309–344.
  46. Pokorný, J.; Pokorný, J.; Borodavka, F. Warburg effect–damping of electromagnetic oscillations. Electromagn. Biol. Med. 2017, 36, 270–278.
  47. Dicke, R.H.; Wittke, J.P. Introduction to Quantum Mechanics; Addison–Wesley Publishing Co.: Reading, MA, USA, 1961.
  48. Šrobár, F. Radiating Fröhlich system as a model of cellular electromagnetism. Electromagn. Biol. Med. 2015, 34, 355–360.
  49. Šrobár, F. Impact of mitochondrial electric field on modal occupancy in the Fröhlich model of cellular electromagnetism. Electromagn. Biol. Med. 2013, 32, 401–408.
  50. Derjaguin, B.V. Die Formelastizität der dünnen Wasserschichten. Prog. Surf. Sci. 1933, 84, 657–670.
  51. Zheng, J.; Pollack, G.H. Long-range forces extending from polymer–gel surfaces. Phys. Rev. E 2003, 68.
  52. Marais, A.; Adams, B.; Ringsmuth, A.K.; Ferretti, M.; Gruber, J.M.; Hendrikx, R.; Schuld, M.; Smith, S.L.; Sinayskiy, I.; Krüger, T.P.J.; et al. The future of quantum biology. J. Royal Soc. Interface 2018, 15.
  53. Pokorný, J.; Pokorný, J.; Jandová, A.; Kobilková, J.; Vrba, J.; Vrba, J., Jr. Energy parasites trigger oncogene mutation. Int. J. Rad. Biol. 2016, 92, 577–582.
  54. Stratton, J.A. Electromagnetic Theory; McGraw–Hill Book Co. Inc.: New York, NY, USA, 1941; pp. 434–437, 492–497.
 
Thanks for the excellent referrals. I am enchanted with microtubules.
They are a "common denominator" in all eukaryotic life and bestow all kinds of survival abilities on living organisms, from heliotropism to magnetic navigation.
A self-organizing dynamic dipolar coil: what could be more suited to information processing?
 
This brings me to the evolving roles microtubules play in our search for "consciousness"

Microtubules: Evolving roles and critical cellular interactions
Caitlin M Logan and A Sue Menko

Microtubules are characteristically defined by a constant cycling between growing and shrinking referred to as dynamic instability [41-44].
Subpopulations of microtubules can be stabilized by post-translational modifications including acetylation [45-53] or detyrosination [48, 49, 54] as well as their interactions with microtubule-associated proteins (MAPs) [55-58]. This stabilization of microtubules is important for several cell processes, with acetylated microtubules found at the leading edge of actively migrating cells assisting in directional migration [54].
Acetylated microtubules are also the foundation of primary cilia, which are generally referred to as the antennae of the cell, involved in sensing the cellular environment, cell signaling, liquid flow, cell polarity and multiple sensory organ functions including smell, sound, and sight [45, 47, 59-63].

and so much more...... https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6880148/

Question: does consciousness require dynamic biology or dynamic mineralogy, or mathematically relational physics?

There must be a traceable pattern, a "common denominator" among conscious organisms.

Conscious from the perspective of being subject to Differential Equations.

And the cellular cytoplasm is an amalgam of information processors of intra-cellular, inter-cellular, and global-cellular informational input.

The cell as a whole is a proto-brain. Bring a whole bunch together and you get "consciousness", just as when you bring a bunch (amalgam) of H2O molecules together you get liquid "wetness" (water), or a bunch of CO2 molecules together and you get solid "dryness" (dry ice), as well as fluid "gaseousness" (vapor).

As for states of consciousness, biology has its own evolved consciousness of its environment, but so have mineralogy and the earth itself, as an amalgam of common denominators that has attained a positive existential dynamic biome.

These "qualities" (properties) are emergent states under certain specific requirements and emerge when conditions are "sufficient" to meet "necessity".
 
Consciousness can not simply be reduced to neural activity alone, researchers say. A novel study reports the dynamics of consciousness may be understood by a newly developed conceptual and mathematical framework.

How do 1.4 kg of brain tissue create thoughts, feelings, mental images, and an inner world?

The ability of the brain to create consciousness has baffled some for millennia. The mystery of consciousness lies in the fact that each of us has subjectivity, something it is like to sense, feel and think.

In contrast to being under anesthesia or in a dreamless deep sleep, while we’re awake we don’t “live in the dark” — we experience the world and ourselves. But how the brain creates the conscious experience and what area of the brain is responsible for this remains a mystery.

According to Dr. Nir Lahav, a physicist from Bar-Ilan University in Israel, “This is quite a mystery since it seems that our conscious experience cannot arise from the brain, and in fact, cannot arise from any physical process.”

As strange as it sounds, the conscious experience in our brain cannot be found in, or reduced to, any neural activity.

“Think about it this way,” says Dr. Zakaria Neemeh, a philosopher from the University of Memphis, “when I feel happiness, my brain will create a distinctive pattern of complex neural activity. This neural pattern will perfectly correlate with my conscious feeling of happiness, but it is not my actual feeling. It is just a neural pattern that represents my happiness. That’s why a scientist looking at my brain and seeing this pattern should ask me what I feel, because the pattern is not the feeling itself, just a representation of it.”

As a result, we can’t reduce the conscious experience of what we sense, feel and think to any brain activity. We can just find correlations to these experiences.

After more than 100 years of neuroscience we have very good evidence that the brain is responsible for the creation of our conscious abilities. So how could it be that these conscious experiences can't be found anywhere in the brain (or in the body) and can't be reduced to any complex neural activity?

This mystery is known as the hard problem of consciousness. It is such a difficult problem that until a couple of decades ago only philosophers discussed it and even today, although we have made huge progress in our understanding of the neuroscientific basis of consciousness, still there is no adequate theory that explains what consciousness is and how to solve this hard problem.

Dr. Lahav and Dr. Neemeh recently published a new physical theory in the journal Frontiers in Psychology that claims to solve the hard problem of consciousness in a purely physical way.

According to the authors, when we change our assumption about consciousness and assume that it is a relativistic phenomenon, the mystery of consciousness naturally dissolves. In the paper the researchers developed a conceptual and mathematical framework to understand consciousness from a relativistic point of view.

According to Dr. Lahav, the lead author of the paper, “consciousness should be investigated with the same mathematical tools that physicists use for other known relativistic phenomena.”

To understand how relativity dissolves the hard problem, think about a different relativistic phenomenon, constant velocity. Let's choose two observers, Alice and Bob, where Bob is on a train that moves with constant velocity and Alice watches him from the platform. There is no absolute physical answer to the question of what Bob's velocity is.

The answer is dependent on the frame of reference of the observer.

From Bob’s frame of reference, he will measure that he is stationary and Alice, with the rest of the world, is moving backwards. But from Alice’s frame Bob is the one that’s moving and she is stationary.

Although they have opposite measurements, both of them are correct, just from different frames of reference.
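To put numbers on the train analogy (the 30 m/s figure is invented purely for illustration), here is the Galilean change of frame in a few lines of Python:

```python
# Galilean change of frame for the train example; 30 m/s is an invented figure.
v_train_ground = 30.0   # Bob's train as measured from the platform (m/s)

# In Alice's (platform) frame:
bob_seen_by_alice = v_train_ground                      # +30 m/s
alice_seen_by_alice = 0.0

# In Bob's (train) frame, subtract the train's velocity from everything:
bob_seen_by_bob = v_train_ground - v_train_ground       # 0 m/s
alice_seen_by_bob = 0.0 - v_train_ground                # -30 m/s

print(bob_seen_by_alice, bob_seen_by_bob)       # 30.0 0.0
print(alice_seen_by_alice, alice_seen_by_bob)   # 0.0 -30.0
```

Both sets of numbers describe the same physical situation, and neither frame is privileged; this is the point the authors carry over to cognitive frames of reference.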

Because, according to the theory, consciousness is a relativistic phenomenon, we find the same situation in the case of consciousness.

Now Alice and Bob are in different cognitive frames of reference. Bob will measure that he has conscious experience, but Alice just has brain activity with no sign of the actual conscious experience, while Alice will measure that she is the one that has consciousness and Bob has just neural activity with no clue of its conscious experience.

Just like in the case of velocity, although they have opposite measurements, both of them are correct, but from different cognitive frames of reference.

As a result, because of the relativistic point of view, there is no problem with the fact that we measure different properties from different frames of reference.

The fact that we cannot find the actual conscious experience while measuring brain activity is because we’re measuring from the wrong cognitive frame of reference.

According to the new theory, the brain doesn’t create our conscious experience, at least not through computations. The reason that we have conscious experience is because of the process of physical measurement.

In a nutshell, different physical measurements in different frames of reference manifest different physical properties in these frames of reference although these frames measure the same phenomenon.

For example, suppose that Bob measures Alice’s brain in the lab while she’s feeling happiness. Although they observe different properties, they actually measure the same phenomenon from different points of view. Because of their different kinds of measurements, different kinds of properties have been manifested in their cognitive frames of reference.

For Bob to observe brain activity in the lab, he needs to use measurements of his sensory organs like his eyes. This kind of sensory measurement manifests the substrate that causes brain activity – the neurons.

After more than 100 years of neuroscience we have very good evidence that the brain is responsible for the creation of our conscious abilities. Image is in the public domain.

Consequently, in his cognitive frame Alice has only neural activity that represents her consciousness, but no sign of her actual conscious experience itself. But, for Alice to measure her own neural activity as happiness, she uses a different kind of measurement. She doesn't use sensory organs; she measures her neural representations directly, by interaction between one part of her brain and other parts. She measures her neural representations according to their relations to other neural representations.

This is a completely different measurement than what our sensory system does and, as a result, this kind of direct measurement manifests a different kind of physical property. We call this property conscious experience.

As a result, from her cognitive frame of reference, Alice measures her neural activity as conscious experience.

Using the mathematical tools that describe relativistic phenomena in physics, the theory shows that if the dynamics of Bob’s neural activity could be changed to be like the dynamics of Alice’s neural activity, then both will be in the same cognitive frame of reference and would have the exact same conscious experience as the other.

Now the authors want to continue to examine the exact minimal measurements that any cognitive system needs in order to create consciousness.

The implications of such a theory are huge. It can be applied to determine which animal was the first animal in the evolutionary process to have consciousness, when a fetus or baby begins to be conscious, which patients with consciousness disorders are conscious, and which AI systems already today have a low degree (if any) of consciousness.

In recent decades, the scientific study of consciousness has significantly increased our understanding of this elusive phenomenon. Yet, despite critical development in our understanding of the functional side of consciousness, we still lack a fundamental theory regarding its phenomenal aspect.

There is an “explanatory gap” between our scientific knowledge of functional consciousness and its “subjective,” phenomenal aspects, referred to as the “hard problem” of consciousness. The phenomenal aspect of consciousness is the first-person answer to “what it’s like” question, and it has thus far proved recalcitrant to direct scientific investigation.

Naturalistic dualists argue that it is composed of a primitive, private, non-reductive element of reality that is independent from the functional and physical aspects of consciousness. Illusionists, on the other hand, argue that it is merely a cognitive illusion, and that all that exists are ultimately physical, non-phenomenal properties.

We contend that both the dualist and illusionist positions are flawed because they tacitly assume consciousness to be an absolute property that doesn’t depend on the observer.

We develop a conceptual and a mathematical argument for a relativistic theory of consciousness in which a system either has or doesn’t have phenomenal consciousness with respect to some observer.

Phenomenal consciousness is neither private nor delusional, just relativistic. In the frame of reference of the cognitive system, it will be observable (first-person perspective) and in another frame of reference it will not (third-person perspective). These two cognitive frames of reference are both correct, just as in the case of an observer that claims to be at rest while another will claim that the observer has constant velocity.

Given that consciousness is a relativistic phenomenon, neither observer position can be privileged, as they both describe the same underlying reality. Based on relativistic phenomena in physics we developed a mathematical formalization for consciousness which bridges the explanatory gap and dissolves the hard problem.

Given that the first-person cognitive frame of reference also offers legitimate observations on consciousness, we conclude by arguing that philosophers can usefully contribute to the science of consciousness by collaborating with neuroscientists to explore the neural basis of phenomenal structures.

See: https://neurosciencenews.com/physics-consciousness-21222/

Also, you might peruse the following:

Author: Elana Oberlander
Source: Bar-Ilan University
Contact: Elana Oberlander – Bar-Ilan University
Image: The image is in the public domain

Original Research: Open access.
“A Relativistic Theory of Consciousness” by Nir Lahav et al., Frontiers in Psychology

 

How does a flower acquire heliotropism? It has no brain or neurons, yet it responds to the light of the sun and can track the position of the sun.

Obviously, it has to be a function of naturally selected, evolved photosynthesis with heightened sensitivity to the direction from which the light rays originate.

This function, being of a dynamic nature, cannot be a hard-wired growth mechanism, such as the Fibonacci sequence in petal growth; it must have acquired an extra property that allows its photosynthetic mechanism to dynamically adjust for maximum exposure.

This requires a mechanism that can detect and process a "differential equation" that guides it toward the strongest input.

I call that natural intelligence, an unconscious homeostatic response mechanism that is the product of the plant's inter-cellular communication network.

This can also be seen more prominently in the multinucleate, single-celled slime mold.

It has no brain or neurons, yet it can solve an intricate maze and respond to changes in light, temperature, and time intervals.
Amazing maze. A slime mold finds the shortest route between two food sources.
Slime molds, brainless amoebalike organisms that live in forests and plant beds, aren't likely to win the Nobel Prize. But Japanese and Hungarian researchers report that they may yet possess a shred of something akin to intelligence. When placed in a maze between two sources of food, the slime seeks out and finds the shortest path through the maze. Researchers say this ability shows that even lifeforms as primitive as a single cell can perform computations.
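The computation the researchers describe, finding the shortest route between two food sources, is easy to state in code. The sketch below (with an invented maze layout) solves the same problem with breadth-first search; the slime mold, of course, is not literally running this algorithm, but its growth converges on the same optimum:

```python
from collections import deque

# A small maze as a grid: '#' walls, '.' open space, 'A' and 'B' food sources.
maze = ["A..#....",
        ".#.#.##.",
        ".#...#..",
        ".####.#.",
        "......#B"]

def find(ch):
    """Return the (row, col) of the first cell containing ch."""
    for r, row in enumerate(maze):
        if ch in row:
            return r, row.index(ch)

start, goal = find("A"), find("B")
queue, seen = deque([(start, 0)]), {start}
while queue:
    (r, c), dist = queue.popleft()
    if (r, c) == goal:
        print("shortest path length:", dist)
        break
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                and maze[nr][nc] != "#" and (nr, nc) not in seen):
            seen.add((nr, nc))
            queue.append(((nr, nc), dist + 1))
```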

This requires a mechanism that can detect and process a "differential equation" that guides it toward the strongest (shortest) input.

I call that natural intelligence, an unconscious homeostatic response mechanism that is the product of the organism's intra-cellular communication network.

Humans and other brained animals, with neural networks that allow for long-distance communication between neurons and brains, have acquired not only a subconscious homeostatic response mechanism but have also evolved a central processing center that relies on neural input of sensory "differential equations".

The interesting part is that all these Eukaryotic organisms share a common communication mechanism in the presence of a microtubule network that allows for Intra-cellular, Inter-cellular, and Extra-cellular communication and processing of electro-chemical data and kinetic stimulation.

Max Tegmark observes that all organisms able to respond dynamically to external stimulation have all the necessary evolved internal equipment.

I agree with this simple observation that rules out a Dualistic solution.

When Stuart Hameroff, an anesthesiologist and expert in microtubular behavior, contacted Roger Penrose, who was looking for a neural mechanism that might be able to perform quantum computation, Penrose was so impressed with Hameroff's presentation that he joined him in a collaboration to unlock the full potential of microtubules as part of the proposed theory of Orch OR.

It looks to me that nature has found a proper variable information processor in microtubules, which must almost certainly play a major role in conscious thought processes.
 
Plants have evolved through the same mechanisms affecting all life on earth.

Much like animals, bacteria, and fungi, the different conditions plants faced influenced their evolution.

Diversity within populations and between individuals occurs naturally through genetic variation, a phenomenon where, thanks to differences in the DNA sequences from one individual to another (i.e. different allele frequencies), different morphological traits are present within a species.

Genetic variation can be caused by mutations, sexual reproduction or genetic drift, but no matter the cause the outcome is always the same: slight differences between individuals.
Some differences caused by genetic variation can be beneficial, or harmful, to an individual's chance of survival. What is beneficial varies across different periods, environments, and in the presence or absence of predators and resources. The environmental factors affecting survival are known as selection pressures.

Given enough time, populations may change to the point where they are no longer recognizable as descendants of their ancestors. These morphological changes are the result of a gradual change, over many generations, in the genetic makeup of a population. This process is known as evolution. Natural selection is the mechanism by which evolution occurs.

Evolution: a gradual and cumulative change in the heritable genetic traits of a population of organisms over the course of many generations.

Natural selection: a process where individuals with traits that help them survive in their environment are more likely to survive and reproduce because of those traits. These beneficial traits become more and more common within the population with each passing generation.
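As an illustration of these two definitions (a toy model with invented numbers, not a claim about any real population), the following sketch tracks the frequency of a slightly beneficial allele across generations, combining selection with random sampling (genetic drift):

```python
import random

# Toy simulation of natural selection on a single gene with two alleles:
# "a" (baseline) and "A" (slightly beneficial). Population size, starting
# frequency and fitness advantage are invented for illustration.
random.seed(1)
POP_SIZE = 1000
FITNESS = {"A": 1.05, "a": 1.00}    # "A" carriers leave ~5% more offspring
freq_A = 0.05                       # "A" starts rare

for generation in range(1, 201):
    # Expected contribution of each allele, weighted by fitness, renormalized.
    w_A = freq_A * FITNESS["A"]
    w_a = (1 - freq_A) * FITNESS["a"]
    p = w_A / (w_A + w_a)
    # Genetic drift: the next generation is a finite random sample.
    count_A = sum(1 for _ in range(POP_SIZE) if random.random() < p)
    freq_A = count_A / POP_SIZE
    if generation % 50 == 0:
        print(f"generation {generation}: frequency of 'A' = {freq_A:.2f}")
```

Run repeatedly, the beneficial allele usually, but not always, spreads; with a small enough advantage or population, drift can still eliminate it.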
Tracking these changes in individuals and conditions through time helps us paint a picture of, and better understand, how the diversity of plant life across the terrestrial world came to be. This knowledge can help us to predict responses to climate change, droughts, and other challenges our society will face, and may even influence how we respond to these threats.

The origin and evolution of plants
Whilst the origins of life itself are hotly contested, it is mostly agreed that all life stems from a single common ancestor. This Last Universal Common Ancestor (LUCA) formed roughly 3.5 billion years ago [1]. LUCA gave rise to all living organisms we see today, plants, animals, fungi and bacteria alike.

Early life forms were simple unicellular organisms, reliant on diffusion to gather all the energy and nutrients needed from their surrounding environment. With time, life evolved complex processes to make its own energy. This early cellular evolution underpinned the processes of glycolysis, respiration, and photosynthesis [2].

Photosynthesis is thought to have originated in bacteria and allowed organisms to harness sunlight for energy [3]. Early plant ancestors, in the form of simple eukaryotic cells, are believed to have absorbed photosynthesizing cyanobacteria. These previously free-roaming cyanobacteria gave rise to chloroplasts, the photosynthetic organelles found in plants.

This symbiotic relationship may have occurred due to food scarcity. In an environment lacking prey, capitalizing on freely available sunshine for energy would be very beneficial. By absorbing rather than consuming photosynthesizing cyanobacteria, plant ancestors would have also gained this beneficial trait [3].

Plant evolution: the move to land
It's widely believed that life started underwater. Roughly 430 million years ago the first organisms migrated to terrestrial land and gave rise to today's land plants. Similar to the 'universal common ancestor', an ancestral streptophyte alga is thought to have been the only plant ancestor to survive the move onto land [4].

Modern-day plants have complex stress signaling pathways with many similarities to those of the ancestral streptophyte algae [4]. This indicates it was not an easy transition from water to land, and that strong selection pressures would have been at work.

The importance of plant evolution
In the eyes of evolution, you either adapt or face extinction.
Initial terrestrial environments were rife with available sunlight and space to grow, whilst lacking predators and competitors. However, the move to land was still a particularly stressful time for early land plants.

Land plants that couldn't adapt to their new environment were outcompeted for resources, and simply couldn't survive the harsh conditions. The threat of extinction was constant for early land plants. Some of the deadly threats and consequences of terrestrial life included:
  • Desiccation - Early land plants couldn’t transport water, so relied heavily on damp conditions.
  • UV radiation - Water may filter sunlight and reduce the amount of energy absorbed by chlorophyll pigments, but it also acts as a barrier against harmful UV radiation, a barrier absent on land.
  • Lack of support - Water offers aquatic plants support and buoyancy, but in terrestrial environments plants must devote energy and nutrients to rigid features like cell walls.
The harsh selection pressures of early terrestrial environments shaped land plants' evolutionary journey. Yet, since their emergence in the late Ordovician Period some 443 million years ago, land plants have reworked our planet to suit their own needs, paving the way for some species to blossom whilst ensuring the extinction of others.

The development of roots changed the earth's physical environment. As plants spread across land, previously bare riverbeds became flourishing plant habitats. Plant roots held the earth together and reduced erosion on river banks. This resulted in an increase of meandering rivers, rather than the wide braided channels common before the emergence of land plants.

Land plants drove early mass extinctions. As plant roots burrowed down into the earth, the rocks beneath were worn down, releasing minerals that found their way into earth's river systems and oceans. This sudden increase in nutrients caused the eutrophication and anoxia of past oceans, killing half of marine life in the Devonian Mass Extinction.

Plants and algae changed the earth's atmosphere. Plants and algae are autotrophs. They absorb carbon dioxide and energy from the sun, whilst releasing oxygen. Plants and algae dramatically increased the ratio of oxygen in the atmosphere during the Carboniferous Period, allowing a boom in animal evolution. With oxygen no longer a limiting factor, huge arthropods emerged.

Plants influence the global climate. Photosynthesis directly increases atmospheric oxygen concentrations, but plant roots also played a role by breaking up the earth and releasing minerals which react with carbon dioxide. These reactions draw down atmospheric carbon dioxide and lock it away in the earth and oceans. This drawdown of carbon dioxide led to global cooling periods and ice ages. The mass extinctions which occurred during ice ages opened up niches for surviving species to adapt and colonize.

Plant evolution timeline
Land plants' ability to flourish is largely attributed to adaptations gained through four key evolutionary steps (Fig. 1), which no doubt evolved under harsh selection pressures.

Angiosperms, which underwent each key stage of plant evolution, are now the most abundant of all land plant types.

Examples of plant evolution
Billions of years of plant evolution have allowed land plants to conquer every corner of the globe. So much so that land plants now make up 82% of global biomass.

Adaptations and examples of the benefits they bestow on plants:
  • Waxy cuticle: Prevents water loss, reducing the risk of desiccation.
  • Stomata and guard cells: Increase the gas exchange needed for respiration and photosynthesis. Guard cells control how open or closed the stomata are, reducing water lost by transpiration and the risk of desiccation.
  • Rhizoids: Provide structure and some uptake of water in bryophytes.
  • Vascular system: Transports nutrients, water and energy in the form of sugars (the products of photosynthesis) from where they are absorbed or produced to the tissues where they are needed. Vascular plants* are able to grow much taller, outcompeting other plants for sunshine, because of their vascular systems. Vascular plants also have increased rigidity and support.
  • True roots: Anchor and support plants, and aid in the absorption of water in vascular plants.
  • Protective flavonoids and pigments: Protect plants from UV radiation by filtering harmful UV light whilst still allowing some energy absorption for photosynthesis.
  • Nectar and variations in the color, scent and size of flowers: Sweet nectar encourages insects and other pollinators to travel deep into the flowers, where sticky pollen attaches to their skin or fur. As pollinators visit multiple flowers in a day, some of this pollen will rub off on future plants, fertilizing and spreading the initial plant's genes. Plants make their flowers more inviting for pollinators through the use of bright colors, appealing scents, and different sized and shaped petals. Angiosperms, or flowering plants, have coevolved with pollinators.
  • Seeds and pollen: Allow the genetic material, and eventually the fertilized embryos of plants, to travel far away from their parents and reduce competition for resources. Seeds and pollen both have protective coats which protect their contents from mechanical damage and desiccation. Seed plants are able to survive and spread through much harsher environments thanks to this adaptation.
  • Fruit surrounding seeds: Angiosperms' seeds are surrounded by fruits or ovaries. Sweet fleshy fruits invite animals to eat them, dispersing the seeds contained within through their feces. This increased dispersal reduces competition from parent plants in angiosperms. Other fruits are dry and hard; these confer additional protection to the seed, and some may have hooks that attach to the fur of animals, aiding dispersal. Not all seed plants benefit from the evolution of protective fruit, as gymnosperms lack ovaries.

Plant Evolution - Key takeaways
  • All plants originated from the same common ancestor as all other life on earth.
  • Photosynthesis evolved in bacteria. Early plant eukaryotes engulfed cyanobacteria, gaining the ability to photosynthesize themselves. These engulfed cyanobacteria gave rise to chloroplasts.
  • Plant evolution has shaped our natural world, by changing the composition of the atmosphere and the geology of their environments.
  • The move to land brought about many stressors and strong selection pressures. Natural selection ensured that plants which didn't adapt faced harsh competition and predation, and eventually went extinct.
  • There are many examples of plant evolution seen in the adaptive features land plants require to survive within their niches.

References
  1. Christie Wilcox, Evolution: Out of the Sea, Scientific American, 2012.
  2. Cooper GM, The Origin and Evolution of Cells, in The Cell: A Molecular Approach, 2nd edition, 2000.
  3. Robert E. Blankenship, Early Evolution of Photosynthesis, Plant Physiology, 2010.
  4. Jan De Vries et al., Plant evolution: landmarks on the path to terrestrial life, New Phytologist, 2018.
  5. Lumen Learning, Bryophytes, Boundless Biology.
  6. C. Jill Harrison et al., The origin and early evolution of vascular plant shoots and leaves, Philosophical Transactions of the Royal Society B, 2018.
  7. Ada Linkies et al., The Evolution of Seeds, New Phytologist, 2010.
  8. Hannah Ritchie and Max Roser, Biodiversity, Our World in Data, 2021.
See: https://www.studysmarter.us/explanations/biology/plant-biology/plant-evolution/

* Vascular plants are any one of a number of plants with specialized vascular tissue. The two types of vascular tissue, xylem and phloem, are responsible for moving water, minerals, and the products of photosynthesis throughout the plant. As opposed to a non-vascular plant, a vascular plant can grow much larger. The vascular tissue within provides a means of transporting water to great heights, allowing a vascular plant to grow upward to catch the sun.

Structure of Vascular Plants
Inside of a vascular plant, the structure is much different from that of a non-vascular plant. In non-vascular plants, there is little to no differentiation between the different cells. In vascular plants, the specialized vascular tissues and their corresponding tubules are arranged in unique patterns, depending on the division and species the vascular plant belongs to.

The xylem, made mostly of dead cells whose walls are reinforced with lignin (a structural polymer), specializes in transporting water and minerals from the roots to the leaves. A vascular plant does this by creating pressure on the water column on multiple fronts. In the roots, water is absorbed into the tissues and flows into the xylem, creating an upward push. At the leaves, water is used and evaporates out of the stomata; this transpiration pulls upward on the column of water in the xylem. Through the actions of adhesion and cohesion, the water moves upward through the xylem like a drink through a straw. This process can be seen below.

[Image: water movement through the xylem]
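As a rough back-of-the-envelope illustration of why height matters here (my own sketch, not from the source article, assuming pure hydrostatics and ignoring friction and osmotic effects), the pressure difference needed just to hold up a static column of water grows linearly with its height:

```python
# Minimal sketch (not from the article): the minimum pressure needed to support
# a static water column of height h, ignoring friction and osmotic effects.
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydrostatic_pressure(height_m: float) -> float:
    """Pressure (in pascals) required to hold up a water column of the given height."""
    return RHO_WATER * G * height_m

for h in (10, 50, 100):  # plausible plant and tree heights in meters
    print(f"{h:>3} m column: ~{hydrostatic_pressure(h) / 1e6:.2f} MPa")
```

Real xylem transport must also overcome frictional resistance in the vessels, so the tensions generated by transpiration in tall trees are larger still.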

In the leaves, photosynthesis takes place. A vascular plant, like the lower plants and algae, uses the same process to extract energy from the sun and store it in the bonds of glucose. This sugar is modified into other forms and must be transported to parts of the plant which cannot photosynthesize, such as the stem and roots. The phloem is specialized for this purpose. Unlike the xylem, the phloem is made of partially living cells, which help facilitate the transport of sugars via transport proteins found in the cell membranes. The phloem is also connected to the xylem and can add water to help dilute and move the sugar. Commercially harvested, this sugary fluid is known as sap, the source of products such as maple syrup.

Vascular Plant Lifecycle
Vascular plants exhibit, like all plants, an alternation of generations. This means that there are two forms of the plant, the sporophyte and the gametophyte. The sporophyte, a diploid organism, goes through meiosis to produce the haploid spore. The spore grows into a new organism, the gametophyte. The gametophyte is responsible for producing gametes, capable of fusing together during sexual reproduction.

These gametes, the sperm and egg, fuse together to form a zygote, which begins the new diploid sporophyte generation. In some plants, this zygote will develop directly into a new organism. In others, the embryo develops within a seed, which is dispersed and must have a period of dormancy or some activation signal to begin growing. A vascular plant which is closer in relation to the mosses and non-vascular plants is more likely to have independent alternating generations. Seed plants tend to have a highly reduced gametophyte, which is typically entirely dependent on and lives within the sporophyte. The distinction between the two generations is then hardly noticeable, apart from the amount of DNA their cells carry (haploid vs. diploid) and the cellular division processes they use.

Classification of Vascular Plants
Vascular plants are embryophytes, a large clade (related group) comprising both non-vascular and vascular plants. The embryophytes are further divided into the bryophytes (mosses, liverworts, and hornworts, the non-vascular plants) and the Tracheophyta. Just as the trachea in humans is a passageway for air, the term tracheophyte refers to the conducting vascular tissue of vascular plants.

The tracheophytes are further divided into divisions, distinguished mostly by how their spores and gametophytes function. In ferns and club-mosses, the gametophyte is a free-living generation. In gymnosperms (conifers) and angiosperms (flowering plants), the gametophyte is dependent on the sporophyte; the fertilized egg it produces develops into an embryo within a seed, forming the next sporophyte generation. While every vascular plant shows an alternation of generations with a dominant sporophyte, they differ in how they distribute spores and seeds.

Examples of a Vascular Plant
Annual Vs. Perennial

Some plants, the annuals, complete their lifecycle within one year. If you were to buy an annual at the store, plant it in your garden, and collect all the seeds it dropped, the plant would not come back the next year. Annuals are typically herbaceous, meaning their stems and roots are not highly structured and rigid. While the plants may stand tall, this is mostly due to the effects of turgor pressure** on the cell walls of the plant.

A perennial plant is slightly different. While it may also be herbaceous, the plant will return for multiple years, even if you collect all the seeds. During the winter, the vascular plant is able to store sugar in its roots and avoid freezing entirely. In the spring, the plant can resume growing and try once more to produce offspring. While these reproductive strategies reflect millions of years of evolution, the annual/perennial distinction is not what separates vascular plants from non-vascular ones.

Monocot Vs. Dicot
Within the angiosperms, or flowering plants, there is a major division. While monocots and dicots are both vascular plants, they differ in the way their seeds form and in the way they grow. In a monocot, growth occurs at or below the soil surface, as individual leaves start near the roots and grow upward. Corn is a monocot, as are many grasses including wheat and barley. In other seed plants, such as beans and peas, the seed has two cotyledon leaves, making them dicots. The vascular tissue of the monocot can be seen on the right in the image below.

Dicot stem vs Monocot stem

In a dicot, the growth point is above the soil, and this causes the plant to branch out in several directions. As such, the vascular tissue in a dicot is branched, whereas in a monocot it runs parallel. Notice how the vascular tissue in these plants forms organized bundles; this pattern creates easy branching opportunities. These differences in vascular tissue reflect the two ways of forming leaves to collect light seen in the two types of vascular plant.

See: https://biologydictionary.net/vascular-plant/

Vascular Plant Adaptations
Emergents

Anoxia – when upland plants are flooded, aerobic metabolism shuts down; the end products of anaerobic metabolism are toxic; mitochondria and other organelles are destroyed within 24 hours; and the availability of reduced elements (e.g. Fe, Mn, S) increases, accumulating to potentially toxic levels.
Adaptations:

1. Structural (morphological) adaptations – changes stimulated by hormones, often ethylene
  1. Aerenchyma – air spaces in roots and stems allowing oxygen to diffuse to the roots from upper portions of the plant; up to 60% of root volume is empty space, compared to 2-7% for terrestrial plants; effectiveness depends on root porosity
  2. Special organs or responses
    1. Adventitious roots (prop roots, buttress roots) – develop above the anaerobic zone; occur in both flood-tolerant and flood-intolerant species; examples: Salix, Alnus, tomato
    2. Stem elongation – rapid stem growth; examples: floating heart, rice, bald cypress
    3. Lenticels – tiny pores on tree prop roots above the anoxic zone; pump oxygen to submerged roots, keeping concentrations as high as 15-18%; example: red mangrove
    4. Pneumatophores – "straws" 20-30 cm high and 1 cm wide coming out of the main roots, as on black mangroves; the "knees" of bald cypress
  3. Pressurized gas flow – "thermo-osmotic" gas flow: air enters the leaves and is forced through the aerenchyma into the roots under slight pressure. Because of a temperature gradient between the exterior air and the interior gas spaces in plant tissue, the stem is heated and the internal gas molecules expand; they can't move out through the lenticels, but cooler external air can still diffuse into the stem.
2. Physiological adaptations
  1. Anaerobic respiration – hydrophytes have adaptations to minimize by-product toxicity
  2. Malate production – instead of alcohol production in anaerobic fermentation; allows fermentation to continue at a steady rate
Side Effects of Root Aeration
Oxygen leaks into the rhizosphere and oxidizes soluble reduced metals, causing them to precipitate and become detoxified; the resulting precipitates color the soil, which aids in wetland delineation.

Water uptake – flooded roots cause an increase in abscisic acid in leaves, which closes stomata, reducing evapotranspiration and photosynthesis.

Nutrient absorption – flood-intolerant plants can't control nutrient uptake; with increased oxygen around the roots, nutrient uptake is maintained in flood-tolerant plants.

N – converts to ammonium under anoxic conditions; oxygen exuded by roots into the rhizosphere changes it back to the preferred nitrate, so flood-tolerant plants maintain N uptake
P – availability increases; flood-tolerant species show increased uptake
Fe & Mn – toxic and more available in anoxic soils; wetland plants oxidize/immobilize the ions, concentrate the elements in intracellular vacuoles, and have a higher tolerance
S – toxic as sulfide in anoxic soils; wetland plants oxidize sulfide to sulfate, accumulate it in vacuoles, and convert it to gases
Salt Stress – buildup of salts due to ocean intrusion, historic salt deposits, or high rates of evaporation (playas). Salts create an osmotic gradient that can passively draw water out of plant cells, very much like water stress.
Salt Adaptations – to maintain cell turgor, organic compounds in the cells substitute for inorganic salts
Exclusion – wetland plants show selective exclusion, providing a stronger barrier to sodium than to potassium
Secretory organs – wetland plants that don't exclude salts often excrete them through glands in the leaves (salt marsh grasses), excreting more sodium than potassium

Reproduction: Sexual reproduction is rare; more commonly used methods are:
  • Fragmentation: pieces break off and float away to another location, where they become embedded in the substrate.
  • Rhizomes: underground stems send up shoots to start a new plant.
  • Stolons: the same as rhizomes, except these are above-ground stems which form shoots and start a new plant.

Seed germination: Plants have different strategies for seeds:
  • Timing seed production to occur during the non-flood season, either by delayed or accelerated flowering.
  • Production of buoyant seeds that float to high, unflooded ground.
  • Seeds that germinate while still attached to the plant.

Photosynthesis and gas exchange: As the water gets deeper, less light is available and the range of usable wavelengths narrows; the red and blue wavelengths are lost, and the green (not so good for photosynthesis) remains. Adaptations include:
  • Wetland plants often use the C4 biochemical pathway of photosynthesis instead of C3.
  • C4 provides a possible pathway for recycling CO2 from cell respiration.
  • Plants using C4 have low photorespiration rates and the ability to use even the most intense sunlight efficiently.
  • C4 plants are more efficient than C3 plants in their rate of carbon fixation and the amount of water used per unit of carbon fixed.

Submerged plants:
  Reproduction:
  • Vallisneria (a submerged grass) produces a coiled peduncle (female), which straightens out so the stigma can reach above the water surface. The spathe (male) also straightens out so its petals float on the surface; its three leaves and anthers form a little sailboat. The spathe floats along until, hopefully, it bumps into a stigma.
  • Ceratophyllum (coontail or hornwort) uses a strategy of hydrophily: the male releases pollen into the water, where it floats until it sinks, hopefully landing on a female plant.
  • A Chinese lotus seed can lie dormant for over 1,000 years.
  Photosynthesis:
  • Algal blooms can block sunlight and nutrients from reaching submerged plants.
  Other challenges that aquatic plants must adapt to include flooding, desiccation (drying out), nutrient uptake, and vegetative reproduction.
See: https://cals.arizona.edu/azaqua/aquaplants/classnotes/VascularPlantAdaptations.pdf
 
The origin and evolution of plants
Whilst the origins of life itself are hotly contested, it is mostly agreed that all life stems from a single common ancestor. This Last Universal Common Ancestor (LUCA) formed roughly 3.5 billion years ago.1 LUCA gave rise to all living organisms we see today, plants, animals, fungi and bacteria alike.
Plant evolution: the move to land
It's widely believed that life started underwater. Roughly 430 million years ago the first organisms migrated to terrestrial land and gave rise to today's land plants. Similarly to the 'universal common ancestor', an ancestral streptophyte alga is thought to have been the only plant ancestor to survive the move onto land.4
Thanks for that excellent summation.

I do have an alternate proposition and would like to have your perspective on it.
-
We can change the statement to read "all life stems from the same chemicals prevalent on earth"; life may then have had several places of origin and still be fundamentally related by virtue of chemistry.

The reason for this interpretation is the fact that the earliest life seems to consist of extremophiles that can thrive only in very specific and harsh environments:

Living at the Extremes: Extremophiles and the Limits of Life in a Planetary Context
Prokaryotic life has dominated most of the evolutionary history of our planet, evolving to occupy virtually all available environmental niches.
Extremophiles, especially those thriving under multiple extremes, represent a key area of research for multiple disciplines, spanning from the study of adaptations to harsh conditions, to the biogeochemical cycling of elements.
Extremophile research also has implications for origin of life studies and the search for life on other planetary and celestial bodies. In this article, we will review the current state of knowledge for the biospace in which life operates on Earth and will discuss it in a planetary context, highlighting knowledge gaps and areas of opportunity.

The bright colors of Grand Prismatic Spring, Yellowstone National Park, are produced by thermophiles, a type of extremophile.


Question: How can this extremophile have originated in the ocean if all environments except this one are deadly to the organism?
 
It is suspected that extremophile life originated in the deep oceans around the hydrothermal vents of the mid-ocean ridges, as well as in superheated hot springs above magma chambers. About 4 billion years ago there lived a microbe on Earth called LUCA — the Last Universal Common Ancestor. LUCA is thus thought to be the direct ancestor of both the Archaea found around the mid-ocean ridge vents and the Archaea found in Yellowstone's hot springs.

Life began very early in Earth’s history, perhaps before 3.8 billion years ago. By the close of the Archaean Eon, some 2.5 billion years ago, microorganisms had evolved to remarkable levels of metabolic sophistication. Thermophiles in Yellowstone’s hot springs are living connections to the primal Earth of billions of years ago. They are also studied by scientists searching for life on other planets, where extreme environmental conditions may support similar lifeforms.

Studies suggest that the common ancestor of all modern organisms may have lived in a high-temperature environment like a Yellowstone hot spring. Descendants of these early organisms currently inhabit Yellowstone's hot springs, where they live by chemosynthesis: combining inorganic chemicals to liberate energy, which is then used for growth. Such energy sources likely fueled Earth's earliest life forms, and remain a mainstay for organisms living in hydrothermal environments where sunlight is unavailable.

Water is an excellent solvent for organic molecules — it provides a context wherein increasingly complex chemical reactions can occur — and be sustained. Based on what we have seen of life, it appears that liquid water is the sine qua non of life. Based on this understanding, the official mantra of the current Mars program at NASA is to “follow the water.” Admittedly organic carbon (in contrast to carbon dioxide and carbon monoxide) has yet to be detected on Mars — but we’re looking for it!

If water is indeed essential for life, a variety of physical limits to life seem apparent. But “seems” is the operative word. What may seem to be true in a theoretical or experiential context may not be true once sufficient observations have been made.

Water is a liquid and remains so within certain physical criteria such as temperature and pressure. Too much and too little of either can bring life’s processes to a halt. As water becomes scarce, a struggle for survival ensues. For life to continue, temperature has to be within the range wherein water can exist in liquid form. We have yet to find any form of life that can directly utilize solid (i.e. frozen) water.

Below the Earth’s surface, we find a layer of molten material known as magma. Heat from this searing hot material can affect layers of the planet above it, dramatically warming subsurface water. At some locations deep below the seas and oceans of Earth, which often appear near the seams of the tectonic plates and their subduction zones, this water can escape, venting out into the surrounding environment to form what we call hydrothermal vents.

Image of a hydrothermal vent. 'Hydro' is for water, 'thermal' is for temperature, and 'vent' is for the release of matter. IMAGE CREDIT: NOAA.

Hydrothermal vents bear some similarities to terrestrial hot springs, where geothermally heated water seeps up from deep below the ground. However, hydrothermal vents are found underwater and in the dark. Sunlight can only travel so far through water (depending, of course, on how clear the water is). In crystal-clear water, light might reach around 1,000 meters at most. This is important for life because most life on Earth is dependent upon energy from the Sun. Photosynthetic organisms (like plants) utilize sunlight to produce molecules (like sugars and carbohydrates) that are the basis of food chains for the surface biosphere. It’s a different story for life around hydrothermal vents.

The heated waters spewing out of hydrothermal vents are rich in chemicals that chemosynthetic organisms can use as a source of energy. These chemicals would be toxic to human beings, but chemosynthetic microorganisms at the vents are able to convert them into energy. This process of chemosynthesis is what ultimately powers entire ecosystems around hydrothermal vents.

Changing Views of Habitability
The first hydrothermal vent was discovered in 1977, when a team of researchers traced spikes in water temperature around a mid-ocean ridge known as the Galapagos Rift. They sent a camera underwater and captured intriguing photos on 35 millimeter film. The next day, the robotic submersible Alvin was deployed and provided phenomenal views of a never-before-seen ecosystem. Hydrothermal vents erupted with plumes of dark ‘smoke’, and they were surrounded by diverse communities of organisms large and small.

With the discovery of hydrothermal vents on the ocean floor, Alvin helped change theories of habitability in the Solar System. Check out the Astrobiology Hero poster of Alvin at: https://astrobiology.nasa.gov/resources/heroes/ IMAGE CREDIT: NASA ASTROBIOLOGY / AARON GRONSTAL.

Up until 1977, scientists believed that all life on Earth was in some way dependent upon sunlight for energy. Organisms like plants utilize sunlight for energy, which is then transferred to other organisms like humans when the plants are eaten. The discovery of hydrothermal vents showed that life could thrive independent of the Sun. Suddenly, scientists had an Earthly example of how life might survive on ocean worlds in the outer Solar System, such as Jupiter’s moon Europa, or Saturn’s moon Enceladus. These moons are thought to harbor oceans of dark, liquid water beneath their icy surfaces. If hydrothermal vents are present, those oceans could be habitable for life as we know it.

Since the 1977 discovery, numerous hydrothermal vents (along with other unique seafloor environments) have been documented. From Hawaii to Japan to the Mediterranean Sea, areas with volcanic activity have been ‘hot spots’ for the identification of hydrothermal vents.

The discovery of hydrothermal vents features in Issue 4 of Astrobiology: The Story of Our Search for Life in the Universe, available at: https://astrobiology.nasa.gov/resources/graphic-histories/ IMAGE CREDIT: NASA ASTROBIOLOGY.

Extreme Vents
Hydrothermal vents are considered ‘extreme’ habitats for life for a number of reasons. Because some of the vent locations are deep below water, the pressure can be extremely high. When human divers swim to great depths, they must be extremely careful because high pressures can have life-threatening consequences. The organisms at hydrothermal depths must be adapted to withstand the physical stress of high pressures. Organisms that are able to do this are known as barophiles. Some barophilic microorganisms at vents have a waxy layer that helps protect them from crushing pressure.

Barophiles can live in highly pressurized places such as the ocean floor near hot vents. Whereas most living creatures cannot survive the extreme forces that exist below the Earth's surface and on the sea floor, these microbes thrive under high pressure. IMAGE CREDIT: NASA ASTROBIOLOGY (Extremophile Trading Cards).

The material spewing out of hydrothermal vents can also be extremely hot, creating niches for thermophilic microorganisms that can withstand the heat. There is a gradient between the hot fluid of the vents and the cold water that surrounds them, and along this temperature gradient is where you find heat-loving microorganisms that serve as the basis of the food chain in hydrothermal vent environments.

Hydrothermal Origins
Some astrobiologists believe that hydrothermal vents in Earth’s early oceans could have been important in the origins and evolution of life on our planet. The unique environment of hydrothermal vents allows for some natural chemical reactions that can produce molecules that may have played a role in the formation of the first living cells on Earth. For instance, studies have identified minerals known as metal hydrides around alkaline hydrothermal vents. These minerals can act as catalysts for reactions that form small organic compounds.


Thermophiles have developed special proteins that allow them to tolerate a broad range of temperatures – some even require temperatures around 140°F to exist at all. IMAGE CREDIT: NASA ASTROBIOLOGY (Extremophile Trading Cards).

Some interesting places around the world where astrobiologists can find hydrothermal vents include:

The Mid Cayman Rise
The Mid-Cayman Spreading Center is a slow-spreading mid-ocean ridge found in the Caribbean Sea. Hydrothermal vents are found at locations along the roughly 110-kilometer length of the ridge, where two of Earth's tectonic plates are moving apart.

The Hellenic Volcanic Arc
The Hellenic Arc is found in the Aegean Sea, an embayment of the Mediterranean Sea between Europe and Asia. The Hellenic Arc is a subduction zone where the African Plate is diving below the Aegean Sea Plate.

The Gakkel Ridge
The Gakkel Ridge is another slow-spreading mid-ocean ridge which is formed between the North American Plate and the Eurasian Plate. This ridge sits in the Arctic Ocean, one of Earth’s least explored oceans. One location on the Gakkel Ridge is the Aurora vent field found north of Greenland. The Aurora Field is closer to Earth’s north pole than any other hydrothermal vent field documented so far.

The Juan De Fuca Ridge
The Juan De Fuca Ridge is another example of a mid-ocean spreading center, where two of Earth’s tectonic plates are spreading apart. The roughly 480 km ridge is found in the Pacific Ocean off the coast of the Pacific Northwest region of North America.

[Figure: geometry of (a) a simplified ridge-transform-ridge model and (b) the Siqueiros transform model, with ITSCs labeled A-D after Fornari et al. (1989), and (c) the model plate boundary overlain on the Global Multi-Resolution Topography map (Ryan et al., 2009).]

See: https://astrobiology.nasa.gov/news/life-in-the-extreme-hydrothermal-vents/

See: https://www.whoi.edu/feature/history-hydrothermal-vents/explore/bio-micro.html

See: https://space.nss.org/life-in-extreme-environments/

See: https://serc.carleton.edu/microbelife/extreme/extremeheat/index.html

Terrestrial hot springs are found all over the world, and they are inhabited by extremophile organisms that use unique mechanisms to stay alive. These springs are where geothermally heated water from underground rises to the planet’s surface. They can act as a source of water, energy, and nutrients for living cells.

The water boiling up from below can carry a wide variety of minerals and reduced chemical species that certain microbes can use as a source of energy. The composition of the water depends on many different factors, such as its source and the types of rock and soil it travels through. The varying conditions mean that different hot springs around the world can support unique populations of microbes.

The temperature and composition of the water also has different gradients – for instance, the water is hotter closer to the source of the spring. This gradient means that diverse organisms can evolve over time to inhabit their own special place in the environment on microscopic scales.

It’s not just the heat that challenges life’s survival in hot springs. Some springs have water that is very acidic, others have water that is extremely alkaline. Gases at concentrations poisonous to humans, and other animals, bubble up with the water. The rock and liquid can contain heavy metals like arsenic and lead, along with lots of other minerals and particulates. Some pools can be extremely salty, and can be rich in things like potassium and sulphate. High temperatures also mean less dissolved oxygen, so many hydrothermal pools are low in oxygen content.

Astrobiologists have studied hot springs in Yellowstone and around the world in places like New Zealand, Russia, Japan, Chile, and Iceland. Some of the hottest springs studied so far can be found on Russia's Kamchatka peninsula, in the Uzon Caldera. This region is thought to be the site of a volcano that collapsed 200,000 years ago. Microorganisms have been found here living in pools of up to 206 degrees Fahrenheit (97 degrees Celsius).

Terrestrial hot springs on Earth are inhabited by organisms known as thermophiles, meaning ‘heat loving.’ Most of these thermophilic organisms are single celled archaea and bacteria, and are sometimes classified according to the amount of heat they can survive: thermophile, extreme thermophile, and hyperthermophile.

Terrestrial hot springs were the first place in which astrobiologists spotted thermophiles, but they aren’t the only super-heated environments where such organisms live. Thermophiles can also be found in places like deep sea hydrothermal vents, and even in decaying organic matter in peat bogs or home compost piles.

The multitude of harsh conditions present in terrestrial hot springs also means that it isn’t only thermophiles that live in these environments. There are acidophiles, alkaliphiles, and many other types of organisms. Sometimes scientists find extremophiles that use multiple adaptations to survive (polyextremophiles).

The chemical precipitates that are found in hot spring waters can entomb and preserve cells, making hot springs (or even ancient hot spring sites) a good place to look for biosignatures left behind by past life.

Scientists now think that during the first three billion years of Earth’s history, microorganisms transformed the original, anoxic (without oxygen) atmosphere into something that could support complex forms of life. Microbes harnessed energy stored in chemicals such as iron and hydrogen sulfide in a process called chemosynthesis. And they did this in environments that are lethal to humans—in boiling acidic or alkaline hot springs like the hot springs found in Yellowstone National Park.

Microorganisms were the first lifeforms capable of photosynthesis: using sunlight to convert carbon dioxide and water into food, releasing oxygen as a byproduct. These lifeforms, called cyanobacteria, began to create an atmosphere that would eventually support human life. Cyanobacteria are found in some of the colorful mats and streamers of Yellowstone's hot springs.

In the last few decades, scientists discovered that cyanobacteria and other microbes comprise the majority of species in the world—yet less than one percent of them have been studied.

Microbial research has also led to a revised tree of life, far different from the one taught for decades (see next page). The “old” tree’s branches—animal, plant, fungi—are now combined in one branch of the tree. The other two branches consist solely of microorganisms, including an entire branch of microorganisms not known until the 1970s—Archaea.

Yellowstone’s thermophilic communities include species in all three branches. These microbes and their environments provide a living laboratory studied by a variety of scientists. Their research findings connect Yellowstone to other ancient lifeforms on Earth, and to the possibilities of life elsewhere in our solar system (see last section).

Yellowstone’s hot springs contain species from these groups on the Tree of Life
The tree shows the divergence of various groups of organisms from the beginning of life on Earth, about four billion years ago. It was first proposed by Carl Woese in the 1970s. Dr. Woese also proposed the new center branch, Archaea, which includes many microorganisms formerly considered bacteria. The red line links the earliest organisms that evolved from a common ancestor, today named LUCA.

The earliest organisms to evolve on Earth were likely microorganisms whose descendants are found today in extreme high–temperature, and in some cases acidic, environments, such as those in Yellowstone. Their history exhibits principles of ecology and the connections between geology, geochemistry, and biology.

Stromatolites are sediments laminated by microbial activity. Found in ancient rocks, stromatolites are perhaps the most abundant and widespread evidence of early microbial ecosystems.

Stromatolites also form in Yellowstone's hydrothermal features as thermophiles are entombed within travertine and sinter deposits. Thermophile communities leave behind evidence of their shapes as biological "signatures." Scientists compare the signatures of these modern and recent stromatolites to those of ancient deposits elsewhere (e.g., 350-million-year-old Australian sinter deposits) to better understand the environment and evolution on early Earth. Mammoth Hot Springs is a particularly good location for these studies because of rapid deposition rates and abundant thermophile communities.

See: https://space.nss.org/life-in-extreme-environments/

Looking for LUCA, the Last Universal Common Ancestor

Around 4 billion years ago there lived a microbe called LUCA — the Last Universal Common Ancestor. There is evidence that it could have lived a somewhat 'alien' lifestyle, hidden away deep underground in iron-sulfur rich hydrothermal vents. Anaerobic and autotrophic, it didn't breathe air and made its own food from the dark, metal-rich environment around it. Its metabolism depended upon hydrogen, carbon dioxide and nitrogen, turning them into ammonia and organic compounds. Most remarkable of all, this little microbe was the beginning of a long lineage that encapsulates all life on Earth.

If we trace the tree of life far enough back in time, we come to find that we’re all related to LUCA. If the war cry for our exploration of Mars is ‘follow the water’, then in the search for LUCA it’s ‘follow the genes’. The study of the genetic tree of life, which reveals the genetic relationships and evolutionary history of organisms, is called phylogenetics. Over the last 20 years our technological ability to fully sequence genomes and build up vast genetic libraries has enabled phylogenetics to truly come of age and has taught us some profound lessons about life’s early history.

For a long time it was thought that the tree of life formed three main branches, or domains, with LUCA at the base — eukarya, bacteria and archaea. The latter two — the prokaryotes — share similarities in being unicellular and lacking a nucleus, and are differentiated from one another by subtle chemical and metabolic differences. Eukarya, on the other hand, are the complex, often multicellular life forms comprised of membrane-encased cells, each incorporating a nucleus containing the genetic code as well as the mitochondria 'organelles' powering the cell's metabolism. The eukarya are considered so radically different from the other two branches as to necessarily occupy their own domain.

William Martin, a professor of evolutionary biology at the Heinrich Heine University in Düsseldorf, is hunting for LUCA. IMAGE CREDIT: HEINRICH HEINE UNIVERSITY.

However, a new picture has emerged that places eukarya as an offshoot of bacteria and archaea. This “two-domain tree” was first hypothesized by evolutionary biologist Jim Lake at UCLA in 1984, but only got a foothold in the last decade, in particular due to the work of evolutionary molecular biologist Martin Embley and his lab at the University of Newcastle, UK, as well as evolutionary biologist William Martin at the Heinrich Heine University in Düsseldorf, Germany.
Bill Martin and six of his Düsseldorf colleagues (Madeline Weiss, Filipa Sousa, Natalia Mrnjavac, Sinje Neukirchen, Mayo Roettger and Shijulal Nelson-Sathi) published a 2016 paper in the journal Nature Microbiology describing this new perspective on LUCA and the two-domain tree with phylogenetics.

Ancient genes
Previous studies of LUCA looked for common, universal genes that are found in all genomes, based on the assumption that if all life has these genes, then these genes must have come from LUCA. This approach has identified about 30 genes that belonged to LUCA, but they’re not enough to tell us how or where it lived. Another tactic involves searching for genes that are present in at least one member of each of the two prokaryote domains, archaea and bacteria. This method has identified 11,000 common genes that could potentially have belonged to LUCA, but it seems far-fetched that they all did: with so many genes LUCA would have been able to do more than any modern cell can.

Bill Martin and his team realized that a phenomenon known as lateral gene transfer (LGT) was muddying the waters by being responsible for the presence of most of these 11,000 genes. LGT involves the transfer of genes between species and even across domains via a variety of processes, such as the spreading of viruses or the homologous recombination that can take place when a cell is placed under some kind of stress.

A schematic of the two-domain tree, with eukaryotes evolving from endosymbiosis between members of the two original trunks of the tree, archaea and bacteria. IMAGE CREDIT: WEISS ET AL/NATURE MICROBIOLOGY.

A growing bacteria or archaea can take in genes from the environment around them by ‘recombining’ new genes into their DNA strand. Often this newly-adopted DNA is closely related to the DNA already there, but sometimes the new DNA can originate from a more distant relation. Over the course of 4 billion years, genes can move around quite a bit, overwriting much of LUCA’s original genetic signal. Genes found in both archaea and bacteria could have been shared through LGT and hence would not necessarily have originated in LUCA.

Knowing this, Martin’s team searched for ‘ancient’ genes that have exceptionally long lineages but do not seem to have been shared around by LGT, on the assumption that these ancient genes should therefore come from LUCA. They laid out conditions for a gene to be considered as originating in LUCA. To make the cut, the ancient gene could not have been moved around by LGT and it had to be present in at least two groups of archaea and two groups of bacteria.
“While we were going through the data, we had goosebumps because it was all pointing in one very specific direction,” says Martin.

Once they had finished their analysis, Bill Martin’s team was left with just 355 genes from the original 11,000, and they argue that these 355 definitely belonged to LUCA and can tell us something about how LUCA lived.
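The filtering criteria described above can be sketched in a few lines of code. This is purely illustrative and not code from the Weiss et al. study: the real analysis works on phylogenetic trees, and the data structure and example entries below are invented for the demonstration (reverse gyrase is one gene the article does attribute to LUCA).

```python
# Illustrative sketch of the LUCA gene-filtering criteria described above.
# Hypothetical data structure; the actual study inferred these properties from trees.
from dataclasses import dataclass

@dataclass
class GeneFamily:
    name: str
    shows_lgt: bool        # tree topology suggests lateral gene transfer
    archaeal_groups: int   # number of archaeal groups containing the family
    bacterial_groups: int  # number of bacterial groups containing the family

def likely_from_luca(fam: GeneFamily) -> bool:
    """Keep a family only if it shows no sign of LGT and occurs in at least
    two archaeal and two bacterial groups."""
    return (not fam.shows_lgt
            and fam.archaeal_groups >= 2
            and fam.bacterial_groups >= 2)

candidates = [
    GeneFamily("reverse gyrase", shows_lgt=False, archaeal_groups=3, bacterial_groups=2),
    GeneFamily("widely shared gene", shows_lgt=True, archaeal_groups=10, bacterial_groups=12),
]
print([f.name for f in candidates if likely_from_luca(f)])  # ['reverse gyrase']
```

In the actual study the decision is made from tree topology rather than precomputed flags, which is why applying the criteria in practice is far harder than this sketch suggests.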

Such a small number of genes, of course, would not support life as we know it, and critics immediately latched onto this apparent gene shortage, pointing out that essential components capable of nucleotide and amino acid biosynthesis, for example, were missing. “We didn’t even have a complete ribosome,” admits Martin.

However, their methodology required that they omit all genes that have undergone LGT, so had a ribosomal protein undergone LGT, it wouldn't be included in the list of LUCA's genes. They also speculated that LUCA could have gotten by using molecules in the environment to fill the functions of missing genes, for example molecules that can synthesize amino acids. After all, says Martin, biochemistry at this early stage in life's evolution was still primitive, and all the theories about the origin of life and the first cells incorporate chemical synthesis from the environment.

What those 355 genes do tell us is that LUCA lived in hydrothermal vents. The Düsseldorf team's analysis indicates that LUCA used molecular hydrogen as an energy source. Serpentinization within hydrothermal vents can produce copious amounts of molecular hydrogen. Plus, LUCA contained a gene for making an enzyme called 'reverse gyrase', which is found today in extremophiles existing in high-temperature environments, including hydrothermal vents.

Martin Embley, who specializes in the study of eukaryotic evolution, says the realization of the two-domain tree over the past decade, including William Martin's work to advance the theory, has been a "breakthrough" and has far-reaching implications for how we view the evolution of early life. "The two-domain tree of life, where the basal split is between the archaea and the bacteria, is now the best supported hypothesis," he says.

It is widely accepted that the first bacteria were likely clostridia (anaerobes intolerant of oxygen) and the first archaea were likely methanogens, because today's modern versions share many of the same properties as LUCA. These properties include a similar core physiology and a dependence on hydrogen, carbon dioxide, nitrogen and transition metals (the metals provide catalysis by hybridizing their unfilled electron shells with carbon and nitrogen). Yet, a major question remains: What were the first eukaryotes like, and where do they fit into the tree of life?

Phylogenetics suggests that eukaryotes evolved through the process of endosymbiosis, wherein an archaeal host merged with a symbiont, in this case a bacteria belonging to the alphaproteobacteria group. In the particular symbiosis that spawned the development of eukarya, the bacteria somehow came to thrive within their archaeal host rather than be destroyed. Hence, bacteria came to not only exist within archaea but empowered their hosts to grow bigger and contain increasingly large amounts of DNA. After aeons of evolution, the symbiont bacteria evolved into what we know today as mitochondria, which are little battery-like organelles that provide energy for the vastly more complex eukaryotic cells. Consequently, eukaryotes are not one of the main branches of the tree-of-life, but merely a large offshoot.

A paper that appeared recently in Nature, written by a team led by Thijs Ettema at Uppsala University in Sweden, has shed more light on the evolution of eukaryotes. In hydrothermal vents located in the North Atlantic Ocean — centered between Greenland, Iceland and Norway, known collectively as Loki’s Castle— they found a new phylum of archaea that they fittingly named the ‘Asgard’ super-phylum after the realm of the Norse gods. The individual microbial species within the super-phylum were then named after Norse gods: Lokiarchaeota, Thorarchaeota, Odinarchaeota and Heimdallarchaeota. This super-phylum represents the closest living relatives to eukaryotes, and Ettema’s hypothesis is that eukaryotes evolved from one of these archaea, or a currently undiscovered sibling to them, around 2 billion years ago.

A hydrothermal vent in the north-east Pacific Ocean, similar to the kind of environment in which LUCA seems to have lived. IMAGE CREDIT: NOAA.

Closing in on LUCA
If it’s possible to date the advent of eukaryotes, and even pinpoint the species of archaea and bacteria they evolved from, can phylogenetics also date LUCA’s beginning and its split into the two domains?

It must be noted that LUCA is not the origin of life. The earliest evidence of life dates to 3.7 billion years ago in the form of stromatolites*, which are layers of sediment laid down by microbes. Presumably, life may have existed even before that. Yet, LUCA's arrival and its evolution into archaea and bacteria could have occurred at any point between 2 and 4 billion years ago.

Phylogenetics helps narrow this down, but Martin Embley isn't sure our analytical tools are yet capable of such a feat. "The problem with phylogenetics is that the tools commonly used to do phylogenetic analysis are not really sophisticated enough to deal with the complexities of molecular evolution over such vast spans of evolutionary time," he says. Embley believes this is why the three-domain tree hypothesis lasted so long – we just didn't have the tools required to disprove it. However, the realization of the two-domain tree suggests that better techniques are now being developed to handle these challenges.

These techniques include examining the ways biochemistry, as performed in origin-of-life experiments in the lab, can coincide with the realities of what actually happens in biology. This is a concern for Nick Lane, an evolutionary biochemist at University College London, UK. "What I think has been missing from the equation is a biological point of view," he says. "It seems trivially easy to make organic [compounds] but much more difficult to get them to spontaneously self-organize, so there are questions of structure that have largely been missing from the chemist's perspective."

For example, Lane highlights how lab experiments routinely construct the building blocks of life from chemicals like cyanide, or how ultraviolet light is utilized as an ad hoc energy source, yet no known life uses these things. Although Lane sees this as a disconnect between lab biochemistry and the realities of biology, he points out that William (Bill) Martin’s work is helping to fill the void by corresponding to real-world biology and conditions found in real-life hydrothermal vents. “That’s why Bill’s reconstruction of LUCA is so exciting, because it produces this beautiful, independent link-up with real world biology,” Lane says.

Jupiter's moon Europa has a subterranean ocean, a rocky seabed, and geothermal heat produced by Jupiter's gravitational tides. Water, rock and heat were all that were required by LUCA, so could similar life also exist on Europa? IMAGE CREDIT: NASA/JPL–CALTECH/SETI INSTITUTE.

The biochemistry results in part from the geology and the materials that are available within it to build life, says Martin Embley. He sees phylogenetics as the correct tool to find the answer, citing the Wood–Ljungdahl carbon-fixing pathway as evidence for this.

Carbon-fixing involves taking non-organic carbon and turning it into organic carbon compounds that can be used by life. There are six known carbon-fixing pathways and work conducted over many decades by microbiologist Georg Fuchs at the University of Freiburg has shown that the Wood–Ljungdahl pathway is the most ancient of all the pathways and, therefore, the one most likely to have been used by LUCA. Indeed, this is corroborated by the findings of Bill Martin’s team.

In simple terms the Wood–Ljungdahl pathway, which is used by both bacteria and archaea, starts with hydrogen and carbon dioxide and sees the latter reduced to carbon monoxide and formic acid that can be used by life. "The Wood–Ljungdahl pathway points to an alkaline hydrothermal environment, which provides all the things necessary for it — structure, natural proton gradients, hydrogen and carbon dioxide," says Martin. "It's marrying up a geological context with a biological scenario, and it has only been recently that phylogenetics has been able to support this."
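For orientation, the overall stoichiometry usually quoted for acetogens running this pathway (my addition; the article itself only names the inputs and intermediates) is 2 CO2 + 4 H2 -> CH3COOH + 2 H2O. The small sketch below simply confirms that this summary equation is mass-balanced:

```python
# Quick atom-balance check of the commonly quoted overall acetogen reaction
# for the Wood-Ljungdahl pathway: 2 CO2 + 4 H2 -> CH3COOH + 2 H2O.
from collections import Counter

def atoms(formula: dict, coeff: int) -> Counter:
    """Count the atoms contributed by `coeff` molecules of the given formula."""
    return Counter({element: n * coeff for element, n in formula.items()})

CO2 = {"C": 1, "O": 2}
H2 = {"H": 2}
ACETIC_ACID = {"C": 2, "H": 4, "O": 2}
H2O = {"H": 2, "O": 1}

reactants = atoms(CO2, 2) + atoms(H2, 4)
products = atoms(ACETIC_ACID, 1) + atoms(H2O, 2)
print(reactants == products)  # True: the summary equation balances
```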

Astrobiological implications
Understanding the origin of life and the identity of LUCA is vital not only to explaining the presence of life on Earth, but possibly that on other worlds, too. Hydrothermal vents that were home to LUCA turn out to be remarkably common within our solar system. All that’s needed is rock, water and geochemical heat. “I think that if we find life elsewhere it’s going to look, at least chemically, very much like modern life,” says Martin.

Moons with cores of rock surrounded by vast global oceans of water, topped by a thick crust of water-ice, populate the Outer Solar System. Jupiter’s moon Europa and Saturn’s moon Enceladus are perhaps the most famous, but there is evidence that hints at subterranean oceans on Saturn’s moons Titan and Rhea, as well as the dwarf planet Pluto and many other Solar System bodies. It’s not difficult to imagine hydrothermal vents on the floors of some of these underground seas, with energy coming from gravitational tidal interactions with their parent planets. The fact that the Sun does not penetrate through the ice ceiling does not matter — the kind of LUCA that Martin describes had no need for sunlight either.

“Among the astrobiological implications of our LUCA paper is the fact that you do not need light,” says Martin. “It’s chemical energy that ran the origin of life, chemical energy that ran the first cells and chemical energy that is present today on bodies like Enceladus.”

As such, the discoveries that are developing our picture of the origin of life and the existence of LUCA raise hopes that life could just as easily exist in a virtually identical environment on a distant locale such as Europa or Enceladus. Now that we know how LUCA lived, we know the signs of life to look out for during future missions to these icy moons.

See: https://astrobiology.nasa.gov/news/looking-for-luca-the-last-universal-common-ancestor/

* Stromatolites – mounds built by cyanobacteria – are the oldest fossil life on Earth. Most stromatolites are marine, but some forms from Proterozoic strata more than 2½ billion years old are interpreted as inhabiting intertidal areas (Noffke et al., 2006) and freshwater ponds and lakes (Bolhar and van Kranendonk, 2007). Although interpretations of freshwater stromatolites are equivocal (Martín-Closas, 2003), if intertidal or freshwater stromatolites existed in Proterozoic times, and aquatic wetlands are defined to include cyanobacteria mounds in shallow ponds, then the origins of wetlands may extend back into truly deep geologic time.

Modern stromatolites growing in Shark Bay, Australia. IMAGE CREDIT: Paul Harrison.

See: https://www.planetary.org/space-images/modern-stromatolites-growing-in-shark-bay-australia

It is interesting to finally begin to see how LUCA, the Last Universal Common Ancestor, ties the Archaea in the deep ocean's thermal vents to the Archaea found in landlocked hot springs.
Hartmann352
 
"Hartmann352, post: 30778, member: 990"
Ancient genes
Previous studies of LUCA looked for common, universal genes that are found in all genomes, based on the assumption that if all life has these genes, then these genes must have come from LUCA.
This approach has identified about 30 genes that belonged to LUCA, but they’re not enough to tell us how or where it lived. Another tactic involves searching for genes that are present in at least one member of each of the two prokaryote domains, archaea and bacteria.
This method has identified 11,000 common genes that could potentially have belonged to LUCA, but it seems far-fetched that they all did: with so many genes LUCA would have been able to do more than any modern cell can.
It is suspected that the extremophile life originated in the deep oceans at the area of hydrothermal vents located on the mid-ocean ridges as well as in the superheated hot springs located above magma chambers. About 4 billion years ago there lived a microbe on Earth called LUCA — the Last Universal Common Ancestor. LUCA then lies as the direct ancestor to the Archaea found around the mid-ocean ridge vents and the Archaea found in Yellowstone's hot springs.
And how did the first ancestor migrate from deep ocean vents to high volcanic vents if it came from a single location?

Closing in on LUCA
If it’s possible to date the advent of eukaryotes, and even pinpoint the species of archaea and bacteria they evolved from, can phylogenetics also date LUCA’s beginning and its split into the two domains?
It must be noted that LUCA is not the origin of life. The earliest evidence of life dates to 3.7 billion years ago in the form of stromatolites*, which are layers of sediment laid down by microbes. Presumably, life may have existed even before that. Yet, LUCA’s arrival and its evolution into archaea and bacteria could have occurred at any point between 2 to 4 billion years ago.
And in several locations.

Astrobiological implications
Understanding the origin of life and the identity of LUCA is vital not only to explaining the presence of life on Earth, but possibly that on other worlds, too. Hydrothermal vents that were home to LUCA turn out to be remarkably common within our solar system. All that’s needed is rock, water and geochemical heat. “I think that if we find life elsewhere it’s going to look, at least chemically, very much like modern life,” says Martin.
If LUCA is so common elsewhere, why should there only have been a single site of origin on earth?

If LUCA is common on other Earthlike planets (Robert Hazen), then that really confirms the proposition that life may have had several sites of origin on earth.

Hazen estimates that Earth has performed some 2 billion quadrillion quadrillion quadrillion chemical interactions since its formation.
Why should only a single set of specific chemical interactions be the result of such astronomical numbers?
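Taking "quadrillion" on the short scale (10^15), the figure quoted above works out to roughly 2 x 10^54 interactions. A one-line check of that reading (my interpretation of the wording, not Hazen's own arithmetic):

```python
# Reading "2 billion, quadrillion, quadrillion, quadrillion" on the short scale.
billion = 1e9
quadrillion = 1e15
total = 2 * billion * quadrillion**3
print(f"{total:.0e}")  # ~2e+54 chemical interactions
```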

Why not 2 or 3 or maybe many locations where conditions were just right for the first polymers to form and evolve along somewhat similar paths but in different locations or at different times?
 
The applications of AI are many and varied, from self-driving cars and virtual assistants to personalized medicine and financial forecasting. However, as with any technology, there are concerns about the ethical and social implications of AI.

For example, there is a risk that AI systems may perpetuate existing biases or discrimination, especially if they are trained on biased datasets. Therefore, it is important to ensure that AI is developed and used in a responsible and ethical manner.

I recommend watching these useful documentaries:

AlphaGo (2017) - This documentary tells the story of a Google computer program called AlphaGo that competes against a human champion in the ancient Chinese game of Go. The film explores the intersection of artificial intelligence and human intelligence, and the potential of machine learning.

"Do you trust this computer?" (2018) - This documentary explores the impact of AI on society and expresses concern about its ability to disrupt and control our lives. It features interviews with leading experts in the field and highlights some of the ethical and social implications of AI.

The Age of AI (2019) - Hosted by Robert Downey Jr., this YouTube Originals series explores the latest advances in AI technology and how it is changing the way we live and work.