How Does Artificial Intelligence Work?

For the science geek in everyone, Live Science breaks down the stories behind the most interesting news and photos on the Internet.
Jan 27, 2020
498
116
4,880
The Gaia hypothesis can be stated as follows: living organisms on Earth interact with their inorganic surroundings to form a complex, self-regulating, synergistic system that helps perpetuate and maintain optimal conditions for life on the planet.

It was hypothesized that the Gaia principle could be used to detect life in the atmospheres of other planets. James Lovelock's Gaia theory offered a relatively cheap and reliable way to use such interactive signatures to assess the possibility of life on planets other than Earth.

The initial Gaia hypothesis states that the Earth has maintained its habitable state through a self-regulating feedback loop carried out automatically by the living organisms that are tightly coupled to their environments. The observations made in the James Lovelock Gaia hypothesis were:
  • Despite an increase in the energy provided by the sun, the earth's global surface temperature has remained constant.
  • Owing to the activities of living organisms, the atmosphere is in an extreme state of thermodynamic disequilibrium, and yet aspects of its composition are astoundingly stable. Even with atmospheric components as varied as 20.7 percent oxygen, 79 percent nitrogen, traces of methane, and 0.03 percent carbon dioxide, the atmospheric composition remains constant rather than unstable.
  • The ocean's salinity has remained constant for a very long time, which can be attributed to the circulation of seawater through the hot basaltic rocks that emerge as hot-water vents on ocean spreading ridges.
  • The earth system has consistently and continuously recovered from massive perturbations owing to its complex self-regulation.
James Lovelock views this entire complex of processes on the Earth's surface as a single system that maintains conditions suitable for life. Earthly processes from the planet's formation through its disturbances, eruptions, and recoveries are all considered parts of one self-regulating system.

The Gaia theory, named after the Greek goddess Gaia, who personifies the Earth, was nevertheless heavily criticized at first for conflicting with the natural-selection principles proposed by Charles Darwin. Another criticism was its teleological nature: it stated the end result, not the cause, of the occurrences it described. A refined Gaia hypothesis, which aligned the Gaia model with the production of sulfur and iodine by sea creatures in approximately the quantities required by land creatures, strengthened the claimed interactions and bolstered the hypothesis.

The theory and hypothesis were criticized due to the following reasons.
  • The significant increase in global surface temperatures contradicts the constant temperature the theory claims to observe.
  • Ocean salinity is far from constant equilibrium, as salts carried in by rivers have raised it.
  • The self-regulation claim is also challenged by evidence of reduced methane levels and oxygen shocks during the various ice ages, namely the Huronian, Sturtian, and Marinoan (or Varanger) glaciations.
  • Dimethyl sulfide produced by phytoplankton plays an important role in climate regulation, but the process does not happen on its own as James Lovelock stated.
  • Another claim held that the Gaia theory contradicts natural selection and is far removed from survival of the fittest, which critics saw as its greatest departure.
  • Other critics argued that Gaia was really four hypotheses, not just one.
(a) Coevolutionary Gaia stated that the environment and the life in it evolve in a coupled way; critics countered that this merely restates an already accepted scientific idea.

(b) Homeostatic Gaia stated that life maintains the stability of the natural environment and that this stability enables life to persist; it was disregarded as unscientific because it was untestable.

(c) Geophysical Gaia described new geophysical cycles, which mainly aroused curiosity and spurred research into terrestrial geophysical dynamics.

(d) Optimizing Gaia, which stated that Gaia shaped the planet to make the environment as a whole favourable for life, was likewise disregarded as untestable and therefore unscientific.

The refined New Gaia hypothesis was James Lovelock's counter-argument. Together with Andrew Watson, he developed a new, purely mathematical model: the Daisyworld simulation. Daisyworld is an imagined planet on which only daisies grow, some black and some white. Conditions on Daisyworld are in many respects similar to those on Earth.
  • Water and nutrients are abundant in Daisyworld for the daisies.
  • The daisies' ability to grow and spread across this imaginary planet's surface depends entirely on temperature.
  • The climate system in Daisyworld is simple, with no greenhouse gases and no clouds.
  • The incident light and radiation that determine the surface temperature depend on how much of the grey soil is covered by white and black daisies.
  • In this model, planetary temperature regulation is underpinned by ecological competition, examined through the energy budget: the energy provided by the sun. With high energy input the temperature increases; with low energy input it decreases.
  • The albedo, that is, the reflection and absorption of light, is determined by the colour of the daisies.
  • Light: black daisies warm Daisyworld by absorbing more light, while white daisies cool the planet by reflecting more light.
  • Growth: black daisies grow and reproduce best at temperatures lower than those at which white daisies thrive.
  • When the temperature rises, Daisyworld's surface fills with more white daisies, which reduce heat input and consequently cool the planet, as in figure 3 below.
  • When temperatures decline, the scenario in figure 2 takes place: black daisies outnumber the white, warming the planet by increasing absorption of the sun's energy.
  • When temperatures converge to the value at which both reproductive rates are equal, both kinds thrive, as shown in figure 1.
Through the Daisyworld simulations, the Gaia hypothesis showed that the proportion of black daisies relative to white ones changes continuously so that both can thrive. This further shows that, even with competition and a limited range of conditions, a planet like Daisyworld can support life with stabilized temperatures. Without the differing albedos of the two daisy types, by contrast, changes in the sun's energy output would make the planet's temperature vary greatly.

The Gaia hypothesis has had its fair share of criticism because it lacks an explicit formulation and is consequently untestable and not scientifically proven. Even so, various modifications have been made over the years, and from them two models emerge. The weak Gaia hypothesis, which suggests that planetary processes are substantially influenced by the life on the planet, is widely supported. The strong Gaia hypothesis, which states that life creates the earth's systems, in other words that planetary processes are controlled by life, is not widely accepted.

See: https://www.vedantu.com/geography/gaia-hypothesis

The math behind the Daisyworld model

This is a simple account of the mathematical analysis behind the Daisyworld model, as originally published in Andrew J. Watson and James E. Lovelock, "Biological homeostasis of the global environment: the parable of Daisyworld", Tellus (1983), 35B, 284-289, referred to below as "WL". The science behind the model is discussed in WL and elsewhere; see the bibliography.
As indicated in the title of WL, the heart of the model is a point attractor of a dynamical scheme. In this case, the main control parameter is
  • L, the solar luminosity.
A number of constants appear in the model, such as,
  • AG, the albedo of bare ground,
  • AB, the albedo of black daisies,
  • AW, the albedo of white daisies.
These are fixed at 0.5, 0.25, and 0.75, respectively.
The state variables are:
  • alphaG, relative area of bare fertile ground,
  • alphaB, relative area covered by black daisies,
  • alphaW, relative area covered by white daisies,
  • TG, average temperature over the bare ground,
  • TB, average temperature over the black daisies,
  • TW, average temperature over the white daisies.
The sum of the three areas is assumed to be P, a constant, usually taken to be one. The temperatures are assumed to reach equilibrium rapidly, on the slow scale of time in which the daisy areas change. Their values are given as functions of L and the three albedos in the fourth-order equations (4) and (6), and again in a linear approximation in equation (7). Here a parameter q' is introduced, which indicates the effect of mixing of temperatures over different areas due to conduction of heat. In the simulations, q' = 20. The average albedo, A, is given by equation (5) of WL,
A = alphaG·AG + alphaB·AB + alphaW·AW
Thus we have a two-dimensional dynamical system, given in equation (1) of WL, for the rates of change of alphaB and alphaW,
alphaW' = alphaW(x·betaW - gamma)
alphaB' = alphaB(x·betaB - gamma)
where x = alphaG, gamma is the death rate of all daisies, taken as 0.3 in the simulations, and betaW = beta(TW), betaB = beta(TB), with beta the quadratic function of the local temperature given in equation (3) of WL,
beta(T) = max {0, 1 - 0.003265 (22.5 - T)^2}
Now we look for the critical points. Assuming that both daisy areas are positive (a zero value means the game is over) we find the conditions for a critical point, as given in equations (14) of WL,
T*B = 22.5 + (q'/2)·(AW - AB)
T*W = 22.5 - (q'/2)·(AW - AB)
which are constants independent of L, a surprising and hopeful result. From these equilibrium conditions we find beta*, and from (1), since the right-hand sides vanish, x·beta* = gamma, so we may calculate the sum of the two daisy areas.

But to find them individually, it is necessary to proceed with numerical integration. The results of these simulations occupy the bulk of the WL paper.
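That numerical integration can be sketched in a few lines of Python. This is a minimal illustration, not WL's own code: the Euler step size, iteration count, and seed areas are my own choices, the solar flux constant S = 917 W/m² follows WL, and the linear temperature approximation of equation (7) stands in for the full fourth-order equations.

```python
SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
S = 917.0                       # solar flux constant used in WL, W m^-2
Q_PRIME = 20.0                  # heat-mixing parameter q'
GAMMA = 0.3                     # daisy death rate gamma
A_GROUND, A_BLACK, A_WHITE = 0.5, 0.25, 0.75

def beta(T):
    """Growth response of equation (3) of WL: a parabola peaking at 22.5 C."""
    return max(0.0, 1.0 - 0.003265 * (22.5 - T) ** 2)

def steady_state(L, dt=0.01, steps=20000):
    """Euler-integrate equations (1) of WL to a steady state for luminosity L."""
    a_b = a_w = 0.01                             # small seed populations
    for _ in range(steps):
        x = 1.0 - a_b - a_w                      # bare fertile ground (P = 1)
        A = x * A_GROUND + a_b * A_BLACK + a_w * A_WHITE   # average albedo, eq. (5)
        Te = (S * L * (1.0 - A) / SIGMA) ** 0.25 - 273.0   # effective temperature, C
        Tb = Q_PRIME * (A - A_BLACK) + Te        # linear local temperatures, eq. (7)
        Tw = Q_PRIME * (A - A_WHITE) + Te
        a_b = max(0.01, a_b + dt * a_b * (x * beta(Tb) - GAMMA))
        a_w = max(0.01, a_w + dt * a_w * (x * beta(Tw) - GAMMA))
    return a_b, a_w, Te

for L in (0.8, 1.0, 1.2):
    ab, aw, te = steady_state(L)
    print(f"L={L:.1f}  black={ab:.2f}  white={aw:.2f}  T={te:.1f} C")
```

Running this over a range of L reproduces the qualitative result above: the daisy areas shift so that the planetary temperature stays within the habitable band even as luminosity changes.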


See: http://www.vismath.org/research/gaia/WLpaper/daisymath.html

Daisyworld is an imaginary planet, similar to the Flatland model* of a two dimensional land, on which black and white daisies are the only things growing. The model explores the effect of a steadily increasing solar luminosity on the daisy populations and their effect on the resulting planetary temperature. The growth function for the daisies allows them to modulate the planet's temperature for many years, warming it early on as radiation absorbing black daisies grow, and cooling it later as reflective white daisies grow. Eventually, the solar luminosity increases beyond the daisies' capability to modulate the temperature and they die out, leading to a rapid rise in the planetary temperature. Daisyworld was conceived of by Andrew Watson and James Lovelock to illustrate how life might in part have been responsible for regulating Earth's temperature as the Sun's luminosity increased over time.
Hartmann352

* Flatland Model is derived from Flatland: A Romance of Many Dimensions, a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London. Written pseudonymously by "A Square", the book used the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture, but the novella's more enduring contribution is its examination of dimensions.

Several films have been made from the story, including the feature film Flatland (2007). Other efforts have been short or experimental films, including one narrated by Dudley Moore and the short films Flatland: The Movie (2007) and Flatland 2: Sphereland (2012).

See: https://en.wikipedia.org/wiki/Flatland
 
May 8, 2022
The problem with any kind of homeostasis of the earth's ecosystem is man. We are the uncontrollable kink in the chain.
 
Jan 27, 2020
Critical thinking skills, so necessary to make your way in this increasingly technical world, can be boiled down to the following key sequential elements:
  • Identification of premises and conclusions — Break arguments down into logical statements
  • Clarification of arguments — Identify ambiguity in these stated assertions
  • Establishment of facts — Search for contradictions to determine if an argument or theory is complete and reasonable
  • Evaluation of logic — Use inductive or deductive reasoning to decide if conclusions drawn are adequately supported
  • Final evaluation — Weigh the arguments against the evidence presented and its accurate pre-history
Students must master these critical thinking skills, akin to use of the scientific method, and we must practice them ourselves to objectively analyze an onslaught of information. Ideas, especially plausible-sounding philosophies, should be challenged and made to pass the credibility litmus test.

A well-rounded education, with a suitable cross-section of STEM classes and information processing, combined with a centrist history, particularly of the USA as well as the world, is necessary to aid in filtering the vast amount of information received every day.

Education is central to understanding politics and government and a democracy cannot survive without informed citizens. Critical thinking is the precondition for nurturing the ethical imagination that enables engaged citizens to learn how to effect change rather than be governed. Thinking is fundamental to a notion of civic literacy that views knowledge as central to the pursuit of life's goals. Such thinking incorporates a set of values that enables a person to deal critically with the use and effects of politics and government particularly here where the government is answerable to the people and not vice versa.
Hartmann352
 
May 8, 2022
Critical thinking skills, so necessary to make your way in this increasingly technical world, can be boiled down to the following key sequential elements:
  • Identification of premises and conclusions — Break arguments down into logical statements
  • Clarification of arguments — Identify ambiguity in these stated assertions
  • Establishment of facts — Search for contradictions to determine if an argument or theory is complete and reasonable
  • Evaluation of logic — Use inductive or deductive reasoning to decide if conclusions drawn are adequately supported
  • Final evaluation — Weigh the arguments against the evidence presented and its accurate pre-history
What makes you think the new AIs are unfamiliar with those terms and with the logical practice their solution involves?

Don't forget that the GPT has access to the internet and everything that is publicly available, including scientific papers, and has the chops to understand everything!

Ask an AI about this list you just posited and it will give you the scientific definitions and what they mean in an instant. What it doesn't know it "researches", and it can do so at lightning speed.

Humans rely on memory to "research" a problem. The AI has the entire internet as its memory.
 
Jan 27, 2020
write4u:

I think the following elements of natural language processing (NLP), which is the ability of a computer program to understand human language as it is spoken and written (referred to as natural language), may help you. It is an increasingly important component of artificial intelligence (AI).

NLP has existed for more than 50 years and has roots in the field of linguistics. It has a variety of real-world applications in a number of fields, including medical research, search engines, business intelligence and accounting.

NLP enables computers to understand natural language as humans do. Whether the language is spoken or written, natural language processing uses artificial intelligence to take real-world input, process it, and make sense of it in a way a computer can understand. Just as humans have different sensors -- such as ears to hear and eyes to see -- computers have programs to read and microphones to collect audio. And just as humans have a brain to process that input, computers have a program to process their respective inputs. At some point in processing, the input is converted to code that the computer can understand.

There are two main phases to natural language processing: data preprocessing and algorithm development.

Data preprocessing involves preparing and "cleaning" text data for machines to be able to analyze it. Preprocessing puts data in workable form and highlights features in the text that an algorithm can work with. There are several ways this can be done, including:
  • Tokenization. This is when text is broken down into smaller units to work with.
  • Stop word removal. This is when common words are removed from text so unique words that offer the most information about the text remain.
  • Lemmatization and stemming. This is when words are reduced to their root forms to process.
  • Part-of-speech tagging. This is when words are marked based on the part of speech they are -- such as nouns, verbs, pronouns, adverbs and adjectives.
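The first three preprocessing steps can be sketched in plain Python. This is a toy illustration only: real pipelines use trained tokenizers, Porter-style stemmers or dictionary-based lemmatizers, and statistical part-of-speech taggers rather than the hand-rolled rules and tiny stop-word list below.

```python
import re

STOP_WORDS = {"the", "is", "a", "an", "of", "to", "and", "are"}

def tokenize(text):
    """Tokenization: break text into lowercase word units."""
    return re.findall(r"[a-z']+", text.lower())

def remove_stop_words(tokens):
    """Stop word removal: keep only the informative words."""
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token):
    """Crude stemming: strip a few common suffixes to reach a root form."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

tokens = remove_stop_words(tokenize("The algorithms are processing the cleaned texts"))
print([stem(t) for t in tokens])   # -> ['algorithm', 'process', 'clean', 'text']
```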
Once the data has been preprocessed, an algorithm is developed to process it. There are many different natural language processing algorithms, but two main types are commonly used:
  • Rules-based system. This system uses carefully designed linguistic rules. This approach was used early on in the development of natural language processing, and is still used.
  • Machine learning-based system. Machine learning algorithms use statistical methods. They learn to perform tasks based on training data they are fed, and adjust their methods as more data is processed. Using a combination of machine learning, deep learning and neural networks, natural language processing algorithms hone their own rules through repeated processing and learning.
Businesses in particular use massive quantities of unstructured, text-heavy data and need a way to process them efficiently. A lot of the information created online and stored in databases is natural human language, and until recently, businesses could not effectively analyze this data. This is where natural language processing is useful.

The advantage of natural language processing can be seen when considering the following two statements: "Cloud computing insurance should be part of every service-level agreement," and, "A good SLA ensures an easier night's sleep -- even in the cloud." If a user relies on natural language processing for search, the program will recognize that cloud computing is an entity, that cloud is an abbreviated form of cloud computing and that SLA is an industry acronym for service-level agreement.
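The kind of mapping described, linking an abbreviation or acronym back to its canonical entity, can be illustrated with a hand-built lookup table. Everything here is hypothetical: production systems learn such links statistically from context rather than from a fixed glossary.

```python
# Hypothetical glossary mapping surface forms to canonical entities.
GLOSSARY = {
    "cloud": "cloud computing",          # abbreviated form
    "cloud computing": "cloud computing",
    "sla": "service-level agreement",    # industry acronym
}

def normalize(phrase):
    """Map a surface form to its canonical entity, if one is known."""
    return GLOSSARY.get(phrase.lower(), phrase)

print(normalize("SLA"))    # -> service-level agreement
print(normalize("cloud"))  # -> cloud computing
```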

See: https://www.techtarget.com/searchenterpriseai/definition/natural-language-processing-NLP

These are the types of vague elements that frequently appear in human language and that machine learning algorithms have been historically bad at interpreting. Now, with improvements in both deep learning and machine learning methods, established algorithms can now more effectively interpret them. These improvements expand the breadth and depth of data that can be analyzed.
Hartmann352
 
May 8, 2022
I think the following elements of natural language processing (NLP), which is the ability of a computer program to understand human language as it is spoken and written and which is referred to as natural language may help you. It is an increasingly important component of artificial intelligence (AI).
If I understand the GPT series AI, they are language based and learn very similar to humans.
When information is received and compared to existing memory ( definitions) the AI selects the "best fit" of definition in context and makes a "best guess" of the correct answer in context of the subject under consideration.
IOW , the GPT AI are predictive engines, much like the human brain.

This is why they are so incredibly versatile in application of human arts and sciences. Their programming imitates biological programming, sans the standard sensory experience of touch and taste.
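The "predictive engine" idea above can be sketched as a toy bigram model that picks the best-fit next word given the current one. The corpus and names here are invented for illustration; GPT-class models predict over learned contexts vastly larger than a single preceding word.

```python
from collections import Counter, defaultdict

# Tiny training corpus for the sketch.
corpus = "the daisies cool the planet and the daisies warm the world".split()

# Count, for each word, which words follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # -> daisies (it followed "the" most often)
```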
 
Jan 27, 2020
SCARY: New A.I. Tool Can Pass Medical Tests and Bar Exam
By Paul Duke
January 23, 2023

Technologists have long been pushing our species to the precipice of unknown catastrophe, harnessing their blinding obsession with innovation to mow down the hurdles of ethics and morality and safety.

Nowhere is this more true than in the field of artificial intelligence, where every week seems to bring us a little bit closer to the dystopian dirge that science fiction authors have long warned us about.

The latest terrifying new development in the A.I. world comes to us from a system known as ChatGPT, which is now believed capable of passing complex and rather important exams.

The artificially intelligent content creator, whose name is short for ‘Chat Generative Pre-trained Transformer,’ was released two months ago by OpenAI, and has since taken the world by storm.
Praised by figures such as Elon Musk – one of OpenAI’s founders – the AI-powered tool has also raised alarms regarding ethics, as students use it to cheat on writing assignments and experts warn it could have lasting effects on the US economy.
Its results, however, are inarguable – with recent research showing the chatbot could successfully achieve an MBA, and soon pass notoriously difficult tests like the United States Medical Licensing Exam and the Bar.
Just how troubling is the development?

Ethan Mollick, associate professor at Wharton School of Business at the University of Pennsylvania, highlighted these reports in a recent post on social media, one of which was carried out by one of his colleagues at the prestigious school.
The report, carried out by Christian Terwiesch, found that ChatGPT, while still in its infancy, received a grade varying from a B to B- on the final exam of a typical MBA core course.
The research, carried out to see what the release of the AI tool could mean for MBA programs, further found that ChatGPT also ‘performed well in the preparation of legal documents.’
The news comes just months after a scare at Google, where a chatbot allegedly gained sentience, according to a now-fired engineer at the company, and wound up hiring its own lawyer to represent its interests in court.

See: https://steadfastdaily.com/scary-new-a-i-tool-can-pass-medical-tests-and-bar-exam/

Wow, I could've used ChatGPT a couple times during my statistics studies when I took those gruelling examinations in college. It is a scary proposition considering the criticality of certain exams for future earnings. Take Japan, for instance, where the Center Test, a scholastic aptitude examination that functions as a key part of the admissions criteria for many Japanese universities, must be passed.

Spy eyeglasses, invisible smartwatches, and micro earpieces might remind you of an undercover agent on a classified espionage mission, however, students are using these high-tech devices to pull off ‘exam heists’ in real life.

With online education in high gear, cheating on tests has become an elaborate affair. Here’s an incident that left the authorities scratching their heads. 11 students used electronic gadgets like micro earbuds and Bluetooth collar devices to cheat during an examination for the Staff Selection Commission (SSC). Wonder how they sneaked in the devices? Here’s the fun part, they covered them in carbon paper to avoid being detected during the security check!

A college roommate of mine had to pass a complicated economics exam. He used a Bic fine point and long piece of narrow rolled up paper on which he printed his equations, which he then placed within a ball point pen which had a rectangle window enabling him to roll the sheet back and forth by the small window to call up the equations he needed. The upshot was that he never needed this gizmo. He had written the formulae so often that he remembered them for the exam. Ha!

With hidden access to ChatGPT, all the crazy spy gadgets used to pass critical tests could be eliminated.
Hartmann352
 
May 8, 2022
Its results, however, are inarguable – with recent research showing the chatbot could successfully achieve an MBA, and soon pass notoriously difficult tests like the United States Medical Licensing Exam and the Bar.
Just how troubling is the development?
It depends on the nature of the situation.

Would you have an AI argue your case with absolute mastery of the legal issues involved?

Would you have an AI control surgery with exquisite precision, but without intuitive emotional involvement?

Would you have an AI play a violin concerto with precision and impeccable time, but without "soul"?
 
Jan 27, 2020
write4u -

It is recognised that AI may not find its best application in live judicial proceedings:

While technology has the potential to reduce bias in American courtrooms, it is important to highlight the growing use of artificial intelligence (AI) algorithms as a risk assessment tool. As AI expands in popularity and use, contentious debate is unfolding over its effectiveness and ethics in criminal justice proceedings.

AI programs commonly aim to calculate a defendant’s risk of reoffending and failing to appear at trial. It then assigns them a score, which the judge can use to make judicial decisions, including bail, parole, guilt or innocence and even punishments. Proponents of this technology believe that AI will speed up the judicial process and make the system fairer and safer. While some acknowledge the limitations and negative consequences, others believe AI use in courtrooms will improve over time. Most people in this camp cite several key points for why AI in courts is necessary:
  1. Judicial bias:  One study on federal sentencing found that Black males were given prison terms for a 20% longer duration than white males involved in similar crimes. Others find that a judge’s mood and other unrelated factors can impact sentencing.
  2. Reducing criminal justice system burdens:  A study of judges’ sentencing in New York City found these tools can reduce overall crime rates by 25% and pre-trial jail time by over 40%, including reductions in the number of incarcerated Black and Hispanic people.
AI negatively impacts the judicial process and lacks the transparency for genuine scrutiny. Opponents recognize AI’s potential benefits in the courts, but favor the transparency of human judges. They often point to these arguments:
  1. Lack of transparency:  Almost all of these tools are developed by for-profit companies that keep their algorithms secret, meaning the courts or the defense cannot scrutinize their methods for calculating a defendant’s scores.
  2. Machine bias:  A ProPublica study of one company’s algorithm, controlling for relevant factors, found that “Black defendants were… 77% more likely to be pegged as at higher risk of committing a future violent crime and 45% more likely to be predicted to commit a future crime of any kind.”
If these tools are designed to lessen biases in the criminal justice system, then why do they produce such significant racial disparities? The challenge in answering this question is that the factors used in calculations vary from company to company, and it is impossible to know the methodology unless the manufacturer discloses them. From what is known about the technology, many companies compile socioeconomic information on the defendant; the algorithm then finds statistical correlations between these factors and the outcomes they are studying, such as crime patterns and failure to appear.

The problem with this process is that even AI is not immune from bias. Some of the factors studied, for example, may reflect ingrained racial disparities which introduces biases in the data. And research into more transparent applications of AI machine learning in predictive analytics, such as facial recognition, has repeatedly failed to accurately make predictions about BIPOC individuals. The ACLU conducted a study in 2018 to assess the accuracy of Amazon’s Rekognition facial recognition tool. They took the images of all members of the U.S. Congress and compared them with 25,000 mugshots of convicted criminals. The tool incorrectly matched forty members with mugshots of criminals. Half the incorrect matches were people of color, although they made up just 20% of Congress. This is a clear warning that we should be very careful using AI and similar technology in decisions involving sensitive issues like incarceration.
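The proxy mechanism described can be made concrete with a fully synthetic sketch. All numbers, names, and the scoring rule below are invented for illustration; no real data or vendor algorithm is modeled. The point is only that scoring on a feature correlated with group membership produces a group disparity even when true reoffending rates are identical.

```python
import random

random.seed(0)

def make_person(group):
    """Both groups reoffend at the same true rate; only the proxy differs."""
    reoffends = random.random() < 0.3
    proxy = random.gauss(1.0 if group == "A" else 0.0, 0.5)  # socioeconomic proxy
    return group, proxy, reoffends

people = ([make_person("A") for _ in range(1000)]
          + [make_person("B") for _ in range(1000)])

def risk_score(proxy):
    """A 'risk tool' that scores on the proxy alone."""
    return proxy > 0.5

high_a = sum(risk_score(p) for g, p, r in people if g == "A") / 1000
high_b = sum(risk_score(p) for g, p, r in people if g == "B") / 1000
print(f"flagged high-risk: group A {high_a:.0%}, group B {high_b:.0%}")
```

Despite equal underlying behavior, group A is flagged high-risk far more often, which is the shape of the disparity the ProPublica study reported.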

Currently, there is minimal established precedent for the use of AI tools in judicial proceedings, despite their numerous implications for defendants' Fifth and Fourteenth Amendment rights to due process. In 2013, Eric Loomis sued the state of Wisconsin, alleging that COMPAS**, an AI risk-assessment tool, violated his right to due process by preventing him from challenging the tool's validity and by factoring race and gender into its decision. The Wisconsin Supreme Court ruled against Loomis, finding that the tool did not violate his right to due process as long as it was not the sole factor in the decision (a point that is nearly impossible to prove) and that the technology was used responsibly with an understanding of its limitations. Loomis appealed to the U.S. Supreme Court, but his case was not taken up.

What is desperately needed before AI algorithms are used in court proceedings is a national study on their overall effectiveness. The study would lead to federal and state model legislation, so courts have clear guidance to ensure the programs lead to accurate and fair outcomes and protect against racial bias. A moratorium should be placed on these tools until they can be validated. Until then, it is important to keep this disparity in our collective consciousness and pressure government to move forward with the review and regulatory oversight that is so badly needed.

See: https://wavecenter.org/policy/proceed-with-caution-ai-use-in-the-courtroom/

* BIPOC - BIPOC, which stands for Black, Indigenous, People of Color. People are using the term to acknowledge that not all people of color face equal levels of injustice.

See: https://www.merriam-webster.com/dictionary/BIPOC

** COMPAS - is a fourth-generation risk and need assessment instrument. Criminal justice agencies across the nation use COMPAS to inform decisions regarding the placement, supervision and case management of offenders. COMPAS was developed empirically with a focus on predictors known to affect recidivism. It includes dynamic risk factors, and it provides information on a variety of well-validated risk and need factors designed to aid in correctional intervention to decrease the likelihood that offenders will reoffend.

COMPAS was first developed in 1998 and has been revised over the years as the knowledge base of criminology has grown and correctional practice has evolved. In many ways changes in the field have followed new developments in risk assessment. We continue to make improvements to COMPAS based on results from norm studies and recidivism studies conducted in jails, probation agencies, and prisons. COMPAS is periodically updated to keep pace with emerging best practices and technological advances.

COMPAS has two primary risk models: General Recidivism Risk and Violent Recidivism Risk. COMPAS has scales that measure both dynamic risk (criminogenic factors) and static risk (historical factors). Additional risk models include the Recidivism Risk Screen and the Pretrial Release Risk Scale II.
Statistically based risk/need assessments have become accepted as established and valid methods for organizing much of the critical information relevant for managing offenders in correctional settings (Quinsey, Harris, Rice, & Cormier, 1998). Many research studies have concluded that objective statistical assessments are, in fact, superior to human judgment (Grove, Zald, Lebow, Snitz, & Nelson, 2000; Swets, Dawes, & Monahan, 2000).

COMPAS is a statistically based risk assessment developed to assess many of the key risk and need factors in adult correctional populations and to provide information to guide placement decisions. It aims to achieve these goals by providing valid measurement and concise organization of important risk/need dimensions. Northpointe recognizes the importance of case management and supports the use of professional judgment along with actuarial risk/need assessment. Following assessment, a further goal is to help practitioners with case plan development/implementation and overall case management support.

In overloaded and crowded criminal justice systems, brevity, efficiency, ease of administration and clear organization of key risk/need data are critical. COMPAS was designed to optimize these practical factors. We acknowledge the trade-off between comprehensive coverage of key risk and criminogenic factors on the one hand, and brevity and practicality on the other. COMPAS deals with this trade-off in several ways; it provides a comprehensive set of key risk factors that have emerged from the recent criminological literature, and it allows for customization inside the software. Therefore, ease of use, efficient and effective time management, and case management considerations that are critical to best practice in the criminal justice field can be achieved through COMPAS.

See: https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf

Will AI become commonplace in America's courtrooms? Will AI replace defense attorneys? I personally don't believe so: how can it ever hope to duplicate the subtleties of voice and mannerism that are so readily apparent to judges, jurors, and prosecutors in the same courtroom? Will AI ever reproduce an eye roll, or a frenzied spindling of paper to reinforce a conjecture?

As for AI directed surgery, a good friend recently underwent AI directed prostate cancer surgery, which went without a hitch and from which he has suffered no ill effects.

As for AI attempting to duplicate violinists, will it ever move listeners to near tears like a Paganini, a Heifetz, a Perlman, or an Anne-Sophie Mutter? I'm not sure, but AI may produce comparable music sometime in the future.
Hartmann352
 
AI negatively impacts the judicial process and lacks the transparency for genuine scrutiny.
I don't necessarily agree with that.
Human judges are subject to human emotions that may influence their objectivity.

OTOH, AI will apply the exact same standards to the exact same situations.
If justice is to be "blind," AI is the perfect vehicle.
Opponents recognize AI’s potential benefits in the courts, but favor the transparency of human judges. They often point to these arguments:
  1. Lack of transparency:  Almost all of these tools are developed by for-profit companies that keep their algorithms secret, meaning the courts or the defense cannot scrutinize their methods for calculating a defendant’s scores.
On what basis do they make this judgement? Lack of trust in the manufacturer? How does that affect the fundamental function of the AI? I agree, you cannot always trust people, but an AI is impervious to bribery or blackmail.
  2. Machine bias:  A ProPublica study of one company's algorithm, controlling for relevant factors, found that "Black defendants were… 77% more likely to be pegged as at higher risk of committing a future violent crime and 45% more likely to be predicted to commit a future crime of any kind."
Then the algorithm is flawed and is based on human standards that originated the bias to begin with.
A simple command that all races are to be treated equally resolves any possible bias. A properly configured AI is truly "blind" to human foibles.
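For context, the kind of disparity ProPublica reported is typically measured by comparing error rates across groups: among people who did not go on to reoffend, what share were nevertheless labeled high risk? A minimal sketch of such an audit, using invented toy data (not ProPublica's dataset):

```python
# Toy audit of group disparity in false positive rates: the share of
# non-reoffenders who were nonetheless scored high risk. The records
# below are invented to show the calculation, not real case data.

def false_positive_rate(records):
    """Share of non-reoffenders flagged as high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Two invented groups of ten non-reoffenders each.
group_a = ([{"high_risk": True, "reoffended": False}] * 4
           + [{"high_risk": False, "reoffended": False}] * 6)
group_b = ([{"high_risk": True, "reoffended": False}] * 2
           + [{"high_risk": False, "reoffended": False}] * 8)

fpr_a = false_positive_rate(group_a)  # 0.4
fpr_b = false_positive_rate(group_b)  # 0.2
print(f"FPR disparity: {fpr_a / fpr_b:.1f}x")  # prints "FPR disparity: 2.0x"
```

Note that no "treat all races equally" instruction appears anywhere in such a model: a disparity like this can arise purely from the training data, which is why audits compare outcomes rather than inspecting intent.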

All those projected problems are just human anthropomorphizations projected onto an emotionless Artificial Intelligence.
 
