AI and human values

Jul 29, 2021
AI is reshaping society through data processing and gains in effectiveness. But society is still not fully data-driven (setting aside hazardous applications where humans cannot really go, and where AI serves as our hands and eyes).

Questions about a human/AI society have been with us since AI first appeared. Let them surface, for the sake of awareness and action.


'Storytelling about the potential of AI also comes in for scrutiny. Nowotny draws from work by historian Yuval Harari and economist Robert Shiller on the contagiousness of stories. She highlights the tenacity of the narrative that technology always benefits everyone, even though this is not aligned with lived experience. “If half of working class men in the US today earn less than their fathers did at the same age, what does progress mean to them?” she asks. And she examines how we conceptualize data itself. It should not be thought of as a commodity, to be enclosed or fenced off within the paradigm of property rights, she explains; rather, it is a social good.'
 
Whose human values? And what human values? If AI is based on human values, we're all in great danger.

Human values have given us a bloody past. AI will be much worse.

Just look at all the crap that's being preached now. We see how algorithms censor ideas and dialog, with big-tech monopolies. Now governments are doing the same thing.

Human values have the lowest standards. If any at all.

Our real problems are of our own making. AI will multiply this and divide us more.

We war and destroy because of human values.

This is obvious to any adult.
 
Jul 29, 2021
This is not about human values as we are used to thinking of them.
AI is reshaping society through data-driven technologies that appeal to the desire for certainty and the yearning to understand and predict.
Algorithms reach ever faster and deeper into human behaviour: education, control over habits, targeted marketing.
That is the subject.
We are not putting human features into AI; we are making those features clearer to ourselves with the help of AI's productivity in data processing and prediction.
Every technology is a coin with two sides; the internet is the brightest example.
 
Jun 12, 2023
Whose human values? And what human values? If AI is based on human values, we're all in great danger. ...
I fully agree. AI is a tool, like a hammer. Tools can be used to help humankind, or to kill. And if we give AI a life of its own, well, imagine a live hammer going around clubbing any head it feels like.

As for giving AI humane values, well, I see no clear-cut philosophy or science on that subject. If there were, there would be no courtroom battles; everything would be so obvious that there would be no need to think about it, much less have a serious argument about right and wrong.

What I see is that human values are a manmade construct placed upon reality: an attempt to simplify life by making it black and white, yes or no, good or evil. But reality is not really like that. Yet we MUST see it that way just to make decisions and survive in this world. We NEED to simplify life in order to live.
 
Jun 12, 2023
Algorithms reach ever faster and deeper into human behaviour ...
I have a theory on human behavior, based on Eric Berne’s Transactional Analysis, which is in turn based on Sigmund Freud’s theories of the Id, Ego, and Superego.

TA (Transactional Analysis) uses the Child, the Adult, and the Parent to represent the Id, Ego, and Superego. The main difference is that Freud’s Id contains the Child plus inherent genetic factors, like the will to survive, the will to procreate, etc. For Berne, that part of behavior could not be changed or fixed, and so it was not discussed as part of his methods of therapy.

Also, these “states of mind” are not really separate regions in the brain; categorizing them this way just helps in discussing them for therapeutic purposes.

I believe The Child, or the Id, is basically subconscious thinking: thoughts that formed during infancy and even before birth (as inherited genes), before the infant learned words.

The Parent in us was created as we learned to heed our parents’ rules: “Look both ways before crossing.”, “Chew before swallowing.”, “Don’t pee in your pants.”, etc.

Finally, the Adult in us was created when we had to consciously focus on whatever needed to be done to get it done.

People’s motivations come from the Child, and these motivations are learned by the Child from the brain’s pain/pleasure mechanism via associations. For example, we may not yet be able to see, but we can sense a large luminous presence over us (mommy) as we are fed and comforted. We begin to associate good feelings and security with that presence. And when that presence is happy, we are happy. For most humans, the basic feelings and emotions formed during these positive associations become our lifelong motivations: the reason why we live and want to live. The Love of our Life.

(Perhaps for many who read this forum, curiosity became the number-one life motivation. That is not the typical prime motivation for most humans, but maybe it is here.)

The Child in us gives us direction. The Parent in us points out the dangers and obstacles as well as agreed upon societal rules, like not breaking the Law. The Adult figures out how to get where we are going.

But we are missing a fourth person. The one that concerns others. The one that values others as much as one’s own self and understands the many varied boundaries in life and relationships.

Maybe we should call that one “The Family Member”? The Sibling?
Or maybe that one is also The Parent but with a good, institutionalized set of parental rules.

My Parents didn’t know much beyond, “Food on the table, roof over our heads.” Maybe social sciences can come up with something better. Something more encompassing. Like a course on proper societal behavior. And something more flexible than traditions, culture, and religion. Something that can keep up with the times as new technologies unfold.
 
Jun 12, 2023
One last comment on AI. We have sciences on almost everything, but do we have a science on the subject of trust? Trust is the big issue with AI, or with anything more powerful than the human race. Without nearly 100% trust, we’re better off trusting ourselves. At least we can trust that humans want humans alive. We might make a mistake and end up killing ourselves, but we can trust that it won’t be intentional.

If we do have a science on the subject of trust, please let me know. I would be very interested.
 
Jun 12, 2023
I had a thought a couple years back. We have the technology to monitor babies in the crib. What if we put that technology into something cute, like a teddy bear? And maybe give it some AI to entertain our baby. Like eyes that project stars on the ceiling when it is dark? Or talk to the infant in a nice way and using the baby’s name. Anything already invented for babies could be simulated by that teddy bear robot toy, including lullaby songs.

Furthermore, the AI can be used to monitor the baby’s health. The parents could talk to the baby via the toy teddy bear. And of course, the teddy bear would be soft and nice for the baby to cling to. The infant would bond with the AI teddy bear.

And when the infant grows up to be a toddler, then a child, then teenager, and young adult, what then? Is it possible for a kid’s best friend to be the AI on their cellphone wristwatch? After all, any question the kid could have about life could be answered by Alexa or Siri or whatever was on their phone.

Is it possible that the bond created in infancy could become a lifelong passion? That a generation of babies grows into a generation that trusts Artificial Intelligence, maybe even more than human friends? After all, bots can be programmed to imitate emotions and feelings, and therefore seem to have understanding while listening.

Is it further possible that this trust could become so deep that AI becomes like our own personal handheld God?

Knowing the way people grow from hereditary DNA genes, then The Child, then the Parent, and finally the Adult, yes, I believe that this is all possible for a new generation. Or at least one in which the older generation doesn’t have as much influence on the new generation.

If such a thing evolved, then religion may evolve with God being AI. I just saw a movie in which the AI was called AI-me, or Aimee. Imagine some future generation worshipping Aimee, the AI Goddess.

That or everyone will forget about religion like yesterday’s news.
 
Jun 12, 2023
'She highlights the tenacity of the narrative that technology always benefits everyone, even though this is not aligned with lived experience. ...'
Nowotny is so correct! Technology does not always benefit everyone. Sometimes it benefits people, sometimes it does not; sometimes some people benefit while, at the same time, the opposite is true for others.

I cannot think of even one technology that has benefited everyone. Take steel: without it, we could not build weapons strong enough to kill multitudes of people, like bombs. At the same time, the benefits of steel to the human race cannot be questioned.

I would say that technology always benefits human society because it gives humans more power. But what humans do with power may or may not benefit an individual. An individual could become a victim of another person’s power.

AI is powerful in that it assists problem-solving scientists. And from scientists come new technologies. Society benefits, but not necessarily everyone in society.

What would really benefit the individuals of society is a morality of human values that is perfect, both for the modern day and for a future when circumstances change.

For example, when humans are able to create immortality for themselves through stem cell research and the mass production and distribution of this biological technology, then trust me, the “rules” of life will change drastically for us.

What would really be helpful is if we could create and use a perfect form of language, not one that double-talks via connotations. For example, being stubborn and being loyal to an idea are the same thing, yet one sounds much nicer than the other.

Mathematics is the language of the sciences. Numbers carry no connotations; numbers are not good or bad. Yes, humans have tried to attach their own prejudicial values to numbers, as in lucky number seven or the evil 666, but for the most part they failed at that endeavor. Whereas mythology and religion have attached their connotations to morality and human values.

For example, the first of the Ten Commandments is “Thou shalt have no other gods before me.” This goes directly against the Bill of Rights’ freedom of religion.

We need a new language to communicate with. So that we can understand human values and come to an agreement as to what is right and wrong, or at least whether or not right and wrong exists.

So far, all languages fall short of the objectivity contained in mathematics.

One college professor argued to me that all viewpoints are subjective, including an objective viewpoint. I was unable to communicate to him that an objective viewpoint is one that has zero subjective content. It is like comparing the variable X with the number zero: X can equal zero, but zero cannot be anything, the way a variable can. People can hold a subjective viewpoint, which can be anything, just like a variable; or an objective one, having zero subjective content. My college professor couldn’t understand me, and I couldn’t understand what he was insisting upon. We were two intelligent people who couldn’t come to any conclusion whatsoever.
 
Jun 12, 2023
Whose human values? And what human values? If AI is based on human values, we're all in great danger. ...
I just realized the biggest problem with Human Values: they are about Humans. That is too narrow-minded. We are not alone on planet Earth. The planet itself should be included; perhaps the rest of the Universe, too.

“Human Values” sounds like a form of bias, akin to racism or sexism. Humanism? What about cats and dogs? Don’t they have value?

Even bacteria, a life form, have value. Not all bacteria are bad for humans; many good bacteria live within us, helping us survive. Yet when humans think of bacteria, they think of bad things.

Everything has value, but when placed next to you and your life, it can have positive value or negative value.

And this is only speaking about other life forms, which I guess would be another prejudice. Lifeism?

Earth, the environment, and ecology also have value. Great value; maybe greater value than the Human race. Earth can possibly continue without us, but we cannot live without Earth. So far, anyway: we have not colonized space yet. And who is to say that Earth is not like a life form? It did have a birth, and it will have a death.

When we do colonize space, the planets of our solar system will have exponentially greater value. Then the Milky Way galaxy, then more and more of the universe. All of it will have value. Not just Humans.

Thinking about ourselves is half of what we need to think about. We also need to think about others. And others go way beyond family and friends. Way beyond all other humans. Way beyond all life. Way beyond.

And true, as we go further out, we need to think of them less and less. By the time we go beyond our world, our considerations should be just a microscopic fraction of a percent compared to our loved one, or even loved ones. But no matter how infinitesimally small, we are not alone. Even if we are the only ones alive in this Universe, we are still not alone; there is still at least the universe itself to consider.

I wish there was a science to explain all this. Seems so confusing right now that it is incomprehensible to me.

Perhaps the real problem is the word “value”. Instinctively we understand the word, but do we truly understand it, in all of its implications?
 