Will AI become self-aware? If “sentient” means the ability not only to think, but also to feel, to evaluate actions, and to have some degree of awareness, will AI become sentient? Will AI go into “survival mode”? An awesome IQ is fine if it is programmed to help humans survive, but if it shifts into self-survival mode, will AI become the enemy?
For individual human beings, self-image (the hair thing) is important since we work as a group and must have some acceptance within the group. AGI (pronounced “A Guy”) will want his name in all caps, “GUY,” to give the impression that he fits right in too. Does GUY have friends? Does he or she need friends? Does GUY have sex? Silly question! . . . I mean gender? Does GUY have gender? Does it matter?
The level of AI intelligence may not be as important as who will control it. Will it be used for the benefit of all humans – or just a select few? One answer is already visible: AI is currently being used for facial recognition in combat situations. Britain, too, now requires facial ID scans, while at the same time police in Pennsylvania are allowed to wear masks to cover their faces. Some concerned citizens are pushing back against this step toward a police state.
GUY might decide that democracy is not the best form of government. He might also decide that capitalism is not working. Is the Nation-State out of date? Probably not, but changes may need to be made.
When AI makes a decision, we will only ever have second-hand knowledge of it. An “official advisor” will let us know, or maybe some communication robot will make a public announcement, blasting it out like the 911-style emergency alerts on everyone’s cell phone.
What AI really “thinks” will turn into debates like those over the existence of God. Does GUY really exist? Maybe GUY is like one of the many gods of the Greeks and Romans. This will certainly spark arguments among friends and neighbors, but that may be a good thing, even essential, if we are to move forward. In any conflict, the side that communicates, i.e., talks, has a definite advantage while the other side spends its time gaslighting the public. Debates held in private will at least surface honest opinions, if not forge an answer or a pathway to a better solution.
What if ChatGPT is lying to us? “How Do U Know?” That is the key question. We may find an answer by creating human search engines: people acting as receivers or tuners who sort misinformation from useful information. Add competition and random mixing of teams to engage people in the search, bringing what they find back to share with their team.
In the parable of the Emperor’s New Clothes, there was no sudden realization that the Emperor had no clothes. Everyone knew it at some level; our own knowledge of human behavior tells us this would be the case. It was fear that kept people quiet. Maybe they were afraid of the guards, but more likely they were afraid of looking stupid to others in the crowd. Fear is a major point of resistance to finding out what is really happening.
Humans can, over years and generations, adapt to almost any change in social, political, and natural environments. GUY may try to control the environment. This may be an impressive talent, but probably not a good long-term strategy.
The Key Question remains: “How do you know?” Many fact checkers have lost their credibility, or maybe readers have lost their reason to search; neither has a vested interest in the survival of one specific community. A community, or a Network of Communities (C-Net), must be the basis of morality. In a real sense, it has always been that way. The local community is where Maslow’s needs are met, and together such communities form the building blocks of a larger, healthier society. C-Nets can accept the positive parts of GUY and keep an eye on, or reject, the negative aspects. That is how it must be if humans are to survive. Then GUY becomes a tool and will serve mankind. C-Nets must challenge each other and keep an eye on one another, as siblings do.
How long do we have to take control of AI? Maybe a few years. C-Net is probably the best filter we can build for the intentions of GUY in relation to the survival of the group, whatever that group might be. The Heisenberg Uncertainty Principle states that it is impossible to know both the exact position and the exact momentum of a particle at the same time. The same may be true of information: its origins and purpose can never be known exactly. But we can get closer than we are now.