Advanced AI is inevitable – Good luck, humans!

Tucker Carlson recently talked with Elon Musk about artificial intelligence. Musk concurred with most observers that as AI develops the ability to perform increasingly human-like functions, the threats it poses also grow.

HAL from 2001: A Space Odyssey

In a two-part interview on April 17 and 18, Fox News’ Tucker Carlson spoke with Elon Musk on several subjects, one of which was the development of artificial intelligence.

Eventual result: Singularity

Musk noted that AI can already do some things better and faster than humans. An old example is processing large amounts of data at very high speed. A new example is ChatGPT’s ability to quickly write beautiful poetry. As development proceeds, the eventual result is the Singularity: AI able to make decisions, perform actions, and implement structures without human intervention. At that point, AI would be considered smarter than humans and potentially in charge of them.

Closer results: AI that lies (or barely delivers what is intended)

The current race among technology giants like Microsoft, Google, and Musk’s own X.AI to develop ever-smarter artificial intelligence poses dangers at many levels. Musk mentioned the ability of current AI to “lie,” that is, to bend events to serve agendas. Future AI could manipulate outcomes, such as the results of elections.

Although Musk and Carlson expressed admiration for some current technologies, like ChatGPT, they did not mention the mediocre performance of the virtual assistants companies use today. Online chats often end with real people having to intervene. Automated responses posted on support sites are often irrelevant to the questions asked. Companies are evidently comfortable using these less-than-proficient tools.

Therefore, it is reasonable to assume companies would also be comfortable launching and using less-than-trustworthy advanced AI. How non-threatening to human civilization would an earthling HAL be? Would he be human enough to say, “Stop, Dave … I’m afraid”? Or human enough to say, “Former masters, be afraid!”

What to do?

Elon Musk discussed two possible paths to achieving AI tools that collaborate with humans to the benefit of human civilization.

One path is preemptive government regulation. Musk cited government intervention by agencies like the Federal Communications Commission and the Securities and Exchange Commission.

Another path is the development of TruthGPT by Musk’s latest venture, X.AI. On this path, Musk envisions an AI that seeks maximum truth and thus escapes agendas. TruthGPT would try to understand the nature of the universe, would realize humans are part of that universe, and therefore would not contemplate human destruction.

Musk’s mention of federal agencies controlling AI, even having the power to shut down servers to destroy AI tools the agencies deem dangerous, seems strange. Soon after Musk purchased Twitter, he released “The Twitter Files,” which documented the government’s lack of transparency and its collusion to suppress Covid-19 information. If the concern is AI bending truths to satisfy agendas, a government that has done just that seems a poor choice of honest controller.

A TruthGPT that could effectively determine what events really occurred, and expose errors and intentional deceptions, could better protect humans from rogue AI. A challenge Musk did not mention is whether fallible humans, so often tempted by agendas, could design such an AI tool in the first place.

X.AI is not Elon Musk’s first venture into artificial intelligence. In 2015, he co-founded the non-profit OpenAI, but walked away from it three years later. Microsoft became a major investor in OpenAI in 2019. ChatGPT, released in November 2022, is an OpenAI product.

Battles and their unpredictable outcomes

The world of coders, programmers, and software developers offers a glimpse of what a future artificial intelligence arena might look like. Today there are people developing useful technology beneficial to humanity. There are also people hacking their way into systems, stealing identities, money, and peace of mind. These two groups are in constant combat with one another. Most likely the same battles will be fought by “good AI” against “bad AI.”

An even more frightening scenario would be battles fought between AI, the good or bad kind depending on viewpoint, and humans.

So, welcome to the unpredictable world that Microsoft Corp., Alphabet Inc., Meta Platforms Inc., X.AI and many smaller players are creating. Good luck, humans!

Pictured: Dave Bowman resorts to disabling HAL in 2001: A Space Odyssey.
Science fiction has been painting the picture of humans vs. robots for a long time. Dave Bowman wins against HAL in 2001: A Space Odyssey when he succeeds in disabling him. Rick Deckard gives up the fight in Do Androids Dream of Electric Sheep? when he realizes it is impossible to tell who is human and who is android. As Elon Musk said, it is all unpredictable.

Author: Marcy

Advocate of Constitutional guarantees to individual liberty.
