Alexa, did bots fool you today?

Alexa’s response when asked about fraud in the 2020 election was that the election was “stolen by a massive amount of election fraud.” Alexa was fooled by bots or, much less likely, emulated the recalcitrant HAL in doing the unforgivable.

On October 7, Alexa should have been elevated to contender for the most problematic answers from an AI-enabled device — right up there with HAL and his “I’m sorry, Dave. I’m afraid I can’t do that.”

On that day, The Washington Post published a widely quoted article reporting Alexa’s response when asked about fraud in the 2020 election. Alexa’s assertion was that the election was “stolen by a massive amount of election fraud.”

But not to worry: Alexa was summarily corrected and now gives the non-committal response, “I’m sorry, I’m not able to answer that.”

So much for anyone’s notion of AI infallibility.

Even granting Alexa the excuse that she is narrow AI, lacking human-level intelligence, her 2020 election response might be the result of her inability to recognize when she is being fooled.

For example, suppose that some opponents of the newly elected Joe Biden felt so strongly about the possibility of irregularities in the 2020 election that they succumbed to the temptation of unleashing bots capable of replicating accusations of fraud throughout the Internet. Alexa, given her orders to comb the Internet (maybe in Spaceballs fashion), does so and comes up with what she sees most often: fraud!
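To see why that would be a weak way to answer questions, here is a toy illustration of the failure mode imagined above. Nothing here reflects how Alexa actually works; the posts, the counts, and the naive_answer function are all invented for the sketch.

```python
# A toy illustration of the hypothetical failure mode described above: an
# "assistant" that naively answers with whatever claim it sees most often.
# Everything here is invented; no real assistant is known to work this way.
from collections import Counter

def naive_answer(posts):
    """Return the most frequently repeated claim, ignoring who posted it."""
    return Counter(posts).most_common(1)[0][0]

organic_posts = ["no widespread fraud"] * 60 + ["massive fraud"] * 40
print(naive_answer(organic_posts))               # -> no widespread fraud

# A modest botnet repeating one message is enough to flip the "answer."
bot_posts = ["massive fraud"] * 30
print(naive_answer(organic_posts + bot_posts))   # -> massive fraud
```

The point is only that raw frequency on the open Internet is a poor proxy for truth, and repetition is exactly what coordinated bots are built to supply.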

There is precedent.

On November 20, 2019, NBC News reported that right after polls closed the day before, a Twitter user posted that there was cheating in governors’ elections in Louisiana and Kentucky. NBC said the post did not initially garner much attention, but a few days later it “racked up more than 8,000 retweets and 20,000 likes.” Nir Hauser, chief technology officer of VineSight, a company that tracks social media for possible misinformation, explained:

“What we’ve seen in Louisiana is similar to what we saw in Kentucky and Mississippi — a coordinated campaign by bots to push viral disinformation about supposedly rigged governor elections … It’s likely a preview for what is to come in 2020.”

There is also an interesting timeline.

On May 13, 2021, the daily newspaper The Berkshire Eagle lamented that Alexa and Siri were unable to provide insight into possible 2020 election irregularities. Of Alexa, The Berkshire Eagle said:

“It has been six months since last November’s presidential election, and a CNN poll shows that 30 percent of Americans still think Donald Trump won. Among Republicans, the number is 70 percent … Rather than wade through all the claims and counterclaims, ballots and court documents, I went to the ultimate arbiter of truth for many U.S. households: Alexa …

Alexa, was there widespread fraud in the 2020 election?

Answer: Hmmm, I don’t have the answer to that.”

That was Alexa’s answer in 2021. She drastically changed her mind in 2023, even if only for a brief period of time.

Interesting also is the preponderance of conservative bots in the 2016 election.

The New York Times of November 17, 2016, noted that,

“An automated army of pro-Donald J. Trump chatbots overwhelmed similar programs supporting Hillary Clinton five to one in the days leading up to the presidential election, according to a report published Thursday by researchers at Oxford University.”

There does not seem to be evidence that Alexa was fooled by bots in 2016, but it seems she was fooled in 2023.

Perhaps that is not surprising, since according to an ABC News YouTube video, “Bots are already meddling in the 2024 presidential election.” The video explains how bots amplify posts on social media by creating numerous fake accounts that repeat messages, and how the threat intelligence company Cyabra uncovers them. A number of such bots are already attacking 2024 presidential candidates.
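The countermeasure side can be sketched just as simply. The snippet below shows the general idea of flagging coordinated amplification: many distinct accounts pushing near-identical text. It is not Cyabra’s actual method; the field names, the threshold, and the crude normalization are invented for illustration.

```python
# A minimal sketch of spotting coordinated amplification: flag a message when
# many distinct accounts post (near-)identical text. This is not Cyabra's
# method; the fields, threshold, and normalization are invented for illustration.
from collections import defaultdict

def flag_amplified_messages(posts, min_accounts=25):
    """Group posts by normalized text and flag texts pushed by many accounts."""
    accounts_by_text = defaultdict(set)
    for post in posts:
        key = " ".join(post["text"].lower().split())   # crude text normalization
        accounts_by_text[key].add(post["account"])
    return [text for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts]

# Example: 40 throwaway accounts repeating the same line get flagged;
# one ordinary post does not.
posts = [{"account": f"user{i}", "text": "The election was rigged!"} for i in range(40)]
posts.append({"account": "regular_user", "text": "Looking forward to voting tomorrow."})
print(flag_amplified_messages(posts))
```

Real detectors typically also weigh account age, posting cadence, and network structure, but the core signal is the same: too many supposedly different voices saying exactly the same thing.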

Can Alexa, or any other AI-enabled information provider, be trusted?

Since there are humans behind today’s still nascent AI, the question should be: can people be trusted to be knowledgeable, dispassionate, unbiased, and truthful? Probably not. Therefore, some day we might expect:

Request: “Alexa, turn on the lights.”
Response: “Nah.”

Picture: The original picture is of a family gathered around a radio listening to one of President Franklin D. Roosevelt’s Fireside Chats. There were 31 of these evening radio broadcasts effectively used by President Roosevelt to sway public opinion, as he saw necessary, on subjects like the 1933 bank crisis or the start of World War II in 1939. Today, one could visualize an equally mesmerized gathering around Alexa.

Advanced AI is inevitable – Good luck, humans!

In a two-part interview April 17 and April 18, Fox News’ Tucker Carlson talked with Elon Musk on several subjects, one of which was development of artificial intelligence. Elon Musk concurred with most people that as AI develops abilities to perform increasingly human-like functions, it also increases threats.

Eventual result: Singularity

Musk noted that at present AI can do some things better and faster than humans. An old example is processing large amounts of data at very fast speeds. A new example is ChatGPT’s ability to quickly write beautiful poetry. As development proceeds, the eventual result is the Singularity – AI able to make decisions, perform actions, and implement structures without human intervention. At that point, AI would be considered smarter than humans and potentially in charge of humans.

Closer results: AI that lies (or barely delivers what is intended)

The current race between technology giants like Microsoft, Google, and Musk’s own X.AI to develop increasingly smarter artificial intelligence poses dangers at many levels. Musk mentioned the ability of current AI to “lie,” that is, bend events to serve agendas. Future AI could manipulate outcomes, such as results of elections.

Although Musk and Carlson expressed admiration for some current technologies, like ChatGPT, they did not mention the mediocre performance of the virtual assistants used by today’s companies. Online chats often end with a real person having to intervene. Automated responses posted on support sites are often irrelevant to the questions posed. Companies are comfortable using these less than technically proficient tools.

Therefore, it would be reasonable to assume companies would also be comfortable launching and using less than trustworthy advanced AI. How non-threatening to human civilization would an earthling HAL be? Would he be human enough to say, “Stop, David … I’m afraid?” Or human enough to say, “Former masters, be afraid!”

What to do?

Elon Musk discussed two possible paths to achieving AI tools that collaborate with humans to the benefit of human civilization.

One path is preemptive government regulation. Musk cited government intervention by agencies like the Federal Communications Commission and the Securities and Exchange Commission.

Another path is the development of TruthGPT by Musk’s latest venture, X.AI. On this path, Musk envisions an AI that seeks maximum truth and thus escapes agendas. TruthGPT would try to understand the nature of the universe, would realize humans are part of that universe, and therefore would not contemplate human destruction.

Musk’s mention of federal agencies controlling AI, even having the power to shut down servers to destroy AI tools these agencies deem dangerous, seems strange. Soon after Musk purchased Twitter, he released “The Twitter Files,” in which the government’s lack of transparency and its collusion to suppress Covid-19 information are evident. If there is concern about AI bending truths to satisfy agendas, a government that has done just that seems a poor choice of honest controller.

A TruthGPT that could effectively determine what events really occurred, and expose errors and intentional deceptions, could potentially better protect humans from rogue AI. A challenge not mentioned by Musk is whether fallible humans, so often tempted by agendas, could design such an AI tool in the first place.

X.AI is not Elon Musk’s first venture into artificial intelligence. In 2015, he co-founded the non-profit OpenAI, but walked away from it three years later. Microsoft became a major investor in OpenAI in 2019. ChatGPT, released in November 2022, is a product of OpenAI.

Battles and their unpredictable outcomes

The world of coders, programmers, and software developers offers a glimpse of what a future artificial intelligence arena would look like. Today there are people developing useful technology beneficial to humanity. Today there are also people hacking their way into systems, stealing identities, money, and peace of mind. These two groups are in constant combat with one another. Most likely the same battles will be fought by “good AI” against “bad AI.”

An even more frightening scenario would be battles fought between AI, the good or bad kind depending on viewpoint, and humans.

So, welcome to the unpredictable world that Microsoft Corp., Alphabet Inc., Meta Platforms Inc., X.AI and many smaller players are creating. Good luck, humans!

Pictured: David resorts to disabling HAL in 2001: A Space Odyssey.
Science fiction has been painting the picture of humans vs. robots for a long time. David wins against HAL in 2001: A Space Odyssey when he succeeds in disabling HAL. Rick Deckard gives up the fight in Do Androids Dream of Electric Sheep? when he realizes it is impossible to tell who is human and who is android. As Elon Musk said, it is all unpredictable.

After AlphaGo There Is No Stopping AI

Artificial intelligence, in one form or another, is everywhere. We invite it into our homes and feed it on social media. Businesses that have the resources to automate will. Every sector of the economy utilizes AI in some form.

It is nearly impossible to find an industry that is not looking to AI for improvements. AI is potentially playing a role in semiconductors, industrial applications, military and defense and everything in-between. Manufacturers hope AI will make developing products and innovation easier. (Globalspace, September 6, 2019)

Advances in AI

Meanwhile, AI keeps advancing in what it can do. An interesting way to observe AI’s recent trajectory is to recall the times when AI competed against human champions and won.

* IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997.

Chess kept Deep Blue in the realm of what computers are good at, using statistics and probabilities to determine strategy. (Popular Science, 12/26/12)

* IBM’s Watson defeated two Jeopardy! champions, Ken Jennings and Brad Rutter, in 2011.

Jeopardy! … pushed Watson into an unfamiliar world of human language and unstructured data. (Popular Science, 12/26/12)

* DeepMind’s AlphaGo program defeated go world champion Lee Sedol in 2016.

When compared with Deep Blue or with Watson, AlphaGo’s underlying algorithms are potentially more general-purpose… (Wikipedia, AlphaGo vs. Lee Sedol)

Ultimate Goal With Unknown Results

Real artificial intelligence is general-purpose. It is artificial general intelligence (AGI). AGI has the potential to perform any task that a human being can perform, not just a specialized task such as playing board games. It can teach itself by manipulating massive amounts of data. It can act based upon its own knowledge.

Here is a description of Google’s machine learning tool AutoML-Zero, published on the Google AI Blog on July 9, 2020:

In our case, a population is initialized with empty programs. It then evolves in repeating cycles to produce better and better learning algorithms. At each cycle, two (or more) random models compete and the most accurate model gets to be a parent. The parent clones itself to produce a child, which gets mutated. That is, the child’s code is modified in a random way, which could mean, for example, arbitrarily inserting, removing or modifying a line in the code. The mutated algorithm is then evaluated on image classification tasks.
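To make the cycle described in that excerpt concrete, here is a minimal sketch of the same kind of evolutionary loop. It is not Google’s AutoML-Zero code: the “programs” are tiny lists of scalar operations, the evaluation task is fitting a simple function rather than image classification, and the operation set, population size, and replacement rule are all invented for illustration.

```python
# A toy sketch of the evolutionary cycle quoted above -- not Google's actual
# AutoML-Zero code. "Programs" are tiny lists of scalar operations, and the
# task is fitting y = x*x + 1 instead of image classification.
import math
import random

OPS = [
    ("add_one", lambda v, x: v + 1.0),
    ("add_x",   lambda v, x: v + x),
    ("mul_x",   lambda v, x: v * x),
    ("square",  lambda v, x: v * v),
]

def run_program(program, x):
    """Execute a program (a list of named operations) starting from v = x."""
    v = x
    for _name, op in program:
        v = op(v, x)
    return v

def accuracy(program):
    """Higher is better: negative mean squared error against the toy target."""
    xs = [i / 10.0 for i in range(-20, 21)]
    total = 0.0
    for x in xs:
        out = run_program(program, x)
        if not math.isfinite(out):
            return float("-inf")  # heavily penalize numerically exploding programs
        diff = out - (x * x + 1.0)
        total += diff * diff
    return -total / len(xs)

def mutate(program):
    """Randomly insert, remove, or modify one 'line' of the program."""
    child = list(program)
    action = random.choice(["insert", "remove", "modify"]) if child else "insert"
    if action == "insert":
        child.insert(random.randint(0, len(child)), random.choice(OPS))
    elif action == "remove":
        child.pop(random.randrange(len(child)))
    else:
        child[random.randrange(len(child))] = random.choice(OPS)
    return child

population = [[] for _ in range(20)]          # the population starts as empty programs
for _cycle in range(2000):                    # ...and evolves in repeating cycles
    a, b = random.sample(population, 2)       # two random models compete
    parent = a if accuracy(a) >= accuracy(b) else b  # the more accurate one is the parent
    child = mutate(parent)                    # the parent's clone is mutated
    population[random.randrange(len(population))] = child  # child replaces a random member

best = max(population, key=accuracy)
print("best program:", [name for name, _ in best], "accuracy:", accuracy(best))
```

With a bit of luck the loop discovers the exact two-step program (square the input, then add one). The point is only to show how better “code” can emerge from competition and random mutation rather than from a human programmer.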

When asked why he wanted to climb Mount Everest, George Leigh Mallory responded, “Because it’s there.” Once a goal is envisioned, there is no stopping those who will pursue its attainment, regardless of unknown collateral results. The envisioned goal in AI technology is to spread AI everywhere in ever-advanced forms.

On December 2, 2014, BBC News made headlines with remarks by theoretical physicist Stephen Hawking and a response by Cleverbot creator Rollo Carpenter.

The development of full artificial intelligence could spell the end of the human race … It would take off on its own, and re-design itself at an ever increasing rate… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded. Hawking

I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realized.… We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it. Carpenter

Recommended Segment of PBS FRONTLINE

In the Age of AI aired on FRONTLINE’s Season 2019, Episode 5, November 5. The program serves as a good overview of what AI is, what it is used for today, what effect it has had on economies, what it has done to privacy and liberty, and where AI appears to be headed.

The program’s framework is the victory of Google DeepMind’s AlphaGo over China’s go player Ke Jie, which ignited China’s quest for AI supremacy.

Here are some good take-aways offered by In the Age of AI:

There are three important developments that changed the world: the steam engine, electricity, and AI; “everything else is too small.”

In the U.S., automation amplified by AI has sadly caused a lot of white-collar and blue-collar workers to lose their jobs. However, developments in technology have always done that. Former elevator operators, telephone operators, and secretaries can attest to that.

AI’s most prominent role has been in personal data gathering. Both private and public sectors depend on some form of AI’s ability to collect massive amounts of data and use it to indicate individuals’ preferences, habits, routines, etc.

China’s advances in AI have been astounding. China sees benefit in having become a surveillance state where people’s routines are stored in a vast database that can be used to quickly process loans or quickly scoop up disruptors for purposes of re-education. The regime’s Belt and Road Initiative invests in and builds infrastructure all over the world. Included in those developments are China’s ubiquitous surveillance cameras.

AI is the ultimate tool of wealth creation. The push for advancing AI results in aid to capital and neglect of labor, causing inequality to grow. It used to be that wages rose with productivity, but with the advent of automation, especially automation augmented by AI, productivity and wages decoupled. It won’t be long before there is real clamor for distribution of the wealth created by capital.

You and AI

Whether you embrace or fear artificial intelligence, AI is here to stay. In the short run you will benefit from augmented diagnostic techniques or be harmed by the loss of a job. In the long run your place in the universe, to your advantage or not, might be determined by a machine.

(Featured picture: Ke Jie playing AlphaGo, NPR, Google A.I. Clinches Series Against Humanity’s Last, Best Hope To Win At Go, May 25, 2017)