Be complacent about AI at your own peril

Technology has improved human productivity and well-being. AI is the latest technology. But AI is being pushed with unprecedented force, comes with significant concerns, and offers infinite power to its producers. We are accepting the prospect of living on welfare (guaranteed income) in exchange for a refrigerator that orders our meals.

Humanoid robots that can do complex tasks

We are in the midst of the AI revolution, and like the Industrial Revolution before it, the AI revolution has the potential of improving the human condition.

But like every economic and societal disruption before it, AI comes with concerns. In AI’s case, very big concerns that we ignore at our own peril.

The hard sell.

Everywhere we look, something about AI is there – events at work, the news, ads, posts on social media, YouTube.

Radios, TVs, calculators, and personal computers were marketed by their producers, of course, but they were not shoved down our throats to the extent AI tools are today.

The hard sell is understandable. Older technologies aimed for smaller, faster, cheaper; AI is content with gargantuan and exorbitant, so long as it goes faster. That requires an investor and adoption frenzy exceeding Beanie Baby levels.

So, no time for thoughtful deliberation.

  • The latest in AI news is Mythos’ surprising abilities, as described on Anthropic’s Project Glasswing website. Mythos Preview discovered high-severity vulnerabilities “in every major operating system and web browser.”

Those abilities would be great, except that patches will not be available as fast as Mythos’ capabilities proliferate, “potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe.”

The diagnostic was a success, but the patient died.

  • The Internet is a messy place. It contains digitized great books, as well as all manner of eyeball-catching content. Massive amounts of that stuff are scooped up by web crawlers and scrapers, given some cleanup (removing HTML tags, duplicate entries, ads), dumped into pre-training buckets, then made available to Large Language Models to train on. LLMs can then be fine-tuned into an agentic juggernaut like Mythos, or into an application that keeps track of the yogurt supply in our refrigerator.

All good, except that the old saying “garbage in, garbage out” comes to mind, most obviously in LLM bias and sycophancy. Bias comes from which people, groups, genders, and races are most likely to end up on the Internet, where LLMs are born. Sycophancy comes from today’s tendency toward personalization, which creates perfect echo chambers in chatbots, AI assistants, and social media.

AI interactive applications are made to please, not challenge or speak objective truth.
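For readers curious what the “some cleanup” step above actually looks like, here is a minimal illustrative sketch in Python. It is not any real company’s pipeline (the function name, the crude “sponsored” ad filter, and the sample pages are all invented for illustration); it only shows the kind of tag-stripping and de-duplication the article describes, which is exactly where the garbage-in, garbage-out problem enters.

```python
import re

def clean_for_pretraining(raw_pages):
    """Toy cleanup: strip HTML tags, drop ad-like lines and exact duplicates."""
    seen = set()
    cleaned = []
    for page in raw_pages:
        text = re.sub(r"<[^>]+>", " ", page)        # remove HTML tags
        text = re.sub(r"\s+", " ", text).strip()    # normalize whitespace
        if not text or "sponsored" in text.lower(): # crude ad filter
            continue
        if text in seen:                            # drop duplicate entries
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned

pages = [
    "<p>Great books, digitized.</p>",
    "<div>Sponsored: buy now!</div>",
    "<p>Great books, digitized.</p>",  # duplicate entry
]
print(clean_for_pretraining(pages))  # ['Great books, digitized.']
```

Note what such filters do not catch: nothing here measures whether the surviving text is accurate, representative, or balanced. Whatever made it onto the Internet in the first place is what the model learns from.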

  • Generative AI like ChatGPT or Gemini can significantly increase productivity and ease of work. Such applications can generate writing, images, and code, and can analyze and summarize data and information. As of early 2026, chatbots were primarily used as writing assistants, planners, and information searchers.

Concern arises when this technology is used often and consistently with minimal or biased prompts (loaded questions), resulting in little or no engagement from the user. Users who work this way will learn nothing, fail to contribute their own well-thought-out ideas, and risk cognitive atrophy. Here is an interesting response to those who compare AI chatbots with earlier technology like calculators.

“The calculator analogy is comforting—it was received with panic that eventually proved unwarranted—but may not hold. Calculators automate computation, which is mechanical. AI chatbots automate reasoning, argumentation, synthesis, and creative expression, the cognitive activities that are the skill rather than a means to it. When a calculator does your arithmetic, you lose arithmetic; when AI does your thinking, you lose thinking.” What the Studies Say About How AI Affects Your Brain: A (Very Big) Compilation, April 15, 2026.

  • Yes, there is a real possibility that AI robotics will take over most jobs. The touted solution for what to do with a superannuated workforce is to devise some form of guaranteed income, with its form, source, responsible party, and distribution method all yet undecided.

Elon Musk suggested guaranteed income be distributed via government checks. Sam Altman prefers a wealth fund seeded and managed by collaboration between policy makers and AI companies.

Just trust. They are from tech, and they are here to help.

  • Equally unclear is quality of life under the AI scenario. In the past, there have been societies in which the few educated affluent enjoyed a life of study, philosophy, political discourse, and art. Such a lifestyle was made possible by the work of a vast population of slaves.

It is tempting, therefore, to visualize robots as the new slaves, and humans as the happy wealth-receiving few. Let that sink in.

The vision merits caution.

A plethora of robots, welcomed by humans and performing better than humans, could generate significant wealth through productivity.

Problem is, the AI elites are framing that rosy scenario in a vacuum, where ordinary things do not exist. Ordinary things — like events in history and human nature — need to be included in the scenario.

Remember the Industrial Revolution back in the 18th – 19th centuries? Factories sprang up, production of goods increased, and theoretically abundance for all should have followed.

It did not. What did happen was the industrial elites (John D. Rockefeller, Andrew Carnegie, J.P. Morgan, Cornelius Vanderbilt, Henry Clay Frick, Jay Gould, and Andrew W. Mellon) grew enormously wealthy, and most everybody else lived in great poverty. (What lifted people from that level of abject poverty is a related subject for another day).

Today, the elites are still with us generating great wealth for themselves. The poor are also still with us. So, talk about tremendous wealth arising from AI productivity needs to include talk about distribution of that wealth in specific terms.

Human nature is what it is. Those more blessed with brains and motivation rise to the top, get rich, and choose how they will use their wealth. The rest do the best they can. This is asymmetrical power that must also be included in the AI scenario.

The Wired Belts could push for more transparency.

At this point, some might say that accepting the thin scenario of ubiquitous AI robotics and human dependence on the unholy alliance of government and AI elites is glaringly unsound.

There is some grumbling from displaced workers. There is also resistance toward the obscenely resource- and land-intensive data centers inserting themselves into neighborhoods. Not to mention the proposals to float massive data centers in low Earth orbit.

However, it seems the hard sell continues, love of AI tools is omnipresent, job losses are largely dismissed, and there is little demand for hard data to back up what the AI elites say is good.

No lessons learned from history?

Did we learn anything from the Rust Belt era of America’s deindustrialization and financialization? Could today’s Wired Belts of tech hubs and university towns call for more unified descriptions, transparency, and town hall discussions of what exactly the average Jane and Joe want out of life?

Picture: Humanoids, unlike specialized robots, can be made capable of doing various tasks, like people. This picture is from an earlier article on AI, Why your new work colleague could be a robot, February 17, 2020.


Discover more from Just Vote No

Subscribe to get the latest posts sent to your email.


Author: Marcy

Advocate of Constitutional guarantees to individual liberty.

