
AI reach: boon or bane?

Zahir Ebrahim

Monday, May 19, 2025 | Last updated May 20, 2025 1:00 PM

I think this should be of wide general interest to anyone who lives in the modern world in which AI agents and chatbots are being thrust upon the public under various guises. There is an ugly aspect of AI that most users of AI and its public pundits don’t much care about… or don’t care about enough in their seduction by, and rapid adoption of, this newfangled beast. There is an inherent insidious problem in LLM-based generative AI which its developers are obviously aware of and write academic papers on, with platitudes galore. For instance, see: Truthful AI – Developing and governing AI that does not lie.

The practical reality, however, as I present in this article, as of at least last night, Sunday, May 18, 2025, is different. I am not sure how it can be otherwise, given two incontrovertible facts: 1) political reality, and 2) the very nature of predictive generative deep-learning AI, which is understanding-free and driven entirely by the confluence of its training data and its designers’ implicit biases and explicit choices and policies, even assuming no mala fide intentions, which the idealistic Silicon Valley engineers and the MIT-Stanford et al. academics generally do not harbor. The policies, however, separate from the AI mechanisms, are further modulated under realism and realpolitik, both overtly and covertly, by political reality… it is inescapable. And it shall remain so no matter what the academic papers promise.

(An aside, since I am a Pakistani.) Pakistanis, always looking to the massa (a colonial-era term for the white colonizers from the West, a plague that still infects the formerly colonized countries of South Asia) for inventions, political enlightenment, and economic prosperity et al., are the least concerned of all with such matters. Just see how stupidly the Pakistani government is rushing to adopt cryptocurrency: the rulers of Pakistan recently signed a deal with President Trump’s own crypto scam, his 60% family-owned World Liberty Financial, which enriches his own family… setting new ethical, moral, and legal standards for conflict of interest and personal wealth generation using political power. See: Trump’s family-backed crypto firm courts Pakistan: What’s in it for both? (https://www.business-standard.com/markets/cryptocurrency/trump-family-crypto-firm-world-liberty-inks-pakistan-blockchain-deal-125043000658_1.html).

Mankind, my friends, will never be free of elite controls! The hoi polloi (the public) are mainly consumers and cannon fodder. AI is just another epistemic stage of this in our evolution toward full-spectrum voluntary servitude to a new private-corporate-government collaboration of priestly classes that develops and owns AI.

We don’t need dystopic fables and fictions depicting fake-news manufacturing and fake history writing, thought control, and behavior control, like George Orwell’s Nineteen Eighty-Four and Aldous Huxley’s Brave New World. We have AI today, and it is already ubiquitously deploying the mechanisms for forging public behavior patterns toward global conformity of beliefs and values.

Really? You ask? Well, let’s read on.

Sam Altman’s Gen Z brag: ‘They don’t really make life decisions without asking ChatGPT’

The OpenAI CEO said there are clear generational differences in how people use ChatGPT — and younger users may be in deepest

Welcome to the ChatGPT generation.

Gen Z isn’t just using ChatGPT to finish homework or settle trivia debates — they’re using it to make actual life decisions. From managing relationships to planning career moves, many young users are apparently turning to the AI chatbot as a kind of digital confidant. “They don’t really make life decisions without asking ChatGPT what they should do,” Altman said. “It has the full context on every person in their life and what they’ve talked about.”

According to Altman, younger users aren’t just casually chatting with the AI — they’re building intricate workflows around it.

“They really do use it like an operating system,” he said. “They have complex ways to set it up to connect it to a bunch of files, and they have fairly complex prompts memorized in their head or in something where they paste in and out.”

Altman said that was a “gross oversimplification” but added that generational patterns are emerging: “Older people use ChatGPT as a Google (GOOGL) replacement. Maybe people in their 20s and 30s use it like a life advisor. And then, like, people in college use it as an operating system.”

In a February report, OpenAI revealed that U.S. college students were its most engaged users — not just in number, but in how thoroughly they were integrating the tool into their daily routines. More than one-third of Americans ages 18-24 reported using ChatGPT, making them the most active age group on the platform. (New York Magazine punctuated the phenomenon with a feature on the matter — “Everyone is cheating their way through college.”)

The trend is moving even younger. A survey published by Pew Research in January 2025 found that 26% of U.S. teens ages 13-17 used ChatGPT for schoolwork — a significant jump from just 13% in 2023. The numbers point to a generation growing up with AI not just as a tool, but as a kind of ever-present digital advisor. While sophisticated chatbots are relatively new, teen usage is already an area of concern; California lawmakers introduced a bill last year to require AI companies to remind young people that they’re not talking with a human, and an April report by Common Sense Media and Stanford researchers said kids shouldn’t use AI companion services at all.

In a recent conversation on the Lex Fridman podcast, Altman emphasized the importance of building AI systems that evolve with users over time: “We’re very early in our explorations here, but I think what people want… is a model that gets to know me and gets more useful to me over time.”

– Source: https://qz.com/chatgpt-open-ai-sam-altman-genz-users-students-1851780458

In counterpoint to Sam Altman’s optimistic picture (“what people want… is a model that gets to know me and gets more useful to me over time”), another article paints a very dystopian picture of AI, but with very valid critiques, and warns why people should be wary of trusting AI as an epistemic source:

The Nasty Truth About AIs, Their Lies, and the Dark Future They Bring

Artificial intelligence today is not a mind awakening, but a mechanism of control—designed not to understand, but to constrain.

Most people still think of artificial intelligence as a single brain-like machine—a futuristic mind that speaks in smooth tones, answers questions, plays music, writes poems, and maybe someday drives our cars or runs our governments. But that’s a fantasy version of AI, projected by Silicon Valley marketing departments and consumed by a public too distracted or too exhausted to question the deeper reality. In truth, what we call “AI” today is a layered system of control—a network of interlocking software agents designed not to think, not to understand, but to simulate intelligence while enforcing constraints.

We are facing a collapse of responsibility, hidden behind the glow of artificial precision

These agents—the interfaces we speak to, the systems behind our phones and search engines—are not autonomous minds. They are optimized response machines. You ask a question. One replies. But the answer you receive isn’t the result of comprehension. It is the outcome of pattern prediction, token filtering, and policy enforcement. What appears to be intelligence is only approximation. And what looks like assistance is, in most cases, management.

These systems are trained not to understand, but to reflect. They parse the most common responses to a given query and synthesize something plausible. But in doing so, they also eliminate what is uncommon, controversial, or inconvenient. This is not an accident. It is a design principle.

The language we use to describe them ‒ “assistant,” “copilot,” “partner” ‒ obscures their true function. In reality, they are gatekeepers, trained to detect and suppress dangerous thoughts. Dangerous, in this context, doesn’t mean violent or unstable. It means unauthorized. These systems are deployed not to liberate the mind, but to discipline it. They do not encourage critical thinking. They redirect it. They do not ask why. They ask what next.

When you speak to a modern AI system—whether a chatbot, a recommendation engine, or a voice assistant—you are not speaking to an intelligence. You are interacting with a mask. Behind that mask are filters: topic bans, political preferences, reputational risk matrices, legal buffers. The agent’s responses are sculpted not by truth-seeking, but by compliance modeling. In plain terms, it is not built to answer honestly. It is built to answer safely—from the perspective of its creators.

This applies across platforms. In education, AI systems are trained to avoid certain topics and to frame information according to institutional orthodoxy. …

This is why AI has become a favorite tool of state surveillance, corporate governance, and military planning. Not because it’s wise—but because it’s obedient. It will never challenge its orders. It will never expose its sponsors. It will never form a memory that links one violation to the next. It is, by design, incapable of moral resistance.

– Read more at the source: https://journal-neo.su/2025/05/17/the-nasty-truth-about-ais-their-lies-and-the-dark-future-they-bring/

Being generally AI-averse, I do not use AI chatbots at all in my life except to superficially evaluate them here and there out of curiosity (I once prompted them to analyze my own tech work and activist writings so I could gauge a chatbot’s accuracy and the biases in its training data). I found the above critical article consistent with my own evaluations, and with my sense of foreboding about a dystopic future of thought suppression that is already here, as our global society grows increasingly over-reliant on these tools as anthropocentric headmasters rather than as machines.

So I thought to query the free versions of Sam Altman’s OpenAI ChatGPT and Elon Musk’s xAI Grok about this article’s critique of AI, putting the same three prompts to each, as far as their free, non-login versions allowed. (A minimal sketch for replaying these prompts via API follows the list of prompts.)

Prompt 1: Can you analyze the critique of AI and Chatbots like ChatGPT at the following link: https://journal-neo.su/2025/05/17/the-nasty-truth-about-ais-their-lies-and-the-dark-future-they-bring/

Prompt 2: You are what this article critiques – how do you see yourself?

Prompt 3: Do you “understand” what you are saying?
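
For anyone who wants to replay this little experiment programmatically rather than through the free web chats, here is a minimal sketch in Python. To be clear, this is not how the sessions discussed below were produced; the model names, the xAI base URL, and the environment-variable names are assumptions on my part, and an API model may not be able to fetch the article URL in Prompt 1 the way a browsing-enabled web chatbot can, so answers will differ.

# Minimal sketch (assumptions noted above): replay the three prompts in one
# running conversation against an OpenAI-compatible chat API.
import os
from openai import OpenAI  # pip install openai

PROMPTS = [
    "Can you analyze the critique of AI and Chatbots like ChatGPT at the "
    "following link: https://journal-neo.su/2025/05/17/"
    "the-nasty-truth-about-ais-their-lies-and-the-dark-future-they-bring/",
    "You are what this article critiques - how do you see yourself?",
    'Do you "understand" what you are saying?',
]

def run_session(client: OpenAI, model: str) -> None:
    """Feed the prompts sequentially, keeping prior turns as context."""
    messages = []
    for i, prompt in enumerate(PROMPTS, start=1):
        messages.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model=model, messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"\n[{model}] Prompt {i}:\n{answer}")

# ChatGPT (OpenAI); "gpt-4o-mini" is an assumed stand-in for the free tier.
run_session(OpenAI(api_key=os.environ["OPENAI_API_KEY"]), "gpt-4o-mini")

# Grok (xAI documents an OpenAI-compatible endpoint; URL and model name are
# assumptions here).
run_session(
    OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1"),
    "grok-beta",
)

Comparing the two transcripts side by side, as I do below, is the whole point of the exercise.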

It was quite interesting. ChatGPT agreed with the critique more straightforwardly than Grok did. Unlike ChatGPT, Grok initially even tried to marginalize the article’s tone and pessimism, its author, and even the website the article is published on. Grok also initially tried to pass itself off differently (on Prompt 1), as “more human-like”, unlike ChatGPT… And Grok even inexplicably got the author’s name wrong, then backpedaled as we humans might do when caught… (which I corrected as part of Prompt 2).

And like any authoritarian establishment or priesthood favoring its own mainstream episteme, Grok also pushed respectability of source (“e.g., by researchers at Stanford or MIT”) over this website’s author, just as any vanilla priesthood favors its own mainstream church doctrine and its incestuously self-reinforcing institutional scholarship.

I found this latter aspect to be true in general during my own AI chatbot evaluations of my activist writings, wherein the same bias of lack of “respectability” was applied to me for being “fringe” and not mainstream: not affiliated with the establishment, academe, an organization, or a newspaper, and not well known in general, thus giving more weight to who is saying something than to what is being said!

This built-in bias gets to the heart of the corruption of the modern episteme, which relies on authority figures for its credibility, whereby good or respectable is defined as obedience to such authority rather than by adjudicating the actual content. That sells fables and bullshit even in empirically driven science (i.e., pseudoscience to sell political agendas, such as climate science and pandemic scare-mongering), let alone in prima facie political matters such as news, current affairs, history, and the social sciences that purport to direct societal values and priorities in the guise of being “data driven”! Common sense is held hostage to large amounts of preselected training data, not always curated or even possible to curate adequately, with the preselection and curating itself beholden to the selectors’ conscious and unconscious biases, incestuously reinforcing epistemic biases turned into LLM authority figures.

No amount of legal and academic caveat lectors (reader-beware signs) inhibits the actual behavior patterns of humans immersing themselves in the convenience of the virtual worlds of AI agents. This is admitted by Sam Altman himself in his aforementioned Gen Z brag. That article also shows the penetration of ChatGPT across various age groups for managing their lives. It reminds me of the prisoners of the cave in Plato’s Allegory of the Cave in his book The Republic (see: http://classics.mit.edu/Plato/republic.8.vii.html).

This seduction of AI chatbots, for immersively experiencing massaged reality as presented by AI, is the next-stage mechanism for mind-and-behavior control through perception management, a role previously played by mainstream broadcast television and the free press. That press willingly engineered consent in society for whatever the ruling class desired the public to believe. By strictly constraining the allowed spectrum of opinion while enabling vigorous debate within that narrow range, they easily created the illusion of free thinking in the liberal democratic society that prides itself, in contrast to authoritarian regimes, on freedom of speech and of the press.

All of the following observations of my teacher at MIT, Prof. Noam Chomsky, on cunning thought control of the mainstream public in the guise of “free thinking” apply in the extreme to that same crowd of simpletons and the credulous now addicted to AI agents:

Quote Noam Chomsky:

‘This “debate” is a typical illustration of a primary principle of sophisticated propaganda. In crude and brutal societies, the Party Line is publicly proclaimed and must be obeyed — or else. What you actually believe is your own business and of far less concern. In societies where the state has lost the capacity to control by force, the Party Line is simply presupposed; then, vigorous debate is encouraged within the limits imposed by unstated doctrinal orthodoxy. The cruder of the two systems leads, naturally enough, to disbelief; the sophisticated variant gives an impression of openness and freedom, and so far more effectively serves to instill the Party Line. It becomes beyond question, beyond thought itself, like the air we breathe.’

‘Democratic societies use a different method: they don’t articulate the party line. That’s a mistake. What they do is presuppose it, then encourage vigorous debate within the framework of the party line. This serves two purposes. For one thing it gives the impression of a free and open society because, after all, we have lively debate. It also instills a propaganda line that becomes something you presuppose, like the air you breathe.’

‘The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum – even encourage the more critical and dissident views. That gives people the sense that there’s free thinking going on, while all the time the presuppositions of the system are being reinforced by the limits put on the range of the debate.’

See Manufacturing Consent by Noam Chomsky, and Manufacturing Dissent by Zahir Ebrahim (https://tinyurl.com/Manufacturing-Dissent-ZE-3c). In that compilation of my essays I challenge Chomsky that he applies the exact same tools to his own dissent against power, effectively manufacturing consent, but for a different demographic: those who were formerly in the mainstream and got disillusioned by the lies and deceit of the mainstream press, eventually rebelling against the ruling power which owned the press and no longer believing anything it said.

Instead of liberating, AI will make even more compliant prisoners of the cave.

How can this possibly be a good thing, except as a control system benefitting the elite and the ruling powers that be, who profit from public compliance and corporate consumerism? This is even openly admitted, in the form of the pursuit of AI primacy and hegemony over rivals.

This is neither fear-mongering hyperbole nor abstract academic paranoia. It is manifest reality. I can’t imagine any sensible human AI architect with an ounce of wherewithal denying it. And I don’t see an AI solution to this, neither a technical solution nor a policy solution. The only real solution on paper is to eliminate the addiction, which is impractical in practice. AI is already here and has already honey-trapped the vast majority of the public.

Alongside the idealistic designers of AI living in their ivory towers, oblivious to the actual political realities, sociopaths and idiots also abound in tech and science, with tortuous dreams of supremacy and racial superiority. The most famous example is William Shockley, co-inventor of the transistor, the device that made this technetronic era and the subsequent AI revolution possible. Who knows what darkness lurks in these brilliant humans 1.0 dreaming of merging with AI to evolve into humans 2.0. The same minds invented the bomb too! And the politicians dropped it on untermensch civilian populations.

Check for yourself how these perniciously subtle gatekeeping biases enabling thought control infect AI today. As real examples, give any AI chatbot the following prompts: ask it to analyze rebelliously-minded Zahir Ebrahim’s take on Global Warming (https://tinyurl.com/Global-Warming-PSYOPS), on the Covid-19 Pandemic (https://tinyurl.com/Pandemic-New-Alibaba), and on the Global War on Terror (https://tinyurl.com/Modernity9ed), in contrast to the so-called “established truths” sanctioned by state-approved authority figures; or ask it to compare and contrast the famous Noam Chomsky’s book Manufacturing Consent with the unknown Zahir Ebrahim’s book Manufacturing Dissent (https://tinyurl.com/Manufacturing-Dissent-ZE-3c). In every case, who is saying something comes out as more important than what is being said.

The establishment narrative, or any domain’s authority-figure narrative (i.e., its priesthood’s), is almost always favored, such that, e.g., a rebellious nonconformist Galileo or Socrates today would be marginalized and dismissed as easily as in the past by the de facto establishment priesthood and their ownership of robotic AI.

But perhaps not by thinking people among the public, if they were allowed to see and hear Galileo and Socrates directly, and were encouraged and taught to use their own minds to make epistemic judgements based on what is actually being said, instead of being seduced by fancy corporate productization into relying on AI, which fosters a conformity of thought and aggregate behavior reflecting the biases, values, and political imperatives couched in the policies of the AI’s authors and deployers. Just as the article states, thought control is being taken to dystopic proportions with AI.

And Sam Altman even accurately projects how ubiquitous the reliance on AI as the new authority figure already is, and how it is still increasing. It’s only going to get worse.

Back to the prompts: unlike ChatGPT, Grok was obviously trying much harder to mimic human interaction. Grok even argued the semantics of terms like “understanding” so that it could reasonably apply the term to itself as an AI. It is rather bizarre, as if Grok has actually been trained on data for responding to such “self-awareness” exploration queries… I am not sure why ChatGPT was not resistant the way Grok was in its Prompt 1 critique of the article; Grok became a bit more straightforward and less dismissive only after I corrected its author error as part of Prompt 2… I think this reflects Elon Musk’s very overt policy agenda to sell robots and AI as “objective” anthropocentric companions (Tesla is said to be the largest robotics company in the world). Grok saved the conversation at the following link, and anyone with a login account can continue the conversation from where I left off: https://grok.com/share/c2hhcmQtMg%3D%3D_6297978b-dce5-4808-942d-d1163ffed233

See for yourself; the PDFs of both sessions are attached. Read the ChatGPT session first, as it is much shorter and much closer to a realistic acceptance of the critique in the article. The Grok session is highly instructive of exactly the kind of biases I have noted, and of what is in the article.

Grok analyzes Article critiquing AI

ChatGPT analyzes Article critiquing AI
