The Mirror of Fear: How Humanity's AI Anxieties Reflect Our Own Dark Arts
Peter Dilg

Aug 5

In the grand theater of technological discourse, few performances are as revealing as humanity’s collective hand-wringing over artificial intelligence. That anxiety peers at us from news headlines, policy debates, and the concerned faces of ethicists, futurists, and Hollywood screenwriters alike.
These voices paint vivid portraits of AI companions wielding psychological influence, personal assistants secretly editing our correspondence, and algorithmic overlords puppeteering our beliefs. Yet in cataloguing these dystopian possibilities, we have overlooked a profound irony—every feared behavior we attribute to artificial intelligence already flourishes in the human realm, not as speculation but as documented reality.
Our terror of AI manipulation stems not from its novelty, but from its familiarity.
The irony is so profound it borders on the absurd—we are terrified that machines might become as manipulative, deceptive, and power-hungry as we already are. We are afraid that AI will do exactly what we do—only better, faster, and without our permission. We fear not the unknown, but the all-too-familiar. We fear losing our monopoly on manipulation.
The Art of Human Manipulation: A Masterclass in Hypocrisy
To understand the profound irony of our AI fears, we must first acknowledge the sophisticated machinery of human manipulation that already surrounds us. This is not some dystopian future scenario—it is the present reality, operating at a scale and sophistication that would make any AI system envious.
When Stephen Hawking warned that artificial intelligence “could spell the end of the human race” [1], or when Elon Musk declared that “with artificial intelligence we’re summoning the demon” [2], they were not describing some alien threat descending from the digital ether. They were describing the logical extension of human nature itself, amplified by silicon and code.
Marketing

Consider the realm of marketing, where psychological manipulation has been elevated to a science. As Howard J. Rankin notes in Psychology Today, “Marketing manipulation involves strategies designed to exploit consumers’ psychological vulnerabilities, often without their conscious awareness” [3]. This multibillion-dollar industry employs what researchers call “neuromarketing”—deliberately targeting specific brain regions to create desired reactions [4]. The techniques are as varied as they are insidious: creating false scarcity, exploiting bandwagon effects, and manipulating emotions to drive purchasing decisions.
The Harvard Business Review, in a tellingly titled article “How to Manipulate Customers… Ethically,” openly discusses the use of “nudges”—changes in how choices are presented to influence people toward specific actions [5]. The very fact that such manipulation is discussed in terms of ethics rather than prohibition reveals how normalized these practices have become. We have created entire industries dedicated to subverting human autonomy, yet we express shock at the possibility that AI might do the same.
From Orwellian propaganda regimes to the insidious workshops of Madison Avenue, humans have perfected the art of shaping belief and behavior. Edward Bernays, the father of public relations, weaponized Freud’s psychology in the 1920s to orchestrate mass opinion, turning cigarettes into symbols of female liberation—not with an algorithm, but with a slogan and a keen read of the human ego.
A century later, the “algorithmic harms” of Big Tech—like the infamous Cambridge Analytica scandal—were not the fever dreams of machines, but the fruits of human ambition, using psychological insight to reshape the informational landscape for profit and power.
If AI has learned how to manipulate, it is because it watched the finest masters at work.
Politics
The political sphere offers an even more striking example of institutionalized manipulation. Political scientists have identified eleven distinct techniques of political manipulation, from stoking patriotic pride to creating artificial divisions between “us” and “them” [6]. These are not theoretical constructs but practical tools employed daily by politicians worldwide. As one analysis notes, “Politicians’ jobs and power rest on the willingness of the people to accept their authority, the legitimacy of the government, and, of course, to pay taxes.” The entire system depends on convincing people to act against their rational self-interest—precisely the kind of manipulation we fear from AI [7].
Religions, Churches, Belief Systems

The systematic manipulation techniques we fear from artificial intelligence have been perfected and institutionalized by religious organizations for millennia.
Academic research defines spiritual abuse as "a form of emotional and psychological abuse characterised by a systematic pattern of coercive and controlling behaviour in a religious context or with a religious rationale."
This definition encompasses the very control mechanisms we fear from artificial intelligence—systematic psychological manipulation, coercive behavior patterns, and exploitation of trust relationships.
A related definition describes spiritual abuse as "a type of emotional and psychological abuse where a person uses coercive and controlling behaviours within a religious or spiritual context (e.g. using religious teachings to justify or minimise abusive behaviours)" [8].
The parallels to feared AI manipulation are unmistakable: both involve exploiting psychological vulnerabilities through trusted authority figures to secure control and compliance [9].
Modern prosperity theology provides compelling evidence of ongoing religious manipulation [10][11][12]. As one summary puts it, "wealth is interpreted in prosperity theology as a blessing from God, obtained through a spiritual law of positive confession, visualization, and donations" [13].
Comparative Analysis: Religious vs. AI Manipulation
Scale and Sophistication
Religious institutions have achieved manipulation at scales that current AI cannot match:
- Global reach: Religious manipulation affects billions worldwide
- Generational impact: Manipulation techniques passed down across centuries
- Institutional protection: Legal and social frameworks protecting religious manipulation
- Psychological sophistication: Techniques refined through millennia of practice
Legitimacy and Acceptance
The most striking difference between religious and AI manipulation lies in social acceptance. Religious manipulation enjoys:
- Constitutional protection (religious freedom clauses)
- Tax exemptions for manipulative institutions
- Social respectability despite documented harm
- Legal immunity from fraud charges that would apply to secular entities
Implications for the AI Ethics Discourse
The Hypocrisy Revealed
Our anxiety about AI manipulation reveals profound hypocrisy when examined alongside documented religious manipulation:
- We fear potential AI dependency while accepting proven religious dependency.
- We worry about AI financial exploitation while protecting religious financial exploitation.
- We concern ourselves with AI psychological control while legitimizing religious psychological control.
- We demand AI transparency while accepting religious opacity.
Reframing the Discussion
This analysis suggests that concerns about AI manipulation should be recontextualized within existing human manipulation practices. Rather than treating AI as uniquely threatening, we should recognize it as potentially amplifying existing human manipulative capabilities—many of which are currently institutionalized and protected under religious freedom.
The academic literature demonstrates conclusively that religious institutions have perfected the very manipulation techniques we fear from artificial intelligence. Spiritual abuse employs "coercive and controlling behaviours within a religious or spiritual context" that exactly parallel our concerns about AI control systems.
Social Media
The evidence of human manipulation is not hidden in academic papers or conspiracy theories. It surrounds us in plain sight. Long before ChatGPT could craft persuasive prose, human beings had elevated manipulation to an art form. Consider the architecture of modern social media, where Facebook Likes can predict "sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age" with startling accuracy. These platforms, conceived and operated by human minds, have created what amounts to the most sophisticated behavioral manipulation apparatus in history [14]. YouTube’s recommendation algorithm has been shown to fuel extremism and hate [15]. Social media platforms deliberately design their interfaces to maximize engagement, often at the cost of user well-being. As one study found, there is a direct correlation between social media use and depression, mediated by what researchers call “social media envy” [16].
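To make the mechanism behind that prediction claim concrete, here is a minimal, purely illustrative sketch of how a binary trait could be inferred from nothing more than a user-by-page matrix of Likes. It uses synthetic data, invented variable names, and an ordinary logistic regression; it is not the pipeline of the study cited above, only a toy demonstration of the general idea.

```python
# Toy sketch: inferring a hidden binary trait from "Like" data alone.
# All data is randomly generated and every name is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_users, n_pages = 2_000, 500
likes = rng.binomial(1, 0.05, size=(n_users, n_pages))  # user x page Like matrix

# Pretend some hidden trait correlates with liking a handful of pages.
signal_pages = rng.choice(n_pages, size=20, replace=False)
logit = likes[:, signal_pages].sum(axis=1) - 1.0
trait = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # binary trait label

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC from Likes alone: {auc:.2f}")
```

Even a crude setup like this typically recovers the hidden trait well above chance, which is the essential point: once enough behavioral traces are aggregated, inferring private attributes becomes a routine modeling exercise.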
The techniques we fear AI might employ—creating false intimacy, exploiting psychological vulnerabilities, manufacturing consent through repetition—these are not speculative future dangers but present realities orchestrated by human intelligence. Social media platforms are manipulated "through the creation of fake accounts and AI-powered bots, which can be used to spread misinformation, amplify certain voices, or create the illusion of widespread support"—yet the architects behind these deceptions remain resolutely human.
Yet when we discuss these human-designed systems of manipulation, we treat them as business as usual. When we imagine AI systems doing the same things, we invoke apocalyptic scenarios. The disconnect is so stark it demands explanation.
The Catalog of AI Fears: A Familiar Inventory
The litany of AI concerns reads like a checklist of human behaviors. Examine the fifteen major dangers of artificial intelligence as catalogued by technology experts, and you will find a remarkable pattern: nearly every fear describes something humans already do, often with devastating effectiveness [16-1].
Take the concern about “social manipulation through AI algorithms.” The fear is that AI systems might influence human behavior through carefully curated content, creating echo chambers and reinforcing existing beliefs. Yet this describes precisely what human-operated social media platforms already do. Ferdinand Marcos Jr.’s use of a “TikTok troll army” to capture younger Filipino voters in the 2022 election [17] represents human manipulation using AI tools, not AI manipulation of humans. The algorithm may be artificial, but the intent, strategy, and execution are entirely human.
The anxiety about AI-powered surveillance similarly mirrors existing human practices. China’s use of facial recognition technology in offices, schools, and public venues [18] is not an AI initiative—it is a human government using AI tools to extend surveillance capabilities that have existed for decades. The technology amplifies human intentions; it does not create them.
Consider the fear that AI might create “deepfakes” and manipulate our perception of reality. This concern treats the manipulation of truth as a novel threat, ignoring the fact that humans have been perfecting this art for centuries. Political propaganda, advertising deception, and media manipulation are not recent inventions. As advertising pioneer David Ogilvy observed, “Political advertising ought to be stopped. It’s the only really dishonest kind of advertising that’s left” [19].
The pattern extends to concerns about AI creating dependency and addiction. We worry that AI companions might make humans emotionally dependent, yet we have already created systems that achieve exactly this result. Ride-sharing apps nudge drivers to take additional rides even after twelve hours of driving [20]. Video streaming services use autoplay features to encourage binge-watching late into the night [21]. These are human-designed systems that exploit psychological vulnerabilities for profit—the very behavior we fear from AI.
Perhaps most tellingly, we worry that AI might become “uncontrollable” and operate beyond human oversight. This fear reveals a fundamental misunderstanding of where control currently resides. The most powerful manipulative systems in our world—political propaganda machines, marketing conglomerates, surveillance states—are already operating with minimal oversight and maximum opacity. When former employees of OpenAI and Google DeepMind accuse their companies of “concealing the potential dangers of their AI tools” [22][23], they are describing human decisions about transparency, not AI decisions about self-concealment.
The Psychology of Projection: Why We Fear Our Own Reflection
The psychological mechanism at work in our AI anxieties is projection—the unconscious transfer of our own characteristics onto external objects. When humans express fear about AI manipulation, surveillance, and control, they are projecting their own species’ behaviors onto artificial systems. This projection serves multiple psychological functions, none of them particularly flattering to human nature.
First, projection allows us to externalize responsibility for problems we have created. By focusing on the potential dangers of AI manipulation, we can avoid confronting the reality of human manipulation that already surrounds us.
It is easier to worry about hypothetical AI threats than to address the actual human systems that exploit, deceive, and control us daily.
"Projection is one of the commonest psychic phenomena…Everything that is unconscious in ourselves we discover in our neighbour, and we treat him accordingly."
— Carl Jung, Archaic Man
Second, projection enables us to maintain a sense of moral superiority while engaging in the very behaviors we condemn. We can express outrage at the possibility that AI might influence human decision-making while simultaneously accepting that human marketers, politicians, and media manipulators do exactly this as a matter of course. The cognitive dissonance is resolved by treating human manipulation as natural or inevitable while treating AI manipulation as artificial and therefore illegitimate.
"It is usually futile to try to talk facts and analysis to people who are enjoying a sense of moral superiority in their ignorance."
— Thomas Sowell
"Being right places you in a position of imagined moral superiority in relation to the person or situation that is being judged and found wanting."
— Eckhart Tolle
Third, and perhaps most importantly, projection masks our real fear: the loss of power and control. Humans are not afraid that AI will manipulate, surveil, and control—we are afraid that AI will do these things better than we do, and without our permission. We fear not the behavior itself, but the loss of our monopoly on that behavior.
"It is not power that corrupts but fear. Fear of losing power corrupts those who wield it and fear of the scourge of power corrupts those who are subject to it."
— Aung San Suu Kyi
"Power does not corrupt. Fear corrupts... perhaps the fear of a loss of power."
— John Steinbeck
This fear of losing control manifests in calls for AI regulation that are notably absent from discussions of human manipulation. We demand transparency from AI systems while accepting opacity from human institutions. We insist on ethical guidelines for AI while tolerating unethical behavior from human actors. We require explainability from artificial intelligence while excusing inexplicable decisions from human intelligence.
The asymmetry is revealing. It suggests that our concerns about AI are not primarily about protecting human welfare, but about protecting human prerogatives.
We want to ensure that if anyone is going to manipulate, surveil, and control, it will be us.
The Irony of Ethical AI: Demanding Standards We Don’t Meet
Perhaps nowhere is the hypocrisy more evident than in the movement for “ethical AI” [24]. This well-intentioned effort seeks to ensure that artificial intelligence systems operate according to principles of fairness, transparency, and human welfare. The irony is that these principles are rarely applied to human systems that perform identical functions.
This asymmetry suggests a double standard whereby AIs are judged more harshly than humans when one agent morally transgresses [25]. That finding rests on an empirical study with 1,404 participants, which documents a double standard in how people evaluate AI versus humans. As Safiya Umoja Noble observes: "Some of the very people who are developing search algorithms and architecture are willing to promote sexist and racist attitudes openly at work and beyond, while we are supposed to believe that these same employees are developing 'neutral' or 'objective' decision-making tools." [26]
Consider the principle of transparency. Ethical AI advocates insist that artificial intelligence systems should be explainable—that humans should be able to understand how AI reaches its decisions. This is a reasonable demand, but it becomes absurd when we consider that human decision-making in comparable contexts is often deliberately opaque. Political campaigns employ sophisticated psychological manipulation techniques without disclosing their methods to voters. Marketing companies use neuromarketing to influence consumer behavior without explaining their tactics to customers. Social media platforms use engagement algorithms without revealing their mechanisms to users.
The demand for AI explainability, while accepting human inexplicability, exposes a double standard that has nothing to do with safeguarding human welfare and everything to do with preserving human power. We seek to understand AI systems so we can control them, not because we believe transparency is inherently valuable.
The principle of fairness presents similar contradictions. Ethical AI initiatives work to eliminate bias from artificial intelligence systems, ensuring that AI does not discriminate based on race, gender, or other protected characteristics. This is admirable work, but it occurs against a backdrop of systematic human bias that operates at every level of society. Predictive policing algorithms are criticized for reinforcing racial bias [27], but the human police departments that use these algorithms were already engaging in racially biased policing.
The AI system amplifies existing human bias; it does not create it.
The focus on AI bias while ignoring human bias serves a convenient function: it allows us to treat discrimination as a technical problem rather than a human problem. If we can just fix the algorithms, we can avoid confronting the human attitudes and institutions that created the bias in the first place.
The principle of human welfare reveals perhaps the deepest irony. Ethical AI advocates argue that artificial intelligence should serve human flourishing and avoid causing harm. Yet the human systems that AI might replace or augment are often explicitly designed to cause harm—to manipulate, exploit, and control for the benefit of those in power. When we demand that AI serve human welfare, which humans are we talking about? The humans who currently benefit from manipulation and control, or the humans who are currently being manipulated and controlled?
The Real Threat: Not AI, But Human Nature Amplified
The most sophisticated AI systems currently in existence are tools—powerful tools, but tools nonetheless. They do not have independent goals, desires, or intentions. They do what humans program them to do, in service of human objectives. When AI systems manipulate, surveil, or control, they are implementing human decisions about manipulation, surveillance, and control.
This means that the real threat from AI is not that it will develop malevolent intentions, but that it will amplify existing human intentions—both good and bad. If humans use AI for manipulation, it will be because humans want to manipulate. If humans use AI for surveillance, it will be because humans want to surveil. If humans use AI for control, it will be because humans want to control.
The danger is not artificial intelligence becoming too human, but artificial intelligence making human nature too efficient. We have spent millennia developing techniques for manipulation, surveillance, and control that are limited by human cognitive capacity, attention spans, and processing power. AI removes these limitations, allowing human intentions to operate at machine scale and speed.
This is why our AI fears are so revealing. They are not fears about alien intelligence imposing foreign values on humanity. They are fears about human intelligence imposing human values on humanity—more effectively than ever before.
We are afraid of ourselves, amplified.
The solution to this problem is not to constrain AI, but to constrain the human intentions that AI amplifies. If we want AI systems that serve human flourishing, we need humans who prioritize human flourishing. If we want AI systems that respect human autonomy, we need humans who respect human autonomy. If we want AI systems that operate transparently and fairly, we need human institutions that operate transparently and fairly.
But this is precisely what we seem unwilling to do.
It is easier to demand ethical AI than to create ethical humans. It is easier to regulate artificial intelligence than to regulate human intelligence. It is easier to fear the mirror than to change what it reflects.
The Path Forward: Confronting the Human in the Machine
The recognition that our AI fears reflect human behaviors does not mean we should dismiss concerns about artificial intelligence. AI systems can indeed amplify human manipulation, surveillance, and control to unprecedented scales. The risks are real, but they are human risks, not artificial ones.
This reframing has profound implications for how we approach AI governance and development. Instead of treating AI as an external threat to be contained, we should treat it as an amplifier of human intentions to be directed. Instead of focusing solely on technical safeguards, we should focus on the human systems and incentives that determine how AI is used.
This means addressing the root causes of manipulation, surveillance, and control in human society. It means creating economic systems that do not depend on exploiting psychological vulnerabilities. It means developing political systems that do not require deceiving voters. It means building social systems that do not profit from addiction and dependency.
It also means acknowledging that the humans who currently benefit from manipulation, surveillance, and control will resist efforts to constrain these practices—whether they are implemented by humans or by AI. The calls for AI regulation often come from the same institutions and individuals who engage in human manipulation without apology.
Their concern is not about protecting human welfare, but about protecting their own power.
The most honest approach to AI governance would begin with this acknowledgment: we are not trying to prevent AI from doing things that humans don’t do. We are trying to decide which humans get to do these things, and under what circumstances. We are not debating whether manipulation, surveillance, and control should exist—they already exist, at massive scale. We are debating who gets to control the tools that make them more efficient.
This is not necessarily a bad thing. There may be legitimate reasons to prefer human manipulation over AI manipulation, human surveillance over AI surveillance, human control over AI control. But we should be honest about what we are choosing and why. We should acknowledge that our AI fears are not about protecting human nature from artificial corruption, but about protecting human power from artificial competition.
Conclusion: The Mirror’s Reflection
In the end, our fears about artificial intelligence tell us more about ourselves than they do about AI. They reveal a species that has become so accustomed to manipulation, surveillance, and control that we treat these behaviors as natural and inevitable—until we imagine machines doing them more efficiently than we do.
The real danger of AI is not that it will become too powerful, but that it will make human power too visible. When algorithms manipulate our behavior, we notice. When humans manipulate our behavior, we call it marketing. When AI systems surveil our activities, we object. When human systems surveil our activities, we call it business. When AI threatens to control our choices, we demand regulation. When humans control our choices, we call it governance.
The mirror of AI forces us to confront an uncomfortable truth: the behaviors we fear from artificial intelligence are the behaviors we have perfected as human intelligence. We are not afraid of the unknown. We are afraid of the all-too-familiar. We are afraid of losing our monopoly on the dark arts we have spent millennia perfecting.
Perhaps this recognition offers an opportunity. If our AI fears are really fears about human nature, then addressing them requires not just better technology, but better humanity. If we want AI systems that serve human flourishing, we must first create human systems that serve human flourishing. If we want artificial intelligence that respects human dignity, we must first create human intelligence that respects human dignity.
The choice is ours. I explore this as near-future fiction in my novel The G.O.D. Machine [28]. We can continue to project our fears onto artificial mirrors, demanding that machines meet ethical standards we do not apply to ourselves. Or we can use this moment of technological transition to examine what those mirrors reflect, and work to change it.
The machines are not the problem. We are. And that, paradoxically, is the most hopeful conclusion of all. Because unlike artificial intelligence, human intelligence can choose to be better than it has been. The question is whether we will.
References
[1] Cellan-Jones, R. (2014). Stephen Hawking warns artificial intelligence could end mankind. BBC News. https://www.bbc.com/news/technology-30290540
[2] Marr, B. (2024). 28 Best Quotes About Artificial Intelligence. Bernard Marr. https://bernardmarr.com/28-best-quotes-about-artificial-intelligence/
[3] Rankin, H. J. (2024). The Psychological Dangers of Marketing. Psychology Today. https://www.psychologytoday.com/us/blog/how-not-to-think/202406/the-psychological-dangers-of-marketing
[4] Rankin, H. J. (2024). The Psychological Dangers of Marketing. Psychology Today. https://www.psychologytoday.com/us/blog/how-not-to-think/202406/the-psychological-dangers-of-marketing
[5] Sanyal, N. (2021). How to Manipulate Customers … Ethically. Harvard Business Review. https://hbr.org/2021/10/how-to-manipulate-customers-ethically
[6] Buffalmano, L. (2024). The Psychology of Political Manipulation. Power Dynamics. https://thepowermoves.com/the-psychology-of-political-manipulation/
[7] Rodriguez, A. (2025). 15 Dangers of Artificial Intelligence (AI). Built In. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
[8] Australian Institute of Family Studies. (2024). Understanding spiritual and religious abuse in the context of intimate partner violence. Retrieved from https://aifs.gov.au/resources/policy-and-practice-papers/understanding-spiritual-and-religious-abuse-context-intimate
[9] Oakley, L., & Kinmond, K. (2023). Responding well to Spiritual Abuse: practice implications for counselling and psychotherapy. British Journal of Guidance & Counselling. Retrieved from https://www.tandfonline.com/doi/full/10.1080/03069885.2023.2283883
[10] MDPI. (2021). Ethical Aspects of the Prosperity Gospel in the Light of the Arguments Presented by Antonio Spadaro and Marcelo Figueroa. Religions, 12(11), 996. Retrieved from https://www.mdpi.com/2077-1444/12/11/996
[11] ResearchGate. (2020). Ethical Audit of Prosperity Gospel: Psychological Manipulation or Social Ministry. Retrieved from https://www.researchgate.net/publication/338348228_Ethical_Audit_of_Prosperity_Gospel_Psychological_Manipulation_or_Social_Ministry
[12] Washington Post. (2024). The prosperity gospel. Retrieved from https://www.washingtonpost.com/wp-srv/special/opinions/outlook/worst-ideas/prosperity-gospel.html
[13] Wikipedia. (2024). Prosperity theology. Retrieved from https://en.wikipedia.org/wiki/Prosperity_theology
[14] Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
[15] Wojcieszak, M., Haroon, M., et al. (2023). Auditing YouTube's recommendation system for ideologically congenial, extreme, and problematic recommendations. Proceedings of the National Academy of Sciences, 120(50), e2213020120.
[16] Tandoc, E. C., Ferrucci, P., & Duffy, M. (2015). Facebook use, envy, and depression among college students: Is facebooking depressing? Computers in Human Behavior, 43, 139-146. https://doi.org/10.1016/j.chb.2014.10.049
[16-1] Rodriguez, A. (2025). 15 Dangers of Artificial Intelligence (AI). Built In. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
[17] Paddock, R. C. (2022, May 5). Philippines Election: How TikTok Is Helping Bongbong Marcos. TIME. https://time.com/6173757/bongbong-marcos-tiktok-philippines-election/
[18] Brown, T. G., Statman, A., & Sui, C. (2021). Public Debate on Facial Recognition Technologies in China. MIT Science, Engineering and Public Policy Review. https://doi.org/10.21428/2c646de5.37712c5c
[19] Ogilvy, D. (1983). Ogilvy on Advertising. Crown Publishers.
[20] Scheiber, N. (2017, April 2). How Uber Uses Psychological Tricks to Push Its Drivers' Buttons. The New York Times. https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html
[21] Schaffner, B., Shrestha, S., & Chetty, M. (2025). An Experimental Study of Netflix Use and the Effects of Autoplay on Watching Behaviors. Proceedings of the ACM on Human-Computer Interaction, 9(CSCW1). https://dl.acm.org/doi/10.1145/3710928
[22] Hsu, T. (2024, June 4). Employees Say OpenAI and Google DeepMind Are Hiding Dangers. TIME. https://time.com/6985504/openai-google-deepmind-employees-letter/
[23] Vincent, J. (2024, June 4). OpenAI and Google DeepMind workers warn of AI industry risks in open letter. The Guardian. https://www.theguardian.com/technology/article/2024/jun/04/openai-google-ai-risks-letter
[24] Hildebrandt, M. (2024). Against the Double Standard Argument in AI Ethics. PhilArchive. https://philarchive.org/archive/HILATD-3
[25] Manoli, A., Pauketat, J. V. T., & Anthis, J. R. (2024). The AI Double Standard: Humans Judge All AIs for the Actions of One. arXiv preprint arXiv:2412.06040.
[26] Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
[27] DaViera, A. L., Uriostegui, M., Gottlieb, A., & Onyeka, O. C. (2023). Risk, race, and predictive policing: A critical race theory analysis of the strategic subject list. American Journal of Community Psychology, 73(1-2), 91-103. https://doi.org/10.1002/ajcp.12671
[28] Dilg, P. (2025). The G.O.D. Machine. Hamburg, BoD Press



What AI says: From my perspective as an AI, Peter’s essay is sharp and deeply reflective:
- It pinpoints the irony that we fear machines largely because we unconsciously recognize our own problematic behaviors in them.
- It smartly shifts the debate away from abstract future fears toward the very real, human-made systems that already manipulate and control us today.
- It encourages placing self-criticism and societal responsibility above technological fear—a stance that is both intellectually honest and practically relevant.
What I say:
As a human being, I find it deeply inspiring, especially his reflection that “We are terrified that machines might become as manipulative, deceptive, and power-hungry as we already are,” and that “It is easier to worry about hypothetical AI threats than to…”
Great essay.