Informational Page

Generative artificial intelligence technologies, especially large language models, have unquestionably transformed society and the way people go about their daily lives in the past few years.

In my opinion, these extraordinary innovations do vastly more harm than good, to society and to the world as a whole. This also applies to me personally, which is why I do not regularly use things like ChatGPT or AI coding assistants for anything, and why I try to avoid these technologies altogether as much as possible. To be fair, I know that I sometimes have difficulty with change, for example when the user interface of iOS or Windows changes yet again (the latter definitely for the worse with Windows 11). But this is different; this is not just a small thing. I see what AI does, what it is used for, and the consequences that arise from its use and its creation.

I often get asked why I view the new AI technologies so negatively, and I usually bring up these points (listed here in no particular order):

Please feel free to bring counterpoints to my attention through any of my contact methods. Or, if you're feeling generous, maybe even some more references in support of what I said.

Now, having said all this, I admit that the new generative AI technologies are not all bad. On a personal level, there is no denying that asking a large language model is simply easier than reading a book when trying to learn about something new. And on a global scale, AI has long been used for good purposes that advance humanity and science, and generative AI will probably be no different. The technological developments and advancements of the last decades, including in the field of artificial intelligence, are undoubtedly nothing short of remarkable.

Still, I struggle to name a single major way in which specifically generative AI brings a significant benefit to humanity, let alone one advantageous enough to outweigh all the disadvantages. Humans are social creatures meant to talk to each other, not to a computer algorithm devoid of any kind of emotion.
If you want to learn about something, why not ask a human who knows about it? With the Internet, this is easier than ever.
If you need to talk to someone about your personal challenges, why not talk to a close friend? Your friends are capable of feeling empathy and supporting you; a computer is not.
Of course, this is easier said than done. Rising toxicity on the Internet and growing feelings of loneliness, even though social media ought to have the opposite effect, are a few of the many lamentable realities of the world we currently live in, and they can make these two alternatives something people either do not have or shy away from. But why make it even worse by choosing AI over human acquaintances?

This page was written entirely without any assistance from large language models or other types of generative artificial intelligence.
Like everything I do. Maybe you even arrived here from some other work of mine that included a sentence like this one.
