Generative artificial intelligence technologies, especially large language models, have unquestionably transformed society and the way people go about their daily lives in the past few years.
In my opinion, these extraordinary innovations do vastly more harm than good to society and the world as a whole. This also applies to me personally, which is why I do not regularly use tools like ChatGPT or AI coding assistants and try to avoid these technologies altogether as much as possible. Now, to be fair, I know that I sometimes have difficulties with change, for example when the user interface of iOS or Windows changes once again (the latter definitely for the worse with Windows 11). But this is different; this is not just a small thing. I see what AI does, what it is used for, and the consequences arising from its usage and its creation.
I often get asked why I view the new AI technologies so negatively, and I usually bring up these points (listed here in no particular order):
- People get dumber. Now that might be a strong way to put it, but I think it summarizes the situation pretty well. I will break it down further:
- LLMs are used by people for the most basic things that they should definitely be capable of doing without AI.[1] This can even make some people completely lose their common sense.[2][3][4]
- People forget, or never learn, how to learn. Objectively, the best way to learn about a topic is to read about it from a reputable or authoritative source. With generative AI, however, people often choose convenience over integrity and simply ask a language model instead. A person who learns this way becomes very dependent on the language model and never develops the skill of finding a proper resource and processing it themselves.[5]
- When using language models for writing, a person's writing skills inevitably degrade: they can write some possibly incoherent, low-quality text and then ask the language model to improve the language. It is better to learn and improve one's own writing skills and then, at most, ask a language model or some other tool to check grammar and spelling without making major changes to the style. Moreover, when many people use similar language models, writing styles converge toward how the model writes and drift away from each author's unique style, which in my opinion is a loss too.[5]
- Extending the other points: generative AI is not very smart, and far more often than people like to admit, or are even aware of, it makes mistakes and states something utterly incorrect with unfounded confidence.[6][7][8][9] There is a reason memes exist along the lines of "Funny how ChatGPT is wrong about things that I know about, but so smart when asked about things I don't know about". I think this is pretty clear; everyone who has ever asked a language model anything knows that it often makes mistakes. If you use one and have never noticed this, please go look up your facts somewhere else for a change.
- People easily develop strong and inappropriate emotional attachments to language models, due to their very human-like and supportive way of talking.[10][11]
- It should be obvious that developing an emotional attachment to a computer program, especially one controlled by someone else, is extremely unhealthy. You might think, "oh, this would never happen to me". But humans are fundamentally irrational creatures, and even if you believe you always act rationally, it is easy to slip up and have exactly this happen to you. And even if you really are somehow immune, this remains a concerning issue affecting many people; these are not isolated cases.
- Generative AI can be, and already is, grossly misused. This will also be a concern with future AI developments, such as Artificial General Intelligence (AGI).
- Businesses attempt to use generative AI to replace tasks traditionally performed by humans. This can result in anything from small problems to major collateral damage.[12] Not to mention that the current AI industry looks a lot like a bubble.[13][14] The most noticeable effect on consumers is that product quality gets worse with AI.[15][16] Take software development, for instance: while these AI technologies can help with writing code faster, doing so carries inherent and substantial risks that are easily disregarded.[17][18][19]
- Generative AI for media has advanced quickly in recent years and can now create images, audio, and even video that humans find virtually indistinguishable from reality.[20][21] This is already being used for fraud, making manipulation with fake imagery and audio far more effective and accessible than was possible in the past.[22] Similarly, generated media can be used for defamation, by depicting real people in realistic-looking situations that never happened.
- Given the speed of advancements in AI technology, there are insufficient safeguards in place. Companies race to build better and more complex AI and, because of the competition, put safety second. This will become especially problematic with the probably inevitable advent of AGI. If humanity ever creates AI that is as intellectually capable as humans, there is nothing stopping that AI from going rogue, even if the technology itself is well protected. And if such technology falls into the wrong hands, unspeakable things are very likely to happen.[23][24][25][26]
- Large language model creators hold major power over the people using them, in terms of opinion shaping and other kinds of psychological influence.[4][24][27][28][29]
- While I am not aware of evidence that language model creators have actively tried to manipulate their frequent users, it is certainly a power they possess, to a far greater extent than was possible in the past. It is much easier to develop an emotional connection to, and be influenced by, a very human-like interactive language model than by a video on the Internet or the biased results of a web search. Should the language models' shareholders ever decide to wield this power for their own interests, it should be quite obvious how catastrophic that would be for society as a whole.
- The training and even the usage of generative AI has a major, and in my opinion unjustified, impact on the environment.[30][31][32][33]
- Training an AI model takes a huge amount of resources, mainly water and electricity. On top of that, more and more advanced hardware is required, which is also quickly replaced due to new innovations. Estimates put the emissions from training a single large language model anywhere between a few dozen and several hundred tons of carbon dioxide equivalent. For comparison, a fossil-fuel-powered car generates around 25-50 tons over its entire lifetime, including production.
- Even more significant is the resource consumption of using large language models, due to their widespread popularity. According to estimates, a single prompt to a common large language model uses a few hundred milliwatt-hours of electricity on average, around 5-30 times the amount used by a traditional web search, depending on who you ask. And in my opinion, the utility of one language model prompt is definitely not 5-30 times that of a web search.
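To put those ranges in perspective, here is a rough back-of-the-envelope calculation. The 0.3 Wh per-prompt figure and the one-billion-prompts-per-day volume are illustrative assumptions chosen to match the estimates above, not measured values.

```python
# Back-of-the-envelope sketch using the rough estimates cited above.
# All numbers are illustrative assumptions, not measurements.

WH_PER_PROMPT = 0.3  # "a few hundred milliwatt-hours" ~ 0.3 Wh per LLM prompt (assumed)
RATIO_LOW, RATIO_HIGH = 5, 30  # cited range: LLM prompt vs. traditional web search

# Energy per web search implied by those two assumptions:
wh_per_search_max = WH_PER_PROMPT / RATIO_LOW   # upper end of the implied range
wh_per_search_min = WH_PER_PROMPT / RATIO_HIGH  # lower end of the implied range

# Scaled to a hypothetical one billion prompts per day:
prompts_per_day = 1_000_000_000
mwh_per_day = WH_PER_PROMPT * prompts_per_day / 1_000_000  # Wh -> MWh

print(f"implied energy per web search: {wh_per_search_min:.3f}-{wh_per_search_max:.3f} Wh")
print(f"energy for {prompts_per_day:,} prompts: {mwh_per_day:.0f} MWh per day")
```

Even under these deliberately rough assumptions, the daily consumption lands in the hundreds of megawatt-hours, which is why the multiplier over a plain web search matters at scale.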
- AI easily replaces what humans like doing, and not what they don't like doing.
- Generative AI "art" is an insult to human creativity.[34] I think this should be self-explanatory, as long as you haven't been living under a rock for the past few years.
- AI is often very good at things humans struggle with or need more time for, such as computations or processing large amounts of data. Those two examples are not a bad thing; that is why we have computers. But artistic work also falls into that category now, especially with generative AI. Meanwhile, AI is quite bad at the basic tasks that humans accomplish easily but find boring: chores like cleaning, doing laundry, or flipping burgers.[35] That is what AI should be replacing, not the other way around.
Please feel free to bring counterpoints to my attention through any of my contact methods. Or maybe even some more references in support of what I said, if you're feeling generous.
Now, after having said all this, I admit that the new generative AI technologies are not all bad. On a personal level, there is no denying that asking a large language model is simply easier than, say, reading a book when trying to learn something new. And on a global scale, AI has long been used for good purposes that advance humanity and science, and generative AI will probably be no different. The technological developments of the last decades, including in the field of artificial intelligence, are nothing short of remarkable.
Still, I struggle to name a single major way in which generative AI specifically brings a significant benefit to humanity, let alone one advantageous enough to outweigh all the disadvantages.
Humans are social creatures meant to talk to each other, not talk to a computer algorithm devoid of any kind of emotion.
If you want to learn about something, why not ask a human who knows about it? With the Internet, this is easier than ever.
If you need to talk to someone about your personal challenges, why not talk to a close friend? Your friends are capable of feeling empathy and supporting you; a computer is not.
Of course, this is easier said than done. The rising toxicity on the Internet and the increasing loneliness among people, even though social media ought to have the opposite effect, are among the many lamentable realities of the world we currently live in. They can make these two alternatives something people either do not have or shy away from.
But why make it even worse by choosing AI over acquaintances?
References
[1] "Google Gemini is for adult babies..." by lessthoughtof (2025). YouTube
[2] "Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change" by Imane El Atillah (2023). Euronews
[3] "A man asked AI for health advice and it cooked every brain cell" by Chubbyemu (2025). YouTube
[4] "I Infiltrated a Disturbing AI Cult" by Farrel McGuirre (2025). YouTube
[5] "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" by Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, Pattie Maes (2025). arXiv:2506.08872 [cs.AI]
[6] "What large language models know and what people think they know" by Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, Catarina Belem, Sheer Karny, Xinyue Hu, Lukas W. Mayer, Padhraic Smyth (2025). doi:10.1038/s42256-024-00976-7
[7] "A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions" by Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu (2024). doi:10.1145/3703155
[8] "Evaluating the Accuracy of Responses by Large Language Models for Information on Disease Epidemiology" by Kexin Zhu, Jiajie Zhang, Anton Klishin, Mario Esser, William A. Blumentals, Juhaeri Juhaeri, Corinne Jouquelet-Royer, Sarah-Jo Sinnott (2025). doi:10.1002/pds.70111
[9] "How Susceptible are LLMs to Influence in Prompts?" by Sotiris Anagnostidis, Jannis Bulian (2024). arXiv:2408.11865 [cs.CL]
[10] "The rise of 'grief tech': AI is being used to bring the people you love back from the dead" by Amber Louise Bryce (2023). Euronews
[11] "How it feels to have your mind hacked by an AI" by blaked (2023). LessWrong
[12] "Generative AI Security: Challenges and Countermeasures" by Banghua Zhu, Norman Mu, Jiantao Jiao, David Wagner (2024). arXiv:2402.12617 [cs.CR]
[13] "If Not Bubble... Why Bubble Shaped?" by How Money Works (2025). YouTube
[14] "The State of the AI Industry is Freaking Me Out (Discussion)" (accessed Dec 2025). Reddit
[15] "AI Fatigue: Why Millions Are Deleting Duolingo" by Logically Answered (2025). YouTube
[16] "The Rise And Fall Of Vibe Coding: The Reality Of AI Slop" by Logically Answered (2025). YouTube
[17] "Thoughts on the Impact of Large Language Models on Software Development" by Emilio Dorigatti (2023). GitHub Pages
[18] "Application of Large Language Models (LLMs) in Software Engineering: Overblown Hype or Disruptive Change?" by Ipek Ozkaya, Anita Carleton, John E. Robert, Douglas Schmidt (2023). sei.cmu.edu
[19] "The Impact of LLM-Assistants on Software Developer Productivity: A Systematic Literature Review" by Amr Mohamed, Maram Assi, Mariam Guizani (2025). arXiv:2507.03156 [cs.SE]
[20] "Google's Nano Banana Pro is raising concerns over realistic AI image generation" (2025). NBC News
[21] "Two Versions of Same Photo Spark Online Alarm: 'It Is So Over'" by Marni Rose McFall (2025). Newsweek
[22] "Exposing a Romance Scammer" by Coffeezilla, Kitboga (2025). YouTube
[23] "Threats by artificial intelligence to human health and human existence" by Frederik Federspiel, Ruth Mitchell, Asha Asokan, Carlos Umana, David McCoy (2023). doi:10.1136/bmjgh-2022-010435
[24] "If you remember one AI disaster, make it this one" by AI In Context (2025). YouTube
[25] "'Godfather of AI' shortens odds of the technology wiping out humanity over next 30 years" by Dan Milmo (2024). The Guardian
[26] "AI Is Getting Powerful. But Can Researchers Make It Principled?" by Mordechai Rorvig, Sophie Bushwick (2023). Scientific American
[27] "Large Language Models Reflect the Ideology of their Creators" by Maarten Buyl, Alexander Rogiers, Sander Noels, Guillaume Bied, Iris Dominguez-Catena, Edith Heiter, Iman Johary, Alexandru-Cristian Mara, Raphaël Romero, Jefrey Lijffijt, Tijl De Bie (2025). arXiv:2410.18417 [cs.CL]
[28] "Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters" by Yujin Potter, Shiyang Lai, Junsol Kim, James Evans, Dawn Song (2024). arXiv:2410.24190 [cs.CL]
[29] "LLMs' Potential Influences on Our Democracy: Challenges and Opportunities" by Yujin Potter, Yejin Choi, David Rand, Dawn Song (2025). future-of-democracy-with-llm.org
[30] "Explained: Generative AI’s environmental impact" by Adam Zewe (2025). MIT News
[31] "AI has an environmental problem. Here’s what the world can do about that." (2025). United Nations Environment Programme
[32] "Environmental impact of artificial intelligence" (accessed Dec 2025). Wikipedia
[33] "The Green Dilemma: Can AI Fulfil Its Potential Without Harming the Environment?" by Alokya Kanungo (2023). earth.org
[34] "AI Slop Is Destroying The Internet" by Kurzgesagt (2025). YouTube
[35] "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models" by Tyna Eloundou, Sam Manning, Pamela Mishkin, Daniel Rock (2023). arXiv:2303.10130 [econ.GN]
This page was written entirely without any assistance from large language models or other types of generative artificial intelligence.
Like everything I do. Maybe you even came from some work by me that included something like this sentence.
Last modified: