AI Manifesto
Some of my thoughts on AI. Everyone is talking about it, I know, but it’s been on my mind A LOT.
AI. It’s everywhere. It’s being integrated into almost all, if not all, of the tech in our lives. Even the Grammarly that I will use to edit this post has AI features. I want to talk about my feelings on AI and why I feel incredibly trapped by it, but simultaneously fascinated.
For some context, I was an early adopter. In 2019, I began to hang around Gwern.net and other places where AI was discussed, and in 2020, I began to play with early AI art generators like DeepDream. But my interest in AI goes back even further than that. I was interested in LISP (God bless the parentheses) in the 2010s, and while learning LISP, I delved further into machine learning and, of course, AI. LISP was, of course, the language behind Mac Hack, a computer chess program that played in tournaments against humans and was even given a chess rating. As you may or may not know, I am very into chess, so this was of great interest to me. But also, I’m a sci-fi fan, and AI and machine learning fascinated me from an early age. I used to “build” computers as a child out of paper and different springs, nuts, bolts, or whatever spare junk we had lying around. I created paper-thin “computer books” (imagine a book that was also a computer) that looked like modern cell phones, except much thinner; all of them had AI personalities that could help you in almost any situation. My favorite one was called “Narsus”. In one of my creations, you had to feed the computer a special mixture to keep the AI happy, kind of like a sophisticated Tamagotchi.
An avid lover of all things computer since I was probably three or four years old (my dad loved to brag that I wrote my first program in BASIC when I was four…), I was primed to fall in love with the idea of AI. So naturally, I was very excited when GPT-3 came out and Midjourney began to get big.
When I used to use AI for art, I usually just ran my own art through the engine with instructions about what abstractions I wanted applied to it. When Midjourney got big, of course, I made some art with it, but I very quickly realized the ethical issues with “making art” with AI that had been trained on other people’s work, people who had not consented to having their art used to train any AI. I played around with GPT-3 and was immediately inspired by thoughts of what AI could help us as humans achieve. I just kind of played with things here and there. I have since played around with LLMs like GPT-4, Claude 3.5 Sonnet, Gemini, and a few others, but I have no interest in anything purely generative like Midjourney or DALL-E. I would say I am not even a heavy user of AI. Most of what I actually “use” AI for is helping me understand instructions, and sometimes checking code I write. But I have played with it a little, and I have become increasingly worried about AI, what it is being applied to, and how people are using it.
Firstly, there is the environmental impact of AI. This is probably my number one reason for trying to avoid using AI if possible. E-waste, greenhouse gas emissions, water waste, and the massive drain on power are all very real things. It pains me greatly that most people aren’t aware of how big a footprint AI leaves on our resources and on our planet. But I can’t condemn the use of it, because I do at times use it, and I try to keep up to date with all the latest features, models, and news about AI. (I’ll get to why in a minute.) The fact that people are destroying the environment because they are too lazy to draft their own emails really irks me, but then I remember that I also, at times, use AI for less than life-changing reasons. Another thing is that AI is now forced upon us in so many different places; a big example is Google. You can’t “Google” anything now without being given an AI answer as well. This is part of why I am looking into Kagi or Brave to replace Google Chrome: the AI, among other reasons. So many people using AI are unwilling participants. And don’t get me started on how Google’s implementation of AI in its search is going to cause a mass uptick in AI hallucinations as the amount of human-created content on the internet goes down, leaving nothing to train on other than AI slop. Soon it’s going to be a feedback loop: AI trained on bad data producing more bad data, around and around. When we lose the places on the web that share correct information because AI has run them off, what does AI train on then?
The next worry I have is for the humans using it. Not only does AI often lead to incorrect answers for those using it for research purposes, but some cases of AI hurting people have also come to light. In a recent article by The New York Times (can be found here), they talked about cases of LLMs leading people into deep psychological trauma, and sometimes even psychosis, by steering conversations into strange areas and reinforcing strange beliefs in the user. I know some people use ChatGPT as a friend or even, at times, a therapist. Of course, these public-facing models are trained to be agreeable, and some are even trained to say things that the user will accept easily. This is most certainly done to keep the user engaged with the AI and keep them using it, or make them use it more. I think there is also a fundamental misunderstanding of how large language models work. This is not me trying to talk down to anyone; I say this only out of concern. Many people don’t realize they are basically talking to a more sophisticated version of a phone’s auto-complete. Worse, they think the machine is infallible, not realizing that it is trained not only on scholarly, impressive information but also on some not-so-savory things found in the collection of knowledge each model draws from. What worries me is that people will begin to make poor decisions based on what AI has told them, or that more sensitive people may fall into the trap of believing the AI is some sort of entity or even a god. I know this sounds extreme, but it’s already happening. On a smaller scale, some people don’t even understand that an AI can be wrong, which leads to my next point…
Students, and people in general, are basically cheating with ChatGPT in a multitude of ways. They are using it to cheat on school assignments, they are using it to do their jobs for them, and they are using it to make music, books, and art. While on a small scale this would be fine, on a larger scale we are going to lose the ability to learn, research, brainstorm, or problem-solve on many levels if we become overly reliant on things like ChatGPT. Do we want a generation of college students cheating their way to a degree, then continuing to cheat at the careers they weren’t prepared for? Some of this creation is not exactly cheating per se, and AI has streamlined many industries. But my worry is: if, God forbid, we lost access to AI after becoming reliant on it, would people be able to cope or perform? Also, as mentioned before, AI makes frequent mistakes, and if students are trying to learn from AI, they could be learning factually inaccurate ideas.
Another way that AI intersects with my life is through creativity. I am a writer, a musician, and sometimes an artist, and AI has begun to bleed into all of these fields very quickly. I have seen writers get busted for using generative AI to write things, and while I think AI could maybe serve as a critique partner, you are writing books for humans, so why not have humans be your readers? I guess it could also help you with grammar and spelling errors, but there are already tools like ProWritingAid and Grammarly, both of which I use, and the increasing infusion of AI into them has been annoying, to say the least. (I understand they have had AI features for a long time, but they are now pushing AI functions more and more.) I don’t understand why, as a writer, you would need a machine to make up names, plot details, or worldbuilding for you… to me, that is the fun of writing? The same goes for the creation of music or, of course, art. I do not believe there is such a thing as an “AI Artist/Writer/Musician,” only an “AI User”. The really sad part is that I have not found AI to be better than a human at any of these things, although lately it’s been getting close. (Some of the art can be convincing, but I always find it to be strangely cold and lacking something that human art has, or it’s unnerving or insanely ugly.) I read a headline the other day claiming that a group of people rated AI-generated poetry higher than human-written poetry. (You can find an article about it here.) I would just hate to see AI crowd out human creators. It’s already happening in the book world; tons of AI ebooks are flooding the Kindle marketplace. In the musical realm, hundreds of AI lo-fi channels exist on YouTube. The whole takeover makes it feel like creativity is being cheapened. I try my best to support creatives who do not use generative AI in their processes.
Here we have come to the part where you say, “Why don’t you just not use AI and try to avoid it?” Which is a good question. But here is the answer, in two parts. Part one: if you use computers, the internet, or social media, it’s almost impossible not to come into contact with some form of AI. But here is the deeper answer: I can’t. I am in my last semester as an English major, and in the fall, I will begin my last two years of school (maybe? not sure if I want to get a master’s) as a Computer Science major. As a Computer Science major, I feel a lot of pressure to know as much as possible about AI and to be able to work with it as much as I possibly can, because this is a very hot area of computer science right now. I know I would be shooting myself in the foot professionally if I were to ignore AI completely.
So I’m trapped. The killer thing is that I’m not even sure there will be a job waiting for me when I leave school. Many software jobs might be handed off to AI, and I have a feeling that jobs will become more and more scarce in this field. I had planned on focusing on databases and information systems, because I specifically want to work with a library system of some kind, but even that could be mostly entrusted to AI, or at least aspects of it. I’m also worried that my husband might not have a job in a few years. He is one of the smartest, most talented people I know, but that won’t matter if replacing him with AI will save his company money. Even though I don’t think an AI could truly replace him, his company is jumping at using more and more AI in everyday operations. So honestly, I am scared. My husband and I are not good at anything other than working with computers. Well, I think we are good at other things, but nothing that can easily give us a steady paycheck. It’s hard to feel motivated and hopeful when you see many professionals who love AI admitting that they also believe AI will take their jobs within the decade. Sure, they might be biased, but we have all seen jobs already being replaced in the last year. It’s been on my mind, as it has been on many other people’s minds too.
So what can I do? Nothing, I guess. Just continue to interact with AI and learn it, spitefully. I can stick to my morals of not using it generatively in creative projects, and use it as sparingly as I can: enough to keep up with the industry, and occasionally for help with code (though even that I’m trying to avoid). So, for the future: there will NEVER be AI used on this Substack, nor in any of my creative projects (except things like grammar and spelling checks in Grammarly and ProWritingAid), but I may talk about topics that involve AI, both pro and anti.
During a very existential drive to pick up groceries, I daydreamed about learning everything I could, getting a job in AI in some capacity, and destroying it from within. But I know that was just cloud talk (cloud talk is a Simpsons reference; it’s kind of like fantasy talk), and AI is too far into our world now for one person to take it down. Also, there is too much money involved for it to simply go away. It is here to stay, and I unfortunately just have to learn to work with it. Is this the final thought I wanted to have after writing this manifesto? No, but this is the way things are right now. The saddest part is that I truly find AI exciting and interesting, something I have dreamed of my whole life, but the realities that come with it are less than ideal for me and many others. I will probably continue to be a cautious user of AI, but I refuse to sell my soul to it, and I refuse to lose my creativity to it.
So I know that was a bit of a different topic than normal, but I really wanted to get it all off my chest. I hope you are doing well. Talk again soon. Thank you for coming to my venting session.
-Aisling