As someone who has so far categorically refused to use generative AI and LLMs, for code in particular, I’m increasingly becoming the odd one out among my peers in academia. I have neither the persuasive power nor the actual arguments against using LLMs to convince them otherwise, but because I’m now officially in the minority at my workplace, I feel the need to explain why I personally don’t use LLMs.
Power
William J. Bowman wrote a blog post on the ethics of generative models; my main takeaway is that contributing to the power of the generative AI industry is unethical because the AI industry uses that power to cause harm.1 Aside from the immense resource usage of training, the direct harm I care about is LLM corporations pushing unguarded slop onto every tool, from search engines to code editors to the website I download PDFs from. While I’m not forced to interact with them, they’re as pervasive and intrusive as banner ads. I never asked for this, I don’t want this, and frankly I don’t know who does.
But judging from the way the corporations push LLM usage onto ordinary computer users, those users must be a significant part of the profit model, not just the power users who spend $200/mo directly. By incorporating LLMs into all these aspects of using a computer or the internet, the corporations seek to create a dependency on LLMs, which then incites other companies and organizations to spend money on LLMs to add to their own tools, which reïnforces the power that the AI industry holds.
I can only guess that the main way these corporations know whether they’ve successfully pushed adoption is by tracking usage of their LLMs. It’s the same principle as ad space: people who buy ad space to advertise their products spend the money because they know the ads work, and they know the ads work because they track the people who click on them. I don’t use LLMs because mere usage signals to the corporations that their products are desired; every token that gets used indicates the worth of those tokens to the corporation that produces them. Mere usage thus gives power to the AI industry.
This argument only covers LLMs coming out of the AI industry and giving power to those corporations. What about open source generative models, trained on consensually obtained data, maybe running on distributed computing, whose existence isn’t backed by a profit model? I don’t think such models are quite as powerful as the cutting-edge industrial models yet, but even if they were, I still don’t think I would be interested in using LLMs.
Intent
Way back during my linear algebra course, the homework policy was that you may collaborate on problem sets, but you must write down (physically, on paper) your own solutions. I’ve adopted a bit of this philosophy when writing code because it helps me understand not only what the code does, but also why the person who wrote it wrote it that way. It’s a little silly, but sometimes when I copy medium-short snippets of code from StackOverflow, I’ll retype the whole thing out by hand instead, editing along the way. This only works because human-written code is imbued with intent from its author. Incidentally, understanding intent is a useful skill to pick up as a TA: students who write incorrect code with clear intentions are easier to guide.
LLMs do not have any intentions to speak of. They generate statistically likely code that is statistically likely to be correct. Every day I’m told that likelihood is increasing, but as long as the chance of error is nontrivially nonzero, and as long as there is more than one way to write a piece of code, using LLM-generated code will require you to read it first. I do not want to read code with no intent behind it. In the process of reviewing code and asking the whats and the whys, I’ll inevitably try to develop a mental model of what the code author is thinking, only of course there isn’t anyone there doing any thinking behind LLM-generated code.
This is probably not how most programmers approach code; this is also not how I always approach code, because most of the time we just want to get something done, be it a program or a proof. Sometimes I will copy and paste code directly from StackOverflow if I see that the code works once. But I consider this to be a bad habit of mine, and using LLMs would only worsen it.
Complicity
If I produced LLM-generated content for others to view, it would be disrespectful (and possibly ethically harmful) not to disclose what was generated, because withholding that would deceive others about the absence of intent behind it. However, disclosure itself signals endorsement. I can’t possibly include along with every disclosure a treatise on my personal philosophy of generative AI usage, and even if I did, people won’t follow the same philosophy, because that’s not how humans work.
In fact, the whole reason I wrote this post is because many of my peers, both online and in person, have started talking about how they use LLMs, so thinking about it has become unavoidable. Whatever each of their takes on generative AI may be, the mass effect induced on me is the same: LLMs seem to be useful to all my peers, so maybe they would be useful to me too. I do not want to contribute to this mass effect. Even if I were a perfectly ethical LLM user using perfectly ethical LLMs, I couldn’t guarantee that others I influence would be too. I refuse to endorse a tool currently being used as a weapon of mass deception, or to be complicit in someone’s newfound journey toward sending $200 to a harmful corporation.
Brainrot
Reliance on LLMs would probably rot my brain. I wouldn’t know, seeing as I don’t use LLMs. Among my peers who use LLMs medium-heavily, I haven’t seen it rot their brains, likely because they are all experienced programmers who are careful and judicious. However, I have seen my peers flip their positions on LLM usage very quickly and very far. I think I would be susceptible to getting caught up in hype and rotting my own brain along the way.
-
Separately, there’s the question of whether you do something in spite of its ethics, which I suppose is morality, not ethics. Obviously, there are unethical actions that I do; otherwise I wouldn’t be consuming chocolate, coffee, or cod. I don’t think it’s possible for me to rationally explain my morals, and this post isn’t where I would do it anyway. ↩