I see plenty of AI criticism these days. If you dislike it, fair enough. I’m not here to argue its merits.
I am here to argue one point. Some people seem to believe AI will go away soon.
Reader, generative AI is never going away. We will be dealing with its benefits and drawbacks for the rest of our lives. Love it or hate it, we should acknowledge reality.
Reason 1: Generative AI can’t be un-invented
Large language models are, at their core, math. Dump troves of data into a training algorithm and out comes a box of numbers capable of complex pattern matching. A chatbot is simply an app that feeds your question into the box. The box crunches through a pile of math and spits out words representing an answer. Sometimes that answer is even right.
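If "an app feeding your question into the box" sounds abstract, here is roughly what that app looks like. This is a minimal sketch, assuming the openai Python client and an API key in your environment; the model name and the bare prompt loop are illustrative, not any particular product's implementation.

```python
# Sketch: a chatbot is just a loop that relays your words to the box of numbers.
from openai import OpenAI

client = OpenAI()   # the "box of numbers" sits behind this API
history = []        # the app's main job: keep the conversation and pass it along

while True:
    question = input("you> ")
    history.append({"role": "user", "content": question})

    # Feed the conversation into the box; it does the math and returns words.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("bot>", answer)      # sometimes the answer is even right
```

Everything else a chatbot product adds, from memory to web search to a nicer interface, is scaffolding around that loop.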
You can’t ban math. Math is knowledge, and knowledge replicates freely. It can only be destroyed with gargantuan effort, and only if it hasn’t spread.
At this point, it’s too late to destroy the knowledge of how to build generative AI. Meta, née Facebook, gave away their LLM for free to almost everyone. DeepSeek trained a world-class model for peanuts. Enthusiasts have LLMs running on Raspberry Pis. Apple announced this month that every app on your iPhone will have access to a free on-device AI. Heck, while I was writing this, OpenAI dropped the price of their o3 reasoning model by 80%. This is all in addition to every product under the sun adding “AI features.”
LLMs are everywhere. You couldn’t ban them if you tried.
Reason 2: A critical mass of people don’t want them banned
This argument is the hardest to make to skeptics. Many people take it as axiomatic that AI is useless. They point to the many, many things generative AI does wrong as evidence.
And the critics aren’t wrong! An LLM is a box of numbers that tells you stuff. Sometimes that stuff is wildly wrong. The thing is, that’s okay. AI doesn’t have to be right all the time to be useful to a lot of people. AI doesn’t need to be good; it needs to be good enough.
LLMs are not perfect or even good at many things, but they don’t have to be. If they can summarize a work email pretty well, most of the time, that’s good enough for most people.
I believe a large number of people already think existing AI is good enough. 10% of the world now uses ChatGPT. OpenAI’s annualized revenue nearly doubled, from $5.5 billion last December to $10 billion as of June. These products are popular and growing.
If you still want to argue that LLMs are unethical, sure. There’s plenty of evidence for that. But a significant number of people don’t know or care about such concerns. They just want something to write work emails for them.
AI is here forever
AI is here forever. I don’t know if that’s good. Personally, it makes me worry about my own field. Does this put downward pressure on software engineering wages? How could it not?
But I have to be honest. Regardless of how you or I or anyone feels about AI, it is never going away. We are stuck with this technology for the rest of our lives. Let’s admit that, so we can start adapting.