J.MINJIBIR
Signal 2026-01-14

AI Doesn't Write Solutions. It Writes Instant Legacy Code.

Exploring the fundamental limitations of LLMs in software engineering and why seniority is defined by context, not syntax.

Jabir Minjibir

I saw a video on LinkedIn recently. A veteran engineer claimed that the code Large Language Models are producing today is superior to what most Senior Software Engineers can write.

Statements like this worry me.

Don’t get me wrong. I’m impressed by the speed. I use LLMs. They generate boilerplate and functions in seconds that would take me ten minutes to type out. They are incredible springboards.

But whenever I hear the argument that “Speed + Syntax = Seniority,” I have to push back.

My understanding of Senior Engineering has never been about typing speed or line count. It’s about how the code fits into a living business domain. If we look past the hype, we see the uncomfortable reality. AI isn’t replacing engineers. It is just accelerating the creation of legacy code.

The Time Capsule Problem

The real danger here isn’t that LLMs write buggy code. We do that too. The danger is that they are fundamentally conservative. They anchor us to the “best practices” of yesterday.

Consider the evolution of security standards. At one point, OAuth 1.0 and early OpenID implementations were “Best in Class.” Measured by the sheer volume of code written during that era, those protocols are statistically dominant in the training data. An LLM trained on historical repositories sees that code as “correct” simply because it appears frequently.

But security is a moving target. It is not a static dataset. Today, those protocols are deprecated. They are liabilities.

If we rely on generated code, who flags the obsolescence? An LLM lacks the temporal awareness to realize that what was safe yesterday is an exploit today. Only a human can look at “working” code and decide it is dangerous.
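To make the point concrete, here is a minimal sketch of exactly the kind of code that dominates those historical repositories: an OAuth 1.0a HMAC-SHA1 request signature. The function name and parameters are illustrative, not from any particular codebase. It runs, it is “correct” by the standards of its era, and an LLM that has seen it thousands of times will happily reproduce it; nothing in the code itself signals that the protocol and the SHA-1 digest are now considered legacy.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Build an OAuth 1.0a HMAC-SHA1 signature -- the now-deprecated pattern."""
    # 1. Percent-encode each key/value pair and sort them, per the old spec.
    encoded = sorted(
        (quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)

    # 2. Build the signature base string: METHOD&encoded-URL&encoded-params.
    base_string = "&".join(
        [method.upper(), quote(url, safe=""), quote(param_string, safe="")]
    )

    # 3. Sign with HMAC-SHA1 -- a digest choice that was standard then,
    #    and is discouraged today.
    signing_key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Nothing here looks wrong in isolation; only a human who knows the timeline can say this belongs in a museum, not a pull request.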

Innovation Requires Disgust

This leads to a deeper issue regarding the future of software. How do things actually get better?

Software improves because humans have a unique capacity for dissatisfaction. We look at code we wrote six months ago and we cringe. We think, “I can do this better now. This is messy.”

That discomfort is the engine of innovation.

An LLM is incapable of looking at its output with critical disdain. It is a statistical mirror reflecting the average of what has already been done. If we begin feeding AI its own generated code, we eliminate that dissatisfaction. We settle for a “good enough” average that never evolves. We effectively freeze software development in 2024.

We Are the Safety Valve

Viral videos like that one miss the reality of the job. The Senior Engineer isn’t there to compete with the machine on words-per-minute. We are there to be the agents of obsolescence.

We are there to look at the massive amount of syntactically correct code the machine produces and say, “This works. But it belongs in 2018. We are building for 2026.”

End of Transmission

© 2026 JABIR MINJIBIR.