DRY and SOLID Are More Important Than Ever in the Age of AI

Jan Cizmar

Founder & CEO

Throughout my programming career, I've heard the same opinion multiple times: DRY is dead.

It used to be "making the code generic and deduplicating takes too much time"; now it's "because AI can handle it." Another argument is that duplication is actually good because AI learns from patterns: the more it sees the same pattern repeated, the better it writes new code in the same style. And when you need to change something, AI will supposedly update all the duplicated spots for you.

Every time someone dismisses DRY, my gut reaction is the same. Duplication in code usually means someone didn't take enough care, and it will backfire later. AI does benefit from consistent patterns, but you get those through good abstractions, not by copy-pasting the same code everywhere. My recent experience confirms this again and again.

SOLID principles are not some forgotten OOP concept from textbooks that doesn't apply anymore. They're a foundation for writing readable, sustainable code. They existed before AI, and they matter even more now that AI is writing so much of our code.

After months of working with AI coding tools, I'm now more convinced than ever: staying DRY and following some of the SOLID principles is more important in the AI era, not less.

SOLID without L and D = SOI

Before this sounds like I'm preaching from a textbook, let me be honest. I have to google what all the SOLID letters mean every single time. And when I say SOLID in this article, I really mean the S, O, and I: Single Responsibility, Open/Closed, and Interface Segregation. So from now on I'll call it SOI. Those are the principles that matter most in the context of AI-generated code. (If you need a refresher too, this guide explains them well.)

Single Responsibility says each class or module should have one reason to change. When AI generates code, it tends to create classes that do too much because it's solving the immediate problem without thinking about separation of concerns.
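A minimal sketch of the difference, with hypothetical names (a signup flow is just an illustration, not anything from the article): the "do everything" class has three reasons to change, while the split version has one each.

```typescript
// Before: one class validates, persists, and formats -- three reasons to change.
class UserSignup {
  register(email: string): string {
    if (!email.includes("@")) throw new Error("invalid email");
    // ...save to DB, send welcome email, log analytics, all inline...
    return `registered ${email}`;
  }
}

// After: each piece has a single reason to change and can be reused alone.
class EmailValidator {
  isValid(email: string): boolean {
    return email.includes("@");
  }
}

class UserRepository {
  private users: string[] = [];
  save(email: string): void {
    this.users.push(email);
  }
  count(): number {
    return this.users.length;
  }
}

class SignupService {
  constructor(
    private validator: EmailValidator,
    private repo: UserRepository,
  ) {}
  register(email: string): string {
    if (!this.validator.isValid(email)) throw new Error("invalid email");
    this.repo.save(email);
    return `registered ${email}`;
  }
}
```

The second version is also what gives an AI agent something to reuse: a validator or repository it can call instead of re-implementing inline.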

Open/Closed says code should be open for extension but closed for modification. This is the principle that makes good libraries great. You shouldn't need to change existing code to add new behavior. You extend it.
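One common way to get this property is a small registry of pluggable behaviors; this is a sketch with made-up names, not a prescribed design. Adding a new export format means registering a new object, never editing the existing ones.

```typescript
interface Exporter {
  format: string;
  export(data: Record<string, string>): string;
}

const jsonExporter: Exporter = {
  format: "json",
  export: (data) => JSON.stringify(data),
};

const csvExporter: Exporter = {
  format: "csv",
  export: (data) =>
    Object.entries(data).map(([k, v]) => `${k},${v}`).join("\n"),
};

// Closed for modification: new formats are registered,
// existing exporters are never touched.
class ExportRegistry {
  private exporters = new Map<string, Exporter>();
  register(e: Exporter): void {
    this.exporters.set(e.format, e);
  }
  export(format: string, data: Record<string, string>): string {
    const e = this.exporters.get(format);
    if (!e) throw new Error(`unknown format: ${format}`);
    return e.export(data);
  }
}
```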

Interface Segregation says interfaces should be small and focused. Clients shouldn't be forced to depend on methods they don't use. When AI adds features, it tends to bloat existing interfaces instead of creating new, focused ones.
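A small sketch of the contrast, with hypothetical names: a read-only consumer should be able to declare that it only reads, instead of depending on a fat interface full of methods it never calls.

```typescript
// Bloated: every client drags in writes and deletes it may never use.
interface KeyStoreFat {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
  delete(key: string): void;
  // ...and whatever else gets bolted on over time
}

// Segregated: clients depend only on what they actually use.
interface KeyReader {
  get(key: string): string | undefined;
}
interface KeyWriter {
  set(key: string, value: string): void;
}

class InMemoryStore implements KeyReader, KeyWriter {
  private data = new Map<string, string>();
  get(key: string): string | undefined { return this.data.get(key); }
  set(key: string, value: string): void { this.data.set(key, value); }
}

// A rendering function only needs to read, and says exactly that.
function renderLabel(reader: KeyReader, key: string): string {
  return reader.get(key) ?? `[missing: ${key}]`;
}
```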

Why not Liskov?

The Liskov Substitution Principle says that if you have a base class, any subclass should be usable in its place without breaking the program. It's a theoretically sound idea. But in practice, I've never seen it make a meaningful difference in code readability or maintainability.
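For a refresher, the textbook violation is the square/rectangle pair, sketched here in TypeScript. Code written against the base class gets a surprising answer from the subclass:

```typescript
class Rectangle {
  constructor(public width: number, public height: number) {}
  setWidth(w: number): void { this.width = w; }
  setHeight(h: number): void { this.height = h; }
  area(): number { return this.width * this.height; }
}

// Square "is-a" Rectangle breaks substitutability: setting the width
// silently changes the height too.
class Square extends Rectangle {
  constructor(size: number) { super(size, size); }
  setWidth(w: number): void { this.width = w; this.height = w; }
  setHeight(h: number): void { this.width = h; this.height = h; }
}

function stretch(r: Rectangle): number {
  r.setWidth(4);
  r.setHeight(5);
  return r.area(); // callers expect 20 for any Rectangle; Square gives 25
}
```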

Most modern codebases don't rely heavily on deep inheritance hierarchies where Liskov violations would actually cause problems. We use composition, interfaces, and flat structures. The situations where violating Liskov leads to real bugs are rare. And when they happen, they're usually caught quickly because the code simply doesn't work.

I'm not saying the principle is wrong. I'm saying it doesn't provide enough practical value to be worth actively thinking about. The other three (S, O, I) have a much more direct impact on how readable and sustainable your code is.

Ignoring Dependency Inversion

I'm also not a fan of creating interfaces "just in case," which is what the Dependency Inversion Principle would require us to do. In some Java/Spring projects, every service has an interface that's implemented exactly once. I never followed this pattern and never had a problem. It comes from historical reasons (early Spring and EJB actually required interfaces for proxying, and it was also considered a best practice for unit testing with mocking frameworks before tools like Mockito made mocking concrete classes easy), but it hasn't been necessary for over a decade.
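The ceremony looks like this, sketched in TypeScript rather than Java for consistency with the other examples (names are hypothetical): an interface with exactly one implementation adds a file and an indirection layer but no actual flexibility.

```typescript
// The "just in case" pattern: one interface, one implementation, forever.
interface ProjectService {
  getName(id: number): string;
}
class ProjectServiceImpl implements ProjectService {
  getName(id: number): string { return `project-${id}`; }
}

// Without the ceremony: depend on the concrete class until a second
// implementation actually exists. Modern mocking tools handle classes fine.
class ProjectServiceDirect {
  getName(id: number): string { return `project-${id}`; }
}
```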

The other justification is the "what if" argument: "What if we need to swap the database for an external REST API?" Making code reusable and extensible makes sense when there's a high probability that someone will actually need to touch it. But we shouldn't act like every piece of code needs to be prepared for every possible future scenario. Performance characteristics, availability constraints, and query patterns are all completely different between data sources. I've personally never seen a codebase that swapped database access for a REST API through an interface. Maybe such code exists, but I'd be very concerned about its performance. If we ever need an interface, we can create it when we need it. But we will probably never need it.

So I'm practical about it. Use interfaces when you actually need the abstraction. But when you do have an interface, keep it focused.

When it actually matters

I should also mention that I don't always write DRY code myself. When there's nothing complex going on and nothing is likely to break, duplicating a small piece of logic is fine. It's not worth abstracting everything.

But when things start getting complex, that's when DRY and SOI matter. When it's easy to accidentally introduce different behavior in two places. When there's a high chance that future developers will touch this code and might introduce new bugs without realizing it. That's exactly where you want the code to be SOI. You want future travelers to reuse existing, tested code instead of writing their own version and accidentally breaking something you already fixed.
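As a minimal sketch of what "reuse existing, tested code" means in practice (the slugify helper is hypothetical, not from the article): once a non-trivial rule lives in one tested function, every call site stays consistent by construction, instead of two hand-rolled copies silently drifting apart.

```typescript
// One tested implementation of a rule that's easy to get subtly wrong.
// Both the post list and the editor call this, instead of each keeping
// its own slightly different copy.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")  // collapse runs of non-alphanumerics
    .replace(/^-+|-+$/g, "");     // strip leading/trailing dashes
}
```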

And the thing is, the future traveler will increasingly be AI. If your code is well-structured and extensible, AI will extend it properly. If it's a mess of duplications, AI will add another duplication on top. The quality of AI output directly depends on the quality of the code it's working with.

The real problem with AI-generated code

This is not just my gut feeling. A GitClear study of 211 million lines of code found that copy/pasted code surged 48% from 2020 to 2024, while refactored code collapsed from 24% to under 10%. Code clones grew 4x. This happened exactly as AI coding adoption exploded.

When you instruct an AI coding agent like Claude Code to build a feature, it does exactly what you asked. It follows the goal with tunnel vision. It produces code that matches the specification. And that's the problem.

It doesn't care about the rest of your codebase. It doesn't ask itself "is there already a method that does this?" or "should I extract this into a reusable component?" It just writes the code that solves the immediate problem.

On the backend, we start seeing duplicated functionality. On the frontend, we get components that look almost the same as existing ones but are slightly different. Not wrong per se, but off. The UI starts feeling inconsistent. Components that look 90% the same but not quite.
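Here's the shape of the problem in miniature, with plain string rendering standing in for real UI components (all names are illustrative): two near-duplicate renderers that drift apart, versus one component where the variation is an explicit prop.

```typescript
// Drifted duplicates an AI agent might produce on two separate requests:
function renderUserCard(name: string): string {
  return `<div class="card"><h3>${name}</h3></div>`;
}
function renderProjectCard(name: string): string {
  return `<div class="card card--wide"><h3>${name}</h3></div>`; // 90% the same, not quite
}

// One component; the difference is a named, reviewable option:
function renderCard(name: string, opts: { wide?: boolean } = {}): string {
  const cls = opts.wide ? "card card--wide" : "card";
  return `<div class="${cls}"><h3>${name}</h3></div>`;
}
```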

The core issue is that AI simply doesn't think about code reusability. It doesn't refactor old code to make it cleaner. It doesn't look at what already exists and extend it. And reusability and managing change safely are what SOI is really about.

We're experiencing this at Tolgee right now. Our codebase is growing fast, but sometimes it's very hard for people doing code reviews to understand what's happening. We ship code that's less effective than it should be and end up addressing these issues later in the cycle. The bottleneck has shifted from writing code to reviewing it and testing it. AI writes so much code so quickly, but it doesn't think about the system as a complex thing where everything is connected to everything.

"AI will update all the duplications" - it won't

The argument is that duplications are fine because when you need to change something, AI will simply update all the occurrences across the codebase. It knows the pattern, it sees all the copies, it updates them all.

But that's not how AI actually works. Because of its tunnel vision, it doesn't spontaneously touch code outside of what it's directly working on. If you ask it to fix a bug in component A, it fixes component A. It doesn't go looking for component B that has the same logic duplicated. Why would it? You didn't ask it to.

I've already seen Claude introduce a duplication where it didn't fully replicate the original functionality. It copied the pattern but missed part of the logic. So the duplication itself introduced a bug. The new code looked right, passed a quick review, but didn't work the same way as the original.

This is the classic duplication problem. You have code A and its duplicated version B. Someone (or some AI) fixes a bug in A but doesn't update B. Or updates B but slightly differently. Over time, the versions drift apart. You now have technical debt that's hard to even detect because the code looks like it should work.

"Just write better specifications" has its own problem

One solution I hear often is to give the AI very detailed specifications. Tell it exactly which methods to use, which components to extend, which patterns to follow. Specify the architecture on both the strategic and tactical level.

This can work. But there's a practical issue with it. Developers are not used to writing specifications. They're used to coding, learning from code, improving code, refactoring. Asking them to write a detailed spec before every AI prompt adds a layer of work that feels unnatural and slows things down.

It also assumes you already know the right architecture before you start. In reality, good architecture often emerges through iteration. You write something, realize it could be better, refactor. That feedback loop is how clean code happens. When you skip it by outsourcing everything to AI, you lose it.

What actually works: the old rules, applied consistently

My recommendation is boring. Do what has always been right. Stay DRY. Follow SOI. Keep the code clean, without duplicated parts. Keep it extensible so future developers (and future AI agents) can reuse your code without changing it.

That's what SOI is really about. Making code that can be extended and reused, not modified. I love Material UI for exactly this reason. It's extremely extensible and customizable. You almost never feel like you need to touch their source code. You just compose, extend, and configure. That's largely because they follow the Open/Closed Principle so well.

If your codebase follows such principles, AI tools will actually produce better results too. There will be clear patterns to follow, reusable abstractions to use, and less temptation to duplicate.

Use AI iteratively, not for big bangs

I also recommend not using AI for large feature development in one go. Instead, work iteratively. One small scope at a time.

Build a piece. Review it. Write tests. Clean up the code. Make sure you understand what was generated. Then move to the next piece.

This is similar to how Shape Up defines work. You break things into scopes and you finish each one properly before moving on.

The key is: you need to understand the code. If you let AI generate a whole feature and you just skim the PR, you've given away ownership and responsibility. You can't maintain what you don't understand. And when something breaks at 2 AM, "the AI wrote it" is not a helpful answer.

Quality over quantity

There's a bigger picture here. AI makes it incredibly easy to produce more code. But does the world actually need that much more code? I don't really think so. It needs less code of better quality.

We all know that purely vibecoded software doesn't scale well. It works for prototypes and MVPs. But the moment you need to handle edge cases, onboard new developers, or maintain the product for years, you need clean, well-structured code.

AI is great at targeting fast results. But fast results that break in edge cases and are hard to maintain are not results. They're debt.

I'm not saying don't use AI to write code. Everyone does and so do we. But I think developers should focus much more on using AI to improve quality than they currently do. Instead of letting AI generate the code and then handing it off for review, giving away ownership, we should do our reviews properly. Use AI to answer questions about the code. Ask it how to make it more readable, more effective, more reusable. Use it as a thinking partner, not just a code generator.

Learn from AI. In many areas, it knows more than you do. But remember, you are the owner.

One place where DRY doesn't work: localization

Since we build a localization platform, I should mention this. DRY doesn't apply to translation keys, and it's the one place where I strictly recommend against it.

Reusing translation keys across multiple places sounds efficient, but it leads to wrong translations in wrong contexts. For example, the English word "Post" can mean a blog post (noun) or to post something (verb). In German, these are completely different words: "Beitrag" vs. "veröffentlichen." If you reuse a single common.post key everywhere, the translator picks one and the other context is wrong.

Same with "Save." Saving a document vs. saving money are different words in many languages. Each context needs its own key.
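A sketch of what context-specific keys look like in practice (the key names and message catalogs are illustrative, not Tolgee's conventions): the English values collide, the German ones don't, which is exactly why each context needs its own key.

```typescript
// English: several keys happen to share a value.
const en: Record<string, string> = {
  "blog.post.title-label": "Post",   // noun: a blog post
  "composer.post-button": "Post",    // verb: publish the message
  "editor.save-button": "Save",      // saving a document
  "goals.save-money-title": "Save",  // saving money
};

// German: the "duplicates" turn out to be different words.
const de: Record<string, string> = {
  "blog.post.title-label": "Beitrag",
  "composer.post-button": "Veröffentlichen",
  "editor.save-button": "Speichern",
  "goals.save-money-title": "Sparen",
};
```

With a single reused key, one of each German pair would necessarily be wrong somewhere in the UI.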

It feels like duplication, but it's not. These are genuinely different strings that happen to have the same value in English. (We wrote more about this in our guide to naming translation keys.)

The bottom line

DRY and SOI aren't relics from a pre-AI era. They're the foundation that makes AI-assisted development actually work. Without them, AI just produces more mess, faster.

Stay DRY. Follow SOI. Work iteratively. Understand your code. Use AI to improve quality, not to multiply quantity.

The principles haven't changed. The temptation to ignore them just got stronger.
