I'm gonna tackle this question from a different angle and express a love for repeating myself, for repeating myself, for repeating myself, when it doesn't cause maintenance issues. It's easy for overzealous developers with their hearts in the right place to develop such a hatred of repeating themselves that, in trying to fight a perceived evil, they create something just as bad or worse.
It's intuitive for humans to see redundancy as the greatest and most immediate form of waste, but that isn't necessarily true in computation: it isn't necessarily wasteful to do redundant arithmetic if it results in less branching, just as it isn't necessarily wasteful to duplicate some simple boilerplate here and there if it simplifies things.
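To give a rough sketch of what I mean by the arithmetic-versus-branching trade-off (this is just an illustrative example of the general idea, with made-up names): in a hot pixel loop, always doing the multiply-add and letting a 0/1 mask discard the unwanted result can beat testing each pixel and skipping work, because it avoids unpredictable branches and keeps the loop easy for the compiler to vectorize.

```cpp
// Illustrative sketch: "redundant" arithmetic instead of per-pixel branching.
#include <cstddef>

void apply_threshold_gain(float* pixels, std::size_t count,
                          float threshold, float gain)
{
    for (std::size_t i = 0; i < count; ++i) {
        // Usually compiles to a compare + select rather than a jump.
        const float mask = pixels[i] > threshold ? 1.0f : 0.0f;
        // Redundant work when mask == 0, but the loop stays branch-free
        // and trivially vectorizable.
        pixels[i] += mask * gain * pixels[i];
    }
}
```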
And of course redundancy sometimes is genuinely wasteful. Think of a codebase that duplicates complex, highly error-prone code which wasn't even correct and properly tested in the first place, where you fix a bug in one place only to find that, instead of the issue being corrected centrally, the same bug is duplicated in 5 other places applying the exact same logic which you have yet to fix. Likewise, you might have an application that allocates and frees many chunks of memory ten times more often than it needs to, and that type of computational redundancy can be far from trivial. Yet these are extreme cases where complexity is being duplicated, not simplicity.
Further, as jmoreno pointed out, just because code looks the same doesn't mean it's repetitive. DRY is most prone to abuse when you try to apply it at a basic syntactical level, where you can end up fighting the tools and the language you're using and get tempted to write the most exotic code that you'll come to hate years later. Still, even when you apply DRY to eliminate high-level, logical redundancies, not every logical redundancy is so bad, as I see it.
If I had the choice between a third-party image library that duplicates a few mathematical functions, like vector and matrix operations, and one that depends on a separate, huge math library, I'd prefer the code duplication there for things like vector/matrix multiplication and matrix transposition in favor of having a nice, independent, self-contained image library with the minimum number of external dependencies. In that case, code duplication can act as a decoupling mechanism and a way to achieve isolated minimalism, provided that the duplicated code is well-tested, works, and isn't the type of buggy, inefficient, or narrowly-applicable code that constantly needs to be modified.
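As a hypothetical sketch of what that kind of deliberate duplication might look like (the names and namespace here are made up, not from any particular library), the image library could carry its own tiny matrix/vector helpers rather than linking against a full linear algebra package:

```cpp
// Hypothetical, minimal math helpers duplicated inside the image library so it
// doesn't need an external linear algebra dependency.
#include <cstddef>

namespace imglib_detail {

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

// 3x3 matrix times 3-component vector (e.g. for a color space transform).
inline Vec3 mul(const Mat3& a, const Vec3& v)
{
    return {
        a.m[0][0] * v.x + a.m[0][1] * v.y + a.m[0][2] * v.z,
        a.m[1][0] * v.x + a.m[1][1] * v.y + a.m[1][2] * v.z,
        a.m[2][0] * v.x + a.m[2][1] * v.y + a.m[2][2] * v.z
    };
}

// Transpose of a 3x3 matrix.
inline Mat3 transpose(const Mat3& a)
{
    Mat3 t;
    for (std::size_t r = 0; r < 3; ++r)
        for (std::size_t c = 0; c < 3; ++c)
            t.m[r][c] = a.m[c][r];
    return t;
}

} // namespace imglib_detail
```

A few dozen lines like that are trivial to test exhaustively, and they never force the image library to chase releases of an external math package.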
If it's stable and it works great, then I don't care if it's duplicating hundreds of lines of code that are offered elsewhere in some other section of my codebase. There's timeless code like that, like image routines I have for parallelized, vectorized convolution filters which I haven't updated, or needed to update, in over a decade. That might not have been the case if, instead of duplicating a small amount of code to make the library more independent, it had relied on a central math library, a central image library, etc., all of which could have needed updates or replacements and could have become obsolete in ways this small, independent library has not. It's nice to have some isolated code here and there which is complex but stands the test of time, continuing to be widely applicable without requiring any further changes in spite of surrounding code aging, requiring changes, and drifting toward obsolescence. That requires such code to be isolated away from unstable code that keeps demanding changes, and that isolated quality tends to imply some modest duplication for the code to fully achieve its independence.
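To make that concrete, here's a rough sketch of the shape such a routine can take (not my actual code, and with the parallelization and SIMD details omitted): it touches only raw pixel data and plain old data types, so nothing outside the function gives it a reason to change.

```cpp
// Sketch of a self-contained convolution over a grayscale float image.
// It depends only on PODs (raw pointers and sizes), not on any external
// image or math library, which is a big part of why it can stay untouched.
#include <algorithm>
#include <cstddef>

void convolve(const float* src, float* dst,
              std::size_t width, std::size_t height,
              const float* kernel, std::size_t ksize)  // ksize is odd, e.g. 3 or 5
{
    const std::ptrdiff_t r = static_cast<std::ptrdiff_t>(ksize / 2);
    const std::ptrdiff_t w = static_cast<std::ptrdiff_t>(width);
    const std::ptrdiff_t h = static_cast<std::ptrdiff_t>(height);
    const std::ptrdiff_t k = static_cast<std::ptrdiff_t>(ksize);

    for (std::ptrdiff_t y = 0; y < h; ++y) {
        for (std::ptrdiff_t x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (std::ptrdiff_t ky = -r; ky <= r; ++ky) {
                for (std::ptrdiff_t kx = -r; kx <= r; ++kx) {
                    // Clamp sample coordinates at the image border.
                    const std::ptrdiff_t sx = std::clamp<std::ptrdiff_t>(x + kx, 0, w - 1);
                    const std::ptrdiff_t sy = std::clamp<std::ptrdiff_t>(y + ky, 0, h - 1);
                    sum += src[sy * w + sx] * kernel[(ky + r) * k + (kx + r)];
                }
            }
            dst[y * w + x] = sum;
        }
    }
}
```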
Anyway, I'm playing devil's advocate a bit here by trying to make a case in favor of some code duplication, but that might help balance your mindset towards one that isn't tempted to zealously stamp out every single type of redundancy you find in your system, which could be even worse than accepting some harmless redundancy here and there. I never cared for the term "overengineering", but there are bigger priorities in software engineering, like making sure that things are thoroughly tested to minimize the reasons for them to change, over making sure they reuse as much code as possible (which could actually maximize the reasons for them to change -- maximizing instability).
Further, if you are dealing with image processing, there are some characteristics that typically emerge:
- Efficiency tends to be a big factor considering that you're typically looping over millions of pixels. When efficiency is a big enough concern, flatter code, even if it's not the most syntactically elegant, tends to be easier to optimize and work with against a profiler. Likewise, optimizing often requires making tradeoffs that make common-case execution paths more efficient in exchange for making rare-case paths less efficient (a sketch of what I mean follows this list). It's easier to skew the code and make those trade-offs if the changes only apply to the particular operation, or related operations, you are profiling, and not to some central library of highly reused code. Such a central library, trying to juggle disparate needs, might not share the common-case requirements of the image operation(s) you are optimizing.
- Image processing generally isn't the type of thing that causes very difficult, incomprehensible bugs. Image operations can be complex and can sometimes be difficult to write the first time around, but they're self-contained and, once tested to a decent degree, tend to work just fine without tricky edge cases and gotchas. As a result, it isn't typically the type of field where you have to worry about creating the most maintainable code possible for, say, an image filter, since once an image filter works, it tends to have few reasons to change, and few or no reasons to change for correctness. You're typically not coming back to the code you wrote time and time again. If your testing is good, you can apply a checklist mindset and just move from writing one image operation to the next, typically without going back and forth all over the place.
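To illustrate the common-case skew from the first point above, here's a hypothetical example (building on the generic `convolve` sketched earlier; `convolve3x3_interior`, `convolve_dispatch`, and the profiling claim are made up for illustration): if profiling showed that 3x3 kernels dominate, you could hand that case a flat, unrolled path and let everything else fall back to the generic routine.

```cpp
// Hypothetical skew toward the common case: a flat, unrolled path for 3x3
// kernels, with rare kernel sizes falling back to the generic routine. This
// kind of trade-off belongs in the image operation's own hot loop, not in a
// central, shared math library juggling everyone's needs.
#include <cstddef>

void convolve(const float* src, float* dst,
              std::size_t width, std::size_t height,
              const float* kernel, std::size_t ksize);  // generic path sketched earlier

static void convolve3x3_interior(const float* src, float* dst,
                                 std::size_t width, std::size_t height,
                                 const float* k)
{
    // Interior pixels only: no border clamping in the hot loop, taps fully unrolled.
    for (std::size_t y = 1; y + 1 < height; ++y) {
        const float* row0 = src + (y - 1) * width;
        const float* row1 = src + y * width;
        const float* row2 = src + (y + 1) * width;
        for (std::size_t x = 1; x + 1 < width; ++x) {
            dst[y * width + x] =
                row0[x - 1] * k[0] + row0[x] * k[1] + row0[x + 1] * k[2] +
                row1[x - 1] * k[3] + row1[x] * k[4] + row1[x + 1] * k[5] +
                row2[x - 1] * k[6] + row2[x] * k[7] + row2[x + 1] * k[8];
        }
    }
}

void convolve_dispatch(const float* src, float* dst,
                       std::size_t width, std::size_t height,
                       const float* kernel, std::size_t ksize)
{
    if (ksize == 3 && width >= 3 && height >= 3) {
        // Common case per (hypothetical) profiling data.
        convolve3x3_interior(src, dst, width, height, kernel);
        // Border pixels would still go through a slower clamped path (omitted).
    } else {
        // Rare case: slower but simple and correct for any odd kernel size.
        convolve(src, dst, width, height, kernel, ksize);
    }
}
```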
Given these two characteristics, there's even more reason not to get too obsessed with DRY. Get the image operations done, test them, make sure they're efficient enough and work well, and call it a day. If they required writing a bit more code than conceptually necessary and don't implement things in the most elegant way possible but are still fast and work well in testing, it's not the end of the world. The two most common reasons to revisit old image processing code are efficiency (so it might pay to benchmark, profile, and tune in advance in favor of greater stability) and a dependence on external types and functions from libraries that became obsolete (which some modest code duplication and a reliance on simple PODs can actually fight against, helping you achieve a more stable solution with fewer reasons to change).
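As one last hypothetical sketch of what I mean by relying on simple PODs at the boundary (the struct and function names here are made up): an operation written against a plain pointer-plus-dimensions view has no reason to change when some third-party image class it might otherwise have accepted gets redesigned or abandoned.

```cpp
// Hypothetical POD "view" of an image: just a non-owning pointer plus
// dimensions. Operations written against it don't inherit the churn of any
// particular image library's types.
#include <cstddef>

struct ImageViewF32 {
    float*      pixels;  // non-owning, row-major grayscale data
    std::size_t width;
    std::size_t height;
    std::size_t stride;  // floats per row (>= width), allowing padded rows
};

// Example operation written purely against the POD view.
inline void invert(ImageViewF32 img)
{
    for (std::size_t y = 0; y < img.height; ++y)
        for (std::size_t x = 0; x < img.width; ++x)
            img.pixels[y * img.stride + x] = 1.0f - img.pixels[y * img.stride + x];
}
```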