Short answer: no. The cost of dynamic dispatch doesn't increase as a class implements more interfaces, even when the compiler fails to inline the call. There is a cost to dynamic dispatch, but that cost doesn't scale algorithmically in any implementation I've ever seen, whether a class implements 1 interface or 100.
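To illustrate the shape of the concern (the interface and class names below are made up for the example, not from your code): the cost of a dispatched call is a property of the call site, not of how many interfaces the class happens to implement.

```java
// Hypothetical sketch: one class implementing several interfaces.
interface Drawable  { void draw(); }
interface Updatable { void update(); }
interface Sortable  { int key(); }

final class Sprite implements Drawable, Updatable, Sortable {
    public void draw()   { /* ... */ }
    public void update() { /* ... */ }
    public int  key()    { return 0; }
}

class Demo {
    public static void main(String[] args) {
        Drawable d = new Sprite();
        // The cost here is whatever the dispatch mechanism costs at this
        // call site (on mainstream JVMs, typically an inline cache plus an
        // indirect jump). Adding more interfaces to Sprite doesn't make
        // this particular call more expensive.
        d.draw();
    }
}
```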
That said, I pitched in because I believe you have a design-related issue. When performance is a concern, I believe you should invert the popular priority and favor a data-oriented mindset first: decide how your code represents, accesses, and manipulates data first and foremost, and let the interfaces come second. That doesn't mean you design poor interfaces that do little more than retrieve and manipulate data while exposing unsightly implementation details; quite the opposite. It just means you model things at the appropriate level of coarseness or granularity while taking into account the most efficient way to store, access, and manipulate data.
For efficiency, you often need to think in a bulkier way, not a granular way (`Users`, not `User`; `Image`, not `Pixel`). You often can't model the teeniest, simplest objects without painting yourself into a corner where you can no longer optimize much further without making cascading, breaking changes to the interfaces in your system.
As a basic example, a particle system for which efficiency is a huge aspect of its quality wouldn't necessarily expose a scalar `Particle` object that serves as any more than a handle, to be manipulated directly by thousands of disparate clients in a complex codebase. Countless dependencies on the granular `Particle` object would limit your ability to centrally optimize anything beyond the level of a single particle. It wouldn't allow central optimizations that do many things with multiple particles at once, in parallel or vectorized, if all the clients work with one particle at a time in single-threaded scalar code. It wouldn't allow you to coalesce and interleave the data fields of multiple particles for, say, an efficient SoA representation, when each individual particle's fields have to be stored inside a scalar `Particle` object. With such a design you're trapped in the inefficient realm of working with a single AoS particle at a time, possibly with memory waste from padding and hidden per-object data fields. So such a system might instead expose a `ParticleSystem` interface with functions to manipulate many particles at once, instead of one particle at a time through a scalar `Particle` interface.
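As a rough illustration (the `ParticleSystem` shape and field names below are my own invention, just a sketch of the idea), a bulk interface frees the implementation to use an SoA layout internally:

```java
// Hypothetical sketch: a coarse ParticleSystem whose implementation is free
// to store particle fields in SoA form and update them in bulk.
public final class ParticleSystem {
    // SoA layout: each field of all particles is stored contiguously,
    // instead of one Particle object per particle (AoS).
    private final float[] posX, posY;
    private final float[] velX, velY;
    private final int count;

    public ParticleSystem(int count) {
        this.count = count;
        posX = new float[count];
        posY = new float[count];
        velX = new float[count];
        velY = new float[count];
    }

    // Bulk operation: clients ask the system to advance all particles at
    // once, leaving it free to vectorize, parallelize, or reorder the work.
    public void integrate(float dt) {
        for (int i = 0; i < count; i++) {
            posX[i] += velX[i] * dt;  // contiguous access, SIMD-friendly
            posY[i] += velY[i] * dt;
        }
    }
}
```

Since clients only ever request bulk operations, the storage layout can change (SoA, tiling, SIMD-friendly padding) without breaking a single call site.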
Similar thing for your case. If you want to eliminate many redundant database queries, your starting point for design should be how to represent, access, store, and manipulate data so that you perform the minimum number of queries; I'd actually suggest doing this from a central place, in bulk, initially (see the sketch below). The interfaces come second. When the interfaces come first, you often end up with data that is represented, accessed, and/or manipulated inefficiently, because so many granular classes end up well-coordinated in terms of interfaces and high-level design but not in terms of data access and storage patterns.
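Purely as a hedged sketch (the `UserRepository` and `User` names are hypothetical, and the actual database call is left as a placeholder), a central bulk loader might look like this:

```java
import java.util.*;

// Hypothetical sketch: a central repository that loads many users in one
// query instead of letting each client issue its own per-user lookup.
final class UserRepository {
    private final Map<Long, User> cache = new HashMap<>();

    // One bulk query (e.g. WHERE id IN (...)) replaces N granular queries.
    public Map<Long, User> findByIds(Collection<Long> ids) {
        List<Long> missing = new ArrayList<>();
        for (Long id : ids)
            if (!cache.containsKey(id)) missing.add(id);
        if (!missing.isEmpty())
            for (User u : queryUsersWhereIdIn(missing))  // single round trip
                cache.put(u.id(), u);
        Map<Long, User> result = new HashMap<>();
        for (Long id : ids) result.put(id, cache.get(id));
        return result;
    }

    private List<User> queryUsersWhereIdIn(List<Long> ids) {
        // Placeholder for the actual database access, e.g. a single
        // "SELECT ... WHERE id IN (?, ...)" built from the id list.
        throw new UnsupportedOperationException("database access goes here");
    }
}

record User(long id, String name) {}
```

The granular interfaces your clients see can still hand out one `User` at a time; the point is that the data access underneath happens in bulk, from one place.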
That's fine if efficiency isn't a primary goal; in that case, you can always make the implementations behind your granular interfaces do whatever they need to do to function, regardless of computational cost. When it is a big concern, though, you should approach design problems with more of a data-oriented mindset so that you don't find yourself in that overly granular inefficiency trap.