44

I have always worked on projects where caching was done in the DAL: just before making the call to the database, it checks whether the data is already in the cache and, if it is, skips the database call and returns the cached data instead.
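
For illustration, a minimal sketch of that pattern (the class, the Customer type, and loadFromDatabase are just hypothetical placeholders, not code from a real project):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical DAL class: check the cache before hitting the database.
    public class CustomerDal {
        private final Map<Long, Customer> cache = new ConcurrentHashMap<>();

        public Customer getCustomer(long id) {
            Customer cached = cache.get(id);
            if (cached != null) {
                return cached;                       // cache hit: skip the database entirely
            }
            Customer loaded = loadFromDatabase(id);  // the actual DB query
            cache.put(id, loaded);
            return loaded;
        }

        private Customer loadFromDatabase(long id) {
            // Placeholder for the real query (JDBC, ORM, etc.).
            return new Customer(id, "name-" + id);
        }

        public record Customer(long id, String name) {}
    }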

I just recently read about caching at the business layer, i.e. caching entire business objects. One advantage I can see straight away is much better response times.

When would you prefer one over the other? And is caching at the business layer a common practice?

gnat
Emma
  • Is the performance of your applications so critical that caching in the business layer is preferable over the clarity of an additional call to a repository or DAL layer? – JDT Feb 05 '15 at 14:32
  • No, it's not, and after reading the replies I think I would stick to just caching in the DAL. Cheers. – Emma Feb 05 '15 at 16:28
  • You should consider caching above your Business Layer, and also think about scaling. – AK_ Feb 06 '15 at 16:34

3 Answers

34

This is probably too broad for a definitive answer. Personally, I feel that a data access layer is the better place for caching, simply because it is supposed to be very simple - records go in and out and that's it.

A business layer implements many additional rules of higher complexity, so it's better if it doesn't also have to manage per-object availability concerns on top of multiple-object consistency concerns in the same class (or even the same method) - that would be a blatant violation of the SRP.

(Of course, I only reached that insight after my service classes had grown to unmanageable complexity when they tried to do both caching and configuration simultaneously. There is no better teacher than experience, but the price sure is steep.)
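
One way to keep those responsibilities apart is a caching decorator that wraps the plain repository, so neither the DAL nor the business service carries both concerns. A minimal sketch in Java; all names here are hypothetical:

    import java.util.Map;
    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical repository interface used by the business layer.
    interface AccountRepository {
        Optional<Account> findById(long id);
    }

    // Plain DAL implementation: only knows how to fetch records.
    class DbAccountRepository implements AccountRepository {
        @Override
        public Optional<Account> findById(long id) {
            // Placeholder for the real database query.
            return Optional.of(new Account(id, 0.0));
        }
    }

    // Caching lives in a decorator, so neither the DAL nor the
    // business service takes on the extra responsibility.
    class CachingAccountRepository implements AccountRepository {
        private final AccountRepository delegate;
        private final Map<Long, Account> cache = new ConcurrentHashMap<>();

        CachingAccountRepository(AccountRepository delegate) {
            this.delegate = delegate;
        }

        @Override
        public Optional<Account> findById(long id) {
            Account hit = cache.get(id);
            if (hit != null) {
                return Optional.of(hit);
            }
            Optional<Account> loaded = delegate.findById(id);
            loaded.ifPresent(a -> cache.put(id, a));
            return loaded;
        }
    }

    record Account(long id, double balance) {}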

Kilian Foth
  • Why does caching need to be complex? It can be done with AOP and a couple of annotations. Is it still a violation of the SRP? Why isn't it one when done in the DAL? Also, IMHE I have never seen service classes "too complex" to be cached; independently of its complexity, a service can be seen as a black box and its result can be cached. – user1075613 Nov 12 '18 at 19:53
27

Data access and persistence/storage layers are irresistibly natural places for caching. They're doing the I/O, which makes them a handy, easy place to insert caching. I daresay that almost every DAL or persistence layer will, as it matures, be given a caching function--if it isn't designed that way from the very start.

The problem is intent. DAL and persistence layers deal with relatively low-level constructs--for example, records, tables, rows, and blocks. They don't see the "business" or application-layer objects, and don't have much insight into how they're being used at higher levels. When they see a handful of rows or a dozen blocks being read or written, it's not clear what they represent. "The Jones account we're currently analyzing" doesn't look much different from "some basic taxation rate reference data the app needs just once, and to which it won't refer again." At this layer, data is data is data.

Caching at the DAL/persistence layer risks having the "cold" tax reference data sitting there, pointlessly occupying 12.2MB of cache and displacing some account information that will, in fact, be intensively used in just a minute. Even the best cache managers are dealing with scant knowledge of the higher-level data structures and connections, and little insight into what operations are coming soon, so they fall back on guesstimation algorithms.

In contrast, application- or business-layer caching isn't nearly so neat. It requires inserting cache management operations or hints in the middle of other business logic, which makes the business code more complex. But the tradeoff is that, having more knowledge of how macro-level data is structured and what operations are coming up, it has a much better opportunity to approximate optimal ("clairvoyant" or Bélády MIN) caching efficiency.
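
As a rough illustration (hypothetical names, not a prescribed design), the business layer can express that knowledge directly: pin what it knows it will revisit, and bypass the cache for one-shot reads.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical business-layer cache: the caller decides what is worth keeping.
    class AnalysisSession {
        private final Map<String, Account> pinned = new ConcurrentHashMap<>();

        // "The Jones account we're currently analyzing": load once, keep it around.
        Account accountUnderAnalysis(String accountId) {
            return pinned.computeIfAbsent(accountId, this::loadAccount);
        }

        // "Tax reference data needed just once": read it, but don't cache it.
        double lookupTaxRate(String region) {
            return loadTaxRate(region);
        }

        private Account loadAccount(String accountId) {
            return new Account(accountId, 0.0);  // placeholder for the real load
        }

        private double loadTaxRate(String region) {
            return 0.2;                          // placeholder for the real lookup
        }

        record Account(String id, double balance) {}
    }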

Whether inserting cache management responsibility into business/application code makes sense is a judgment call, and will vary by application. In many cases, while it's known that DAL/persistence layers won't get it "perfectly right," the tradeoff is that they can do a pretty good job, that they do so in an architecturally "clean" and much more easily testable way, and that low-level caching avoids increasing the complexity of business/app code.

Lower complexity encourages higher correctness and reliability, and faster time-to-market. That is often considered a great tradeoff--less perfect caching, but better-quality, more timely business code.

Jonathan Eunice
  • Thanks for the reply. After reading yours and others' replies, I think I definitely don't need to cache in the business layer. It would just add to the overall complexity of the product. – Emma Feb 05 '15 at 16:34
  • One problem with the "layers" model is that efficient caching mechanisms often need to use information that isn't available on a single layer. What would you think, though, of having a business layer pass "hints" to the data layer about its overall "plan"? The data layer could initially ignore most such hints, but if a bottleneck was found, some logic could be added which, when given certain hints, would alter the caching strategies in a business-specific way. – supercat Feb 05 '15 at 18:44
  • Excellent point, @supercat. I was going to mention a hinting/pragma strategy, but the answer was long already. But you're exactly right. Business layer hints to lower layers about how to prioritize caches, or "what to pin," are a pretty common/useful way to get higher level caching without making the business code do it all, or become too caught up in managing its own storage hierarchy. – Jonathan Eunice Feb 05 '15 at 19:53
  • @JonathanEunice: A nice thing about hints is that code doesn't need to do much of anything with them initially. A lot of systems have a few obvious bottlenecks that dominate their performance, but it may be hard to predict which ones will be bad enough to matter. Adding a small amount of ugly caching logic in a few critical spots may be better than strewing lots of caching logic in places that really don't matter. – supercat Feb 05 '15 at 20:10
  • Exactly. Especially if you have "pretty good" low-level caching already at the persistence/access layer. You may only need a little added prioritization information to go from "pretty good" to "really good." – Jonathan Eunice Feb 05 '15 at 20:27
  • Can any of you guys point me to any explanation of this 'hinting/pragma' strategy please? I have mostly used the MS enterprise caching library, but what you guys are talking about seems to be much more advanced. I would love to read more about it... @supercat @JonathanEunice – Emma Feb 06 '15 at 00:23
  • @Emma a simple example would be "page pinning" in operating systems. You let the virtual memory manager make most of the decisions, but occasionally say "this page--keep it in real memory!" – Jonathan Eunice Feb 06 '15 at 15:39
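
A very rough sketch of the hinting idea discussed in these comments (all names are hypothetical): the business layer attaches a hint to each read, and the data layer is free to ignore it at first and honour it later if a bottleneck shows up.

    // Hypothetical hint the business layer can attach to a read.
    enum CacheHint { NONE, WILL_REUSE_SOON, READ_ONCE }

    interface RecordStore {
        byte[] read(String key, CacheHint hint);
    }

    class SimpleRecordStore implements RecordStore {
        @Override
        public byte[] read(String key, CacheHint hint) {
            // Initially the hint is ignored; a later version might pin
            // WILL_REUSE_SOON entries and skip caching READ_ONCE ones.
            return fetch(key);
        }

        private byte[] fetch(String key) {
            return new byte[0];  // placeholder for the real I/O
        }
    }
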
19

Caching in the DAL is straightforward and simple

Your DAL is the central data access layer, which means that any and all data access can be controlled through the classes there. Since both reading and persisting happen in that layer, it is equally easy to clear or update cache entries as changes happen.
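
A minimal sketch of that point (invented names): because reads and writes go through the same class, the write path can refresh or evict the cached entry in the same place.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical DAL where the write path keeps the cache consistent.
    class OrderDal {
        private final Map<Long, Order> cache = new ConcurrentHashMap<>();

        Order findById(long id) {
            return cache.computeIfAbsent(id, this::loadFromDb);
        }

        void save(Order order) {
            writeToDb(order);
            cache.put(order.id(), order);  // refresh (or evict) right where the change happens
        }

        private Order loadFromDb(long id) {
            return new Order(id, "NEW");   // placeholder for the real query
        }

        private void writeToDb(Order order) {
            // placeholder for the real write
        }

        record Order(long id, String status) {}
    }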

Caching in the business layer is flexible

Caching in the business layer gives developers the flexibility to decide whether the concrete usage of an object will benefit from caching. Depending on the structure of the application, back-end services or automated processes might change data that is cached in other parts. With caching in the business layer, a developer can decide whether a certain business object may serve possibly stale data in exchange for performance, or must always reflect the most up-to-date state at the expense of performance.
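
For example (a sketch with a hypothetical API), a business service can expose that choice per call site:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical business service: each call site decides whether
    // possibly stale data is acceptable.
    class PricingService {
        private final Map<String, Double> cache = new ConcurrentHashMap<>();

        double getPrice(String sku, boolean allowStale) {
            if (allowStale) {
                // Dashboard-style reads: a slightly old price is fine.
                return cache.computeIfAbsent(sku, this::loadPrice);
            }
            // Checkout-style reads: always fetch the current price.
            double fresh = loadPrice(sku);
            cache.put(sku, fresh);
            return fresh;
        }

        private double loadPrice(String sku) {
            return 9.99;  // placeholder for the real lookup
        }
    }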

JDT