I guess the best answer is that it depends. In my experience there are a lot of factors that go into choosing a caching algorithm.
Factors to consider
- Read/write balance. (What percentage of accesses are reads versus writes?)
- Amount of cache available.
- Type of media behind the cache. (Slow SATA drives or fast SSDs?)
- Hits versus misses. (How often are things rewritten or reread?)
- Average access size. (This feeds into choosing the page size.)
- How expensive reads and writes are.
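A simple way to quantify several of these factors is to keep counters as the cache runs and derive ratios from them. Here is a minimal sketch; all names are illustrative, not from any particular cache library:

```python
# Minimal sketch of cache statistics used to weigh the factors above.
# CacheStats and its fields are illustrative names, not a real API.
class CacheStats:
    def __init__(self):
        self.reads = 0
        self.writes = 0
        self.hits = 0
        self.misses = 0

    def record(self, is_read, hit):
        """Record one access: whether it was a read and whether it hit."""
        if is_read:
            self.reads += 1
        else:
            self.writes += 1
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def read_ratio(self):
        total = self.reads + self.writes
        return self.reads / total if total else 0.0

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A workload with a high read ratio and a high hit ratio favors recency-based eviction; a write-heavy workload pushes you toward write-optimized schemes.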
Once you have weighed all the different factors, you need to find the cache algorithm that handles them best. For example, suppose you have an application with a lot of writes, some rewrites, reads of recently written data, and some sort of spinning media behind the cache. In that case you would want a hybrid caching algorithm: something like Wise Ordering for Writes (WOW) for the write data, and an LRU algorithm for data that has been read from disk. The reason is that disk accesses are very expensive, so the WOW algorithm makes it more efficient to write data out, while LRU keeps frequently accessed data in cache.
If instead you have SSDs, which have very fast access times, you might lean toward a plain LRU algorithm, since disk accesses are relatively inexpensive.
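For the read side mentioned above, an LRU cache can be sketched in a few lines. This is an illustrative implementation, not the code any particular cache uses; the capacity handling and `OrderedDict` approach are my own choices:

```python
from collections import OrderedDict

# Minimal LRU cache sketch for the read path described above.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.items:
            return None  # miss: caller fetches from disk, then calls put()
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used
```

The hybrid case from the previous paragraph would pair something like this with a separate write queue ordered for efficient destaging.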
So really what I want to say is that there is no "best" answer. The best answer is to know the factors that apply to you and choose an algorithm that handles them best.
How to find the algorithm for you
Profile your system. This usually involves adding code to keep statistics on memory accesses. By profiling you can see which factors matter most to you.
In the past I have added code to track all memory accesses over a period of time, then looked through the trace for patterns: re-reads, re-writes, sequential access, random access, and so on.
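One way to mine such a trace for those patterns is a post-hoc pass over the recorded accesses. This sketch assumes each access was logged as an `(offset, is_read)` tuple and that a sequential access means the next offset follows the previous one by exactly one block; both the trace format and the block size are assumptions for illustration:

```python
# Sketch of trace analysis: count re-reads, re-writes, and sequential
# accesses from a recorded list of (offset, is_read) tuples.
# The trace format and block_size are illustrative assumptions.
def analyze_trace(trace, block_size=4096):
    seen_reads, seen_writes = set(), set()
    rereads = rewrites = sequential = 0
    prev_offset = None
    for offset, is_read in trace:
        if is_read:
            if offset in seen_reads:
                rereads += 1
            seen_reads.add(offset)
        else:
            if offset in seen_writes:
                rewrites += 1
            seen_writes.add(offset)
        # An access is "sequential" if it immediately follows the previous one.
        if prev_offset is not None and offset == prev_offset + block_size:
            sequential += 1
        prev_offset = offset
    return {"rereads": rereads, "rewrites": rewrites, "sequential": sequential}
```

High re-read counts point toward recency-based eviction like LRU; high re-write counts and long sequential runs point toward write-ordering schemes.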
Once you have identified what matters, look at the different types of caching algorithms to see which one handles those patterns best.