Default values are often suggested as part of a failover mechanism for microservices. At a high level, for any service (a microservice here), the nature of an operation can be broadly classified as Read or Write.
- Having default values for Write operations doesn't sound reliable.
Return values for Read operations can roughly be categorized by data size as follows:
- Read returning small/medium-sized data
- Read returning a huge amount of data
Let's assume the source of data is a highly available (HA) cache, used for performance and round-trip avoidance, with its own refresh cycle.
Now, when the cache is down, the failover plan can be:
- When the data size is small, going back to the actual source system and fetching the data over a real-time invocation (assuming the time taken is in the range of milliseconds) seems fine; a sketch of this fallback follows the list.
- When the data size is huge and a real-time invocation takes several minutes, doing it over a synchronous call doesn't seem right.
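For the small-data case, a minimal sketch of the fallback might look like the following. `CacheClient` and `SourceSystemClient` are hypothetical interfaces introduced only for illustration; the assumption is that a miss or an outage on the cache side can be absorbed by a quick synchronous call to the source system.

```java
import java.util.Optional;

interface CacheClient {
    Optional<String> get(String key) throws CacheUnavailableException;
}

interface SourceSystemClient {
    String fetch(String key); // assumed to return within milliseconds for small payloads
}

class CacheUnavailableException extends Exception {}

class ReadWithFallback {
    private final CacheClient cache;
    private final SourceSystemClient source;

    ReadWithFallback(CacheClient cache, SourceSystemClient source) {
        this.cache = cache;
        this.source = source;
    }

    String read(String key) {
        try {
            // Normal path: serve from the HA cache, falling back to the source on a miss.
            return cache.get(key).orElseGet(() -> source.fetch(key));
        } catch (CacheUnavailableException e) {
            // Cache is down: fall back to a real-time call to the source system.
            return source.fetch(key);
        }
    }
}
```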
The solutions I can think of are as follows:
- Keep the actual data in a persistent store that is itself highly available and use it as a fallback. Data availability is then governed by the HA policy of the persistent store.
- Use some kind of per-request caching. The request cache can have a fixed upper limit on size and keep only the latest responses. It can be reset periodically (with the same frequency as the HA cache refresh) after checking the health of the HA cache: if the HA cache is available, the request cache is cleared, otherwise it retains its last state. This essentially moves the data-availability assurance to the platform hosting the microservice(s); see the sketch after this list.
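The first option is structurally the same as the earlier fallback sketch, with the HA persistent store standing in for the source system. For the second option, below is a minimal sketch of a size-bounded request cache that is cleared on a schedule only when the HA cache reports healthy. `HealthCheck`, `record`, and `lookup` are assumed names used for illustration; the LRU bound uses a plain `LinkedHashMap`.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface HealthCheck {
    boolean isHaCacheHealthy();
}

class RequestCache {
    private static final int MAX_ENTRIES = 1000; // fixed upper limit on size (assumed value)

    // Access-ordered LinkedHashMap evicting the eldest entry beyond the limit (simple LRU).
    private final Map<String, String> latest = Collections.synchronizedMap(
            new LinkedHashMap<>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

    void record(String key, String response) { latest.put(key, response); }

    String lookup(String key) { return latest.get(key); }

    // Reset with the same frequency as the HA cache refresh, but only if it is healthy;
    // otherwise retain the last known state so it can serve as the fallback.
    void scheduleReset(HealthCheck health, long refreshIntervalMinutes) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            if (health.isHaCacheHealthy()) {
                latest.clear();
            }
        }, refreshIntervalMinutes, refreshIntervalMinutes, TimeUnit.MINUTES);
    }
}
```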
It would be really helpful to know from the community which of the above is a better fit, or whether there is a better way of handling the problem described in (2).