I am using Azure Storage Blobs to persist certain models per tenant. While processing items for each tenant, these models have to be pulled into memory, used for certain operations, updated, and pushed back to blob storage. Over time the size of the models seems to increase, so overall latency grows and performance degrades.
I'm using an in-memory cache to support Get operations; however, Put operations still take a decent amount of time, since the whole model is re-uploaded on every update. Roughly, the cycle looks like the sketch below.
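This is a minimal sketch of the pattern described, assuming the Python azure-storage-blob SDK, pickle serialization, and a plain dict as the cache; the container name, blob layout, and function names are placeholders, not my actual code:

```python
import pickle

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("tenant-models")

_cache: dict[str, object] = {}  # in-memory cache keyed by tenant id


def get_model(tenant_id: str):
    # Get: served from the in-memory cache when possible,
    # otherwise downloaded and deserialized from blob storage.
    if tenant_id in _cache:
        return _cache[tenant_id]
    blob = container.get_blob_client(f"{tenant_id}/model.bin")
    model = pickle.loads(blob.download_blob().readall())
    _cache[tenant_id] = model
    return model


def put_model(tenant_id: str, model) -> None:
    # Put: the entire (growing) model is re-serialized and re-uploaded
    # on every update, which is where most of the latency comes from.
    blob = container.get_blob_client(f"{tenant_id}/model.bin")
    blob.upload_blob(pickle.dumps(model), overwrite=True)
    _cache[tenant_id] = model
```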
Splitting the model into smaller components is possible, but it increases the total number of requests and takes away the atomic nature of model updates.
Is there a general approach for such cases, or any other recommendations to solve this?