
I am using Azure Storage Blobs to persist certain models per tenant. While processing items for each tenant, these models have to be pulled into memory, used for certain operations, updated, and pushed back to blob storage. Over time the models grow in size, so overall latency increases and performance degrades.

I'm using an in-memory cache to support Get operations; however, Put operations still take a decent amount of time.
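Roughly, the per-tenant flow looks like the sketch below. This is a minimal illustration assuming the Python azure-storage-blob (v12) SDK; the container name, blob naming scheme, pickle serialization, and gzip compression step are simplified stand-ins for the actual setup:

```python
import gzip
import pickle

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("tenant-models")

cache = {}  # tenant_id -> deserialized model (backs the Get path)

def get_model(tenant_id):
    # Gets are mostly served from the in-memory cache.
    if tenant_id not in cache:
        raw = container.download_blob(f"{tenant_id}/model").readall()
        cache[tenant_id] = pickle.loads(gzip.decompress(raw))
    return cache[tenant_id]

def put_model(tenant_id, model):
    # Puts serialize and compress the whole model and overwrite the blob;
    # this full rewrite is the part that slows down as the model grows.
    payload = gzip.compress(pickle.dumps(model))
    container.upload_blob(f"{tenant_id}/model", payload, overwrite=True)
    cache[tenant_id] = model
```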

Splitting the model into smaller components is possible, but it increases the total number of requests made and takes away the atomic nature of model updates.
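For illustration, a split along component boundaries would look something like this (reusing the imports and `container` from the sketch above; the component names are hypothetical). Each update becomes several requests, and a failure partway through leaves the stored model inconsistent:

```python
def put_model_split(tenant_id, model):
    # One request per component instead of one per model. Blob storage
    # has no multi-blob transaction, so if an upload in the middle fails,
    # the components on the server no longer form a consistent model.
    for name, component in model.components.items():  # e.g. "weights", "stats"
        payload = gzip.compress(pickle.dumps(component))
        container.upload_blob(f"{tenant_id}/{name}", payload, overwrite=True)
```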

Is there a general approach for cases like this, or other recommendations for solving it?

If only there were a technology that could efficiently persist structured data, that was optimized for accessing parts of the data without having to read and write everything, and that allowed bundling multiple changes into an atomic operation. /s – Euphoric Jul 03 '18 at 13:04

@Euphoric I have tried other NoSQL databases, which charge on request rates, and found them to have scale issues. With blobs, we can apply compression on top of the serialized model and get around a 10:1 size reduction. However, this takes away the ability to index portions of the models and operate on them. – mebjas Jul 04 '18 at 07:40

0 Answers