
Definition of the lazy loading pattern from Wikipedia:

Lazy loading is a design pattern commonly used in computer programming to defer initialization of an object until the point at which it is needed. It can contribute to efficiency in the program's operation if properly and appropriately used. The opposite of lazy loading is eager loading.

The counterpart of that pattern will be to unload code and data as soon as they are not needed any longer. Is there a name for such a pattern?

R Sahu
  • That sounds like deterministic finalization / cleanup – Matthew Jan 24 '17 at 20:17
  • Mmm, "garbage collection"? RAII? Depends on how soon do you determine that the data are not needed any longer. – 9000 Jan 24 '17 at 20:17
  • @Matthew, I had never heard of it before but deterministic finalization sounds like the right name. – R Sahu Jan 24 '17 at 20:27
  • @9000. "garbage collection" is the right idea but deterministic finalization sounds like a better name. It is not RAII though, rather the inverse of it. – R Sahu Jan 24 '17 at 20:30
  • C++ also has smart pointers which will destruct an object when all references to it have been removed. – Matthew Jan 24 '17 at 20:54
  • @Matthew, I am familiar with the mechanics. I was curious to see whether there was a name for it that is widely understood and accepted amongst designers and developers. – R Sahu Jan 24 '17 at 21:02
  • @RSahu: well, yes, RAII is a poor term, it's the inverse in the meaning of the "finalization", but is often used to refer to the way scope-allocated objects get orderly destroyed on the scope exit. – 9000 Jan 24 '17 at 21:20
  • Holding onto memory after it is no longer needed is called a memory leak. So the answer would be called not leaking memory. – candied_orange Jan 25 '17 at 09:11
  • @CandiedOrange, would you call it memory leak if resources are leaked at some point during program execution but not immediately after it is not needed? – R Sahu Jan 25 '17 at 15:51
  • RAII could be called "eager release" and GC "lazy release". People don't tend to put the "eager" on the front of "loading" except to compare to "lazy loading" – Caleth Mar 12 '18 at 09:51

1 Answer


Lazy Loading the design pattern is the opposite of Eager Loading the design pattern.

This is when, instead of deferring the load in the hope of avoiding it, you perform the load immediately so that later operations are never delayed. It is literally the difference between "wait, the user might change their mind..." and "just get it done so we don't have to worry about it later...".

Use eager loading when delays during peak processing are unacceptable, when the code complexity of deferred behaviour is too high, or when preallocation guarantees algorithmic success (e.g. allocating memory for the sorted output up front).

Use lazy loading when the resource is used infrequently, when it takes too much effort/complexity to keep it loaded, or when the relevant resources can't be obtained until close to their use (such as live metrics).
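A minimal C++ sketch of the difference, where LoadConfig stands in for some hypothetical expensive operation (the names are illustrative only):

```cpp
#include <optional>
#include <string>

// Hypothetical expensive operation.
std::string LoadConfig() { return "lots of settings"; }

// Eager loading: pay the cost up front so later operations never stall.
struct EagerService {
    std::string config = LoadConfig();   // loaded at construction
};

// Lazy loading: defer the cost, hoping it is never paid at all.
struct LazyService {
    std::optional<std::string> config;   // empty until first needed
    const std::string& get_config() {
        if (!config) config = LoadConfig();  // first use triggers the load
        return *config;
    }
};
```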

However as you've described it, I think you are asking about:

Lazy Loading the property is the opposite of Eager De-allocation the property.

These are simply statements about resource management, i.e. acquire it, use it, and release it. The lazy/eager portion means: load it just before its first use, and release it just after its last use.

There are several Design Patterns/Principles that have/may have this property:

  • Garbage Collection
  • Reference Counting
  • Quiescent States
  • RAII (Resource Acquisition Is Initialisation)
  • Algorithmic Resource Management (Not a design pattern, but a principle for designing self-contained algorithms)

Garbage Collection requires no knowledge of the algorithm being run, and will destroy those objects that can provably no longer affect the course of the program. Most languages provide some mechanism for an object to apply specialised cleanup of resources just before it is removed from memory. Usually this is implemented by some form of memory walk from certain always-accessible locations such as registers, the stack, and static thread-local and static global memory. This pattern does not guarantee that all inaccessible objects will be de-allocated, just those that can be detected; however, most languages do ensure all inaccessible objects are detected.
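A toy mark-and-sweep sketch of that memory walk (illustrative only, not how any particular collector is implemented):

```cpp
#include <unordered_set>
#include <vector>

struct Obj {
    std::vector<Obj*> refs;   // outgoing references to other objects
    bool marked = false;
    ~Obj() { /* specialised cleanup ("finalizer") would run here */ }
};

struct Heap {
    std::unordered_set<Obj*> all;   // every allocated object
    std::vector<Obj*> roots;        // always-accessible locations

    Obj* alloc() { Obj* o = new Obj; all.insert(o); return o; }

    void mark(Obj* o) {
        if (!o || o->marked) return;
        o->marked = true;
        for (Obj* r : o->refs) mark(r);   // walk the reference graph
    }

    // Anything not reachable from a root can provably no longer
    // affect the program, so it is deleted.
    void collect() {
        for (Obj* o : all) o->marked = false;
        for (Obj* r : roots) mark(r);
        for (auto it = all.begin(); it != all.end(); ) {
            if (!(*it)->marked) { delete *it; it = all.erase(it); }
            else ++it;
        }
    }
};
```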

Reference Counting requires that the references used by the running algorithm perform some extra bookkeeping (the count) during execution. When the count hits zero, the object is no longer needed and can be deleted immediately. Unfortunately it is possible to create loops in memory that keep objects alive after they should have been de-allocated, so like Garbage Collection this pattern gives no hard guarantees. Some discipline and analysis of the data structures can ensure that this method always de-allocates inaccessible objects.
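In C++ this pattern ships as std::shared_ptr, with std::weak_ptr as the standard discipline for breaking the loops just described:

```cpp
#include <memory>

// The count is the extra bookkeeping; deletion is immediate at zero.
struct Node {
    std::shared_ptr<Node> next;  // owning reference
    std::weak_ptr<Node>   prev;  // non-owning: breaks the cycle below
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;   // b now has two owners
    b->prev = a;   // weak_ptr does not increment a's count
    // Had prev been a shared_ptr, a and b would own each other (a loop)
    // and neither would ever be de-allocated. With weak_ptr, both are
    // destroyed here, immediately after their last use.
}
```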

Quiescent States create a form of memory transaction, useful with copy-on-write/functional memory structures. During execution a thread updates the functional structure by creating new parts and linking them up with old pieces that haven't changed. Some of the old parts now need to be deleted, but another thread might still be reading them, or need to read them, maybe even this thread. So instead the thread registers the object for de-allocation with the reclaimer. Later the thread reaches a quiescent state (it has no more work to do in this version of the memory space) and signals the reclaimer that all its future work will only touch the current versions of the structures. The reclaimer detects and reclaims objects which were registered before the oldest quiescent state across all threads. Theoretically some inaccessible objects will still exist somewhere in memory, but a given thread alone cannot determine which ones meet this criterion without interacting with the other threads to ensure no references exist in their registers, or performing shared bookkeeping, which requires atomic operations/locks. It does provide the guarantee that no flagged inaccessible object will remain from a time period older than the oldest active version.
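A compressed sketch of that bookkeeping, assuming one epoch counter per reader thread and a single reclaimer (QSBR-style; the epoch scheme and every name here are illustrative, and the memory-ordering details are simplified, so treat it as a diagram rather than a production implementation):

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <functional>
#include <mutex>
#include <vector>

class Reclaimer {
    std::atomic<std::uint64_t> global_epoch{1};
    std::vector<std::atomic<std::uint64_t>*> threads;  // one counter per thread
    struct Retired { std::uint64_t epoch; std::function<void()> free; };
    std::vector<Retired> retired;
    std::mutex m;
public:
    // Each thread owns an epoch counter; it starts at 0 ("never quiescent").
    void register_thread(std::atomic<std::uint64_t>& epoch) {
        std::lock_guard<std::mutex> g(m);
        threads.push_back(&epoch);
    }
    // Called by a thread at a quiescent state: it holds no references into
    // any shared structure, so it adopts the current global epoch.
    void quiescent(std::atomic<std::uint64_t>& epoch) {
        epoch.store(global_epoch.load(std::memory_order_acquire),
                    std::memory_order_release);
    }
    // Instead of deleting a detached node, register it for de-allocation.
    void retire(std::function<void()> free) {
        std::lock_guard<std::mutex> g(m);
        retired.push_back({global_epoch.load(std::memory_order_acquire),
                           std::move(free)});
    }
    // Reclaim everything retired before the oldest quiescent state
    // announced across all threads.
    void reclaim() {
        std::lock_guard<std::mutex> g(m);
        std::uint64_t oldest = global_epoch.fetch_add(1);  // open a new epoch
        for (auto* t : threads)
            oldest = std::min(oldest, t->load(std::memory_order_acquire));
        auto it = retired.begin();
        while (it != retired.end()) {
            if (it->epoch < oldest) { it->free(); it = retired.erase(it); }
            else ++it;
        }
    }
};
```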

RAII operates best with strong stack de-allocation semantics. Here objects are purposely constructed on the stack, or on the heap with a lifetime strongly tied to a (possibly movable) stack location. The idea is that when the function ends, objects on the stack are de-allocated, causing any constituent objects to be de-allocated in turn. Any associated heap allocation or system handle is cleaned up as either a direct or nested object. This necessitates that data structures managed this way naturally form trees, and that each resource (be it memory or otherwise) has a one-to-one mapping to an object, because the destructor can only be called on a completely constructed object: if the constructor fails after the resource was allocated, the resource will leak. This does guarantee that inaccessible objects are released, because the compiler guarantees it. In a language without a strong stack-unwinding guarantee this pattern does not hold the same guarantees.
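A minimal sketch of that one-to-one mapping, wrapping a C FILE* handle (the class name and error handling are illustrative):

```cpp
#include <cstdio>
#include <stdexcept>

// One object per resource, lifetime tied to a stack scope.
class File {
    std::FILE* f;
public:
    explicit File(const char* path) : f(std::fopen(path, "rb")) {
        // If acquisition fails, no object exists and no destructor runs,
        // so there is nothing to leak.
        if (!f) throw std::runtime_error("open failed");
    }
    File(const File&) = delete;             // preserve the one-to-one mapping
    File& operator=(const File&) = delete;
    ~File() { std::fclose(f); }             // released on scope exit
    std::FILE* get() const { return f; }
};

void read_header(const char* path) {
    File file(path);        // acquired just before first use
    // ... read from file.get() ...
}                           // released here, even if an exception unwinds
```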

Algorithmic Resource Management (NOT a design pattern, but a principle of algorithm design) is the broadest and most precise way of providing the Lazy Loading/Eager De-allocation property. It relies on intimate knowledge of the exact algorithm and its exact behaviour, placing the allocation/de-allocation at the last/first possible statement that is algorithmically correct. It is the GoTo statement of all the possible ways to control resource/memory reclamation: very powerful, it can do anything, and when it goes wrong it has undesirable consequences. There are no guarantees here beyond those that are provable from the algorithm as implemented.
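For example (a contrived sketch; the names and the clamping step are purely illustrative, the point is only where the allocation and the release sit):

```cpp
#include <vector>

// Placement of the allocation/de-allocation follows from knowing this
// exact algorithm: nothing after the inner scope needs the scratch buffer.
std::vector<int> frequencies(const std::vector<int>& samples, int buckets) {
    std::vector<int> counts(buckets, 0);   // preallocated: guarantees success
    {
        std::vector<int> clamped;          // allocated at its first use
        clamped.reserve(samples.size());
        for (int s : samples)
            clamped.push_back(s < 0 ? 0 : (s >= buckets ? buckets - 1 : s));
        for (int c : clamped) ++counts[c];
    }   // clamped released at the earliest algorithmically correct point
    // ... any further work uses only counts ...
    return counts;
}
```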

Kain0_0