
Right now I'm trying to create a heap that requires 8-byte alignment, and I've been looking around online for good methods that would minimize the amount of fragmentation. Less fragmentation is the main goal, so even approaches that are considerably slower are worth considering (though speed is also a secondary goal here).

Has anyone had experience with heaps who could let me know what to watch out for when going about this? Any input is appreciated.

Dark Templar
  • Heap like in "http://en.wikipedia.org/wiki/Heap_%28data_structure%29"? A binary max heap, for example, never gets fragmented. So what the heck are you talking about? – Doc Brown Oct 11 '11 at 18:29
  • @DocBrown: Looking at the mention of alignment and OP's previous questions, my guess is "heap as in dynamic memory allocation". –  Oct 11 '11 at 19:26
  • Yup, that is correct! – Dark Templar Oct 12 '11 at 01:24

1 Answer


A coalescing heap will alleviate (but not avoid) fragmentation by coalescing adjacent free blocks. Whenever you free a block, look at the next and previous blocks in your list. If either (or possibly both) of them are free, then you can combine them to form one large block. Some implementations defer the coalescing until allocation fails, at which point the allocator will walk the heap and look for adjacent free blocks to merge until it gets one large enough to satisfy the request. This gives better average-case performance, but worst-case performance is much worse. Merging blocks at deallocation time amortizes the cost over all deallocations, so there's not much variance between best-case and worst-case performance.
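
Here's a minimal sketch of the coalesce-on-free variant in C, assuming a fixed arena, a doubly linked list of blocks kept in address order, and 8-byte-aligned payloads. The names (block_t, heap_init, heap_alloc, heap_free) and the arena size are only illustrative, not taken from any particular allocator:

    #include <stddef.h>
    #include <stdint.h>

    #define ALIGN 8
    #define ALIGN_UP(n) (((n) + ALIGN - 1) & ~(size_t)(ALIGN - 1))

    typedef struct block {
        size_t        size;   /* payload size, always a multiple of ALIGN */
        int           free;   /* 1 if this block is available             */
        struct block *prev;   /* neighbour at the lower address           */
        struct block *next;   /* neighbour at the higher address          */
    } block_t;                /* 16 or 32 bytes on typical targets, so payloads stay 8-aligned */

    static _Alignas(ALIGN) uint8_t arena[64 * 1024];   /* illustrative arena size */
    static block_t *head;

    void heap_init(void) {
        head = (block_t *)arena;
        head->size = sizeof(arena) - sizeof(block_t);
        head->free = 1;
        head->prev = head->next = NULL;
    }

    void *heap_alloc(size_t n) {
        n = ALIGN_UP(n);
        for (block_t *b = head; b; b = b->next) {
            if (!b->free || b->size < n)
                continue;
            /* Split off the tail if it can hold a header plus one aligned unit. */
            if (b->size >= n + sizeof(block_t) + ALIGN) {
                block_t *rest = (block_t *)((uint8_t *)(b + 1) + n);
                rest->size = b->size - n - sizeof(block_t);
                rest->free = 1;
                rest->prev = b;
                rest->next = b->next;
                if (b->next) b->next->prev = rest;
                b->next = rest;
                b->size = n;
            }
            b->free = 0;
            return b + 1;   /* payload starts right after the header */
        }
        return NULL;        /* no free block is large enough */
    }

    void heap_free(void *p) {
        if (!p) return;
        block_t *b = (block_t *)p - 1;
        b->free = 1;
        /* Merge with the next block if it is free. */
        if (b->next && b->next->free) {
            b->size += sizeof(block_t) + b->next->size;
            b->next = b->next->next;
            if (b->next) b->next->prev = b;
        }
        /* Merge with the previous block if it is free. */
        if (b->prev && b->prev->free) {
            b->prev->size += sizeof(block_t) + b->size;
            b->prev->next = b->next;
            if (b->next) b->next->prev = b->prev;
        }
    }

Keeping the list in address order is what makes the neighbour check at free time O(1); with a size-ordered free list you'd need boundary tags (or a heap walk) to find the physically adjacent blocks.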

TMN
  • Hmm... I still don't understand what you mean by 'best-case', 'average-case', and 'worst-case' here. – Dark Templar Oct 12 '11 at 01:29
  • If you don't merge the blocks on deallocation, then your deallocation call returns faster, giving you better performance. Worst-case performance takes a hit because an allocation may walk the entire heap and find no blocks large enough to satisfy the request. So now the allocator has to walk the heap again, looking for adjacent free blocks to merge until it is able to create one large enough to satisfy the request. If you merge at deallocation, then you never have to walk the heap more than once. – TMN Oct 12 '11 at 12:10
  • Hmm, I see. Overall, though, does it save more time (cumulatively) to use the merge-when-allocation-fails technique? – Dark Templar Oct 12 '11 at 17:25
  • It depends on your usage patterns and implementation. Merging on allocation hurts when your heap gets badly fragmented, so you wind up merging several small blocks. You can take a hybrid approach and make a "merge pass" every N allocations (say every 1000) or whenever some condition is met (like over N% of the heap is allocated, or there are over N separate free blocks). If this is for a general-purpose allocator, I'd recommend having some kind of tuning parameter, so you can specify which strategy to use. – TMN Oct 12 '11 at 18:38
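
A rough sketch of that hybrid idea, reusing the block_t list from the sketch in the answer above: heap_free only marks the block, and a full merge pass runs every MERGE_INTERVAL deallocations (the counter is on frees rather than allocations purely to keep the sketch short). The threshold and function names are made up for illustration:

    #define MERGE_INTERVAL 1000        /* illustrative threshold */

    static unsigned frees_since_merge;

    /* Walk the whole list once and merge every run of adjacent free blocks. */
    static void merge_pass(void) {
        for (block_t *b = head; b && b->next; ) {
            if (b->free && b->next->free) {
                b->size += sizeof(block_t) + b->next->size;
                b->next = b->next->next;
                if (b->next) b->next->prev = b;
            } else {
                b = b->next;
            }
        }
    }

    void heap_free_deferred(void *p) {
        if (!p) return;
        ((block_t *)p - 1)->free = 1;   /* just mark; don't merge yet */
        if (++frees_since_merge >= MERGE_INTERVAL) {
            merge_pass();
            frees_since_merge = 0;
        }
    }

The same merge_pass could also be triggered from heap_alloc when no free block is large enough, which is the merge-when-allocation-fails variant discussed above.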