What is Heap and the Heap Manager? Part Two


We're going to continue our look at Heap. At the end of the last post we mentioned Look-Aside Lists and Low Fragmentation Heap. But before we dive into those, let's take a look at Heap Synchronization.

The heap manager supports concurrent access from multiple threads by default. A process can also lock the entire heap and prevent other threads from performing heap operations. This is necessary for operations that must see a consistent heap state across multiple heap calls. To use an analogy - think about trying to perform some data analysis while someone keeps changing the data in the Excel spreadsheet you are using. Your analysis winds up being skewed because the data keeps changing underneath you. If you lock the spreadsheet for your use only, you can complete all of your analysis against consistent data.
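As a quick illustration (this sketch is not from the original article - it simply uses the documented HeapLock, HeapUnlock and HeapWalk APIs, with error handling kept to a minimum), the code below holds the heap lock while walking the heap so that no other thread can allocate or free between the individual calls:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE hHeap = GetProcessHeap();   /* default process heap */
        PROCESS_HEAP_ENTRY entry;

        /* Take the heap lock so other threads cannot allocate or free
           while we walk the heap - the heap stays in a consistent state
           across the multiple HeapWalk calls below. */
        if (!HeapLock(hHeap))
            return 1;

        entry.lpData = NULL;               /* start the walk at the beginning */
        while (HeapWalk(hHeap, &entry))
        {
            if (entry.wFlags & PROCESS_HEAP_ENTRY_BUSY)
                printf("Allocated block at %p, size %lu bytes\n",
                       entry.lpData, (unsigned long)entry.cbData);
        }

        /* Always release the lock, or every other thread that touches
           this heap will block. */
        HeapUnlock(hHeap);
        return 0;
    }

Note that the lock should be held for as short a time as possible, since every other thread that uses the heap will block until HeapUnlock is called.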

Look-aside lists are singly linked lists that support operations such as "push to the list" and "pop from the list" in last in, first out (LIFO) order. There are 128 look-aside lists per heap, which handle allocations up to 1KB on 32-bit systems and up to 2KB on 64-bit platforms. Look-aside lists provide a performance improvement over normal heap allocations because multiple threads can perform allocations and deallocations concurrently without acquiring the heap-wide lock. The heap manager tracks the number of blocks on each list. If a thread requests a block of a size for which no free block is available on the corresponding look-aside list, the heap manager forwards the call to the core heap manager. The heap manager creates look-aside lists automatically when a heap is created, as long as no debugging options are enabled and the heap is expandable.
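As a small, hedged example of that last point: a heap created with a maximum size of zero is growable (expandable), while a heap created with a fixed maximum size is not - and only the expandable heap is a candidate for look-aside lists when no heap debugging options are in effect.

    #include <windows.h>

    int main(void)
    {
        /* dwMaximumSize = 0 creates a growable heap; an expandable heap
           like this is eligible for look-aside lists (assuming no heap
           debugging options are enabled). */
        HANDLE hGrowable = HeapCreate(0, 0x10000, 0);

        /* A non-zero maximum size creates a fixed-size heap, which does
           not get look-aside lists. */
        HANDLE hFixed = HeapCreate(0, 0x10000, 0x100000);

        if (hGrowable) HeapDestroy(hGrowable);
        if (hFixed)    HeapDestroy(hFixed);
        return 0;
    }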

Let's move on to heap fragmentation. Heap fragmentation occurs when the available memory is broken into small, non-contiguous blocks. When this happens, an allocation can fail even though the heap contains enough total free memory to satisfy the request, simply because no single block is large enough. For applications with low memory usage, the standard heap is adequate and allocations will not fail on account of heap fragmentation. However, if an application allocates memory frequently using a variety of allocation sizes, those allocations may fail due to heap fragmentation.

So looking at the first diagram, we can see that all of the allocated and free memory is placed together in contiguous blocks. New memory allocations are satisfied from one big pool of unallocated memory.

Over time, as the program runs, some memory will be freed and the overall heap picture will be changed as shown in the next diagram.

Here, we can see that the allocations for blocks A, C and E have all been released. However, this is causing some fragmentation - unallocated and allocated regions are becoming mixed. Over time this can result in performance degradation and possibly even application failure in a worst-case scenario.

Now we see the possible consequences of heap fragmentation. Even though we have sufficient overall heap space, we do not have a contiguous block large enough to handle the new allocation request, as shown in the following diagram.
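To make the diagrams concrete, here is a hypothetical sketch of the same pattern using the Win32 heap APIs: six blocks (A through F) are allocated, then A, C and E are freed. The freed space is now split into three separate holes, so a request larger than any single hole cannot be carved out of that space even though the total amount freed would be sufficient - the heap has to grow (or, for a fixed-size heap, the allocation fails).

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE hHeap = HeapCreate(0, 0x1000, 0);   /* growable private heap */
        LPVOID blocks[6];                          /* A, B, C, D, E, F */
        LPVOID big;
        int i;

        /* Allocate six blocks back to back, similar to the first diagram. */
        for (i = 0; i < 6; i++)
            blocks[i] = HeapAlloc(hHeap, 0, 512);

        /* Free A, C and E - the free space is now three separate 512-byte
           holes rather than one contiguous 1536-byte region. */
        HeapFree(hHeap, 0, blocks[0]);
        HeapFree(hHeap, 0, blocks[2]);
        HeapFree(hHeap, 0, blocks[4]);

        /* A 1536-byte request cannot fit in any of those holes; the heap
           manager has to find or commit a new contiguous region instead. */
        big = HeapAlloc(hHeap, 0, 1536);
        printf("1536-byte allocation returned %p\n", big);

        HeapDestroy(hHeap);
        return 0;
    }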


Windows XP and Windows Server 2003 introduced the low-fragmentation heap (LFH). Application developers can opt in to the LFH within their applications. The LFH avoids fragmentation by managing all allocated blocks in 128 predetermined block-size ranges, called buckets. Whenever an application needs to allocate memory from the heap, the LFH chooses the bucket that can allocate the smallest block large enough to contain the requested size. The smallest block that can be allocated is 8 bytes. The MSDN article on the low-fragmentation heap lists the range of each bucket.
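An application opts in to the LFH one heap at a time using HeapSetInformation with the documented HeapCompatibilityInformation class and a value of 2. The sketch below is a minimal example of that call; note that the LFH cannot be enabled on a heap created with HEAP_NO_SERIALIZE or while heap debugging options are active.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Create a growable private heap. */
        HANDLE hHeap = HeapCreate(0, 0, 0);

        /* HeapCompatibilityInformation with a value of 2 requests the
           low-fragmentation heap for this heap. */
        ULONG HeapInformation = 2;
        BOOL ok = HeapSetInformation(hHeap,
                                     HeapCompatibilityInformation,
                                     &HeapInformation,
                                     sizeof(HeapInformation));

        printf("LFH %s\n", ok ? "enabled" : "not enabled");

        HeapDestroy(hHeap);
        return 0;
    }

On Windows Vista and later the heap manager enables the LFH automatically, so this explicit call is mainly relevant on Windows XP and Windows Server 2003.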

OK - let's take a look at heap corruption and using Pageheap.exe for troubleshooting. First, let's define what heap corruption is. Simply put, heap corruption occurs whenever misbehaving code corrupts the data in the heap. One of the most common causes is writing beyond the bounds of a memory allocation, which can corrupt the memory directly before and/or after the allocated bytes. Heap corruption may also occur when an application writes to a block of memory that has already been freed. When this happens and the application crashes, one of the first things we do is look at the dump file.
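To make those two failure modes concrete, here is a deliberately broken sketch - do not copy this pattern - showing a write past the end of an allocation and a write to a block that has already been freed. Both silently corrupt heap data, and the process typically crashes later, often inside an unrelated heap call.

    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        HANDLE hHeap = GetProcessHeap();

        /* 1. Buffer overrun: 16 bytes are allocated, but 32 are written.
           The extra bytes land on heap metadata or a neighbouring block. */
        char *p = (char *)HeapAlloc(hHeap, 0, 16);
        memset(p, 'A', 32);              /* corrupts memory after the block */

        /* 2. Use after free: the block is freed, then written to anyway.
           The heap may already have reused this memory for something else. */
        char *q = (char *)HeapAlloc(hHeap, 0, 64);
        HeapFree(hHeap, 0, q);
        strcpy(q, "writing to freed memory");   /* corrupts a freed block */

        /* The damage may not be noticed until a later HeapAlloc or HeapFree
           call touches the corrupted structures and the process crashes. */
        HeapFree(hHeap, 0, p);
        return 0;
    }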

If you remember our post on Basic Debugging of an Application Crash, you can run the !analyze -v command to have the debugger perform a verbose analysis of the dump. When the crash is caused by heap corruption, the analysis output will typically point to the heap as the source of the problem.
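To track the corruption back to the offending code, page heap verification can be turned on for the process. On current systems this is usually done with GFlags, which ships with the Debugging Tools for Windows and includes the Pageheap functionality; the image name myapp.exe below is just a placeholder.

    gflags.exe /p /enable myapp.exe /full
    gflags.exe /p /disable myapp.exe

With full page heap enabled, each allocation is placed so that an overrun touches an inaccessible guard page, which makes the process fault at the exact instruction that corrupts the heap rather than at some later heap call.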

