malloc & sysmalloc
Allocation Order Summary
(No checks are explained in this summary and some cases have been omitted for brevity)
__libc_malloc
Tries to get a chunk from the tcache; if there isn't one, it calls _int_malloc
_int_malloc
Tries to generate the arena if there isn't any
If any fast bin chunk of the correct size, use it
Fill tcache with other fast chunks
If any small bin chunk of the correct size, use it
Fill tcache with other chunks of that size
If the requested size isn't for small bins, consolidate fast bin into unsorted bin
Check the unsorted bin, use the first chunk with enough space
If the found chunk is bigger, divide it to return a part and add the remainder back to the unsorted bin
If a chunk is of the same size as the size requested, use it to fill the tcache instead of returning it (until the tcache is full, then return the next one)
For each chunk of smaller size checked, put it in its respective small or large bin
Check the large bin in the index of the requested size
Start looking from the first chunk that is bigger than the requested size; if any is found, return it and add the remainder to the unsorted bin
Check the large bins from the next indexes until the end
From the next bigger index check for any chunk, divide the first found chunk to use it for the requested size and add the remainder to the unsorted bin
If nothing is found in the previous bins, get a chunk from the top chunk
If the top chunk wasn't big enough, enlarge it with sysmalloc
__libc_malloc
The malloc function actually calls __libc_malloc. This function will check the tcache to see if there is any available chunk of the desired size. If there is, it'll use it; if not, it'll check whether it's a single thread, in which case it'll call _int_malloc in the main arena, and otherwise it'll call _int_malloc in the thread's arena.
Note how it'll always tag the returned pointer with tag_new_usable
, from the code:
void *tag_new_usable (void *ptr)
Allocate a new random color and use it to color the user region of
a chunk; this may include data from the subsequent chunk's header
if tagging is sufficiently fine grained. Returns PTR suitably
recolored for accessing the memory there.
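The tcache fast path described above can be observed from userland. A minimal runnable demo (glibc 2.26 or later behaviour, not guaranteed by the C standard: the second request of the same size is normally served from the tcache bin filled by the free, so the same address comes back):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x48);
    printf("first  allocation: %p\n", a);

    free(a);                 /* the chunk lands in the tcache bin for this size */

    void *b = malloc(0x48);  /* __libc_malloc pops it from the tcache before
                                ever calling _int_malloc */
    printf("second allocation: %p (same address when served from the tcache)\n", b);

    free(b);
    return 0;
}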
_int_malloc
This is the function that allocates memory using the other bins and top chunk.
Start
It starts by defining some variables and getting the real size that the requested memory space needs to have:
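As a rough illustration of that size normalization, here is a simplified re-implementation assuming a 64-bit build (SIZE_SZ = 8, alignment 0x10, MINSIZE 0x20); the real logic is the request2size/checked_request2size macros in malloc/malloc.c, which also handle overflow:

#include <stdio.h>

#define SIZE_SZ            sizeof(size_t)       /* 8 on 64-bit                  */
#define MALLOC_ALIGN_MASK  (2 * SIZE_SZ - 1)    /* alignment 0x10 -> mask 0xf   */
#define MINSIZE            (4 * SIZE_SZ)        /* smallest possible chunk size */

static size_t request2size(size_t req)
{
    if (req + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)
        return MINSIZE;
    return (req + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK;
}

int main(void)
{
    size_t reqs[] = { 0, 8, 0x18, 0x19, 100 };
    for (unsigned i = 0; i < sizeof reqs / sizeof reqs[0]; i++)
        printf("malloc(%#zx) -> real chunk size %#zx\n", reqs[i], request2size(reqs[i]));
    return 0;
}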
Arena
In the unlikely event that there aren't usable arenas, it uses sysmalloc to get a chunk from mmap:
Fast Bin
If the needed size is inside the Fast Bins sizes, try to use a chunk from the fast bin. Basically, based on the size, it'll find the fast bin index where valid chunks should be located, and if any, it'll return one of those. Moreover, if tcache is enabled, it'll fill the tcache bin of that size with fast bins.
While performing these actions, some security checks are executed here:
If the chunk is misaligned:
malloc(): unaligned fastbin chunk detected 2
If the forward chunk is misaligned:
malloc(): unaligned fastbin chunk detected
If the returned chunk has a size that isn't correct because of its index in the fast bin:
malloc(): memory corruption (fast)
If any chunk used to fill the tcache is misaligned:
malloc(): unaligned fastbin chunk detected 3
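A rough sketch of how the index is derived from the size and of the size check that produces "malloc(): memory corruption (fast)" (values assume 64-bit sizes; the real macro is fastbin_index() in malloc/malloc.c):

#include <stdio.h>

/* Each fast bin holds exactly one chunk size: 0x20, 0x30, ..., 0x80 by default */
static unsigned fastbin_index(size_t chunk_size)
{
    return (unsigned)(chunk_size >> 4) - 2;     /* >> 3 on 32-bit builds */
}

int main(void)
{
    unsigned idx = fastbin_index(0x40);         /* the request maps to index 2 */
    size_t victim_size = 0x50;                  /* pretend a corrupted chunk   */

    /* _int_malloc re-derives the index from the returned chunk's size and
       aborts if it doesn't match the bin it came from */
    if (fastbin_index(victim_size) != idx)
        puts("malloc(): memory corruption (fast)");

    for (size_t sz = 0x20; sz <= 0x80; sz += 0x10)
        printf("chunk size %#zx -> fastbin index %u\n", sz, fastbin_index(sz));
    return 0;
}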
Small Bin
As indicated in a comment, small bins hold one size per index, therefore checking if a valid chunk is available is super fast, so after fast bins, small bins are checked.
The first check is to find out if the requested size could be inside a small bin. In that case, get the corresponding index inside the small bin and see if there is any available chunk.
Then, a security check is performed checking if victim->bk->fd == victim, to see that both chunks are correctly linked.
In that case, the chunk gets the inuse bit, the doubly linked list is fixed so this chunk disappears from it (as it's going to be used), and the non main arena bit is set if needed.
Finally, fill the tcache index of the requested size with other chunks inside the small bin (if any).
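A minimal sketch of that unlink and its bk->fd check, using a stripped-down chunk struct rather than glibc's real malloc_chunk (the abort message matches the one modern glibc uses for this check):

#include <stdio.h>
#include <stdlib.h>

struct chunk { size_t size; struct chunk *fd; struct chunk *bk; };

/* Take the last chunk out of a small bin the way _int_malloc does: victim is
   bin->bk, and victim->bk->fd must still point back to victim. */
static struct chunk *smallbin_pop(struct chunk *bin)
{
    struct chunk *victim = bin->bk;
    if (victim == bin)
        return NULL;                              /* bin is empty */

    struct chunk *bck = victim->bk;
    if (bck->fd != victim) {
        fprintf(stderr, "malloc(): smallbin double linked list corrupted\n");
        abort();
    }
    bin->bk = bck;                                /* unlink victim */
    bck->fd = bin;
    return victim;
}

int main(void)
{
    struct chunk bin = { 0, &bin, &bin };         /* empty circular bin head   */
    struct chunk c   = { 0x20, &bin, &bin };      /* one 0x20 chunk in the bin */
    bin.fd = bin.bk = &c;

    struct chunk *v = smallbin_pop(&bin);
    printf("popped chunk of size %#zx\n", v->size);
    return 0;
}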
malloc_consolidate
If the requested size isn't in small bin range, it's a large request, and in this case malloc_consolidate is called to avoid memory fragmentation.
The malloc consolidate function basically removes chunks from the fast bins and places them into the unsorted bin. After the next malloc these chunks will be organized in their respective small/large bins.
Note that while removing these chunks, if they are found next to previous or next chunks that aren't in use, they will be unlinked and merged before placing the final chunk in the unsorted bin.
For each fast bin chunk a couple of security checks are performed:
If the chunk is unaligned trigger:
malloc_consolidate(): unaligned fastbin chunk detected
If the chunk has a size different from the one it should have because of the index it's in:
malloc_consolidate(): invalid chunk size
If the previous chunk is not in use and its size differs from the one indicated by prev_size: corrupted size vs. prev_size in fastbins
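To picture that last check: in the chunk layout, the prev_size field of a chunk must match the size field of the free chunk physically before it. A toy model over a flat buffer (just the header arithmetic, not glibc's real structures):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char heap[0x100] = { 0 };
    size_t prev_chunk_size = 0x40;

    /* previous (free) chunk at offset 0: its size field lives at offset 8  */
    memcpy(heap + 8, &prev_chunk_size, sizeof prev_chunk_size);
    /* chunk p starts at offset 0x40: its prev_size field is its first word */
    memcpy(heap + 0x40, &prev_chunk_size, sizeof prev_chunk_size);

    size_t size_of_prev, prev_size_of_p;
    memcpy(&size_of_prev,   heap + 8,    sizeof size_of_prev);
    memcpy(&prev_size_of_p, heap + 0x40, sizeof prev_size_of_p);

    puts(size_of_prev == prev_size_of_p
         ? "sizes agree: backward consolidation can proceed"
         : "corrupted size vs. prev_size in fastbins");
    return 0;
}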
Unsorted bin
It's time to check the unsorted bin for a potential valid chunk to use.
Start
This starts with a big for loop that will be traversing the unsorted bin in the bk direction until it reaches the end (the arena struct) with while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
Moreover, some security checks are performed every time a new chunk is considered:
If the chunk size is weird (too small or too big):
malloc(): invalid size (unsorted)
If the next chunk size is weird (too small or too big):
malloc(): invalid next size (unsorted)
If the previous size indicated by the next chunk differs from the size of the chunk:
malloc(): mismatching next->prev_size (unsorted)
If victim->bk->fd != victim or victim->fd != av (arena): malloc(): unsorted double linked list corrupted
As we are always checking the last one, its fd should always be pointing to the arena struct.
If the next chunk isn't indicating that the previous is in use:
malloc(): invalid next->prev_inuse (unsorted)
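A toy version of that traversal and of the double-linked-list check, again with a simplified chunk struct (the real loop in _int_malloc also performs all the size checks listed above):

#include <stdio.h>
#include <stdlib.h>

struct chunk { size_t size; struct chunk *fd; struct chunk *bk; };

int main(void)
{
    /* head plays the role of unsorted_chunks(av); two chunks linked into it */
    struct chunk head = { 0, &head, &head };
    struct chunk a = { 0x90, &head, NULL }, b = { 0x110, NULL, NULL };

    /* circular doubly linked list: head <-> b <-> a <-> head */
    head.fd = &b;  head.bk = &a;  a.bk = &b;  b.fd = &a;  b.bk = &head;

    struct chunk *victim;
    while ((victim = head.bk) != &head)          /* always take the last one */
    {
        struct chunk *bck = victim->bk;
        if (bck->fd != victim || victim->fd != &head) {
            puts("malloc(): unsorted double linked list corrupted");
            exit(1);
        }

        printf("considering unsorted chunk of size %#zx\n", victim->size);

        /* remove victim from the unsorted bin (it would now be used, split,
           stashed in the tcache, or sorted into a small/large bin) */
        head.bk = bck;
        bck->fd = &head;
    }
    return 0;
}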
if in_smallbin_range
If the chunk is bigger than the requested size, use it, put the rest of the chunk space back into the unsorted list and update the last_remainder with it.
If this was successful, return the chunk and it's over; if not, continue executing the function...
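The split itself is plain size arithmetic: the remainder keeps its own header, goes back to the unsorted bin and becomes av->last_remainder. Note that glibc only takes this shortcut when the chunk is the current last_remainder and the only entry in the unsorted bin. A toy calculation (64-bit sizes, MINSIZE 0x20 assumed):

#include <stdio.h>

int main(void)
{
    size_t victim_size = 0x200;   /* unsorted chunk being considered      */
    size_t nb          = 0x40;    /* normalized request size (small bin)  */
    size_t MINSIZE     = 0x20;    /* smallest chunk that can stand alone  */

    if (victim_size > nb + MINSIZE) {
        size_t remainder_size = victim_size - nb;
        printf("return chunk of %#zx, last_remainder of %#zx goes back "
               "to the unsorted bin\n", nb, remainder_size);
    } else {
        puts("too small to split: the whole chunk would be used or sorted");
    }
    return 0;
}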
if equal size
Continue removing the chunk from the bin, in case the requested size is exactly the one of the chunk:
If the tcache is not filled, add it to the tcache and continue indicating that there is a tcache chunk that could be used
If the tcache is full, just use the chunk, returning it
If chunk not returned or added to tcache, continue with the code...
place chunk in a bin
Store the checked chunk in the small bin or in the large bin according to the size of the chunk (keeping the large bin properly organized).
There are security checks performed to make sure both large bin doubly linked lists aren't corrupted:
If fwd->bk_nextsize->fd_nextsize != fwd: malloc(): largebin double linked list corrupted (nextsize)
If fwd->bk->fd != fwd: malloc(): largebin double linked list corrupted (bk)
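A sketch of those two integrity checks on a simplified chunk struct that carries the extra nextsize pointers used by large bins (not glibc's real layout, just enough to show the invariants):

#include <stdio.h>

struct chunk {
    size_t size;
    struct chunk *fd, *bk;                   /* regular doubly linked list      */
    struct chunk *fd_nextsize, *bk_nextsize; /* skip list over distinct sizes   */
};

/* Run the two checks _int_malloc performs before inserting a sorted chunk in
   front of fwd. Returns 0 if the bin looks consistent. */
static int largebin_checks(struct chunk *fwd)
{
    if (fwd->bk_nextsize->fd_nextsize != fwd) {
        puts("malloc(): largebin double linked list corrupted (nextsize)");
        return 1;
    }
    if (fwd->bk->fd != fwd) {
        puts("malloc(): largebin double linked list corrupted (bk)");
        return 1;
    }
    return 0;
}

int main(void)
{
    /* a single-entry large bin where every pointer loops back to the chunk */
    struct chunk c = { 0x420, &c, &c, &c, &c };
    printf("consistent bin: %s\n", largebin_checks(&c) ? "no" : "yes");

    /* simulate a corrupted nextsize link pointing at an unrelated chunk */
    struct chunk evil = { 0x500, &evil, &evil, &evil, &evil };
    c.bk_nextsize = &evil;
    largebin_checks(&c);
    return 0;
}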
_int_malloc limits
At this point, if some chunk was stored in the tcache that can be used and the limit is reached, just return a tcache chunk.
Moreover, if MAX_ITERS is reached, break from the for loop and get a chunk in a different way (top chunk).
If return_cached was set, just return a chunk from the tcache to avoid larger searches.
If the limits aren't reached, continue with the code...
Large Bin (by index)
If the request is large (not in small bin range) and we haven't yet returned any chunk, get the index of the requested size in the large bins, check that the bin isn't empty and that its biggest chunk is bigger than the requested size, and in that case find the smallest chunk that can be used for the requested size.
If the remainder space from the finally used chunk can be a new chunk, add it to the unsorted bin and update the last_remainder.
A security check is made when adding the remainder to the unsorted bin:
If bck->fd->bk != bck: malloc(): corrupted unsorted chunks
If no suitable chunk is found for this, continue
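A sketch of the best-fit search described above, over a toy list sorted from largest to smallest the way glibc keeps a large bin (the real code walks the bk_nextsize skip list and also performs the unsorted-bin checks when queuing the remainder):

#include <stdio.h>

int main(void)
{
    /* one large bin, sorted from largest to smallest like glibc keeps it */
    size_t bin[]   = { 0x7a0, 0x540, 0x460, 0x410 };
    size_t nb      = 0x420;                /* normalized request size      */
    size_t MINSIZE = 0x20;
    size_t best    = 0;

    /* best fit: the smallest chunk that still satisfies the request */
    for (unsigned i = 0; i < sizeof bin / sizeof bin[0]; i++)
        if (bin[i] >= nb)
            best = bin[i];

    if (best == 0) {
        puts("no fit in this bin: move on to the next bigger large bin");
    } else if (best - nb >= MINSIZE) {
        printf("use chunk of %#zx, remainder %#zx goes to the unsorted bin\n",
               best, best - nb);
    } else {
        printf("use chunk of %#zx whole (remainder too small to split)\n", best);
    }
    return 0;
}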
Large Bin (next bigger)
If in the exact large bin there wasn't any chunk that could be used, start looping through all the next large bins (starting with the immediately larger one) until a chunk is found (if any).
The remainder of the split chunk is added to the unsorted bin, last_remainder is updated and the same security check is performed:
If bck->fd->bk != bck: malloc(): corrupted unsorted chunks2
Top Chunk
At this point, it's time to get a new chunk from the Top chunk (if big enough).
It starts with a security check making sure that the size of the top chunk is not too big (corrupted):
If chunksize(av->top) > av->system_mem: malloc(): corrupted top size
Then, it'll use the top chunk space if it's large enough to create a chunk of the requested size.
If not, if there are fast chunks, consolidate them and try again.
Finally, if there isn't enough space, use sysmalloc to allocate enough size.
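A toy version of that decision chain (plain size arithmetic, no real arena; MINSIZE assumed to be 0x20 as on 64-bit glibc):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t top_size   = 0x20d00;   /* pretend chunksize(av->top)            */
    size_t system_mem = 0x21000;   /* pretend av->system_mem                */
    size_t nb         = 0x110;     /* normalized request size               */
    size_t MINSIZE    = 0x20;

    if (top_size > system_mem) {               /* size field of top looks corrupted */
        puts("malloc(): corrupted top size");
        exit(1);
    }
    if (top_size >= nb + MINSIZE)
        printf("carve %#zx from top, new top size %#zx\n", nb, top_size - nb);
    else
        puts("top too small: consolidate fast bins and retry, else sysmalloc");
    return 0;
}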
sysmalloc
sysmalloc start
If the arena is null or the requested size is too big (and mmaps are still permitted), use sysmalloc_mmap to allocate space and return it.
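This can be observed directly: requests above the mmap threshold (0x20000 bytes by default, dynamically adjusted) come back from their own mmapped region, with the IS_MMAPPED bit (0x2) set in the chunk size header. Reading the header this way relies on glibc's chunk layout and is undefined behaviour as far as the C standard is concerned, so treat it purely as an illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t *small = malloc(0x100);        /* served from the heap / arena */
    size_t *big   = malloc(0x200000);     /* above the mmap threshold     */

    /* the chunk size sits right before the returned pointer; bit 1 (0x2)
       is IS_MMAPPED in glibc's chunk header */
    printf("small chunk size field: %#zx (IS_MMAPPED=%zu)\n",
           small[-1], (small[-1] >> 1) & 1);
    printf("big   chunk size field: %#zx (IS_MMAPPED=%zu)\n",
           big[-1],   (big[-1] >> 1) & 1);

    free(small);
    free(big);
    return 0;
}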
sysmalloc checks
It starts by getting old top chunk information and checking that some of the following conditions are true:
The old heap size is 0 (new heap)
The size of the previous heap is greater than MINSIZE and the old top is in use
The heap is aligned to page size (0x1000 so the lower 12 bits need to be 0)
Then it also checks that:
The old size doesn't have enough space to create a chunk for the requested size
sysmalloc not main arena
It'll first try to extend the previous heap for this arena. If that's not possible, it'll try to allocate a new heap and update the pointers to be able to use it.
Finally, if that didn't work, try calling sysmalloc_mmap.
sysmalloc main arena
It starts by calculating the amount of memory needed. It'll first request contiguous memory, so in this case it'll be possible to reuse the old unused memory. Some alignment operations are also performed.
sysmalloc main arena previous error 1
If the previous call returned MORECORE_FAILURE, try again to allocate memory using sysmalloc_mmap_fallback.
sysmalloc main arena continue
If the previous call didn't return MORECORE_FAILURE and it worked, create some alignments:
sysmalloc finale
Finish the allocation by updating the arena information:
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2921C3-L2943C12

  if ((unsigned long) av->system_mem > (unsigned long) (av->max_system_mem))
    av->max_system_mem = av->system_mem;
  check_malloc_state (av);

  /* finally, do the allocation */
  p = av->top;
  size = chunksize (p);

  /* check that one of the above allocation paths succeeded */
  if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
    {
      remainder_size = size - nb;
      remainder = chunk_at_offset (p, nb);
      av->top = remainder;
      set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
      set_head (remainder, remainder_size | PREV_INUSE);
      check_malloced_chunk (av, p, nb);
      return chunk2mem (p);
    }

  /* catch all failure paths */
  __set_errno (ENOMEM);
  return 0;
sysmalloc_mmap