source: SVN/cambria/redboot/packages/services/memalloc/common/current/doc/dlmalloc/dlmalloc-2.6.4.c @ 1

1/* ---------- To make a malloc.h, start cutting here ------------ */
2
3/*
4  A version of malloc/free/realloc written by Doug Lea and released to the
5  public domain.  Send questions/comments/complaints/performance data
6  to dl@cs.oswego.edu
7
8* VERSION 2.6.4  Thu Nov 28 07:54:55 1996  Doug Lea  (dl at gee)
9 
10   Note: There may be an updated version of this malloc obtainable at
11           ftp://g.oswego.edu/pub/misc/malloc.c
12         Check before installing!
13
14* Why use this malloc?
15
16  This is not the fastest, most space-conserving, most portable, or
17  most tunable malloc ever written. However it is among the fastest
18  while also being among the most space-conserving, portable and tunable.
19  Consistent balance across these factors results in a good general-purpose
20  allocator. For a high-level description, see
21     http://g.oswego.edu/dl/html/malloc.html
22
23* Synopsis of public routines
24
25  (Much fuller descriptions are contained in the program documentation below.)
26
27  malloc(size_t n);
28     Return a pointer to a newly allocated chunk of at least n bytes, or null
29     if no space is available.
30  free(Void_t* p);
31     Release the chunk of memory pointed to by p, or no effect if p is null.
32  realloc(Void_t* p, size_t n);
33     Return a pointer to a chunk of size n that contains the same data
34     as does chunk p up to the minimum of (n, p's size) bytes, or null
35     if no space is available. The returned pointer may or may not be
36     the same as p. If p is null, equivalent to malloc.  Unless the
37     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
38     size argument of zero (re)allocates a minimum-sized chunk.
39  memalign(size_t alignment, size_t n);
40     Return a pointer to a newly allocated chunk of n bytes, aligned
41     in accord with the alignment argument, which must be a power of
42     two.
43  valloc(size_t n);
44     Equivalent to memalign(pagesize, n), where pagesize is the page
45     size of the system (or as near to this as can be figured out from
46     all the includes/defines below.)
47  pvalloc(size_t n);
48     Equivalent to valloc(minimum-page-that-holds(n)), that is,
49     round up n to nearest pagesize.
50  calloc(size_t unit, size_t quantity);
51     Returns a pointer to quantity * unit bytes, with all locations
52     set to zero.
53  cfree(Void_t* p);
54     Equivalent to free(p).
55  malloc_trim(size_t pad);
56     Release all but pad bytes of freed top-most memory back
57     to the system. Return 1 if successful, else 0.
58  malloc_usable_size(Void_t* p);
59     Report the number of usable allocated bytes associated with allocated
60     chunk p. This may or may not report more bytes than were requested,
61     due to alignment and minimum size constraints.
62  malloc_stats();
63     Prints brief summary statistics on stderr.
64  mallinfo()
65     Returns (by copy) a struct containing various summary statistics.
66  mallopt(int parameter_number, int parameter_value)
67     Changes one of the tunable parameters described below. Returns
68     1 if successful in changing the parameter, else 0.
69
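  As a quick illustration (a hypothetical sketch, not part of the
  original documentation), a typical calling sequence is:

     char* p = (char*) malloc(100);      at least 100 usable bytes
     p = (char*) realloc(p, 200);        grow; the pointer may move
     Void_t* a = memalign(64, 100);      start address multiple of 64
     free(p); free(a);

  Error handling is elided here; each allocation call should be
  checked against a null return before use.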
70* Vital statistics:
71
72  Alignment:                            8-byte
73       8 byte alignment is currently hardwired into the design.  This
74       seems to suffice for all current machines and C compilers.
75
76  Assumed pointer representation:       4 or 8 bytes
77       Code for 8-byte pointers is untested by me but has worked
78       reliably by Wolfram Gloger, who contributed most of the
79       changes supporting this.
80
81  Assumed size_t  representation:       4 or 8 bytes
82       Note that size_t is allowed to be 4 bytes even if pointers are 8.       
83
84  Minimum overhead per allocated chunk: 4 or 8 bytes
85       Each malloced chunk has a hidden overhead of 4 bytes holding size
86       and status information. 
87
88  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
89                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
90                                     
91       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
92       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
93       needed; 4 (8) for a trailing size field
94       and 8 (16) bytes for free list pointers. Thus, the minimum
95       allocatable size is 16/24/32 bytes.
96
97       Even a request for zero bytes (i.e., malloc(0)) returns a
98       pointer to something of the minimum allocatable size.
99
100  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
101                          8-byte size_t: 2^63 - 16 bytes
102
103       It is assumed that (possibly signed) size_t bit values suffice to
104       represent chunk sizes. `Possibly signed' is due to the fact
105       that `size_t' may be defined on a system as either a signed or
106       an unsigned type. To be conservative, values that would appear
107       as negative numbers are avoided. 
108       Requests for sizes with a negative sign bit will return a
109       minimum-sized chunk.
110
111  Maximum overhead wastage per allocated chunk: normally 15 bytes
112
113       Alignment demands, plus the minimum allocatable size restriction
114       make the normal worst-case wastage 15 bytes (i.e., up to 15
115       more bytes will be allocated than were requested in malloc), with
116       two exceptions:
117         1. Because requests for zero bytes allocate non-zero space,
118            the worst case wastage for a request of zero bytes is 24 bytes.
119         2. For requests >= mmap_threshold that are serviced via
120            mmap(), the worst case wastage is 8 bytes plus the remainder
121            from a system page (the minimal mmap unit); typically 4096 bytes.
122
123* Limitations
124
125    Here are some features that are NOT currently supported
126
127    * No user-definable hooks for callbacks and the like.
128    * No automated mechanism for fully checking that all accesses
129      to malloced memory stay within their bounds.
130    * No support for compaction.
131
132* Synopsis of compile-time options:
133
134    People have reported using previous versions of this malloc on all
135    versions of Unix, sometimes by tweaking some of the defines
136    below. It has been tested most extensively on Solaris and
137    Linux. It is also reported to work on WIN32 platforms.
138    People have also reported adapting this malloc for use in
139    stand-alone embedded systems.
140
141    The implementation is in straight, hand-tuned ANSI C.  Among other
142    consequences, it uses a lot of macros.  Because of this, to be at
143    all usable, this code should be compiled using an optimizing compiler
144    (for example gcc -O2) that can simplify expressions and control
145    paths.
146
147  __STD_C                  (default: derived from C compiler defines)
148     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
149     a C compiler sufficiently close to ANSI to get away with it.
150  DEBUG                    (default: NOT defined)
151     Define to enable debugging. Adds fairly extensive assertion-based
152     checking to help track down memory errors, but noticeably slows down
153     execution.
154  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
155     Define this if you think that realloc(p, 0) should be equivalent
156     to free(p). Otherwise, since malloc returns a unique pointer for
157     malloc(0), so does realloc(p, 0).
158  HAVE_MEMCPY               (default: defined)
159     Define if you are not otherwise using ANSI STD C, but still
160     have memcpy and memset in your C library and want to use them.
161     Otherwise, simple internal versions are supplied.
162  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
163     Define as 1 if you want the C library versions of memset and
164     memcpy called in realloc and calloc (otherwise macro versions are used).
165     At least on some platforms, the simple macro versions usually
166     outperform libc versions.
167  HAVE_MMAP                 (default: defined as 1)
168     Define to non-zero to optionally make malloc() use mmap() to
169     allocate very large blocks. 
170  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
171     Define to non-zero to optionally make realloc() use mremap() to
172     reallocate very large blocks. 
173  malloc_getpagesize        (default: derived from system #includes)
174     Either a constant or routine call returning the system page size.
175  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
176     Optionally define if you are on a system with a /usr/include/malloc.h
177     that declares struct mallinfo. It is not at all necessary to
178     define this even if you do have one, but doing so will ensure consistency.
179  INTERNAL_SIZE_T           (default: size_t)
180     Define to a 32-bit type (probably `unsigned int') if you are on a
181     64-bit machine, yet do not want or need to allow malloc requests of
182     greater than 2^31 to be handled. This saves space, especially for
183     very small chunks.
184  INTERNAL_LINUX_C_LIB      (default: NOT defined)
185     Defined only when compiled as part of Linux libc.
186     Also note that there is some odd internal name-mangling via defines
187     (for example, internally, `malloc' is named `mALLOc') needed
188     when compiling in this case. These look funny but don't otherwise
189     affect anything.
190  WIN32                     (default: undefined)
191     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
192  LACKS_UNISTD_H            (default: undefined)
193     Define this if your system does not have a <unistd.h>.
194  MORECORE                  (default: sbrk)
195     The name of the routine to call to obtain more memory from the system.
196  MORECORE_FAILURE          (default: -1)
197     The value returned upon failure of MORECORE.
198  MORECORE_CLEARS           (default: 1)
199     True (1) if the routine mapped to MORECORE zeroes out memory (which
200     holds for sbrk).
201  DEFAULT_TRIM_THRESHOLD
202  DEFAULT_TOP_PAD       
203  DEFAULT_MMAP_THRESHOLD
204  DEFAULT_MMAP_MAX     
205     Default values of tunable parameters (described in detail below)
206     controlling interaction with host system routines (sbrk, mmap, etc).
207     These values may also be changed dynamically via mallopt(). The
208     preset defaults are those that give best performance for typical
209     programs/systems.
210
211
212*/
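/*
  For example (a hypothetical invocation, not from the original notes),
  a stand-alone embedded build lacking unistd.h and mmap might be
  compiled along the lines of:

      gcc -O2 -DLACKS_UNISTD_H -DHAVE_MMAP=0 -DMORECORE=my_sbrk malloc.c

  where my_sbrk is a user-supplied routine that you declare yourself,
  obeying the same contract as sbrk (returning MORECORE_FAILURE, i.e.
  -1, when no memory is available).
*/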
213
214
215
216
217/* Preliminaries */
218
219#ifndef __STD_C
220#ifdef __STDC__
221#define __STD_C     1
222#else
223#if __cplusplus
224#define __STD_C     1
225#else
226#define __STD_C     0
227#endif /*__cplusplus*/
228#endif /*__STDC__*/
229#endif /*__STD_C*/
230
231#ifndef Void_t
232#if __STD_C
233#define Void_t      void
234#else
235#define Void_t      char
236#endif
237#endif /*Void_t*/
238
239#if __STD_C
240#include <stddef.h>   /* for size_t */
241#else
242#include <sys/types.h>
243#endif
244
245#ifdef __cplusplus
246extern "C" {
247#endif
248
249#include <stdio.h>    /* needed for malloc_stats */
250
251
252/*
253  Compile-time options
254*/
255
256
257/*
258    Debugging:
259
260    Because freed chunks may be overwritten with link fields, this
261    malloc will often die when freed memory is overwritten by user
262    programs.  This can be very effective (albeit in an annoying way)
263    in helping track down dangling pointers.
264
265    If you compile with -DDEBUG, a number of assertion checks are
266    enabled that will catch more memory errors. You probably won't be
267    able to make much sense of the actual assertion errors, but they
268    should help you locate incorrectly overwritten memory.  The
269    checking is fairly extensive, and will slow down execution
270    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
271    attempt to check every non-mmapped allocated and free chunk in the
272    course of computing the summaries. (By nature, mmapped regions
273    cannot be checked very much automatically.)
274
275    Setting DEBUG may also be helpful if you are trying to modify
276    this code. The assertions in the check routines spell out in more
277    detail the assumptions and invariants underlying the algorithms.
278
279*/
280
281#if DEBUG
282#include <assert.h>
283#else
284#define assert(x) ((void)0)
285#endif
286
287
288/*
289  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
290  of chunk sizes. On a 64-bit machine, you can reduce malloc
291  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
292  at the expense of not being able to handle requests greater than
293  2^31. This limitation is hardly ever a concern; you are encouraged
294  to set this. However, the default version is the same as size_t.
295*/
296
297#ifndef INTERNAL_SIZE_T
298#define INTERNAL_SIZE_T size_t
299#endif
300
301/*
302  REALLOC_ZERO_BYTES_FREES should be set if a call to
303  realloc with zero bytes should be the same as a call to free.
304  Some people think it should. Otherwise, since this malloc
305  returns a unique pointer for malloc(0), so does realloc(p, 0).
306*/
307
308
309/*   #define REALLOC_ZERO_BYTES_FREES */
310
311
312/*
313  WIN32 causes an emulation of sbrk to be compiled in.
314  mmap-based options are not currently supported in WIN32.
315*/
316
317/* #define WIN32 */
318#ifdef WIN32
319#define MORECORE wsbrk
320#define HAVE_MMAP 0
321#endif
322
323
324/*
325  HAVE_MEMCPY should be defined if you are not otherwise using
326  ANSI STD C, but still have memcpy and memset in your C library
327  and want to use them in calloc and realloc. Otherwise simple
328  macro versions are defined here.
329
330  USE_MEMCPY should be defined as 1 if you actually want to
331  have memset and memcpy called. People report that the macro
332  versions are often enough faster than libc versions on many
333  systems that it is better to use them.
334
335*/
336
337#define HAVE_MEMCPY
338
339#ifndef USE_MEMCPY
340#ifdef HAVE_MEMCPY
341#define USE_MEMCPY 1
342#else
343#define USE_MEMCPY 0
344#endif
345#endif
346
347#if (__STD_C || defined(HAVE_MEMCPY))
348
349#if __STD_C
350void* memset(void*, int, size_t);
351void* memcpy(void*, const void*, size_t);
352#else
353Void_t* memset();
354Void_t* memcpy();
355#endif
356#endif
357
358#if USE_MEMCPY
359
360/* The following macros are only invoked with (2n+1)-multiples of
361   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
362   for fast inline execution when n is small. */
363
364#define MALLOC_ZERO(charp, nbytes)                                            \
365do {                                                                          \
366  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
367  if(mzsz <= 9*sizeof(mzsz)) {                                                \
368    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
369    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
370                                     *mz++ = 0;                               \
371      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
372                                     *mz++ = 0;                               \
373        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
374                                     *mz++ = 0; }}}                           \
375                                     *mz++ = 0;                               \
376                                     *mz++ = 0;                               \
377                                     *mz   = 0;                               \
378  } else memset((charp), 0, mzsz);                                            \
379} while(0)
380
381#define MALLOC_COPY(dest,src,nbytes)                                          \
382do {                                                                          \
383  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
384  if(mcsz <= 9*sizeof(mcsz)) {                                                \
385    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
386    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
387    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
388                                     *mcdst++ = *mcsrc++;                     \
389      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
390                                     *mcdst++ = *mcsrc++;                     \
391        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
392                                     *mcdst++ = *mcsrc++; }}}                 \
393                                     *mcdst++ = *mcsrc++;                     \
394                                     *mcdst++ = *mcsrc++;                     \
395                                     *mcdst   = *mcsrc  ;                     \
396  } else memcpy(dest, src, mcsz);                                             \
397} while(0)
398
399#else /* !USE_MEMCPY */
400
401/* Use Duff's device for good zeroing/copying performance. */
402
403#define MALLOC_ZERO(charp, nbytes)                                            \
404do {                                                                          \
405  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
406  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
407  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
408  switch (mctmp) {                                                            \
409    case 0: for(;;) { *mzp++ = 0;                                             \
410    case 7:           *mzp++ = 0;                                             \
411    case 6:           *mzp++ = 0;                                             \
412    case 5:           *mzp++ = 0;                                             \
413    case 4:           *mzp++ = 0;                                             \
414    case 3:           *mzp++ = 0;                                             \
415    case 2:           *mzp++ = 0;                                             \
416    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
417  }                                                                           \
418} while(0)
419
420#define MALLOC_COPY(dest,src,nbytes)                                          \
421do {                                                                          \
422  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
423  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
424  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
425  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
426  switch (mctmp) {                                                            \
427    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
428    case 7:           *mcdst++ = *mcsrc++;                                    \
429    case 6:           *mcdst++ = *mcsrc++;                                    \
430    case 5:           *mcdst++ = *mcsrc++;                                    \
431    case 4:           *mcdst++ = *mcsrc++;                                    \
432    case 3:           *mcdst++ = *mcsrc++;                                    \
433    case 2:           *mcdst++ = *mcsrc++;                                    \
434    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
435  }                                                                           \
436} while(0)
437
438#endif
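/*
  Worked example (illustrative, assuming 4-byte INTERNAL_SIZE_T): a call
  MALLOC_COPY(dest, src, 20) copies 5 words.  In the USE_MEMCPY version,
  20 >= 5*sizeof(mcsz) triggers the first pair of inline word copies and
  the unconditional trailing three copies finish the job; in the Duff's
  device version, 20/4 == 5 < 8 selects case 5 directly.  Both versions
  rely on nbytes being an odd multiple of sizeof(INTERNAL_SIZE_T), as
  noted above.
*/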
439
440
441/*
442  Define HAVE_MMAP to optionally make malloc() use mmap() to
443  allocate very large blocks.  These will be returned to the
444  operating system immediately after a free().
445*/
446
447#ifndef HAVE_MMAP
448#define HAVE_MMAP 1
449#endif
450
451/*
452  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
453  large blocks.  This is currently only possible on Linux with
454  kernel versions newer than 1.3.77.
455*/
456
457#ifndef HAVE_MREMAP
458#ifdef INTERNAL_LINUX_C_LIB
459#define HAVE_MREMAP 1
460#else
461#define HAVE_MREMAP 0
462#endif
463#endif
464
465#if HAVE_MMAP
466
467#include <unistd.h>
468#include <fcntl.h>
469#include <sys/mman.h>
470
471#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
472#define MAP_ANONYMOUS MAP_ANON
473#endif
474
475#endif /* HAVE_MMAP */
476
477/*
478  Access to system page size. To the extent possible, this malloc
479  manages memory from the system in page-size units.
480 
481  The following mechanics for getpagesize were adapted from
482  bsd/gnu getpagesize.h
483*/
484
485#ifndef LACKS_UNISTD_H
486#  include <unistd.h>
487#endif
488
489#ifndef malloc_getpagesize
490#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
491#    ifndef _SC_PAGE_SIZE
492#      define _SC_PAGE_SIZE _SC_PAGESIZE
493#    endif
494#  endif
495#  ifdef _SC_PAGE_SIZE
496#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
497#  else
498#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
499       extern size_t getpagesize();
500#      define malloc_getpagesize getpagesize()
501#    else
502#      include <sys/param.h>
503#      ifdef EXEC_PAGESIZE
504#        define malloc_getpagesize EXEC_PAGESIZE
505#      else
506#        ifdef NBPG
507#          ifndef CLSIZE
508#            define malloc_getpagesize NBPG
509#          else
510#            define malloc_getpagesize (NBPG * CLSIZE)
511#          endif
512#        else
513#          ifdef NBPC
514#            define malloc_getpagesize NBPC
515#          else
516#            ifdef PAGESIZE
517#              define malloc_getpagesize PAGESIZE
518#            else
519#              define malloc_getpagesize (4096) /* just guess */
520#            endif
521#          endif
522#        endif
523#      endif
524#    endif
525#  endif
526#endif
527
528
529
530/*
531
532  This version of malloc supports the standard SVID/XPG mallinfo
533  routine that returns a struct containing the same kind of
534  information you can get from malloc_stats. It should work on
535  any SVID/XPG compliant system that has a /usr/include/malloc.h
536  defining struct mallinfo. (If you'd like to install such a thing
537  yourself, cut out the preliminary declarations as described above
538  and below and save them in a malloc.h file. But there's no
539  compelling reason to bother to do this.)
540
541  The main declaration needed is the mallinfo struct that is returned
542  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
543  bunch of fields, most of which are not even meaningful in this
544  version of malloc. Some of these fields are instead filled by
545  mallinfo() with other numbers that might possibly be of interest.
546
547  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
548  /usr/include/malloc.h file that includes a declaration of struct
549  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
550  version is declared below.  These must be precisely the same for
551  mallinfo() to work.
552
553*/
554
555/* #define HAVE_USR_INCLUDE_MALLOC_H */
556
557#if HAVE_USR_INCLUDE_MALLOC_H
558#include "/usr/include/malloc.h"
559#else
560
561/* SVID2/XPG mallinfo structure */
562
563struct mallinfo {
564  int arena;    /* total space allocated from system */
565  int ordblks;  /* number of non-inuse chunks */
566  int smblks;   /* unused -- always zero */
567  int hblks;    /* number of mmapped regions */
568  int hblkhd;   /* total space in mmapped regions */
569  int usmblks;  /* unused -- always zero */
570  int fsmblks;  /* unused -- always zero */
571  int uordblks; /* total allocated space */
572  int fordblks; /* total non-inuse space */
573  int keepcost; /* top-most, releasable (via malloc_trim) space */
574};     
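/*
  For illustration (a hypothetical sketch, not part of the original
  header), a caller might inspect these fields as:

      struct mallinfo mi = mallinfo();
      fprintf(stderr, "arena=%d inuse=%d free=%d\n",
              mi.arena, mi.uordblks, mi.fordblks);
*/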
575
576/* SVID2/XPG mallopt options */
577
578#define M_MXFAST  1    /* UNUSED in this malloc */
579#define M_NLBLKS  2    /* UNUSED in this malloc */
580#define M_GRAIN   3    /* UNUSED in this malloc */
581#define M_KEEP    4    /* UNUSED in this malloc */
582
583#endif
584
585/* mallopt options that actually do something */
586
587#define M_TRIM_THRESHOLD    -1
588#define M_TOP_PAD           -2
589#define M_MMAP_THRESHOLD    -3
590#define M_MMAP_MAX          -4
591
592
593
594#ifndef DEFAULT_TRIM_THRESHOLD
595#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
596#endif
597
598/*
599    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
600      to keep before releasing via malloc_trim in free().
601
602      Automatic trimming is mainly useful in long-lived programs.
603      Because trimming via sbrk can be slow on some systems, and can
604      sometimes be wasteful (in cases where programs immediately
605      afterward allocate more large chunks), the value should be high
606      enough so that your overall system performance would improve by
607      releasing. 
608
609      The trim threshold and the mmap control parameters (see below)
610      can be traded off with one another. Trimming and mmapping are
611      two different ways of releasing unused memory back to the
612      system. Between these two, it is often possible to keep
613      system-level demands of a long-lived program down to a bare
614      minimum. For example, in one test suite of sessions measuring
615      the XF86 X server on Linux, using a trim threshold of 128K and a
616      mmap threshold of 192K led to near-minimal long term resource
617      consumption. 
618
619      If you are using this malloc in a long-lived program, it should
620      pay to experiment with these values.  As a rough guide, you
621      might set it to a value close to the average size of a process
622      (program) running on your system.  Releasing this much memory
623      would allow such a process to run in memory.  Generally, it's
624      worth it to tune for trimming rather than memory mapping when a
625      program undergoes phases where several large chunks are
626      allocated and released in ways that can reuse each other's
627      storage, perhaps mixed with phases where there are no such
628      chunks at all.  And in well-behaved long-lived programs,
629      controlling release of large blocks via trimming versus mapping
630      is usually faster.
631
632      However, in most programs, these parameters serve mainly as
633      protection against the system-level effects of carrying around
634      massive amounts of unneeded memory. Since frequent calls to
635      sbrk, mmap, and munmap otherwise degrade performance, the default
636      parameters are set to relatively high values that serve only as
637      safeguards.
638
639      The default trim value is high enough to cause trimming only in
640      fairly extreme (by current memory consumption standards) cases.
641      It must be greater than page size to have any useful effect.  To
642      disable trimming completely, you can set it to (unsigned long)(-1).
643
644
645*/
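/*
  For example (illustrative calls, following the description above), a
  long-lived program could raise the threshold or disable trimming:

      mallopt(M_TRIM_THRESHOLD, 256 * 1024);           trim less often
      mallopt(M_TRIM_THRESHOLD, (unsigned long)(-1));  never trim
*/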
646
647
648#ifndef DEFAULT_TOP_PAD
649#define DEFAULT_TOP_PAD        (0)
650#endif
651
652/*
653    M_TOP_PAD is the amount of extra `padding' space to allocate or
654      retain whenever sbrk is called. It is used in two ways internally:
655
656      * When sbrk is called to extend the top of the arena to satisfy
657        a new malloc request, this much padding is added to the sbrk
658        request.
659
660      * When malloc_trim is called automatically from free(),
661        it is used as the `pad' argument.
662
663      In both cases, the actual amount of padding is rounded
664      so that the end of the arena is always a system page boundary.
665
666      The main reason for using padding is to avoid calling sbrk so
667      often. Having even a small pad greatly reduces the likelihood
668      that nearly every malloc request during program start-up (or
669      after trimming) will invoke sbrk, which needlessly wastes
670      time.
671
672      Automatic rounding-up to page-size units is normally sufficient
673      to avoid measurable overhead, so the default is 0.  However, in
674      systems where sbrk is relatively slow, it can pay to increase
675      this value, at the expense of carrying around more memory than
676      the program needs.
677
678*/
679
680
681#ifndef DEFAULT_MMAP_THRESHOLD
682#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
683#endif
684
685/*
686
687    M_MMAP_THRESHOLD is the request size threshold for using mmap()
688      to service a request. Requests of at least this size that cannot
689      be allocated using already-existing space will be serviced via mmap. 
690      (If enough normal freed space already exists it is used instead.)
691
692      Using mmap segregates relatively large chunks of memory so that
693      they can be individually obtained and released from the host
694      system. A request serviced through mmap is never reused by any
695      other request (at least not directly; the system may just so
696      happen to remap successive requests to the same locations).
697
698      Segregating space in this way has the benefit that mmapped space
699      can ALWAYS be individually released back to the system, which
700      helps keep the system level memory demands of a long-lived
701      program low. Mapped memory can never become `locked' between
702      other chunks, as can happen with normally allocated chunks, which
703      means that even trimming via malloc_trim would not release them.
704
705      However, it has the disadvantages that:
706
707         1. The space cannot be reclaimed, consolidated, and then
708            used to service later requests, as happens with normal chunks.
709         2. It can lead to more wastage because of mmap page alignment
710            requirements
711         3. It causes malloc performance to be more dependent on host
712            system memory management support routines which may vary in
713            implementation quality and may impose arbitrary
714            limitations. Generally, servicing a request via normal
715            malloc steps is faster than going through a system's mmap.
716
717      All together, these considerations should lead you to use mmap
718      only for relatively large requests. 
719
720
721*/
722
723
724
725#ifndef DEFAULT_MMAP_MAX
726#if HAVE_MMAP
727#define DEFAULT_MMAP_MAX       (64)
728#else
729#define DEFAULT_MMAP_MAX       (0)
730#endif
731#endif
732
733/*
734    M_MMAP_MAX is the maximum number of requests to simultaneously
735      service using mmap. This parameter exists because:
736
737         1. Some systems have a limited number of internal tables for
738            use by mmap.
739         2. In most systems, overreliance on mmap can degrade overall
740            performance.
741         3. If a program allocates many large regions, it is probably
742            better off using normal sbrk-based allocation routines that
743            can reclaim and reallocate normal heap memory. Using a
744            small value allows transition into this mode after the
745            first few allocations.
746
747      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
748      the default value is 0, and attempts to set it to non-zero values
749      in mallopt will fail.
750*/
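/*
  Illustrative mallopt calls for the mmap parameters (hypothetical,
  following the descriptions above):

      mallopt(M_MMAP_THRESHOLD, 256 * 1024);   mmap only larger requests
      mallopt(M_MMAP_MAX, 0);                  disable mmap entirely
*/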
751
752
753
754
755/*
756
757  Special defines for linux libc
758
759  Except when compiled using these special defines for Linux libc
760  using weak aliases, this malloc is NOT designed to work in
761  multithreaded applications.  No semaphores or other concurrency
762  control are provided to ensure that multiple malloc or free calls
763    don't run at the same time, which could be disastrous. A single
764  semaphore could be used across malloc, realloc, and free (which is
765  essentially the effect of the linux weak alias approach). It would
766  be hard to obtain finer granularity.
767
768*/
769
770
771#ifdef INTERNAL_LINUX_C_LIB
772
773#if __STD_C
774
775Void_t * __default_morecore_init (ptrdiff_t);
776Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
777
778#else
779
780Void_t * __default_morecore_init ();
781Void_t *(*__morecore)() = __default_morecore_init;
782
783#endif
784
785#define MORECORE (*__morecore)
786#define MORECORE_FAILURE 0
787#define MORECORE_CLEARS 1
788
789#else /* INTERNAL_LINUX_C_LIB */
790
791#if __STD_C
792extern Void_t*     sbrk(ptrdiff_t);
793#else
794extern Void_t*     sbrk();
795#endif
796
797#ifndef MORECORE
798#define MORECORE sbrk
799#endif
800
801#ifndef MORECORE_FAILURE
802#define MORECORE_FAILURE -1
803#endif
804
805#ifndef MORECORE_CLEARS
806#define MORECORE_CLEARS 1
807#endif
808
809#endif /* INTERNAL_LINUX_C_LIB */
810
811#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
812
813#define cALLOc          __libc_calloc
814#define fREe            __libc_free
815#define mALLOc          __libc_malloc
816#define mEMALIGn        __libc_memalign
817#define rEALLOc         __libc_realloc
818#define vALLOc          __libc_valloc
819#define pvALLOc         __libc_pvalloc
820#define mALLINFo        __libc_mallinfo
821#define mALLOPt         __libc_mallopt
822
823#pragma weak calloc = __libc_calloc
824#pragma weak free = __libc_free
825#pragma weak cfree = __libc_free
826#pragma weak malloc = __libc_malloc
827#pragma weak memalign = __libc_memalign
828#pragma weak realloc = __libc_realloc
829#pragma weak valloc = __libc_valloc
830#pragma weak pvalloc = __libc_pvalloc
831#pragma weak mallinfo = __libc_mallinfo
832#pragma weak mallopt = __libc_mallopt
833
834#else
835
836
837#define cALLOc          calloc
838#define fREe            free
839#define mALLOc          malloc
840#define mEMALIGn        memalign
841#define rEALLOc         realloc
842#define vALLOc          valloc
843#define pvALLOc         pvalloc
844#define mALLINFo        mallinfo
845#define mALLOPt         mallopt
846
847#endif
848
849/* Public routines */
850
851#if __STD_C
852
853Void_t* mALLOc(size_t);
854void    fREe(Void_t*);
855Void_t* rEALLOc(Void_t*, size_t);
856Void_t* mEMALIGn(size_t, size_t);
857Void_t* vALLOc(size_t);
858Void_t* pvALLOc(size_t);
859Void_t* cALLOc(size_t, size_t);
860void    cfree(Void_t*);
861int     malloc_trim(size_t);
862size_t  malloc_usable_size(Void_t*);
863void    malloc_stats();
864int     mALLOPt(int, int);
865struct mallinfo mALLINFo(void);
866#else
867Void_t* mALLOc();
868void    fREe();
869Void_t* rEALLOc();
870Void_t* mEMALIGn();
871Void_t* vALLOc();
872Void_t* pvALLOc();
873Void_t* cALLOc();
874void    cfree();
875int     malloc_trim();
876size_t  malloc_usable_size();
877void    malloc_stats();
878int     mALLOPt();
879struct mallinfo mALLINFo();
880#endif
881
882
883#ifdef __cplusplus
884};  /* end of extern "C" */
885#endif
886
887/* ---------- To make a malloc.h, end cutting here ------------ */
888
889
890/*
891  Emulation of sbrk for WIN32
892  All code within the ifdef WIN32 is untested by me.
893*/
894
895
896#ifdef WIN32
897
898#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
899                        ~(malloc_getpagesize-1))
900
901/* reserve 64MB to ensure large contiguous space */
902#define RESERVED_SIZE (1024*1024*64)
903#define NEXT_SIZE (2048*1024)
904#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
905
906struct GmListElement;
907typedef struct GmListElement GmListElement;
908
909struct GmListElement
910{
911        GmListElement* next;
912        void* base;
913};
914
915static GmListElement* head = 0;
916static unsigned int gNextAddress = 0;
917static unsigned int gAddressBase = 0;
918static unsigned int gAllocatedSize = 0;
919
920static
921GmListElement* makeGmListElement (void* bas)
922{
923        GmListElement* this;
924        this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
925        ASSERT (this);
926        if (this)
927        {
928                this->base = bas;
929                this->next = head;
930                head = this;
931        }
932        return this;
933}
934
935void gcleanup ()
936{
937        BOOL rval;
938        ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
939        if (gAddressBase && (gNextAddress - gAddressBase))
940        {
941                rval = VirtualFree ((void*)gAddressBase, 
942                                                        gNextAddress - gAddressBase, 
943                                                        MEM_DECOMMIT);
944        ASSERT (rval);
945        }
946        while (head)
947        {
948                GmListElement* next = head->next;
949                rval = VirtualFree (head->base, 0, MEM_RELEASE);
950                ASSERT (rval);
951                LocalFree (head);
952                head = next;
953        }
954}
955               
956static
957void* findRegion (void* start_address, unsigned long size)
958{
959        MEMORY_BASIC_INFORMATION info;
960        while ((unsigned long)start_address < TOP_MEMORY)
961        {
962                VirtualQuery (start_address, &info, sizeof (info));
963                if (info.State != MEM_FREE)
964                        start_address = (char*)info.BaseAddress + info.RegionSize;
965                else if (info.RegionSize >= size)
966                        return start_address;
967                else
968                        start_address = (char*)info.BaseAddress + info.RegionSize; 
969        }
970        return NULL;
971       
972}
973
974
975void* wsbrk (long size)
976{
977        void* tmp;
978        if (size > 0)
979        {
980                if (gAddressBase == 0)
981                {
982                        gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
983                        gNextAddress = gAddressBase = 
984                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize, 
985                                                                                        MEM_RESERVE, PAGE_NOACCESS);
986                } else if (AlignPage (gNextAddress + size) > (gAddressBase +
987gAllocatedSize))
988                {
989                        long new_size = max (NEXT_SIZE, AlignPage (size));
990                        void* new_address = (void*)(gAddressBase+gAllocatedSize);
991                        do 
992                        {
993                                new_address = findRegion (new_address, new_size);
994                               
995                                if (new_address == 0)
996                                        return (void*)-1;
997
998                                gAddressBase = gNextAddress =
999                                        (unsigned int)VirtualAlloc (new_address, new_size,
1000                                                                                                MEM_RESERVE, PAGE_NOACCESS);
1001                                // repeat in case of race condition
1002                                // The region that we found has been snagged
1003                                // by another thread
1004                        }
1005                        while (gAddressBase == 0);
1006
1007                        ASSERT (new_address == (void*)gAddressBase);
1008
1009                        gAllocatedSize = new_size;
1010
1011                        if (!makeGmListElement ((void*)gAddressBase))
1012                                return (void*)-1;
1013                }
1014                if ((size + gNextAddress) > AlignPage (gNextAddress))
1015                {
1016                        void* res;
1017                        res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1018                                                                (size + gNextAddress - 
1019                                                                 AlignPage (gNextAddress)), 
1020                                                                MEM_COMMIT, PAGE_READWRITE);
1021                        if (res == 0)
1022                                return (void*)-1;
1023                }
1024                tmp = (void*)gNextAddress;
1025                gNextAddress = (unsigned int)tmp + size;
1026                return tmp;
1027        }
1028        else if (size < 0)
1029        {
1030                unsigned int alignedGoal = AlignPage (gNextAddress + size);
1031                /* Trim by releasing the virtual memory */
1032                if (alignedGoal >= gAddressBase)
1033                {
1034                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal, 
1035                                                 MEM_DECOMMIT);
1036                        gNextAddress = gNextAddress + size;
1037                        return (void*)gNextAddress;
1038                }
1039                else 
1040                {
1041                        VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1042                                                 MEM_DECOMMIT);
1043                        gNextAddress = gAddressBase;
1044                        return (void*)-1;
1045                }
1046        }
1047        else
1048        {
1049                return (void*)gNextAddress;
1050        }
1051}
1052
1053#endif
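/*
  Sketch of the wsbrk contract (illustrative; it mirrors sbrk):
  wsbrk(n) with n > 0 commits and returns n more bytes, wsbrk(0)
  returns the current break, and wsbrk(-n) decommits the top n bytes.
  On failure it returns (void*)-1, matching MORECORE_FAILURE.
*/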
1054
1055
1056
1057/*
1058  Type declarations
1059*/
1060
1061
1062struct malloc_chunk
1063{
1064  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1065  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
1066  struct malloc_chunk* fd;   /* double links -- used only if free. */
1067  struct malloc_chunk* bk;
1068};
1069
1070typedef struct malloc_chunk* mchunkptr;
1071
1072/*
1073
1074   malloc_chunk details:
1075
1076    (The following includes lightly edited explanations by Colin Plumb.)
1077
1078    Chunks of memory are maintained using a `boundary tag' method as
1079    described in e.g., Knuth or Standish.  (See the paper by Paul
1080    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1081    survey of such techniques.)  Sizes of free chunks are stored both
1082    in the front of each chunk and at the end.  This makes
1083    consolidating fragmented chunks into bigger chunks very fast.  The
1084    size fields also hold bits representing whether chunks are free or
1085    in use.
1086
1087    An allocated chunk looks like this: 
1088
1089
1090    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1091            |             Size of previous chunk, if allocated            | |
1092            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1093            |             Size of chunk, in bytes                         |P|
1094      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1095            |             User data starts here...                          .
1096            .                                                               .
1097            .             (malloc_usable_space() bytes)                     .
1098            .                                                               |
1099nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1100            |             Size of chunk                                     |
1101            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1102
1103
1104    Where "chunk" is the front of the chunk for the purpose of most of
1105    the malloc code, but "mem" is the pointer that is returned to the
1106    user.  "Nextchunk" is the beginning of the next contiguous chunk.
1107
1108    Chunks always begin on even word boundaries, so the mem portion
1109    (which is returned to the user) is also on an even word boundary, and
1110    thus double-word aligned.
1111
1112    Free chunks are stored in circular doubly-linked lists, and look like this:
1113
1114    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1115            |             Size of previous chunk                            |
1116            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1117    `head:' |             Size of chunk, in bytes                         |P|
1118      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1119            |             Forward pointer to next chunk in list             |
1120            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1121            |             Back pointer to previous chunk in list            |
1122            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1123            |             Unused space (may be 0 bytes long)                .
1124            .                                                               .
1125            .                                                               |
1126nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1127    `foot:' |             Size of chunk, in bytes                           |
1128            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1129
1130    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1131    chunk size (which is always a multiple of two words), is an in-use
1132    bit for the *previous* chunk.  If that bit is *clear*, then the
1133    word before the current chunk size contains the previous chunk
1134    size, and can be used to find the front of the previous chunk.
1135    (The very first chunk allocated always has this bit set,
1136    preventing access to non-existent (or non-owned) memory.)
1137
1138    Note that the `foot' of the current chunk is actually represented
1139    as the prev_size of the NEXT chunk. (This makes it easier to
1140    deal with alignments etc).
1141
1142    The two exceptions to all this are
1143
1144     1. The special chunk `top', which doesn't bother using the
1145        trailing size field since there is no
1146        next contiguous chunk that would have to index off it. (After
1147        initialization, `top' is forced to always exist.  If it would
1148        become less than MINSIZE bytes long, it is replenished via
1149        malloc_extend_top.)
1150
1151     2. Chunks allocated via mmap, which have the second-lowest-order
1152        bit (IS_MMAPPED) set in their size fields.  Because they are
1153        never merged or traversed from any other chunk, they have no
1154        foot size or inuse information.
1155
1156    Available chunks are kept in any of several places (all declared below):
1157
1158    * `av': An array of chunks serving as bin headers for consolidated
1159       chunks. Each bin is doubly linked.  The bins are approximately
1160       proportionally (log) spaced.  There are a lot of these bins
1161       (128). This may look excessive, but works very well in
1162       practice.  All procedures maintain the invariant that no
1163       consolidated chunk physically borders another one. Chunks in
1164       bins are kept in size order, with ties going to the
1165       approximately least recently used chunk.
1166
1167       The chunks in each bin are maintained in decreasing sorted order by
1168       size.  This is irrelevant for the small bins, which all contain
1169       the same-sized chunks, but facilitates best-fit allocation for
1170       larger chunks. (These lists are just sequential. Keeping them in
1171       order almost never requires enough traversal to warrant using
1172       fancier ordered data structures.)  Chunks of the same size are
1173       linked with the most recently freed at the front, and allocations
1174       are taken from the back.  This results in LRU or FIFO allocation
1175       order, which tends to give each chunk an equal opportunity to be
1176       consolidated with adjacent freed chunks, resulting in larger free
1177       chunks and less fragmentation.
1178
1179    * `top': The top-most available chunk (i.e., the one bordering the
1180       end of available memory) is treated specially. It is never
1181       included in any bin, is used only if no other chunk is
1182       available, and is released back to the system if it is very
1183       large (see M_TRIM_THRESHOLD).
1184
1185    * `last_remainder': A bin holding only the remainder of the
1186       most recently split (non-top) chunk. This bin is checked
1187       before other non-fitting chunks, so as to provide better
1188       locality for runs of sequentially allocated chunks.
1189
1190    *  Implicitly, through the host system's memory mapping tables.
1191       If supported, requests greater than a threshold are usually
1192       serviced via calls to mmap, and then later released via munmap.
1193
1194*/
1195
1196
1197
1198
1199
1200
1201/*  sizes, alignments */
1202
1203#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
1204#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
1205#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
1206#define MINSIZE                (sizeof(struct malloc_chunk))
1207
1208/* conversion from malloc headers to user pointers, and back */
1209
1210#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1211#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
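/*
  For example (illustrative, with 4-byte SIZE_SZ): chunk2mem(p) returns
  (char*)p + 8, skipping the prev_size and size fields, and
  mem2chunk(chunk2mem(p)) == p for any chunk pointer p.
*/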
1212
1213/* pad request bytes into a usable size */
1214
1215#define request2size(req) \
1216 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1217  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1218   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
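/*
  Worked examples (assuming 4-byte INTERNAL_SIZE_T, so SIZE_SZ == 4,
  MALLOC_ALIGN_MASK == 7, and MINSIZE == 16):

      request2size(0)  == 16   (0 + 11 < 16 + 7, so MINSIZE is used)
      request2size(13) == 24   ((13 + 11) & ~7)
      request2size(20) == 24   ((20 + 11) & ~7)
*/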
1219
1220/* Check if m has acceptable alignment */
1221
1222#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1223
1224
1225
1226
1227/*
1228  Physical chunk operations 
1229*/
1230
1231
1232/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1233
1234#define PREV_INUSE 0x1
1235
1236/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1237
1238#define IS_MMAPPED 0x2
1239
1240/* Bits to mask off when extracting size */
1241
1242#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1243
1244
1245/* Ptr to next physical malloc_chunk. */
1246
1247#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1248
1249/* Ptr to previous physical malloc_chunk */
1250
1251#define prev_chunk(p)\
1252   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1253
1254
1255/* Treat space at ptr + offset as a chunk */
1256
1257#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1258
1259
1260
1261
1262/*
1263  Dealing with use bits
1264*/
1265
1266/* extract p's inuse bit */
1267
1268#define inuse(p)\
1269((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1270
1271/* extract inuse bit of previous chunk */
1272
1273#define prev_inuse(p)  ((p)->size & PREV_INUSE)
1274
1275/* check for mmap()'ed chunk */
1276
1277#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1278
1279/* set/clear chunk as in use without otherwise disturbing */
1280
1281#define set_inuse(p)\
1282((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1283
1284#define clear_inuse(p)\
1285((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1286
1287/* check/set/clear inuse bits in known places */
1288
1289#define inuse_bit_at_offset(p, s)\
1290 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1291
1292#define set_inuse_bit_at_offset(p, s)\
1293 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1294
1295#define clear_inuse_bit_at_offset(p, s)\
1296 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1297
1298
1299
1300
1301/*
1302  Dealing with size fields
1303*/
1304
1305/* Get size, ignoring use bits */
1306
1307#define chunksize(p)          ((p)->size & ~(SIZE_BITS))
1308
1309/* Set size at head, without disturbing its use bit */
1310
1311#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1312
1313/* Set size/use ignoring previous bits in header */
1314
1315#define set_head(p, s)        ((p)->size = (s))
1316
1317/* Set size at footer (only when chunk is not in use) */
1318
1319#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
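/*
  Worked example (illustrative): if p->size == 0x19 (i.e. 24 | PREV_INUSE),
  then chunksize(p) == 24 and prev_inuse(p) is nonzero, meaning the
  previous physical chunk is in use, so p->prev_size must not be read.
  set_head_size(p, 32) would change the field to 0x21, preserving the bit.
*/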
1320
1321
1322
1323
1324
1325/*
1326   Bins
1327
1328    The bins, `av_' are an array of pairs of pointers serving as the
1329    heads of (initially empty) doubly-linked lists of chunks, laid out
1330    in a way so that each pair can be treated as if it were in a
1331    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1332    and chunks are the same).
1333
1334    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1335    8 bytes apart. Larger bins are approximately logarithmically
1336    spaced. (See the table below.) The `av_' array is never mentioned
1337    directly in the code, but instead via bin access macros.
1338
1339    Bin layout:
1340
1341    64 bins of size       8
1342    32 bins of size      64
1343    16 bins of size     512
1344     8 bins of size    4096
1345     4 bins of size   32768
1346     2 bins of size  262144
1347     1 bin  of size what's left
1348
1349    There is actually a little bit of slop in the numbers in bin_index
1350    for the sake of speed. This makes no difference elsewhere.
1351
1352    The special chunks `top' and `last_remainder' get their own bins,
1353    (this is implemented via yet more trickery with the av_ array),
1354    although `top' is never properly linked to its bin since it is
1355    always handled specially.
1356
1357*/
1358
1359#define NAV             128   /* number of bins */
1360
1361typedef struct malloc_chunk* mbinptr;
1362
1363/* access macros */
1364
1365#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1366#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1367#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1368
1369/*
1370   The first 2 bins are never indexed. The corresponding av_ cells are instead
1371   used for bookkeeping. This is not to save space, but to simplify
1372   indexing, maintain locality, and avoid some initialization tests.
1373*/
1374
1375#define top            (bin_at(0)->fd)   /* The topmost chunk */
1376#define last_remainder (bin_at(1))       /* remainder from last split */
1377
1378
1379/*
1380   Because top initially points to its own bin with initial
1381   zero size, thus forcing extension on the first malloc request,
1382   we avoid having any special code in malloc to check whether
1383   it even exists yet. But we still need to check in malloc_extend_top.
1384*/
1385
1386#define initial_top    ((mchunkptr)(bin_at(0)))
1387
1388/* Helper macro to initialize bins */
1389
1390#define IAV(i)  bin_at(i), bin_at(i)
1391
1392static mbinptr av_[NAV * 2 + 2] = {
1393 0, 0,
1394 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
1395 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
1396 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
1397 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
1398 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
1399 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
1400 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
1401 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
1402 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
1403 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
1404 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
1405 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
1406 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
1407 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1408 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1409 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1410};
1411
1412
1413
1414/* field-extraction macros */
1415
1416#define first(b) ((b)->fd)
1417#define last(b)  ((b)->bk)
1418
1419/*
1420  Indexing into bins
1421*/
1422
1423#define bin_index(sz)                                                          \
1424(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
1425 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
1426 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
1427 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
1428 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
1429 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1430                                          126)                     
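
/*
  Worked examples of bin_index, derived directly from the macro above
  (arguments are chunk sizes, not request sizes). A minimal sketch,
  illustrative only and not compiled with the allocator:
*/
#if 0
static void bin_index_examples()
{
  assert(bin_index(40)    ==   5);   /* small range:  40 >> 3   */
  assert(bin_index(1000)  ==  71);   /* 56  + (1000  >> 6)      */
  assert(bin_index(16384) == 114);   /* 110 + (16384 >> 12)     */
}
#endif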
1431/*
1432  bins for chunks < 512 are all spaced 8 bytes apart, and hold
1433  identically sized chunks. This is exploited in malloc.
1434*/
1435
1436#define MAX_SMALLBIN         63
1437#define MAX_SMALLBIN_SIZE   512
1438#define SMALLBIN_WIDTH        8
1439
1440#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
1441
1442/*
1443   Requests are `small' if both the corresponding and the next bin are small
1444*/
1445
1446#define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1447
1448
1449
1450/*
1451    To help compensate for the large number of bins, a one-level index
1452    structure is used for bin-by-bin searching.  `binblocks' is a
1453    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1454    have any (possibly) non-empty bins, so they can be skipped over
1455    all at once during traversals. The bits are NOT always
1456    cleared as soon as all bins in a block are empty, but instead only
1457    when all are noticed to be empty during traversal in malloc.
1458*/
1459
1460#define BINBLOCKWIDTH     4   /* bins per block */
1461
1462#define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */
1463
1464/* bin<->block macros */
1465
1466#define idx2binblock(ix)    ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1467#define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
1468#define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
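
/*
  A minimal sketch of the block macros above, assuming BINBLOCKWIDTH is 4.
  It illustrates only the bit arithmetic; actually running it would
  perturb real allocator bookkeeping, so it is not compiled in:
*/
#if 0
static void binblock_example()
{
  mark_binblock(10);                     /* sets bit 1 << (10/4), i.e. 1 << 2 */
  assert(binblocks & idx2binblock(8));   /* bins 8..11 share that same bit    */
  clear_binblock(9);                     /* clears the shared bit again       */
}
#endif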
1469
1470
1471
1472
1473
1474/*  Other static bookkeeping data */
1475
1476/* variables holding tunable values */
1477
1478static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
1479static unsigned long top_pad          = DEFAULT_TOP_PAD;
1480static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
1481static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
1482
1483/* The first value returned from sbrk */
1484static char* sbrk_base = (char*)(-1);
1485
1486/* The maximum memory obtained from system via sbrk */
1487static unsigned long max_sbrked_mem = 0; 
1488
1489/* The maximum via either sbrk or mmap */
1490static unsigned long max_total_mem = 0; 
1491
1492/* internal working copy of mallinfo */
1493static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1494
1495/* The total memory obtained from system via sbrk */
1496#define sbrked_mem  (current_mallinfo.arena)
1497
1498/* Tracking mmaps */
1499
1500static unsigned int n_mmaps = 0;
1501static unsigned int max_n_mmaps = 0;
1502static unsigned long mmapped_mem = 0;
1503static unsigned long max_mmapped_mem = 0;
1504
1505
1506
1507/*
1508  Debugging support
1509*/
1510
1511#if DEBUG
1512
1513
1514/*
1515  These routines make a number of assertions about the states
1516  of data structures that should be true at all times. If any
1517  are not true, it's very likely that a user program has somehow
1518  trashed memory. (It's also possible that there is a coding error
1519  in malloc, in which case please report it!)
1520*/
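
/*
  (These checks are compiled in only when this file is built with DEBUG
  defined to a nonzero value, e.g. -DDEBUG=1 on the compiler command
  line; otherwise the check_* macros below expand to nothing.)
*/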
1521
1522#if __STD_C
1523static void do_check_chunk(mchunkptr p) 
1524#else
1525static void do_check_chunk(p) mchunkptr p;
1526#endif
1527{ 
1528  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1529
1530  /* No checkable chunk is mmapped */
1531  assert(!chunk_is_mmapped(p));
1532
1533  /* Check for legal address ... */
1534  assert((char*)p >= sbrk_base);
1535  if (p != top) 
1536    assert((char*)p + sz <= (char*)top);
1537  else
1538    assert((char*)p + sz <= sbrk_base + sbrked_mem);
1539
1540}
1541
1542
1543#if __STD_C
1544static void do_check_free_chunk(mchunkptr p) 
1545#else
1546static void do_check_free_chunk(p) mchunkptr p;
1547#endif
1548{ 
1549  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1550  mchunkptr next = chunk_at_offset(p, sz);
1551
1552  do_check_chunk(p);
1553
1554  /* Check whether it claims to be free ... */
1555  assert(!inuse(p));
1556
1557  /* Unless a special marker, must have OK fields */
1558  if ((long)sz >= (long)MINSIZE)
1559  {
1560    assert((sz & MALLOC_ALIGN_MASK) == 0);
1561    assert(aligned_OK(chunk2mem(p)));
1562    /* ... matching footer field */
1563    assert(next->prev_size == sz);
1564    /* ... and is fully consolidated */
1565    assert(prev_inuse(p));
1566    assert (next == top || inuse(next));
1567   
1568    /* ... and has minimally sane links */
1569    assert(p->fd->bk == p);
1570    assert(p->bk->fd == p);
1571  }
1572  else /* markers are always of size SIZE_SZ */
1573    assert(sz == SIZE_SZ); 
1574}
1575
1576#if __STD_C
1577static void do_check_inuse_chunk(mchunkptr p) 
1578#else
1579static void do_check_inuse_chunk(p) mchunkptr p;
1580#endif
1581{ 
1582  mchunkptr next = next_chunk(p);
1583  do_check_chunk(p);
1584
1585  /* Check whether it claims to be in use ... */
1586  assert(inuse(p));
1587
1588  /* ... and is surrounded by OK chunks.
1589    Since more things can be checked with free chunks than inuse ones,
1590    if an inuse chunk borders them and debug is on, it's worth checking them.
1591  */
1592  if (!prev_inuse(p)) 
1593  {
1594    mchunkptr prv = prev_chunk(p);
1595    assert(next_chunk(prv) == p);
1596    do_check_free_chunk(prv);
1597  }
1598  if (next == top)
1599  {
1600    assert(prev_inuse(next));
1601    assert(chunksize(next) >= MINSIZE);
1602  }
1603  else if (!inuse(next))
1604    do_check_free_chunk(next);
1605
1606}
1607
1608#if __STD_C
1609static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) 
1610#else
1611static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1612#endif
1613{
1614  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1615  long room = sz - s;
1616
1617  do_check_inuse_chunk(p);
1618
1619  /* Legal size ... */
1620  assert((long)sz >= (long)MINSIZE);
1621  assert((sz & MALLOC_ALIGN_MASK) == 0);
1622  assert(room >= 0);
1623  assert(room < (long)MINSIZE);
1624
1625  /* ... and alignment */
1626  assert(aligned_OK(chunk2mem(p)));
1627
1628
1629  /* ... and was allocated at front of an available chunk */
1630  assert(prev_inuse(p));
1631
1632}
1633
1634
1635#define check_free_chunk(P)  do_check_free_chunk(P)
1636#define check_inuse_chunk(P) do_check_inuse_chunk(P)
1637#define check_chunk(P) do_check_chunk(P)
1638#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1639#else
1640#define check_free_chunk(P)
1641#define check_inuse_chunk(P)
1642#define check_chunk(P)
1643#define check_malloced_chunk(P,N)
1644#endif
1645
1646
1647
1648/*
1649  Macro-based internal utilities
1650*/
1651
1652
1653/* 
1654  Linking chunks in bin lists.
1655  Call these only with variables, not arbitrary expressions, as arguments.
1656*/
1657
1658/*
1659  Place chunk p of size s in its bin, in size order,
1660  putting it ahead of others of same size.
1661*/
1662
1663
1664#define frontlink(P, S, IDX, BK, FD)                                          \
1665{                                                                             \
1666  if (S < MAX_SMALLBIN_SIZE)                                                  \
1667  {                                                                           \
1668    IDX = smallbin_index(S);                                                  \
1669    mark_binblock(IDX);                                                       \
1670    BK = bin_at(IDX);                                                         \
1671    FD = BK->fd;                                                              \
1672    P->bk = BK;                                                               \
1673    P->fd = FD;                                                               \
1674    FD->bk = BK->fd = P;                                                      \
1675  }                                                                           \
1676  else                                                                        \
1677  {                                                                           \
1678    IDX = bin_index(S);                                                       \
1679    BK = bin_at(IDX);                                                         \
1680    FD = BK->fd;                                                              \
1681    if (FD == BK) mark_binblock(IDX);                                         \
1682    else                                                                      \
1683    {                                                                         \
1684      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
1685      BK = FD->bk;                                                            \
1686    }                                                                         \
1687    P->bk = BK;                                                               \
1688    P->fd = FD;                                                               \
1689    FD->bk = BK->fd = P;                                                      \
1690  }                                                                           \
1691}
1692
1693
1694/* take a chunk off a list */
1695
1696#define unlink(P, BK, FD)                                                     \
1697{                                                                             \
1698  BK = P->bk;                                                                 \
1699  FD = P->fd;                                                                 \
1700  FD->bk = BK;                                                                \
1701  BK->fd = FD;                                                                \
1702}
1703
1704/* Place p as the last remainder */
1705
1706#define link_last_remainder(P)                                                \
1707{                                                                             \
1708  last_remainder->fd = last_remainder->bk =  P;                               \
1709  P->fd = P->bk = last_remainder;                                             \
1710}
1711
1712/* Clear the last_remainder bin */
1713
1714#define clear_last_remainder \
1715  (last_remainder->fd = last_remainder->bk = last_remainder)
1716
1717
1718
1719
1720
1721
1722/* Routines dealing with mmap(). */
1723
1724#if HAVE_MMAP
1725
1726#if __STD_C
1727static mchunkptr mmap_chunk(size_t size)
1728#else
1729static mchunkptr mmap_chunk(size) size_t size;
1730#endif
1731{
1732  size_t page_mask = malloc_getpagesize - 1;
1733  mchunkptr p;
1734
1735#ifndef MAP_ANONYMOUS
1736  static int fd = -1;
1737#endif
1738
1739  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1740
1741  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1742   * there is no following chunk whose prev_size field could be used.
1743   */
1744  size = (size + SIZE_SZ + page_mask) & ~page_mask;
1745
1746#ifdef MAP_ANONYMOUS
1747  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1748                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1749#else /* !MAP_ANONYMOUS */
1750  if (fd < 0) 
1751  {
1752    fd = open("/dev/zero", O_RDWR);
1753    if(fd < 0) return 0;
1754  }
1755  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1756#endif
1757
1758  if(p == (mchunkptr)-1) return 0;
1759
1760  n_mmaps++;
1761  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1762 
1763  /* We require that the address eight bytes into a page be 8-byte aligned. */
1764  assert(aligned_OK(chunk2mem(p)));
1765
1766  /* The offset to the start of the mmapped region is stored
1767   * in the prev_size field of the chunk; normally it is zero,
1768   * but that can be changed in memalign().
1769   */
1770  p->prev_size = 0;
1771  set_head(p, size|IS_MMAPPED);
1772 
1773  mmapped_mem += size;
1774  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
1775    max_mmapped_mem = mmapped_mem;
1776  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
1777    max_total_mem = mmapped_mem + sbrked_mem;
1778  return p;
1779}
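
/*
  For example, with 4096-byte pages and a 4-byte SIZE_SZ, an mmap_chunk
  request of 10000 bytes maps (10000 + 4 + 4095) & ~4095 == 12288 bytes,
  i.e. three whole pages.
*/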
1780
1781#if __STD_C
1782static void munmap_chunk(mchunkptr p)
1783#else
1784static void munmap_chunk(p) mchunkptr p;
1785#endif
1786{
1787  INTERNAL_SIZE_T size = chunksize(p);
1788  int ret;
1789
1790  assert (chunk_is_mmapped(p));
1791  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1792  assert((n_mmaps > 0));
1793  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1794
1795  n_mmaps--;
1796  mmapped_mem -= (size + p->prev_size);
1797
1798  ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1799
1800  /* munmap returns non-zero on failure */
1801  assert(ret == 0);
1802}
1803
1804#if HAVE_MREMAP
1805
1806#if __STD_C
1807static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1808#else
1809static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1810#endif
1811{
1812  size_t page_mask = malloc_getpagesize - 1;
1813  INTERNAL_SIZE_T offset = p->prev_size;
1814  INTERNAL_SIZE_T size = chunksize(p);
1815  char *cp;
1816
1817  assert (chunk_is_mmapped(p));
1818  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1819  assert((n_mmaps > 0));
1820  assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1821
1822  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1823  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1824
1825  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1826
1827  if (cp == (char *)-1) return 0;
1828
1829  p = (mchunkptr)(cp + offset);
1830
1831  assert(aligned_OK(chunk2mem(p)));
1832
1833  assert((p->prev_size == offset));
1834  set_head(p, (new_size - offset)|IS_MMAPPED);
1835
1836  mmapped_mem -= size + offset;
1837  mmapped_mem += new_size;
1838  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
1839    max_mmapped_mem = mmapped_mem;
1840  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1841    max_total_mem = mmapped_mem + sbrked_mem;
1842  return p;
1843}
1844
1845#endif /* HAVE_MREMAP */
1846
1847#endif /* HAVE_MMAP */
1848
1849
1850
1851
1852/*
1853  Extend the top-most chunk by obtaining memory from system.
1854  Main interface to sbrk (but see also malloc_trim).
1855*/
1856
1857#if __STD_C
1858static void malloc_extend_top(INTERNAL_SIZE_T nb)
1859#else
1860static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1861#endif
1862{
1863  char*     brk;                  /* return value from sbrk */
1864  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1865  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
1866  char*     new_brk;              /* return of 2nd sbrk call */
1867  INTERNAL_SIZE_T top_size;       /* new size of top chunk */
1868
1869  mchunkptr old_top     = top;  /* Record state of old top */
1870  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1871  char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
1872
1873  /* Pad request with top_pad plus minimal overhead */
1874 
1875  INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
1876  unsigned long pagesz    = malloc_getpagesize;
1877
1878  /* If not the first time through, round to preserve page boundary */
1879  /* Otherwise, we need to correct to a page size below anyway. */
1880  /* (We also correct below if there was an intervening foreign sbrk call.) */
1881
1882  if (sbrk_base != (char*)(-1))
1883    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1884
1885  brk = (char*)(MORECORE (sbrk_size));
1886
1887  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
1888  if (brk == (char*)(MORECORE_FAILURE) || 
1889      (brk < old_end && old_top != initial_top))
1890    return;     
1891
1892  sbrked_mem += sbrk_size;
1893
1894  if (brk == old_end) /* can just add bytes to current top */
1895  {
1896    top_size = sbrk_size + old_top_size;
1897    set_head(top, top_size | PREV_INUSE);
1898  }
1899  else
1900  {
1901    if (sbrk_base == (char*)(-1))  /* First time through. Record base */
1902      sbrk_base = brk;
1903    else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
1904      sbrked_mem += brk - (char*)old_end;
1905
1906    /* Guarantee alignment of first new chunk made from this space */
1907    front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
1908    if (front_misalign > 0) 
1909    {
1910      correction = (MALLOC_ALIGNMENT) - front_misalign;
1911      brk += correction;
1912    }
1913    else
1914      correction = 0;
1915
1916    /* Guarantee the next brk will be at a page boundary */
1917    correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
1918
1919    /* Allocate correction */
1920    new_brk = (char*)(MORECORE (correction));
1921    if (new_brk == (char*)(MORECORE_FAILURE)) return; 
1922
1923    sbrked_mem += correction;
1924
1925    top = (mchunkptr)brk;
1926    top_size = new_brk - brk + correction;
1927    set_head(top, top_size | PREV_INUSE);
1928
1929    if (old_top != initial_top)
1930    {
1931
1932      /* There must have been an intervening foreign sbrk call. */
1933      /* A double fencepost is necessary to prevent consolidation */
1934
1935      /* If not enough space to do this, then user did something very wrong */
1936      if (old_top_size < MINSIZE) 
1937      {
1938        set_head(top, PREV_INUSE); /* will force null return from malloc */
1939        return;
1940      }
1941
1942      /* Also keep size a multiple of MALLOC_ALIGNMENT */
1943      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
1944      chunk_at_offset(old_top, old_top_size          )->size =
1945        SIZE_SZ|PREV_INUSE;
1946      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
1947        SIZE_SZ|PREV_INUSE;
1948      set_head_size(old_top, old_top_size);
1949      /* If possible, release the rest. */
1950      if (old_top_size >= MINSIZE) 
1951        fREe(chunk2mem(old_top));
1952    }
1953  }
1954
1955  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) 
1956    max_sbrked_mem = sbrked_mem;
1957  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
1958    max_total_mem = mmapped_mem + sbrked_mem;
1959
1960  /* We always land on a page boundary */
1961  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
1962}
1963
1964
1965
1966
1967/* Main public routines */
1968
1969
1970/*
1971  Malloc Algorithm:
1972
1973    The requested size is first converted into a usable form, `nb'.
1974    This currently means to add 4 bytes overhead plus possibly more to
1975    obtain 8-byte alignment and/or to obtain a size of at least
1976    MINSIZE (currently 16 bytes), the smallest allocatable size.
1977    (All fits are considered `exact' if they are within MINSIZE bytes; see the sketch after this comment.)
1978
1979    From there, the first of the following steps to succeed is taken:
1980
1981      1. The bin corresponding to the request size is scanned, and if
1982         a chunk of exactly the right size is found, it is taken.
1983
1984      2. The most recently remaindered chunk is used if it is big
1985         enough.  This is a form of (roving) first fit, used only in
1986         the absence of exact fits. Runs of consecutive requests use
1987         the remainder of the chunk used for the previous such request
1988         whenever possible. This limited use of a first-fit style
1989         allocation strategy tends to give contiguous chunks
1990         coextensive lifetimes, which improves locality and can reduce
1991         fragmentation in the long run.
1992
1993      3. Other bins are scanned in increasing size order, using a
1994         chunk big enough to fulfill the request, and splitting off
1995         any remainder.  This search is strictly by best-fit; i.e.,
1996         the smallest (with ties going to approximately the least
1997         recently used) chunk that fits is selected.
1998
1999      4. If large enough, the chunk bordering the end of memory
2000         (`top') is split off. (This use of `top' is in accord with
2001         the best-fit search rule.  In effect, `top' is treated as
2002         larger (and thus less well fitting) than any other available
2003         chunk, since it can be extended to be as large as necessary
2004         (up to system limitations).)
2005
2006      5. If the request size meets the mmap threshold and the
2007         system supports mmap, and there are few enough currently
2008         allocated mmapped regions, and a call to mmap succeeds,
2009         the request is allocated via direct memory mapping.
2010
2011      6. Otherwise, the top of memory is extended by
2012         obtaining more space from the system (normally using sbrk,
2013         but definable to anything else via the MORECORE macro).
2014         Memory is gathered from the system (in system page-sized
2015         units) in a way that allows chunks obtained across different
2016         sbrk calls to be consolidated, but does not require
2017         contiguous memory. Thus, it should be safe to intersperse
2018         mallocs with other sbrk calls.
2019
2020
2021      All allocations are made from the `lowest' part of any found
2022      chunk. (The implementation invariant is that prev_inuse is
2023      always true of any allocated chunk; i.e., that each allocated
2024      chunk borders either a previously allocated and still in-use chunk,
2025      or the base of its memory arena.)
2026
2027*/
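
/*
  A minimal sketch of the size conversion described above, assuming a
  4-byte SIZE_SZ (so 4 bytes of overhead and 8-byte alignment);
  illustrative only, not compiled with the allocator:
*/
#if 0
static void request2size_examples()
{
  assert(request2size(1)  == 16);   /* rounded up to MINSIZE  */
  assert(request2size(20) == 24);   /* (20 + 4 + 7) & ~7      */
  assert(request2size(25) == 32);   /* (25 + 4 + 7) & ~7      */
}
#endif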
2028
2029#if __STD_C
2030Void_t* mALLOc(size_t bytes)
2031#else
2032Void_t* mALLOc(bytes) size_t bytes;
2033#endif
2034{
2035  mchunkptr victim;                  /* inspected/selected chunk */
2036  INTERNAL_SIZE_T victim_size;       /* its size */
2037  int       idx;                     /* index for bin traversal */
2038  mbinptr   bin;                     /* associated bin */
2039  mchunkptr remainder;               /* remainder from a split */
2040  long      remainder_size;          /* its size */
2041  int       remainder_index;         /* its bin index */
2042  unsigned long block;               /* block traverser bit */
2043  int       startidx;                /* first bin of a traversed block */
2044  mchunkptr fwd;                     /* misc temp for linking */
2045  mchunkptr bck;                     /* misc temp for linking */
2046  mbinptr q;                         /* misc temp */
2047
2048  INTERNAL_SIZE_T nb  = request2size(bytes);  /* padded request size; */
2049
2050  /* Check for exact match in a bin */
2051
2052  if (is_small_request(nb))  /* Faster version for small requests */
2053  {
2054    idx = smallbin_index(nb); 
2055
2056    /* No traversal or size check necessary for small bins.  */
2057
2058    q = bin_at(idx);
2059    victim = last(q);
2060
2061    /* Also scan the next one, since it would have a remainder < MINSIZE */
2062    if (victim == q)
2063    {
2064      q = next_bin(q);
2065      victim = last(q);
2066    }
2067    if (victim != q)
2068    {
2069      victim_size = chunksize(victim);
2070      unlink(victim, bck, fwd);
2071      set_inuse_bit_at_offset(victim, victim_size);
2072      check_malloced_chunk(victim, nb);
2073      return chunk2mem(victim);
2074    }
2075
2076    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2077
2078  }
2079  else
2080  {
2081    idx = bin_index(nb);
2082    bin = bin_at(idx);
2083
2084    for (victim = last(bin); victim != bin; victim = victim->bk)
2085    {
2086      victim_size = chunksize(victim);
2087      remainder_size = victim_size - nb;
2088     
2089      if (remainder_size >= (long)MINSIZE) /* too big */
2090      {
2091        --idx; /* adjust to rescan below after checking last remainder */
2092        break;   
2093      }
2094
2095      else if (remainder_size >= 0) /* exact fit */
2096      {
2097        unlink(victim, bck, fwd);
2098        set_inuse_bit_at_offset(victim, victim_size);
2099        check_malloced_chunk(victim, nb);
2100        return chunk2mem(victim);
2101      }
2102    }
2103
2104    ++idx; 
2105
2106  }
2107
2108  /* Try to use the last split-off remainder */
2109
2110  if ( (victim = last_remainder->fd) != last_remainder)
2111  {
2112    victim_size = chunksize(victim);
2113    remainder_size = victim_size - nb;
2114
2115    if (remainder_size >= (long)MINSIZE) /* re-split */
2116    {
2117      remainder = chunk_at_offset(victim, nb);
2118      set_head(victim, nb | PREV_INUSE);
2119      link_last_remainder(remainder);
2120      set_head(remainder, remainder_size | PREV_INUSE);
2121      set_foot(remainder, remainder_size);
2122      check_malloced_chunk(victim, nb);
2123      return chunk2mem(victim);
2124    }
2125
2126    clear_last_remainder;
2127
2128    if (remainder_size >= 0)  /* exhaust */
2129    {
2130      set_inuse_bit_at_offset(victim, victim_size);
2131      check_malloced_chunk(victim, nb);
2132      return chunk2mem(victim);
2133    }
2134
2135    /* Else place in bin */
2136
2137    frontlink(victim, victim_size, remainder_index, bck, fwd);
2138  }
2139
2140  /*
2141     If there are any possibly nonempty big-enough blocks,
2142     search for the best-fitting chunk by scanning bins in blockwidth units.
2143  */
2144
2145  if ( (block = idx2binblock(idx)) <= binblocks) 
2146  {
2147
2148    /* Get to the first marked block */
2149
2150    if ( (block & binblocks) == 0) 
2151    {
2152      /* force to an even block boundary */
2153      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2154      block <<= 1;
2155      while ((block & binblocks) == 0)
2156      {
2157        idx += BINBLOCKWIDTH;
2158        block <<= 1;
2159      }
2160    }
2161     
2162    /* For each possibly nonempty block ... */
2163    for (;;) 
2164    {
2165      startidx = idx;          /* (track incomplete blocks) */
2166      q = bin = bin_at(idx);
2167
2168      /* For each bin in this block ... */
2169      do
2170      {
2171        /* Find and use first big enough chunk ... */
2172
2173        for (victim = last(bin); victim != bin; victim = victim->bk)
2174        {
2175          victim_size = chunksize(victim);
2176          remainder_size = victim_size - nb;
2177
2178          if (remainder_size >= (long)MINSIZE) /* split */
2179          {
2180            remainder = chunk_at_offset(victim, nb);
2181            set_head(victim, nb | PREV_INUSE);
2182            unlink(victim, bck, fwd);
2183            link_last_remainder(remainder);
2184            set_head(remainder, remainder_size | PREV_INUSE);
2185            set_foot(remainder, remainder_size);
2186            check_malloced_chunk(victim, nb);
2187            return chunk2mem(victim);
2188          }
2189
2190          else if (remainder_size >= 0)  /* take */
2191          {
2192            set_inuse_bit_at_offset(victim, victim_size);
2193            unlink(victim, bck, fwd);
2194            check_malloced_chunk(victim, nb);
2195            return chunk2mem(victim);
2196          }
2197
2198        }
2199
2200       bin = next_bin(bin);
2201
2202      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2203
2204      /* Clear out the block bit. */
2205
2206      do   /* Possibly backtrack to try to clear a partial block */
2207      {
2208        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2209        {
2210          binblocks &= ~block;
2211          break;
2212        }
2213        --startidx;
2214       q = prev_bin(q);
2215      } while (first(q) == q);
2216
2217      /* Get to the next possibly nonempty block */
2218
2219      if ( (block <<= 1) <= binblocks && (block != 0) ) 
2220      {
2221        while ((block & binblocks) == 0)
2222        {
2223          idx += BINBLOCKWIDTH;
2224          block <<= 1;
2225        }
2226      }
2227      else
2228        break;
2229    }
2230  }
2231
2232
2233  /* Try to use top chunk */
2234
2235  /* Require that there be a remainder, ensuring top always exists  */
2236  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2237  {
2238
2239#if HAVE_MMAP
2240    /* If big and would otherwise need to extend, try to use mmap instead */
2241    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2242        (victim = mmap_chunk(nb)) != 0)
2243      return chunk2mem(victim);
2244#endif
2245
2246    /* Try to extend */
2247    malloc_extend_top(nb);
2248    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2249      return 0; /* propagate failure */
2250  }
2251
2252  victim = top;
2253  set_head(victim, nb | PREV_INUSE);
2254  top = chunk_at_offset(victim, nb);
2255  set_head(top, remainder_size | PREV_INUSE);
2256  check_malloced_chunk(victim, nb);
2257  return chunk2mem(victim);
2258
2259}
2260
2261
2262
2263
2264/*
2265
2266  free() algorithm :
2267
2268    cases:
2269
2270       1. free(0) has no effect. 
2271
2272       2. If the chunk was allocated via mmap, it is released via munmap().
2273
2274       3. If a returned chunk borders the current high end of memory,
2275          it is consolidated into the top, and if the total unused
2276          topmost memory exceeds the trim threshold, malloc_trim is
2277          called.
2278
2279       4. Other chunks are consolidated as they arrive, and
2280          placed in corresponding bins. (This includes the case of
2281          consolidating with the current `last_remainder').
2282
2283*/
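
/*
  A minimal usage sketch of the cases above; illustrative only, not
  compiled with the allocator:
*/
#if 0
static void free_example()
{
  Void_t* p = mALLOc(100);
  fREe(p);    /* case 3 or 4: merged into top, or consolidated and binned */
  fREe(0);    /* case 1: no effect */
}
#endif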
2284
2285
2286#if __STD_C
2287void fREe(Void_t* mem)
2288#else
2289void fREe(mem) Void_t* mem;
2290#endif
2291{
2292  mchunkptr p;         /* chunk corresponding to mem */
2293  INTERNAL_SIZE_T hd;  /* its head field */
2294  INTERNAL_SIZE_T sz;  /* its size */
2295  int       idx;       /* its bin index */
2296  mchunkptr next;      /* next contiguous chunk */
2297  INTERNAL_SIZE_T nextsz; /* its size */
2298  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2299  mchunkptr bck;       /* misc temp for linking */
2300  mchunkptr fwd;       /* misc temp for linking */
2301  int       islr;      /* track whether merging with last_remainder */
2302
2303  if (mem == 0)                              /* free(0) has no effect */
2304    return;
2305
2306  p = mem2chunk(mem);
2307  hd = p->size;
2308
2309#if HAVE_MMAP
2310  if (hd & IS_MMAPPED)                       /* release mmapped memory. */
2311  {
2312    munmap_chunk(p);
2313    return;
2314  }
2315#endif
2316 
2317  check_inuse_chunk(p);
2318 
2319  sz = hd & ~PREV_INUSE;
2320  next = chunk_at_offset(p, sz);
2321  nextsz = chunksize(next);
2322 
2323  if (next == top)                            /* merge with top */
2324  {
2325    sz += nextsz;
2326
2327    if (!(hd & PREV_INUSE))                    /* consolidate backward */
2328    {
2329      prevsz = p->prev_size;
2330      p = chunk_at_offset(p, -prevsz);
2331      sz += prevsz;
2332      unlink(p, bck, fwd);
2333    }
2334
2335    set_head(p, sz | PREV_INUSE);
2336    top = p;
2337    if ((unsigned long)(sz) >= (unsigned long)trim_threshold) 
2338      malloc_trim(top_pad); 
2339    return;
2340  }
2341
2342  set_head(next, nextsz);                    /* clear inuse bit */
2343
2344  islr = 0;
2345
2346  if (!(hd & PREV_INUSE))                    /* consolidate backward */
2347  {
2348    prevsz = p->prev_size;
2349    p = chunk_at_offset(p, -prevsz);
2350    sz += prevsz;
2351   
2352    if (p->fd == last_remainder)             /* keep as last_remainder */
2353      islr = 1;
2354    else
2355      unlink(p, bck, fwd);
2356  }
2357 
2358  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
2359  {
2360    sz += nextsz;
2361   
2362    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
2363    {
2364      islr = 1;
2365      link_last_remainder(p);   
2366    }
2367    else
2368      unlink(next, bck, fwd);
2369  }
2370
2371
2372  set_head(p, sz | PREV_INUSE);
2373  set_foot(p, sz);
2374  if (!islr)
2375    frontlink(p, sz, idx, bck, fwd); 
2376}
2377
2378
2379
2380
2381
2382/*
2383
2384  Realloc algorithm:
2385
2386    Chunks that were obtained via mmap cannot be extended or shrunk
2387    unless HAVE_MREMAP is defined, in which case mremap is used.
2388    Otherwise, if their reallocation is for additional space, they are
2389    copied.  If for less, they are just left alone.
2390
2391    Otherwise, if the reallocation is for additional space, and the
2392    chunk can be extended, it is, else a malloc-copy-free sequence is
2393    taken.  There are several different ways that a chunk could be
2394    extended. All are tried:
2395
2396       * Extending forward into following adjacent free chunk.
2397       * Shifting backwards, joining preceding adjacent space
2398       * Both shifting backwards and extending forward.
2399       * Extending into newly sbrked space
2400
2401    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2402    size argument of zero (re)allocates a minimum-sized chunk.
2403
2404    If the reallocation is for less space, and the new request is for
2405    a `small' (<512 bytes) size, then the newly unused space is lopped
2406    off and freed.
2407
2408    The old unix realloc convention of allowing the last-free'd chunk
2409    to be used as an argument to realloc is no longer supported.
2410    I don't know of any programs still relying on this feature,
2411    and supporting it would also make too many other incorrect
2412    usages of realloc appear sensible.
2413
2414
2415*/
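
/*
  A minimal usage sketch of the failure behavior above: keep the old
  pointer until realloc succeeds, since a null return propagates failure
  without disturbing the original chunk. Illustrative only, not compiled
  with the allocator:
*/
#if 0
static Void_t* grow_or_keep(Void_t* old, size_t new_size)
{
  Void_t* p = rEALLOc(old, new_size);
  if (p == 0)
    return old;   /* failure propagated; `old' is still valid */
  return p;       /* may or may not equal `old' */
}
#endif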
2416
2417
2418#if __STD_C
2419Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2420#else
2421Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2422#endif
2423{
2424  INTERNAL_SIZE_T    nb;      /* padded request size */
2425
2426  mchunkptr oldp;             /* chunk corresponding to oldmem */
2427  INTERNAL_SIZE_T    oldsize; /* its size */
2428
2429  mchunkptr newp;             /* chunk to return */
2430  INTERNAL_SIZE_T    newsize; /* its size */
2431  Void_t*   newmem;           /* corresponding user mem */
2432
2433  mchunkptr next;             /* next contiguous chunk after oldp */
2434  INTERNAL_SIZE_T  nextsize;  /* its size */
2435
2436  mchunkptr prev;             /* previous contiguous chunk before oldp */
2437  INTERNAL_SIZE_T  prevsize;  /* its size */
2438
2439  mchunkptr remainder;        /* holds split off extra space from newp */
2440  INTERNAL_SIZE_T  remainder_size;   /* its size */
2441
2442  mchunkptr bck;              /* misc temp for linking */
2443  mchunkptr fwd;              /* misc temp for linking */
2444
2445#ifdef REALLOC_ZERO_BYTES_FREES
2446  if (bytes == 0) { fREe(oldmem); return 0; }
2447#endif
2448
2449
2450  /* realloc of null is supposed to be same as malloc */
2451  if (oldmem == 0) return mALLOc(bytes);
2452
2453  newp    = oldp    = mem2chunk(oldmem);
2454  newsize = oldsize = chunksize(oldp);
2455
2456
2457  nb = request2size(bytes);
2458
2459#if HAVE_MMAP
2460  if (chunk_is_mmapped(oldp)) 
2461  {
2462#if HAVE_MREMAP
2463    newp = mremap_chunk(oldp, nb);
2464    if(newp) return chunk2mem(newp);
2465#endif
2466    /* Note the extra SIZE_SZ overhead. */
2467    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2468    /* Must alloc, copy, free. */
2469    newmem = mALLOc(bytes);
2470    if (newmem == 0) return 0; /* propagate failure */
2471    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2472    munmap_chunk(oldp);
2473    return newmem;
2474  }
2475#endif
2476
2477  check_inuse_chunk(oldp);
2478
2479  if ((long)(oldsize) < (long)(nb)) 
2480  {
2481
2482    /* Try expanding forward */
2483
2484    next = chunk_at_offset(oldp, oldsize);
2485    if (next == top || !inuse(next)) 
2486    {
2487      nextsize = chunksize(next);
2488
2489      /* Forward into top only if a remainder */
2490      if (next == top)
2491      {
2492        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2493        {
2494          newsize += nextsize;
2495          top = chunk_at_offset(oldp, nb);
2496          set_head(top, (newsize - nb) | PREV_INUSE);
2497          set_head_size(oldp, nb);
2498          return chunk2mem(oldp);
2499        }
2500      }
2501
2502      /* Forward into next chunk */
2503      else if (((long)(nextsize + newsize) >= (long)(nb)))
2504      { 
2505        unlink(next, bck, fwd);
2506        newsize  += nextsize;
2507        goto split;
2508      }
2509    }
2510    else
2511    {
2512      next = 0;
2513      nextsize = 0;
2514    }
2515
2516    /* Try shifting backwards. */
2517
2518    if (!prev_inuse(oldp))
2519    {
2520      prev = prev_chunk(oldp);
2521      prevsize = chunksize(prev);
2522
2523      /* try forward + backward first to save a later consolidation */
2524
2525      if (next != 0)
2526      {
2527        /* into top */
2528        if (next == top)
2529        {
2530          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2531          {
2532            unlink(prev, bck, fwd);
2533            newp = prev;
2534            newsize += prevsize + nextsize;
2535            newmem = chunk2mem(newp);
2536            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2537            top = chunk_at_offset(newp, nb);
2538            set_head(top, (newsize - nb) | PREV_INUSE);
2539            set_head_size(newp, nb);
2540            return newmem;
2541          }
2542        }
2543
2544        /* into next chunk */
2545        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2546        {
2547          unlink(next, bck, fwd);
2548          unlink(prev, bck, fwd);
2549          newp = prev;
2550          newsize += nextsize + prevsize;
2551          newmem = chunk2mem(newp);
2552          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2553          goto split;
2554        }
2555      }
2556     
2557      /* backward only */
2558      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb) 
2559      {
2560        unlink(prev, bck, fwd);
2561        newp = prev;
2562        newsize += prevsize;
2563        newmem = chunk2mem(newp);
2564        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2565        goto split;
2566      }
2567    }
2568
2569    /* Must allocate */
2570
2571    newmem = mALLOc (bytes);
2572
2573    if (newmem == 0)  /* propagate failure */
2574      return 0; 
2575
2576    /* Avoid copy if newp is next chunk after oldp. */
2577    /* (This can only happen when new chunk is sbrk'ed.) */
2578
2579    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp)) 
2580    {
2581      newsize += chunksize(newp);
2582      newp = oldp;
2583      goto split;
2584    }
2585
2586    /* Otherwise copy, free, and exit */
2587    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2588    fREe(oldmem);
2589    return newmem;
2590  }
2591
2592
2593 split:  /* split off extra room in old or expanded chunk */
2594
2595  if (newsize - nb >= MINSIZE) /* split off remainder */
2596  {
2597    remainder = chunk_at_offset(newp, nb);
2598    remainder_size = newsize - nb;
2599    set_head_size(newp, nb);
2600    set_head(remainder, remainder_size | PREV_INUSE);
2601    set_inuse_bit_at_offset(remainder, remainder_size);
2602    fREe(chunk2mem(remainder)); /* let free() deal with it */
2603  }
2604  else
2605  {
2606    set_head_size(newp, newsize);
2607    set_inuse_bit_at_offset(newp, newsize);
2608  }
2609
2610  check_inuse_chunk(newp);
2611  return chunk2mem(newp);
2612}
2613
2614
2615
2616
2617/*
2618
2619  memalign algorithm:
2620
2621    memalign requests more than enough space from malloc, finds a spot
2622    within that chunk that meets the alignment request, and then
2623    possibly frees the leading and trailing space.
2624
2625    The alignment argument must be a power of two. This property is not
2626    checked by memalign, so misuse may result in random runtime errors.
2627
2628    8-byte alignment is guaranteed by normal malloc calls, so don't
2629    bother calling memalign with an argument of 8 or less.
2630
2631    Overreliance on memalign is a sure way to fragment space.
2632
2633*/
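
/*
  A minimal usage sketch, using 4096 as an example power-of-two
  alignment; illustrative only, not compiled with the allocator:
*/
#if 0
static void memalign_example()
{
  Void_t* p = mEMALIGn(4096, 10000);
  if (p != 0)
  {
    assert(((unsigned long)p % 4096) == 0);
    fREe(p);
  }
}
#endif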
2634
2635
2636#if __STD_C
2637Void_t* mEMALIGn(size_t alignment, size_t bytes)
2638#else
2639Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2640#endif
2641{
2642  INTERNAL_SIZE_T    nb;      /* padded  request size */
2643  char*     m;                /* memory returned by malloc call */
2644  mchunkptr p;                /* corresponding chunk */
2645  char*     brk;              /* alignment point within p */
2646  mchunkptr newp;             /* chunk to return */
2647  INTERNAL_SIZE_T  newsize;   /* its size */
2648  INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
2649  mchunkptr remainder;        /* spare room at end to split off */
2650  long      remainder_size;   /* its size */
2651
2652  /* If the caller needs less alignment than we give anyway, just relay to malloc */
2653
2654  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2655
2656  /* Otherwise, ensure that it is at least a minimum chunk size */
2657 
2658  if (alignment <  MINSIZE) alignment = MINSIZE;
2659
2660  /* Call malloc with worst case padding to hit alignment. */
2661
2662  nb = request2size(bytes);
2663  m  = (char*)(mALLOc(nb + alignment + MINSIZE));
2664
2665  if (m == 0) return 0; /* propagate failure */
2666
2667  p = mem2chunk(m);
2668
2669  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2670  {
2671#if HAVE_MMAP
2672    if(chunk_is_mmapped(p))
2673      return chunk2mem(p); /* nothing more to do */
2674#endif
2675  }
2676  else /* misaligned */
2677  {
2678    /*
2679      Find an aligned spot inside chunk.
2680      Since we need to give back leading space in a chunk of at
2681      least MINSIZE, if the first calculation places us at
2682      a spot with less than MINSIZE leader, we can move to the
2683      next aligned spot -- we've allocated enough total room so that
2684      this is always possible.
2685    */
2686
2687    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
2688    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2689
2690    newp = (mchunkptr)brk;
2691    leadsize = brk - (char*)(p);
2692    newsize = chunksize(p) - leadsize;
2693
2694#if HAVE_MMAP
2695    if(chunk_is_mmapped(p)) 
2696    {
2697      newp->prev_size = p->prev_size + leadsize;
2698      set_head(newp, newsize|IS_MMAPPED);
2699      return chunk2mem(newp);
2700    }
2701#endif
2702
2703    /* give back leader, use the rest */
2704
2705    set_head(newp, newsize | PREV_INUSE);
2706    set_inuse_bit_at_offset(newp, newsize);
2707    set_head_size(p, leadsize);
2708    fREe(chunk2mem(p));
2709    p = newp;
2710
2711    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2712  }
2713
2714  /* Also give back spare room at the end */
2715
2716  remainder_size = chunksize(p) - nb;
2717
2718  if (remainder_size >= (long)MINSIZE)
2719  {
2720    remainder = chunk_at_offset(p, nb);
2721    set_head(remainder, remainder_size | PREV_INUSE);
2722    set_head_size(p, nb);
2723    fREe(chunk2mem(remainder));
2724  }
2725
2726  check_inuse_chunk(p);
2727  return chunk2mem(p);
2728
2729}
2730
2731
2732
2733
2734/*
2735    valloc just invokes memalign with alignment argument equal
2736    to the page size of the system (or as near to this as can
2737    be figured out from all the includes/defines above.)
2738*/
2739
2740#if __STD_C
2741Void_t* vALLOc(size_t bytes)
2742#else
2743Void_t* vALLOc(bytes) size_t bytes;
2744#endif
2745{
2746  return mEMALIGn (malloc_getpagesize, bytes);
2747}
2748
2749/*
2750  pvalloc just invokes valloc for the nearest pagesize
2751  that will accommodate the request.
2752*/
2753
2754
2755#if __STD_C
2756Void_t* pvALLOc(size_t bytes)
2757#else
2758Void_t* pvALLOc(bytes) size_t bytes;
2759#endif
2760{
2761  size_t pagesize = malloc_getpagesize;
2762  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2763}
2764
2765/*
2766
2767  calloc calls malloc, then zeroes out the allocated chunk.
2768
2769*/
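
/*
  A minimal usage sketch; illustrative only, not compiled with the
  allocator:
*/
#if 0
static void calloc_example()
{
  int  i;
  int* a = (int*)cALLOc(10, sizeof(int));
  if (a != 0)
  {
    for (i = 0; i < 10; ++i)
      assert(a[i] == 0);    /* every location zeroed */
    fREe(a);
  }
}
#endif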
2770
2771#if __STD_C
2772Void_t* cALLOc(size_t n, size_t elem_size)
2773#else
2774Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2775#endif
2776{
2777  mchunkptr p;
2778  INTERNAL_SIZE_T csz;
2779
2780  INTERNAL_SIZE_T sz = n * elem_size;  /* note: n * elem_size is not checked for overflow */
2781
2782  /* check if malloc_extend_top was called, in which case we don't need to clear */
2783#if MORECORE_CLEARS
2784  mchunkptr oldtop = top;
2785  INTERNAL_SIZE_T oldtopsize = chunksize(top);
2786#endif
2787  Void_t* mem = mALLOc (sz);
2788
2789  if (mem == 0) 
2790    return 0;
2791  else
2792  {
2793    p = mem2chunk(mem);
2794
2795    /* Two optional cases in which clearing is not necessary */
2796
2797
2798#if HAVE_MMAP
2799    if (chunk_is_mmapped(p)) return mem;
2800#endif
2801
2802    csz = chunksize(p);
2803
2804#if MORECORE_CLEARS
2805    if (p == oldtop && csz > oldtopsize) 
2806    {
2807      /* clear only the bytes from non-freshly-sbrked memory */
2808      csz = oldtopsize;
2809    }
2810#endif
2811
2812    MALLOC_ZERO(mem, csz - SIZE_SZ);
2813    return mem;
2814  }
2815}
2816
2817/*
2818 
2819  cfree just calls free. It is needed/defined on some systems
2820  that pair it with calloc, presumably for odd historical reasons.
2821
2822*/
2823
2824#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2825#if __STD_C
2826void cfree(Void_t *mem)
2827#else
2828void cfree(mem) Void_t *mem;
2829#endif
2830{
2831  free(mem);
2832}
2833#endif
2834
2835
2836
2837/*
2838
2839    Malloc_trim gives memory back to the system (via negative
2840    arguments to sbrk) if there is unused memory at the `high' end of
2841    the malloc pool. You can call this after freeing large blocks of
2842    memory to potentially reduce the system-level memory requirements
2843    of a program. However, it is not guaranteed to reduce memory. Under
2844    some allocation patterns, some large free blocks of memory will be
2845    locked between two used chunks, so they cannot be given back to
2846    the system.
2847
2848    The `pad' argument to malloc_trim represents the amount of free
2849    trailing space to leave untrimmed. If this argument is zero,
2850    only the minimum amount of memory to maintain internal data
2851    structures will be left (one page or less). Non-zero arguments
2852    can be supplied to maintain enough trailing space to service
2853    future expected allocations without having to re-obtain memory
2854    from the system.
2855
2856    Malloc_trim returns 1 if it actually released any memory, else 0.
2857
2858*/
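
/*
  A minimal usage sketch: free a large block, then ask the allocator to
  return unused topmost memory to the system. Illustrative only, not
  compiled with the allocator; 64k is an arbitrary example size below
  the default mmap threshold:
*/
#if 0
static void trim_example()
{
  Void_t* big = mALLOc(64 * 1024);
  fREe(big);               /* likely consolidates into top */
  if (malloc_trim(0))      /* leave only the minimal pad */
    fprintf(stderr, "trim released memory to the system\n");
}
#endif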
2859
2860#if __STD_C
2861int malloc_trim(size_t pad)
2862#else
2863int malloc_trim(pad) size_t pad;
2864#endif
2865{
2866  long  top_size;        /* Amount of top-most memory */
2867  long  extra;           /* Amount to release */
2868  char* current_brk;     /* address returned by pre-check sbrk call */
2869  char* new_brk;         /* address returned by negative sbrk call */
2870
2871  unsigned long pagesz = malloc_getpagesize;
2872
2873  top_size = chunksize(top);
2874  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
2875
2876  if (extra < (long)pagesz)  /* Not enough memory to release */
2877    return 0;
2878
2879  else
2880  {
2881    /* Test to make sure no one else called sbrk */
2882    current_brk = (char*)(MORECORE (0));
2883    if (current_brk != (char*)(top) + top_size)
2884      return 0;     /* Apparently we don't own memory; must fail */
2885
2886    else
2887    {
2888      new_brk = (char*)(MORECORE (-extra));
2889     
2890      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
2891      {
2892        /* Try to figure out what we have */
2893        current_brk = (char*)(MORECORE (0));
2894        top_size = current_brk - (char*)top;
2895        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
2896        {
2897          sbrked_mem = current_brk - sbrk_base;
2898          set_head(top, top_size | PREV_INUSE);
2899        }
2900        check_chunk(top);
2901        return 0; 
2902      }
2903
2904      else
2905      {
2906        /* Success. Adjust top accordingly. */
2907        set_head(top, (top_size - extra) | PREV_INUSE);
2908        sbrked_mem -= extra;
2909        check_chunk(top);
2910        return 1;
2911      }
2912    }
2913  }
2914}
2915
2916
2917
2918/*
2919  malloc_usable_size:
2920
2921    This routine tells you how many bytes you can actually use in an
2922    allocated chunk, which may be more than you requested (although
2923    often not). You can use this many bytes without worrying about
2924    overwriting other allocated objects. Not a particularly great
2925    programming practice, but still sometimes useful.
2926
2927*/
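
/*
  A minimal usage sketch; illustrative only, not compiled with the
  allocator:
*/
#if 0
static void usable_size_example()
{
  Void_t* p = mALLOc(100);
  if (p != 0)
  {
    assert(malloc_usable_size(p) >= 100);  /* never less than requested */
    fREe(p);
  }
}
#endif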
2928
2929#if __STD_C
2930size_t malloc_usable_size(Void_t* mem)
2931#else
2932size_t malloc_usable_size(mem) Void_t* mem;
2933#endif
2934{
2935  mchunkptr p;
2936  if (mem == 0)
2937    return 0;
2938  else
2939  {
2940    p = mem2chunk(mem);
2941    if(!chunk_is_mmapped(p))
2942    {
2943      if (!inuse(p)) return 0;
2944      check_inuse_chunk(p);
2945      return chunksize(p) - SIZE_SZ;
2946    }
2947    return chunksize(p) - 2*SIZE_SZ;
2948  }
2949}
2950
2951
2952
2953
2954/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
2955
2956static void malloc_update_mallinfo() 
2957{
2958  int i;
2959  mbinptr b;
2960  mchunkptr p;
2961#if DEBUG
2962  mchunkptr q;
2963#endif
2964
2965  INTERNAL_SIZE_T avail = chunksize(top);
2966  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
2967
2968  for (i = 1; i < NAV; ++i)
2969  {
2970    b = bin_at(i);
2971    for (p = last(b); p != b; p = p->bk) 
2972    {
2973#if DEBUG
2974      check_free_chunk(p);
2975      for (q = next_chunk(p); 
2976           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE; 
2977           q = next_chunk(q))
2978        check_inuse_chunk(q);
2979#endif
2980      avail += chunksize(p);
2981      navail++;
2982    }
2983  }
2984
2985  current_mallinfo.ordblks = navail;
2986  current_mallinfo.uordblks = sbrked_mem - avail;
2987  current_mallinfo.fordblks = avail;
2988  current_mallinfo.hblks = n_mmaps;
2989  current_mallinfo.hblkhd = mmapped_mem;
2990  current_mallinfo.keepcost = chunksize(top);
2991
2992}
2993
2994
2995
2996/*
2997
2998  malloc_stats:
2999
3000    Prints on stderr the amount of space obtained from the system (both
3001    via sbrk and mmap), the maximum amount (which may be more than
3002    current if malloc_trim and/or munmap got called), the maximum
3003    number of simultaneous mmap regions used, and the current number
3004    of bytes allocated via malloc (or realloc, etc) but not yet
3005    freed. (Note that this is the number of bytes allocated, not the
3006    number requested. It will be larger than the number requested
3007    because of alignment and bookkeeping overhead.)
3008
3009*/
3010
3011void malloc_stats()
3012{
3013  malloc_update_mallinfo();
3014  fprintf(stderr, "max system bytes = %10u\n", 
3015          (unsigned int)(max_total_mem));
3016  fprintf(stderr, "system bytes     = %10u\n", 
3017          (unsigned int)(sbrked_mem + mmapped_mem));
3018  fprintf(stderr, "in use bytes     = %10u\n", 
3019          (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3020#if HAVE_MMAP
3021  fprintf(stderr, "max mmap regions = %10u\n", 
3022          (unsigned int)max_n_mmaps);
3023#endif
3024}
3025
3026/*
3027  mallinfo returns a copy of updated current mallinfo.
3028*/
3029
3030struct mallinfo mALLINFo()
3031{
3032  malloc_update_mallinfo();
3033  return current_mallinfo;
3034}
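
/*
  A minimal sketch of reading the returned statistics (only the fields
  maintained by malloc_update_mallinfo above are meaningful here);
  illustrative only, not compiled with the allocator:
*/
#if 0
static void mallinfo_example()
{
  struct mallinfo mi = mALLINFo();
  fprintf(stderr, "arena %d, in use %d, free %d, trimmable %d\n",
          mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
}
#endif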
3035
3036
3037
3038
3039/*
3040  mallopt:
3041
3042    mallopt is the general SVID/XPG interface to tunable parameters.
3043    The format is to provide a (parameter-number, parameter-value) pair.
3044    mallopt then sets the corresponding parameter to the argument
3045    value if it can (i.e., so long as the value is meaningful),
3046    and returns 1 if successful, else 0.
3047
3048    See descriptions of tunable parameters above.
3049
3050*/
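
/*
  A minimal usage sketch, using the M_* parameter numbers defined
  earlier in this file; 256k is an arbitrary example value. Illustrative
  only, not compiled with the allocator:
*/
#if 0
static void mallopt_example()
{
  if (!mALLOPt(M_TRIM_THRESHOLD, 256 * 1024))
    fprintf(stderr, "mallopt: could not set trim threshold\n");
}
#endif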
3051
3052#if __STD_C
3053int mALLOPt(int param_number, int value)
3054#else
3055int mALLOPt(param_number, value) int param_number; int value;
3056#endif
3057{
3058  switch(param_number) 
3059  {
3060    case M_TRIM_THRESHOLD:
3061      trim_threshold = value; return 1; 
3062    case M_TOP_PAD:
3063      top_pad = value; return 1; 
3064    case M_MMAP_THRESHOLD:
3065      mmap_threshold = value; return 1;
3066    case M_MMAP_MAX:
3067#if HAVE_MMAP
3068      n_mmaps_max = value; return 1;
3069#else
3070      if (value != 0) return 0; else  n_mmaps_max = value; return 1;
3071#endif
3072
3073    default:
3074      return 0;
3075  }
3076}
3077
3078/*
3079
3080History:
3081
3082    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
3083      * Added pvalloc, as recommended by H.J. Liu
3084      * Added 64bit pointer support mainly from Wolfram Gloger
3085      * Added anonymously donated WIN32 sbrk emulation
3086      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3087      * malloc_extend_top: fix mask error that caused wastage after
3088        foreign sbrks
3089      * Add linux mremap support code from HJ Liu
3090   
3091    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
3092      * Integrated most documentation with the code.
3093      * Add support for mmap, with help from
3094        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3095      * Use last_remainder in more cases.
3096      * Pack bins using idea from  colin@nyx10.cs.du.edu
3097      * Use ordered bins instead of best-fit threshold
3098      * Eliminate block-local decls to simplify tracing and debugging.
3099      * Support another case of realloc via move into top
3100      * Fix error occurring when initial sbrk_base not word-aligned.
3101      * Rely on page size for units instead of SBRK_UNIT to
3102        avoid surprises about sbrk alignment conventions.
3103      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3104        (raymond@es.ele.tue.nl) for the suggestion.
3105      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3106      * More precautions for cases where other routines call sbrk,
3107        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3108      * Added macros etc., allowing use in linux libc from
3109        H.J. Lu (hjl@gnu.ai.mit.edu)
3110      * Inverted this history list
3111
3112    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
3113      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3114      * Removed all preallocation code since under current scheme
3115        the work required to undo bad preallocations exceeds
3116        the work saved in good cases for most test programs.
3117      * No longer use return list or unconsolidated bins since
3118        no scheme using them consistently outperforms those that don't
3119        given above changes.
3120      * Use best fit for very large chunks to prevent some worst-cases.
3121      * Added some support for debugging
3122
3123    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
3124      * Removed footers when chunks are in use. Thanks to
3125        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3126
3127    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
3128      * Added malloc_trim, with help from Wolfram Gloger
3129        (wmglo@Dent.MED.Uni-Muenchen.DE).
3130
3131    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
3132
3133    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
3134      * realloc: try to expand in both directions
3135      * malloc: swap order of clean-bin strategy;
3136      * realloc: only conditionally expand backwards
3137      * Try not to scavenge used bins
3138      * Use bin counts as a guide to preallocation
3139      * Occasionally bin return list chunks in first scan
3140      * Add a few optimizations from colin@nyx10.cs.du.edu
3141
3142    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
3143      * faster bin computation & slightly different binning
3144      * merged all consolidations to one part of malloc proper
3145         (eliminating old malloc_find_space & malloc_clean_bin)
3146      * Scan 2 returns chunks (not just 1)
3147      * Propagate failure in realloc if malloc returns 0
3148      * Add stuff to allow compilation on non-ANSI compilers
3149          from kpv@research.att.com
3150     
3151    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
3152      * removed potential for odd address access in prev_chunk
3153      * removed dependency on getpagesize.h
3154      * misc cosmetics and a bit more internal documentation
3155      * anticosmetics: mangled names in macros to evade debugger strangeness
3156      * tested on sparc, hp-700, dec-mips, rs6000
3157          with gcc & native cc (hp, dec only) allowing
3158          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3159
3160    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
3161      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3162         structure of the old version, but most details differ.)
3163
3164*/
3165
3166