source: SVN/cambria/redboot/packages/services/memalloc/common/current/doc/dlmalloc/dlmalloc-merged.c @ 1

/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://g.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc.  Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has been reported
       to work reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
       ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.
* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  SEPARATE_OBJECTS         (default: NOT defined)
     Define this to compile into separate .o files.  You must then
     compile malloc.c several times, defining a DEFINE_* macro each
     time.  The list of DEFINE_* macros appears below.
  MALLOC_LOCK              (default: NOT defined)
  MALLOC_UNLOCK            (default: NOT defined)
     Define these to C expressions which are run to lock and unlock
     the malloc data structures.  Calls may be nested; that is,
     MALLOC_LOCK may be called more than once before the corresponding
     MALLOC_UNLOCK calls.  MALLOC_LOCK must avoid waiting for a lock
     that it already holds.
  MALLOC_ALIGNMENT          (default: NOT defined)
     Define this to 16 if you need 16-byte alignment instead of the
     normal default of 8-byte alignment.
  SIZE_T_SMALLER_THAN_LONG (default: NOT defined)
     Define this when the platform you are compiling for has
     sizeof(long) > sizeof(size_t).
     The option causes some extra code to be generated to handle operations
     that use size_t operands and have long results.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY               (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                 (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize        (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T           (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB      (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  INTERNAL_NEWLIB           (default: NOT defined)
     Defined only when compiled as part of the Cygnus newlib
     distribution.
  WIN32                     (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H            (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H         (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                  (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE          (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS           (default: 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX             (default: undefined)
     Prefix all public routines with the string 'dl'.  Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.

  (An illustrative example configuration using several of these
  options follows this comment block.)

*/
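
/*
  Illustrative only -- NOT part of the original source: a sketch of how
  an embedded port might set a few of the options described above.  The
  names my_heap_lock and my_heap_unlock are hypothetical; since
  MALLOC_LOCK/MALLOC_UNLOCK calls may nest, they would have to be
  backed by a recursive (counting) mutex.
*/
#if 0 /* example configuration, deliberately disabled */
extern void my_heap_lock(void);        /* hypothetical recursive lock  */
extern void my_heap_unlock(void);
#define MALLOC_LOCK            my_heap_lock()
#define MALLOC_UNLOCK          my_heap_unlock()
#define MALLOC_ALIGNMENT       16      /* 16-byte instead of 8-byte    */
#define DEFAULT_TRIM_THRESHOLD (64L * 1024L)
#endif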




/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*

  Special defines for Cygnus newlib distribution.

 */

#ifdef INTERNAL_NEWLIB

#include <sys/config.h>

/*
  In newlib, all the publicly visible routines take a reentrancy
  pointer.  We don't currently do anything much with it, but we do
  pass it to the lock routine.
 */

#include <reent.h>

#define POINTER_UINT unsigned _POINTER_INT
#define SEPARATE_OBJECTS
#define HAVE_MMAP 0
#define MORECORE(size) _sbrk_r(reent_ptr, (size))
#define MORECORE_CLEARS 0
#define MALLOC_LOCK __malloc_lock(reent_ptr)
#define MALLOC_UNLOCK __malloc_unlock(reent_ptr)

#ifndef _WIN32
#ifdef SMALL_MEMORY
#define malloc_getpagesize (128)
#else
#define malloc_getpagesize (4096)
#endif
#endif

#if __STD_C
extern void __malloc_lock(struct _reent *);
extern void __malloc_unlock(struct _reent *);
#else
extern void __malloc_lock();
extern void __malloc_unlock();
#endif

#if __STD_C
#define RARG struct _reent *reent_ptr,
#define RONEARG struct _reent *reent_ptr
#else
#define RARG reent_ptr
#define RONEARG reent_ptr
#define RDECL struct _reent *reent_ptr;
#endif

#define RCALL reent_ptr,
#define RONECALL reent_ptr

#else /* ! INTERNAL_NEWLIB */

#define POINTER_UINT unsigned long
#define RARG
#define RONEARG
#define RDECL
#define RCALL
#define RONECALL

#endif /* ! INTERNAL_NEWLIB */
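
/*
  Illustrative only -- NOT part of the original source: under
  INTERNAL_NEWLIB the R* macros thread the reentrancy pointer through
  every public routine, so a prototype written as

      Void_t* mALLOc(RARG size_t n);

  expands (with __STD_C) to

      Void_t* _malloc_r(struct _reent *reent_ptr, size_t n);

  while in a standalone build RARG expands to nothing and the
  prototype is an ordinary malloc(size_t n).
*/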

/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  SEPARATE_OBJECTS should be defined if you want each function to go
  into a separate .o file.  You must then compile malloc.c once per
  function, defining the appropriate DEFINE_ macro.  See below for the
  list of macros.
 */

#ifndef SEPARATE_OBJECTS
#define DEFINE_MALLOC
#define DEFINE_FREE
#define DEFINE_REALLOC
#define DEFINE_CALLOC
#define DEFINE_CFREE
#define DEFINE_MEMALIGN
#define DEFINE_VALLOC
#define DEFINE_PVALLOC
#define DEFINE_MALLINFO
#define DEFINE_MALLOC_STATS
#define DEFINE_MALLOC_USABLE_SIZE
#define DEFINE_MALLOPT

#define STATIC static
#else
#define STATIC
#endif

/*
   Define MALLOC_LOCK and MALLOC_UNLOCK to C expressions to run to
   lock and unlock the malloc data structures.  MALLOC_LOCK may be
   called recursively.
 */

#ifndef MALLOC_LOCK
#define MALLOC_LOCK
#endif

#ifndef MALLOC_UNLOCK
#define MALLOC_UNLOCK
#endif

/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  The following is needed on implementations where long > size_t.
  The problem is caused because the code performs subtractions of
  size_t values and stores the result in long values.  In the case
  where long > size_t and the first value is actually less than
  the second value, the resultant value is positive.  For example,
  (long)(x - y) where x = 0 and y is 1 ends up being 0x00000000FFFFFFFF,
  which is 2^32 - 1, instead of 0xFFFFFFFFFFFFFFFF.  This is due to the
  fact that assignment from unsigned to signed won't sign extend.
*/

#ifdef SIZE_T_SMALLER_THAN_LONG
#define long_sub_size_t(x, y) ( (x < y) ? -((long)(y - x)) : (x - y) )
#else
#define long_sub_size_t(x, y) ( (long)(x - y) )
#endif
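
/*
  Illustrative only -- NOT part of the original source: the case that
  long_sub_size_t guards against.  With a 32-bit size_t and a 64-bit
  long, subtracting size_t values and then widening does not sign
  extend, so a logically negative difference appears hugely positive.
*/
#if 0
  size_t x = 0, y = 1;
  long wrong = (long)(x - y);          /* 0x00000000FFFFFFFF, not -1 */
  long right = long_sub_size_t(x, y);  /* -1, as intended            */
#endif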

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
// On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
// 'windows.h'
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif

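/*
  Illustrative only -- NOT part of the original source: these macros
  are internal helpers, used by the allocator roughly as sketched below
  (mem/newmem/oldmem and csz/oldsize stand for the usual calloc/realloc
  locals; sizes include chunk overhead).
*/
#if 0
  MALLOC_ZERO(mem, csz - SIZE_SZ);                /* calloc clears    */
  MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ); /* realloc moves    */
#endif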

/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */
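
/*
  Illustrative only -- NOT part of the original source: when HAVE_MMAP
  is non-zero, very large requests are obtained as anonymous private
  mappings roughly like this, and handed back with munmap on free.
  (Systems without MAP_ANONYMOUS map /dev/zero instead.)
*/
#if 0
  size_t size = 256 * 1024;   /* a request above the mmap threshold   */
  char* cp = (char*)mmap(0, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (cp != (char*)-1)
    { /* carve a chunk out of cp; later: munmap(cp, size); */ }
#endif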

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
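
/*
  Illustrative only -- NOT part of the original source: how
  malloc_getpagesize is typically used, e.g. pvalloc-style rounding of
  a request up to a whole number of pages (assuming, as this malloc
  does, that the page size is a power of two).
*/
#if 0
  size_t n       = 10000;                      /* arbitrary request   */
  size_t pagesz  = malloc_getpagesize;
  size_t rounded = (n + pagesz - 1) & ~(pagesz - 1);
#endif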



/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4
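
/*
  Illustrative only -- NOT part of the original source: typical use of
  the tunables and statistics interfaces declared above, written for a
  standalone (non-newlib, non-dl-prefixed) build.
*/
#if 0
  struct mallinfo mi;
  mallopt(M_TRIM_THRESHOLD, 64 * 1024);   /* trim when 64K free on top */
  mallopt(M_MMAP_THRESHOLD, 256 * 1024);  /* mmap only huge requests   */
  mi = mallinfo();
  printf("arena=%d in-use=%d free=%d\n", mi.arena, mi.uordblks, mi.fordblks);
#endif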



#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128L * 1024L)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
      to keep before releasing via malloc_trim in free().

      Automatic trimming is mainly useful in long-lived programs.
      Because trimming via sbrk can be slow on some systems, and can
      sometimes be wasteful (in cases where programs immediately
      afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing.

      The trim threshold and the mmap control parameters (see below)
      can be traded off with one another. Trimming and mmapping are
      two different ways of releasing unused memory back to the
      system. Between these two, it is often possible to keep
      system-level demands of a long-lived program down to a bare
      minimum. For example, in one test suite of sessions measuring
      the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.

      If you are using this malloc in a long-lived program, it should
      pay to experiment with these values.  As a rough guide, you
      might set it to a value close to the average size of a process
      (program) running on your system.  Releasing this much memory
      would allow such a process to run in memory.  Generally, it's
      worth it to tune for trimming rather than memory mapping when a
      program undergoes phases where several large chunks are
      allocated and released in ways that can reuse each other's
      storage, perhaps mixed with phases where there are no such
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set it to (unsigned long)(-1).


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.  However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

         1. The space cannot be reclaimed, consolidated, and then
            used to service later requests, as happens with normal chunks.
         2. It can lead to more wastage because of mmap page alignment
            requirements.
         3. It causes malloc performance to be more dependent on host
            system memory management support routines which may vary in
            implementation quality and may impose arbitrary
            limitations. Generally, servicing a request via normal
            malloc steps is faster than going through a system's mmap.

      Altogether, these considerations should lead you to use mmap
      only for relatively large requests.


*/



#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

         1. Some systems have a limited number of internal tables for
            use by mmap.
         2. In most systems, overreliance on mmap can degrade overall
            performance.
         3. If a program allocates many large regions, it is probably
            better off using normal sbrk-based allocation routines that
            can reclaim and reallocate normal heap memory. Using a
            small value allows transition into this mode after the
            first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/




/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
      Useful to quickly avoid procedure declaration conflicts and linker
      symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */



/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#ifndef INTERNAL_NEWLIB
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef INTERNAL_NEWLIB

#define cALLOc          _calloc_r
#define fREe            _free_r
#define mALLOc          _malloc_r
#define mEMALIGn        _memalign_r
#define rEALLOc         _realloc_r
#define vALLOc          _valloc_r
#define pvALLOc         _pvalloc_r
#define mALLINFo        _mallinfo_r
#define mALLOPt         _mallopt_r

#define malloc_stats                    _malloc_stats_r
#define malloc_trim                     _malloc_trim_r
#define malloc_usable_size              _malloc_usable_size_r

#define malloc_update_mallinfo          __malloc_update_mallinfo

#define malloc_av_                      __malloc_av_
#define malloc_current_mallinfo         __malloc_current_mallinfo
#define malloc_max_sbrked_mem           __malloc_max_sbrked_mem
#define malloc_max_total_mem            __malloc_max_total_mem
#define malloc_sbrk_base                __malloc_sbrk_base
#define malloc_top_pad                  __malloc_top_pad
#define malloc_trim_threshold           __malloc_trim_threshold

#else /* ! INTERNAL_NEWLIB */

#ifdef USE_DL_PREFIX
#define cALLOc          dlcalloc
#define fREe            dlfree
#define mALLOc          dlmalloc
#define mEMALIGn        dlmemalign
#define rEALLOc         dlrealloc
#define vALLOc          dlvalloc
#define pvALLOc         dlpvalloc
#define mALLINFo        dlmallinfo
#define mALLOPt         dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt
#endif /* USE_DL_PREFIX */

#endif /* ! INTERNAL_NEWLIB */
#endif

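/*
  Illustrative only -- NOT part of the original source: with
  USE_DL_PREFIX defined, the public entry points become dlmalloc,
  dlfree, etc., so this allocator can be linked alongside the system
  one.
*/
#if 0
  void* p = dlmalloc(100);
  p = dlrealloc(p, 200);
  dlfree(p);
#endif
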
/* Public routines */

#if __STD_C

Void_t* mALLOc(RARG size_t);
void    fREe(RARG Void_t*);
Void_t* rEALLOc(RARG Void_t*, size_t);
Void_t* mEMALIGn(RARG size_t, size_t);
Void_t* vALLOc(RARG size_t);
Void_t* pvALLOc(RARG size_t);
Void_t* cALLOc(RARG size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(RARG size_t);
size_t  malloc_usable_size(RARG Void_t*);
void    malloc_stats(RONEARG);
int     mALLOPt(RARG int, int);
struct mallinfo mALLINFo(RONEARG);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */


/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
        GmListElement* next;
        void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
        GmListElement* this;
        this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
        assert (this);
        if (this)
        {
                this->base = bas;
                this->next = head;
                head = this;
        }
        return this;
}

void gcleanup ()
{
        BOOL rval;
        assert ( (head == NULL) || (head->base == (void*)gAddressBase));
        if (gAddressBase && (gNextAddress - gAddressBase))
        {
                rval = VirtualFree ((void*)gAddressBase,
                                    gNextAddress - gAddressBase,
                                    MEM_DECOMMIT);
                assert (rval);
        }
        while (head)
        {
                GmListElement* next = head->next;
                rval = VirtualFree (head->base, 0, MEM_RELEASE);
                assert (rval);
                LocalFree (head);
                head = next;
        }
}

static
void* findRegion (void* start_address, unsigned long size)
{
        MEMORY_BASIC_INFORMATION info;
        if (size >= TOP_MEMORY) return NULL;

        while ((unsigned long)start_address + size < TOP_MEMORY)
        {
                VirtualQuery (start_address, &info, sizeof (info));
                if ((info.State == MEM_FREE) && (info.RegionSize >= size))
                        return start_address;
                else
                {
                        // Requested region is not available so see if the
                        // next region is available.  Set 'start_address'
                        // to the next region and call 'VirtualQuery()'
                        // again.

                        start_address = (char*)info.BaseAddress + info.RegionSize;

                        // Make sure we start looking for the next region
                        // on the *next* 64K boundary.  Otherwise, even if
                        // the new region is free according to
                        // 'VirtualQuery()', the subsequent call to
                        // 'VirtualAlloc()' (which follows the call to
                        // this routine in 'wsbrk()') will round *down*
                        // the requested address to a 64K boundary which
                        // we already know is an address in the
                        // unavailable region.  Thus, the subsequent call
                        // to 'VirtualAlloc()' will fail and bring us back
                        // here, causing us to go into an infinite loop.

                        start_address =
                                (void *) AlignPage64K((unsigned long) start_address);
                }
        }
        return NULL;
}


void* wsbrk (long size)
{
        void* tmp;
        if (size > 0)
        {
                if (gAddressBase == 0)
                {
                        gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
                        gNextAddress = gAddressBase =
                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
                                                            MEM_RESERVE, PAGE_NOACCESS);
                }
                else if (AlignPage (gNextAddress + size) >
                         (gAddressBase + gAllocatedSize))
                {
                        long new_size = max (NEXT_SIZE, AlignPage (size));
                        void* new_address = (void*)(gAddressBase+gAllocatedSize);
                        do
                        {
                                new_address = findRegion (new_address, new_size);

                                if (new_address == 0)
                                        return (void*)-1;

                                gAddressBase = gNextAddress =
                                        (unsigned int)VirtualAlloc (new_address, new_size,
                                                                    MEM_RESERVE, PAGE_NOACCESS);
                                // repeat in case of race condition
                                // The region that we found has been snagged
                                // by another thread
                        }
                        while (gAddressBase == 0);

                        assert (new_address == (void*)gAddressBase);

                        gAllocatedSize = new_size;

                        if (!makeGmListElement ((void*)gAddressBase))
                                return (void*)-1;
                }
                if ((size + gNextAddress) > AlignPage (gNextAddress))
                {
                        void* res;
                        res = VirtualAlloc ((void*)AlignPage (gNextAddress),
                                            (size + gNextAddress -
                                             AlignPage (gNextAddress)),
                                            MEM_COMMIT, PAGE_READWRITE);
                        if (res == 0)
                                return (void*)-1;
                }
                tmp = (void*)gNextAddress;
                gNextAddress = (unsigned int)tmp + size;
                return tmp;
        }
        else if (size < 0)
        {
                unsigned int alignedGoal = AlignPage (gNextAddress + size);
                /* Trim by releasing the virtual memory */
                if (alignedGoal >= gAddressBase)
                {
                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
                                     MEM_DECOMMIT);
                        gNextAddress = gNextAddress + size;
                        return (void*)gNextAddress;
                }
                else
                {
                        VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
                                     MEM_DECOMMIT);
                        gNextAddress = gAddressBase;
                        return (void*)-1;
                }
        }
        else
        {
                return (void*)gNextAddress;
        }
}

#endif


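/*
  Illustrative only -- NOT part of the original source: MORECORE --
  whether the real sbrk or the wsbrk emulation above -- follows sbrk
  conventions: a positive argument grows the arena and returns the old
  break, a negative argument trims it, zero queries the current break,
  and MORECORE_FAILURE ((void*)-1 by default) signals failure.  (Under
  INTERNAL_NEWLIB, MORECORE additionally threads the reentrancy
  pointer through to _sbrk_r.)
*/
#if 0
  void* old_top = MORECORE(0);       /* current end of the arena */
  void* blk     = MORECORE(4096);    /* try to grow by one page  */
  if (blk == (void*)MORECORE_FAILURE)
    { /* out of memory */ }
#endif
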
/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

1325/*
1326
1327   malloc_chunk details:
1328
1329    (The following includes lightly edited explanations by Colin Plumb.)
1330
1331    Chunks of memory are maintained using a `boundary tag' method as
1332    described in e.g., Knuth or Standish.  (See the paper by Paul
1333    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1334    survey of such techniques.)  Sizes of free chunks are stored both
1335    in the front of each chunk and at the end.  This makes
1336    consolidating fragmented chunks into bigger chunks very fast.  The
1337    size fields also hold bits representing whether chunks are free or
1338    in use.
1339
1340    An allocated chunk looks like this: 
1341
1342
1343    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1344            |             Size of previous chunk, if allocated            | |
1345            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1346            |             Size of chunk, in bytes                         |P|
1347      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1348            |             User data starts here...                          .
1349            .                                                               .
1350            .             (malloc_usable_space() bytes)                     .
1351            .                                                               |
1352nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1353            |             Size of chunk                                     |
1354            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1355
1356
1357    Where "chunk" is the front of the chunk for the purpose of most of
1358    the malloc code, but "mem" is the pointer that is returned to the
1359    user.  "Nextchunk" is the beginning of the next contiguous chunk.
1360
1361    Chunks always begin on even word boundries, so the mem portion
1362    (which is returned to the user) is also on an even word boundary, and
1363    thus double-word aligned.
1364
1365    Free chunks are stored in circular doubly-linked lists, and look like this:
1366
1367    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1368            |             Size of previous chunk                            |
1369            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1370    `head:' |             Size of chunk, in bytes                         |P|
1371      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1372            |             Forward pointer to next chunk in list             |
1373            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1374            |             Back pointer to previous chunk in list            |
1375            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1376            |             Unused space (may be 0 bytes long)                .
1377            .                                                               .
1378            .                                                               |
1379nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1380    `foot:' |             Size of chunk, in bytes                           |
1381            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1382
1383    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1384    chunk size (which is always a multiple of two words), is an in-use
1385    bit for the *previous* chunk.  If that bit is *clear*, then the
1386    word before the current chunk size contains the previous chunk
1387    size, and can be used to find the front of the previous chunk.
1388    (The very first chunk allocated always has this bit set,
1389    preventing access to non-existent (or non-owned) memory.)
1390
1391    Note that the `foot' of the current chunk is actually represented
1392    as the prev_size of the NEXT chunk. (This makes it easier to
1393    deal with alignments etc).
1394
1395    The two exceptions to all this are
1396
1397     1. The special chunk `top', which doesn't bother using the
1398        trailing size field since there is no
1399        next contiguous chunk that would have to index off it. (After
1400        initialization, `top' is forced to always exist.  If it would
1401        become less than MINSIZE bytes long, it is replenished via
1402        malloc_extend_top.)
1403
1404     2. Chunks allocated via mmap, which have the second-lowest-order
1405        bit (IS_MMAPPED) set in their size fields.  Because they are
1406        never merged or traversed from any other chunk, they have no
1407        foot size or inuse information.
1408
1409    Available chunks are kept in any of several places (all declared below):
1410
1411    * `av': An array of chunks serving as bin headers for consolidated
1412       chunks. Each bin is doubly linked.  The bins are approximately
1413       proportionally (log) spaced.  There are a lot of these bins
1414       (128). This may look excessive, but works very well in
1415       practice.  All procedures maintain the invariant that no
1416       consolidated chunk physically borders another one. Chunks in
1417       bins are kept in size order, with ties going to the
1418       approximately least recently used chunk.
1419
1420       The chunks in each bin are maintained in decreasing sorted order by
1421       size.  This is irrelevant for the small bins, which all contain
1422       the same-sized chunks, but facilitates best-fit allocation for
1423       larger chunks. (These lists are just sequential. Keeping them in
1424       order almost never requires enough traversal to warrant using
1425       fancier ordered data structures.)  Chunks of the same size are
1426       linked with the most recently freed at the front, and allocations
1427       are taken from the back.  This results in LRU or FIFO allocation
1428       order, which tends to give each chunk an equal opportunity to be
1429       consolidated with adjacent freed chunks, resulting in larger free
1430       chunks and less fragmentation.
1431
1432    * `top': The top-most available chunk (i.e., the one bordering the
1433       end of available memory) is treated specially. It is never
1434       included in any bin, is used only if no other chunk is
1435       available, and is released back to the system if it is very
1436       large (see M_TRIM_THRESHOLD).
1437
1438    * `last_remainder': A bin holding only the remainder of the
1439       most recently split (non-top) chunk. This bin is checked
1440       before other non-fitting chunks, so as to provide better
1441       locality for runs of sequentially allocated chunks.
1442
1443    *  Implicitly, through the host system's memory mapping tables.
1444       If supported, requests greater than a threshold are usually
1445       serviced via calls to mmap, and then later released via munmap.
1446
1447*/
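/*
  A compiled-out sketch of the boundary-tag arithmetic described above,
  assuming the malloc_chunk layout (prev_size, size, fd, bk) and the
  PREV_INUSE flag in the low-order bit of size; the macros defined
  below implement exactly this stepping:
*/
#if 0
static mchunkptr sketch_next_chunk(mchunkptr p)
{
  /* mask off the PREV_INUSE bit, then step over this chunk */
  return (mchunkptr)((char*)p + (p->size & ~0x1));
}

static mchunkptr sketch_prev_chunk(mchunkptr p)
{
  /* valid only when the previous chunk is free, since only then is
     its size replicated into our prev_size field (the shared foot) */
  return (mchunkptr)((char*)p - p->prev_size);
}
#endif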
1448
1449
1450
1451
1452
1453
1454/*  sizes, alignments */
1455
1456#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
1457#ifndef MALLOC_ALIGNMENT
1458#define MALLOC_ALIGN           8
1459#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
1460#else
1461#define MALLOC_ALIGN           MALLOC_ALIGNMENT
1462#endif
1463#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
1464#define MINSIZE                (sizeof(struct malloc_chunk))
1465
1466/* conversion from malloc headers to user pointers, and back */
1467
1468#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1469#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1470
1471/* pad request bytes into a usable size */
1472
1473#define request2size(req) \
1474 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1475  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? ((MINSIZE + MALLOC_ALIGN_MASK) & ~(MALLOC_ALIGN_MASK)) : \
1476   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
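/*
  Worked values for request2size, assuming 32-bit INTERNAL_SIZE_T
  (SIZE_SZ == 4, MALLOC_ALIGNMENT == 8, MINSIZE == 16); compiled out,
  for illustration only:
*/
#if 0
assert(request2size(1)  == 16);  /* too small: rounded up to MINSIZE       */
assert(request2size(12) == 16);  /* 12 + 4 bytes of overhead fit in 16     */
assert(request2size(13) == 24);  /* 13 + 4 = 17, rounded to an 8-byte unit */
assert(request2size(20) == 24);  /* 20 + 4 = 24, already a multiple of 8   */
#endif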
1477
1478/* Check if m has acceptable alignment */
1479
1480#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1481
1482
1483
1484
1485/*
1486  Physical chunk operations 
1487*/
1488
1489
1490/* size field is or'ed with PREV_INUSE when previous adjacent chunk is in use */
1491
1492#define PREV_INUSE 0x1
1493
1494/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1495
1496#define IS_MMAPPED 0x2
1497
1498/* Bits to mask off when extracting size */
1499
1500#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1501
1502
1503/* Ptr to next physical malloc_chunk. */
1504
1505#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1506
1507/* Ptr to previous physical malloc_chunk */
1508
1509#define prev_chunk(p)\
1510   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1511
1512
1513/* Treat space at ptr + offset as a chunk */
1514
1515#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1516
1517
1518
1519
1520/*
1521  Dealing with use bits
1522*/
1523
1524/* extract p's inuse bit */
1525
1526#define inuse(p)\
1527((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1528
1529/* extract inuse bit of previous chunk */
1530
1531#define prev_inuse(p)  ((p)->size & PREV_INUSE)
1532
1533/* check for mmap()'ed chunk */
1534
1535#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1536
1537/* set/clear chunk as in use without otherwise disturbing */
1538
1539#define set_inuse(p)\
1540((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1541
1542#define clear_inuse(p)\
1543((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1544
1545/* check/set/clear inuse bits in known places */
1546
1547#define inuse_bit_at_offset(p, s)\
1548 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1549
1550#define set_inuse_bit_at_offset(p, s)\
1551 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1552
1553#define clear_inuse_bit_at_offset(p, s)\
1554 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1555
1556
1557
1558
1559/*
1560  Dealing with size fields
1561*/
1562
1563/* Get size, ignoring use bits */
1564
1565#define chunksize(p)          ((p)->size & ~(SIZE_BITS))
1566
1567/* Set size at head, without disturbing its use bit */
1568
1569#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1570
1571/* Set size/use ignoring previous bits in header */
1572
1573#define set_head(p, s)        ((p)->size = (s))
1574
1575/* Set size at footer (only when chunk is not in use) */
1576
1577#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
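/*
  How the head/foot macros cooperate when a chunk is freed: a sketch,
  assuming p is a free chunk of size s about to be placed on a list
  (the real sequence appears in fREe below); compiled out:
*/
#if 0
set_head(p, s | PREV_INUSE);  /* our header keeps our predecessor's bit   */
set_foot(p, s);               /* replicate s into next chunk's prev_size, */
                              /* making backward traversal possible       */
/* the successor's own PREV_INUSE bit must also be cleared, e.g. via
   clear_inuse_bit_at_offset(p, s), so it knows we are free */
#endif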
1578
1579
1580
1581
1582
1583/*
1584   Bins
1585
1586    The bins, `av_' are an array of pairs of pointers serving as the
1587    heads of (initially empty) doubly-linked lists of chunks, laid out
1588    in a way so that each pair can be treated as if it were in a
1589    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1590    and chunks are the same).
1591
1592    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1593    8 bytes apart. Larger bins are approximately logarithmically
1594    spaced. (See the table below.) The `av_' array is never mentioned
1595    directly in the code, but instead via bin access macros.
1596
1597    Bin layout:
1598
1599    64 bins of size       8
1600    32 bins of size      64
1601    16 bins of size     512
1602     8 bins of size    4096
1603     4 bins of size   32768
1604     2 bins of size  262144
1605     1 bin  of size what's left
1606
1607    There is actually a little bit of slop in the numbers in bin_index
1608    for the sake of speed. This makes no difference elsewhere.
1609
1610    The special chunks `top' and `last_remainder' get their own bins,
1611    (this is implemented via yet more trickery with the av_ array),
1612    although `top' is never properly linked to its bin since it is
1613    always handled specially.
1614
1615*/
1616
1617#ifdef SEPARATE_OBJECTS
1618#define av_ malloc_av_
1619#endif
1620
1621#define NAV             128   /* number of bins */
1622
1623typedef struct malloc_chunk* mbinptr;
1624
1625/* access macros */
1626
1627#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1628#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1629#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1630
1631/*
1632   The first 2 bins are never indexed. The corresponding av_ cells are instead
1633   used for bookkeeping. This is not to save space, but to simplify
1634   indexing, maintain locality, and avoid some initialization tests.
1635*/
1636
1637#define top            (bin_at(0)->fd)   /* The topmost chunk */
1638#define last_remainder (bin_at(1))       /* remainder from last split */
1639
1640
1641/*
1642   Because top initially points to its own bin with initial
1643   zero size, thus forcing extension on the first malloc request,
1644   we avoid having any special code in malloc to check whether
1645   it even exists yet. But we still need to check in malloc_extend_top.
1646*/
1647
1648#define initial_top    ((mchunkptr)(bin_at(0)))
1649
1650/* Helper macro to initialize bins */
1651
1652#define IAV(i)  bin_at(i), bin_at(i)
1653
1654#ifdef DEFINE_MALLOC
1655STATIC mbinptr av_[NAV * 2 + 2] = {
1656 0, 0,
1657 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
1658 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
1659 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
1660 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
1661 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
1662 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
1663 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
1664 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
1665 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
1666 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
1667 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
1668 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
1669 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
1670 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1671 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1672 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1673};
1674#else
1675extern mbinptr av_[NAV * 2 + 2];
1676#endif
1677
1678
1679
1680/* field-extraction macros */
1681
1682#define first(b) ((b)->fd)
1683#define last(b)  ((b)->bk)
1684
1685/*
1686  Indexing into bins
1687*/
1688
1689#define bin_index(sz)                                                          \
1690(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
1691 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
1692 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
1693 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
1694 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
1695 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1696                                          126)                     
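/*
  A few worked values of bin_index, matching the layout table above;
  compiled out, for illustration only:
*/
#if 0
assert(bin_index(64)   ==  8);   /* small range: 64 >> 3                 */
assert(bin_index(504)  == 63);   /* last of the 8-byte-spaced small bins */
assert(bin_index(512)  == 64);   /* 56 + (512 >> 6): first 64-byte bin   */
assert(bin_index(2048) == 88);   /* 56 + (2048 >> 6)                     */
assert(bin_index(4096) == 99);   /* 91 + (4096 >> 9): a 512-byte bin     */
#endif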
1697/*
1698  bins for chunks < 512 are all spaced SMALLBIN_WIDTH bytes apart, and hold
1699  identically sized chunks. This is exploited in malloc.
1700*/
1701
1702#define MAX_SMALLBIN_SIZE   512
1703#define SMALLBIN_WIDTH        8
1704#define SMALLBIN_WIDTH_BITS   3
1705#define MAX_SMALLBIN        ((MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1)
1706
1707#define smallbin_index(sz)  (((unsigned long)(sz)) >> SMALLBIN_WIDTH_BITS)
1708
1709/*
1710   Requests are `small' if both the corresponding and the next bin are small
1711*/
1712
1713#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1714
1715
1716
1717/*
1718    To help compensate for the large number of bins, a one-level index
1719    structure is used for bin-by-bin searching.  `binblocks' is a
1720    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1721    have any (possibly) non-empty bins, so they can be skipped over
1722    all at once during traversals. The bits are NOT always
1723    cleared as soon as all bins in a block are empty, but instead only
1724    when all are noticed to be empty during traversal in malloc.
1725*/
1726
1727#define BINBLOCKWIDTH     4   /* bins per block */
1728
1729#define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */
1730
1731/* bin<->block macros */
1732
1733#define idx2binblock(ix)    ((unsigned long)1 << (ix / BINBLOCKWIDTH))
1734#define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
1735#define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
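/*
  The block bitvector arithmetic, worked for one bin: with
  BINBLOCKWIDTH == 4, bin index 10 lives in block 10/4 == 2, i.e. bit
  (1UL << 2); compiled out, for illustration only:
*/
#if 0
assert(idx2binblock(10) == 4);   /* 1UL << (10 / 4) */
mark_binblock(10);               /* binblocks |= 4  */
clear_binblock(10);              /* binblocks &= ~4 */
#endif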
1736
1737
1738
1739
1740
1741/*  Other static bookkeeping data */
1742
1743#ifdef SEPARATE_OBJECTS
1744#define trim_threshold          malloc_trim_threshold
1745#define top_pad                 malloc_top_pad
1746#define n_mmaps_max             malloc_n_mmaps_max
1747#define mmap_threshold          malloc_mmap_threshold
1748#define sbrk_base               malloc_sbrk_base
1749#define max_sbrked_mem          malloc_max_sbrked_mem
1750#define max_total_mem           malloc_max_total_mem
1751#define current_mallinfo        malloc_current_mallinfo
1752#define n_mmaps                 malloc_n_mmaps
1753#define max_n_mmaps             malloc_max_n_mmaps
1754#define mmapped_mem             malloc_mmapped_mem
1755#define max_mmapped_mem         malloc_max_mmapped_mem
1756#endif
1757
1758/* variables holding tunable values */
1759
1760#ifdef DEFINE_MALLOC
1761
1762STATIC unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
1763STATIC unsigned long top_pad          = DEFAULT_TOP_PAD;
1764#if HAVE_MMAP
1765STATIC unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
1766STATIC unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
1767#endif
1768
1769/* The first value returned from sbrk */
1770STATIC char* sbrk_base = (char*)(-1);
1771
1772/* The maximum memory obtained from system via sbrk */
1773STATIC unsigned long max_sbrked_mem = 0; 
1774
1775/* The maximum via either sbrk or mmap */
1776STATIC unsigned long max_total_mem = 0; 
1777
1778/* internal working copy of mallinfo */
1779STATIC struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1780
1781#if HAVE_MMAP
1782
1783/* Tracking mmaps */
1784
1785STATIC unsigned int n_mmaps = 0;
1786STATIC unsigned int max_n_mmaps = 0;
1787STATIC unsigned long mmapped_mem = 0;
1788STATIC unsigned long max_mmapped_mem = 0;
1789
1790#endif
1791
1792#else /* ! DEFINE_MALLOC */
1793
1794extern unsigned long trim_threshold;
1795extern unsigned long top_pad;
1796#if HAVE_MMAP
1797extern unsigned int  n_mmaps_max;
1798extern unsigned long mmap_threshold;
1799#endif
1800extern char* sbrk_base;
1801extern unsigned long max_sbrked_mem;
1802extern unsigned long max_total_mem;
1803extern struct mallinfo current_mallinfo;
1804#if HAVE_MMAP
1805extern unsigned int n_mmaps;
1806extern unsigned int max_n_mmaps;
1807extern unsigned long mmapped_mem;
1808extern unsigned long max_mmapped_mem;
1809#endif
1810
1811#endif /* ! DEFINE_MALLOC */
1812
1813/* The total memory obtained from system via sbrk */
1814#define sbrked_mem  (current_mallinfo.arena)
1815
1816
1817
1818/*
1819  Debugging support
1820*/
1821
1822#if DEBUG
1823
1824
1825/*
1826  These routines make a number of assertions about the states
1827  of data structures that should be true at all times. If any
1828  are not true, it's very likely that a user program has somehow
1829  trashed memory. (It's also possible that there is a coding error
1830  in malloc. In which case, please report it!)
1831*/
1832
1833#if __STD_C
1834static void do_check_chunk(mchunkptr p) 
1835#else
1836static void do_check_chunk(p) mchunkptr p;
1837#endif
1838{ 
1839  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1840
1841  /* No checkable chunk is mmapped */
1842  assert(!chunk_is_mmapped(p));
1843
1844  /* Check for legal address ... */
1845  assert((char*)p >= sbrk_base);
1846  if (p != top) 
1847    assert((char*)p + sz <= (char*)top);
1848  else
1849    assert((char*)p + sz <= sbrk_base + sbrked_mem);
1850
1851}
1852
1853
1854#if __STD_C
1855static void do_check_free_chunk(mchunkptr p) 
1856#else
1857static void do_check_free_chunk(p) mchunkptr p;
1858#endif
1859{ 
1860  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1861  mchunkptr next = chunk_at_offset(p, sz);
1862
1863  do_check_chunk(p);
1864
1865  /* Check whether it claims to be free ... */
1866  assert(!inuse(p));
1867
1868  /* Unless a special marker, must have OK fields */
1869  if ((long)sz >= (long)MINSIZE)
1870  {
1871    assert((sz & MALLOC_ALIGN_MASK) == 0);
1872    assert(aligned_OK(chunk2mem(p)));
1873    /* ... matching footer field */
1874    assert(next->prev_size == sz);
1875    /* ... and is fully consolidated */
1876    assert(prev_inuse(p));
1877    assert (next == top || inuse(next));
1878   
1879    /* ... and has minimally sane links */
1880    assert(p->fd->bk == p);
1881    assert(p->bk->fd == p);
1882  }
1883  else /* markers are always of size SIZE_SZ */
1884    assert(sz == SIZE_SZ); 
1885}
1886
1887#if __STD_C
1888static void do_check_inuse_chunk(mchunkptr p) 
1889#else
1890static void do_check_inuse_chunk(p) mchunkptr p;
1891#endif
1892{ 
1893  mchunkptr next = next_chunk(p);
1894  do_check_chunk(p);
1895
1896  /* Check whether it claims to be in use ... */
1897  assert(inuse(p));
1898
1899  /* ... and is surrounded by OK chunks.
1900    Since more things can be checked with free chunks than inuse ones,
1901    if an inuse chunk borders them and debug is on, it's worth doing them.
1902  */
1903  if (!prev_inuse(p)) 
1904  {
1905    mchunkptr prv = prev_chunk(p);
1906    assert(next_chunk(prv) == p);
1907    do_check_free_chunk(prv);
1908  }
1909  if (next == top)
1910  {
1911    assert(prev_inuse(next));
1912    assert(chunksize(next) >= MINSIZE);
1913  }
1914  else if (!inuse(next))
1915    do_check_free_chunk(next);
1916
1917}
1918
1919#if __STD_C
1920static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) 
1921#else
1922static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1923#endif
1924{
1925  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1926  long room = long_sub_size_t(sz, s);
1927
1928  do_check_inuse_chunk(p);
1929
1930  /* Legal size ... */
1931  assert((long)sz >= (long)MINSIZE);
1932  assert((sz & MALLOC_ALIGN_MASK) == 0);
1933  assert(room >= 0);
1934  assert(room < (long)MINSIZE);
1935
1936  /* ... and alignment */
1937  assert(aligned_OK(chunk2mem(p)));
1938
1939
1940  /* ... and was allocated at front of an available chunk */
1941  assert(prev_inuse(p));
1942
1943}
1944
1945
1946#define check_free_chunk(P)  do_check_free_chunk(P)
1947#define check_inuse_chunk(P) do_check_inuse_chunk(P)
1948#define check_chunk(P) do_check_chunk(P)
1949#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1950#else
1951#define check_free_chunk(P)
1952#define check_inuse_chunk(P)
1953#define check_chunk(P)
1954#define check_malloced_chunk(P,N)
1955#endif
1956
1957
1958
1959/*
1960  Macro-based internal utilities
1961*/
1962
1963
1964/* 
1965  Linking chunks in bin lists.
1966  Call these only with variables, not arbitrary expressions, as arguments.
1967*/
1968
1969/*
1970  Place chunk p of size s in its bin, in size order,
1971  putting it ahead of others of same size.
1972*/
1973
1974
1975#define frontlink(P, S, IDX, BK, FD)                                          \
1976{                                                                             \
1977  if (S < MAX_SMALLBIN_SIZE)                                                  \
1978  {                                                                           \
1979    IDX = smallbin_index(S);                                                  \
1980    mark_binblock(IDX);                                                       \
1981    BK = bin_at(IDX);                                                         \
1982    FD = BK->fd;                                                              \
1983    P->bk = BK;                                                               \
1984    P->fd = FD;                                                               \
1985    FD->bk = BK->fd = P;                                                      \
1986  }                                                                           \
1987  else                                                                        \
1988  {                                                                           \
1989    IDX = bin_index(S);                                                       \
1990    BK = bin_at(IDX);                                                         \
1991    FD = BK->fd;                                                              \
1992    if (FD == BK) mark_binblock(IDX);                                         \
1993    else                                                                      \
1994    {                                                                         \
1995      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
1996      BK = FD->bk;                                                            \
1997    }                                                                         \
1998    P->bk = BK;                                                               \
1999    P->fd = FD;                                                               \
2000    FD->bk = BK->fd = P;                                                      \
2001  }                                                                           \
2002}
2003
2004
2005/* take a chunk off a list */
2006
2007#define unlink(P, BK, FD)                                                     \
2008{                                                                             \
2009  BK = P->bk;                                                                 \
2010  FD = P->fd;                                                                 \
2011  FD->bk = BK;                                                                \
2012  BK->fd = FD;                                                                \
2013}
2014
2015/* Place p as the last remainder */
2016
2017#define link_last_remainder(P)                                                \
2018{                                                                             \
2019  last_remainder->fd = last_remainder->bk =  P;                               \
2020  P->fd = P->bk = last_remainder;                                             \
2021}
2022
2023/* Clear the last_remainder bin */
2024
2025#define clear_last_remainder \
2026  (last_remainder->fd = last_remainder->bk = last_remainder)
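/*
  The typical call pattern for these list macros, assuming a chunk p of
  size sz and the scratch variables the macros require (they expand
  their arguments repeatedly, hence "variables only"); compiled out:
*/
#if 0
mchunkptr bck, fwd;              /* scratch links used by the macros     */
int idx;                         /* receives the chosen bin index        */

unlink(p, bck, fwd);             /* detach p from whatever list holds it */
frontlink(p, sz, idx, bck, fwd); /* re-bin p by size, marking its block  */
#endif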
2027
2028
2029
2030
2031
2032
2033/* Routines dealing with mmap(). */
2034
2035#if HAVE_MMAP
2036
2037#ifdef DEFINE_MALLOC
2038
2039#if __STD_C
2040static mchunkptr mmap_chunk(size_t size)
2041#else
2042static mchunkptr mmap_chunk(size) size_t size;
2043#endif
2044{
2045  size_t page_mask = malloc_getpagesize - 1;
2046  mchunkptr p;
2047
2048#ifndef MAP_ANONYMOUS
2049  static int fd = -1;
2050#endif
2051
2052  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
2053
2054  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
2055   * there is no following chunk whose prev_size field could be used.
2056   */
2057  size = (size + SIZE_SZ + page_mask) & ~page_mask;
2058
2059#ifdef MAP_ANONYMOUS
2060  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
2061                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
2062#else /* !MAP_ANONYMOUS */
2063  if (fd < 0) 
2064  {
2065    fd = open("/dev/zero", O_RDWR);
2066    if(fd < 0) return 0;
2067  }
2068  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
2069#endif
2070
2071  if(p == (mchunkptr)-1) return 0;
2072
2073  n_mmaps++;
2074  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
2075 
2076  /* We require that the address eight bytes into a page be 8-byte aligned. */
2077  assert(aligned_OK(chunk2mem(p)));
2078
2079  /* The offset to the start of the mmapped region is stored
2080   * in the prev_size field of the chunk; normally it is zero,
2081   * but that can be changed in memalign().
2082   */
2083  p->prev_size = 0;
2084  set_head(p, size|IS_MMAPPED);
2085 
2086  mmapped_mem += size;
2087  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
2088    max_mmapped_mem = mmapped_mem;
2089  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
2090    max_total_mem = mmapped_mem + sbrked_mem;
2091  return p;
2092}
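/*
  The page rounding above, worked for a padded request of 20000 bytes
  with SIZE_SZ == 4 and a 4096-byte page; compiled out:
*/
#if 0
assert(((20000 + 4 + 4095) & ~4095) == 20480);  /* five 4096-byte pages */
#endif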
2093
2094#endif /* DEFINE_MALLOC */
2095
2096#ifdef SEPARATE_OBJECTS
2097#define munmap_chunk malloc_munmap_chunk
2098#endif
2099
2100#ifdef DEFINE_FREE
2101
2102#if __STD_C
2103STATIC void munmap_chunk(mchunkptr p)
2104#else
2105STATIC void munmap_chunk(p) mchunkptr p;
2106#endif
2107{
2108  INTERNAL_SIZE_T size = chunksize(p);
2109  int ret;
2110
2111  assert (chunk_is_mmapped(p));
2112  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2113  assert((n_mmaps > 0));
2114  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
2115
2116  n_mmaps--;
2117  mmapped_mem -= (size + p->prev_size);
2118
2119  ret = munmap((char *)p - p->prev_size, size + p->prev_size);
2120
2121  /* munmap returns non-zero on failure */
2122  assert(ret == 0);
2123}
2124
2125#else /* ! DEFINE_FREE */
2126
2127#if __STD_C
2128extern void munmap_chunk(mchunkptr);
2129#else
2130extern void munmap_chunk();
2131#endif
2132
2133#endif /* ! DEFINE_FREE */
2134
2135#if HAVE_MREMAP
2136
2137#ifdef DEFINE_REALLOC
2138
2139#if __STD_C
2140static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
2141#else
2142static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
2143#endif
2144{
2145  size_t page_mask = malloc_getpagesize - 1;
2146  INTERNAL_SIZE_T offset = p->prev_size;
2147  INTERNAL_SIZE_T size = chunksize(p);
2148  char *cp;
2149
2150  assert (chunk_is_mmapped(p));
2151  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2152  assert((n_mmaps > 0));
2153  assert(((size + offset) & (malloc_getpagesize-1)) == 0);
2154
2155  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
2156  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
2157
2158  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
2159
2160  if (cp == (char *)-1) return 0;
2161
2162  p = (mchunkptr)(cp + offset);
2163
2164  assert(aligned_OK(chunk2mem(p)));
2165
2166  assert((p->prev_size == offset));
2167  set_head(p, (new_size - offset)|IS_MMAPPED);
2168
2169  mmapped_mem -= size + offset;
2170  mmapped_mem += new_size;
2171  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
2172    max_mmapped_mem = mmapped_mem;
2173  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2174    max_total_mem = mmapped_mem + sbrked_mem;
2175  return p;
2176}
2177
2178#endif /* DEFINE_REALLOC */
2179
2180#endif /* HAVE_MREMAP */
2181
2182#endif /* HAVE_MMAP */
2183
2184
2185
2186
2187#ifdef DEFINE_MALLOC
2188
2189/*
2190  Extend the top-most chunk by obtaining memory from system.
2191  Main interface to sbrk (but see also malloc_trim).
2192*/
2193
2194#if __STD_C
2195static void malloc_extend_top(RARG INTERNAL_SIZE_T nb)
2196#else
2197static void malloc_extend_top(RARG nb) RDECL INTERNAL_SIZE_T nb;
2198#endif
2199{
2200  char*     brk;                  /* return value from sbrk */
2201  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2202  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
2203  char*     new_brk;              /* return of 2nd sbrk call */
2204  INTERNAL_SIZE_T top_size;       /* new size of top chunk */
2205
2206  mchunkptr old_top     = top;  /* Record state of old top */
2207  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2208  char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
2209
2210  /* Pad request with top_pad plus minimal overhead */
2211 
2212  INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
2213  unsigned long pagesz    = malloc_getpagesize;
2214
2215  /* If not the first time through, round to preserve page boundary */
2216  /* Otherwise, we need to correct to a page size below anyway. */
2217  /* (We also correct below if a foreign sbrk call intervened.) */
2218
2219  if (sbrk_base != (char*)(-1))
2220    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2221
2222  brk = (char*)(MORECORE (sbrk_size));
2223
2224  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2225  if (brk == (char*)(MORECORE_FAILURE) || 
2226      (brk < old_end && old_top != initial_top))
2227    return;     
2228
2229  sbrked_mem += sbrk_size;
2230
2231  if (brk == old_end) /* can just add bytes to current top */
2232  {
2233    top_size = sbrk_size + old_top_size;
2234    set_head(top, top_size | PREV_INUSE);
2235  }
2236  else
2237  {
2238    if (sbrk_base == (char*)(-1))  /* First time through. Record base */
2239      sbrk_base = brk;
2240    else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
2241      sbrked_mem += brk - (char*)old_end;
2242
2243    /* Guarantee alignment of first new chunk made from this space */
2244    front_misalign = (POINTER_UINT)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2245    if (front_misalign > 0) 
2246    {
2247      correction = (MALLOC_ALIGNMENT) - front_misalign;
2248      brk += correction;
2249    }
2250    else
2251      correction = 0;
2252
2253    /* Guarantee the next brk will be at a page boundary */
2254    correction += ((((POINTER_UINT)(brk + sbrk_size))+(pagesz-1)) &
2255                    ~(pagesz - 1)) - ((POINTER_UINT)(brk + sbrk_size));
2256
2257    /* Allocate correction */
2258    new_brk = (char*)(MORECORE (correction));
2259    if (new_brk == (char*)(MORECORE_FAILURE)) return; 
2260
2261    sbrked_mem += correction;
2262
2263    top = (mchunkptr)brk;
2264    top_size = new_brk - brk + correction;
2265    set_head(top, top_size | PREV_INUSE);
2266
2267    if (old_top != initial_top)
2268    {
2269
2270      /* There must have been an intervening foreign sbrk call. */
2271      /* A double fencepost is necessary to prevent consolidation */
2272
2273      /* If not enough space to do this, then user did something very wrong */
2274      if (old_top_size < MINSIZE) 
2275      {
2276        set_head(top, PREV_INUSE); /* will force null return from malloc */
2277        return;
2278      }
2279
2280      /* Also keep size a multiple of MALLOC_ALIGNMENT */
2281      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2282      set_head_size(old_top, old_top_size);
2283      chunk_at_offset(old_top, old_top_size          )->size =
2284        SIZE_SZ|PREV_INUSE;
2285      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2286        SIZE_SZ|PREV_INUSE;
2287      /* If possible, release the rest. */
2288      if (old_top_size >= MINSIZE) 
2289        fREe(RCALL chunk2mem(old_top));
2290    }
2291  }
2292
2293  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) 
2294    max_sbrked_mem = sbrked_mem;
2295#if HAVE_MMAP
2296  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
2297    max_total_mem = mmapped_mem + sbrked_mem;
2298#else
2299  if ((unsigned long)(sbrked_mem) > (unsigned long)max_total_mem) 
2300    max_total_mem = sbrked_mem;
2301#endif
2302
2303  /* We always land on a page boundary */
2304  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2305}
2306
2307#endif /* DEFINE_MALLOC */
2308
2309
2310/* Main public routines */
2311
2312#ifdef DEFINE_MALLOC
2313
2314/*
2315  Malloc Algorithm:
2316
2317    The requested size is first converted into a usable form, `nb'.
2318    This currently means to add 4 bytes overhead plus possibly more to
2319    obtain 8-byte alignment and/or to obtain a size of at least
2320    MINSIZE (currently 16 bytes), the smallest allocatable size.
2321    (All fits are considered `exact' if they are within MINSIZE bytes.)
2322
2323    From there, the first of the following steps that succeeds is taken:
2324
2325      1. The bin corresponding to the request size is scanned, and if
2326         a chunk of exactly the right size is found, it is taken.
2327
2328      2. The most recently remaindered chunk is used if it is big
2329         enough.  This is a form of (roving) first fit, used only in
2330         the absence of exact fits. Runs of consecutive requests use
2331         the remainder of the chunk used for the previous such request
2332         whenever possible. This limited use of a first-fit style
2333         allocation strategy tends to give contiguous chunks
2334         coextensive lifetimes, which improves locality and can reduce
2335         fragmentation in the long run.
2336
2337      3. Other bins are scanned in increasing size order, using a
2338         chunk big enough to fulfill the request, and splitting off
2339         any remainder.  This search is strictly by best-fit; i.e.,
2340         the smallest (with ties going to approximately the least
2341         recently used) chunk that fits is selected.
2342
2343      4. If large enough, the chunk bordering the end of memory
2344         (`top') is split off. (This use of `top' is in accord with
2345         the best-fit search rule.  In effect, `top' is treated as
2346         larger (and thus less well fitting) than any other available
2347         chunk since it can be extended to be as large as necessary
2348         (up to system limitations).)
2349
2350      5. If the request size meets the mmap threshold and the
2351         system supports mmap, and there are few enough currently
2352         allocated mmapped regions, and a call to mmap succeeds,
2353         the request is allocated via direct memory mapping.
2354
2355      6. Otherwise, the top of memory is extended by
2356         obtaining more space from the system (normally using sbrk,
2357         but definable to anything else via the MORECORE macro).
2358         Memory is gathered from the system (in system page-sized
2359         units) in a way that allows chunks obtained across different
2360         sbrk calls to be consolidated, but does not require
2361         contiguous memory. Thus, it should be safe to intersperse
2362         mallocs with other sbrk calls.
2363
2364
2365      All allocations are made from the `lowest' part of any found
2366      chunk. (The implementation invariant is that prev_inuse is
2367      always true of any allocated chunk; i.e., that each allocated
2368      chunk borders either a previously allocated and still in-use chunk,
2369      or the base of its memory arena.)
2370
2371*/
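/*
  Step 2 seen from the caller's side: a sketch of the locality the
  last_remainder strategy tends to produce (typical behavior, not a
  guarantee); compiled out:
*/
#if 0
char* a = (char*) malloc(100);  /* may split some larger free chunk     */
char* b = (char*) malloc(100);  /* often carved from last_remainder ... */
char* c = (char*) malloc(100);  /* ... so a, b, c tend to be contiguous */
#endif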
2372
2373#if __STD_C
2374Void_t* mALLOc(RARG size_t bytes)
2375#else
2376Void_t* mALLOc(RARG bytes) RDECL size_t bytes;
2377#endif
2378{
2379#ifdef MALLOC_PROVIDED
2380
2381  return malloc (bytes);
2382
2383#else
2384
2385  mchunkptr victim;                  /* inspected/selected chunk */
2386  INTERNAL_SIZE_T victim_size;       /* its size */
2387  int       idx;                     /* index for bin traversal */
2388  mbinptr   bin;                     /* associated bin */
2389  mchunkptr remainder;               /* remainder from a split */
2390  long      remainder_size;          /* its size */
2391  int       remainder_index;         /* its bin index */
2392  unsigned long block;               /* block traverser bit */
2393  int       startidx;                /* first bin of a traversed block */
2394  mchunkptr fwd;                     /* misc temp for linking */
2395  mchunkptr bck;                     /* misc temp for linking */
2396  mbinptr q;                         /* misc temp */
2397
2398  INTERNAL_SIZE_T nb;
2399
2400  if ((long)bytes < 0) return 0;
2401
2402  nb = request2size(bytes);  /* padded request size */
2403
2404  MALLOC_LOCK;
2405
2406  /* Check for exact match in a bin */
2407
2408  if (is_small_request(nb))  /* Faster version for small requests */
2409  {
2410    idx = smallbin_index(nb); 
2411
2412    /* No traversal or size check necessary for small bins.  */
2413
2414    q = bin_at(idx);
2415    victim = last(q);
2416
2417#if MALLOC_ALIGN != 16
2418    /* Also scan the next one, since it would have a remainder < MINSIZE */
2419    if (victim == q)
2420    {
2421      q = next_bin(q);
2422      victim = last(q);
2423    }
2424#endif
2425    if (victim != q)
2426    {
2427      victim_size = chunksize(victim);
2428      unlink(victim, bck, fwd);
2429      set_inuse_bit_at_offset(victim, victim_size);
2430      check_malloced_chunk(victim, nb);
2431      MALLOC_UNLOCK;
2432      return chunk2mem(victim);
2433    }
2434
2435    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2436
2437  }
2438  else
2439  {
2440    idx = bin_index(nb);
2441    bin = bin_at(idx);
2442
2443    for (victim = last(bin); victim != bin; victim = victim->bk)
2444    {
2445      victim_size = chunksize(victim);
2446      remainder_size = long_sub_size_t(victim_size, nb);
2447     
2448      if (remainder_size >= (long)MINSIZE) /* too big */
2449      {
2450        --idx; /* adjust to rescan below after checking last remainder */
2451        break;   
2452      }
2453
2454      else if (remainder_size >= 0) /* exact fit */
2455      {
2456        unlink(victim, bck, fwd);
2457        set_inuse_bit_at_offset(victim, victim_size);
2458        check_malloced_chunk(victim, nb);
2459        MALLOC_UNLOCK;
2460        return chunk2mem(victim);
2461      }
2462    }
2463
2464    ++idx; 
2465
2466  }
2467
2468  /* Try to use the last split-off remainder */
2469
2470  if ( (victim = last_remainder->fd) != last_remainder)
2471  {
2472    victim_size = chunksize(victim);
2473    remainder_size = long_sub_size_t(victim_size, nb);
2474
2475    if (remainder_size >= (long)MINSIZE) /* re-split */
2476    {
2477      remainder = chunk_at_offset(victim, nb);
2478      set_head(victim, nb | PREV_INUSE);
2479      link_last_remainder(remainder);
2480      set_head(remainder, remainder_size | PREV_INUSE);
2481      set_foot(remainder, remainder_size);
2482      check_malloced_chunk(victim, nb);
2483      MALLOC_UNLOCK;
2484      return chunk2mem(victim);
2485    }
2486
2487    clear_last_remainder;
2488
2489    if (remainder_size >= 0)  /* exhaust */
2490    {
2491      set_inuse_bit_at_offset(victim, victim_size);
2492      check_malloced_chunk(victim, nb);
2493      MALLOC_UNLOCK;
2494      return chunk2mem(victim);
2495    }
2496
2497    /* Else place in bin */
2498
2499    frontlink(victim, victim_size, remainder_index, bck, fwd);
2500  }
2501
2502  /*
2503     If there are any possibly nonempty big-enough blocks,
2504     search for best fitting chunk by scanning bins in blockwidth units.
2505  */
2506
2507  if ( (block = idx2binblock(idx)) <= binblocks) 
2508  {
2509
2510    /* Get to the first marked block */
2511
2512    if ( (block & binblocks) == 0) 
2513    {
2514      /* force to an even block boundary */
2515      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2516      block <<= 1;
2517      while ((block & binblocks) == 0)
2518      {
2519        idx += BINBLOCKWIDTH;
2520        block <<= 1;
2521      }
2522    }
2523     
2524    /* For each possibly nonempty block ... */
2525    for (;;) 
2526    {
2527      startidx = idx;          /* (track incomplete blocks) */
2528      q = bin = bin_at(idx);
2529
2530      /* For each bin in this block ... */
2531      do
2532      {
2533        /* Find and use first big enough chunk ... */
2534
2535        for (victim = last(bin); victim != bin; victim = victim->bk)
2536        {
2537          victim_size = chunksize(victim);
2538          remainder_size = long_sub_size_t(victim_size, nb);
2539
2540          if (remainder_size >= (long)MINSIZE) /* split */
2541          {
2542            remainder = chunk_at_offset(victim, nb);
2543            set_head(victim, nb | PREV_INUSE);
2544            unlink(victim, bck, fwd);
2545            link_last_remainder(remainder);
2546            set_head(remainder, remainder_size | PREV_INUSE);
2547            set_foot(remainder, remainder_size);
2548            check_malloced_chunk(victim, nb);
2549            MALLOC_UNLOCK;
2550            return chunk2mem(victim);
2551          }
2552
2553          else if (remainder_size >= 0)  /* take */
2554          {
2555            set_inuse_bit_at_offset(victim, victim_size);
2556            unlink(victim, bck, fwd);
2557            check_malloced_chunk(victim, nb);
2558            MALLOC_UNLOCK;
2559            return chunk2mem(victim);
2560          }
2561
2562        }
2563
2564       bin = next_bin(bin);
2565
2566#if MALLOC_ALIGN == 16
2567       if (idx < MAX_SMALLBIN)
2568         {
2569           bin = next_bin(bin);
2570           ++idx;
2571         }
2572#endif
2573      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2574
2575      /* Clear out the block bit. */
2576
2577      do   /* Possibly backtrack to try to clear a partial block */
2578      {
2579        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2580        {
2581          binblocks &= ~block;
2582          break;
2583        }
2584        --startidx;
2585       q = prev_bin(q);
2586      } while (first(q) == q);
2587
2588      /* Get to the next possibly nonempty block */
2589
2590      if ( (block <<= 1) <= binblocks && (block != 0) ) 
2591      {
2592        while ((block & binblocks) == 0)
2593        {
2594          idx += BINBLOCKWIDTH;
2595          block <<= 1;
2596        }
2597      }
2598      else
2599        break;
2600    }
2601  }
2602
2603
2604  /* Try to use top chunk */
2605
2606  /* Require that there be a remainder, ensuring top always exists  */
2607  remainder_size = long_sub_size_t(chunksize(top), nb);
2608  if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2609  {
2610
2611#if HAVE_MMAP
2612    /* If big and would otherwise need to extend, try to use mmap instead */
2613    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2614        (victim = mmap_chunk(nb)) != 0)
2615    {
2616      MALLOC_UNLOCK;
2617      return chunk2mem(victim);
2618    }
2619#endif
2620
2621    /* Try to extend */
2622    malloc_extend_top(RCALL nb);
2623    remainder_size = long_sub_size_t(chunksize(top), nb);
2624    if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2625    {
2626      MALLOC_UNLOCK;
2627      return 0; /* propagate failure */
2628    }
2629  }
2630
2631  victim = top;
2632  set_head(victim, nb | PREV_INUSE);
2633  top = chunk_at_offset(victim, nb);
2634  set_head(top, remainder_size | PREV_INUSE);
2635  check_malloced_chunk(victim, nb);
2636  MALLOC_UNLOCK;
2637  return chunk2mem(victim);
2638
2639#endif /* MALLOC_PROVIDED */
2640}
2641
2642#endif /* DEFINE_MALLOC */
2643
2644#ifdef DEFINE_FREE
2645
2646/*
2647
2648  free() algorithm:
2649
2650    cases:
2651
2652       1. free(0) has no effect. 
2653
2654       2. If the chunk was allocated via mmap, it is released via munmap().
2655
2656       3. If a returned chunk borders the current high end of memory,
2657          it is consolidated into the top, and if the total unused
2658          topmost memory exceeds the trim threshold, malloc_trim is
2659          called.
2660
2661       4. Other chunks are consolidated as they arrive, and
2662          placed in corresponding bins. (This includes the case of
2663          consolidating with the current `last_remainder').
2664
2665*/
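/*
  Cases 3 and 4 seen from the caller's side: a sketch of consolidation
  (typical behavior, not a guarantee); compiled out:
*/
#if 0
char* a = (char*) malloc(100);
char* b = (char*) malloc(100);  /* typically physically after a          */
free(a);                        /* binned; b is in use, so no merging    */
free(b);                        /* consolidates backward with a's chunk, */
                                /* or into top if nothing follows        */
#endif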
2666
2667
2668#if __STD_C
2669void fREe(RARG Void_t* mem)
2670#else
2671void fREe(RARG mem) RDECL Void_t* mem;
2672#endif
2673{
2674#ifdef MALLOC_PROVIDED
2675
2676  free (mem);
2677
2678#else
2679
2680  mchunkptr p;         /* chunk corresponding to mem */
2681  INTERNAL_SIZE_T hd;  /* its head field */
2682  INTERNAL_SIZE_T sz;  /* its size */
2683  int       idx;       /* its bin index */
2684  mchunkptr next;      /* next contiguous chunk */
2685  INTERNAL_SIZE_T nextsz; /* its size */
2686  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2687  mchunkptr bck;       /* misc temp for linking */
2688  mchunkptr fwd;       /* misc temp for linking */
2689  int       islr;      /* track whether merging with last_remainder */
2690
2691  if (mem == 0)                              /* free(0) has no effect */
2692    return;
2693
2694  MALLOC_LOCK;
2695
2696  p = mem2chunk(mem);
2697  hd = p->size;
2698
2699#if HAVE_MMAP
2700  if (hd & IS_MMAPPED)                       /* release mmapped memory. */
2701  {
2702    munmap_chunk(p);
2703    MALLOC_UNLOCK;
2704    return;
2705  }
2706#endif
2707 
2708  check_inuse_chunk(p);
2709 
2710  sz = hd & ~PREV_INUSE;
2711  next = chunk_at_offset(p, sz);
2712  nextsz = chunksize(next);
2713 
2714  if (next == top)                            /* merge with top */
2715  {
2716    sz += nextsz;
2717
2718    if (!(hd & PREV_INUSE))                    /* consolidate backward */
2719    {
2720      prevsz = p->prev_size;
2721      p = chunk_at_offset(p, -((long) prevsz));
2722      sz += prevsz;
2723      unlink(p, bck, fwd);
2724    }
2725
2726    set_head(p, sz | PREV_INUSE);
2727    top = p;
2728    if ((unsigned long)(sz) >= (unsigned long)trim_threshold) 
2729      malloc_trim(RCALL top_pad); 
2730    MALLOC_UNLOCK;
2731    return;
2732  }
2733
2734  set_head(next, nextsz);                    /* clear inuse bit */
2735
2736  islr = 0;
2737
2738  if (!(hd & PREV_INUSE))                    /* consolidate backward */
2739  {
2740    prevsz = p->prev_size;
2741    p = chunk_at_offset(p, -((long) prevsz));
2742    sz += prevsz;
2743   
2744    if (p->fd == last_remainder)             /* keep as last_remainder */
2745      islr = 1;
2746    else
2747      unlink(p, bck, fwd);
2748  }
2749 
2750  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
2751  {
2752    sz += nextsz;
2753   
2754    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
2755    {
2756      islr = 1;
2757      link_last_remainder(p);   
2758    }
2759    else
2760      unlink(next, bck, fwd);
2761  }
2762
2763
2764  set_head(p, sz | PREV_INUSE);
2765  set_foot(p, sz);
2766  if (!islr)
2767    frontlink(p, sz, idx, bck, fwd); 
2768
2769  MALLOC_UNLOCK;
2770
2771#endif /* MALLOC_PROVIDED */
2772}
2773
2774#endif /* DEFINE_FREE */
2775
2776#ifdef DEFINE_REALLOC
2777
2778/*
2779
2780  Realloc algorithm:
2781
2782    Chunks that were obtained via mmap cannot be extended or shrunk
2783    unless HAVE_MREMAP is defined, in which case mremap is used.
2784    Otherwise, if their reallocation is for additional space, they are
2785    copied.  If for less, they are just left alone.
2786
2787    Otherwise, if the reallocation is for additional space, and the
2788    chunk can be extended, it is, else a malloc-copy-free sequence is
2789    taken.  There are several different ways that a chunk could be
2790    extended. All are tried:
2791
2792       * Extending forward into following adjacent free chunk.
2793       * Shifting backwards, joining preceding adjacent space
2794       * Both shifting backwards and extending forward.
2795       * Extending into newly sbrked space
2796
2797    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2798    size argument of zero (re)allocates a minimum-sized chunk.
2799
2800    If the reallocation is for less space, and the new request is for
2801    a `small' (<512 bytes) size, then the newly unused space is lopped
2802    off and freed.
2803
2804    The old unix realloc convention of allowing the last-free'd chunk
2805    to be used as an argument to realloc is no longer supported.
2806    I don't know of any programs still relying on this feature,
2807    and allowing it would also allow too many other incorrect
2808    usages of realloc to be sensible.
2809
2810
2811*/
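/*
  The extension order above, seen from the caller's side: a sketch of
  when realloc can avoid copying (typical behavior, not a guarantee);
  compiled out:
*/
#if 0
char* p = (char*) malloc(100);
char* q = (char*) realloc(p, 50);   /* shrink: normally q == p           */
char* r = (char*) realloc(q, 150);  /* grows in place if a free neighbor */
                                    /* or top lies just after the chunk; */
                                    /* otherwise malloc-copy-free        */
#endif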
2812
2813
2814#if __STD_C
2815Void_t* rEALLOc(RARG Void_t* oldmem, size_t bytes)
2816#else
2817Void_t* rEALLOc(RARG oldmem, bytes) RDECL Void_t* oldmem; size_t bytes;
2818#endif
2819{
2820#ifdef MALLOC_PROVIDED
2821
2822  return realloc (oldmem, bytes);
2823
2824#else
2825
2826  INTERNAL_SIZE_T    nb;      /* padded request size */
2827
2828  mchunkptr oldp;             /* chunk corresponding to oldmem */
2829  INTERNAL_SIZE_T    oldsize; /* its size */
2830
2831  mchunkptr newp;             /* chunk to return */
2832  INTERNAL_SIZE_T    newsize; /* its size */
2833  Void_t*   newmem;           /* corresponding user mem */
2834
2835  mchunkptr next;             /* next contiguous chunk after oldp */
2836  INTERNAL_SIZE_T  nextsize;  /* its size */
2837
2838  mchunkptr prev;             /* previous contiguous chunk before oldp */
2839  INTERNAL_SIZE_T  prevsize;  /* its size */
2840
2841  mchunkptr remainder;        /* holds split off extra space from newp */
2842  INTERNAL_SIZE_T  remainder_size;   /* its size */
2843
2844  mchunkptr bck;              /* misc temp for linking */
2845  mchunkptr fwd;              /* misc temp for linking */
2846
2847#ifdef REALLOC_ZERO_BYTES_FREES
2848  if (bytes == 0) { fREe(RCALL oldmem); return 0; }
2849#endif
2850
2851  if ((long)bytes < 0) return 0;
2852
2853  /* realloc of null is supposed to be same as malloc */
2854  /* realloc of null is supposed to be the same as malloc */
2855
2856  MALLOC_LOCK;
2857
2858  newp    = oldp    = mem2chunk(oldmem);
2859  newsize = oldsize = chunksize(oldp);
2860
2861
2862  nb = request2size(bytes);
2863
2864#if HAVE_MMAP
2865  if (chunk_is_mmapped(oldp)) 
2866  {
2867#if HAVE_MREMAP
2868    newp = mremap_chunk(oldp, nb);
2869    if(newp)
2870    {
2871      MALLOC_UNLOCK;
2872      return chunk2mem(newp);
2873    }
2874#endif
2875    /* Note the extra SIZE_SZ overhead. */
2876    if(oldsize - SIZE_SZ >= nb)
2877    {
2878      MALLOC_UNLOCK;
2879      return oldmem; /* do nothing */
2880    }
2881    /* Must alloc, copy, free. */
2882    newmem = mALLOc(RCALL bytes);
2883    if (newmem == 0)
2884    {
2885      MALLOC_UNLOCK;
2886      return 0; /* propagate failure */
2887    }
2888    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2889    munmap_chunk(oldp);
2890    MALLOC_UNLOCK;
2891    return newmem;
2892  }
2893#endif
2894
2895  check_inuse_chunk(oldp);
2896
2897  if ((long)(oldsize) < (long)(nb)) 
2898  {
2899
2900    /* Try expanding forward */
2901
2902    next = chunk_at_offset(oldp, oldsize);
2903    if (next == top || !inuse(next)) 
2904    {
2905      nextsize = chunksize(next);
2906
2907      /* Forward into top only if a remainder */
2908      if (next == top)
2909      {
2910        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2911        {
2912          newsize += nextsize;
2913          top = chunk_at_offset(oldp, nb);
2914          set_head(top, (newsize - nb) | PREV_INUSE);
2915          set_head_size(oldp, nb);
2916          MALLOC_UNLOCK;
2917          return chunk2mem(oldp);
2918        }
2919      }
2920
2921      /* Forward into next chunk */
2922      else if (((long)(nextsize + newsize) >= (long)(nb)))
2923      { 
2924        unlink(next, bck, fwd);
2925        newsize  += nextsize;
2926        goto split;
2927      }
2928    }
2929    else
2930    {
2931      next = 0;
2932      nextsize = 0;
2933    }
2934
2935    /* Try shifting backwards. */
2936
2937    if (!prev_inuse(oldp))
2938    {
2939      prev = prev_chunk(oldp);
2940      prevsize = chunksize(prev);
2941
2942      /* try forward + backward first to save a later consolidation */
2943
2944      if (next != 0)
2945      {
2946        /* into top */
2947        if (next == top)
2948        {
2949          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2950          {
2951            unlink(prev, bck, fwd);
2952            newp = prev;
2953            newsize += prevsize + nextsize;
2954            newmem = chunk2mem(newp);
2955            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2956            top = chunk_at_offset(newp, nb);
2957            set_head(top, (newsize - nb) | PREV_INUSE);
2958            set_head_size(newp, nb);
2959            MALLOC_UNLOCK;
2960            return newmem;
2961          }
2962        }
2963
2964        /* into next chunk */
2965        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2966        {
2967          unlink(next, bck, fwd);
2968          unlink(prev, bck, fwd);
2969          newp = prev;
2970          newsize += nextsize + prevsize;
2971          newmem = chunk2mem(newp);
2972          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2973          goto split;
2974        }
2975      }
2976     
2977      /* backward only */
2978      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb) 
2979      {
2980        unlink(prev, bck, fwd);
2981        newp = prev;
2982        newsize += prevsize;
2983        newmem = chunk2mem(newp);
2984        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2985        goto split;
2986      }
2987    }
2988
2989    /* Must allocate */
2990
2991    newmem = mALLOc (RCALL bytes);
2992
2993    if (newmem == 0)  /* propagate failure */
2994    {
2995      MALLOC_UNLOCK;
2996      return 0;
2997    }
2998
2999    /* Avoid copy if newp is next chunk after oldp. */
3000    /* (This can only happen when the new chunk is sbrk'ed.) */
3001
3002    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp)) 
3003    {
3004      newsize += chunksize(newp);
3005      newp = oldp;
3006      goto split;
3007    }
3008
3009    /* Otherwise copy, free, and exit */
3010    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
3011    fREe(RCALL oldmem);
3012    MALLOC_UNLOCK;
3013    return newmem;
3014  }
3015
3016
3017 split:  /* split off extra room in old or expanded chunk */
3018
3019  remainder_size = long_sub_size_t(newsize, nb);
3020
3021  if (remainder_size >= (long)MINSIZE) /* split off remainder */
3022  {
3023    remainder = chunk_at_offset(newp, nb);
3024    set_head_size(newp, nb);
3025    set_head(remainder, remainder_size | PREV_INUSE);
3026    set_inuse_bit_at_offset(remainder, remainder_size);
3027    fREe(RCALL chunk2mem(remainder)); /* let free() deal with it */
3028  }
3029  else
3030  {
3031    set_head_size(newp, newsize);
3032    set_inuse_bit_at_offset(newp, newsize);
3033  }
3034
3035  check_inuse_chunk(newp);
3036  MALLOC_UNLOCK;
3037  return chunk2mem(newp);
3038
3039#endif /* MALLOC_PROVIDED */
3040}
3041
3042#endif /* DEFINE_REALLOC */
3043
3044#ifdef DEFINE_MEMALIGN
3045
3046/*
3047
3048  memalign algorithm:
3049
3050    memalign requests more than enough space from malloc, finds a spot
3051    within that chunk that meets the alignment request, and then
3052    possibly frees the leading and trailing space.
3053
3054    The alignment argument must be a power of two. This property is not
3055    checked by memalign, so misuse may result in random runtime errors.
3056
3057    8-byte alignment is guaranteed by normal malloc calls, so don't
3058    bother calling memalign with an argument of 8 or less.
3059
3060    Overreliance on memalign is a sure way to fragment space.
3061
3062*/
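/*
  A caller-side example of the contract above, assuming a 64-byte
  (power-of-two) alignment request; compiled out:
*/
#if 0
void* buf = memalign(64, 1000);
assert(buf == 0 || ((unsigned long) buf % 64) == 0);
#endif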
3063
3064
3065#if __STD_C
3066Void_t* mEMALIGn(RARG size_t alignment, size_t bytes)
3067#else
3068Void_t* mEMALIGn(RARG alignment, bytes) RDECL size_t alignment; size_t bytes;
3069#endif
3070{
3071  INTERNAL_SIZE_T    nb;      /* padded  request size */
3072  char*     m;                /* memory returned by malloc call */
3073  mchunkptr p;                /* corresponding chunk */
3074  char*     brk;              /* alignment point within p */
3075  mchunkptr newp;             /* chunk to return */
3076  INTERNAL_SIZE_T  newsize;   /* its size */
3077  INTERNAL_SIZE_T  leadsize;  /* leading space before the alignment point */
3078  mchunkptr remainder;        /* spare room at end to split off */
3079  long      remainder_size;   /* its size */
3080
3081  if ((long)bytes < 0) return 0;
3082
3083  /* If need less alignment than we give anyway, just relay to malloc */
3084
3085  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(RCALL bytes);
3086
3087  /* Otherwise, ensure that it is at least a minimum chunk size */
3088 
3089  if (alignment <  MINSIZE) alignment = MINSIZE;
3090
3091  /* Call malloc with worst case padding to hit alignment. */
3092
3093  nb = request2size(bytes);
3094  m  = (char*)(mALLOc(RCALL nb + alignment + MINSIZE));
3095
3096  if (m == 0) return 0; /* propagate failure */
3097
3098  MALLOC_LOCK;
3099
3100  p = mem2chunk(m);
3101
3102  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3103  {
3104#if HAVE_MMAP
3105    if(chunk_is_mmapped(p))
3106    {
3107      MALLOC_UNLOCK;
3108      return chunk2mem(p); /* nothing more to do */
3109    }
3110#endif
3111  }
3112  else /* misaligned */
3113  {
3114    /*
3115      Find an aligned spot inside chunk.
3116      Since we need to give back leading space in a chunk of at
3117      least MINSIZE, if the first calculation places us at
3118      a spot with less than MINSIZE leader, we can move to the
3119      next aligned spot -- we've allocated enough total room so that
3120      this is always possible.
3121    */
3122
3123    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
3124    if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
3125
3126    newp = (mchunkptr)brk;
3127    leadsize = brk - (char*)(p);
3128    newsize = chunksize(p) - leadsize;
3129
3130#if HAVE_MMAP
3131    if(chunk_is_mmapped(p)) 
3132    {
3133      newp->prev_size = p->prev_size + leadsize;
3134      set_head(newp, newsize|IS_MMAPPED);
3135      MALLOC_UNLOCK;
3136      return chunk2mem(newp);
3137    }
3138#endif
3139
3140    /* give back leader, use the rest */
3141
3142    set_head(newp, newsize | PREV_INUSE);
3143    set_inuse_bit_at_offset(newp, newsize);
3144    set_head_size(p, leadsize);
3145    fREe(RCALL chunk2mem(p));
3146    p = newp;
3147
3148    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3149  }
3150
3151  /* Also give back spare room at the end */
3152
3153  remainder_size = long_sub_size_t(chunksize(p), nb);
3154
3155  if (remainder_size >= (long)MINSIZE)
3156  {
3157    remainder = chunk_at_offset(p, nb);
3158    set_head(remainder, remainder_size | PREV_INUSE);
3159    set_head_size(p, nb);
3160    fREe(RCALL chunk2mem(remainder));
3161  }
3162
3163  check_inuse_chunk(p);
3164  MALLOC_UNLOCK;
3165  return chunk2mem(p);
3166
3167}
3168
3169#endif /* DEFINE_MEMALIGN */
3170
3171#ifdef DEFINE_VALLOC
3172
3173/*
3174    valloc just invokes memalign with alignment argument equal
3175    to the page size of the system (or as near to this as can
3176    be figured out from all the includes/defines above.)
3177*/
3178
3179#if __STD_C
3180Void_t* vALLOc(RARG size_t bytes)
3181#else
3182Void_t* vALLOc(RARG bytes) RDECL size_t bytes;
3183#endif
3184{
3185  return mEMALIGn (RCALL malloc_getpagesize, bytes);
3186}
3187
3188#endif /* DEFINE_VALLOC */
3189
3190#ifdef DEFINE_PVALLOC
3191
3192/*
3193  pvalloc just invokes valloc for the nearest pagesize
3194  that will accommodate the request
3195*/
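/*
  Editorial worked example (hypothetical 4096-byte page): a request of
  5000 bytes is rounded up to two pages, since
  (5000 + 4095) & ~4095 == 9095 & ~4095 == 8192.
*/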
3196
3197
3198#if __STD_C
3199Void_t* pvALLOc(RARG size_t bytes)
3200#else
3201Void_t* pvALLOc(RARG bytes) RDECL size_t bytes;
3202#endif
3203{
3204  size_t pagesize = malloc_getpagesize;
3205  return mEMALIGn (RCALL pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3206}
3207
3208#endif /* DEFINE_PVALLOC */
3209
3210#ifdef DEFINE_CALLOC
3211
3212/*
3213
3214  calloc calls malloc, then zeroes out the allocated chunk.
3215
3216*/
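/*
  Editorial note: the product n * elem_size below is only partially
  guarded by the (long)n < 0 test; it is not checked for multiplication
  overflow, so a large n paired with a large elem_size can wrap around
  and under-allocate.  A caller-side guard, sketched here using nothing
  beyond standard C, looks like:
*/
#if 0 /* illustrative only */
#include <stddef.h>
#include <stdlib.h>

void* checked_calloc(size_t n, size_t elem_size)
{
  /* refuse requests whose byte count would wrap around */
  if (elem_size != 0 && n > (size_t)-1 / elem_size)
    return NULL;
  return calloc(n, elem_size);
}
#endif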
3217
3218#if __STD_C
3219Void_t* cALLOc(RARG size_t n, size_t elem_size)
3220#else
3221Void_t* cALLOc(RARG n, elem_size) RDECL size_t n; size_t elem_size;
3222#endif
3223{
3224  mchunkptr p;
3225  INTERNAL_SIZE_T csz;
3226
3227  INTERNAL_SIZE_T sz = n * elem_size;
3228
3229#if MORECORE_CLEARS
3230  mchunkptr oldtop;
3231  INTERNAL_SIZE_T oldtopsize;
3232#endif
3233  Void_t* mem;
3234
3235  /* reject negative counts up front, before MALLOC_LOCK is taken */
3236  if ((long)n < 0) return 0;
3237
3238  /* check if expand_top called, in which case don't need to clear */
3239#if MORECORE_CLEARS
3240  MALLOC_LOCK;
3241  oldtop = top;
3242  oldtopsize = chunksize(top);
3243#endif
3244
3245  mem = mALLOc (RCALL sz);
3246
3247  if (mem == 0) 
3248  {
3249#if MORECORE_CLEARS
3250    MALLOC_UNLOCK;
3251#endif
3252    return 0;
3253  }
3254  else
3255  {
3256    p = mem2chunk(mem);
3257
3258    /* Two optional cases in which clearing is not necessary */
3259
3260
3261#if HAVE_MMAP
3262    if (chunk_is_mmapped(p))
3263    {
3264#if MORECORE_CLEARS
3265      MALLOC_UNLOCK;
3266#endif
3267      return mem;
3268    }
3269#endif
3270
3271    csz = chunksize(p);
3272
3273#if MORECORE_CLEARS
3274    if (p == oldtop && csz > oldtopsize) 
3275    {
3276      /* clear only the bytes from non-freshly-sbrked memory */
3277      csz = oldtopsize;
3278    }
3279    MALLOC_UNLOCK;
3280#endif
3281
3282    MALLOC_ZERO(mem, csz - SIZE_SZ);
3283    return mem;
3284  }
3285}
3286
3287#endif /* DEFINE_CALLOC */
3288
3289#ifdef DEFINE_CFREE
3290
3291/*
3292 
3293  cfree just calls free. It is needed/defined on some systems
3294  that pair it with calloc, presumably for odd historical reasons.
3295
3296*/
3297
3298#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3299#if !defined(INTERNAL_NEWLIB) || !defined(_REENT_ONLY)
3300#if __STD_C
3301void cfree(Void_t *mem)
3302#else
3303void cfree(mem) Void_t *mem;
3304#endif
3305{
3306#ifdef INTERNAL_NEWLIB
3307  fREe(_REENT, mem);
3308#else
3309  fREe(mem);
3310#endif
3311}
3312#endif
3313#endif
3314
3315#endif /* DEFINE_CFREE */
3316
3317#ifdef DEFINE_FREE
3318
3319/*
3320
3321    Malloc_trim gives memory back to the system (via negative
3322    arguments to sbrk) if there is unused memory at the `high' end of
3323    the malloc pool. You can call this after freeing large blocks of
3324    memory to potentially reduce the system-level memory requirements
3325    of a program. However, it cannot guarantee to reduce memory usage. Under
3326    some allocation patterns, some large free blocks of memory will be
3327    locked between two used chunks, so they cannot be given back to
3328    the system.
3329
3330    The `pad' argument to malloc_trim represents the amount of free
3331    trailing space to leave untrimmed. If this argument is zero,
3332    only the minimum amount of memory to maintain internal data
3333    structures will be left (one page or less). Non-zero arguments
3334    can be supplied to maintain enough trailing space to service
3335    future expected allocations without having to re-obtain memory
3336    from the system.
3337
3338    Malloc_trim returns 1 if it actually released any memory, else 0.
3339
3340*/
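/*
  Editorial worked example (hypothetical numbers, assuming a 32-bit
  build where MINSIZE is 16): with a 4096-byte page, pad == 0 and a
  20000-byte top chunk, the code below computes

    extra = ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096
          = (24079 / 4096 - 1) * 4096
          = (5 - 1) * 4096
          = 16384

  so four whole pages are released and 3616 bytes stay in top: the top
  chunk always keeps at least pad + MINSIZE bytes plus any sub-page
  remainder.
*/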
3341
3342#if __STD_C
3343int malloc_trim(RARG size_t pad)
3344#else
3345int malloc_trim(RARG pad) RDECL size_t pad;
3346#endif
3347{
3348  long  top_size;        /* Amount of top-most memory */
3349  long  extra;           /* Amount to release */
3350  char* current_brk;     /* address returned by pre-check sbrk call */
3351  char* new_brk;         /* address returned by negative sbrk call */
3352
3353  unsigned long pagesz = malloc_getpagesize;
3354
3355  MALLOC_LOCK;
3356
3357  top_size = chunksize(top);
3358  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3359
3360  if (extra < (long)pagesz)  /* Not enough memory to release */
3361  {
3362    MALLOC_UNLOCK;
3363    return 0;
3364  }
3365
3366  else
3367  {
3368    /* Test to make sure no one else called sbrk */
3369    current_brk = (char*)(MORECORE (0));
3370    if (current_brk != (char*)(top) + top_size)
3371    {
3372      MALLOC_UNLOCK;
3373      return 0;     /* Apparently we don't own memory; must fail */
3374    }
3375
3376    else
3377    {
3378      new_brk = (char*)(MORECORE (-extra));
3379     
3380      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3381      {
3382        /* Try to figure out what we have */
3383        current_brk = (char*)(MORECORE (0));
3384        top_size = current_brk - (char*)top;
3385        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3386        {
3387          sbrked_mem = current_brk - sbrk_base;
3388          set_head(top, top_size | PREV_INUSE);
3389        }
3390        check_chunk(top);
3391        MALLOC_UNLOCK;
3392        return 0; 
3393      }
3394
3395      else
3396      {
3397        /* Success. Adjust top accordingly. */
3398        set_head(top, (top_size - extra) | PREV_INUSE);
3399        sbrked_mem -= extra;
3400        check_chunk(top);
3401        MALLOC_UNLOCK;
3402        return 1;
3403      }
3404    }
3405  }
3406}
3407
3408#endif /* DEFINE_FREE */
3409
3410#ifdef DEFINE_MALLOC_USABLE_SIZE
3411
3412/*
3413  malloc_usable_size:
3414
3415    This routine tells you how many bytes you can actually use in an
3416    allocated chunk, which may be more than you requested (although
3417    often not). You can use this many bytes without worrying about
3418    overwriting other allocated objects. Not a particularly great
3419    programming practice, but still sometimes useful.
3420
3421*/
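/*
  Editorial usage sketch (sizes are hypothetical and build-dependent):
*/
#if 0 /* illustrative only */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
  char* p = malloc(13);
  if (p != NULL)
  {
    /* reports the usable size of the chunk; on a typical 32-bit build
       with 8-byte alignment this prints 20 for a 13-byte request
       (chunk size 24 minus SIZE_SZ of 4) */
    printf("usable bytes: %u\n", (unsigned)malloc_usable_size(p));
    free(p);
  }
  return 0;
}
#endif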
3422
3423#if __STD_C
3424size_t malloc_usable_size(RARG Void_t* mem)
3425#else
3426size_t malloc_usable_size(RARG mem) RDECL Void_t* mem;
3427#endif
3428{
3429  mchunkptr p;
3430  if (mem == 0)
3431    return 0;
3432  else
3433  {
3434    p = mem2chunk(mem);
3435    if(!chunk_is_mmapped(p))
3436    {
3437      if (!inuse(p)) return 0;
3438#if DEBUG
3439      MALLOC_LOCK;
3440      check_inuse_chunk(p);
3441      MALLOC_UNLOCK;
3442#endif
3443      return chunksize(p) - SIZE_SZ;
3444    }
3445    return chunksize(p) - 2*SIZE_SZ;
3446  }
3447}
3448
3449#endif /* DEFINE_MALLOC_USABLE_SIZE */
3450
3451#ifdef DEFINE_MALLINFO
3452
3453/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3454
3455STATIC void malloc_update_mallinfo() 
3456{
3457  int i;
3458  mbinptr b;
3459  mchunkptr p;
3460#if DEBUG
3461  mchunkptr q;
3462#endif
3463
3464  INTERNAL_SIZE_T avail = chunksize(top);
3465  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3466
3467  for (i = 1; i < NAV; ++i)
3468  {
3469    b = bin_at(i);
3470    for (p = last(b); p != b; p = p->bk) 
3471    {
3472#if DEBUG
3473      check_free_chunk(p);
3474      for (q = next_chunk(p); 
3475           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE; 
3476           q = next_chunk(q))
3477        check_inuse_chunk(q);
3478#endif
3479      avail += chunksize(p);
3480      navail++;
3481    }
3482  }
3483
3484  current_mallinfo.ordblks = navail;
3485  current_mallinfo.uordblks = sbrked_mem - avail;
3486  current_mallinfo.fordblks = avail;
3487#if HAVE_MMAP
3488  current_mallinfo.hblks = n_mmaps;
3489  current_mallinfo.hblkhd = mmapped_mem;
3490#endif
3491  current_mallinfo.keepcost = chunksize(top);
3492
3493}
3494
3495#else /* ! DEFINE_MALLINFO */
3496
3497#if __STD_C
3498extern void malloc_update_mallinfo(void);
3499#else
3500extern void malloc_update_mallinfo();
3501#endif
3502
3503#endif /* ! DEFINE_MALLINFO */
3504
3505#ifdef DEFINE_MALLOC_STATS
3506
3507/*
3508
3509  malloc_stats:
3510
3511    Prints on stderr the amount of space obtained from the system (both
3512    via sbrk and mmap), the maximum amount (which may be more than
3513    current if malloc_trim and/or munmap got called), the maximum
3514    number of simultaneous mmap regions used, and the current number
3515    of bytes allocated via malloc (or realloc, etc) but not yet
3516    freed. (Note that this is the number of bytes allocated, not the
3517    number requested. It will be larger than the number requested
3518    because of alignment and bookkeeping overhead.)
3519
3520*/
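/*
  Editorial note: given the %10u format strings below, the report looks
  roughly like this (values are hypothetical):

      max system bytes =     135168
      system bytes     =     135168
      in use bytes     =      67392
      max mmap regions =          2
*/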
3521
3522#if __STD_C
3523void malloc_stats(RONEARG)
3524#else
3525void malloc_stats(RONEARG) RDECL
3526#endif
3527{
3528  unsigned long local_max_total_mem;
3529  unsigned long local_sbrked_mem;
3530  struct mallinfo local_mallinfo;
3531#if HAVE_MMAP
3532  unsigned long local_mmapped_mem, local_max_n_mmaps;
3533#endif
3534  FILE *fp;
3535
3536  MALLOC_LOCK;
3537  malloc_update_mallinfo();
3538  local_max_total_mem = max_total_mem;
3539  local_sbrked_mem = sbrked_mem;
3540  local_mallinfo = current_mallinfo;
3541#if HAVE_MMAP
3542  local_mmapped_mem = mmapped_mem;
3543  local_max_n_mmaps = max_n_mmaps;
3544#endif
3545  MALLOC_UNLOCK;
3546
3547#ifdef INTERNAL_NEWLIB
3548  fp = _stderr_r(reent_ptr);
3549#define fprintf fiprintf
3550#else
3551  fp = stderr;
3552#endif
3553
3554  fprintf(fp, "max system bytes = %10u\n", 
3555          (unsigned int)(local_max_total_mem));
3556#if HAVE_MMAP
3557  fprintf(fp, "system bytes     = %10u\n", 
3558          (unsigned int)(local_sbrked_mem + local_mmapped_mem));
3559  fprintf(fp, "in use bytes     = %10u\n", 
3560          (unsigned int)(local_mallinfo.uordblks + local_mmapped_mem));
3561#else
3562  fprintf(fp, "system bytes     = %10u\n", 
3563          (unsigned int)local_sbrked_mem);
3564  fprintf(fp, "in use bytes     = %10u\n", 
3565          (unsigned int)local_mallinfo.uordblks);
3566#endif
3567#if HAVE_MMAP
3568  fprintf(fp, "max mmap regions = %10u\n", 
3569          (unsigned int)local_max_n_mmaps);
3570#endif
3571}
3572
3573#endif /* DEFINE_MALLOC_STATS */
3574
3575#ifdef DEFINE_MALLINFO
3576
3577/*
3578  mallinfo returns a copy of the updated current mallinfo.
3579*/
3580
3581#if __STD_C
3582struct mallinfo mALLINFo(RONEARG)
3583#else
3584struct mallinfo mALLINFo(RONEARG) RDECL
3585#endif
3586{
3587  struct mallinfo ret;
3588
3589  MALLOC_LOCK;
3590  malloc_update_mallinfo();
3591  ret = current_mallinfo;
3592  MALLOC_UNLOCK;
3593  return ret;
3594}
3595
3596#endif /* DEFINE_MALLINFO */
3597
3598#ifdef DEFINE_MALLOPT
3599
3600/*
3601  mallopt:
3602
3603    mallopt is the general SVID/XPG interface to tunable parameters.
3604    The format is to provide a (parameter-number, parameter-value) pair.
3605    mallopt then sets the corresponding parameter to the argument
3606    value if it can (i.e., so long as the value is meaningful),
3607    and returns 1 if successful else 0.
3608
3609    See descriptions of tunable parameters above.
3610
3611*/
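/*
  Editorial usage sketch: the parameter numbers are the M_* constants
  defined earlier in this file; the public entry point is assumed to be
  the unprefixed mallopt.
*/
#if 0 /* illustrative only */
void tune_allocator(void)
{
  /* raise the trim threshold to 256 KiB; returns 1 on success, 0 if
     the parameter is unknown or the value is rejected */
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);

  /* keep 64 KiB of extra headroom whenever sbrk extends the heap */
  mallopt(M_TOP_PAD, 64 * 1024);
}
#endif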
3612
3613#if __STD_C
3614int mALLOPt(RARG int param_number, int value)
3615#else
3616int mALLOPt(RARG param_number, value) RDECL int param_number; int value;
3617#endif
3618{
3619  MALLOC_LOCK;
3620  switch(param_number) 
3621  {
3622    case M_TRIM_THRESHOLD:
3623      trim_threshold = value; MALLOC_UNLOCK; return 1; 
3624    case M_TOP_PAD:
3625      top_pad = value; MALLOC_UNLOCK; return 1; 
3626    case M_MMAP_THRESHOLD:
3627#if HAVE_MMAP
3628      mmap_threshold = value;
3629#endif
3630      MALLOC_UNLOCK;
3631      return 1;
3632    case M_MMAP_MAX:
3633#if HAVE_MMAP
3634      n_mmaps_max = value; MALLOC_UNLOCK; return 1;
3635#else
3636      MALLOC_UNLOCK; return value == 0;
3637#endif
3638
3639    default:
3640      MALLOC_UNLOCK;
3641      return 0;
3642  }
3643}
3644
3645#endif /* DEFINE_MALLOPT */
3646
3647/*
3648
3649History:
3650
3651    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
3652      * return null for negative arguments
3653      * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
3654         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3655          (e.g. WIN32 platforms)
3656         * Cleaned up header file inclusion for WIN32 platforms
3657         * Cleaned up code to avoid Microsoft Visual C++ compiler complaints
3658         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3659           memory allocation routines
3660         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3661         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3662           usage of 'assert' in non-WIN32 code
3663         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3664           avoid infinite loop
3665      * Always call 'fREe()' rather than 'free()'
3666
3667    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
3668      * Fixed ordering problem with boundary-stamping
3669
3670    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
3671      * Added pvalloc, as recommended by H.J. Liu
3672      * Added 64bit pointer support mainly from Wolfram Gloger
3673      * Added anonymously donated WIN32 sbrk emulation
3674      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3675      * malloc_extend_top: fix mask error that caused wastage after
3676        foreign sbrks
3677      * Add linux mremap support code from HJ Liu
3678   
3679    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
3680      * Integrated most documentation with the code.
3681      * Add support for mmap, with help from
3682        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3683      * Use last_remainder in more cases.
3684      * Pack bins using idea from colin@nyx10.cs.du.edu
3685      * Use ordered bins instead of best-fit threshold
3686      * Eliminate block-local decls to simplify tracing and debugging.
3687      * Support another case of realloc via move into top
3688      * Fix error occurring when initial sbrk_base not word-aligned.
3689      * Rely on page size for units instead of SBRK_UNIT to
3690        avoid surprises about sbrk alignment conventions.
3691      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3692        (raymond@es.ele.tue.nl) for the suggestion.
3693      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3694      * More precautions for cases where other routines call sbrk,
3695        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3696      * Added macros etc., allowing use in linux libc from
3697        H.J. Lu (hjl@gnu.ai.mit.edu)
3698      * Inverted this history list
3699
3700    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
3701      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3702      * Removed all preallocation code since under current scheme
3703        the work required to undo bad preallocations exceeds
3704        the work saved in good cases for most test programs.
3705      * No longer use return list or unconsolidated bins since
3706        no scheme using them consistently outperforms those that don't
3707        given above changes.
3708      * Use best fit for very large chunks to prevent some worst-cases.
3709      * Added some support for debugging
3710
3711    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
3712      * Removed footers when chunks are in use. Thanks to
3713        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3714
3715    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
3716      * Added malloc_trim, with help from Wolfram Gloger
3717        (wmglo@Dent.MED.Uni-Muenchen.DE).
3718
3719    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
3720
3721    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
3722      * realloc: try to expand in both directions
3723      * malloc: swap order of clean-bin strategy;
3724      * realloc: only conditionally expand backwards
3725      * Try not to scavenge used bins
3726      * Use bin counts as a guide to preallocation
3727      * Occasionally bin return list chunks in first scan
3728      * Add a few optimizations from colin@nyx10.cs.du.edu
3729
3730    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
3731      * faster bin computation & slightly different binning
3732      * merged all consolidations to one part of malloc proper
3733         (eliminating old malloc_find_space & malloc_clean_bin)
3734      * Scan 2 returns chunks (not just 1)
3735      * Propagate failure in realloc if malloc returns 0
3736      * Add stuff to allow compilation on non-ANSI compilers
3737          from kpv@research.att.com
3738     
3739    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
3740      * removed potential for odd address access in prev_chunk
3741      * removed dependency on getpagesize.h
3742      * misc cosmetics and a bit more internal documentation
3743      * anticosmetics: mangled names in macros to evade debugger strangeness
3744      * tested on sparc, hp-700, dec-mips, rs6000
3745          with gcc & native cc (hp, dec only) allowing
3746          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3747
3748    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
3749      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3750         structure of old version,  but most details differ.)
3751
3752*/
3753