source: SVN/rincon/u-boot/common/dlmalloc.c @ 55

#include <common.h>

#if 0   /* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://g.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc.  Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

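  A minimal usage sketch of the routines above (illustrative only; the
  sizes below are arbitrary and not part of the original documentation):

    char* buf = (char*) malloc(100);      (at least 100 usable bytes)
    if (buf != NULL)
      buf = (char*) realloc(buf, 200);    (contents preserved up to 100)
    void* aligned = memalign(64, 256);    (64-byte aligned, 256 bytes)
    free(aligned);
    free(buf);
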
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY               (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                 (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize        (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T           (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB      (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                     (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H            (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H         (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                  (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE          (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS           (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX             (default: undefined)
     Prefix all public routines with the string 'dl'.  Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.


*/




/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#ifdef DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */

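/*
  For illustration (hypothetical sketch): with the switch defined,

    void* p = malloc(10);
    p = realloc(p, 0);      (p is freed and realloc returns NULL)

  whereas without it, realloc(p, 0) returns a fresh minimum-sized
  chunk that must still eventually be passed to free().
*/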

/*
  WIN32 causes an emulation of sbrk to be compiled in
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif
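
/*
  Illustration: both macros operate one INTERNAL_SIZE_T word at a time,
  which is why callers only hand them sizes that are multiples of
  sizeof(INTERNAL_SIZE_T). For example, realloc and calloc below use
  them roughly as

    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
    MALLOC_ZERO(mem, csz - SIZE_SZ);

  (SIZE_SZ, defined further below, is sizeof(INTERNAL_SIZE_T).)
*/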


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif


/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif
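
/*
  Illustrative use of mallinfo() (a sketch, not part of the original
  sources):

    struct mallinfo mi = mallinfo();
    printf("arena=%d in-use=%d free=%d trimmable=%d\n",
           mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
*/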

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4


#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
      to keep before releasing via malloc_trim in free().

      Automatic trimming is mainly useful in long-lived programs.
      Because trimming via sbrk can be slow on some systems, and can
      sometimes be wasteful (in cases where programs immediately
      afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing this much memory.

      The trim threshold and the mmap control parameters (see below)
      can be traded off with one another. Trimming and mmapping are
      two different ways of releasing unused memory back to the
      system. Between these two, it is often possible to keep
      system-level demands of a long-lived program down to a bare
      minimum. For example, in one test suite of sessions measuring
      the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.

      If you are using this malloc in a long-lived program, it should
      pay to experiment with these values.  As a rough guide, you
      might set to a value close to the average size of a process
      (program) running on your system.  Releasing this much memory
      would allow such a process to run in memory.  Generally, it's
      worth it to tune for trimming rather than memory mapping when a
      program undergoes phases where several large chunks are
      allocated and released in ways that can reuse each other's
      storage, perhaps mixed with phases where there are no such
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set to (unsigned long)(-1);


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.  However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

         1. The space cannot be reclaimed, consolidated, and then
            used to service later requests, as happens with normal chunks.
         2. It can lead to more wastage because of mmap page alignment
            requirements
         3. It causes malloc performance to be more dependent on host
            system memory management support routines which may vary in
            implementation quality and may impose arbitrary
            limitations. Generally, servicing a request via normal
            malloc steps is faster than going through a system's mmap.

      All together, these considerations should lead you to use mmap
      only for relatively large requests.


*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

         1. Some systems have a limited number of internal tables for
            use by mmap.
         2. In most systems, overreliance on mmap can degrade overall
            performance.
         3. If a program allocates many large regions, it is probably
            better off using normal sbrk-based allocation routines that
            can reclaim and reallocate normal heap memory. Using a
            small value allows transition into this mode after the
            first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/
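
/*
  Tuning sketch combining the four parameters above (the values are
  illustrative, not recommendations):

    mallopt(M_TRIM_THRESHOLD, 64 * 1024);    (trim when >64K unused at top)
    mallopt(M_TOP_PAD,        16 * 1024);    (over-ask sbrk by 16K)
    mallopt(M_MMAP_THRESHOLD, 192 * 1024);   (mmap requests >= 192K)
    mallopt(M_MMAP_MAX,       32);           (at most 32 mmapped chunks)
*/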


/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
      Useful to quickly avoid procedure declaration conflicts and linker
      symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */


/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */
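
/*
  A hypothetical override (my_sbrk is not part of this file): to plug
  in a different core allocator, define MORECORE before this point,
  e.g.

    extern void* my_sbrk(ptrdiff_t increment);
    #define MORECORE my_sbrk
    #define MORECORE_CLEARS 0     (if my_sbrk does not zero memory)

  MORECORE must behave like sbrk: extend the arena by `increment'
  bytes and return the old arena end, or return MORECORE_FAILURE.
*/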

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc          dlcalloc
#define fREe            dlfree
#define mALLOc          dlmalloc
#define mEMALIGn        dlmemalign
#define rEALLOc         dlrealloc
#define vALLOc          dlvalloc
#define pvALLOc         dlpvalloc
#define mALLINFo        dlmallinfo
#define mALLOPt         dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt
#endif /* USE_DL_PREFIX */

#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */
#else                           /* Moved to malloc.h */

#include <malloc.h>
#if 0
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif  /* 0 */

#endif  /* 0 */                 /* Moved to malloc.h */

DECLARE_GLOBAL_DATA_PTR;

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/
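
/*
  The emulation follows sbrk conventions (a sketch of the behavior
  implemented by wsbrk below):

    void* old_break = wsbrk(4096);    (grow the arena by one page)
    wsbrk(-4096);                     (trim it back again)
    void* cur_break = wsbrk(0);       (query the current break)

  On failure wsbrk returns (void*)-1, matching MORECORE_FAILURE.
*/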


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
        GmListElement* next;
        void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
        GmListElement* this;
        this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
        assert (this);
        if (this)
        {
                this->base = bas;
                this->next = head;
                head = this;
        }
        return this;
}

void gcleanup ()
{
        BOOL rval;
        assert ( (head == NULL) || (head->base == (void*)gAddressBase));
        if (gAddressBase && (gNextAddress - gAddressBase))
        {
                rval = VirtualFree ((void*)gAddressBase,
                                                        gNextAddress - gAddressBase,
                                                        MEM_DECOMMIT);
        assert (rval);
        }
        while (head)
        {
                GmListElement* next = head->next;
                rval = VirtualFree (head->base, 0, MEM_RELEASE);
                assert (rval);
                LocalFree (head);
                head = next;
        }
}

static
void* findRegion (void* start_address, unsigned long size)
{
        MEMORY_BASIC_INFORMATION info;
        if (size >= TOP_MEMORY) return NULL;

        while ((unsigned long)start_address + size < TOP_MEMORY)
        {
                VirtualQuery (start_address, &info, sizeof (info));
                if ((info.State == MEM_FREE) && (info.RegionSize >= size))
                        return start_address;
                else
                {
                        /* Requested region is not available so see if the */
                        /* next region is available.  Set 'start_address' */
                        /* to the next region and call 'VirtualQuery()' */
                        /* again. */

                        start_address = (char*)info.BaseAddress + info.RegionSize;

                        /* Make sure we start looking for the next region */
                        /* on the *next* 64K boundary.  Otherwise, even if */
                        /* the new region is free according to */
                        /* 'VirtualQuery()', the subsequent call to */
                        /* 'VirtualAlloc()' (which follows the call to */
                        /* this routine in 'wsbrk()') will round *down* */
                        /* the requested address to a 64K boundary which */
                        /* we already know is an address in the */
                        /* unavailable region.  Thus, the subsequent call */
                        /* to 'VirtualAlloc()' will fail and bring us back */
                        /* here, causing us to go into an infinite loop. */

                        start_address =
                                (void *) AlignPage64K((unsigned long) start_address);
                }
        }
        return NULL;

}


void* wsbrk (long size)
{
        void* tmp;
        if (size > 0)
        {
                if (gAddressBase == 0)
                {
                        gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
                        gNextAddress = gAddressBase =
                                (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
                                                                                        MEM_RESERVE, PAGE_NOACCESS);
                } else if (AlignPage (gNextAddress + size) > (gAddressBase +
gAllocatedSize))
                {
                        long new_size = max (NEXT_SIZE, AlignPage (size));
                        void* new_address = (void*)(gAddressBase+gAllocatedSize);
                        do
                        {
                                new_address = findRegion (new_address, new_size);

                                if (new_address == 0)
                                        return (void*)-1;

                                gAddressBase = gNextAddress =
                                        (unsigned int)VirtualAlloc (new_address, new_size,
                                                                                                MEM_RESERVE, PAGE_NOACCESS);
                                /* repeat in case of race condition */
                                /* The region that we found has been snagged */
                                /* by another thread */
                        }
                        while (gAddressBase == 0);

                        assert (new_address == (void*)gAddressBase);

                        gAllocatedSize = new_size;

                        if (!makeGmListElement ((void*)gAddressBase))
                                return (void*)-1;
                }
                if ((size + gNextAddress) > AlignPage (gNextAddress))
                {
                        void* res;
                        res = VirtualAlloc ((void*)AlignPage (gNextAddress),
                                                                (size + gNextAddress -
                                                                 AlignPage (gNextAddress)),
                                                                MEM_COMMIT, PAGE_READWRITE);
                        if (res == 0)
                                return (void*)-1;
                }
                tmp = (void*)gNextAddress;
                gNextAddress = (unsigned int)tmp + size;
                return tmp;
        }
        else if (size < 0)
        {
                unsigned int alignedGoal = AlignPage (gNextAddress + size);
                /* Trim by releasing the virtual memory */
                if (alignedGoal >= gAddressBase)
                {
                        VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
                                                 MEM_DECOMMIT);
                        gNextAddress = gNextAddress + size;
                        return (void*)gNextAddress;
                }
                else
                {
                        VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
                                                 MEM_DECOMMIT);
                        gNextAddress = gAddressBase;
                        return (void*)-1;
                }
        }
        else
        {
                return (void*)gNextAddress;
        }
}

#endif



/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
        trailing size field since there is no
        next contiguous chunk that would have to index off it. (After
        initialization, `top' is forced to always exist.  If it would
        become less than MINSIZE bytes long, it is replenished via
        malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        never merged or traversed from any other chunk, they have no
        foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    *  Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/

/*  sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
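
/*
  Worked example, assuming 4-byte pointers and 4-byte INTERNAL_SIZE_T
  (so SIZE_SZ == 4, MALLOC_ALIGNMENT == 8, MALLOC_ALIGN_MASK == 7,
  MINSIZE == 16):

    request2size(0)  == 16                       (minimum-sized chunk)
    request2size(20) == (20 + 4 + 7) & ~7 == 24
    request2size(25) == (25 + 4 + 7) & ~7 == 32

  i.e. the 4-byte size/status word is added and the total is rounded
  up to a multiple of 8, never falling below MINSIZE.
*/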

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)




/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))




/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
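
/*
  Note the indirection: chunk p is recorded as in use via the
  PREV_INUSE bit of the chunk that physically follows it. A sketch
  (not the literal free() sequence) of marking p free, using
  chunksize and set_foot from the next section:

    INTERNAL_SIZE_T sz = chunksize(p);
    clear_inuse_bit_at_offset(p, sz);   (clear PREV_INUSE in next chunk)
    set_foot(p, sz);                    (trailing size for coalescing)
*/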




/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))





/*
   Bins

    The bins, `av_' are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

    Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/
1443
1444#define NAV             128   /* number of bins */
1445
1446typedef struct malloc_chunk* mbinptr;
1447
1448/* access macros */
1449
1450#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1451#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1452#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1453
1454/*
1455   The first 2 bins are never indexed. The corresponding av_ cells are instead
1456   used for bookkeeping. This is not to save space, but to simplify
1457   indexing, maintain locality, and avoid some initialization tests.
1458*/
1459
1460#define top            (av_[2])          /* The topmost chunk */
1461#define last_remainder (bin_at(1))       /* remainder from last split */
1462
1463
1464/*
1465   Because top initially points to its own bin with initial
1466   zero size, thus forcing extension on the first malloc request,
1467   we avoid having any special code in malloc to check whether
1468   it even exists yet. But we still need to check in malloc_extend_top.
1469*/
1470
1471#define initial_top    ((mchunkptr)(bin_at(0)))
1472
1473/* Helper macro to initialize bins */
1474
1475#define IAV(i)  bin_at(i), bin_at(i)
1476
1477static mbinptr av_[NAV * 2 + 2] = {
1478 0, 0,
1479 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
1480 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
1481 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
1482 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
1483 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
1484 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
1485 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
1486 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
1487 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
1488 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
1489 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
1490 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
1491 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
1492 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1493 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1494 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1495};
1496
1497void malloc_bin_reloc (void)
1498{
1499        unsigned long *p = (unsigned long *)(&av_[2]);
1500        int i;
1501        for (i=2; i<(sizeof(av_)/sizeof(mbinptr)); ++i) {
1502                *p++ += gd->reloc_off;
1503        }
1504}
1505
1506
1507/* field-extraction macros */
1508
1509#define first(b) ((b)->fd)
1510#define last(b)  ((b)->bk)
1511
1512/*
1513  Indexing into bins
1514*/
1515
1516#define bin_index(sz)                                                          \
1517(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
1518 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
1519 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
1520 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
1521 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
1522 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1523                                          126)
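/*
  A few worked values of bin_index above (small bins step by 8 bytes,
  larger bins approximately logarithmically); disabled, for
  illustration only:
*/
#if 0
static void bin_index_example(void)
{
  assert(bin_index(16)   ==  2);  /* small bin: 16 >> 3 */
  assert(bin_index(504)  == 63);  /* last small bin     */
  assert(bin_index(512)  == 64);  /* 56 + (512 >> 6)    */
  assert(bin_index(4096) == 99);  /* 91 + (4096 >> 9)   */
}
#endif  /* 0 */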
1524/*
1525  bins for chunks < 512 are all spaced 8 bytes apart, and hold
1526  identically sized chunks. This is exploited in malloc.
1527*/
1528
1529#define MAX_SMALLBIN         63
1530#define MAX_SMALLBIN_SIZE   512
1531#define SMALLBIN_WIDTH        8
1532
1533#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
1534
1535/*
1536   Requests are `small' if both the corresponding and the next bin are small
1537*/
1538
1539#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1540
1541
1542
1543/*
1544    To help compensate for the large number of bins, a one-level index
1545    structure is used for bin-by-bin searching.  `binblocks' is a
1546    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1547    have any (possibly) non-empty bins, so they can be skipped over
1548    all at once during traversals. The bits are NOT always
1549    cleared as soon as all bins in a block are empty, but instead only
1550    when all are noticed to be empty during traversal in malloc.
1551*/
1552
1553#define BINBLOCKWIDTH     4   /* bins per block */
1554
1555#define binblocks_r     ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
1556#define binblocks_w     (av_[1])
1557
1558/* bin<->block macros */
1559
1560#define idx2binblock(ix)    ((unsigned)1 << (ix / BINBLOCKWIDTH))
1561#define mark_binblock(ii)   (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
1562#define clear_binblock(ii)  (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))
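/*
  Sketch of the bookkeeping above: one binblocks bit covers
  BINBLOCKWIDTH consecutive bins, so whole blocks of surely-empty bins
  can be skipped during the search in malloc; disabled, for
  illustration only:
*/
#if 0
static void binblock_example(int ix)
{
  mark_binblock(ix);                           /* block may be nonempty */
  assert(binblocks_r & idx2binblock(ix));
  clear_binblock(ix);                          /* cleared once a malloc */
  assert(!(binblocks_r & idx2binblock(ix)));   /* traversal finds the   */
}                                              /* whole block empty     */
#endif  /* 0 */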
1563
1564
1565
1566
1567
1568/*  Other static bookkeeping data */
1569
1570/* variables holding tunable values */
1571
1572static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
1573static unsigned long top_pad          = DEFAULT_TOP_PAD;
1574static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
1575static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
1576
1577/* The first value returned from sbrk */
1578static char* sbrk_base = (char*)(-1);
1579
1580/* The maximum memory obtained from system via sbrk */
1581static unsigned long max_sbrked_mem = 0;
1582
1583/* The maximum via either sbrk or mmap */
1584static unsigned long max_total_mem = 0;
1585
1586/* internal working copy of mallinfo */
1587static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1588
1589/* The total memory obtained from system via sbrk */
1590#define sbrked_mem  (current_mallinfo.arena)
1591
1592/* Tracking mmaps */
1593
1594#if 0
1595static unsigned int n_mmaps = 0;
1596#endif  /* 0 */
1597static unsigned long mmapped_mem = 0;
1598#if HAVE_MMAP
1599static unsigned int max_n_mmaps = 0;
1600static unsigned long max_mmapped_mem = 0;
1601#endif
1602
1603
1604
1605/*
1606  Debugging support
1607*/
1608
1609#ifdef DEBUG
1610
1611
1612/*
1613  These routines make a number of assertions about the states
1614  of data structures that should be true at all times. If any
1615  are not true, it's very likely that a user program has somehow
1616  trashed memory. (It's also possible that there is a coding error
1617  in malloc, in which case please report it!)
1618*/
1619
1620#if __STD_C
1621static void do_check_chunk(mchunkptr p)
1622#else
1623static void do_check_chunk(p) mchunkptr p;
1624#endif
1625{
1626#if 0   /* causes warnings because assert() is off */
1627  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1628#endif  /* 0 */
1629
1630  /* No checkable chunk is mmapped */
1631  assert(!chunk_is_mmapped(p));
1632
1633  /* Check for legal address ... */
1634  assert((char*)p >= sbrk_base);
1635  if (p != top)
1636    assert((char*)p + sz <= (char*)top);
1637  else
1638    assert((char*)p + sz <= sbrk_base + sbrked_mem);
1639
1640}
1641
1642
1643#if __STD_C
1644static void do_check_free_chunk(mchunkptr p)
1645#else
1646static void do_check_free_chunk(p) mchunkptr p;
1647#endif
1648{
1649  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1650#if 0   /* causes warnings because assert() is off */
1651  mchunkptr next = chunk_at_offset(p, sz);
1652#endif  /* 0 */
1653
1654  do_check_chunk(p);
1655
1656  /* Check whether it claims to be free ... */
1657  assert(!inuse(p));
1658
1659  /* Unless a special marker, must have OK fields */
1660  if ((long)sz >= (long)MINSIZE)
1661  {
1662    assert((sz & MALLOC_ALIGN_MASK) == 0);
1663    assert(aligned_OK(chunk2mem(p)));
1664    /* ... matching footer field */
1665    assert(next->prev_size == sz);
1666    /* ... and is fully consolidated */
1667    assert(prev_inuse(p));
1668    assert (next == top || inuse(next));
1669
1670    /* ... and has minimally sane links */
1671    assert(p->fd->bk == p);
1672    assert(p->bk->fd == p);
1673  }
1674  else /* markers are always of size SIZE_SZ */
1675    assert(sz == SIZE_SZ);
1676}
1677
1678#if __STD_C
1679static void do_check_inuse_chunk(mchunkptr p)
1680#else
1681static void do_check_inuse_chunk(p) mchunkptr p;
1682#endif
1683{
1684  mchunkptr next = next_chunk(p);
1685  do_check_chunk(p);
1686
1687  /* Check whether it claims to be in use ... */
1688  assert(inuse(p));
1689
1690  /* ... and is surrounded by OK chunks.
1691    Since more things can be checked with free chunks than inuse ones,
1692    if an inuse chunk borders them and debug is on, it's worth doing them.
1693  */
1694  if (!prev_inuse(p))
1695  {
1696    mchunkptr prv = prev_chunk(p);
1697    assert(next_chunk(prv) == p);
1698    do_check_free_chunk(prv);
1699  }
1700  if (next == top)
1701  {
1702    assert(prev_inuse(next));
1703    assert(chunksize(next) >= MINSIZE);
1704  }
1705  else if (!inuse(next))
1706    do_check_free_chunk(next);
1707
1708}
1709
1710#if __STD_C
1711static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1712#else
1713static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1714#endif
1715{
1716#if 0   /* causes warnings because assert() is off */
1717  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1718  long room = sz - s;
1719#endif  /* 0 */
1720
1721  do_check_inuse_chunk(p);
1722
1723  /* Legal size ... */
1724  assert((long)sz >= (long)MINSIZE);
1725  assert((sz & MALLOC_ALIGN_MASK) == 0);
1726  assert(room >= 0);
1727  assert(room < (long)MINSIZE);
1728
1729  /* ... and alignment */
1730  assert(aligned_OK(chunk2mem(p)));
1731
1732
1733  /* ... and was allocated at front of an available chunk */
1734  assert(prev_inuse(p));
1735
1736}
1737
1738
1739#define check_free_chunk(P)  do_check_free_chunk(P)
1740#define check_inuse_chunk(P) do_check_inuse_chunk(P)
1741#define check_chunk(P) do_check_chunk(P)
1742#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1743#else
1744#define check_free_chunk(P)
1745#define check_inuse_chunk(P)
1746#define check_chunk(P)
1747#define check_malloced_chunk(P,N)
1748#endif
1749
1750
1751
1752/*
1753  Macro-based internal utilities
1754*/
1755
1756
1757/*
1758  Linking chunks in bin lists.
1759  Call these only with variables, not arbitrary expressions, as arguments.
1760*/
1761
1762/*
1763  Place chunk p of size s in its bin, in size order,
1764  putting it ahead of others of same size.
1765*/
1766
1767
1768#define frontlink(P, S, IDX, BK, FD)                                          \
1769{                                                                             \
1770  if (S < MAX_SMALLBIN_SIZE)                                                  \
1771  {                                                                           \
1772    IDX = smallbin_index(S);                                                  \
1773    mark_binblock(IDX);                                                       \
1774    BK = bin_at(IDX);                                                         \
1775    FD = BK->fd;                                                              \
1776    P->bk = BK;                                                               \
1777    P->fd = FD;                                                               \
1778    FD->bk = BK->fd = P;                                                      \
1779  }                                                                           \
1780  else                                                                        \
1781  {                                                                           \
1782    IDX = bin_index(S);                                                       \
1783    BK = bin_at(IDX);                                                         \
1784    FD = BK->fd;                                                              \
1785    if (FD == BK) mark_binblock(IDX);                                         \
1786    else                                                                      \
1787    {                                                                         \
1788      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
1789      BK = FD->bk;                                                            \
1790    }                                                                         \
1791    P->bk = BK;                                                               \
1792    P->fd = FD;                                                               \
1793    FD->bk = BK->fd = P;                                                      \
1794  }                                                                           \
1795}
1796
1797
1798/* take a chunk off a list */
1799
1800#define unlink(P, BK, FD)                                                     \
1801{                                                                             \
1802  BK = P->bk;                                                                 \
1803  FD = P->fd;                                                                 \
1804  FD->bk = BK;                                                                \
1805  BK->fd = FD;                                                                \
1806}
1807
1808/* Place p as the last remainder */
1809
1810#define link_last_remainder(P)                                                \
1811{                                                                             \
1812  last_remainder->fd = last_remainder->bk =  P;                               \
1813  P->fd = P->bk = last_remainder;                                             \
1814}
1815
1816/* Clear the last_remainder bin */
1817
1818#define clear_last_remainder \
1819  (last_remainder->fd = last_remainder->bk = last_remainder)
1820
1821
1822
1823
1824
1825/* Routines dealing with mmap(). */
1826
1827#if HAVE_MMAP
1828
1829#if __STD_C
1830static mchunkptr mmap_chunk(size_t size)
1831#else
1832static mchunkptr mmap_chunk(size) size_t size;
1833#endif
1834{
1835  size_t page_mask = malloc_getpagesize - 1;
1836  mchunkptr p;
1837
1838#ifndef MAP_ANONYMOUS
1839  static int fd = -1;
1840#endif
1841
1842  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1843
1844  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1845   * there is no following chunk whose prev_size field could be used.
1846   */
1847  size = (size + SIZE_SZ + page_mask) & ~page_mask;
1848
1849#ifdef MAP_ANONYMOUS
1850  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1851                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1852#else /* !MAP_ANONYMOUS */
1853  if (fd < 0)
1854  {
1855    fd = open("/dev/zero", O_RDWR);
1856    if(fd < 0) return 0;
1857  }
1858  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1859#endif
1860
1861  if(p == (mchunkptr)-1) return 0;
1862
1863  n_mmaps++;
1864  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1865
1866  /* We demand that eight bytes into a page must be 8-byte aligned. */
1867  assert(aligned_OK(chunk2mem(p)));
1868
1869  /* The offset to the start of the mmapped region is stored
1870   * in the prev_size field of the chunk; normally it is zero,
1871   * but that can be changed in memalign().
1872   */
1873  p->prev_size = 0;
1874  set_head(p, size|IS_MMAPPED);
1875
1876  mmapped_mem += size;
1877  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1878    max_mmapped_mem = mmapped_mem;
1879  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1880    max_total_mem = mmapped_mem + sbrked_mem;
1881  return p;
1882}
1883
1884#if __STD_C
1885static void munmap_chunk(mchunkptr p)
1886#else
1887static void munmap_chunk(p) mchunkptr p;
1888#endif
1889{
1890  INTERNAL_SIZE_T size = chunksize(p);
1891  int ret;
1892
1893  assert (chunk_is_mmapped(p));
1894  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1895  assert((n_mmaps > 0));
1896  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1897
1898  n_mmaps--;
1899  mmapped_mem -= (size + p->prev_size);
1900
1901  ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1902
1903  /* munmap returns non-zero on failure */
1904  assert(ret == 0);
1905}
1906
1907#if HAVE_MREMAP
1908
1909#if __STD_C
1910static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1911#else
1912static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1913#endif
1914{
1915  size_t page_mask = malloc_getpagesize - 1;
1916  INTERNAL_SIZE_T offset = p->prev_size;
1917  INTERNAL_SIZE_T size = chunksize(p);
1918  char *cp;
1919
1920  assert (chunk_is_mmapped(p));
1921  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1922  assert((n_mmaps > 0));
1923  assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1924
1925  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1926  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1927
1928  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1929
1930  if (cp == (char *)-1) return 0;
1931
1932  p = (mchunkptr)(cp + offset);
1933
1934  assert(aligned_OK(chunk2mem(p)));
1935
1936  assert((p->prev_size == offset));
1937  set_head(p, (new_size - offset)|IS_MMAPPED);
1938
1939  mmapped_mem -= size + offset;
1940  mmapped_mem += new_size;
1941  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1942    max_mmapped_mem = mmapped_mem;
1943  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1944    max_total_mem = mmapped_mem + sbrked_mem;
1945  return p;
1946}
1947
1948#endif /* HAVE_MREMAP */
1949
1950#endif /* HAVE_MMAP */
1951
1952
1953
1954
1955/*
1956  Extend the top-most chunk by obtaining memory from system.
1957  Main interface to sbrk (but see also malloc_trim).
1958*/
1959
1960#if __STD_C
1961static void malloc_extend_top(INTERNAL_SIZE_T nb)
1962#else
1963static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1964#endif
1965{
1966  char*     brk;                  /* return value from sbrk */
1967  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1968  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
1969  char*     new_brk;              /* return of 2nd sbrk call */
1970  INTERNAL_SIZE_T top_size;       /* new size of top chunk */
1971
1972  mchunkptr old_top     = top;  /* Record state of old top */
1973  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1974  char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
1975
1976  /* Pad request with top_pad plus minimal overhead */
1977
1978  INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
1979  unsigned long pagesz    = malloc_getpagesize;
1980
1981  /* If not the first time through, round to preserve page boundary */
1982  /* Otherwise, we need to correct to a page size below anyway. */
1983  /* (We also correct below if an intervening foreign sbrk call.) */
1984
1985  if (sbrk_base != (char*)(-1))
1986    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1987
1988  brk = (char*)(MORECORE (sbrk_size));
1989
1990  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
1991  if (brk == (char*)(MORECORE_FAILURE) ||
1992      (brk < old_end && old_top != initial_top))
1993    return;
1994
1995  sbrked_mem += sbrk_size;
1996
1997  if (brk == old_end) /* can just add bytes to current top */
1998  {
1999    top_size = sbrk_size + old_top_size;
2000    set_head(top, top_size | PREV_INUSE);
2001  }
2002  else
2003  {
2004    if (sbrk_base == (char*)(-1))  /* First time through. Record base */
2005      sbrk_base = brk;
2006    else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
2007      sbrked_mem += brk - (char*)old_end;
2008
2009    /* Guarantee alignment of first new chunk made from this space */
2010    front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2011    if (front_misalign > 0)
2012    {
2013      correction = (MALLOC_ALIGNMENT) - front_misalign;
2014      brk += correction;
2015    }
2016    else
2017      correction = 0;
2018
2019    /* Guarantee the next brk will be at a page boundary */
2020
2021    correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
2022                   ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
2023
2024    /* Allocate correction */
2025    new_brk = (char*)(MORECORE (correction));
2026    if (new_brk == (char*)(MORECORE_FAILURE)) return;
2027
2028    sbrked_mem += correction;
2029
2030    top = (mchunkptr)brk;
2031    top_size = new_brk - brk + correction;
2032    set_head(top, top_size | PREV_INUSE);
2033
2034    if (old_top != initial_top)
2035    {
2036
2037      /* There must have been an intervening foreign sbrk call. */
2038      /* A double fencepost is necessary to prevent consolidation */
2039
2040      /* If not enough space to do this, then user did something very wrong */
2041      if (old_top_size < MINSIZE)
2042      {
2043        set_head(top, PREV_INUSE); /* will force null return from malloc */
2044        return;
2045      }
2046
2047      /* Also keep size a multiple of MALLOC_ALIGNMENT */
2048      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2049      set_head_size(old_top, old_top_size);
2050      chunk_at_offset(old_top, old_top_size          )->size =
2051        SIZE_SZ|PREV_INUSE;
2052      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2053        SIZE_SZ|PREV_INUSE;
2054      /* If possible, release the rest. */
2055      if (old_top_size >= MINSIZE)
2056        fREe(chunk2mem(old_top));
2057    }
2058  }
2059
2060  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2061    max_sbrked_mem = sbrked_mem;
2062  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2063    max_total_mem = mmapped_mem + sbrked_mem;
2064
2065  /* We always land on a page boundary */
2066  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2067}
2068
2069
2070
2071
2072/* Main public routines */
2073
2074
2075/*
2076  Malloc Algorithm:
2077
2078    The requested size is first converted into a usable form, `nb'.
2079    This currently means to add 4 bytes overhead plus possibly more to
2080    obtain 8-byte alignment and/or to obtain a size of at least
2081    MINSIZE (currently 16 bytes), the smallest allocatable size.
2082    (All fits are considered `exact' if they are within MINSIZE bytes.)
2083
2084    From there, the first of the following steps that succeeds is taken:
2085
2086      1. The bin corresponding to the request size is scanned, and if
2087         a chunk of exactly the right size is found, it is taken.
2088
2089      2. The most recently remaindered chunk is used if it is big
2090         enough.  This is a form of (roving) first fit, used only in
2091         the absence of exact fits. Runs of consecutive requests use
2092         the remainder of the chunk used for the previous such request
2093         whenever possible. This limited use of a first-fit style
2094         allocation strategy tends to give contiguous chunks
2095         coextensive lifetimes, which improves locality and can reduce
2096         fragmentation in the long run.
2097
2098      3. Other bins are scanned in increasing size order, using a
2099         chunk big enough to fulfill the request, and splitting off
2100         any remainder.  This search is strictly by best-fit; i.e.,
2101         the smallest (with ties going to approximately the least
2102         recently used) chunk that fits is selected.
2103
2104      4. If large enough, the chunk bordering the end of memory
2105         (`top') is split off. (This use of `top' is in accord with
2106         the best-fit search rule.  In effect, `top' is treated as
2107         larger (and thus less well fitting) than any other available
2108         chunk since it can be extended to be as large as necessary
2109         (up to system limitations).)
2110
2111      5. If the request size meets the mmap threshold and the
2112         system supports mmap, and there are few enough currently
2113         allocated mmapped regions, and a call to mmap succeeds,
2114         the request is allocated via direct memory mapping.
2115
2116      6. Otherwise, the top of memory is extended by
2117         obtaining more space from the system (normally using sbrk,
2118         but definable to anything else via the MORECORE macro).
2119         Memory is gathered from the system (in system page-sized
2120         units) in a way that allows chunks obtained across different
2121         sbrk calls to be consolidated, but does not require
2122         contiguous memory. Thus, it should be safe to intersperse
2123         mallocs with other sbrk calls.
2124
2125
2126      All allocations are made from the `lowest' part of any found
2127      chunk. (The implementation invariant is that prev_inuse is
2128      always true of any allocated chunk; i.e., that each allocated
2129      chunk borders either a previously allocated and still in-use chunk,
2130      or the base of its memory arena.)
2131
2132*/
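/*
  A minimal caller's-eye sketch of the steps above (sizes arbitrary);
  disabled, for illustration only:
*/
#if 0
static void malloc_usage_example(void)
{
  void* a = malloc(100);  /* exact bin fit, last remainder, best fit,
                             or a split of `top', tried in that order */
  void* b = malloc(100);  /* consecutive requests tend to be carved
                             from the same remainder (step 2)         */
  free(a);                /* a is consolidated and binned for reuse   */
  free(b);
}
#endif  /* 0 */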
2133
2134#if __STD_C
2135Void_t* mALLOc(size_t bytes)
2136#else
2137Void_t* mALLOc(bytes) size_t bytes;
2138#endif
2139{
2140  mchunkptr victim;                  /* inspected/selected chunk */
2141  INTERNAL_SIZE_T victim_size;       /* its size */
2142  int       idx;                     /* index for bin traversal */
2143  mbinptr   bin;                     /* associated bin */
2144  mchunkptr remainder;               /* remainder from a split */
2145  long      remainder_size;          /* its size */
2146  int       remainder_index;         /* its bin index */
2147  unsigned long block;               /* block traverser bit */
2148  int       startidx;                /* first bin of a traversed block */
2149  mchunkptr fwd;                     /* misc temp for linking */
2150  mchunkptr bck;                     /* misc temp for linking */
2151  mbinptr q;                         /* misc temp */
2152
2153  INTERNAL_SIZE_T nb;
2154
2155  if ((long)bytes < 0) return 0;
2156
2157  nb = request2size(bytes);  /* padded request size; */
2158
2159  /* Check for exact match in a bin */
2160
2161  if (is_small_request(nb))  /* Faster version for small requests */
2162  {
2163    idx = smallbin_index(nb);
2164
2165    /* No traversal or size check necessary for small bins.  */
2166
2167    q = bin_at(idx);
2168    victim = last(q);
2169
2170    /* Also scan the next one, since it would have a remainder < MINSIZE */
2171    if (victim == q)
2172    {
2173      q = next_bin(q);
2174      victim = last(q);
2175    }
2176    if (victim != q)
2177    {
2178      victim_size = chunksize(victim);
2179      unlink(victim, bck, fwd);
2180      set_inuse_bit_at_offset(victim, victim_size);
2181      check_malloced_chunk(victim, nb);
2182      return chunk2mem(victim);
2183    }
2184
2185    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2186
2187  }
2188  else
2189  {
2190    idx = bin_index(nb);
2191    bin = bin_at(idx);
2192
2193    for (victim = last(bin); victim != bin; victim = victim->bk)
2194    {
2195      victim_size = chunksize(victim);
2196      remainder_size = victim_size - nb;
2197
2198      if (remainder_size >= (long)MINSIZE) /* too big */
2199      {
2200        --idx; /* adjust to rescan below after checking last remainder */
2201        break;
2202      }
2203
2204      else if (remainder_size >= 0) /* exact fit */
2205      {
2206        unlink(victim, bck, fwd);
2207        set_inuse_bit_at_offset(victim, victim_size);
2208        check_malloced_chunk(victim, nb);
2209        return chunk2mem(victim);
2210      }
2211    }
2212
2213    ++idx;
2214
2215  }
2216
2217  /* Try to use the last split-off remainder */
2218
2219  if ( (victim = last_remainder->fd) != last_remainder)
2220  {
2221    victim_size = chunksize(victim);
2222    remainder_size = victim_size - nb;
2223
2224    if (remainder_size >= (long)MINSIZE) /* re-split */
2225    {
2226      remainder = chunk_at_offset(victim, nb);
2227      set_head(victim, nb | PREV_INUSE);
2228      link_last_remainder(remainder);
2229      set_head(remainder, remainder_size | PREV_INUSE);
2230      set_foot(remainder, remainder_size);
2231      check_malloced_chunk(victim, nb);
2232      return chunk2mem(victim);
2233    }
2234
2235    clear_last_remainder;
2236
2237    if (remainder_size >= 0)  /* exhaust */
2238    {
2239      set_inuse_bit_at_offset(victim, victim_size);
2240      check_malloced_chunk(victim, nb);
2241      return chunk2mem(victim);
2242    }
2243
2244    /* Else place in bin */
2245
2246    frontlink(victim, victim_size, remainder_index, bck, fwd);
2247  }
2248
2249  /*
2250     If there are any possibly nonempty big-enough blocks,
2251     search for best fitting chunk by scanning bins in blockwidth units.
2252  */
2253
2254  if ( (block = idx2binblock(idx)) <= binblocks_r)
2255  {
2256
2257    /* Get to the first marked block */
2258
2259    if ( (block & binblocks_r) == 0)
2260    {
2261      /* force to an even block boundary */
2262      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2263      block <<= 1;
2264      while ((block & binblocks_r) == 0)
2265      {
2266        idx += BINBLOCKWIDTH;
2267        block <<= 1;
2268      }
2269    }
2270
2271    /* For each possibly nonempty block ... */
2272    for (;;)
2273    {
2274      startidx = idx;          /* (track incomplete blocks) */
2275      q = bin = bin_at(idx);
2276
2277      /* For each bin in this block ... */
2278      do
2279      {
2280        /* Find and use first big enough chunk ... */
2281
2282        for (victim = last(bin); victim != bin; victim = victim->bk)
2283        {
2284          victim_size = chunksize(victim);
2285          remainder_size = victim_size - nb;
2286
2287          if (remainder_size >= (long)MINSIZE) /* split */
2288          {
2289            remainder = chunk_at_offset(victim, nb);
2290            set_head(victim, nb | PREV_INUSE);
2291            unlink(victim, bck, fwd);
2292            link_last_remainder(remainder);
2293            set_head(remainder, remainder_size | PREV_INUSE);
2294            set_foot(remainder, remainder_size);
2295            check_malloced_chunk(victim, nb);
2296            return chunk2mem(victim);
2297          }
2298
2299          else if (remainder_size >= 0)  /* take */
2300          {
2301            set_inuse_bit_at_offset(victim, victim_size);
2302            unlink(victim, bck, fwd);
2303            check_malloced_chunk(victim, nb);
2304            return chunk2mem(victim);
2305          }
2306
2307        }
2308
2309       bin = next_bin(bin);
2310
2311      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2312
2313      /* Clear out the block bit. */
2314
2315      do   /* Possibly backtrack to try to clear a partial block */
2316      {
2317        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2318        {
2319          av_[1] = (mbinptr)(binblocks_r & ~block);
2320          break;
2321        }
2322        --startidx;
2323       q = prev_bin(q);
2324      } while (first(q) == q);
2325
2326      /* Get to the next possibly nonempty block */
2327
2328      if ( (block <<= 1) <= binblocks_r && (block != 0) )
2329      {
2330        while ((block & binblocks_r) == 0)
2331        {
2332          idx += BINBLOCKWIDTH;
2333          block <<= 1;
2334        }
2335      }
2336      else
2337        break;
2338    }
2339  }
2340
2341
2342  /* Try to use top chunk */
2343
2344  /* Require that there be a remainder, ensuring top always exists  */
2345  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2346  {
2347
2348#if HAVE_MMAP
2349    /* If big and would otherwise need to extend, try to use mmap instead */
2350    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2351        (victim = mmap_chunk(nb)) != 0)
2352      return chunk2mem(victim);
2353#endif
2354
2355    /* Try to extend */
2356    malloc_extend_top(nb);
2357    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2358      return 0; /* propagate failure */
2359  }
2360
2361  victim = top;
2362  set_head(victim, nb | PREV_INUSE);
2363  top = chunk_at_offset(victim, nb);
2364  set_head(top, remainder_size | PREV_INUSE);
2365  check_malloced_chunk(victim, nb);
2366  return chunk2mem(victim);
2367
2368}
2369
2370
2371
2372
2373/*
2374
2375  free() algorithm :
2376
2377    cases:
2378
2379       1. free(0) has no effect.
2380
2381       2. If the chunk was allocated via mmap, it is released via munmap().
2382
2383       3. If a returned chunk borders the current high end of memory,
2384          it is consolidated into the top, and if the total unused
2385          topmost memory exceeds the trim threshold, malloc_trim is
2386          called.
2387
2388       4. Other chunks are consolidated as they arrive, and
2389          placed in corresponding bins. (This includes the case of
2390          consolidating with the current `last_remainder').
2391
2392*/
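/*
  Sketch of cases 3 and 4 above with three physically adjacent chunks
  a, b, c, where a is already free and b, c are in use (names
  hypothetical); disabled, for illustration only:
*/
#if 0
static void free_consolidation_example(Void_t* b, Void_t* c)
{
  free(b);  /* b merges backward into its free neighbour a, and the
               merged chunk is placed in a bin                        */
  free(c);  /* c merges backward into a+b; if the result borders
               `top' it is absorbed into top instead                  */
}
#endif  /* 0 */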
2393
2394
2395#if __STD_C
2396void fREe(Void_t* mem)
2397#else
2398void fREe(mem) Void_t* mem;
2399#endif
2400{
2401  mchunkptr p;         /* chunk corresponding to mem */
2402  INTERNAL_SIZE_T hd;  /* its head field */
2403  INTERNAL_SIZE_T sz;  /* its size */
2404  int       idx;       /* its bin index */
2405  mchunkptr next;      /* next contiguous chunk */
2406  INTERNAL_SIZE_T nextsz; /* its size */
2407  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2408  mchunkptr bck;       /* misc temp for linking */
2409  mchunkptr fwd;       /* misc temp for linking */
2410  int       islr;      /* track whether merging with last_remainder */
2411
2412  if (mem == 0)                              /* free(0) has no effect */
2413    return;
2414
2415  p = mem2chunk(mem);
2416  hd = p->size;
2417
2418#if HAVE_MMAP
2419  if (hd & IS_MMAPPED)                       /* release mmapped memory. */
2420  {
2421    munmap_chunk(p);
2422    return;
2423  }
2424#endif
2425
2426  check_inuse_chunk(p);
2427
2428  sz = hd & ~PREV_INUSE;
2429  next = chunk_at_offset(p, sz);
2430  nextsz = chunksize(next);
2431
2432  if (next == top)                            /* merge with top */
2433  {
2434    sz += nextsz;
2435
2436    if (!(hd & PREV_INUSE))                    /* consolidate backward */
2437    {
2438      prevsz = p->prev_size;
2439      p = chunk_at_offset(p, -((long) prevsz));
2440      sz += prevsz;
2441      unlink(p, bck, fwd);
2442    }
2443
2444    set_head(p, sz | PREV_INUSE);
2445    top = p;
2446    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2447      malloc_trim(top_pad);
2448    return;
2449  }
2450
2451  set_head(next, nextsz);                    /* clear inuse bit */
2452
2453  islr = 0;
2454
2455  if (!(hd & PREV_INUSE))                    /* consolidate backward */
2456  {
2457    prevsz = p->prev_size;
2458    p = chunk_at_offset(p, -((long) prevsz));
2459    sz += prevsz;
2460
2461    if (p->fd == last_remainder)             /* keep as last_remainder */
2462      islr = 1;
2463    else
2464      unlink(p, bck, fwd);
2465  }
2466
2467  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
2468  {
2469    sz += nextsz;
2470
2471    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
2472    {
2473      islr = 1;
2474      link_last_remainder(p);
2475    }
2476    else
2477      unlink(next, bck, fwd);
2478  }
2479
2480
2481  set_head(p, sz | PREV_INUSE);
2482  set_foot(p, sz);
2483  if (!islr)
2484    frontlink(p, sz, idx, bck, fwd);
2485}
2486
2487
2488
2489
2490
2491/*
2492
2493  Realloc algorithm:
2494
2495    Chunks that were obtained via mmap cannot be extended or shrunk
2496    unless HAVE_MREMAP is defined, in which case mremap is used.
2497    Otherwise, if their reallocation is for additional space, they are
2498    copied.  If for less, they are just left alone.
2499
2500    Otherwise, if the reallocation is for additional space, and the
2501    chunk can be extended, it is, else a malloc-copy-free sequence is
2502    taken.  There are several different ways that a chunk could be
2503    extended. All are tried:
2504
2505       * Extending forward into following adjacent free chunk.
2506       * Shifting backwards, joining preceding adjacent space
2507       * Both shifting backwards and extending forward.
2508       * Extending into newly sbrked space
2509
2510    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2511    size argument of zero (re)allocates a minimum-sized chunk.
2512
2513    If the reallocation is for less space, and the new request is for
2514    a `small' (<512 bytes) size, then the newly unused space is lopped
2515    off and freed.
2516
2517    The old unix realloc convention of allowing the last-free'd chunk
2518    to be used as an argument to realloc is no longer supported.
2519    I don't know of any programs still relying on this feature,
2520    and allowing it would also allow too many other incorrect
2521    usages of realloc to be sensible.
2522
2523
2524*/
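/*
  Caller's-eye sketch of the paths above (sizes arbitrary); disabled,
  for illustration only:
*/
#if 0
static void realloc_usage_example(void)
{
  char* p = malloc(100);
  p = realloc(p, 200);  /* grown in place when an adjacent free chunk
                           or `top' has room; else malloc-copy-free   */
  p = realloc(p, 40);   /* shrink: the unused tail is split off and
                           freed when it is at least MINSIZE          */
  free(p);
}
#endif  /* 0 */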
2525
2526
2527#if __STD_C
2528Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2529#else
2530Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2531#endif
2532{
2533  INTERNAL_SIZE_T    nb;      /* padded request size */
2534
2535  mchunkptr oldp;             /* chunk corresponding to oldmem */
2536  INTERNAL_SIZE_T    oldsize; /* its size */
2537
2538  mchunkptr newp;             /* chunk to return */
2539  INTERNAL_SIZE_T    newsize; /* its size */
2540  Void_t*   newmem;           /* corresponding user mem */
2541
2542  mchunkptr next;             /* next contiguous chunk after oldp */
2543  INTERNAL_SIZE_T  nextsize;  /* its size */
2544
2545  mchunkptr prev;             /* previous contiguous chunk before oldp */
2546  INTERNAL_SIZE_T  prevsize;  /* its size */
2547
2548  mchunkptr remainder;        /* holds split off extra space from newp */
2549  INTERNAL_SIZE_T  remainder_size;   /* its size */
2550
2551  mchunkptr bck;              /* misc temp for linking */
2552  mchunkptr fwd;              /* misc temp for linking */
2553
2554#ifdef REALLOC_ZERO_BYTES_FREES
2555  if (bytes == 0) { fREe(oldmem); return 0; }
2556#endif
2557
2558  if ((long)bytes < 0) return 0;
2559
2560  /* realloc of null is supposed to be the same as malloc */
2561  if (oldmem == 0) return mALLOc(bytes);
2562
2563  newp    = oldp    = mem2chunk(oldmem);
2564  newsize = oldsize = chunksize(oldp);
2565
2566
2567  nb = request2size(bytes);
2568
2569#if HAVE_MMAP
2570  if (chunk_is_mmapped(oldp))
2571  {
2572#if HAVE_MREMAP
2573    newp = mremap_chunk(oldp, nb);
2574    if(newp) return chunk2mem(newp);
2575#endif
2576    /* Note the extra SIZE_SZ overhead. */
2577    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2578    /* Must alloc, copy, free. */
2579    newmem = mALLOc(bytes);
2580    if (newmem == 0) return 0; /* propagate failure */
2581    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2582    munmap_chunk(oldp);
2583    return newmem;
2584  }
2585#endif
2586
2587  check_inuse_chunk(oldp);
2588
2589  if ((long)(oldsize) < (long)(nb))
2590  {
2591
2592    /* Try expanding forward */
2593
2594    next = chunk_at_offset(oldp, oldsize);
2595    if (next == top || !inuse(next))
2596    {
2597      nextsize = chunksize(next);
2598
2599      /* Forward into top only if a remainder */
2600      if (next == top)
2601      {
2602        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2603        {
2604          newsize += nextsize;
2605          top = chunk_at_offset(oldp, nb);
2606          set_head(top, (newsize - nb) | PREV_INUSE);
2607          set_head_size(oldp, nb);
2608          return chunk2mem(oldp);
2609        }
2610      }
2611
2612      /* Forward into next chunk */
2613      else if (((long)(nextsize + newsize) >= (long)(nb)))
2614      {
2615        unlink(next, bck, fwd);
2616        newsize  += nextsize;
2617        goto split;
2618      }
2619    }
2620    else
2621    {
2622      next = 0;
2623      nextsize = 0;
2624    }
2625
2626    /* Try shifting backwards. */
2627
2628    if (!prev_inuse(oldp))
2629    {
2630      prev = prev_chunk(oldp);
2631      prevsize = chunksize(prev);
2632
2633      /* try forward + backward first to save a later consolidation */
2634
2635      if (next != 0)
2636      {
2637        /* into top */
2638        if (next == top)
2639        {
2640          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2641          {
2642            unlink(prev, bck, fwd);
2643            newp = prev;
2644            newsize += prevsize + nextsize;
2645            newmem = chunk2mem(newp);
2646            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2647            top = chunk_at_offset(newp, nb);
2648            set_head(top, (newsize - nb) | PREV_INUSE);
2649            set_head_size(newp, nb);
2650            return newmem;
2651          }
2652        }
2653
2654        /* into next chunk */
2655        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2656        {
2657          unlink(next, bck, fwd);
2658          unlink(prev, bck, fwd);
2659          newp = prev;
2660          newsize += nextsize + prevsize;
2661          newmem = chunk2mem(newp);
2662          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2663          goto split;
2664        }
2665      }
2666
2667      /* backward only */
2668      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2669      {
2670        unlink(prev, bck, fwd);
2671        newp = prev;
2672        newsize += prevsize;
2673        newmem = chunk2mem(newp);
2674        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2675        goto split;
2676      }
2677    }
2678
2679    /* Must allocate */
2680
2681    newmem = mALLOc (bytes);
2682
2683    if (newmem == 0)  /* propagate failure */
2684      return 0;
2685
2686    /* Avoid copy if newp is next chunk after oldp. */
2687    /* (This can only happen when new chunk is sbrk'ed.) */
2688
2689    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2690    {
2691      newsize += chunksize(newp);
2692      newp = oldp;
2693      goto split;
2694    }
2695
2696    /* Otherwise copy, free, and exit */
2697    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2698    fREe(oldmem);
2699    return newmem;
2700  }
2701
2702
2703 split:  /* split off extra room in old or expanded chunk */
2704
2705  if (newsize - nb >= MINSIZE) /* split off remainder */
2706  {
2707    remainder = chunk_at_offset(newp, nb);
2708    remainder_size = newsize - nb;
2709    set_head_size(newp, nb);
2710    set_head(remainder, remainder_size | PREV_INUSE);
2711    set_inuse_bit_at_offset(remainder, remainder_size);
2712    fREe(chunk2mem(remainder)); /* let free() deal with it */
2713  }
2714  else
2715  {
2716    set_head_size(newp, newsize);
2717    set_inuse_bit_at_offset(newp, newsize);
2718  }
2719
2720  check_inuse_chunk(newp);
2721  return chunk2mem(newp);
2722}
2723
2724
2725
2726
2727/*
2728
2729  memalign algorithm:
2730
2731    memalign requests more than enough space from malloc, finds a spot
2732    within that chunk that meets the alignment request, and then
2733    possibly frees the leading and trailing space.
2734
2735    The alignment argument must be a power of two. This property is not
2736    checked by memalign, so misuse may result in random runtime errors.
2737
2738    8-byte alignment is guaranteed by normal malloc calls, so don't
2739    bother calling memalign with an argument of 8 or less.
2740
2741    Overreliance on memalign is a sure way to fragment space.
2742
2743*/
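/*
  Usage sketch for the description above; the alignment argument must
  be a power of two. Disabled, for illustration only:
*/
#if 0
static void memalign_usage_example(void)
{
  void* p = memalign(64, 200);            /* 64-byte aligned chunk   */
  assert(((unsigned long)p & 63) == 0);
  free(p);                                /* ordinary free applies   */
}
#endif  /* 0 */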
2744
2745
2746#if __STD_C
2747Void_t* mEMALIGn(size_t alignment, size_t bytes)
2748#else
2749Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2750#endif
2751{
2752  INTERNAL_SIZE_T    nb;      /* padded  request size */
2753  char*     m;                /* memory returned by malloc call */
2754  mchunkptr p;                /* corresponding chunk */
2755  char*     brk;              /* alignment point within p */
2756  mchunkptr newp;             /* chunk to return */
2757  INTERNAL_SIZE_T  newsize;   /* its size */
2758  INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
2759  mchunkptr remainder;        /* spare room at end to split off */
2760  long      remainder_size;   /* its size */
2761
2762  if ((long)bytes < 0) return 0;
2763
2764  /* If need less alignment than we give anyway, just relay to malloc */
2765
2766  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2767
2768  /* Otherwise, ensure that it is at least a minimum chunk size */
2769
2770  if (alignment <  MINSIZE) alignment = MINSIZE;
2771
2772  /* Call malloc with worst case padding to hit alignment. */
2773
2774  nb = request2size(bytes);
2775  m  = (char*)(mALLOc(nb + alignment + MINSIZE));
2776
2777  if (m == 0) return 0; /* propagate failure */
2778
2779  p = mem2chunk(m);
2780
2781  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2782  {
2783#if HAVE_MMAP
2784    if(chunk_is_mmapped(p))
2785      return chunk2mem(p); /* nothing more to do */
2786#endif
2787  }
2788  else /* misaligned */
2789  {
2790    /*
2791      Find an aligned spot inside chunk.
2792      Since we need to give back leading space in a chunk of at
2793      least MINSIZE, if the first calculation places us at
2794      a spot with less than MINSIZE leader, we can move to the
2795      next aligned spot -- we've allocated enough total room so that
2796      this is always possible.
2797    */
2798
2799    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2800    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2801
2802    newp = (mchunkptr)brk;
2803    leadsize = brk - (char*)(p);
2804    newsize = chunksize(p) - leadsize;
2805
2806#if HAVE_MMAP
2807    if(chunk_is_mmapped(p))
2808    {
2809      newp->prev_size = p->prev_size + leadsize;
2810      set_head(newp, newsize|IS_MMAPPED);
2811      return chunk2mem(newp);
2812    }
2813#endif
2814
2815    /* give back leader, use the rest */
2816
2817    set_head(newp, newsize | PREV_INUSE);
2818    set_inuse_bit_at_offset(newp, newsize);
2819    set_head_size(p, leadsize);
2820    fREe(chunk2mem(p));
2821    p = newp;
2822
2823    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2824  }
2825
2826  /* Also give back spare room at the end */
2827
2828  remainder_size = chunksize(p) - nb;
2829
2830  if (remainder_size >= (long)MINSIZE)
2831  {
2832    remainder = chunk_at_offset(p, nb);
2833    set_head(remainder, remainder_size | PREV_INUSE);
2834    set_head_size(p, nb);
2835    fREe(chunk2mem(remainder));
2836  }
2837
2838  check_inuse_chunk(p);
2839  return chunk2mem(p);
2840
2841}
2842
2843
2844
2845
2846/*
2847    valloc just invokes memalign with alignment argument equal
2848    to the page size of the system (or as near to this as can
2849    be figured out from all the includes/defines above.)
2850*/
2851
2852#if __STD_C
2853Void_t* vALLOc(size_t bytes)
2854#else
2855Void_t* vALLOc(bytes) size_t bytes;
2856#endif
2857{
2858  return mEMALIGn (malloc_getpagesize, bytes);
2859}
2860
2861/*
2862  pvalloc just invokes valloc for the nearest pagesize
2863  that will accommodate the request
2864*/
2865
2866
2867#if __STD_C
2868Void_t* pvALLOc(size_t bytes)
2869#else
2870Void_t* pvALLOc(bytes) size_t bytes;
2871#endif
2872{
2873  size_t pagesize = malloc_getpagesize;
2874  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2875}
2876
2877/*
2878
2879  calloc calls malloc, then zeroes out the allocated chunk.
2880
2881*/
2882
2883#if __STD_C
2884Void_t* cALLOc(size_t n, size_t elem_size)
2885#else
2886Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2887#endif
2888{
2889  mchunkptr p;
2890  INTERNAL_SIZE_T csz;
2891
2892  INTERNAL_SIZE_T sz = n * elem_size;
2893
2894
2895  /* check if malloc_extend_top called, in which case we don't need to clear */
2896#if MORECORE_CLEARS
2897  mchunkptr oldtop = top;
2898  INTERNAL_SIZE_T oldtopsize = chunksize(top);
2899#endif
2900  Void_t* mem = mALLOc (sz);
2901
2902  if ((long)n < 0) return 0;
2903
2904  if (mem == 0)
2905    return 0;
2906  else
2907  {
2908    p = mem2chunk(mem);
2909
2910    /* Two optional cases in which clearing is not necessary */
2911
2912
2913#if HAVE_MMAP
2914    if (chunk_is_mmapped(p)) return mem;
2915#endif
2916
2917    csz = chunksize(p);
2918
2919#if MORECORE_CLEARS
2920    if (p == oldtop && csz > oldtopsize)
2921    {
2922      /* clear only the bytes from non-freshly-sbrked memory */
2923      csz = oldtopsize;
2924    }
2925#endif
2926
2927    MALLOC_ZERO(mem, csz - SIZE_SZ);
2928    return mem;
2929  }
2930}
2931
2932/*
2933
2934  cfree just calls free. It is needed/defined on some systems
2935  that pair it with calloc, presumably for odd historical reasons.
2936
2937*/
2938
2939#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2940#if __STD_C
2941void cfree(Void_t *mem)
2942#else
2943void cfree(mem) Void_t *mem;
2944#endif
2945{
2946  fREe(mem);
2947}
2948#endif
2949
2950
2951
2952/*
2953
2954    Malloc_trim gives memory back to the system (via negative
2955    arguments to sbrk) if there is unused memory at the `high' end of
2956    the malloc pool. You can call this after freeing large blocks of
2957    memory to potentially reduce the system-level memory requirements
2958    of a program. However, it cannot guarantee to reduce memory. Under
2959    some allocation patterns, some large free blocks of memory will be
2960    locked between two used chunks, so they cannot be given back to
2961    the system.
2962
2963    The `pad' argument to malloc_trim represents the amount of free
2964    trailing space to leave untrimmed. If this argument is zero,
2965    only the minimum amount of memory to maintain internal data
2966    structures will be left (one page or less). Non-zero arguments
2967    can be supplied to maintain enough trailing space to service
2968    future expected allocations without having to re-obtain memory
2969    from the system.
2970
2971    Malloc_trim returns 1 if it actually released any memory, else 0.
2972
2973*/
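/*
  Usage sketch of malloc_trim as described above (pad value arbitrary);
  disabled, for illustration only:
*/
#if 0
static void malloc_trim_usage_example(void)
{
  /* After releasing large buffers, hand surplus top space back to the
     system, but keep roughly 4K of slack for upcoming allocations. */
  if (malloc_trim(4096) == 0)
    ; /* nothing trimmable, or someone else moved the break */
}
#endif  /* 0 */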
2974
2975#if __STD_C
2976int malloc_trim(size_t pad)
2977#else
2978int malloc_trim(pad) size_t pad;
2979#endif
2980{
2981  long  top_size;        /* Amount of top-most memory */
2982  long  extra;           /* Amount to release */
2983  char* current_brk;     /* address returned by pre-check sbrk call */
2984  char* new_brk;         /* address returned by negative sbrk call */
2985
2986  unsigned long pagesz = malloc_getpagesize;
2987
2988  top_size = chunksize(top);
2989  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
2990
2991  if (extra < (long)pagesz)  /* Not enough memory to release */
2992    return 0;
2993
2994  else
2995  {
2996    /* Test to make sure no one else called sbrk */
2997    current_brk = (char*)(MORECORE (0));
2998    if (current_brk != (char*)(top) + top_size)
2999      return 0;     /* Apparently we don't own memory; must fail */
3000
3001    else
3002    {
3003      new_brk = (char*)(MORECORE (-extra));
3004
3005      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3006      {
3007        /* Try to figure out what we have */
3008        current_brk = (char*)(MORECORE (0));
3009        top_size = current_brk - (char*)top;
3010        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3011        {
3012          sbrked_mem = current_brk - sbrk_base;
3013          set_head(top, top_size | PREV_INUSE);
3014        }
3015        check_chunk(top);
3016        return 0;
3017      }
3018
3019      else
3020      {
3021        /* Success. Adjust top accordingly. */
3022        set_head(top, (top_size - extra) | PREV_INUSE);
3023        sbrked_mem -= extra;
3024        check_chunk(top);
3025        return 1;
3026      }
3027    }
3028  }
3029}
3030
3031
3032
3033/*
3034  malloc_usable_size:
3035
3036    This routine tells you how many bytes you can actually use in an
3037    allocated chunk, which may be more than you requested (although
3038    often not). You can use this many bytes without worrying about
3039    overwriting other allocated objects. Not a particularly great
3040    programming practice, but still sometimes useful.
3041
3042*/
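/*
  Usage sketch for the description above (assumes memset is available);
  disabled, for illustration only:
*/
#if 0
static void usable_size_example(void)
{
  char*  p = malloc(10);
  size_t n = malloc_usable_size(p);  /* >= 10 after padding/alignment */
  memset(p, 0, n);                   /* safe: stays inside the chunk  */
  free(p);
}
#endif  /* 0 */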
3043
3044#if __STD_C
3045size_t malloc_usable_size(Void_t* mem)
3046#else
3047size_t malloc_usable_size(mem) Void_t* mem;
3048#endif
3049{
3050  mchunkptr p;
3051  if (mem == 0)
3052    return 0;
3053  else
3054  {
3055    p = mem2chunk(mem);
3056    if(!chunk_is_mmapped(p))
3057    {
3058      if (!inuse(p)) return 0;
3059      check_inuse_chunk(p);
3060      return chunksize(p) - SIZE_SZ;
3061    }
3062    return chunksize(p) - 2*SIZE_SZ;
3063  }
3064}
3065
3066
3067
3068
3069/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3070
3071#if 0
3072static void malloc_update_mallinfo()
3073{
3074  int i;
3075  mbinptr b;
3076  mchunkptr p;
3077#ifdef DEBUG
3078  mchunkptr q;
3079#endif
3080
3081  INTERNAL_SIZE_T avail = chunksize(top);
3082  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3083
3084  for (i = 1; i < NAV; ++i)
3085  {
3086    b = bin_at(i);
3087    for (p = last(b); p != b; p = p->bk)
3088    {
3089#ifdef DEBUG
3090      check_free_chunk(p);
3091      for (q = next_chunk(p);
3092           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3093           q = next_chunk(q))
3094        check_inuse_chunk(q);
3095#endif
3096      avail += chunksize(p);
3097      navail++;
3098    }
3099  }
3100
3101  current_mallinfo.ordblks = navail;
3102  current_mallinfo.uordblks = sbrked_mem - avail;
3103  current_mallinfo.fordblks = avail;
3104  current_mallinfo.hblks = n_mmaps;
3105  current_mallinfo.hblkhd = mmapped_mem;
3106  current_mallinfo.keepcost = chunksize(top);
3107
3108}
3109#endif  /* 0 */
3110
3111
3112
3113/*
3114
3115  malloc_stats:
3116
3117    Prints the amount of space obtained from the system (both
3118    via sbrk and mmap), the maximum amount (which may be more than
3119    current if malloc_trim and/or munmap got called), the maximum
3120    number of simultaneous mmap regions used, and the current number
3121    of bytes allocated via malloc (or realloc, etc) but not yet
3122    freed. (Note that this is the number of bytes allocated, not the
3123    number requested. It will be larger than the number requested
3124    because of alignment and bookkeeping overhead.)
3125
3126*/
3127
3128#if 0
3129void malloc_stats()
3130{
3131  malloc_update_mallinfo();
3132  printf("max system bytes = %10u\n",
3133          (unsigned int)(max_total_mem));
3134  printf("system bytes     = %10u\n",
3135          (unsigned int)(sbrked_mem + mmapped_mem));
3136  printf("in use bytes     = %10u\n",
3137          (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3138#if HAVE_MMAP
3139  printf("max mmap regions = %10u\n",
3140          (unsigned int)max_n_mmaps);
3141#endif
3142}
3143#endif  /* 0 */
3144
3145/*
3146  mallinfo returns a copy of the updated current_mallinfo.
3147*/
3148
3149#if 0
3150struct mallinfo mALLINFo()
3151{
3152  malloc_update_mallinfo();
3153  return current_mallinfo;
3154}
3155#endif  /* 0 */


/*
  mallopt:

    mallopt is the general SVID/XPG interface to tunable parameters.
    The format is to provide a (parameter-number, parameter-value) pair.
    mallopt then sets the corresponding parameter to the argument
    value if it can (i.e., so long as the value is meaningful),
    and returns 1 if successful else 0.

    See descriptions of tunable parameters above.

*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      /* Without mmap support, the only meaningful value is 0. */
      if (value != 0) return 0;
      n_mmaps_max = value; return 1;
#endif

    default:
      return 0;
  }
}
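
/*
  Usage sketch (illustrative only): tuning two of the parameters handled
  above. Both calls should return 1, since the values are meaningful.
  The helper name and the chosen values are hypothetical:
*/
#if 0
static void mallopt_example(void)
{
  /* Keep up to 256 KiB of freed top memory before trimming. */
  if (!mALLOPt(M_TRIM_THRESHOLD, 256 * 1024))
    printf("M_TRIM_THRESHOLD not changed\n");
  /* Request 16 KiB of extra padding on each sbrk extension. */
  if (!mALLOPt(M_TOP_PAD, 16 * 1024))
    printf("M_TOP_PAD not changed\n");
}
#endif /* 0 */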

/*

History:

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleaned up header file inclusion for WIN32 platforms
         * Cleaned up code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of the old version, but most details differ.)

*/