glue-dlmalloc.c
1 /*
2  * Copyright (c) 2008-2014, Pedigree Developers
3  *
4  * Please see the CONTRIB file in the root of the source tree for a full
5  * list of contributors.
6  *
7  * Permission to use, copy, modify, and distribute this software for any
8  * purpose with or without fee is hereby granted, provided that the above
9  * copyright notice and this permission notice appear in all copies.
10  *
11  * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
12  * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
13  * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
14  * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
15  * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
16  * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
17  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
18  */
19 
20 /*
21  This is a version (aka dlmalloc) of malloc/free/realloc written by
22  Doug Lea and released to the public domain, as explained at
23  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
24  comments, complaints, performance data, etc to dl@cs.oswego.edu
25 
26 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
27  Note: There may be an updated version of this malloc obtainable at
28  ftp://gee.cs.oswego.edu/pub/misc/malloc.c
29  Check before installing!
30 
31 * Quickstart
32 
33  This library is all in one file to simplify the most common usage:
34  ftp it, compile it (-O3), and link it into another program. All of
35  the compile-time options default to reasonable values for use on
36  most platforms. You might later want to step through various
37  compile-time and dynamic tuning options.
38 
39  For convenience, an include file for code using this malloc is at:
40  ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
41  You don't really need this .h file unless you call functions not
42  defined in your system include files. The .h file contains only the
43  excerpts from this file needed for using this malloc on ANSI C/C++
44  systems, so long as you haven't changed compile-time options about
45  naming and tuning parameters. If you do, then you can create your
46  own malloc.h that does include all settings by cutting at the point
47  indicated below. Note that you may already by default be using a C
48  library containing a malloc that is based on some version of this
49  malloc (for example in linux). You might still want to use the one
50  in this file to customize settings or to avoid overheads associated
51  with library versions.
52 
53 * Vital statistics:
54 
55  Supported pointer/size_t representation: 4 or 8 bytes
56  size_t MUST be an unsigned type of the same width as
57  pointers. (If you are using an ancient system that declares
58  size_t as a signed type, or need it to be a different width
59  than pointers, you can use a previous release of this malloc
60  (e.g. 2.7.2) supporting these.)
61 
62  Alignment: 8 bytes (minimum)
63  This suffices for nearly all current machines and C compilers.
64  However, you can define MALLOC_ALIGNMENT to be wider than this
65  if necessary (up to 128bytes), at the expense of using more space.
66 
67  Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
68  8 or 16 bytes (if 8byte sizes)
69  Each malloced chunk has a hidden word of overhead holding size
70  and status information, and additional cross-check word
71  if FOOTERS is defined.
72 
73  Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
74  8-byte ptrs: 32 bytes (including overhead)
75 
76  Even a request for zero bytes (i.e., malloc(0)) returns a
77  pointer to something of the minimum allocatable size.
78  The maximum overhead wastage (i.e., number of extra bytes
79  allocated than were requested in malloc) is less than or equal
80  to the minimum size, except for requests >= mmap_threshold that
81  are serviced via mmap(), where the worst case wastage is about
82  32 bytes plus the remainder from a system page (the minimal
83  mmap unit); typically 4096 or 8192 bytes.
84 
85  Security: static-safe; optionally more or less
86  The "security" of malloc refers to the ability of malicious
87  code to accentuate the effects of errors (for example, freeing
88  space that is not currently malloc'ed or overwriting past the
89  ends of chunks) in code that calls malloc. This malloc
90  guarantees not to modify any memory locations below the base of
91  heap, i.e., static variables, even in the presence of usage
92  errors. The routines additionally detect most improper frees
93  and reallocs. All this holds as long as the static bookkeeping
94  for malloc itself is not corrupted by some other means. This
95  is only one aspect of security -- these checks do not, and
96  cannot, detect all possible programming errors.
97 
98  If FOOTERS is defined nonzero, then each allocated chunk
99  carries an additional check word to verify that it was malloced
100  from its space. These check words are the same within each
101  execution of a program using malloc, but differ across
102  executions, so externally crafted fake chunks cannot be
103  freed. This improves security by rejecting frees/reallocs that
104  could corrupt heap memory, in addition to the checks preventing
105  writes to statics that are always on. This may further improve
106  security at the expense of time and space overhead. (Note that
107  FOOTERS may also be worth using with MSPACES.)
108 
109  By default detected errors cause the program to abort (calling
110  "abort()"). You can override this to instead proceed past
111  errors by defining PROCEED_ON_ERROR. In this case, a bad free
112  has no effect, and a malloc that encounters a bad address
113  caused by user overwrites will ignore the bad address by
114  dropping pointers and indices to all known memory. This may
115  be appropriate for programs that should continue if at all
116  possible in the face of programming errors, although they may
117  run out of memory because dropped memory is never reclaimed.
118 
119  If you don't like either of these options, you can define
120  CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
121  else. And if you are sure that your program using malloc has
122  no errors or vulnerabilities, you can define INSECURE to 1,
123  which might (or might not) provide a small performance improvement.
124 
125  It is also possible to limit the maximum total allocatable
126  space, using malloc_set_footprint_limit. This is not
127  designed as a security feature in itself (calls to set limits
128  are not screened or privileged), but may be useful as one
129  aspect of a secure implementation.
130 
131  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
132  When USE_LOCKS is defined, each public call to malloc, free,
133  etc is surrounded with a lock. By default, this uses a plain
134  pthread mutex, win32 critical section, or a spin-lock if
135  available for the platform and not disabled by setting
136  USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
137  recursive versions are used instead (which are not required for
138  base functionality but may be needed in layered extensions).
139  Using a global lock is not especially fast, and can be a major
140  bottleneck. It is designed only to provide minimal protection
141  in concurrent environments, and to provide a basis for
142  extensions. If you are using malloc in a concurrent program,
143  consider instead using nedmalloc
144  (http://www.nedprod.com/programs/portable/nedmalloc/) or
145  ptmalloc (See http://www.malloc.de), which are derived from
146  versions of this malloc.
147 
148  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
149  This malloc can use unix sbrk or any emulation (invoked using
150  the CALL_MORECORE macro) and/or mmap/munmap or any emulation
151  (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
152  memory. On most unix systems, it tends to work best if both
153  MORECORE and MMAP are enabled. On Win32, it uses emulations
154  based on VirtualAlloc. It also uses common C library functions
155  like memset.
156 
157  Compliance: I believe it is compliant with the Single Unix Specification
158  (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
159  others as well.
160 
161 * Overview of algorithms
162 
163  This is not the fastest, most space-conserving, most portable, or
164  most tunable malloc ever written. However it is among the fastest
165  while also being among the most space-conserving, portable and
166  tunable. Consistent balance across these factors results in a good
167  general-purpose allocator for malloc-intensive programs.
168 
169  In most ways, this malloc is a best-fit allocator. Generally, it
170  chooses the best-fitting existing chunk for a request, with ties
171  broken in approximately least-recently-used order. (This strategy
172  normally maintains low fragmentation.) However, for requests less
173  than 256bytes, it deviates from best-fit when there is not an
174  exactly fitting available chunk by preferring to use space adjacent
175  to that used for the previous small request, as well as by breaking
176  ties in approximately most-recently-used order. (These enhance
177  locality of series of small allocations.) And for very large requests
178  (>= 256Kb by default), it relies on system memory mapping
179  facilities, if supported. (This helps avoid carrying around and
180  possibly fragmenting memory used only for large chunks.)
181 
182  All operations (except malloc_stats and mallinfo) have execution
183  times that are bounded by a constant factor of the number of bits in
184  a size_t, not counting any clearing in calloc or copying in realloc,
185  or actions surrounding MORECORE and MMAP that have times
186  proportional to the number of non-contiguous regions returned by
187  system allocation routines, which is often just 1. In real-time
188  applications, you can optionally suppress segment traversals using
189  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
190  system allocators return non-contiguous spaces, at the typical
191  expense of carrying around more memory and increased fragmentation.
192 
193  The implementation is not very modular and seriously overuses
194  macros. Perhaps someday all C compilers will do as good a job
195  inlining modular code as can now be done by brute-force expansion,
196  but now, enough of them seem not to.
197 
198  Some compilers issue a lot of warnings about code that is
199  dead/unreachable only on some platforms, and also about intentional
200  uses of negation on unsigned types. All known cases of each can be
201  ignored.
202 
203  For a longer but out of date high-level description, see
204  http://gee.cs.oswego.edu/dl/html/malloc.html
205 
206 * MSPACES
207  If MSPACES is defined, then in addition to malloc, free, etc.,
208  this file also defines mspace_malloc, mspace_free, etc. These
209  are versions of malloc routines that take an "mspace" argument
210  obtained using create_mspace, to control all internal bookkeeping.
211  If ONLY_MSPACES is defined, only these versions are compiled.
212  So if you would like to use this allocator for only some allocations,
213  and your system malloc for others, you can compile with
214  ONLY_MSPACES and then do something like...
215  static mspace mymspace = create_mspace(0,0); // for example
216  #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
217 
218  (Note: If you only need one instance of an mspace, you can instead
219  use "USE_DL_PREFIX" to relabel the global malloc.)
220 
221  You can similarly create thread-local allocators by storing
222  mspaces as thread-locals. For example:
223  static __thread mspace tlms = 0;
224  void* tlmalloc(size_t bytes) {
225    if (tlms == 0) tlms = create_mspace(0, 0);
226    return mspace_malloc(tlms, bytes);
227  }
228  void tlfree(void* mem) { mspace_free(tlms, mem); }
229 
230  Unless FOOTERS is defined, each mspace is completely independent.
231  You cannot allocate from one and free to another (although
232  conformance is only weakly checked, so usage errors are not always
233  caught). If FOOTERS is defined, then each chunk carries around a tag
234  indicating its originating mspace, and frees are directed to their
235  originating spaces. Normally, this requires use of locks.
236 
237  ------------------------- Compile-time options ---------------------------
238 
239 Be careful in setting #define values for numerical constants of type
240 size_t. On some systems, literal values are not automatically extended
241 to size_t precision unless they are explicitly casted. You can also
242 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
243 
244 WIN32 default: defined if _WIN32 defined
245  Defining WIN32 sets up defaults for MS environment and compilers.
246  Otherwise defaults are for unix. Beware that there seem to be some
247  cases where this malloc might not be a pure drop-in replacement for
248  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
249  SetDIBits()) may be due to bugs in some video driver implementations
250  when pixel buffers are malloc()ed, and the region spans more than
251  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
252  default granularity, pixel buffers may straddle virtual allocation
253  regions more often than when using the Microsoft allocator. You can
254  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
255  buffers rather than using malloc(). If this is not possible,
256  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
257  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
258  conditions use _MSC_VER to distinguish them.
259 
260 DLMALLOC_EXPORT default: extern
261  Defines how public APIs are declared. If you want to export via a
262  Windows DLL, you might define this as
263  #define DLMALLOC_EXPORT extern __declspec(dllexport)
264  If you want a POSIX ELF shared object, you might use
265  #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
266 
267 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
268  Controls the minimum alignment for malloc'ed chunks. It must be a
269  power of two and at least 8, even on machines for which smaller
270  alignments would suffice. It may be defined as larger than this
271  though. Note however that code and data structures are optimized for
272  the case of 8-byte alignment.
273 
274 MSPACES default: 0 (false)
275  If true, compile in support for independent allocation spaces.
276  This is only supported if HAVE_MMAP is true.
277 
278 ONLY_MSPACES default: 0 (false)
279  If true, only compile in mspace versions, not regular versions.
280 
281 USE_LOCKS default: 0 (false)
282  Causes each call to each public routine to be surrounded with
283  pthread or WIN32 mutex lock/unlock. (If set true, this can be
284  overridden on a per-mspace basis for mspace versions.) If set to a
285  non-zero value other than 1, locks are used, but their
286  implementation is left out, so lock functions must be supplied manually,
287  as described below.
288 
289 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
290  If true, uses custom spin locks for locking. This is currently
291  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
292  MS compilers. Otherwise, posix locks or win32 critical sections are
293  used.
294 
295 USE_RECURSIVE_LOCKS default: not defined
296  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
297  uses plain mutexes. This is not required for malloc proper, but may
298  be needed for layered allocators such as nedmalloc.
299 
300 LOCK_AT_FORK default: not defined
301  If defined nonzero, performs pthread_atfork upon initialization
302  to initialize child lock while holding parent lock. The implementation
303  assumes that pthread locks (not custom locks) are being used. In other
304  cases, you may need to customize the implementation.
305 
306 FOOTERS default: 0
307  If true, provide extra checking and dispatching by placing
308  information in the footers of allocated chunks. This adds
309  space and time overhead.
310 
311 INSECURE default: 0
312  If true, omit checks for usage errors and heap space overwrites.
313 
314 USE_DL_PREFIX default: NOT defined
315  Causes compiler to prefix all public routines with the string 'dl'.
316  This can be useful when you only want to use this malloc in one part
317  of a program, using your regular system malloc elsewhere.
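 For example (a minimal sketch, assuming compilation with -DUSE_DL_PREFIX;
 the buffer sizes are arbitrary):

   void* a = dlmalloc(1024);   // served by this allocator
   void* b = malloc(1024);     // served by the system C library
   // ... use a and b ...
   dlfree(a);                  // return each pointer to its own allocator
   free(b);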
318 
319 MALLOC_INSPECT_ALL default: NOT defined
320  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
321  perform traversal of all heap space. Unless access to these
322  functions is otherwise restricted, you probably do not want to
323  include them in secure implementations.
324 
325 ABORT default: defined as abort()
326  Defines how to abort on failed checks. On most systems, a failed
327  check cannot die with an "assert" or even print an informative
328  message, because the underlying print routines in turn call malloc,
329  which will fail again. Generally, the best policy is to simply call
330  abort(). It's not very useful to do more than this because many
331  errors due to overwriting will show up as address faults (null, odd
332  addresses etc) rather than malloc-triggered checks, so will also
333  abort. Also, most compilers know that abort() does not return, so
334  can better optimize code conditionally calling it.
335 
336 PROCEED_ON_ERROR default: defined as 0 (false)
337  Controls whether detected bad addresses cause them to be bypassed
338  rather than aborting. If set, detected bad arguments to free and
339  realloc are ignored. And all bookkeeping information is zeroed out
340  upon a detected overwrite of freed heap space, thus losing the
341  ability to ever return it from malloc again, but enabling the
342  application to proceed. If PROCEED_ON_ERROR is defined, the
343  static variable malloc_corruption_error_count is compiled in
344  and can be examined to see if errors have occurred. This option
345  generates slower code than the default abort policy.
346 
347 DEBUG default: NOT defined
348  The DEBUG setting is mainly intended for people trying to modify
349  this code or diagnose problems when porting to new platforms.
350  However, it may also be able to better isolate user errors than just
351  using runtime checks. The assertions in the check routines spell
352  out in more detail the assumptions and invariants underlying the
353  algorithms. The checking is fairly extensive, and will slow down
354  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
355  set will attempt to check every non-mmapped allocated and free chunk
356  in the course of computing the summaries.
357 
358 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
359  Debugging assertion failures can be nearly impossible if your
360  version of the assert macro causes malloc to be called, which will
361  lead to a cascade of further failures, blowing the runtime stack.
362  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
363  which will usually make debugging easier.
364 
365 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
366  The action to take before "return 0" when malloc fails to return
367  memory because none is available.
368 
369 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
370  True if this system supports sbrk or an emulation of it.
371 
372 MORECORE default: sbrk
373  The name of the sbrk-style system routine to call to obtain more
374  memory. See below for guidance on writing custom MORECORE
375  functions. The type of the argument to sbrk/MORECORE varies across
376  systems. It cannot be size_t, because it supports negative
377  arguments, so it is normally the signed type of the same width as
378  size_t (sometimes declared as "intptr_t"). It doesn't much matter
379  though. Internally, we only call it with arguments less than half
380  the max value of a size_t, which should work across all reasonable
381  possibilities, although sometimes generating compiler warnings.
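 As an illustration only, a hand-crafted MORECORE over a fixed static arena
 might look like the sketch below (names are illustrative; assumes intptr_t
 from <stdint.h>, and would normally be paired with MORECORE_CANNOT_TRIM
 since negative arguments are rejected; returning (void*)-1 signals failure,
 as sbrk does):

   static char   my_arena[1 << 20];       // 1MB of backing storage
   static size_t my_arena_top = 0;

   void* my_morecore(intptr_t incr) {
     if (incr < 0 ||
         my_arena_top + (size_t)incr > sizeof(my_arena))
       return (void*)-1;                  // cannot trim or grow further
     void* p = my_arena + my_arena_top;   // current top (also for incr == 0)
     my_arena_top += (size_t)incr;
     return p;
   }
   // then compile with: #define MORECORE my_morecore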
382 
383 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
384  If true, take advantage of fact that consecutive calls to MORECORE
385  with positive arguments always return contiguous increasing
386  addresses. This is true of unix sbrk. It does not hurt too much to
387  set it true anyway, since malloc copes with non-contiguities.
388  Setting it false when definitely non-contiguous saves time
389  and possibly wasted space it would take to discover this though.
390 
391 MORECORE_CANNOT_TRIM default: NOT defined
392  True if MORECORE cannot release space back to the system when given
393  negative arguments. This is generally necessary only if you are
394  using a hand-crafted MORECORE function that cannot handle negative
395  arguments.
396 
397 NO_SEGMENT_TRAVERSAL default: 0
398  If non-zero, suppresses traversals of memory segments
399  returned by either MORECORE or CALL_MMAP. This disables
400  merging of segments that are contiguous, and selectively
401  releasing them to the OS if unused, but bounds execution times.
402 
403 HAVE_MMAP default: 1 (true)
404  True if this system supports mmap or an emulation of it. If so, and
405  HAVE_MORECORE is not true, MMAP is used for all system
406  allocation. If set and HAVE_MORECORE is true as well, MMAP is
407  primarily used to directly allocate very large blocks. It is also
408  used as a backup strategy in cases where MORECORE fails to provide
409  space from system. Note: A single call to MUNMAP is assumed to be
410  able to unmap memory that may have been allocated using multiple calls
411  to MMAP, so long as they are adjacent.
412 
413 HAVE_MREMAP default: 1 on linux, else 0
414  If true realloc() uses mremap() to re-allocate large blocks and
415  extend or shrink allocation spaces.
416 
417 MMAP_CLEARS default: 1 except on WINCE.
418  True if mmap clears memory so calloc doesn't need to. This is true
419  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
420 
421 USE_BUILTIN_FFS default: 0 (i.e., not used)
422  Causes malloc to use the builtin ffs() function to compute indices.
423  Some compilers may recognize and intrinsify ffs to be faster than the
424  supplied C version. Also, the case of x86 using gcc is special-cased
425  to an asm instruction, so is already as fast as it can be, and so
426  this setting has no effect. Similarly for Win32 under recent MS compilers.
427  (On most x86s, the asm version is only slightly faster than the C version.)
428 
429 malloc_getpagesize default: derive from system includes, or 4096.
430  The system page size. To the extent possible, this malloc manages
431  memory from the system in page-size units. This may be (and
432  usually is) a function rather than a constant. This is ignored
433  if WIN32, where page size is determined using GetSystemInfo during
434  initialization.
435 
436 USE_DEV_RANDOM default: 0 (i.e., not used)
437  Causes malloc to use /dev/random to initialize secure magic seed for
438  stamping footers. Otherwise, the current time is used.
439 
440 NO_MALLINFO default: 0
441  If defined, don't compile "mallinfo". This can be a simple way
442  of dealing with mismatches between system declarations and
443  those in this file.
444 
445 MALLINFO_FIELD_TYPE default: size_t
446  The type of the fields in the mallinfo struct. This was originally
447  defined as "int" in SVID etc, but is more usefully defined as
448  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
449 
450 NO_MALLOC_STATS default: 0
451  If defined, don't compile "malloc_stats". This avoids calls to
452  fprintf and bringing in stdio dependencies you might not want.
453 
454 REALLOC_ZERO_BYTES_FREES default: not defined
455  This should be set if a call to realloc with zero bytes should
456  be the same as a call to free. Some people think it should. Otherwise,
457  since this malloc returns a unique pointer for malloc(0), so does
458  realloc(p, 0).
459 
460 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
461 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
462 LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H default: NOT defined unless on WIN32
463  Define these if your system does not have these header files.
464  You might need to manually insert some of the declarations they provide.
465 
466 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
467  system_info.dwAllocationGranularity in WIN32,
468  otherwise 64K.
469  Also settable using mallopt(M_GRANULARITY, x)
470  The unit for allocating and deallocating memory from the system. On
471  most systems with contiguous MORECORE, there is no reason to
472  make this more than a page. However, systems with MMAP tend to
473  either require or encourage larger granularities. You can increase
474  this value to prevent system allocation functions from being called
475  so often, especially if they are slow. The value must be at least one
476  page and must be a power of two. Setting to 0 causes initialization
477  to either page size or win32 region size. (Note: In previous
478  versions of malloc, the equivalent of this option was called
479  "TOP_PAD")
480 
481 DEFAULT_TRIM_THRESHOLD default: 2MB
482  Also settable using mallopt(M_TRIM_THRESHOLD, x)
483  The maximum amount of unused top-most memory to keep before
484  releasing via malloc_trim in free(). Automatic trimming is mainly
485  useful in long-lived programs using contiguous MORECORE. Because
486  trimming via sbrk can be slow on some systems, and can sometimes be
487  wasteful (in cases where programs immediately afterward allocate
488  more large chunks) the value should be high enough so that your
489  overall system performance would improve by releasing this much
490  memory. As a rough guide, you might set to a value close to the
491  average size of a process (program) running on your system.
492  Releasing this much memory would allow such a process to run in
493  memory. Generally, it is worth tuning trim thresholds when a
494  program undergoes phases where several large chunks are allocated
495  and released in ways that can reuse each other's storage, perhaps
496  mixed with phases where there are no such chunks at all. The trim
497  value must be greater than page size to have any useful effect. To
498  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
499  some people use of mallocing a huge space and then freeing it at
500  program startup, in an attempt to reserve system memory, doesn't
501  have the intended effect under automatic trimming, since that memory
502  will immediately be returned to the system.
503 
504 DEFAULT_MMAP_THRESHOLD default: 256K
505  Also settable using mallopt(M_MMAP_THRESHOLD, x)
506  The request size threshold for using MMAP to directly service a
507  request. Requests of at least this size that cannot be allocated
508  using already-existing space will be serviced via mmap. (If enough
509  normal freed space already exists it is used instead.) Using mmap
510  segregates relatively large chunks of memory so that they can be
511  individually obtained and released from the host system. A request
512  serviced through mmap is never reused by any other request (at least
513  not directly; the system may just so happen to remap successive
514  requests to the same locations). Segregating space in this way has
515  the benefits that: Mmapped space can always be individually released
516  back to the system, which helps keep the system level memory demands
517  of a long-lived program low. Also, mapped memory doesn't become
518  `locked' between other chunks, as can happen with normally allocated
519  chunks, which means that even trimming via malloc_trim would not
520  release them. However, it has the disadvantage that the space
521  cannot be reclaimed, consolidated, and then used to service later
522  requests, as happens with normal chunks. The advantages of mmap
523  nearly always outweigh disadvantages for "large" chunks, but the
524  value of "large" may vary across systems. The default is an
525  empirically derived value that works well in most systems. You can
526  disable mmap by setting to MAX_SIZE_T.
527 
528 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
529  The number of consolidated frees between checks to release
530  unused segments when freeing. When using non-contiguous segments,
531  especially with multiple mspaces, checking only for topmost space
532  doesn't always suffice to trigger trimming. To compensate for this,
533  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
534  current number of segments, if greater) try to release unused
535  segments to the OS when freeing chunks that result in
536  consolidation. The best value for this parameter is a compromise
537  between slowing down frees with relatively costly checks that
538  rarely trigger versus holding on to unused memory. To effectively
539  disable, set to MAX_SIZE_T. This may lead to a very slight speed
540  improvement at the expense of carrying around more memory.
541 */
542 
544 #define _COMPILING_NEWLIB
545 #include <newlib.h>
546 #include <_ansi.h>
547 
548 // Define errno before including syscall.h.
549 #include "errno.h"
550 #define errno (*__errno())
551 extern int *__errno (void);
555 /* Version identifier to allow people to support multiple versions */
556 #ifndef DLMALLOC_VERSION
557 #define DLMALLOC_VERSION 20806
558 #endif /* DLMALLOC_VERSION */
559 
560 #ifndef DLMALLOC_EXPORT
561 #define DLMALLOC_EXPORT extern
562 #endif
563 
564 #ifndef WIN32
565 #ifdef _WIN32
566 #define WIN32 1
567 #endif /* _WIN32 */
568 #ifdef _WIN32_WCE
569 #define LACKS_FCNTL_H
570 #define WIN32 1
571 #endif /* _WIN32_WCE */
572 #endif /* WIN32 */
573 #ifdef WIN32
574 #define WIN32_LEAN_AND_MEAN
575 #include <windows.h>
576 #include <tchar.h>
577 #define HAVE_MMAP 1
578 #define HAVE_MORECORE 0
579 #define LACKS_UNISTD_H
580 #define LACKS_SYS_PARAM_H
581 #define LACKS_SYS_MMAN_H
582 #define LACKS_STRING_H
583 #define LACKS_STRINGS_H
584 #define LACKS_SYS_TYPES_H
585 #define LACKS_ERRNO_H
586 #define LACKS_SCHED_H
587 #ifndef MALLOC_FAILURE_ACTION
588 #define MALLOC_FAILURE_ACTION
589 #endif /* MALLOC_FAILURE_ACTION */
590 #ifndef MMAP_CLEARS
591 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
592 #define MMAP_CLEARS 0
593 #else
594 #define MMAP_CLEARS 1
595 #endif /* _WIN32_WCE */
596 #endif /*MMAP_CLEARS */
597 #endif /* WIN32 */
598 
599 #if defined(DARWIN) || defined(_DARWIN)
600 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
601 #ifndef HAVE_MORECORE
602 #define HAVE_MORECORE 0
603 #define HAVE_MMAP 1
604 /* OSX allocators provide 16 byte alignment */
605 #ifndef MALLOC_ALIGNMENT
606 #define MALLOC_ALIGNMENT ((size_t)16U)
607 #endif
608 #endif /* HAVE_MORECORE */
609 #endif /* DARWIN */
610 
611 #ifndef LACKS_SYS_TYPES_H
612 #include <sys/types.h> /* For size_t */
613 #endif /* LACKS_SYS_TYPES_H */
614 
615 /* The maximum possible size_t value has all bits set */
616 #define MAX_SIZE_T (~(size_t)0)
617 
618 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
619 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
620  (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
621 #endif /* USE_LOCKS */
622 
623 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
624 #if ((defined(__GNUC__) && \
625  ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
626  defined(__i386__) || defined(__x86_64__))) || \
627  (defined(_MSC_VER) && _MSC_VER>=1310))
628 #ifndef USE_SPIN_LOCKS
629 #define USE_SPIN_LOCKS 1
630 #endif /* USE_SPIN_LOCKS */
631 #elif USE_SPIN_LOCKS
632 #error "USE_SPIN_LOCKS defined without implementation"
633 #endif /* ... locks available... */
634 #elif !defined(USE_SPIN_LOCKS)
635 #define USE_SPIN_LOCKS 0
636 #endif /* USE_LOCKS */
637 
638 #ifndef ONLY_MSPACES
639 #define ONLY_MSPACES 0
640 #endif /* ONLY_MSPACES */
641 #ifndef MSPACES
642 #if ONLY_MSPACES
643 #define MSPACES 1
644 #else /* ONLY_MSPACES */
645 #define MSPACES 0
646 #endif /* ONLY_MSPACES */
647 #endif /* MSPACES */
648 #ifndef MALLOC_ALIGNMENT
649 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
650 #endif /* MALLOC_ALIGNMENT */
651 #ifndef FOOTERS
652 #define FOOTERS 0
653 #endif /* FOOTERS */
654 #ifndef ABORT
655 #define ABORT abort()
656 #endif /* ABORT */
657 #ifndef ABORT_ON_ASSERT_FAILURE
658 #define ABORT_ON_ASSERT_FAILURE 1
659 #endif /* ABORT_ON_ASSERT_FAILURE */
660 #ifndef PROCEED_ON_ERROR
661 #define PROCEED_ON_ERROR 0
662 #endif /* PROCEED_ON_ERROR */
663 
664 #ifndef INSECURE
665 #define INSECURE 0
666 #endif /* INSECURE */
667 #ifndef MALLOC_INSPECT_ALL
668 #define MALLOC_INSPECT_ALL 0
669 #endif /* MALLOC_INSPECT_ALL */
670 #ifndef HAVE_MMAP
671 #define HAVE_MMAP 1
672 #endif /* HAVE_MMAP */
673 #ifndef MMAP_CLEARS
674 #define MMAP_CLEARS 1
675 #endif /* MMAP_CLEARS */
676 #ifndef HAVE_MREMAP
677 #ifdef linux
678 #define HAVE_MREMAP 1
679 #define _GNU_SOURCE /* Turns on mremap() definition */
680 #else /* linux */
681 #define HAVE_MREMAP 0
682 #endif /* linux */
683 #endif /* HAVE_MREMAP */
684 #ifndef MALLOC_FAILURE_ACTION
685 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
686 #endif /* MALLOC_FAILURE_ACTION */
687 #ifndef HAVE_MORECORE
688 #if ONLY_MSPACES
689 #define HAVE_MORECORE 0
690 #else /* ONLY_MSPACES */
691 #define HAVE_MORECORE 1
692 #endif /* ONLY_MSPACES */
693 #endif /* HAVE_MORECORE */
694 #if !HAVE_MORECORE
695 #define MORECORE_CONTIGUOUS 0
696 #else /* !HAVE_MORECORE */
697 #define MORECORE_DEFAULT sbrk
698 #ifndef MORECORE_CONTIGUOUS
699 #define MORECORE_CONTIGUOUS 1
700 #endif /* MORECORE_CONTIGUOUS */
701 #endif /* HAVE_MORECORE */
702 #ifndef DEFAULT_GRANULARITY
703 #if (MORECORE_CONTIGUOUS || defined(WIN32))
704 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
705 #else /* MORECORE_CONTIGUOUS */
706 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
707 #endif /* MORECORE_CONTIGUOUS */
708 #endif /* DEFAULT_GRANULARITY */
709 #ifndef DEFAULT_TRIM_THRESHOLD
710 #ifndef MORECORE_CANNOT_TRIM
711 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
712 #else /* MORECORE_CANNOT_TRIM */
713 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
714 #endif /* MORECORE_CANNOT_TRIM */
715 #endif /* DEFAULT_TRIM_THRESHOLD */
716 #ifndef DEFAULT_MMAP_THRESHOLD
717 #if HAVE_MMAP
718 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
719 #else /* HAVE_MMAP */
720 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
721 #endif /* HAVE_MMAP */
722 #endif /* DEFAULT_MMAP_THRESHOLD */
723 #ifndef MAX_RELEASE_CHECK_RATE
724 #if HAVE_MMAP
725 #define MAX_RELEASE_CHECK_RATE 4095
726 #else
727 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
728 #endif /* HAVE_MMAP */
729 #endif /* MAX_RELEASE_CHECK_RATE */
730 #ifndef USE_BUILTIN_FFS
731 #define USE_BUILTIN_FFS 0
732 #endif /* USE_BUILTIN_FFS */
733 #ifndef USE_DEV_RANDOM
734 #define USE_DEV_RANDOM 0
735 #endif /* USE_DEV_RANDOM */
736 #ifndef NO_MALLINFO
737 #define NO_MALLINFO 0
738 #endif /* NO_MALLINFO */
739 #ifndef MALLINFO_FIELD_TYPE
740 #define MALLINFO_FIELD_TYPE size_t
741 #endif /* MALLINFO_FIELD_TYPE */
742 #ifndef NO_MALLOC_STATS
743 #define NO_MALLOC_STATS 0
744 #endif /* NO_MALLOC_STATS */
745 #ifndef NO_SEGMENT_TRAVERSAL
746 #define NO_SEGMENT_TRAVERSAL 0
747 #endif /* NO_SEGMENT_TRAVERSAL */
748 
749 /*
750  mallopt tuning options. SVID/XPG defines four standard parameter
751  numbers for mallopt, normally defined in malloc.h. None of these
752  are used in this malloc, so setting them has no effect. But this
753  malloc does support the following options.
754 */
755 
756 #define M_TRIM_THRESHOLD (-1)
757 #define M_GRANULARITY (-2)
758 #define M_MMAP_THRESHOLD (-3)
759 
760 /* ------------------------ Mallinfo declarations ------------------------ */
761 
762 #if !NO_MALLINFO
763 /*
764  This version of malloc supports the standard SVID/XPG mallinfo
765  routine that returns a struct containing usage properties and
766  statistics. It should work on any system that has a
767  /usr/include/malloc.h defining struct mallinfo. The main
768  declaration needed is the mallinfo struct that is returned (by-copy)
769  by mallinfo(). The mallinfo struct contains a bunch of fields that
770  are not even meaningful in this version of malloc. These fields
771  are instead filled by mallinfo() with other numbers that might be of
772  interest.
773 
774  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
775  /usr/include/malloc.h file that includes a declaration of struct
776  mallinfo. If so, it is included; else a compliant version is
777  declared below. These must be precisely the same for mallinfo() to
778  work. The original SVID version of this struct, defined on most
779  systems with mallinfo, declares all fields as ints. But some others
780  define as unsigned long. If your system defines the fields using a
781  type of different width than listed here, you MUST #include your
782  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
783 */
784 
785 /* #define HAVE_USR_INCLUDE_MALLOC_H */
786 
787 #ifdef HAVE_USR_INCLUDE_MALLOC_H
788 #include "/usr/include/malloc.h"
789 #else /* HAVE_USR_INCLUDE_MALLOC_H */
790 #ifndef STRUCT_MALLINFO_DECLARED
791 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
792 #define _STRUCT_MALLINFO
793 #define STRUCT_MALLINFO_DECLARED 1
794 struct mallinfo {
795  MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
796  MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
797  MALLINFO_FIELD_TYPE smblks; /* always 0 */
798  MALLINFO_FIELD_TYPE hblks; /* always 0 */
799  MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
800  MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
801  MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
802  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
803  MALLINFO_FIELD_TYPE fordblks; /* total free space */
804  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
805 };
806 #endif /* STRUCT_MALLINFO_DECLARED */
807 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
808 #endif /* NO_MALLINFO */
809 
810 /*
811  Try to persuade compilers to inline. The most critical functions for
812  inlining are defined as macros, so these aren't used for them.
813 */
814 
815 #ifndef FORCEINLINE
816  #if defined(__GNUC__)
817 #define FORCEINLINE __inline __attribute__ ((always_inline))
818  #elif defined(_MSC_VER)
819  #define FORCEINLINE __forceinline
820  #endif
821 #endif
822 #ifndef NOINLINE
823  #if defined(__GNUC__)
824  #define NOINLINE __attribute__ ((noinline))
825  #elif defined(_MSC_VER)
826  #define NOINLINE __declspec(noinline)
827  #else
828  #define NOINLINE
829  #endif
830 #endif
831 
832 #ifdef __cplusplus
833 extern "C" {
834 #ifndef FORCEINLINE
835  #define FORCEINLINE inline
836 #endif
837 #endif /* __cplusplus */
838 #ifndef FORCEINLINE
839  #define FORCEINLINE
840 #endif
841 
842 #if !ONLY_MSPACES
843 
844 /* ------------------- Declarations of public routines ------------------- */
845 
846 #ifndef USE_DL_PREFIX
847 #define dlcalloc calloc
848 #define dlfree free
849 #define dlmalloc malloc
850 #define dlmemalign memalign
851 #define dlposix_memalign posix_memalign
852 #define dlrealloc realloc
853 #define dlrealloc_in_place realloc_in_place
854 #define dlvalloc valloc
855 #define dlpvalloc pvalloc
856 #define dlmallinfo mallinfo
857 #define dlmallopt mallopt
858 #define dlmalloc_trim malloc_trim
859 #define dlmalloc_stats malloc_stats
860 #define dlmalloc_usable_size malloc_usable_size
861 #define dlmalloc_footprint malloc_footprint
862 #define dlmalloc_max_footprint malloc_max_footprint
863 #define dlmalloc_footprint_limit malloc_footprint_limit
864 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
865 #define dlmalloc_inspect_all malloc_inspect_all
866 #define dlindependent_calloc independent_calloc
867 #define dlindependent_comalloc independent_comalloc
868 #define dlbulk_free bulk_free
869 #endif /* USE_DL_PREFIX */
870 
871 /*
872  malloc(size_t n)
873  Returns a pointer to a newly allocated chunk of at least n bytes, or
874  null if no space is available, in which case errno is set to ENOMEM
875  on ANSI C systems.
876 
877  If n is zero, malloc returns a minimum-sized chunk. (The minimum
878  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
879  systems.) Note that size_t is an unsigned type, so calls with
880  arguments that would be negative if signed are interpreted as
881  requests for huge amounts of space, which will often fail. The
882  maximum supported value of n differs across systems, but is in all
883  cases less than the maximum representable value of a size_t.
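 For example (a sketch; handle_out_of_memory is a hypothetical handler):

   void* p = malloc(200);
   if (p == 0)               // errno has been set to ENOMEM
     handle_out_of_memory();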
884 */
885 DLMALLOC_EXPORT void* dlmalloc(size_t);
886 
887 /*
888  free(void* p)
889  Releases the chunk of memory pointed to by p, that had been previously
890  allocated using malloc or a related routine such as realloc.
891  It has no effect if p is null. If p was not malloced or already
892  freed, free(p) will by default cause the current program to abort.
893 */
894 DLMALLOC_EXPORT void dlfree(void*);
895 
896 /*
897  calloc(size_t n_elements, size_t element_size);
898  Returns a pointer to n_elements * element_size bytes, with all locations
899  set to zero.
900 */
901 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
902 
903 /*
904  realloc(void* p, size_t n)
905  Returns a pointer to a chunk of size n that contains the same data
906  as does chunk p up to the minimum of (n, p's size) bytes, or null
907  if no space is available.
908 
909  The returned pointer may or may not be the same as p. The algorithm
910  prefers extending p in most cases when possible, otherwise it
911  employs the equivalent of a malloc-copy-free sequence.
912 
913  If p is null, realloc is equivalent to malloc.
914 
915  If space is not available, realloc returns null, errno is set (if on
916  ANSI) and p is NOT freed.
917 
918  If n is for fewer bytes than already held by p, the newly unused
919  space is lopped off and freed if possible. realloc with a size
920  argument of zero (re)allocates a minimum-sized chunk.
921 
922  The old unix realloc convention of allowing the last-free'd chunk
923  to be used as an argument to realloc is not supported.
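 For example, the usual safe-resize idiom keeps the old pointer until the
 call is known to have succeeded (a sketch; names are illustrative):

   void* bigger = realloc(buf, new_size);
   if (bigger == 0) {
     // buf is still valid here and must still be freed eventually
     handle_out_of_memory();
   } else {
     buf = bigger;
   }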
924 */
925 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
926 
927 /*
928  realloc_in_place(void* p, size_t n)
929  Resizes the space allocated for p to size n, only if this can be
930  done without moving p (i.e., only if there is adjacent space
931  available if n is greater than p's current allocated size, or n is
932  less than or equal to p's size). This may be used instead of plain
933  realloc if an alternative allocation strategy is needed upon failure
934  to expand space; for example, reallocation of a buffer that must be
935  memory-aligned or cleared. You can use realloc_in_place to trigger
936  these alternatives only when needed.
937 
938  Returns p if successful; otherwise null.
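 For example, a buffer that must stay 64-byte aligned might be grown with
 the sketch below (old_size is tracked by the caller; names are illustrative):

   if (realloc_in_place(p, new_size) == 0) {  // could not resize in place
     void* q = memalign(64, new_size);        // fall back to a new aligned block
     if (q != 0) {
       memcpy(q, p, old_size);
       free(p);
       p = q;
     }
   }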
939 */
940 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
941 
942 /*
943  memalign(size_t alignment, size_t n);
944  Returns a pointer to a newly allocated chunk of n bytes, aligned
945  in accord with the alignment argument.
946 
947  The alignment argument should be a power of two. If the argument is
948  not a power of two, the nearest greater power is used.
949  8-byte alignment is guaranteed by normal malloc calls, so don't
950  bother calling memalign with an argument of 8 or less.
951 
952  Overreliance on memalign is a sure way to fragment space.
953 */
954 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
955 
956 /*
957  int posix_memalign(void** pp, size_t alignment, size_t n);
958  Allocates a chunk of n bytes, aligned in accord with the alignment
959  argument. Differs from memalign only in that it (1) assigns the
960  allocated memory to *pp rather than returning it, (2) fails and
961  returns EINVAL if the alignment is not a power of two, and (3) fails and
962  returns ENOMEM if memory cannot be allocated.
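 A minimal usage sketch:

   void* p = 0;
   int rc = posix_memalign(&p, 64, 4096);  // 64-byte alignment, 4096 bytes
   if (rc == 0) {
     // ... use p ...
     free(p);
   }
   // on failure, rc is EINVAL or ENOMEM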
963 */
964 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
965 
966 /*
967  valloc(size_t n);
968  Equivalent to memalign(pagesize, n), where pagesize is the page
969  size of the system. If the pagesize is unknown, 4096 is used.
970 */
971 DLMALLOC_EXPORT void* dlvalloc(size_t);
972 
973 /*
974  mallopt(int parameter_number, int parameter_value)
975  Sets tunable parameters. The format is to provide a
976  (parameter-number, parameter-value) pair. mallopt then sets the
977  corresponding parameter to the argument value if it can (i.e., so
978  long as the value is meaningful), and returns 1 if successful else
979  0. To work around the fact that mallopt is specified to use int,
980  not size_t parameters, the value -1 is specially treated as the
981  maximum unsigned size_t value.
982 
983  SVID/XPG/ANSI defines four standard param numbers for mallopt,
984  normally defined in malloc.h. None of these are used in this malloc,
985  so setting them has no effect. But this malloc also supports other
986  options in mallopt. See below for details. Briefly, supported
987  parameters are as follows (listed defaults are for "typical"
988  configurations).
989 
990  Symbol              param #   default       allowed param values
991  M_TRIM_THRESHOLD      -1      2*1024*1024   any   (-1 disables)
992  M_GRANULARITY         -2      page size     any power of 2 >= page size
993  M_MMAP_THRESHOLD      -3      256*1024      any   (or 0 if no MMAP support)
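 For example (a sketch; the values are arbitrary):

   mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // mmap only requests of 1MB or more
   mallopt(M_TRIM_THRESHOLD, -1);           // -1 maps to MAX_SIZE_T: never trim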
994 */
995 DLMALLOC_EXPORT int dlmallopt(int, int);
996 
997 /*
998  malloc_footprint();
999  Returns the number of bytes obtained from the system. The total
1000  number of bytes allocated by malloc, realloc etc., is less than this
1001  value. Unlike mallinfo, this function returns only a precomputed
1002  result, so can be called frequently to monitor memory consumption.
1003  Even if locks are otherwise defined, this function does not use them,
1004  so results might not be up to date.
1005 */
1006 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
1007 
1008 /*
1009  malloc_max_footprint();
1010  Returns the maximum number of bytes obtained from the system. This
1011  value will be greater than current footprint if deallocated space
1012  has been reclaimed by the system. The peak number of bytes allocated
1013  by malloc, realloc etc., is less than this value. Unlike mallinfo,
1014  this function returns only a precomputed result, so can be called
1015  frequently to monitor memory consumption. Even if locks are
1016  otherwise defined, this function does not use them, so results might
1017  not be up to date.
1018 */
1019 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
1020 
1021 /*
1022  malloc_footprint_limit();
1023  Returns the number of bytes that the heap is allowed to obtain from
1024  the system, returning the last value returned by
1025  malloc_set_footprint_limit, or the maximum size_t value if
1026  never set. The returned value reflects a permission. There is no
1027  guarantee that this number of bytes can actually be obtained from
1028  the system.
1029 */
1030 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
1031 
1032 /*
1033  malloc_set_footprint_limit();
1034  Sets the maximum number of bytes to obtain from the system, causing
1035  failure returns from malloc and related functions upon attempts to
1036  exceed this value. The argument value may be subject to page
1037  rounding to an enforceable limit; this actual value is returned.
1038  Using an argument of the maximum possible size_t effectively
1039  disables checks. If the argument is less than or equal to the
1040  current malloc_footprint, then all future allocations that require
1041  additional system memory will fail. However, invocation cannot
1042  retroactively deallocate existing used memory.
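 For example, to cap the heap at roughly 64MB (a sketch):

   size_t limit = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
   // 'limit' is the page-rounded value that will actually be enforced;
   // once reached, malloc returns 0 instead of obtaining more system memory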
1043 */
1044 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1045 
1046 #if MALLOC_INSPECT_ALL
1047 /*
1048  malloc_inspect_all(void(*handler)(void *start,
1049  void *end,
1050  size_t used_bytes,
1051  void* callback_arg),
1052  void* arg);
1053  Traverses the heap and calls the given handler for each managed
1054  region, skipping all bytes that are (or may be) used for bookkeeping
1055  purposes. Traversal does not include chunks that have been
1056  directly memory mapped. Each reported region begins at the start
1057  address, and continues up to but not including the end address. The
1058  first used_bytes of the region contain allocated data. If
1059  used_bytes is zero, the region is unallocated. The handler is
1060  invoked with the given callback argument. If locks are defined, they
1061  are held during the entire traversal. It is a bad idea to invoke
1062  other malloc functions from within the handler.
1063 
1064  For example, to count the number of in-use chunks with size of at
1065  least 1000 bytes, you could write:
1066    static int count = 0;
1067    void count_chunks(void* start, void* end, size_t used, void* arg) {
1068      if (used >= 1000) ++count;
1069    }
1070  then:
1071    malloc_inspect_all(count_chunks, NULL);
1072 
1073  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1074 */
1075 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1076  void* arg);
1077 
1078 #endif /* MALLOC_INSPECT_ALL */
1079 
1080 #if !NO_MALLINFO
1081 /*
1082  mallinfo()
1083  Returns (by copy) a struct containing various summary statistics:
1084 
1085  arena: current total non-mmapped bytes allocated from system
1086  ordblks: the number of free chunks
1087  smblks: always zero.
1088  hblks: current number of mmapped regions
1089  hblkhd: total bytes held in mmapped regions
1090  usmblks: the maximum total allocated space. This will be greater
1091  than current total if trimming has occurred.
1092  fsmblks: always zero
1093  uordblks: current total allocated space (normal or mmapped)
1094  fordblks: total free space
1095  keepcost: the maximum number of bytes that could ideally be released
1096  back to system via malloc_trim. ("ideally" means that
1097  it ignores page restrictions etc.)
1098 
1099  Because these fields are ints, but internal bookkeeping may
1100  be kept as longs, the reported values may wrap around zero and
1101  thus be inaccurate.
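 For example (a sketch; assumes <stdio.h> and the default MALLINFO_FIELD_TYPE
 of size_t):

   struct mallinfo mi = mallinfo();
   printf("allocated: %zu  free: %zu  trimmable: %zu\n",
          (size_t)mi.uordblks, (size_t)mi.fordblks, (size_t)mi.keepcost);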
1102 */
1103 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1104 #endif /* NO_MALLINFO */
1105 
1106 /*
1107  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1108 
1109  independent_calloc is similar to calloc, but instead of returning a
1110  single cleared space, it returns an array of pointers to n_elements
1111  independent elements that can hold contents of size elem_size, each
1112  of which starts out cleared, and can be independently freed,
1113  realloc'ed etc. The elements are guaranteed to be adjacently
1114  allocated (this is not guaranteed to occur with multiple callocs or
1115  mallocs), which may also improve cache locality in some
1116  applications.
1117 
1118  The "chunks" argument is optional (i.e., may be null, which is
1119  probably the most typical usage). If it is null, the returned array
1120  is itself dynamically allocated and should also be freed when it is
1121  no longer needed. Otherwise, the chunks array must be of at least
1122  n_elements in length. It is filled in with the pointers to the
1123  chunks.
1124 
1125  In either case, independent_calloc returns this pointer array, or
1126  null if the allocation failed. If n_elements is zero and "chunks"
1127  is null, it returns a chunk representing an array with zero elements
1128  (which should be freed if not wanted).
1129 
1130  Each element must be freed when it is no longer needed. This can be
1131  done all at once using bulk_free.
1132 
1133  independent_calloc simplifies and speeds up implementations of many
1134  kinds of pools. It may also be useful when constructing large data
1135  structures that initially have a fixed number of fixed-sized nodes,
1136  but the number is not known at compile time, and some of the nodes
1137  may later need to be freed. For example:
1138 
1139  struct Node { int item; struct Node* next; };
1140 
1141  struct Node* build_list() {
1142    struct Node** pool;
1143    int n = read_number_of_nodes_needed();
1144    if (n <= 0) return 0;
1145    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
1146    if (pool == 0) die();
1147    // organize into a linked list...
1148    struct Node* first = pool[0];
1149    for (int i = 0; i < n-1; ++i)
1150      pool[i]->next = pool[i+1];
1151    free(pool); // Can now free the array (or not, if it is needed later)
1152    return first;
1153  }
1154 */
1155 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1156 
1157 /*
1158  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1159 
1160  independent_comalloc allocates, all at once, a set of n_elements
1161  chunks with sizes indicated in the "sizes" array. It returns
1162  an array of pointers to these elements, each of which can be
1163  independently freed, realloc'ed etc. The elements are guaranteed to
1164  be adjacently allocated (this is not guaranteed to occur with
1165  multiple callocs or mallocs), which may also improve cache locality
1166  in some applications.
1167 
1168  The "chunks" argument is optional (i.e., may be null). If it is null
1169  the returned array is itself dynamically allocated and should also
1170  be freed when it is no longer needed. Otherwise, the chunks array
1171  must be of at least n_elements in length. It is filled in with the
1172  pointers to the chunks.
1173 
1174  In either case, independent_comalloc returns this pointer array, or
1175  null if the allocation failed. If n_elements is zero and chunks is
1176  null, it returns a chunk representing an array with zero elements
1177  (which should be freed if not wanted).
1178 
1179  Each element must be freed when it is no longer needed. This can be
1180  done all at once using bulk_free.
1181 
1182  independent_comalloc differs from independent_calloc in that each
1183  element may have a different size, and also that it does not
1184  automatically clear elements.
1185 
1186  independent_comalloc can be used to speed up allocation in cases
1187  where several structs or objects must always be allocated at the
1188  same time. For example:
1189 
1190  struct Head { ... };
1191  struct Foot { ... };
1192 
1193  void send_message(char* msg) {
1194    int msglen = strlen(msg);
1195    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1196    void* chunks[3];
1197    if (independent_comalloc(3, sizes, chunks) == 0)
1198      die();
1199    struct Head* head = (struct Head*)(chunks[0]);
1200    char* body = (char*)(chunks[1]);
1201    struct Foot* foot = (struct Foot*)(chunks[2]);
1202    // ...
1203  }
1204 
1205  In general though, independent_comalloc is worth using only for
1206  larger values of n_elements. For small values, you probably won't
1207  detect enough difference from series of malloc calls to bother.
1208 
1209  Overuse of independent_comalloc can increase overall memory usage,
1210  since it cannot reuse existing noncontiguous small chunks that
1211  might be available for some of the elements.
1212 */
1213 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1214 
1215 /*
1216  bulk_free(void* array[], size_t n_elements)
1217  Frees and clears (sets to null) each non-null pointer in the given
1218  array. This is likely to be faster than freeing them one-by-one.
1219  If footers are used, pointers that have been allocated in different
1220  mspaces are not freed or cleared, and the count of all such pointers
1221  is returned. For large arrays of pointers with poor locality, it
1222  may be worthwhile to sort this array before calling bulk_free.
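 For example, releasing a pool built with independent_calloc (a sketch;
 see the build_list example above for how such a pool is created, and note
 that n here is the same element count passed to independent_calloc):

   void** pool = independent_calloc(n, sizeof(struct Node), 0);
   // ... use pool[0] .. pool[n-1] ...
   size_t unfreed = bulk_free(pool, n);  // frees every element in one pass
   free(pool);                           // the pointer array itself is separate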
1223 */
1224 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
1225 
1226 /*
1227  pvalloc(size_t n);
1228  Equivalent to valloc(minimum-page-that-holds(n)), that is,
1229  round up n to nearest pagesize.
1230  */
1231 DLMALLOC_EXPORT void* dlpvalloc(size_t);
1232 
1233 /*
1234  malloc_trim(size_t pad);
1235 
1236  If possible, gives memory back to the system (via negative arguments
1237  to sbrk) if there is unused memory at the `high' end of the malloc
1238  pool or in unused MMAP segments. You can call this after freeing
1239  large blocks of memory to potentially reduce the system-level memory
1240  requirements of a program. However, it cannot guarantee to reduce
1241  memory. Under some allocation patterns, some large free blocks of
1242  memory will be locked between two used chunks, so they cannot be
1243  given back to the system.
1244 
1245  The `pad' argument to malloc_trim represents the amount of free
1246  trailing space to leave untrimmed. If this argument is zero, only
1247  the minimum amount of memory to maintain internal data structures
1248  will be left. Non-zero arguments can be supplied to maintain enough
1249  trailing space to service future expected allocations without having
1250  to re-obtain memory from the system.
1251 
1252  Malloc_trim returns 1 if it actually released any memory, else 0.
1253 */
1254 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
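/*
  Illustrative sketch (buffer and pad sizes are arbitrary): after releasing a
  large allocation, ask malloc_trim to hand memory back to the system while
  keeping roughly 1MB of trailing slack for expected future requests.

  void* big = malloc(32 * 1024 * 1024);
  // ... use big ...
  free(big);
  (void)malloc_trim(1024 * 1024);  // returns 1 if memory was actually released
*/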
1255 
1256 /*
1257  malloc_stats();
1258  Prints on stderr the amount of space obtained from the system (both
1259  via sbrk and mmap), the maximum amount (which may be more than
1260  current if malloc_trim and/or munmap got called), and the current
1261  number of bytes allocated via malloc (or realloc, etc) but not yet
1262  freed. Note that this is the number of bytes allocated, not the
1263  number requested. It will be larger than the number requested
1264  because of alignment and bookkeeping overhead. Because it includes
1265  alignment wastage as being in use, this figure may be greater than
1266  zero even when no user-level chunks are allocated.
1267 
1268  The reported current and maximum system memory can be inaccurate if
1269  a program makes other calls to system memory allocation functions
1270  (normally sbrk) outside of malloc.
1271 
1272  malloc_stats prints only the most commonly interesting statistics.
1273  More information can be obtained by calling mallinfo.
1274 */
1275 DLMALLOC_EXPORT void dlmalloc_stats(void);
1276 
1277 /*
1278  malloc_usable_size(void* p);
1279 
1280  Returns the number of bytes you can actually use in
1281  an allocated chunk, which may be more than you requested (although
1282  often not) due to alignment and minimum size constraints.
1283  You can use this many bytes without worrying about
1284  overwriting other allocated objects. This is not a particularly great
1285  programming practice. malloc_usable_size can be more useful in
1286  debugging and assertions, for example:
1287 
1288  p = malloc(n);
1289  assert(malloc_usable_size(p) >= 256);
1290 */
1291 size_t dlmalloc_usable_size(void*);
1292 
1293 #endif /* ONLY_MSPACES */
1294 
1295 #if MSPACES
1296 
1297 /*
1298  mspace is an opaque type representing an independent
1299  region of space that supports mspace_malloc, etc.
1300 */
1301 typedef void* mspace;
1302 
1303 /*
1304  create_mspace creates and returns a new independent space with the
1305  given initial capacity, or, if 0, the default granularity size. It
1306  returns null if there is no system memory available to create the
1307  space. If argument locked is non-zero, the space uses a separate
1308  lock to control access. The capacity of the space will grow
1309  dynamically as needed to service mspace_malloc requests. You can
1310  control the sizes of incremental increases of this space by
1311  compiling with a different DEFAULT_GRANULARITY or dynamically
1312  setting with mallopt(M_GRANULARITY, value).
1313 */
1314 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1315 
1316 /*
1317  destroy_mspace destroys the given space, and attempts to return all
1318  of its memory back to the system, returning the total number of
1319  bytes freed. After destruction, the results of access to all memory
1320  used by the space become undefined.
1321 */
1322 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
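/*
  A minimal lifecycle sketch (capacity and request sizes are arbitrary):
  create a locked mspace with the default granularity, allocate from it using
  mspace_malloc/mspace_free (declared just below), and tear it down. Because
  destroy_mspace returns all memory the space obtained from the system, the
  individual mspace_free call is optional here.

  mspace arena = create_mspace(0, 1);        // 0 => default granularity
  if (arena != 0) {
    void* p = mspace_malloc(arena, 128);
    // ... use p ...
    mspace_free(arena, p);                   // optional before destroy
    size_t released = destroy_mspace(arena); // bytes returned to the system
    (void)released;
  }
*/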
1323 
1324 /*
1325  create_mspace_with_base uses the memory supplied as the initial base
1326  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1327  space is used for bookkeeping, so the capacity must be at least this
1328  large. (Otherwise 0 is returned.) When this initial space is
1329  exhausted, additional memory will be obtained from the system.
1330  Destroying this space will deallocate all additionally allocated
1331  space (if possible) but not the initial base.
1332 */
1333 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
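/*
  Sketch of seeding an mspace from caller-supplied storage (the 64KB static
  buffer is just an example size; it comfortably exceeds the bookkeeping
  overhead of less than 128*sizeof(size_t) bytes):

  static char arena_buf[64 * 1024];
  mspace arena = create_mspace_with_base(arena_buf, sizeof(arena_buf), 0);
  // ... mspace_malloc(arena, ...) is served from arena_buf until exhausted ...
  destroy_mspace(arena);  // releases any extra system memory, not arena_buf
*/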
1334 
1335 /*
1336  mspace_track_large_chunks controls whether requests for large chunks
1337  are allocated in their own untracked mmapped regions, separate from
1338  others in this mspace. By default large chunks are not tracked,
1339  which reduces fragmentation. However, such chunks are not
1340  necessarily released to the system upon destroy_mspace. Enabling
1341  tracking by setting to true may increase fragmentation, but avoids
1342  leakage when relying on destroy_mspace to release all memory
1343  allocated using this space. The function returns the previous
1344  setting.
1345 */
1346 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1347 
1348 
1349 /*
1350  mspace_malloc behaves as malloc, but operates within
1351  the given space.
1352 */
1353 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1354 
1355 /*
1356  mspace_free behaves as free, but operates within
1357  the given space.
1358 
1359  If compiled with FOOTERS==1, mspace_free is not actually needed.
1360  free may be called instead of mspace_free because freed chunks from
1361  any space are handled by their originating spaces.
1362 */
1363 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1364 
1365 /*
1366  mspace_realloc behaves as realloc, but operates within
1367  the given space.
1368 
1369  If compiled with FOOTERS==1, mspace_realloc is not actually
1370  needed. realloc may be called instead of mspace_realloc because
1371  realloced chunks from any space are handled by their originating
1372  spaces.
1373 */
1374 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1375 
1376 /*
1377  mspace_calloc behaves as calloc, but operates within
1378  the given space.
1379 */
1380 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1381 
1382 /*
1383  mspace_memalign behaves as memalign, but operates within
1384  the given space.
1385 */
1386 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1387 
1388 /*
1389  mspace_independent_calloc behaves as independent_calloc, but
1390  operates within the given space.
1391 */
1392 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1393  size_t elem_size, void* chunks[]);
1394 
1395 /*
1396  mspace_independent_comalloc behaves as independent_comalloc, but
1397  operates within the given space.
1398 */
1399 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1400  size_t sizes[], void* chunks[]);
1401 
1402 /*
1403  mspace_footprint() returns the number of bytes obtained from the
1404  system for this space.
1405 */
1406 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1407 
1408 /*
1409  mspace_max_footprint() returns the peak number of bytes obtained from the
1410  system for this space.
1411 */
1412 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1413 
1414 
1415 #if !NO_MALLINFO
1416 /*
1417  mspace_mallinfo behaves as mallinfo, but reports properties of
1418  the given space.
1419 */
1420 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1421 #endif /* NO_MALLINFO */
1422 
1423 /*
1424  mspace_usable_size(const void* p) behaves the same as malloc_usable_size.
1425 */
1426 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1427 
1428 /*
1429  mspace_malloc_stats behaves as malloc_stats, but reports
1430  properties of the given space.
1431 */
1432 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1433 
1434 /*
1435  mspace_trim behaves as malloc_trim, but
1436  operates within the given space.
1437 */
1438 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1439 
1440 /*
1441  An alias for mallopt.
1442 */
1443 DLMALLOC_EXPORT int mspace_mallopt(int, int);
1444 
1445 #endif /* MSPACES */
1446 
1447 #ifdef __cplusplus
1448 } /* end of extern "C" */
1449 #endif /* __cplusplus */
1450 
1451 /*
1452  ========================================================================
1453  To make a fully customizable malloc.h header file, cut everything
1454  above this line, put into file malloc.h, edit to suit, and #include it
1455  on the next line, as well as in programs that use this malloc.
1456  ========================================================================
1457 */
1458 
1459 /* #include "malloc.h" */
1460 
1461 /*------------------------------ internal #includes ---------------------- */
1462 
1463 #ifdef _MSC_VER
1464 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1465 #endif /* _MSC_VER */
1466 #if !NO_MALLOC_STATS
1467 #include <stdio.h> /* for printing in malloc_stats */
1468 #endif /* NO_MALLOC_STATS */
1469 #ifndef LACKS_ERRNO_H
1470 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1471 #endif /* LACKS_ERRNO_H */
1472 #ifdef DEBUG
1473 #if ABORT_ON_ASSERT_FAILURE
1474 #undef assert
1475 #define assert(x) if(!(x)) ABORT
1476 #else /* ABORT_ON_ASSERT_FAILURE */
1477 #include <assert.h>
1478 #endif /* ABORT_ON_ASSERT_FAILURE */
1479 #else /* DEBUG */
1480 #ifndef assert
1481 #define assert(x)
1482 #endif
1483 #define DEBUG 0
1484 #endif /* DEBUG */
1485 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1486 #include <time.h> /* for magic initialization */
1487 #endif /* WIN32 */
1488 #ifndef LACKS_STDLIB_H
1489 #include <stdlib.h> /* for abort() */
1490 #endif /* LACKS_STDLIB_H */
1491 #ifndef LACKS_STRING_H
1492 #include <string.h> /* for memset etc */
1493 #endif /* LACKS_STRING_H */
1494 #if USE_BUILTIN_FFS
1495 #ifndef LACKS_STRINGS_H
1496 #include <strings.h> /* for ffs */
1497 #endif /* LACKS_STRINGS_H */
1498 #endif /* USE_BUILTIN_FFS */
1499 #if HAVE_MMAP
1500 #ifndef LACKS_SYS_MMAN_H
1501 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1502 #if (defined(linux) && !defined(__USE_GNU))
1503 #define __USE_GNU 1
1504 #include <sys/mman.h> /* for mmap */
1505 #undef __USE_GNU
1506 #else
1507 #include <sys/mman.h> /* for mmap */
1508 #endif /* linux */
1509 #endif /* LACKS_SYS_MMAN_H */
1510 #ifndef LACKS_FCNTL_H
1511 #include <fcntl.h>
1512 #endif /* LACKS_FCNTL_H */
1513 #endif /* HAVE_MMAP */
1514 #ifndef LACKS_UNISTD_H
1515 #include <unistd.h> /* for sbrk, sysconf */
1516 #else /* LACKS_UNISTD_H */
1517 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1518 extern void* sbrk(ptrdiff_t);
1519 #endif /* FreeBSD etc */
1520 #endif /* LACKS_UNISTD_H */
1521 
1522 /* Declarations for locking */
1523 #if USE_LOCKS
1524 #ifndef WIN32
1525 #if defined (__SVR4) && defined (__sun) /* solaris */
1526 #include <thread.h>
1527 #elif !defined(LACKS_SCHED_H)
1528 #include <sched.h>
1529 #endif /* solaris or LACKS_SCHED_H */
1530 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1531 #include <pthread.h>
1532 #endif /* USE_RECURSIVE_LOCKS ... */
1533 #elif defined(_MSC_VER)
1534 #ifndef _M_AMD64
1535 /* These are already defined on AMD64 builds */
1536 #ifdef __cplusplus
1537 extern "C" {
1538 #endif /* __cplusplus */
1539 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1540 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1541 #ifdef __cplusplus
1542 }
1543 #endif /* __cplusplus */
1544 #endif /* _M_AMD64 */
1545 #pragma intrinsic (_InterlockedCompareExchange)
1546 #pragma intrinsic (_InterlockedExchange)
1547 #define interlockedcompareexchange _InterlockedCompareExchange
1548 #define interlockedexchange _InterlockedExchange
1549 #elif defined(WIN32) && defined(__GNUC__)
1550 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1551 #define interlockedexchange __sync_lock_test_and_set
1552 #endif /* Win32 */
1553 #else /* USE_LOCKS */
1554 #endif /* USE_LOCKS */
1555 
1556 #ifndef LOCK_AT_FORK
1557 #define LOCK_AT_FORK 0
1558 #endif
1559 
1560 /* Declarations for bit scanning on win32 */
1561 #if defined(_MSC_VER) && _MSC_VER>=1300
1562 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1563 #ifdef __cplusplus
1564 extern "C" {
1565 #endif /* __cplusplus */
1566 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1567 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1568 #ifdef __cplusplus
1569 }
1570 #endif /* __cplusplus */
1571 
1572 #define BitScanForward _BitScanForward
1573 #define BitScanReverse _BitScanReverse
1574 #pragma intrinsic(_BitScanForward)
1575 #pragma intrinsic(_BitScanReverse)
1576 #endif /* BitScanForward */
1577 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1578 
1579 #ifndef WIN32
1580 #ifndef malloc_getpagesize
1581 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1582 # ifndef _SC_PAGE_SIZE
1583 # define _SC_PAGE_SIZE _SC_PAGESIZE
1584 # endif
1585 # endif
1586 # ifdef _SC_PAGE_SIZE
1587 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1588 # else
1589 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1590  extern size_t getpagesize();
1591 # define malloc_getpagesize getpagesize()
1592 # else
1593 # ifdef WIN32 /* use supplied emulation of getpagesize */
1594 # define malloc_getpagesize getpagesize()
1595 # else
1596 # ifndef LACKS_SYS_PARAM_H
1597 # include <sys/param.h>
1598 # endif
1599 # ifdef EXEC_PAGESIZE
1600 # define malloc_getpagesize EXEC_PAGESIZE
1601 # else
1602 # ifdef NBPG
1603 # ifndef CLSIZE
1604 # define malloc_getpagesize NBPG
1605 # else
1606 # define malloc_getpagesize (NBPG * CLSIZE)
1607 # endif
1608 # else
1609 # ifdef NBPC
1610 # define malloc_getpagesize NBPC
1611 # else
1612 # ifdef PAGESIZE
1613 # define malloc_getpagesize PAGESIZE
1614 # else /* just guess */
1615 # define malloc_getpagesize ((size_t)4096U)
1616 # endif
1617 # endif
1618 # endif
1619 # endif
1620 # endif
1621 # endif
1622 # endif
1623 #endif
1624 #endif
1625 
1626 /* ------------------- size_t and alignment properties -------------------- */
1627 
1628 /* The byte and bit size of a size_t */
1629 #define SIZE_T_SIZE (sizeof(size_t))
1630 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1631 
1632 /* Some constants coerced to size_t */
1633 /* Annoying but necessary to avoid errors on some platforms */
1634 #define SIZE_T_ZERO ((size_t)0)
1635 #define SIZE_T_ONE ((size_t)1)
1636 #define SIZE_T_TWO ((size_t)2)
1637 #define SIZE_T_FOUR ((size_t)4)
1638 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1639 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1640 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1641 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1642 
1643 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1644 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1645 
1646 /* True if address a has acceptable alignment */
1647 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1648 
1649 /* the number of bytes to offset an address to align it */
1650 #define align_offset(A)\
1651  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1652  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
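/* Worked example (assuming the default MALLOC_ALIGNMENT of 2*sizeof(void*),
   i.e. 16 on a 64-bit build): for A == 0x1008, (size_t)A & CHUNK_ALIGN_MASK
   is 8, so align_offset(A) is (16 - 8) & 15 == 8 and A + 8 == 0x1010 is
   aligned; for an already-aligned A the macro yields 0. */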
1653 
1654 /* -------------------------- MMAP preliminaries ------------------------- */
1655 
1656 /*
1657  If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1658  checks to fail so compiler optimizer can delete code rather than
1659  using so many "#if"s.
1660 */
1661 
1662 
1663 /* MORECORE and MMAP must return MFAIL on failure */
1664 #define MFAIL ((void*)(MAX_SIZE_T))
1665 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1666 
1667 #if HAVE_MMAP
1668 
1669 #ifndef WIN32
1670 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1671 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1672 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1673 #define MAP_ANONYMOUS MAP_ANON
1674 #endif /* MAP_ANON */
1675 #ifdef MAP_ANONYMOUS
1676 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1677 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1678 #else /* MAP_ANONYMOUS */
1679 /*
1680  Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1681  is unlikely to be needed, but is supplied just in case.
1682 */
1683 #define MMAP_FLAGS (MAP_PRIVATE)
1684 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1685 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1686  (dev_zero_fd = open("/dev/zero", O_RDWR), \
1687  mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1688  mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1689 #endif /* MAP_ANONYMOUS */
1690 
1691 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1692 
1693 #else /* WIN32 */
1694 
1695 /* Win32 MMAP via VirtualAlloc */
1696 static FORCEINLINE void* win32mmap(size_t size) {
1697  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1698  return (ptr != 0)? ptr: MFAIL;
1699 }
1700 
1701 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1702 static FORCEINLINE void* win32direct_mmap(size_t size) {
1703  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1704  PAGE_READWRITE);
1705  return (ptr != 0)? ptr: MFAIL;
1706 }
1707 
1708 /* This function supports releasing coalesced segments */
1709 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1710  MEMORY_BASIC_INFORMATION minfo;
1711  char* cptr = (char*)ptr;
1712  while (size) {
1713  if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1714  return -1;
1715  if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1716  minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1717  return -1;
1718  if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1719  return -1;
1720  cptr += minfo.RegionSize;
1721  size -= minfo.RegionSize;
1722  }
1723  return 0;
1724 }
1725 
1726 #define MMAP_DEFAULT(s) win32mmap(s)
1727 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1728 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1729 #endif /* WIN32 */
1730 #endif /* HAVE_MMAP */
1731 
1732 #if HAVE_MREMAP
1733 #ifndef WIN32
1734 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1735 #endif /* WIN32 */
1736 #endif /* HAVE_MREMAP */
1737 
1738 /**
1739  * Define CALL_MORECORE
1740  */
1741 #if HAVE_MORECORE
1742  #ifdef MORECORE
1743  #define CALL_MORECORE(S) MORECORE(S)
1744  #else /* MORECORE */
1745  #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1746  #endif /* MORECORE */
1747 #else /* HAVE_MORECORE */
1748  #define CALL_MORECORE(S) MFAIL
1749 #endif /* HAVE_MORECORE */
1750 
1751 /**
1752  * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1753  */
1754 #if HAVE_MMAP
1755  #define USE_MMAP_BIT (SIZE_T_ONE)
1756 
1757  #ifdef MMAP
1758  #define CALL_MMAP(s) MMAP(s)
1759  #else /* MMAP */
1760  #define CALL_MMAP(s) MMAP_DEFAULT(s)
1761  #endif /* MMAP */
1762  #ifdef MUNMAP
1763  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1764  #else /* MUNMAP */
1765  #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1766  #endif /* MUNMAP */
1767  #ifdef DIRECT_MMAP
1768  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1769  #else /* DIRECT_MMAP */
1770  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1771  #endif /* DIRECT_MMAP */
1772 #else /* HAVE_MMAP */
1773  #define USE_MMAP_BIT (SIZE_T_ZERO)
1774 
1775  #define MMAP(s) MFAIL
1776  #define MUNMAP(a, s) (-1)
1777  #define DIRECT_MMAP(s) MFAIL
1778  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1779  #define CALL_MMAP(s) MMAP(s)
1780  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1781 #endif /* HAVE_MMAP */
1782 
1783 /**
1784  * Define CALL_MREMAP
1785  */
1786 #if HAVE_MMAP && HAVE_MREMAP
1787  #ifdef MREMAP
1788  #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1789  #else /* MREMAP */
1790  #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1791  #endif /* MREMAP */
1792 #else /* HAVE_MMAP && HAVE_MREMAP */
1793  #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1794 #endif /* HAVE_MMAP && HAVE_MREMAP */
1795 
1796 /* mstate bit set if contiguous morecore disabled or failed */
1797 #define USE_NONCONTIGUOUS_BIT (4U)
1798 
1799 /* segment bit set in create_mspace_with_base */
1800 #define EXTERN_BIT (8U)
1801 
1802 
1803 /* --------------------------- Lock preliminaries ------------------------ */
1804 
1805 /*
1806  When locks are defined, there is one global lock, plus
1807  one per-mspace lock.
1808 
1809  The global lock ensures that mparams.magic and other unique
1810  mparams values are initialized only once. It also protects
1811  sequences of calls to MORECORE. In many cases sys_alloc requires
1812  two calls that should not be interleaved with calls by other
1813  threads. This does not protect against direct calls to MORECORE
1814  by other threads not using this lock, so there is still code to
1815  cope as best we can with interference.
1816 
1817  Per-mspace locks surround calls to malloc, free, etc.
1818  By default, locks are simple non-reentrant mutexes.
1819 
1820  Because lock-protected regions generally have bounded times, it is
1821  OK to use the supplied simple spinlocks. Spinlocks are likely to
1822  improve performance for lightly contended applications, but worsen
1823  performance under heavy contention.
1824 
1825  If USE_LOCKS is > 1, the definitions of lock routines here are
1826  bypassed, in which case you will need to define the type MLOCK_T,
1827  and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1828  and TRY_LOCK. You must also declare a
1829  static MLOCK_T malloc_global_mutex = { initialization values };.
1830 
1831 */
1832 
1833 #if !USE_LOCKS
1834 #define USE_LOCK_BIT (0U)
1835 #define INITIAL_LOCK(l) (0)
1836 #define DESTROY_LOCK(l) (0)
1837 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1838 #define RELEASE_MALLOC_GLOBAL_LOCK()
1839 
1840 #else
1841 #if USE_LOCKS > 1
1842 /* ----------------------- User-defined locks ------------------------ */
1843 /* Define your own lock implementation here */
1844 /* #define INITIAL_LOCK(lk) ... */
1845 /* #define DESTROY_LOCK(lk) ... */
1846 /* #define ACQUIRE_LOCK(lk) ... */
1847 /* #define RELEASE_LOCK(lk) ... */
1848 /* #define TRY_LOCK(lk) ... */
1849 /* static MLOCK_T malloc_global_mutex = ... */
1850 
1851 #elif USE_SPIN_LOCKS
1852 
1853 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1854 /* Note CAS_LOCK defined to return 0 on success */
1855 
1856 #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1857 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1858 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1859 
1860 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1861 /* Custom spin locks for older gcc on x86 */
1862 static FORCEINLINE int x86_cas_lock(int *sl) {
1863  int ret;
1864  int val = 1;
1865  int cmp = 0;
1866  __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1867  : "=a" (ret)
1868  : "r" (val), "m" (*(sl)), "0"(cmp)
1869  : "memory", "cc");
1870  return ret;
1871 }
1872 
1873 static FORCEINLINE void x86_clear_lock(int* sl) {
1874  assert(*sl != 0);
1875  int prev = 0;
1876  int ret;
1877  __asm__ __volatile__ ("lock; xchgl %0, %1"
1878  : "=r" (ret)
1879  : "m" (*(sl)), "0"(prev)
1880  : "memory");
1881 }
1882 
1883 #define CAS_LOCK(sl) x86_cas_lock(sl)
1884 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1885 
1886 #else /* Win32 MSC */
1887 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1888 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1889 
1890 #endif /* ... gcc spin locks ... */
1891 
1892 /* How to yield for a spin lock */
1893 #define SPINS_PER_YIELD 63
1894 #if defined(_MSC_VER)
1895 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1896 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1897 #elif defined (__SVR4) && defined (__sun) /* solaris */
1898 #define SPIN_LOCK_YIELD thr_yield();
1899 #elif !defined(LACKS_SCHED_H)
1900 #define SPIN_LOCK_YIELD sched_yield();
1901 #else
1902 #define SPIN_LOCK_YIELD
1903 #endif /* ... yield ... */
1904 
1905 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1906 /* Plain spin locks use single word (embedded in malloc_states) */
1907 static int spin_acquire_lock(int *sl) {
1908  int spins = 0;
1909  while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1910  if ((++spins & SPINS_PER_YIELD) == 0) {
1911  SPIN_LOCK_YIELD;
1912  }
1913  }
1914  return 0;
1915 }
1916 
1917 #define MLOCK_T int
1918 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1919 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1920 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1921 #define INITIAL_LOCK(sl) (*sl = 0)
1922 #define DESTROY_LOCK(sl) (0)
1923 static MLOCK_T malloc_global_mutex = 0;
1924 
1925 #else /* USE_RECURSIVE_LOCKS */
1926 /* types for lock owners */
1927 #ifdef WIN32
1928 #define THREAD_ID_T DWORD
1929 #define CURRENT_THREAD GetCurrentThreadId()
1930 #define EQ_OWNER(X,Y) ((X) == (Y))
1931 #else
1932 /*
1933  Note: the following assume that pthread_t is a type that can be
1934  initialized to (casted) zero. If this is not the case, you will need to
1935  somehow redefine these or not use spin locks.
1936 */
1937 #define THREAD_ID_T pthread_t
1938 #define CURRENT_THREAD pthread_self()
1939 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
1940 #endif
1941 
1942 struct malloc_recursive_lock {
1943  int sl;
1944  unsigned int c;
1945  THREAD_ID_T threadid;
1946 };
1947 
1948 #define MLOCK_T struct malloc_recursive_lock
1949 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1950 
1951 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1952  assert(lk->sl != 0);
1953  if (--lk->c == 0) {
1954  CLEAR_LOCK(&lk->sl);
1955  }
1956 }
1957 
1958 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1959  THREAD_ID_T mythreadid = CURRENT_THREAD;
1960  int spins = 0;
1961  for (;;) {
1962  if (*((volatile int *)(&lk->sl)) == 0) {
1963  if (!CAS_LOCK(&lk->sl)) {
1964  lk->threadid = mythreadid;
1965  lk->c = 1;
1966  return 0;
1967  }
1968  }
1969  else if (EQ_OWNER(lk->threadid, mythreadid)) {
1970  ++lk->c;
1971  return 0;
1972  }
1973  if ((++spins & SPINS_PER_YIELD) == 0) {
1974  SPIN_LOCK_YIELD;
1975  }
1976  }
1977 }
1978 
1979 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1980  THREAD_ID_T mythreadid = CURRENT_THREAD;
1981  if (*((volatile int *)(&lk->sl)) == 0) {
1982  if (!CAS_LOCK(&lk->sl)) {
1983  lk->threadid = mythreadid;
1984  lk->c = 1;
1985  return 1;
1986  }
1987  }
1988  else if (EQ_OWNER(lk->threadid, mythreadid)) {
1989  ++lk->c;
1990  return 1;
1991  }
1992  return 0;
1993 }
1994 
1995 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1996 #define TRY_LOCK(lk) recursive_try_lock(lk)
1997 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1998 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1999 #define DESTROY_LOCK(lk) (0)
2000 #endif /* USE_RECURSIVE_LOCKS */
2001 
2002 #elif defined(WIN32) /* Win32 critical sections */
2003 #define MLOCK_T CRITICAL_SECTION
2004 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
2005 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
2006 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
2007 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
2008 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
2009 #define NEED_GLOBAL_LOCK_INIT
2010 
2011 static MLOCK_T malloc_global_mutex;
2012 static volatile LONG malloc_global_mutex_status;
2013 
2014 /* Use spin loop to initialize global lock */
2015 static void init_malloc_global_mutex() {
2016  for (;;) {
2017  long stat = malloc_global_mutex_status;
2018  if (stat > 0)
2019  return;
2020  /* transition to < 0 while initializing, then to > 0 */
2021  if (stat == 0 &&
2022  interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
2023  InitializeCriticalSection(&malloc_global_mutex);
2024  interlockedexchange(&malloc_global_mutex_status, (LONG)1);
2025  return;
2026  }
2027  SleepEx(0, FALSE);
2028  }
2029 }
2030 
2031 #else /* pthreads-based locks */
2032 #define MLOCK_T pthread_mutex_t
2033 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2034 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2035 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2036 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2037 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2038 
2039 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2040 /* Cope with old-style linux recursive lock initialization by adding */
2041 /* skipped internal declaration from pthread.h */
2042 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2043  int __kind));
2044 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2045 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2046 #endif /* USE_RECURSIVE_LOCKS ... */
2047 
2048 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2049 
2050 static int pthread_init_lock (MLOCK_T *lk) {
2051  pthread_mutexattr_t attr;
2052  if (pthread_mutexattr_init(&attr)) return 1;
2053 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2054  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2055 #endif
2056  if (pthread_mutex_init(lk, &attr)) return 1;
2057  if (pthread_mutexattr_destroy(&attr)) return 1;
2058  return 0;
2059 }
2060 
2061 #endif /* ... lock types ... */
2062 
2063 /* Common code for all lock types */
2064 #define USE_LOCK_BIT (2U)
2065 
2066 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2067 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
2068 #endif
2069 
2070 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2071 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
2072 #endif
2073 
2074 #endif /* USE_LOCKS */
2075 
2076 /* ----------------------- Chunk representations ------------------------ */
2077 
2078 /*
2079  (The following includes lightly edited explanations by Colin Plumb.)
2080 
2081  The malloc_chunk declaration below is misleading (but accurate and
2082  necessary). It declares a "view" into memory allowing access to
2083  necessary fields at known offsets from a given base.
2084 
2085  Chunks of memory are maintained using a `boundary tag' method as
2086  originally described by Knuth. (See the paper by Paul Wilson
2087  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2088  techniques.) Sizes of free chunks are stored both in the front of
2089  each chunk and at the end. This makes consolidating fragmented
2090  chunks into bigger chunks fast. The head fields also hold bits
2091  representing whether chunks are free or in use.
2092 
2093  Here are some pictures to make it clearer. They are "exploded" to
2094  show that the state of a chunk can be thought of as extending from
2095  the high 31 bits of the head field of its header through the
2096  prev_foot and PINUSE_BIT bit of the following chunk header.
2097 
2098  A chunk that's in use looks like:
2099 
2100  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101  | Size of previous chunk (if P = 0) |
2102  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2103  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2104  | Size of this chunk 1| +-+
2105  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2106  | |
2107  +- -+
2108  | |
2109  +- -+
2110  | :
2111  +- size - sizeof(size_t) available payload bytes -+
2112  : |
2113  chunk-> +- -+
2114  | |
2115  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2117  | Size of next chunk (may or may not be in use) | +-+
2118  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2119 
2120  And if it's free, it looks like this:
2121 
2122  chunk-> +- -+
2123  | User payload (must be in use, or we would have merged!) |
2124  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2125  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2126  | Size of this chunk 0| +-+
2127  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2128  | Next pointer |
2129  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2130  | Prev pointer |
2131  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2132  | :
2133  +- size - sizeof(struct chunk) unused bytes -+
2134  : |
2135  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2136  | Size of this chunk |
2137  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2138  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2139  | Size of next chunk (must be in use, or we would have merged)| +-+
2140  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2141  | :
2142  +- User payload -+
2143  : |
2144  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2145  |0|
2146  +-+
2147  Note that since we always merge adjacent free chunks, the chunks
2148  adjacent to a free chunk must be in use.
2149 
2150  Given a pointer to a chunk (which can be derived trivially from the
2151  payload pointer) we can, in O(1) time, find out whether the adjacent
2152  chunks are free, and if so, unlink them from the lists that they
2153  are on and merge them with the current chunk.
2154 
2155  Chunks always begin on even word boundaries, so the mem portion
2156  (which is returned to the user) is also on an even word boundary, and
2157  thus at least double-word aligned.
2158 
2159  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2160  chunk size (which is always a multiple of two words), is an in-use
2161  bit for the *previous* chunk. If that bit is *clear*, then the
2162  word before the current chunk size contains the previous chunk
2163  size, and can be used to find the front of the previous chunk.
2164  The very first chunk allocated always has this bit set, preventing
2165  access to non-existent (or non-owned) memory. If pinuse is set for
2166  any given chunk, then you CANNOT determine the size of the
2167  previous chunk, and might even get a memory addressing fault when
2168  trying to do so.
2169 
2170  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2171  the chunk size redundantly records whether the current chunk is
2172  inuse (unless the chunk is mmapped). This redundancy enables usage
2173  checks within free and realloc, and reduces indirection when freeing
2174  and consolidating chunks.
2175 
2176  Each freshly allocated chunk must have both cinuse and pinuse set.
2177  That is, each allocated chunk borders either a previously allocated
2178  and still in-use chunk, or the base of its memory arena. This is
2179  ensured by making all allocations from the `lowest' part of any
2180  found chunk. Further, no free chunk physically borders another one,
2181  so each free chunk is known to be preceded and followed by either
2182  inuse chunks or the ends of memory.
2183 
2184  Note that the `foot' of the current chunk is actually represented
2185  as the prev_foot of the NEXT chunk. This makes it easier to
2186  deal with alignments etc but can be very confusing when trying
2187  to extend or adapt this code.
2188 
2189  The exceptions to all this are
2190 
2191  1. The special chunk `top' is the top-most available chunk (i.e.,
2192  the one bordering the end of available memory). It is treated
2193  specially. Top is never included in any bin, is used only if
2194  no other chunk is available, and is released back to the
2195  system if it is very large (see M_TRIM_THRESHOLD). In effect,
2196  the top chunk is treated as larger (and thus less well
2197  fitting) than any other available chunk. The top chunk
2198  doesn't update its trailing size field since there is no next
2199  contiguous chunk that would have to index off it. However,
2200  space is still allocated for it (TOP_FOOT_SIZE) to enable
2201  separation or merging when space is extended.
2202 
2203  2. Chunks allocated via mmap have both cinuse and pinuse bits
2204  cleared in their head fields. Because they are allocated
2205  one-by-one, each must carry its own prev_foot field, which is
2206  also used to hold the offset this chunk has within its mmapped
2207  region, which is needed to preserve alignment. Each mmapped
2208  chunk is trailed by the first two fields of a fake next-chunk
2209  for sake of usage checks.
2210 
2211 */
2212 
2213 struct malloc_chunk {
2214  size_t prev_foot; /* Size of previous chunk (if free). */
2215  size_t head; /* Size and inuse bits. */
2216  struct malloc_chunk* fd; /* double links -- used only if free. */
2217  struct malloc_chunk* bk;
2218 };
2219 
2220 typedef struct malloc_chunk mchunk;
2221 typedef struct malloc_chunk* mchunkptr;
2222 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2223 typedef unsigned int bindex_t; /* Described below */
2224 typedef unsigned int binmap_t; /* Described below */
2225 typedef unsigned int flag_t; /* The type of various bit flag sets */
2226 
2227 /* ------------------- Chunks sizes and alignments ----------------------- */
2228 
2229 #define MCHUNK_SIZE (sizeof(mchunk))
2230 
2231 #if FOOTERS
2232 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2233 #else /* FOOTERS */
2234 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2235 #endif /* FOOTERS */
2236 
2237 /* MMapped chunks need a second word of overhead ... */
2238 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2239 /* ... and additional padding for fake next-chunk at foot */
2240 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2241 
2242 /* The smallest size we can malloc is an aligned minimal chunk */
2243 #define MIN_CHUNK_SIZE\
2244  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2245 
2246 /* conversion from malloc headers to user pointers, and back */
2247 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2248 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2249 /* chunk associated with aligned address A */
2250 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
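/* Worked example (assuming 8-byte size_t, so TWO_SIZE_T_SIZES == 16):
   chunk2mem steps past the prev_foot and head words, so a chunk at address p
   hands out p + 16 to the user, and mem2chunk(chunk2mem(p)) == p. */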
2251 
2252 /* Bounds on request (not chunk) sizes. */
2253 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2254 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2255 
2256 /* pad request bytes into a usable size */
2257 #define pad_request(req) \
2258  (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2259 
2260 /* pad request, checking for minimum (but not maximum) */
2261 #define request2size(req) \
2262  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
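/* Worked example (assuming a 64-bit build with 16-byte MALLOC_ALIGNMENT and
   FOOTERS disabled, so CHUNK_OVERHEAD == 8 and MIN_CHUNK_SIZE == 32):
   pad_request(13) == (13 + 8 + 15) & ~15 == 32, pad_request(25) == 48, and
   request2size(1) bumps the tiny request up to MIN_CHUNK_SIZE (32). */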
2263 
2264 
2265 /* ------------------ Operations on head and foot fields ----------------- */
2266 
2267 /*
2268  The head field of a chunk is or'ed with PINUSE_BIT when previous
2269  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2270  use, unless mmapped, in which case both bits are cleared.
2271 
2272  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2273 */
2274 
2275 #define PINUSE_BIT (SIZE_T_ONE)
2276 #define CINUSE_BIT (SIZE_T_TWO)
2277 #define FLAG4_BIT (SIZE_T_FOUR)
2278 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2279 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2280 
2281 /* Head value for fenceposts */
2282 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2283 
2284 /* extraction of fields from head words */
2285 #define cinuse(p) ((p)->head & CINUSE_BIT)
2286 #define pinuse(p) ((p)->head & PINUSE_BIT)
2287 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2288 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2289 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2290 
2291 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
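/* Worked example: a head value of 0x63 (0x60 | PINUSE_BIT | CINUSE_BIT)
   describes an in-use chunk of 0x60 == 96 bytes whose predecessor is also in
   use; chunksize() masks off the low three flag bits to recover 96, and
   is_mmapped() is false because the two inuse bits are not both clear. */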
2292 
2293 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2294 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2295 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
2296 
2297 /* Treat space at ptr +/- offset as a chunk */
2298 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2299 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2300 
2301 /* Ptr to next or previous physical malloc_chunk. */
2302 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2303 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2304 
2305 /* extract next chunk's pinuse bit */
2306 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2307 
2308 /* Get/set size at footer */
2309 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2310 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2311 
2312 /* Set size, pinuse bit, and foot */
2313 #define set_size_and_pinuse_of_free_chunk(p, s)\
2314  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2315 
2316 /* Set size, pinuse bit, foot, and clear next pinuse */
2317 #define set_free_with_pinuse(p, s, n)\
2318  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2319 
2320 /* Get the internal overhead associated with chunk p */
2321 #define overhead_for(p)\
2322  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2323 
2324 /* Return true if malloced space is not necessarily cleared */
2325 #if MMAP_CLEARS
2326 #define calloc_must_clear(p) (!is_mmapped(p))
2327 #else /* MMAP_CLEARS */
2328 #define calloc_must_clear(p) (1)
2329 #endif /* MMAP_CLEARS */
2330 
2331 /* ---------------------- Overlaid data structures ----------------------- */
2332 
2333 /*
2334  When chunks are not in use, they are treated as nodes of either
2335  lists or trees.
2336 
2337  "Small" chunks are stored in circular doubly-linked lists, and look
2338  like this:
2339 
2340  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341  | Size of previous chunk |
2342  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2343  `head:' | Size of chunk, in bytes |P|
2344  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2345  | Forward pointer to next chunk in list |
2346  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2347  | Back pointer to previous chunk in list |
2348  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2349  | Unused space (may be 0 bytes long) .
2350  . .
2351  . |
2352 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2353  `foot:' | Size of chunk, in bytes |
2354  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2355 
2356  Larger chunks are kept in a form of bitwise digital trees (aka
2357  tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2358  free chunks greater than 256 bytes, their size doesn't impose any
2359  constraints on user chunk sizes. Each node looks like:
2360 
2361  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2362  | Size of previous chunk |
2363  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2364  `head:' | Size of chunk, in bytes |P|
2365  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2366  | Forward pointer to next chunk of same size |
2367  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2368  | Back pointer to previous chunk of same size |
2369  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2370  | Pointer to left child (child[0]) |
2371  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2372  | Pointer to right child (child[1]) |
2373  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2374  | Pointer to parent |
2375  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2376  | bin index of this chunk |
2377  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2378  | Unused space .
2379  . |
2380 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2381  `foot:' | Size of chunk, in bytes |
2382  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2383 
2384  Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2385  of the same size are arranged in a circularly-linked list, with only
2386  the oldest chunk (the next to be used, in our FIFO ordering)
2387  actually in the tree. (Tree members are distinguished by a non-null
2388  parent pointer.) If a chunk with the same size as an existing node
2389  is inserted, it is linked off the existing node using pointers that
2390  work in the same way as fd/bk pointers of small chunks.
2391 
2392  Each tree contains a power of 2 sized range of chunk sizes (the
2393  smallest is 0x100 <= x < 0x180), which is divided in half at each
2394  tree level, with the chunks in the smaller half of the range (0x100
2395  <= x < 0x140 for the top node) in the left subtree and the larger
2396  half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2397  done by inspecting individual bits.
2398 
2399  Using these rules, each node's left subtree contains all smaller
2400  sizes than its right subtree. However, the node at the root of each
2401  subtree has no particular ordering relationship to either. (The
2402  dividing line between the subtree sizes is based on trie relation.)
2403  If we remove the last chunk of a given size from the interior of the
2404  tree, we need to replace it with a leaf node. The tree ordering
2405  rules permit a node to be replaced by any leaf below it.
2406 
2407  The smallest chunk in a tree (a common operation in a best-fit
2408  allocator) can be found by walking a path to the leftmost leaf in
2409  the tree. Unlike a usual binary tree, where we follow left child
2410  pointers until we reach a null, here we follow the right child
2411  pointer any time the left one is null, until we reach a leaf with
2412  both child pointers null. The smallest chunk in the tree will be
2413  somewhere along that path.
2414 
2415  The worst case number of steps to add, find, or remove a node is
2416  bounded by the number of bits differentiating chunks within
2417  bins. Under current bin calculations, this ranges from 6 up to 21
2418  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2419  is of course much better.
2420 */
2421 
2422 struct malloc_tree_chunk {
2423  /* The first four fields must be compatible with malloc_chunk */
2424  size_t prev_foot;
2425  size_t head;
2426  struct malloc_tree_chunk* fd;
2427  struct malloc_tree_chunk* bk;
2428 
2429  struct malloc_tree_chunk* child[2];
2430  struct malloc_tree_chunk* parent;
2431  bindex_t index;
2432 };
2433 
2434 typedef struct malloc_tree_chunk tchunk;
2435 typedef struct malloc_tree_chunk* tchunkptr;
2436 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2437 
2438 /* A little helper macro for trees */
2439 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
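/*
  Illustrative sketch, not code used by this malloc: the traversal described
  above, following left children and falling back to the right child whenever
  the left is null, while tracking the smallest chunk seen along the path.

  static tchunkptr smallest_in_tree(tchunkptr t) {
    tchunkptr least = t;
    while ((t = leftmost_child(t)) != 0) {
      if (chunksize(t) < chunksize(least))
        least = t;
    }
    return least;
  }
*/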
2440 
2441 /* ----------------------------- Segments -------------------------------- */
2442 
2443 /*
2444  Each malloc space may include non-contiguous segments, held in a
2445  list headed by an embedded malloc_segment record representing the
2446  top-most space. Segments also include flags holding properties of
2447  the space. Large chunks that are directly allocated by mmap are not
2448  included in this list. They are instead independently created and
2449  destroyed without otherwise keeping track of them.
2450 
2451  Segment management mainly comes into play for spaces allocated by
2452  MMAP. Any call to MMAP might or might not return memory that is
2453  adjacent to an existing segment. MORECORE normally contiguously
2454  extends the current space, so this space is almost always adjacent,
2455  which is simpler and faster to deal with. (This is why MORECORE is
2456  used preferentially to MMAP when both are available -- see
2457  sys_alloc.) When allocating using MMAP, we don't use any of the
2458  hinting mechanisms (inconsistently) supported in various
2459  implementations of unix mmap, or distinguish reserving from
2460  committing memory. Instead, we just ask for space, and exploit
2461  contiguity when we get it. It is probably possible to do
2462  better than this on some systems, but no general scheme seems
2463  to be significantly better.
2464 
2465  Management entails a simpler variant of the consolidation scheme
2466  used for chunks to reduce fragmentation -- new adjacent memory is
2467  normally prepended or appended to an existing segment. However,
2468  there are limitations compared to chunk consolidation that mostly
2469  reflect the fact that segment processing is relatively infrequent
2470  (occurring only when getting memory from system) and that we
2471  don't expect to have huge numbers of segments:
2472 
2473  * Segments are not indexed, so traversal requires linear scans. (It
2474  would be possible to index these, but is not worth the extra
2475  overhead and complexity for most programs on most platforms.)
2476  * New segments are only appended to old ones when holding top-most
2477  memory; if they cannot be prepended to others, they are held in
2478  different segments.
2479 
2480  Except for the top-most segment of an mstate, each segment record
2481  is kept at the tail of its segment. Segments are added by pushing
2482  segment records onto the list headed by &mstate.seg for the
2483  containing mstate.
2484 
2485  Segment flags control allocation/merge/deallocation policies:
2486  * If EXTERN_BIT set, then we did not allocate this segment,
2487  and so should not try to deallocate or merge with others.
2488  (This currently holds only for the initial segment passed
2489  into create_mspace_with_base.)
2490  * If USE_MMAP_BIT set, the segment may be merged with
2491  other surrounding mmapped segments and trimmed/de-allocated
2492  using munmap.
2493  * If neither bit is set, then the segment was obtained using
2494  MORECORE so can be merged with surrounding MORECORE'd segments
2495  and deallocated/trimmed using MORECORE with negative arguments.
2496 */
2497 
2498 struct malloc_segment {
2499  char* base; /* base address */
2500  size_t size; /* allocated size */
2501  struct malloc_segment* next; /* ptr to next segment */
2502  flag_t sflags; /* mmap and extern flag */
2503 };
2504 
2505 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2506 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2507 
2508 typedef struct malloc_segment msegment;
2509 typedef struct malloc_segment* msegmentptr;
2510 
2511 /* ---------------------------- malloc_state ----------------------------- */
2512 
2513 /*
2514  A malloc_state holds all of the bookkeeping for a space.
2515  The main fields are:
2516 
2517  Top
2518  The topmost chunk of the currently active segment. Its size is
2519  cached in topsize. The actual size of topmost space is
2520  topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2521  fenceposts and segment records if necessary when getting more
2522  space from the system. The size at which to autotrim top is
2523  cached from mparams in trim_check, except that it is disabled if
2524  an autotrim fails.
2525 
2526  Designated victim (dv)
2527  This is the preferred chunk for servicing small requests that
2528  don't have exact fits. It is normally the chunk split off most
2529  recently to service another small request. Its size is cached in
2530  dvsize. The link fields of this chunk are not maintained since it
2531  is not kept in a bin.
2532 
2533  SmallBins
2534  An array of bin headers for free chunks. These bins hold chunks
2535  with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2536  chunks of all the same size, spaced 8 bytes apart. To simplify
2537  use in double-linked lists, each bin header acts as a malloc_chunk
2538  pointing to the real first node, if it exists (else pointing to
2539  itself). This avoids special-casing for headers. But to avoid
2540  waste, we allocate only the fd/bk pointers of bins, and then use
2541  repositioning tricks to treat these as the fields of a chunk.
2542 
2543  TreeBins
2544  Treebins are pointers to the roots of trees holding a range of
2545  sizes. There are 2 equally spaced treebins for each power of two
2546  from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2547  larger.
2548 
2549  Bin maps
2550  There is one bit map for small bins ("smallmap") and one for
2551  treebins ("treemap). Each bin sets its bit when non-empty, and
2552  clears the bit when empty. Bit operations are then used to avoid
2553  bin-by-bin searching -- nearly all "search" is done without ever
2554  looking at bins that won't be selected. The bit maps
2555  conservatively use 32 bits per map word, even on a 64-bit system.
2556  For a good description of some of the bit-based techniques used
2557  here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2558  supplement at http://hackersdelight.org/). Many of these are
2559  intended to reduce the branchiness of paths through malloc etc, as
2560  well as to reduce the number of memory locations read or written.
2561 
2562  Segments
2563  A list of segments headed by an embedded malloc_segment record
2564  representing the initial space.
2565 
2566  Address check support
2567  The least_addr field is the least address ever obtained from
2568  MORECORE or MMAP. Attempted frees and reallocs of any address less
2569  than this are trapped (unless INSECURE is defined).
2570 
2571  Magic tag
2572  A cross-check field that should always hold same value as mparams.magic.
2573 
2574  Max allowed footprint
2575  The maximum allowed bytes to allocate from system (zero means no limit)
2576 
2577  Flags
2578  Bits recording whether to use MMAP, locks, or contiguous MORECORE
2579 
2580  Statistics
2581  Each space keeps track of current and maximum system memory
2582  obtained via MORECORE or MMAP.
2583 
2584  Trim support
2585  Fields holding the amount of unused topmost memory that should trigger
2586  trimming, and a counter to force periodic scanning to release unused
2587  non-topmost segments.
2588 
2589  Locking
2590  If USE_LOCKS is defined, the "mutex" lock is acquired and released
2591  around every public call using this mspace.
2592 
2593  Extension support
2594  A void* pointer and a size_t field that can be used to help implement
2595  extensions to this malloc.
2596 */
2597 
2598 /* Bin types, widths and sizes */
2599 #define NSMALLBINS (32U)
2600 #define NTREEBINS (32U)
2601 #define SMALLBIN_SHIFT (3U)
2602 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2603 #define TREEBIN_SHIFT (8U)
2604 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2605 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2606 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2607 
2608 struct malloc_state {
2609  binmap_t smallmap;
2610  binmap_t treemap;
2611  size_t dvsize;
2612  size_t topsize;
2613  char* least_addr;
2614  mchunkptr dv;
2615  mchunkptr top;
2616  size_t trim_check;
2617  size_t release_checks;
2618  size_t magic;
2619  mchunkptr smallbins[(NSMALLBINS+1)*2];
2620  tbinptr treebins[NTREEBINS];
2621  size_t footprint;
2622  size_t max_footprint;
2623  size_t footprint_limit; /* zero means no limit */
2624  flag_t mflags;
2625 #if USE_LOCKS
2626  MLOCK_T mutex; /* locate lock among fields that rarely change */
2627 #endif /* USE_LOCKS */
2628  msegment seg;
2629  void* extp; /* Unused but available for extensions */
2630  size_t exts;
2631 };
2632 
2633 typedef struct malloc_state* mstate;
2634 
2635 /* ------------- Global malloc_state and malloc_params ------------------- */
2636 
2637 /*
2638  malloc_params holds global properties, including those that can be
2639  dynamically set using mallopt. There is a single instance, mparams,
2640  initialized in init_mparams. Note that the non-zeroness of "magic"
2641  also serves as an initialization flag.
2642 */
2643 
2644 struct malloc_params {
2645  size_t magic;
2646  size_t page_size;
2647  size_t granularity;
2648  size_t mmap_threshold;
2649  size_t trim_threshold;
2650  flag_t default_mflags;
2651 };
2652 
2653 static struct malloc_params mparams;
2654 
2655 /* Ensure mparams initialized */
2656 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2657 
2658 #if !ONLY_MSPACES
2659 
2660 /* The global malloc_state used for all non-"mspace" calls */
2661 static struct malloc_state _gm_;
2662 #define gm (&_gm_)
2663 #define is_global(M) ((M) == &_gm_)
2664 
2665 #endif /* !ONLY_MSPACES */
2666 
2667 #define is_initialized(M) ((M)->top != 0)
2668 
2669 /* -------------------------- system alloc setup ------------------------- */
2670 
2671 /* Operations on mflags */
2672 
2673 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2674 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2675 #if USE_LOCKS
2676 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2677 #else
2678 #define disable_lock(M)
2679 #endif
2680 
2681 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2682 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2683 #if HAVE_MMAP
2684 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2685 #else
2686 #define disable_mmap(M)
2687 #endif
2688 
2689 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2690 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2691 
2692 #define set_lock(M,L)\
2693  ((M)->mflags = (L)?\
2694  ((M)->mflags | USE_LOCK_BIT) :\
2695  ((M)->mflags & ~USE_LOCK_BIT))
2696 
2697 /* page-align a size */
2698 #define page_align(S)\
2699  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2700 
2701 /* granularity-align a size */
2702 #define granularity_align(S)\
2703  (((S) + (mparams.granularity - SIZE_T_ONE))\
2704  & ~(mparams.granularity - SIZE_T_ONE))
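/*
  Worked example (illustrative, assuming a 4096-byte page size):
  page_align(5000) == 8192 and page_align(4096) == 4096;
  granularity_align applies the same rounding against mparams.granularity.
*/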
2705 
2706 
2707 /* For mmap, use granularity alignment on windows, else page-align */
2708 #ifdef WIN32
2709 #define mmap_align(S) granularity_align(S)
2710 #else
2711 #define mmap_align(S) page_align(S)
2712 #endif
2713 
2714 /* For sys_alloc, enough padding to ensure the request can be malloc'd on success */
2715 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2716 
2717 #define is_page_aligned(S)\
2718  (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2719 #define is_granularity_aligned(S)\
2720  (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2721 
2722 /* True if segment S holds address A */
2723 #define segment_holds(S, A)\
2724  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2725 
2726 /* Return segment holding given address */
2727 static msegmentptr segment_holding(mstate m, char* addr) {
2728  msegmentptr sp = &m->seg;
2729  for (;;) {
2730  if (addr >= sp->base && addr < sp->base + sp->size)
2731  return sp;
2732  if ((sp = sp->next) == 0)
2733  return 0;
2734  }
2735 }
2736 
2737 /* Return true if segment contains a segment link */
2738 static int has_segment_link(mstate m, msegmentptr ss) {
2739  msegmentptr sp = &m->seg;
2740  for (;;) {
2741  if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2742  return 1;
2743  if ((sp = sp->next) == 0)
2744  return 0;
2745  }
2746 }
2747 
2748 #ifndef MORECORE_CANNOT_TRIM
2749 #define should_trim(M,s) ((s) > (M)->trim_check)
2750 #else /* MORECORE_CANNOT_TRIM */
2751 #define should_trim(M,s) (0)
2752 #endif /* MORECORE_CANNOT_TRIM */
2753 
2754 /*
2755  TOP_FOOT_SIZE is padding at the end of a segment, including space
2756  that may be needed to place segment records and fenceposts when new
2757  noncontiguous segments are added.
2758 */
2759 #define TOP_FOOT_SIZE\
2760  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2761 
2762 
2763 /* ------------------------------- Hooks -------------------------------- */
2764 
2765 /*
2766  PREACTION should be defined to return 0 on success, and nonzero on
2767  failure. If you are not using locking, you can redefine these to do
2768  anything you like.
2769 */
2770 
2771 #if USE_LOCKS
2772 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2773 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2774 #else /* USE_LOCKS */
2775 
2776 #ifndef PREACTION
2777 #define PREACTION(M) (0)
2778 #endif /* PREACTION */
2779 
2780 #ifndef POSTACTION
2781 #define POSTACTION(M)
2782 #endif /* POSTACTION */
2783 
2784 #endif /* USE_LOCKS */
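/*
  Typical usage of these hooks, as seen in the statistics and allocation
  routines later in this file (a sketch, not a new entry point):

    if (!PREACTION(m)) {   // acquire the mspace lock when locking is enabled
      ... operate on m ...
      POSTACTION(m);       // release the lock
    }

  A nonzero return from PREACTION means the lock could not be acquired and
  the caller backs out without touching the mspace.
*/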
2785 
2786 /*
2787  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2788  USAGE_ERROR_ACTION is triggered on detected bad frees and
2789  reallocs. The argument p is an address that might have triggered the
2790  fault. It is ignored by the two predefined actions, but might be
2791  useful in custom actions that try to help diagnose errors.
2792 */
2793 
2794 #if PROCEED_ON_ERROR
2795 
2796 /* A count of the number of corruption errors causing resets */
2797 int malloc_corruption_error_count;
2798 
2799 /* default corruption action */
2800 static void reset_on_error(mstate m);
2801 
2802 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2803 #define USAGE_ERROR_ACTION(m, p)
2804 
2805 #else /* PROCEED_ON_ERROR */
2806 
2807 #ifndef CORRUPTION_ERROR_ACTION
2808 #define CORRUPTION_ERROR_ACTION(m) ABORT
2809 #endif /* CORRUPTION_ERROR_ACTION */
2810 
2811 #ifndef USAGE_ERROR_ACTION
2812 #define USAGE_ERROR_ACTION(m,p) ABORT
2813 #endif /* USAGE_ERROR_ACTION */
2814 
2815 #endif /* PROCEED_ON_ERROR */
2816 
2817 
2818 /* -------------------------- Debugging setup ---------------------------- */
2819 
2820 #if ! DEBUG
2821 
2822 #define check_free_chunk(M,P)
2823 #define check_inuse_chunk(M,P)
2824 #define check_malloced_chunk(M,P,N)
2825 #define check_mmapped_chunk(M,P)
2826 #define check_malloc_state(M)
2827 #define check_top_chunk(M,P)
2828 
2829 #else /* DEBUG */
2830 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2831 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2832 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2833 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2834 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2835 #define check_malloc_state(M) do_check_malloc_state(M)
2836 
2837 static void do_check_any_chunk(mstate m, mchunkptr p);
2838 static void do_check_top_chunk(mstate m, mchunkptr p);
2839 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2840 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2841 static void do_check_free_chunk(mstate m, mchunkptr p);
2842 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2843 static void do_check_tree(mstate m, tchunkptr t);
2844 static void do_check_treebin(mstate m, bindex_t i);
2845 static void do_check_smallbin(mstate m, bindex_t i);
2846 static void do_check_malloc_state(mstate m);
2847 static int bin_find(mstate m, mchunkptr x);
2848 static size_t traverse_and_check(mstate m);
2849 #endif /* DEBUG */
2850 
2851 /* ---------------------------- Indexing Bins ---------------------------- */
2852 
2853 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2854 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2855 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2856 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2857 
2858 /* addressing by index. See above about smallbin repositioning */
2859 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2860 #define treebin_at(M,i) (&((M)->treebins[i]))
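/*
  Note on the smallbin_at() cast (an explanatory sketch): each smallbin
  needs only its fd/bk pointers, so smallbins[] stores two pointers per bin
  and smallbin_at(M,i) returns an address positioned so that, viewed as an
  mchunk, bin->fd and bin->bk fall on slots (i<<1)+2 and (i<<1)+3.  The
  unused prev_foot/head fields of this fake header alias the previous bin's
  slots, which is why the array is declared with (NSMALLBINS+1)*2 entries.
*/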
2861 
2862 /* assign tree index for size S to variable I. Use x86 asm if possible */
2863 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2864 #define compute_tree_index(S, I)\
2865 {\
2866  unsigned int X = S >> TREEBIN_SHIFT;\
2867  if (X == 0)\
2868  I = 0;\
2869  else if (X > 0xFFFF)\
2870  I = NTREEBINS-1;\
2871  else {\
2872  unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2873  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2874  }\
2875 }
2876 
2877 #elif defined (__INTEL_COMPILER)
2878 #define compute_tree_index(S, I)\
2879 {\
2880  size_t X = S >> TREEBIN_SHIFT;\
2881  if (X == 0)\
2882  I = 0;\
2883  else if (X > 0xFFFF)\
2884  I = NTREEBINS-1;\
2885  else {\
2886  unsigned int K = _bit_scan_reverse (X); \
2887  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2888  }\
2889 }
2890 
2891 #elif defined(_MSC_VER) && _MSC_VER>=1300
2892 #define compute_tree_index(S, I)\
2893 {\
2894  size_t X = S >> TREEBIN_SHIFT;\
2895  if (X == 0)\
2896  I = 0;\
2897  else if (X > 0xFFFF)\
2898  I = NTREEBINS-1;\
2899  else {\
2900  unsigned int K;\
2901  _BitScanReverse((DWORD *) &K, (DWORD) X);\
2902  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2903  }\
2904 }
2905 
2906 #else /* GNUC */
2907 #define compute_tree_index(S, I)\
2908 {\
2909  size_t X = S >> TREEBIN_SHIFT;\
2910  if (X == 0)\
2911  I = 0;\
2912  else if (X > 0xFFFF)\
2913  I = NTREEBINS-1;\
2914  else {\
2915  unsigned int Y = (unsigned int)X;\
2916  unsigned int N = ((Y - 0x100) >> 16) & 8;\
2917  unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2918  N += K;\
2919  N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2920  K = 14 - N + ((Y <<= K) >> 15);\
2921  I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2922  }\
2923 }
2924 #endif /* GNUC */
2925 
2926 /* Bit representing maximum resolved size in a treebin at i */
2927 #define bit_for_tree_index(i) \
2928  (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2929 
2930 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2931 #define leftshift_for_tree_index(i) \
2932  ((i == NTREEBINS-1)? 0 : \
2933  ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2934 
2935 /* The size of the smallest chunk held in bin with index i */
2936 #define minsize_for_tree_index(i) \
2937  ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2938  (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
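/*
  Worked example (illustrative): for a chunk size S == 1536 (0x600),
  X == S >> TREEBIN_SHIFT == 6, whose highest set bit is K == 2, so
  compute_tree_index gives I == (K << 1) + ((S >> (K + TREEBIN_SHIFT - 1)) & 1)
  == 4 + 1 == 5.  Consistently, minsize_for_tree_index(5) == 1024 | 512 == 1536
  and minsize_for_tree_index(6) == 2048, so treebin 5 holds sizes in [1536, 2048).
*/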
2939 
2940 
2941 /* ------------------------ Operations on bin maps ----------------------- */
2942 
2943 /* bit corresponding to given index */
2944 #define idx2bit(i) ((binmap_t)(1) << (i))
2945 
2946 /* Mark/Clear bits with given index */
2947 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2948 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2949 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2950 
2951 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2952 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2953 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2954 
2955 /* isolate the least set bit of a bitmap */
2956 #define least_bit(x) ((x) & -(x))
2957 
2958 /* mask with all bits to left of least bit of x on */
2959 #define left_bits(x) ((x<<1) | -(x<<1))
2960 
2961 /* mask with all bits to left of or equal to least bit of x on */
2962 #define same_or_left_bits(x) ((x) | -(x))
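/*
  Example (illustrative): for x == 0x24 (bits 2 and 5 set), least_bit(x) ==
  0x4 and left_bits(x) turns on every bit above bit 2 (0xFFFFFFF8 for a
  32-bit binmap_t).  tmalloc_large/tmalloc_small below use left_bits with
  the treemap/smallmap to step from a bin index to the next non-empty bin.
*/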
2963 
2964 /* index corresponding to given bit. Use x86 asm if possible */
2965 
2966 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2967 #define compute_bit2idx(X, I)\
2968 {\
2969  unsigned int J;\
2970  J = __builtin_ctz(X); \
2971  I = (bindex_t)J;\
2972 }
2973 
2974 #elif defined (__INTEL_COMPILER)
2975 #define compute_bit2idx(X, I)\
2976 {\
2977  unsigned int J;\
2978  J = _bit_scan_forward (X); \
2979  I = (bindex_t)J;\
2980 }
2981 
2982 #elif defined(_MSC_VER) && _MSC_VER>=1300
2983 #define compute_bit2idx(X, I)\
2984 {\
2985  unsigned int J;\
2986  _BitScanForward((DWORD *) &J, X);\
2987  I = (bindex_t)J;\
2988 }
2989 
2990 #elif USE_BUILTIN_FFS
2991 #define compute_bit2idx(X, I) I = ffs(X)-1
2992 
2993 #else
2994 #define compute_bit2idx(X, I)\
2995 {\
2996  unsigned int Y = X - 1;\
2997  unsigned int K = Y >> (16-4) & 16;\
2998  unsigned int N = K; Y >>= K;\
2999  N += K = Y >> (8-3) & 8; Y >>= K;\
3000  N += K = Y >> (4-2) & 4; Y >>= K;\
3001  N += K = Y >> (2-1) & 2; Y >>= K;\
3002  N += K = Y >> (1-0) & 1; Y >>= K;\
3003  I = (bindex_t)(N + Y);\
3004 }
3005 #endif /* GNUC */
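/*
  Worked example for the generic fallback above (illustrative): for X == 0x8
  (only bit 3 set), Y starts at 7 and the five steps add 0, 0, 0, 2, 0 to N
  while shifting Y down to 1, so I == N + Y == 3, matching what
  __builtin_ctz or _BitScanForward would return.
*/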
3006 
3007 
3008 /* ----------------------- Runtime Check Support ------------------------- */
3009 
3010 /*
3011  For security, the main invariant is that malloc/free/etc never
3012  writes to a static address other than malloc_state, unless static
3013  malloc_state itself has been corrupted, which cannot occur via
3014  malloc (because of these checks). In essence this means that we
3015  believe all pointers, sizes, maps etc held in malloc_state, but
3016  check all of those linked or offsetted from other embedded data
3017  structures. These checks are interspersed with main code in a way
3018  that tends to minimize their run-time cost.
3019 
3020  When FOOTERS is defined, in addition to range checking, we also
3021  verify footer fields of inuse chunks, which can be used to guarantee
3022  that the mstate controlling malloc/free is intact. This is a
3023  streamlined version of the approach described by William Robertson
3024  et al in "Run-time Detection of Heap-based Overflows" LISA'03
3025  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3026  of an inuse chunk holds the xor of its mstate and a random seed,
3027  that is checked upon calls to free() and realloc(). This is
3028  (probabilistically) unguessable from outside the program, but can be
3029  computed by any code successfully malloc'ing any chunk, so does not
3030  itself provide protection against code that has already broken
3031  security through some other means. Unlike Robertson et al, we
3032  always dynamically check addresses of all offset chunks (previous,
3033  next, etc). This turns out to be cheaper than relying on hashes.
3034 */
3035 
3036 #if !INSECURE
3037 /* Check if address a is at least as high as any from MORECORE or MMAP */
3038 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3039 /* Check if address of next chunk n is higher than base chunk p */
3040 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3041 /* Check if p has inuse status */
3042 #define ok_inuse(p) is_inuse(p)
3043 /* Check if p has its pinuse bit on */
3044 #define ok_pinuse(p) pinuse(p)
3045 
3046 #else /* !INSECURE */
3047 #define ok_address(M, a) (1)
3048 #define ok_next(b, n) (1)
3049 #define ok_inuse(p) (1)
3050 #define ok_pinuse(p) (1)
3051 #endif /* !INSECURE */
3052 
3053 #if (FOOTERS && !INSECURE)
3054 /* Check if (alleged) mstate m has expected magic field */
3055 #define ok_magic(M) ((M)->magic == mparams.magic)
3056 #else /* (FOOTERS && !INSECURE) */
3057 #define ok_magic(M) (1)
3058 #endif /* (FOOTERS && !INSECURE) */
3059 
3060 /* In gcc, use __builtin_expect to minimize impact of checks */
3061 #if !INSECURE
3062 #if defined(__GNUC__) && __GNUC__ >= 3
3063 #define RTCHECK(e) __builtin_expect(e, 1)
3064 #else /* GNUC */
3065 #define RTCHECK(e) (e)
3066 #endif /* GNUC */
3067 #else /* !INSECURE */
3068 #define RTCHECK(e) (1)
3069 #endif /* !INSECURE */
3070 
3071 /* macros to set up inuse chunks with or without footers */
3072 
3073 #if !FOOTERS
3074 
3075 #define mark_inuse_foot(M,p,s)
3076 
3077 /* Macros for setting head/foot of non-mmapped chunks */
3078 
3079 /* Set cinuse bit and pinuse bit of next chunk */
3080 #define set_inuse(M,p,s)\
3081  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3082  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3083 
3084 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3085 #define set_inuse_and_pinuse(M,p,s)\
3086  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3087  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3088 
3089 /* Set size, cinuse and pinuse bit of this chunk */
3090 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3091  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3092 
3093 #else /* FOOTERS */
3094 
3095 /* Set foot of inuse chunk to be xor of mstate and seed */
3096 #define mark_inuse_foot(M,p,s)\
3097  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3098 
3099 #define get_mstate_for(p)\
3100  ((mstate)(((mchunkptr)((char*)(p) +\
3101  (chunksize(p))))->prev_foot ^ mparams.magic))
3102 
3103 #define set_inuse(M,p,s)\
3104  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3105  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3106  mark_inuse_foot(M,p,s))
3107 
3108 #define set_inuse_and_pinuse(M,p,s)\
3109  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3110  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3111  mark_inuse_foot(M,p,s))
3112 
3113 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3114  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3115  mark_inuse_foot(M, p, s))
3116 
3117 #endif /* !FOOTERS */
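/*
  Sketch of how the footer is consumed when FOOTERS is defined (the actual
  checks appear in the deallocation paths later in this file): for a chunk p,
  the owning malloc_state is recovered and validated before being trusted,
  roughly

    mstate fm = get_mstate_for(p);   // next chunk's prev_foot ^ mparams.magic
    if (!ok_magic(fm))
      USAGE_ERROR_ACTION(fm, p);

  The xor with mparams.magic keeps the stored footer from being a plain
  pointer to the malloc_state.
*/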
3118 
3119 /* ---------------------------- setting mparams -------------------------- */
3120 
3121 #if LOCK_AT_FORK
3122 static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); }
3123 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3124 static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); }
3125 #endif /* LOCK_AT_FORK */
3126 
3127 /* Initialize mparams */
3128 static int init_mparams(void) {
3129 #ifdef NEED_GLOBAL_LOCK_INIT
3130  if (malloc_global_mutex_status <= 0)
3131  init_malloc_global_mutex();
3132 #endif
3133 
3134  ACQUIRE_MALLOC_GLOBAL_LOCK();
3135  if (mparams.magic == 0) {
3136  size_t magic;
3137  size_t psize;
3138  size_t gsize;
3139 
3140 #ifndef WIN32
3141  psize = malloc_getpagesize;
3142  gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3143 #else /* WIN32 */
3144  {
3145  SYSTEM_INFO system_info;
3146  GetSystemInfo(&system_info);
3147  psize = system_info.dwPageSize;
3148  gsize = ((DEFAULT_GRANULARITY != 0)?
3149  DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3150  }
3151 #endif /* WIN32 */
3152 
3153  /* Sanity-check configuration:
3154  size_t must be unsigned and as wide as pointer type.
3155  ints must be at least 4 bytes.
3156  alignment must be at least 8.
3157  Alignment, min chunk size, and page size must all be powers of 2.
3158  */
3159  if ((sizeof(size_t) != sizeof(char*)) ||
3160  (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3161  (sizeof(int) < 4) ||
3162  (MALLOC_ALIGNMENT < (size_t)8U) ||
3163  ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3164  ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3165  ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3166  ((psize & (psize-SIZE_T_ONE)) != 0))
3167  ABORT;
3168  mparams.granularity = gsize;
3169  mparams.page_size = psize;
3170  mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3171  mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3172 #if MORECORE_CONTIGUOUS
3173  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3174 #else /* MORECORE_CONTIGUOUS */
3175  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3176 #endif /* MORECORE_CONTIGUOUS */
3177 
3178 #if !ONLY_MSPACES
3179  /* Set up lock for main malloc area */
3180  gm->mflags = mparams.default_mflags;
3181  (void)INITIAL_LOCK(&gm->mutex);
3182 #endif
3183 #if LOCK_AT_FORK
3184  pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3185 #endif
3186 
3187  {
3188 #if USE_DEV_RANDOM
3189  int fd;
3190  unsigned char buf[sizeof(size_t)];
3191  /* Try to use /dev/urandom, else fall back on using time */
3192  if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3193  read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3194  magic = *((size_t *) buf);
3195  close(fd);
3196  }
3197  else
3198 #endif /* USE_DEV_RANDOM */
3199 #ifdef WIN32
3200  magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3201 #elif defined(LACKS_TIME_H)
3202  magic = (size_t)&magic ^ (size_t)0x55555555U;
3203 #else
3204  magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3205 #endif
3206  magic |= (size_t)8U; /* ensure nonzero */
3207  magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3208  /* Until memory models are commonly available, use a volatile write */
3209  (*(volatile size_t *)(&(mparams.magic))) = magic;
3210  }
3211  }
3212 
3213  RELEASE_MALLOC_GLOBAL_LOCK();
3214  return 1;
3215 }
3216 
3217 /* support for mallopt */
3218 static int change_mparam(int param_number, int value) {
3219  size_t val;
3220  ensure_initialization();
3221  val = (value == -1)? MAX_SIZE_T : (size_t)value;
3222  switch(param_number) {
3223  case M_TRIM_THRESHOLD:
3224  mparams.trim_threshold = val;
3225  return 1;
3226  case M_GRANULARITY:
3227  if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3228  mparams.granularity = val;
3229  return 1;
3230  }
3231  else
3232  return 0;
3233  case M_MMAP_THRESHOLD:
3234  mparams.mmap_threshold = val;
3235  return 1;
3236  default:
3237  return 0;
3238  }
3239 }
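/*
  Example use through the public mallopt relay (a sketch; the exported name
  is dlmallopt or mallopt depending on the prefixing configured above):

    mallopt(M_GRANULARITY, 64 * 1024);   // accepted only if >= page size
                                         // and a power of two; returns 1
    mallopt(M_TRIM_THRESHOLD, -1);       // -1 maps to MAX_SIZE_T, which
                                         // effectively disables auto-trimming

  Unknown parameter numbers make change_mparam return 0.
*/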
3240 
3241 #if DEBUG
3242 /* ------------------------- Debugging Support --------------------------- */
3243 
3244 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3245 static void do_check_any_chunk(mstate m, mchunkptr p) {
3246  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3247  assert(ok_address(m, p));
3248 }
3249 
3250 /* Check properties of top chunk */
3251 static void do_check_top_chunk(mstate m, mchunkptr p) {
3252  msegmentptr sp = segment_holding(m, (char*)p);
3253  size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3254  assert(sp != 0);
3255  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3256  assert(ok_address(m, p));
3257  assert(sz == m->topsize);
3258  assert(sz > 0);
3259  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3260  assert(pinuse(p));
3261  assert(!pinuse(chunk_plus_offset(p, sz)));
3262 }
3263 
3264 /* Check properties of (inuse) mmapped chunks */
3265 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3266  size_t sz = chunksize(p);
3267  size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3268  assert(is_mmapped(p));
3269  assert(use_mmap(m));
3270  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3271  assert(ok_address(m, p));
3272  assert(!is_small(sz));
3273  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3274  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3275  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3276 }
3277 
3278 /* Check properties of inuse chunks */
3279 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3280  do_check_any_chunk(m, p);
3281  assert(is_inuse(p));
3282  assert(next_pinuse(p));
3283  /* If not pinuse and not mmapped, previous chunk has OK offset */
3284  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3285  if (is_mmapped(p))
3286  do_check_mmapped_chunk(m, p);
3287 }
3288 
3289 /* Check properties of free chunks */
3290 static void do_check_free_chunk(mstate m, mchunkptr p) {
3291  size_t sz = chunksize(p);
3292  mchunkptr next = chunk_plus_offset(p, sz);
3293  do_check_any_chunk(m, p);
3294  assert(!is_inuse(p));
3295  assert(!next_pinuse(p));
3296  assert (!is_mmapped(p));
3297  if (p != m->dv && p != m->top) {
3298  if (sz >= MIN_CHUNK_SIZE) {
3299  assert((sz & CHUNK_ALIGN_MASK) == 0);
3300  assert(is_aligned(chunk2mem(p)));
3301  assert(next->prev_foot == sz);
3302  assert(pinuse(p));
3303  assert (next == m->top || is_inuse(next));
3304  assert(p->fd->bk == p);
3305  assert(p->bk->fd == p);
3306  }
3307  else /* markers are always of size SIZE_T_SIZE */
3308  assert(sz == SIZE_T_SIZE);
3309  }
3310 }
3311 
3312 /* Check properties of malloced chunks at the point they are malloced */
3313 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3314  if (mem != 0) {
3315  mchunkptr p = mem2chunk(mem);
3316  size_t sz = p->head & ~INUSE_BITS;
3317  do_check_inuse_chunk(m, p);
3318  assert((sz & CHUNK_ALIGN_MASK) == 0);
3319  assert(sz >= MIN_CHUNK_SIZE);
3320  assert(sz >= s);
3321  /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3322  assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3323  }
3324 }
3325 
3326 /* Check a tree and its subtrees. */
3327 static void do_check_tree(mstate m, tchunkptr t) {
3328  tchunkptr head = 0;
3329  tchunkptr u = t;
3330  bindex_t tindex = t->index;
3331  size_t tsize = chunksize(t);
3332  bindex_t idx;
3333  compute_tree_index(tsize, idx);
3334  assert(tindex == idx);
3335  assert(tsize >= MIN_LARGE_SIZE);
3336  assert(tsize >= minsize_for_tree_index(idx));
3337  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3338 
3339  do { /* traverse through chain of same-sized nodes */
3340  do_check_any_chunk(m, ((mchunkptr)u));
3341  assert(u->index == tindex);
3342  assert(chunksize(u) == tsize);
3343  assert(!is_inuse(u));
3344  assert(!next_pinuse(u));
3345  assert(u->fd->bk == u);
3346  assert(u->bk->fd == u);
3347  if (u->parent == 0) {
3348  assert(u->child[0] == 0);
3349  assert(u->child[1] == 0);
3350  }
3351  else {
3352  assert(head == 0); /* only one node on chain has parent */
3353  head = u;
3354  assert(u->parent != u);
3355  assert (u->parent->child[0] == u ||
3356  u->parent->child[1] == u ||
3357  *((tbinptr*)(u->parent)) == u);
3358  if (u->child[0] != 0) {
3359  assert(u->child[0]->parent == u);
3360  assert(u->child[0] != u);
3361  do_check_tree(m, u->child[0]);
3362  }
3363  if (u->child[1] != 0) {
3364  assert(u->child[1]->parent == u);
3365  assert(u->child[1] != u);
3366  do_check_tree(m, u->child[1]);
3367  }
3368  if (u->child[0] != 0 && u->child[1] != 0) {
3369  assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3370  }
3371  }
3372  u = u->fd;
3373  } while (u != t);
3374  assert(head != 0);
3375 }
3376 
3377 /* Check all the chunks in a treebin. */
3378 static void do_check_treebin(mstate m, bindex_t i) {
3379  tbinptr* tb = treebin_at(m, i);
3380  tchunkptr t = *tb;
3381  int empty = (m->treemap & (1U << i)) == 0;
3382  if (t == 0)
3383  assert(empty);
3384  if (!empty)
3385  do_check_tree(m, t);
3386 }
3387 
3388 /* Check all the chunks in a smallbin. */
3389 static void do_check_smallbin(mstate m, bindex_t i) {
3390  sbinptr b = smallbin_at(m, i);
3391  mchunkptr p = b->bk;
3392  unsigned int empty = (m->smallmap & (1U << i)) == 0;
3393  if (p == b)
3394  assert(empty);
3395  if (!empty) {
3396  for (; p != b; p = p->bk) {
3397  size_t size = chunksize(p);
3398  mchunkptr q;
3399  /* each chunk claims to be free */
3400  do_check_free_chunk(m, p);
3401  /* chunk belongs in bin */
3402  assert(small_index(size) == i);
3403  assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3404  /* chunk is followed by an inuse chunk */
3405  q = next_chunk(p);
3406  if (q->head != FENCEPOST_HEAD)
3407  do_check_inuse_chunk(m, q);
3408  }
3409  }
3410 }
3411 
3412 /* Find x in a bin. Used in other check functions. */
3413 static int bin_find(mstate m, mchunkptr x) {
3414  size_t size = chunksize(x);
3415  if (is_small(size)) {
3416  bindex_t sidx = small_index(size);
3417  sbinptr b = smallbin_at(m, sidx);
3418  if (smallmap_is_marked(m, sidx)) {
3419  mchunkptr p = b;
3420  do {
3421  if (p == x)
3422  return 1;
3423  } while ((p = p->fd) != b);
3424  }
3425  }
3426  else {
3427  bindex_t tidx;
3428  compute_tree_index(size, tidx);
3429  if (treemap_is_marked(m, tidx)) {
3430  tchunkptr t = *treebin_at(m, tidx);
3431  size_t sizebits = size << leftshift_for_tree_index(tidx);
3432  while (t != 0 && chunksize(t) != size) {
3433  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3434  sizebits <<= 1;
3435  }
3436  if (t != 0) {
3437  tchunkptr u = t;
3438  do {
3439  if (u == (tchunkptr)x)
3440  return 1;
3441  } while ((u = u->fd) != t);
3442  }
3443  }
3444  }
3445  return 0;
3446 }
3447 
3448 /* Traverse each chunk and check it; return total */
3449 static size_t traverse_and_check(mstate m) {
3450  size_t sum = 0;
3451  if (is_initialized(m)) {
3452  msegmentptr s = &m->seg;
3453  sum += m->topsize + TOP_FOOT_SIZE;
3454  while (s != 0) {
3455  mchunkptr q = align_as_chunk(s->base);
3456  mchunkptr lastq = 0;
3457  assert(pinuse(q));
3458  while (segment_holds(s, q) &&
3459  q != m->top && q->head != FENCEPOST_HEAD) {
3460  sum += chunksize(q);
3461  if (is_inuse(q)) {
3462  assert(!bin_find(m, q));
3463  do_check_inuse_chunk(m, q);
3464  }
3465  else {
3466  assert(q == m->dv || bin_find(m, q));
3467  assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3468  do_check_free_chunk(m, q);
3469  }
3470  lastq = q;
3471  q = next_chunk(q);
3472  }
3473  s = s->next;
3474  }
3475  }
3476  return sum;
3477 }
3478 
3479 
3480 /* Check all properties of malloc_state. */
3481 static void do_check_malloc_state(mstate m) {
3482  bindex_t i;
3483  size_t total;
3484  /* check bins */
3485  for (i = 0; i < NSMALLBINS; ++i)
3486  do_check_smallbin(m, i);
3487  for (i = 0; i < NTREEBINS; ++i)
3488  do_check_treebin(m, i);
3489 
3490  if (m->dvsize != 0) { /* check dv chunk */
3491  do_check_any_chunk(m, m->dv);
3492  assert(m->dvsize == chunksize(m->dv));
3493  assert(m->dvsize >= MIN_CHUNK_SIZE);
3494  assert(bin_find(m, m->dv) == 0);
3495  }
3496 
3497  if (m->top != 0) { /* check top chunk */
3498  do_check_top_chunk(m, m->top);
3499  /*assert(m->topsize == chunksize(m->top)); redundant */
3500  assert(m->topsize > 0);
3501  assert(bin_find(m, m->top) == 0);
3502  }
3503 
3504  total = traverse_and_check(m);
3505  assert(total <= m->footprint);
3506  assert(m->footprint <= m->max_footprint);
3507 }
3508 #endif /* DEBUG */
3509 
3510 /* ----------------------------- statistics ------------------------------ */
3511 
3512 #if !NO_MALLINFO
3513 static struct mallinfo internal_mallinfo(mstate m) {
3514  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3515  ensure_initialization();
3516  if (!PREACTION(m)) {
3517  check_malloc_state(m);
3518  if (is_initialized(m)) {
3519  size_t nfree = SIZE_T_ONE; /* top always free */
3520  size_t mfree = m->topsize + TOP_FOOT_SIZE;
3521  size_t sum = mfree;
3522  msegmentptr s = &m->seg;
3523  while (s != 0) {
3524  mchunkptr q = align_as_chunk(s->base);
3525  while (segment_holds(s, q) &&
3526  q != m->top && q->head != FENCEPOST_HEAD) {
3527  size_t sz = chunksize(q);
3528  sum += sz;
3529  if (!is_inuse(q)) {
3530  mfree += sz;
3531  ++nfree;
3532  }
3533  q = next_chunk(q);
3534  }
3535  s = s->next;
3536  }
3537 
3538  nm.arena = sum;
3539  nm.ordblks = nfree;
3540  nm.hblkhd = m->footprint - sum;
3541  nm.usmblks = m->max_footprint;
3542  nm.uordblks = m->footprint - mfree;
3543  nm.fordblks = mfree;
3544  nm.keepcost = m->topsize;
3545  }
3546 
3547  POSTACTION(m);
3548  }
3549  return nm;
3550 }
3551 #endif /* !NO_MALLINFO */
3552 
3553 #if !NO_MALLOC_STATS
3554 static void internal_malloc_stats(mstate m) {
3555  ensure_initialization();
3556  if (!PREACTION(m)) {
3557  size_t maxfp = 0;
3558  size_t fp = 0;
3559  size_t used = 0;
3560  check_malloc_state(m);
3561  if (is_initialized(m)) {
3562  msegmentptr s = &m->seg;
3563  maxfp = m->max_footprint;
3564  fp = m->footprint;
3565  used = fp - (m->topsize + TOP_FOOT_SIZE);
3566 
3567  while (s != 0) {
3568  mchunkptr q = align_as_chunk(s->base);
3569  while (segment_holds(s, q) &&
3570  q != m->top && q->head != FENCEPOST_HEAD) {
3571  if (!is_inuse(q))
3572  used -= chunksize(q);
3573  q = next_chunk(q);
3574  }
3575  s = s->next;
3576  }
3577  }
3578  POSTACTION(m); /* drop lock */
3579  fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3580  fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3581  fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3582  }
3583 }
3584 #endif /* NO_MALLOC_STATS */
3585 
3586 /* ----------------------- Operations on smallbins ----------------------- */
3587 
3588 /*
3589  Various forms of linking and unlinking are defined as macros. Even
3590  the ones for trees, which are very long but have very short typical
3591  paths. This is ugly but reduces reliance on inlining support of
3592  compilers.
3593 */
3594 
3595 /* Link a free chunk into a smallbin */
3596 #define insert_small_chunk(M, P, S) {\
3597  bindex_t I = small_index(S);\
3598  mchunkptr B = smallbin_at(M, I);\
3599  mchunkptr F = B;\
3600  assert(S >= MIN_CHUNK_SIZE);\
3601  if (!smallmap_is_marked(M, I))\
3602  mark_smallmap(M, I);\
3603  else if (RTCHECK(ok_address(M, B->fd)))\
3604  F = B->fd;\
3605  else {\
3606  CORRUPTION_ERROR_ACTION(M);\
3607  }\
3608  B->fd = P;\
3609  F->bk = P;\
3610  P->fd = F;\
3611  P->bk = B;\
3612 }
3613 
3614 /* Unlink a chunk from a smallbin */
3615 #define unlink_small_chunk(M, P, S) {\
3616  mchunkptr F = P->fd;\
3617  mchunkptr B = P->bk;\
3618  bindex_t I = small_index(S);\
3619  assert(P != B);\
3620  assert(P != F);\
3621  assert(chunksize(P) == small_index2size(I));\
3622  if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3623  if (B == F) {\
3624  clear_smallmap(M, I);\
3625  }\
3626  else if (RTCHECK(B == smallbin_at(M,I) ||\
3627  (ok_address(M, B) && B->fd == P))) {\
3628  F->bk = B;\
3629  B->fd = F;\
3630  }\
3631  else {\
3632  CORRUPTION_ERROR_ACTION(M);\
3633  }\
3634  }\
3635  else {\
3636  CORRUPTION_ERROR_ACTION(M);\
3637  }\
3638 }
3639 
3640 /* Unlink the first chunk from a smallbin */
3641 #define unlink_first_small_chunk(M, B, P, I) {\
3642  mchunkptr F = P->fd;\
3643  assert(P != B);\
3644  assert(P != F);\
3645  assert(chunksize(P) == small_index2size(I));\
3646  if (B == F) {\
3647  clear_smallmap(M, I);\
3648  }\
3649  else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3650  F->bk = B;\
3651  B->fd = F;\
3652  }\
3653  else {\
3654  CORRUPTION_ERROR_ACTION(M);\
3655  }\
3656 }
3657 
3658 /* Replace dv node, binning the old one */
3659 /* Used only when dvsize known to be small */
3660 #define replace_dv(M, P, S) {\
3661  size_t DVS = M->dvsize;\
3662  assert(is_small(DVS));\
3663  if (DVS != 0) {\
3664  mchunkptr DV = M->dv;\
3665  insert_small_chunk(M, DV, DVS);\
3666  }\
3667  M->dvsize = S;\
3668  M->dv = P;\
3669 }
3670 
3671 /* ------------------------- Operations on trees ------------------------- */
3672 
3673 /* Insert chunk into tree */
3674 #define insert_large_chunk(M, X, S) {\
3675  tbinptr* H;\
3676  bindex_t I;\
3677  compute_tree_index(S, I);\
3678  H = treebin_at(M, I);\
3679  X->index = I;\
3680  X->child[0] = X->child[1] = 0;\
3681  if (!treemap_is_marked(M, I)) {\
3682  mark_treemap(M, I);\
3683  *H = X;\
3684  X->parent = (tchunkptr)H;\
3685  X->fd = X->bk = X;\
3686  }\
3687  else {\
3688  tchunkptr T = *H;\
3689  size_t K = S << leftshift_for_tree_index(I);\
3690  for (;;) {\
3691  if (chunksize(T) != S) {\
3692  tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3693  K <<= 1;\
3694  if (*C != 0)\
3695  T = *C;\
3696  else if (RTCHECK(ok_address(M, C))) {\
3697  *C = X;\
3698  X->parent = T;\
3699  X->fd = X->bk = X;\
3700  break;\
3701  }\
3702  else {\
3703  CORRUPTION_ERROR_ACTION(M);\
3704  break;\
3705  }\
3706  }\
3707  else {\
3708  tchunkptr F = T->fd;\
3709  if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3710  T->fd = F->bk = X;\
3711  X->fd = F;\
3712  X->bk = T;\
3713  X->parent = 0;\
3714  break;\
3715  }\
3716  else {\
3717  CORRUPTION_ERROR_ACTION(M);\
3718  break;\
3719  }\
3720  }\
3721  }\
3722  }\
3723 }
3724 
3725 /*
3726  Unlink steps:
3727 
3728  1. If x is a chained node, unlink it from its same-sized fd/bk links
3729  and choose its bk node as its replacement.
3730  2. If x was the last node of its size, but not a leaf node, it must
3731  be replaced with a leaf node (not merely one with an open left or
3732  right), to make sure that lefts and rights of descendants
3733  correspond properly to bit masks. We use the rightmost descendant
3734  of x. We could use any other leaf, but this is easy to locate and
3735  tends to counteract removal of leftmosts elsewhere, and so keeps
3736  paths shorter than minimally guaranteed. This doesn't loop much
3737  because on average a node in a tree is near the bottom.
3738  3. If x is the base of a chain (i.e., has parent links) relink
3739  x's parent and children to x's replacement (or null if none).
3740 */
3741 
3742 #define unlink_large_chunk(M, X) {\
3743  tchunkptr XP = X->parent;\
3744  tchunkptr R;\
3745  if (X->bk != X) {\
3746  tchunkptr F = X->fd;\
3747  R = X->bk;\
3748  if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3749  F->bk = R;\
3750  R->fd = F;\
3751  }\
3752  else {\
3753  CORRUPTION_ERROR_ACTION(M);\
3754  }\
3755  }\
3756  else {\
3757  tchunkptr* RP;\
3758  if (((R = *(RP = &(X->child[1]))) != 0) ||\
3759  ((R = *(RP = &(X->child[0]))) != 0)) {\
3760  tchunkptr* CP;\
3761  while ((*(CP = &(R->child[1])) != 0) ||\
3762  (*(CP = &(R->child[0])) != 0)) {\
3763  R = *(RP = CP);\
3764  }\
3765  if (RTCHECK(ok_address(M, RP)))\
3766  *RP = 0;\
3767  else {\
3768  CORRUPTION_ERROR_ACTION(M);\
3769  }\
3770  }\
3771  }\
3772  if (XP != 0) {\
3773  tbinptr* H = treebin_at(M, X->index);\
3774  if (X == *H) {\
3775  if ((*H = R) == 0) \
3776  clear_treemap(M, X->index);\
3777  }\
3778  else if (RTCHECK(ok_address(M, XP))) {\
3779  if (XP->child[0] == X) \
3780  XP->child[0] = R;\
3781  else \
3782  XP->child[1] = R;\
3783  }\
3784  else\
3785  CORRUPTION_ERROR_ACTION(M);\
3786  if (R != 0) {\
3787  if (RTCHECK(ok_address(M, R))) {\
3788  tchunkptr C0, C1;\
3789  R->parent = XP;\
3790  if ((C0 = X->child[0]) != 0) {\
3791  if (RTCHECK(ok_address(M, C0))) {\
3792  R->child[0] = C0;\
3793  C0->parent = R;\
3794  }\
3795  else\
3796  CORRUPTION_ERROR_ACTION(M);\
3797  }\
3798  if ((C1 = X->child[1]) != 0) {\
3799  if (RTCHECK(ok_address(M, C1))) {\
3800  R->child[1] = C1;\
3801  C1->parent = R;\
3802  }\
3803  else\
3804  CORRUPTION_ERROR_ACTION(M);\
3805  }\
3806  }\
3807  else\
3808  CORRUPTION_ERROR_ACTION(M);\
3809  }\
3810  }\
3811 }
3812 
3813 /* Relays to large vs small bin operations */
3814 
3815 #define insert_chunk(M, P, S)\
3816  if (is_small(S)) insert_small_chunk(M, P, S)\
3817  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3818 
3819 #define unlink_chunk(M, P, S)\
3820  if (is_small(S)) unlink_small_chunk(M, P, S)\
3821  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3822 
3823 
3824 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3825 
3826 #if ONLY_MSPACES
3827 #define internal_malloc(m, b) mspace_malloc(m, b)
3828 #define internal_free(m, mem) mspace_free(m,mem);
3829 #else /* ONLY_MSPACES */
3830 #if MSPACES
3831 #define internal_malloc(m, b)\
3832  ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3833 #define internal_free(m, mem)\
3834  if (m == gm) dlfree(mem); else mspace_free(m,mem);
3835 #else /* MSPACES */
3836 #define internal_malloc(m, b) dlmalloc(b)
3837 #define internal_free(m, mem) dlfree(mem)
3838 #endif /* MSPACES */
3839 #endif /* ONLY_MSPACES */
3840 
3841 /* ----------------------- Direct-mmapping chunks ----------------------- */
3842 
3843 /*
3844  Directly mmapped chunks are set up with an offset to the start of
3845  the mmapped region stored in the prev_foot field of the chunk. This
3846  allows reconstruction of the required argument to MUNMAP when freed,
3847  and also allows adjustment of the returned chunk to meet alignment
3848  requirements (especially in memalign).
3849 */
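/*
  Illustrative consequence of this layout: when an mmapped chunk p is
  released or resized, the original mapping is recovered as

    base   = (char*)p - p->prev_foot;
    length = chunksize(p) + p->prev_foot + MMAP_FOOT_PAD;

  which is the arithmetic used by mmap_resize() and dispose_chunk() below.
*/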
3850 
3851 /* Malloc using mmap */
3852 static void* mmap_alloc(mstate m, size_t nb) {
3853  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3854  if (m->footprint_limit != 0) {
3855  size_t fp = m->footprint + mmsize;
3856  if (fp <= m->footprint || fp > m->footprint_limit)
3857  return 0;
3858  }
3859  if (mmsize > nb) { /* Check for wrap around 0 */
3860  char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3861  if (mm != CMFAIL) {
3862  size_t offset = align_offset(chunk2mem(mm));
3863  size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3864  mchunkptr p = (mchunkptr)(mm + offset);
3865  p->prev_foot = offset;
3866  p->head = psize;
3867  mark_inuse_foot(m, p, psize);
3868  chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3869  chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3870 
3871  if (m->least_addr == 0 || mm < m->least_addr)
3872  m->least_addr = mm;
3873  if ((m->footprint += mmsize) > m->max_footprint)
3874  m->max_footprint = m->footprint;
3875  assert(is_aligned(chunk2mem(p)));
3876  check_mmapped_chunk(m, p);
3877  return chunk2mem(p);
3878  }
3879  }
3880  return 0;
3881 }
3882 
3883 /* Realloc using mmap */
3884 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3885  size_t oldsize = chunksize(oldp);
3886  (void)flags; /* placate people compiling -Wunused */
3887  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3888  return 0;
3889  /* Keep old chunk if big enough but not too big */
3890  if (oldsize >= nb + SIZE_T_SIZE &&
3891  (oldsize - nb) <= (mparams.granularity << 1))
3892  return oldp;
3893  else {
3894  size_t offset = oldp->prev_foot;
3895  size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3896  size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3897  char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3898  oldmmsize, newmmsize, flags);
3899  if (cp != CMFAIL) {
3900  mchunkptr newp = (mchunkptr)(cp + offset);
3901  size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3902  newp->head = psize;
3903  mark_inuse_foot(m, newp, psize);
3904  chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3905  chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3906 
3907  if (cp < m->least_addr)
3908  m->least_addr = cp;
3909  if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3910  m->max_footprint = m->footprint;
3911  check_mmapped_chunk(m, newp);
3912  return newp;
3913  }
3914  }
3915  return 0;
3916 }
3917 
3918 
3919 /* -------------------------- mspace management -------------------------- */
3920 
3921 /* Initialize top chunk and its size */
3922 static void init_top(mstate m, mchunkptr p, size_t psize) {
3923  /* Ensure alignment */
3924  size_t offset = align_offset(chunk2mem(p));
3925  p = (mchunkptr)((char*)p + offset);
3926  psize -= offset;
3927 
3928  m->top = p;
3929  m->topsize = psize;
3930  p->head = psize | PINUSE_BIT;
3931  /* set size of fake trailing chunk holding overhead space only once */
3932  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3933  m->trim_check = mparams.trim_threshold; /* reset on each update */
3934 }
3935 
3936 /* Initialize bins for a new mstate that is otherwise zeroed out */
3937 static void init_bins(mstate m) {
3938  /* Establish circular links for smallbins */
3939  bindex_t i;
3940  for (i = 0; i < NSMALLBINS; ++i) {
3941  sbinptr bin = smallbin_at(m,i);
3942  bin->fd = bin->bk = bin;
3943  }
3944 }
3945 
3946 #if PROCEED_ON_ERROR
3947 
3948 /* default corruption action */
3949 static void reset_on_error(mstate m) {
3950  int i;
3951  ++malloc_corruption_error_count;
3952  /* Reinitialize fields to forget about all memory */
3953  m->smallmap = m->treemap = 0;
3954  m->dvsize = m->topsize = 0;
3955  m->seg.base = 0;
3956  m->seg.size = 0;
3957  m->seg.next = 0;
3958  m->top = m->dv = 0;
3959  for (i = 0; i < NTREEBINS; ++i)
3960  *treebin_at(m, i) = 0;
3961  init_bins(m);
3962 }
3963 #endif /* PROCEED_ON_ERROR */
3964 
3965 /* Allocate a chunk of size nb from newbase and consolidate the remainder with the first chunk of the old (successor) base. */
3966 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3967  size_t nb) {
3968  mchunkptr p = align_as_chunk(newbase);
3969  mchunkptr oldfirst = align_as_chunk(oldbase);
3970  size_t psize = (char*)oldfirst - (char*)p;
3971  mchunkptr q = chunk_plus_offset(p, nb);
3972  size_t qsize = psize - nb;
3973  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3974 
3975  assert((char*)oldfirst > (char*)q);
3976  assert(pinuse(oldfirst));
3977  assert(qsize >= MIN_CHUNK_SIZE);
3978 
3979  /* consolidate remainder with first chunk of old base */
3980  if (oldfirst == m->top) {
3981  size_t tsize = m->topsize += qsize;
3982  m->top = q;
3983  q->head = tsize | PINUSE_BIT;
3984  check_top_chunk(m, q);
3985  }
3986  else if (oldfirst == m->dv) {
3987  size_t dsize = m->dvsize += qsize;
3988  m->dv = q;
3989  set_size_and_pinuse_of_free_chunk(q, dsize);
3990  }
3991  else {
3992  if (!is_inuse(oldfirst)) {
3993  size_t nsize = chunksize(oldfirst);
3994  unlink_chunk(m, oldfirst, nsize);
3995  oldfirst = chunk_plus_offset(oldfirst, nsize);
3996  qsize += nsize;
3997  }
3998  set_free_with_pinuse(q, qsize, oldfirst);
3999  insert_chunk(m, q, qsize);
4000  check_free_chunk(m, q);
4001  }
4002 
4003  check_malloced_chunk(m, chunk2mem(p), nb);
4004  return chunk2mem(p);
4005 }
4006 
4007 /* Add a segment to hold a new noncontiguous region */
4008 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
4009  /* Determine locations and sizes of segment, fenceposts, old top */
4010  char* old_top = (char*)m->top;
4011  msegmentptr oldsp = segment_holding(m, old_top);
4012  char* old_end = oldsp->base + oldsp->size;
4013  size_t ssize = pad_request(sizeof(struct malloc_segment));
4014  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
4015  size_t offset = align_offset(chunk2mem(rawsp));
4016  char* asp = rawsp + offset;
4017  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
4018  mchunkptr sp = (mchunkptr)csp;
4019  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
4020  mchunkptr tnext = chunk_plus_offset(sp, ssize);
4021  mchunkptr p = tnext;
4022  int nfences = 0;
4023 
4024  /* reset top to new space */
4025  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4026 
4027  /* Set up segment record */
4028  assert(is_aligned(ss));
4029  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4030  *ss = m->seg; /* Push current record */
4031  m->seg.base = tbase;
4032  m->seg.size = tsize;
4033  m->seg.sflags = mmapped;
4034  m->seg.next = ss;
4035 
4036  /* Insert trailing fenceposts */
4037  for (;;) {
4038  mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4039  p->head = FENCEPOST_HEAD;
4040  ++nfences;
4041  if ((char*)(&(nextp->head)) < old_end)
4042  p = nextp;
4043  else
4044  break;
4045  }
4046  assert(nfences >= 2);
4047 
4048  /* Insert the rest of old top into a bin as an ordinary free chunk */
4049  if (csp != old_top) {
4050  mchunkptr q = (mchunkptr)old_top;
4051  size_t psize = csp - old_top;
4052  mchunkptr tn = chunk_plus_offset(q, psize);
4053  set_free_with_pinuse(q, psize, tn);
4054  insert_chunk(m, q, psize);
4055  }
4056 
4057  check_top_chunk(m, m->top);
4058 }
4059 
4060 /* -------------------------- System allocation -------------------------- */
4061 
4062 /* Get memory from system using MORECORE or MMAP */
4063 static void* sys_alloc(mstate m, size_t nb) {
4064  char* tbase = CMFAIL;
4065  size_t tsize = 0;
4066  flag_t mmap_flag = 0;
4067  size_t asize; /* allocation size */
4068 
4069  ensure_initialization();
4070 
4071  /* Directly map large chunks, but only if already initialized */
4072  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4073  void* mem = mmap_alloc(m, nb);
4074  if (mem != 0)
4075  return mem;
4076  }
4077 
4078  asize = granularity_align(nb + SYS_ALLOC_PADDING);
4079  if (asize <= nb)
4080  return 0; /* wraparound */
4081  if (m->footprint_limit != 0) {
4082  size_t fp = m->footprint + asize;
4083  if (fp <= m->footprint || fp > m->footprint_limit)
4084  return 0;
4085  }
4086 
4087  /*
4088  Try getting memory in any of three ways (in most-preferred to
4089  least-preferred order):
4090  1. A call to MORECORE that can normally contiguously extend memory.
4091  (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4092  main space is mmapped or a previous contiguous call failed)
4093  2. A call to MMAP new space (disabled if not HAVE_MMAP).
4094  Note that under the default settings, if MORECORE is unable to
4095  fulfill a request, and HAVE_MMAP is true, then mmap is
4096  used as a noncontiguous system allocator. This is a useful backup
4097  strategy for systems with holes in address spaces -- in this case
4098  sbrk cannot contiguously expand the heap, but mmap may be able to
4099  find space.
4100  3. A call to MORECORE that cannot usually contiguously extend memory.
4101  (disabled if not HAVE_MORECORE)
4102 
4103  In all cases, we need to request enough bytes from system to ensure
4104  we can malloc nb bytes upon success, so pad with enough space for
4105  top_foot, plus alignment-pad to make sure we don't lose bytes if
4106  not on boundary, and round this up to a granularity unit.
4107  */
4108 
4109  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4110  char* br = CMFAIL;
4111  size_t ssize = asize; /* sbrk call size */
4112  msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4113  ACQUIRE_MALLOC_GLOBAL_LOCK();
4114 
4115  if (ss == 0) { /* First time through or recovery */
4116  char* base = (char*)CALL_MORECORE(0);
4117  if (base != CMFAIL) {
4118  size_t fp;
4119  /* Adjust to end on a page boundary */
4120  if (!is_page_aligned(base))
4121  ssize += (page_align((size_t)base) - (size_t)base);
4122  fp = m->footprint + ssize; /* recheck limits */
4123  if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4124  (m->footprint_limit == 0 ||
4125  (fp > m->footprint && fp <= m->footprint_limit)) &&
4126  (br = (char*)(CALL_MORECORE(ssize))) == base) {
4127  tbase = base;
4128  tsize = ssize;
4129  }
4130  }
4131  }
4132  else {
4133  /* Subtract out existing available top space from MORECORE request. */
4134  ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4135  /* Use mem here only if it did contiguously extend old space */
4136  if (ssize < HALF_MAX_SIZE_T &&
4137  (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4138  tbase = br;
4139  tsize = ssize;
4140  }
4141  }
4142 
4143  if (tbase == CMFAIL) { /* Cope with partial failure */
4144  if (br != CMFAIL) { /* Try to use/extend the space we did get */
4145  if (ssize < HALF_MAX_SIZE_T &&
4146  ssize < nb + SYS_ALLOC_PADDING) {
4147  size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4148  if (esize < HALF_MAX_SIZE_T) {
4149  char* end = (char*)CALL_MORECORE(esize);
4150  if (end != CMFAIL)
4151  ssize += esize;
4152  else { /* Can't use; try to release */
4153  (void) CALL_MORECORE(-ssize);
4154  br = CMFAIL;
4155  }
4156  }
4157  }
4158  }
4159  if (br != CMFAIL) { /* Use the space we did get */
4160  tbase = br;
4161  tsize = ssize;
4162  }
4163  else
4164  disable_contiguous(m); /* Don't try contiguous path in the future */
4165  }
4166 
4167  RELEASE_MALLOC_GLOBAL_LOCK();
4168  }
4169 
4170  if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4171  char* mp = (char*)(CALL_MMAP(asize));
4172  if (mp != CMFAIL) {
4173  tbase = mp;
4174  tsize = asize;
4175  mmap_flag = USE_MMAP_BIT;
4176  }
4177  }
4178 
4179  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4180  if (asize < HALF_MAX_SIZE_T) {
4181  char* br = CMFAIL;
4182  char* end = CMFAIL;
4183  ACQUIRE_MALLOC_GLOBAL_LOCK();
4184  br = (char*)(CALL_MORECORE(asize));
4185  end = (char*)(CALL_MORECORE(0));
4186  RELEASE_MALLOC_GLOBAL_LOCK();
4187  if (br != CMFAIL && end != CMFAIL && br < end) {
4188  size_t ssize = end - br;
4189  if (ssize > nb + TOP_FOOT_SIZE) {
4190  tbase = br;
4191  tsize = ssize;
4192  }
4193  }
4194  }
4195  }
4196 
4197  if (tbase != CMFAIL) {
4198 
4199  if ((m->footprint += tsize) > m->max_footprint)
4200  m->max_footprint = m->footprint;
4201 
4202  if (!is_initialized(m)) { /* first-time initialization */
4203  if (m->least_addr == 0 || tbase < m->least_addr)
4204  m->least_addr = tbase;
4205  m->seg.base = tbase;
4206  m->seg.size = tsize;
4207  m->seg.sflags = mmap_flag;
4208  m->magic = mparams.magic;
4209  m->release_checks = MAX_RELEASE_CHECK_RATE;
4210  init_bins(m);
4211 #if !ONLY_MSPACES
4212  if (is_global(m))
4213  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4214  else
4215 #endif
4216  {
4217  /* Offset top by embedded malloc_state */
4218  mchunkptr mn = next_chunk(mem2chunk(m));
4219  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4220  }
4221  }
4222 
4223  else {
4224  /* Try to merge with an existing segment */
4225  msegmentptr sp = &m->seg;
4226  /* Only consider most recent segment if traversal suppressed */
4227  while (sp != 0 && tbase != sp->base + sp->size)
4228  sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4229  if (sp != 0 &&
4230  !is_extern_segment(sp) &&
4231  (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4232  segment_holds(sp, m->top)) { /* append */
4233  sp->size += tsize;
4234  init_top(m, m->top, m->topsize + tsize);
4235  }
4236  else {
4237  if (tbase < m->least_addr)
4238  m->least_addr = tbase;
4239  sp = &m->seg;
4240  while (sp != 0 && sp->base != tbase + tsize)
4241  sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4242  if (sp != 0 &&
4243  !is_extern_segment(sp) &&
4244  (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4245  char* oldbase = sp->base;
4246  sp->base = tbase;
4247  sp->size += tsize;
4248  return prepend_alloc(m, tbase, oldbase, nb);
4249  }
4250  else
4251  add_segment(m, tbase, tsize, mmap_flag);
4252  }
4253  }
4254 
4255  if (nb < m->topsize) { /* Allocate from new or extended top space */
4256  size_t rsize = m->topsize -= nb;
4257  mchunkptr p = m->top;
4258  mchunkptr r = m->top = chunk_plus_offset(p, nb);
4259  r->head = rsize | PINUSE_BIT;
4260  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4261  check_top_chunk(m, m->top);
4262  check_malloced_chunk(m, chunk2mem(p), nb);
4263  return chunk2mem(p);
4264  }
4265  }
4266 
4267  MALLOC_FAILURE_ACTION;
4268  return 0;
4269 }
4270 
4271 /* ----------------------- system deallocation -------------------------- */
4272 
4273 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4274 static size_t release_unused_segments(mstate m) {
4275  size_t released = 0;
4276  int nsegs = 0;
4277  msegmentptr pred = &m->seg;
4278  msegmentptr sp = pred->next;
4279  while (sp != 0) {
4280  char* base = sp->base;
4281  size_t size = sp->size;
4282  msegmentptr next = sp->next;
4283  ++nsegs;
4284  if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4285  mchunkptr p = align_as_chunk(base);
4286  size_t psize = chunksize(p);
4287  /* Can unmap if first chunk holds entire segment and not pinned */
4288  if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4289  tchunkptr tp = (tchunkptr)p;
4290  assert(segment_holds(sp, (char*)sp));
4291  if (p == m->dv) {
4292  m->dv = 0;
4293  m->dvsize = 0;
4294  }
4295  else {
4296  unlink_large_chunk(m, tp);
4297  }
4298  if (CALL_MUNMAP(base, size) == 0) {
4299  released += size;
4300  m->footprint -= size;
4301  /* unlink obsoleted record */
4302  sp = pred;
4303  sp->next = next;
4304  }
4305  else { /* back out if cannot unmap */
4306  insert_large_chunk(m, tp, psize);
4307  }
4308  }
4309  }
4310  if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4311  break;
4312  pred = sp;
4313  sp = next;
4314  }
4315  /* Reset check counter */
4316  m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4317  (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4318  return released;
4319 }
4320 
4321 static int sys_trim(mstate m, size_t pad) {
4322  size_t released = 0;
4323  ensure_initialization();
4324  if (pad < MAX_REQUEST && is_initialized(m)) {
4325  pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4326 
4327  if (m->topsize > pad) {
4328  /* Shrink top space in granularity-size units, keeping at least one */
4329  size_t unit = mparams.granularity;
4330  size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4331  SIZE_T_ONE) * unit;
4332  msegmentptr sp = segment_holding(m, (char*)m->top);
4333 
4334  if (!is_extern_segment(sp)) {
4335  if (is_mmapped_segment(sp)) {
4336  if (HAVE_MMAP &&
4337  sp->size >= extra &&
4338  !has_segment_link(m, sp)) { /* can't shrink if pinned */
4339  size_t newsize = sp->size - extra;
4340  (void)newsize; /* placate people compiling -Wunused-variable */
4341  /* Prefer mremap, fall back to munmap */
4342  if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4343  (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4344  released = extra;
4345  }
4346  }
4347  }
4348  else if (HAVE_MORECORE) {
4349  if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4350  extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4351  ACQUIRE_MALLOC_GLOBAL_LOCK();
4352  {
4353  /* Make sure end of memory is where we last set it. */
4354  char* old_br = (char*)(CALL_MORECORE(0));
4355  if (old_br == sp->base + sp->size) {
4356  char* rel_br = (char*)(CALL_MORECORE(-extra));
4357  char* new_br = (char*)(CALL_MORECORE(0));
4358  if (rel_br != CMFAIL && new_br < old_br)
4359  released = old_br - new_br;
4360  }
4361  }
4362  RELEASE_MALLOC_GLOBAL_LOCK();
4363  }
4364  }
4365 
4366  if (released != 0) {
4367  sp->size -= released;
4368  m->footprint -= released;
4369  init_top(m, m->top, m->topsize - released);
4370  check_top_chunk(m, m->top);
4371  }
4372  }
4373 
4374  /* Unmap any unused mmapped segments */
4375  if (HAVE_MMAP)
4376  released += release_unused_segments(m);
4377 
4378  /* On failure, disable autotrim to avoid repeated failed future calls */
4379  if (released == 0 && m->topsize > m->trim_check)
4380  m->trim_check = MAX_SIZE_T;
4381  }
4382 
4383  return (released != 0)? 1 : 0;
4384 }
4385 
4386 /* Consolidate and bin a chunk. Differs from exported versions
4387  of free mainly in that the chunk need not be marked as inuse.
4388 */
4389 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4390  mchunkptr next = chunk_plus_offset(p, psize);
4391  if (!pinuse(p)) {
4392  mchunkptr prev;
4393  size_t prevsize = p->prev_foot;
4394  if (is_mmapped(p)) {
4395  psize += prevsize + MMAP_FOOT_PAD;
4396  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4397  m->footprint -= psize;
4398  return;
4399  }
4400  prev = chunk_minus_offset(p, prevsize);
4401  psize += prevsize;
4402  p = prev;
4403  if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4404  if (p != m->dv) {
4405  unlink_chunk(m, p, prevsize);
4406  }
4407  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4408  m->dvsize = psize;
4409  set_free_with_pinuse(p, psize, next);
4410  return;
4411  }
4412  }
4413  else {
4414  CORRUPTION_ERROR_ACTION(m);
4415  return;
4416  }
4417  }
4418  if (RTCHECK(ok_address(m, next))) {
4419  if (!cinuse(next)) { /* consolidate forward */
4420  if (next == m->top) {
4421  size_t tsize = m->topsize += psize;
4422  m->top = p;
4423  p->head = tsize | PINUSE_BIT;
4424  if (p == m->dv) {
4425  m->dv = 0;
4426  m->dvsize = 0;
4427  }
4428  return;
4429  }
4430  else if (next == m->dv) {
4431  size_t dsize = m->dvsize += psize;
4432  m->dv = p;
4433  set_size_and_pinuse_of_free_chunk(p, dsize);
4434  return;
4435  }
4436  else {
4437  size_t nsize = chunksize(next);
4438  psize += nsize;
4439  unlink_chunk(m, next, nsize);
4440  set_size_and_pinuse_of_free_chunk(p, psize);
4441  if (p == m->dv) {
4442  m->dvsize = psize;
4443  return;
4444  }
4445  }
4446  }
4447  else {
4448  set_free_with_pinuse(p, psize, next);
4449  }
4450  insert_chunk(m, p, psize);
4451  }
4452  else {
4453  CORRUPTION_ERROR_ACTION(m);
4454  }
4455 }
4456 
4457 /* ---------------------------- malloc --------------------------- */
4458 
4459 /* allocate a large request from the best fitting chunk in a treebin */
4460 static void* tmalloc_large(mstate m, size_t nb) {
4461  tchunkptr v = 0;
4462  size_t rsize = -nb; /* Unsigned negation */
4463  tchunkptr t;
4464  bindex_t idx;
4465  compute_tree_index(nb, idx);
4466  if ((t = *treebin_at(m, idx)) != 0) {
4467  /* Traverse tree for this bin looking for node with size == nb */
4468  size_t sizebits = nb << leftshift_for_tree_index(idx);
4469  tchunkptr rst = 0; /* The deepest untaken right subtree */
4470  for (;;) {
4471  tchunkptr rt;
4472  size_t trem = chunksize(t) - nb;
4473  if (trem < rsize) {
4474  v = t;
4475  if ((rsize = trem) == 0)
4476  break;
4477  }
4478  rt = t->child[1];
4479  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4480  if (rt != 0 && rt != t)
4481  rst = rt;
4482  if (t == 0) {
4483  t = rst; /* set t to least subtree holding sizes > nb */
4484  break;
4485  }
4486  sizebits <<= 1;
4487  }
4488  }
4489  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4490  binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4491  if (leftbits != 0) {
4492  bindex_t i;
4493  binmap_t leastbit = least_bit(leftbits);
4494  compute_bit2idx(leastbit, i);
4495  t = *treebin_at(m, i);
4496  }
4497  }
4498 
4499  while (t != 0) { /* find smallest of tree or subtree */
4500  size_t trem = chunksize(t) - nb;
4501  if (trem < rsize) {
4502  rsize = trem;
4503  v = t;
4504  }
4505  t = leftmost_child(t);
4506  }
4507 
4508  /* If dv is a better fit, return 0 so malloc will use it */
4509  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4510  if (RTCHECK(ok_address(m, v))) { /* split */
4511  mchunkptr r = chunk_plus_offset(v, nb);
4512  assert(chunksize(v) == rsize + nb);
4513  if (RTCHECK(ok_next(v, r))) {
4514  unlink_large_chunk(m, v);
4515  if (rsize < MIN_CHUNK_SIZE)
4516  set_inuse_and_pinuse(m, v, (rsize + nb));
4517  else {
4518  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4519  set_size_and_pinuse_of_free_chunk(r, rsize);
4520  insert_chunk(m, r, rsize);
4521  }
4522  return chunk2mem(v);
4523  }
4524  }
4525  CORRUPTION_ERROR_ACTION(m);
4526  }
4527  return 0;
4528 }
4529 
4530 /* allocate a small request from the best fitting chunk in a treebin */
4531 static void* tmalloc_small(mstate m, size_t nb) {
4532  tchunkptr t, v;
4533  size_t rsize;
4534  bindex_t i;
4535  binmap_t leastbit = least_bit(m->treemap);
4536  compute_bit2idx(leastbit, i);
4537  v = t = *treebin_at(m, i);
4538  rsize = chunksize(t) - nb;
4539 
4540  while ((t = leftmost_child(t)) != 0) {
4541  size_t trem = chunksize(t) - nb;
4542  if (trem < rsize) {
4543  rsize = trem;
4544  v = t;
4545  }
4546  }
4547 
4548  if (RTCHECK(ok_address(m, v))) {
4549  mchunkptr r = chunk_plus_offset(v, nb);
4550  assert(chunksize(v) == rsize + nb);
4551  if (RTCHECK(ok_next(v, r))) {
4552  unlink_large_chunk(m, v);
4553  if (rsize < MIN_CHUNK_SIZE)
4554  set_inuse_and_pinuse(m, v, (rsize + nb));
4555  else {
4556  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4557  set_size_and_pinuse_of_free_chunk(r, rsize);
4558  replace_dv(m, r, rsize);
4559  }
4560  return chunk2mem(v);
4561  }
4562  }
4563 
4564  CORRUPTION_ERROR_ACTION(m);
4565  return 0;
4566 }
4567 
4568 #if !ONLY_MSPACES
4569 
4570 void* dlmalloc(size_t bytes) {
4571  /*
4572  Basic algorithm:
4573  If a small request (< 256 bytes minus per-chunk overhead):
4574  1. If one exists, use a remainderless chunk in associated smallbin.
4575  (Remainderless means that there are too few excess bytes to
4576  represent as a chunk.)
4577  2. If it is big enough, use the dv chunk, which is normally the
4578  chunk adjacent to the one used for the most recent small request.
4579  3. If one exists, split the smallest available chunk in a bin,
4580  saving remainder in dv.
4581  4. If it is big enough, use the top chunk.
4582  5. If available, get memory from system and use it
4583  Otherwise, for a large request:
4584  1. Find the smallest available binned chunk that fits, and use it
4585  if it is better fitting than dv chunk, splitting if necessary.
4586  2. If better fitting than any binned chunk, use the dv chunk.
4587  3. If it is big enough, use the top chunk.
4588  4. If request size >= mmap threshold, try to directly mmap this chunk.
4589  5. If available, get memory from system and use it
4590 
4591  The ugly goto's here ensure that postaction occurs along all paths.
4592  */
4593 
4594 #if USE_LOCKS
4595  ensure_initialization(); /* initialize in sys_alloc if not using locks */
4596 #endif
4597 
4598  if (!PREACTION(gm)) {
4599  void* mem;
4600  size_t nb;
4601  if (bytes <= MAX_SMALL_REQUEST) {
4602  bindex_t idx;
4603  binmap_t smallbits;
4604  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4605  idx = small_index(nb);
4606  smallbits = gm->smallmap >> idx;
4607 
4608  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4609  mchunkptr b, p;
4610  idx += ~smallbits & 1; /* Uses next bin if idx empty */
4611  b = smallbin_at(gm, idx);
4612  p = b->fd;
4613  assert(chunksize(p) == small_index2size(idx));
4614  unlink_first_small_chunk(gm, b, p, idx);
4615  set_inuse_and_pinuse(gm, p, small_index2size(idx));
4616  mem = chunk2mem(p);
4617  check_malloced_chunk(gm, mem, nb);
4618  goto postaction;
4619  }
4620 
4621  else if (nb > gm->dvsize) {
4622  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4623  mchunkptr b, p, r;
4624  size_t rsize;
4625  bindex_t i;
4626  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4627  binmap_t leastbit = least_bit(leftbits);
4628  compute_bit2idx(leastbit, i);
4629  b = smallbin_at(gm, i);
4630  p = b->fd;
4631  assert(chunksize(p) == small_index2size(i));
4632  unlink_first_small_chunk(gm, b, p, i);
4633  rsize = small_index2size(i) - nb;
4634  /* Fit here cannot be remainderless if 4byte sizes */
4635  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4636  set_inuse_and_pinuse(gm, p, small_index2size(i));
4637  else {
4638  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4639  r = chunk_plus_offset(p, nb);
4640  set_size_and_pinuse_of_free_chunk(r, rsize);
4641  replace_dv(gm, r, rsize);
4642  }
4643  mem = chunk2mem(p);
4644  check_malloced_chunk(gm, mem, nb);
4645  goto postaction;
4646  }
4647 
4648  else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4649  check_malloced_chunk(gm, mem, nb);
4650  goto postaction;
4651  }
4652  }
4653  }
4654  else if (bytes >= MAX_REQUEST)
4655  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4656  else {
4657  nb = pad_request(bytes);
4658  if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4659  check_malloced_chunk(gm, mem, nb);
4660  goto postaction;
4661  }
4662  }
4663 
4664  if (nb <= gm->dvsize) {
4665  size_t rsize = gm->dvsize - nb;
4666  mchunkptr p = gm->dv;
4667  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4668  mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4669  gm->dvsize = rsize;
4670  set_size_and_pinuse_of_free_chunk(r, rsize);
4671  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4672  }
4673  else { /* exhaust dv */
4674  size_t dvs = gm->dvsize;
4675  gm->dvsize = 0;
4676  gm->dv = 0;
4677  set_inuse_and_pinuse(gm, p, dvs);
4678  }
4679  mem = chunk2mem(p);
4680  check_malloced_chunk(gm, mem, nb);
4681  goto postaction;
4682  }
4683 
4684  else if (nb < gm->topsize) { /* Split top */
4685  size_t rsize = gm->topsize -= nb;
4686  mchunkptr p = gm->top;
4687  mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4688  r->head = rsize | PINUSE_BIT;
4689  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4690  mem = chunk2mem(p);
4691  check_top_chunk(gm, gm->top);
4692  check_malloced_chunk(gm, mem, nb);
4693  goto postaction;
4694  }
4695 
4696  mem = sys_alloc(gm, nb);
4697 
4698  postaction:
4699  POSTACTION(gm);
4700  return mem;
4701  }
4702 
4703  return 0;
4704 }
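/*
  A minimal usage sketch of the exported entry points (illustrative only, not
  part of this file's build): with the configuration compiled here the
  functions carry the dl prefix, and error handling is entirely the caller's.

    void example_alloc(void)
    {
      char* buf = (char*) dlmalloc(128);   // small request: smallbin, dv, or top as described above
      if (buf != 0) {
        buf[0] = 0;
        dlfree(buf);                       // freed chunk is consolidated and binned
      }
    }
*/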
4705 
4706 /* ---------------------------- free --------------------------- */
4707 
4708 void dlfree(void* mem) {
4709  /*
4710  Consolidate freed chunks with preceding or succeeding bordering
4711  free chunks, if they exist, and then place in a bin. Intermixed
4712  with special cases for top, dv, mmapped chunks, and usage errors.
4713  */
4714 
4715  if (mem != 0) {
4716  mchunkptr p = mem2chunk(mem);
4717 #if FOOTERS
4718  mstate fm = get_mstate_for(p);
4719  if (!ok_magic(fm)) {
4720  USAGE_ERROR_ACTION(fm, p);
4721  return;
4722  }
4723 #else /* FOOTERS */
4724 #define fm gm
4725 #endif /* FOOTERS */
4726  if (!PREACTION(fm)) {
4727  check_inuse_chunk(fm, p);
4728  if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4729  size_t psize = chunksize(p);
4730  mchunkptr next = chunk_plus_offset(p, psize);
4731  if (!pinuse(p)) {
4732  size_t prevsize = p->prev_foot;
4733  if (is_mmapped(p)) {
4734  psize += prevsize + MMAP_FOOT_PAD;
4735  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4736  fm->footprint -= psize;
4737  goto postaction;
4738  }
4739  else {
4740  mchunkptr prev = chunk_minus_offset(p, prevsize);
4741  psize += prevsize;
4742  p = prev;
4743  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4744  if (p != fm->dv) {
4745  unlink_chunk(fm, p, prevsize);
4746  }
4747  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4748  fm->dvsize = psize;
4749  set_free_with_pinuse(p, psize, next);
4750  goto postaction;
4751  }
4752  }
4753  else
4754  goto erroraction;
4755  }
4756  }
4757 
4758  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4759  if (!cinuse(next)) { /* consolidate forward */
4760  if (next == fm->top) {
4761  size_t tsize = fm->topsize += psize;
4762  fm->top = p;
4763  p->head = tsize | PINUSE_BIT;
4764  if (p == fm->dv) {
4765  fm->dv = 0;
4766  fm->dvsize = 0;
4767  }
4768  if (should_trim(fm, tsize))
4769  sys_trim(fm, 0);
4770  goto postaction;
4771  }
4772  else if (next == fm->dv) {
4773  size_t dsize = fm->dvsize += psize;
4774  fm->dv = p;
4775  set_size_and_pinuse_of_free_chunk(p, dsize);
4776  goto postaction;
4777  }
4778  else {
4779  size_t nsize = chunksize(next);
4780  psize += nsize;
4781  unlink_chunk(fm, next, nsize);
4782  set_size_and_pinuse_of_free_chunk(p, psize);
4783  if (p == fm->dv) {
4784  fm->dvsize = psize;
4785  goto postaction;
4786  }
4787  }
4788  }
4789  else
4790  set_free_with_pinuse(p, psize, next);
4791 
4792  if (is_small(psize)) {
4793  insert_small_chunk(fm, p, psize);
4794  check_free_chunk(fm, p);
4795  }
4796  else {
4797  tchunkptr tp = (tchunkptr)p;
4798  insert_large_chunk(fm, tp, psize);
4799  check_free_chunk(fm, p);
4800  if (--fm->release_checks == 0)
4801  release_unused_segments(fm);
4802  }
4803  goto postaction;
4804  }
4805  }
4806  erroraction:
4807  USAGE_ERROR_ACTION(fm, p);
4808  postaction:
4809  POSTACTION(fm);
4810  }
4811  }
4812 #if !FOOTERS
4813 #undef fm
4814 #endif /* FOOTERS */
4815 }
4816 
4817 void* dlcalloc(size_t n_elements, size_t elem_size) {
4818  void* mem;
4819  size_t req = 0;
4820  if (n_elements != 0) {
4821  req = n_elements * elem_size;
4822  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4823  (req / n_elements != elem_size))
4824  req = MAX_SIZE_T; /* force downstream failure on overflow */
4825  }
4826  mem = dlmalloc(req);
4827  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4828  memset(mem, 0, req);
4829  return mem;
4830 }
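/*
  The overflow guard in dlcalloc above deserves a note: on any platform where
  size_t is at least 32 bits wide (the case this code targets), two operands
  that each fit in 16 bits cannot overflow when multiplied, so the division
  check can be skipped. A standalone sketch of the same technique (the helper
  name is illustrative, not part of this allocator):

    static int mul_would_overflow(size_t a, size_t b)
    {
      if (a == 0 || b == 0)
        return 0;                          // product is 0; cannot overflow
      if (((a | b) & ~(size_t)0xffff) == 0)
        return 0;                          // both operands < 2^16, so the product fits
      return (a * b) / a != b;             // unsigned multiply wraps on overflow; divide back to detect it
    }
*/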
4831 
4832 #endif /* !ONLY_MSPACES */
4833 
4834 /* ------------ Internal support for realloc, memalign, etc -------------- */
4835 
4836 /* Try to realloc; only in-place unless can_move true */
4837 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4838  int can_move) {
4839  mchunkptr newp = 0;
4840  size_t oldsize = chunksize(p);
4841  mchunkptr next = chunk_plus_offset(p, oldsize);
4842  if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4843  ok_next(p, next) && ok_pinuse(next))) {
4844  if (is_mmapped(p)) {
4845  newp = mmap_resize(m, p, nb, can_move);
4846  }
4847  else if (oldsize >= nb) { /* already big enough */
4848  size_t rsize = oldsize - nb;
4849  if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
4850  mchunkptr r = chunk_plus_offset(p, nb);
4851  set_inuse(m, p, nb);
4852  set_inuse(m, r, rsize);
4853  dispose_chunk(m, r, rsize);
4854  }
4855  newp = p;
4856  }
4857  else if (next == m->top) { /* extend into top */
4858  if (oldsize + m->topsize > nb) {
4859  size_t newsize = oldsize + m->topsize;
4860  size_t newtopsize = newsize - nb;
4861  mchunkptr newtop = chunk_plus_offset(p, nb);
4862  set_inuse(m, p, nb);
4863  newtop->head = newtopsize |PINUSE_BIT;
4864  m->top = newtop;
4865  m->topsize = newtopsize;
4866  newp = p;
4867  }
4868  }
4869  else if (next == m->dv) { /* extend into dv */
4870  size_t dvs = m->dvsize;
4871  if (oldsize + dvs >= nb) {
4872  size_t dsize = oldsize + dvs - nb;
4873  if (dsize >= MIN_CHUNK_SIZE) {
4874  mchunkptr r = chunk_plus_offset(p, nb);
4875  mchunkptr n = chunk_plus_offset(r, dsize);
4876  set_inuse(m, p, nb);
4877  set_size_and_pinuse_of_free_chunk(r, dsize);
4878  clear_pinuse(n);
4879  m->dvsize = dsize;
4880  m->dv = r;
4881  }
4882  else { /* exhaust dv */
4883  size_t newsize = oldsize + dvs;
4884  set_inuse(m, p, newsize);
4885  m->dvsize = 0;
4886  m->dv = 0;
4887  }
4888  newp = p;
4889  }
4890  }
4891  else if (!cinuse(next)) { /* extend into next free chunk */
4892  size_t nextsize = chunksize(next);
4893  if (oldsize + nextsize >= nb) {
4894  size_t rsize = oldsize + nextsize - nb;
4895  unlink_chunk(m, next, nextsize);
4896  if (rsize < MIN_CHUNK_SIZE) {
4897  size_t newsize = oldsize + nextsize;
4898  set_inuse(m, p, newsize);
4899  }
4900  else {
4901  mchunkptr r = chunk_plus_offset(p, nb);
4902  set_inuse(m, p, nb);
4903  set_inuse(m, r, rsize);
4904  dispose_chunk(m, r, rsize);
4905  }
4906  newp = p;
4907  }
4908  }
4909  }
4910  else {
4911  USAGE_ERROR_ACTION(m, chunk2mem(p));
4912  }
4913  return newp;
4914 }
4915 
4916 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4917  void* mem = 0;
4918  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4919  alignment = MIN_CHUNK_SIZE;
4920  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4921  size_t a = MALLOC_ALIGNMENT << 1;
4922  while (a < alignment) a <<= 1;
4923  alignment = a;
4924  }
4925  if (bytes >= MAX_REQUEST - alignment) {
4926  if (m != 0) { /* Test isn't needed but avoids compiler warning */
4927  MALLOC_FAILURE_ACTION;
4928  }
4929  }
4930  else {
4931  size_t nb = request2size(bytes);
4932  size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4933  mem = internal_malloc(m, req);
4934  if (mem != 0) {
4935  mchunkptr p = mem2chunk(mem);
4936  if (PREACTION(m))
4937  return 0;
4938  if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4939  /*
4940  Find an aligned spot inside chunk. Since we need to give
4941  back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4942  the first calculation places us at a spot with less than
4943  MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4944  We've allocated enough total room so that this is always
4945  possible.
4946  */
4947  char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4948  SIZE_T_ONE)) &
4949  -alignment));
4950  char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4951  br : br+alignment;
4952  mchunkptr newp = (mchunkptr)pos;
4953  size_t leadsize = pos - (char*)(p);
4954  size_t newsize = chunksize(p) - leadsize;
4955 
4956  if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4957  newp->prev_foot = p->prev_foot + leadsize;
4958  newp->head = newsize;
4959  }
4960  else { /* Otherwise, give back leader, use the rest */
4961  set_inuse(m, newp, newsize);
4962  set_inuse(m, p, leadsize);
4963  dispose_chunk(m, p, leadsize);
4964  }
4965  p = newp;
4966  }
4967 
4968  /* Give back spare room at the end */
4969  if (!is_mmapped(p)) {
4970  size_t size = chunksize(p);
4971  if (size > nb + MIN_CHUNK_SIZE) {
4972  size_t remainder_size = size - nb;
4973  mchunkptr remainder = chunk_plus_offset(p, nb);
4974  set_inuse(m, p, nb);
4975  set_inuse(m, remainder, remainder_size);
4976  dispose_chunk(m, remainder, remainder_size);
4977  }
4978  }
4979 
4980  mem = chunk2mem(p);
4981  assert (chunksize(p) >= nb);
4982  assert(((size_t)mem & (alignment - 1)) == 0);
4983  check_inuse_chunk(m, p);
4984  POSTACTION(m);
4985  }
4986  }
4987  return mem;
4988 }
4989 
4990 /*
4991  Common support for independent_X routines, handling
4992  all of the combinations that can result.
4993  The opts arg has:
4994  bit 0 set if all elements are same size (using sizes[0])
4995  bit 1 set if elements should be zeroed
4996 */
4997 static void** ialloc(mstate m,
4998  size_t n_elements,
4999  size_t* sizes,
5000  int opts,
5001  void* chunks[]) {
5002 
5003  size_t element_size; /* chunksize of each element, if all same */
5004  size_t contents_size; /* total size of elements */
5005  size_t array_size; /* request size of pointer array */
5006  void* mem; /* malloced aggregate space */
5007  mchunkptr p; /* corresponding chunk */
5008  size_t remainder_size; /* remaining bytes while splitting */
5009  void** marray; /* either "chunks" or malloced ptr array */
5010  mchunkptr array_chunk; /* chunk for malloced ptr array */
5011  flag_t was_enabled; /* to disable mmap */
5012  size_t size;
5013  size_t i;
5014 
5015  ensure_initialization();
5016  /* compute array length, if needed */
5017  if (chunks != 0) {
5018  if (n_elements == 0)
5019  return chunks; /* nothing to do */
5020  marray = chunks;
5021  array_size = 0;
5022  }
5023  else {
5024  /* if empty req, must still return chunk representing empty array */
5025  if (n_elements == 0)
5026  return (void**)internal_malloc(m, 0);
5027  marray = 0;
5028  array_size = request2size(n_elements * (sizeof(void*)));
5029  }
5030 
5031  /* compute total element size */
5032  if (opts & 0x1) { /* all-same-size */
5033  element_size = request2size(*sizes);
5034  contents_size = n_elements * element_size;
5035  }
5036  else { /* add up all the sizes */
5037  element_size = 0;
5038  contents_size = 0;
5039  for (i = 0; i != n_elements; ++i)
5040  contents_size += request2size(sizes[i]);
5041  }
5042 
5043  size = contents_size + array_size;
5044 
5045  /*
5046  Allocate the aggregate chunk. First disable direct-mmapping so
5047  malloc won't use it, since we would not be able to later
5048  free/realloc space internal to a segregated mmap region.
5049  */
5050  was_enabled = use_mmap(m);
5051  disable_mmap(m);
5052  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5053  if (was_enabled)
5054  enable_mmap(m);
5055  if (mem == 0)
5056  return 0;
5057 
5058  if (PREACTION(m)) return 0;
5059  p = mem2chunk(mem);
5060  remainder_size = chunksize(p);
5061 
5062  assert(!is_mmapped(p));
5063 
5064  if (opts & 0x2) { /* optionally clear the elements */
5065  memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5066  }
5067 
5068  /* If not provided, allocate the pointer array as final part of chunk */
5069  if (marray == 0) {
5070  size_t array_chunk_size;
5071  array_chunk = chunk_plus_offset(p, contents_size);
5072  array_chunk_size = remainder_size - contents_size;
5073  marray = (void**) (chunk2mem(array_chunk));
5074  set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5075  remainder_size = contents_size;
5076  }
5077 
5078  /* split out elements */
5079  for (i = 0; ; ++i) {
5080  marray[i] = chunk2mem(p);
5081  if (i != n_elements-1) {
5082  if (element_size != 0)
5083  size = element_size;
5084  else
5085  size = request2size(sizes[i]);
5086  remainder_size -= size;
5087  set_size_and_pinuse_of_inuse_chunk(m, p, size);
5088  p = chunk_plus_offset(p, size);
5089  }
5090  else { /* the final element absorbs any overallocation slop */
5091  set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5092  break;
5093  }
5094  }
5095 
5096 #if DEBUG
5097  if (marray != chunks) {
5098  /* final element must have exactly exhausted chunk */
5099  if (element_size != 0) {
5100  assert(remainder_size == element_size);
5101  }
5102  else {
5103  assert(remainder_size == request2size(sizes[i]));
5104  }
5105  check_inuse_chunk(m, mem2chunk(marray));
5106  }
5107  for (i = 0; i != n_elements; ++i)
5108  check_inuse_chunk(m, mem2chunk(marray[i]));
5109 
5110 #endif /* DEBUG */
5111 
5112  POSTACTION(m);
5113  return marray;
5114 }
5115 
5116 /* Try to free all pointers in the given array.
5117  Note: this could be made faster, by delaying consolidation,
5118  at the price of disabling some user integrity checks. We
5119  still optimize some consolidations by combining adjacent
5120  chunks before freeing, which will occur often if allocated
5121  with ialloc or the array is sorted.
5122 */
5123 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5124  size_t unfreed = 0;
5125  if (!PREACTION(m)) {
5126  void** a;
5127  void** fence = &(array[nelem]);
5128  for (a = array; a != fence; ++a) {
5129  void* mem = *a;
5130  if (mem != 0) {
5131  mchunkptr p = mem2chunk(mem);
5132  size_t psize = chunksize(p);
5133 #if FOOTERS
5134  if (get_mstate_for(p) != m) {
5135  ++unfreed;
5136  continue;
5137  }
5138 #endif
5139  check_inuse_chunk(m, p);
5140  *a = 0;
5141  if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5142  void ** b = a + 1; /* try to merge with next chunk */
5143  mchunkptr next = next_chunk(p);
5144  if (b != fence && *b == chunk2mem(next)) {
5145  size_t newsize = chunksize(next) + psize;
5146  set_inuse(m, p, newsize);
5147  *b = chunk2mem(p);
5148  }
5149  else
5150  dispose_chunk(m, p, psize);
5151  }
5152  else {
5153  CORRUPTION_ERROR_ACTION(m);
5154  break;
5155  }
5156  }
5157  }
5158  if (should_trim(m, m->topsize))
5159  sys_trim(m, 0);
5160  POSTACTION(m);
5161  }
5162  return unfreed;
5163 }
5164 
5165 /* Traversal */
5166 #if MALLOC_INSPECT_ALL
5167 static void internal_inspect_all(mstate m,
5168  void(*handler)(void *start,
5169  void *end,
5170  size_t used_bytes,
5171  void* callback_arg),
5172  void* arg) {
5173  if (is_initialized(m)) {
5174  mchunkptr top = m->top;
5175  msegmentptr s;
5176  for (s = &m->seg; s != 0; s = s->next) {
5177  mchunkptr q = align_as_chunk(s->base);
5178  while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5179  mchunkptr next = next_chunk(q);
5180  size_t sz = chunksize(q);
5181  size_t used;
5182  void* start;
5183  if (is_inuse(q)) {
5184  used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5185  start = chunk2mem(q);
5186  }
5187  else {
5188  used = 0;
5189  if (is_small(sz)) { /* offset by possible bookkeeping */
5190  start = (void*)((char*)q + sizeof(struct malloc_chunk));
5191  }
5192  else {
5193  start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5194  }
5195  }
5196  if (start < (void*)next) /* skip if all space is bookkeeping */
5197  handler(start, next, used, arg);
5198  if (q == top)
5199  break;
5200  q = next;
5201  }
5202  }
5203  }
5204 }
5205 #endif /* MALLOC_INSPECT_ALL */
5206 
5207 /* ------------------ Exported realloc, memalign, etc -------------------- */
5208 
5209 #if !ONLY_MSPACES
5210 
5211 void* dlrealloc(void* oldmem, size_t bytes) {
5212  void* mem = 0;
5213  if (oldmem == 0) {
5214  mem = dlmalloc(bytes);
5215  }
5216  else if (bytes >= MAX_REQUEST) {
5217  MALLOC_FAILURE_ACTION;
5218  }
5219 #ifdef REALLOC_ZERO_BYTES_FREES
5220  else if (bytes == 0) {
5221  dlfree(oldmem);
5222  }
5223 #endif /* REALLOC_ZERO_BYTES_FREES */
5224  else {
5225  size_t nb = request2size(bytes);
5226  mchunkptr oldp = mem2chunk(oldmem);
5227 #if ! FOOTERS
5228  mstate m = gm;
5229 #else /* FOOTERS */
5230  mstate m = get_mstate_for(oldp);
5231  if (!ok_magic(m)) {
5232  USAGE_ERROR_ACTION(m, oldmem);
5233  return 0;
5234  }
5235 #endif /* FOOTERS */
5236  if (!PREACTION(m)) {
5237  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5238  POSTACTION(m);
5239  if (newp != 0) {
5240  check_inuse_chunk(m, newp);
5241  mem = chunk2mem(newp);
5242  }
5243  else {
5244  mem = internal_malloc(m, bytes);
5245  if (mem != 0) {
5246  size_t oc = chunksize(oldp) - overhead_for(oldp);
5247  memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5248  internal_free(m, oldmem);
5249  }
5250  }
5251  }
5252  }
5253  return mem;
5254 }
5255 
5256 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5257  void* mem = 0;
5258  if (oldmem != 0) {
5259  if (bytes >= MAX_REQUEST) {
5260  MALLOC_FAILURE_ACTION;
5261  }
5262  else {
5263  size_t nb = request2size(bytes);
5264  mchunkptr oldp = mem2chunk(oldmem);
5265 #if ! FOOTERS
5266  mstate m = gm;
5267 #else /* FOOTERS */
5268  mstate m = get_mstate_for(oldp);
5269  if (!ok_magic(m)) {
5270  USAGE_ERROR_ACTION(m, oldmem);
5271  return 0;
5272  }
5273 #endif /* FOOTERS */
5274  if (!PREACTION(m)) {
5275  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5276  POSTACTION(m);
5277  if (newp == oldp) {
5278  check_inuse_chunk(m, newp);
5279  mem = oldmem;
5280  }
5281  }
5282  }
5283  }
5284  return mem;
5285 }
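/*
  Usage sketch for dlrealloc_in_place (illustrative only): unlike dlrealloc it
  never moves the block, so a null return simply means the chunk could not be
  resized where it sits and the original pointer remains valid and unchanged.

    void* p = dlmalloc(100);
    if (p != 0 && dlrealloc_in_place(p, 400) != 0) {
      // p now has at least 400 usable bytes at the same address
    } else {
      // p (if non-null) still holds its original contents
    }
*/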
5286 
5287 void* dlmemalign(size_t alignment, size_t bytes) {
5288  if (alignment <= MALLOC_ALIGNMENT) {
5289  return dlmalloc(bytes);
5290  }
5291  return internal_memalign(gm, alignment, bytes);
5292 }
5293 
5294 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5295  void* mem = 0;
5296  if (alignment == MALLOC_ALIGNMENT)
5297  mem = dlmalloc(bytes);
5298  else {
5299  size_t d = alignment / sizeof(void*);
5300  size_t r = alignment % sizeof(void*);
5301  if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5302  return EINVAL;
5303  else if (bytes <= MAX_REQUEST - alignment) {
5304  if (alignment < MIN_CHUNK_SIZE)
5305  alignment = MIN_CHUNK_SIZE;
5306  mem = internal_memalign(gm, alignment, bytes);
5307  }
5308  }
5309  if (mem == 0)
5310  return ENOMEM;
5311  else {
5312  *pp = mem;
5313  return 0;
5314  }
5315 }
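/*
  Usage sketch for the alignment entry points (illustrative only):
  dlposix_memalign follows the POSIX convention, returning 0 on success or
  EINVAL/ENOMEM and storing the pointer through its first argument, while
  dlmemalign returns the pointer directly.

    void* p = 0;
    if (dlposix_memalign(&p, 64, 1000) == 0) {
      // p is 64-byte aligned and holds at least 1000 bytes
      dlfree(p);
    }

    void* q = dlmemalign(4096, 8192);      // power-of-two alignment; others are rounded up internally
    if (q != 0)
      dlfree(q);
*/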
5316 
5317 void* dlvalloc(size_t bytes) {
5318  size_t pagesz;
5319  ensure_initialization();
5320  pagesz = mparams.page_size;
5321  return dlmemalign(pagesz, bytes);
5322 }
5323 
5324 void* dlpvalloc(size_t bytes) {
5325  size_t pagesz;
5326  ensure_initialization();
5327  pagesz = mparams.page_size;
5328  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
5329 }
5330 
5331 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5332  void* chunks[]) {
5333  size_t sz = elem_size; /* serves as 1-element array */
5334  return ialloc(gm, n_elements, &sz, 3, chunks);
5335 }
5336 
5337 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5338  void* chunks[]) {
5339  return ialloc(gm, n_elements, sizes, 0, chunks);
5340 }
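/*
  Usage sketch for dlindependent_comalloc (illustrative only; the sizes are
  made up): it returns an array of pointers to chunks of the requested sizes,
  carved out of one aggregate allocation, and each element may later be passed
  to dlfree on its own. With a null chunks argument the pointer array itself
  is allocated too and must also be freed.

    size_t sizes[3];
    void** parts;
    sizes[0] = 16; sizes[1] = 200; sizes[2] = 4000;
    parts = dlindependent_comalloc(3, sizes, 0);
    if (parts != 0) {
      // parts[0], parts[1], parts[2] are separately freeable blocks
      dlfree(parts[1]);
      dlfree(parts[0]);
      dlfree(parts[2]);
      dlfree(parts);                       // the pointer array was malloced on our behalf
    }
*/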
5341 
5342 size_t dlbulk_free(void* array[], size_t nelem) {
5343  return internal_bulk_free(gm, array, nelem);
5344 }
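/*
  Usage sketch for dlbulk_free (illustrative only): it frees every non-null
  pointer in the array, zeroes each slot it processes, and returns the number
  of entries it did not free.

    void* ptrs[4];
    size_t i, unfreed;
    for (i = 0; i != 4; ++i)
      ptrs[i] = dlmalloc(32 * (i + 1));
    unfreed = dlbulk_free(ptrs, 4);        // adjacent chunks are merged before disposal when possible
    (void)unfreed;
*/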
5345 
5346 #if MALLOC_INSPECT_ALL
5347 void dlmalloc_inspect_all(void(*handler)(void *start,
5348  void *end,
5349  size_t used_bytes,
5350  void* callback_arg),
5351  void* arg) {
5352  ensure_initialization();
5353  if (!PREACTION(gm)) {
5354  internal_inspect_all(gm, handler, arg);
5355  POSTACTION(gm);
5356  }
5357 }
5358 #endif /* MALLOC_INSPECT_ALL */
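/*
  Usage sketch for dlmalloc_inspect_all, meaningful only when this file is
  built with MALLOC_INSPECT_ALL (the handler below is illustrative): the
  callback runs once per traversed region, with used_bytes == 0 for free
  space.

    static void count_in_use(void* start, void* end, size_t used_bytes,
                             void* callback_arg)
    {
      size_t* total = (size_t*) callback_arg;
      (void)start; (void)end;
      if (used_bytes != 0)
        *total += used_bytes;              // accumulate payload bytes of in-use chunks
    }

    // elsewhere:
    size_t total = 0;
    dlmalloc_inspect_all(count_in_use, &total);
*/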
5359 
5360 int dlmalloc_trim(size_t pad) {
5361  int result = 0;
5362  ensure_initialization();
5363  if (!PREACTION(gm)) {
5364  result = sys_trim(gm, pad);
5365  POSTACTION(gm);
5366  }
5367  return result;
5368 }
5369 
5370 size_t dlmalloc_footprint(void) {
5371  return gm->footprint;
5372 }
5373 
5374 size_t dlmalloc_max_footprint(void) {
5375  return gm->max_footprint;
5376 }
5377 
5378 size_t dlmalloc_footprint_limit(void) {
5379  size_t maf = gm->footprint_limit;
5380  return maf == 0 ? MAX_SIZE_T : maf;
5381 }
5382 
5383 size_t dlmalloc_set_footprint_limit(size_t bytes) {
5384  size_t result; /* invert sense of 0 */
5385  if (bytes == 0)
5386  result = granularity_align(1); /* Use minimal size */
5387  else if (bytes == MAX_SIZE_T)
5388  result = 0; /* disable */
5389  else
5390  result = granularity_align(bytes);
5391  return gm->footprint_limit = result;
5392 }
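/*
  Usage sketch for the footprint and trim entry points (illustrative only):
  dlmalloc_footprint reports the bytes currently obtained from the system,
  dlmalloc_set_footprint_limit installs a granularity-aligned ceiling on that
  footprint (MAX_SIZE_T disables the ceiling), and dlmalloc_trim(0) asks the
  allocator to return unused top/segment space to the system where it can.

    size_t before = dlmalloc_footprint();
    dlmalloc_set_footprint_limit((size_t)64 * 1024 * 1024);   // ~64 MiB cap, rounded to granularity
    if (dlmalloc_trim(0) != 0) {
      // some memory was actually released back to the system
    }
    (void)before;
*/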
5393 
5394 #if !NO_MALLINFO
5395 struct mallinfo dlmallinfo(void) {
5396  return internal_mallinfo(gm);
5397 }
5398 #endif /* NO_MALLINFO */
5399 
5400 #if !NO_MALLOC_STATS
5401 void dlmalloc_stats() {
5402  internal_malloc_stats(gm);
5403 }
5404 #endif /* NO_MALLOC_STATS */
5405 
5406 int dlmallopt(int param_number, int value) {
5407  return change_mparam(param_number, value);
5408 }
5409 
5410 size_t dlmalloc_usable_size(void* mem) {
5411  if (mem != 0) {
5412  mchunkptr p = mem2chunk(mem);
5413  if (is_inuse(p))
5414  return chunksize(p) - overhead_for(p);
5415  }
5416  return 0;
5417 }
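/*
  Usage sketch for dlmalloc_usable_size (illustrative only): the value
  reported is the payload of the underlying chunk, so it is at least the
  requested size and may be slightly larger.

    void* p = dlmalloc(10);
    if (p != 0) {
      size_t n = dlmalloc_usable_size(p);  // n >= 10 for a successful allocation
      (void)n;
      dlfree(p);
    }
*/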
5418 
5419 #endif /* !ONLY_MSPACES */
5420 
5421 /* ----------------------------- user mspaces ---------------------------- */
5422 
5423 #if MSPACES
5424 
5425 static mstate init_user_mstate(char* tbase, size_t tsize) {
5426  size_t msize = pad_request(sizeof(struct malloc_state));
5427  mchunkptr mn;
5428  mchunkptr msp = align_as_chunk(tbase);
5429  mstate m = (mstate)(chunk2mem(msp));
5430  memset(m, 0, msize);
5431  (void)INITIAL_LOCK(&m->mutex);
5432  msp->head = (msize|INUSE_BITS);
5433  m->seg.base = m->least_addr = tbase;
5434  m->seg.size = m->footprint = m->max_footprint = tsize;
5435  m->magic = mparams.magic;
5436  m->release_checks = MAX_RELEASE_CHECK_RATE;
5437  m->mflags = mparams.default_mflags;
5438  m->extp = 0;
5439  m->exts = 0;
5440  disable_contiguous(m);
5441  init_bins(m);
5442  mn = next_chunk(mem2chunk(m));
5443  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5444  check_top_chunk(m, m->top);
5445  return m;
5446 }
5447 
5448 mspace create_mspace(size_t capacity, int locked) {
5449  mstate m = 0;
5450  size_t msize;
5451  ensure_initialization();
5452  msize = pad_request(sizeof(struct malloc_state));
5453  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5454  size_t rs = ((capacity == 0)? mparams.granularity :
5455  (capacity + TOP_FOOT_SIZE + msize));
5456  size_t tsize = granularity_align(rs);
5457  char* tbase = (char*)(CALL_MMAP(tsize));
5458  if (tbase != CMFAIL) {
5459  m = init_user_mstate(tbase, tsize);
5460  m->seg.sflags = USE_MMAP_BIT;
5461  set_lock(m, locked);
5462  }
5463  }
5464  return (mspace)m;
5465 }
5466 
5467 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5468  mstate m = 0;
5469  size_t msize;
5470  ensure_initialization();
5471  msize = pad_request(sizeof(struct malloc_state));
5472  if (capacity > msize + TOP_FOOT_SIZE &&
5473  capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5474  m = init_user_mstate((char*)base, capacity);
5475  m->seg.sflags = EXTERN_BIT;
5476  set_lock(m, locked);
5477  }
5478  return (mspace)m;
5479 }
5480 
5481 int mspace_track_large_chunks(mspace msp, int enable) {
5482  int ret = 0;
5483  mstate ms = (mstate)msp;
5484  if (!PREACTION(ms)) {
5485  if (!use_mmap(ms)) {
5486  ret = 1;
5487  }
5488  if (!enable) {
5489  enable_mmap(ms);
5490  } else {
5491  disable_mmap(ms);
5492  }
5493  POSTACTION(ms);
5494  }
5495  return ret;
5496 }
5497 
5498 size_t destroy_mspace(mspace msp) {
5499  size_t freed = 0;
5500  mstate ms = (mstate)msp;
5501  if (ok_magic(ms)) {
5502  msegmentptr sp = &ms->seg;
5503  (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5504  while (sp != 0) {
5505  char* base = sp->base;
5506  size_t size = sp->size;
5507  flag_t flag = sp->sflags;
5508  (void)base; /* placate people compiling -Wunused-variable */
5509  sp = sp->next;
5510  if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5511  CALL_MUNMAP(base, size) == 0)
5512  freed += size;
5513  }
5514  }
5515  else {
5516  USAGE_ERROR_ACTION(ms,ms);
5517  }
5518  return freed;
5519 }
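/*
  Usage sketch for mspaces, meaningful only when this file is built with
  MSPACES (illustrative only): each mspace is an independent heap, so its
  allocations never mix with the global malloc state and the whole space can
  be torn down at once.

    mspace arena = create_mspace(0, 1);    // 0: default initial capacity, 1: use locking
    if (arena != 0) {
      void* p = mspace_malloc(arena, 256);
      if (p != 0)
        mspace_free(arena, p);
      destroy_mspace(arena);               // unmaps the space's mmapped segments
    }
*/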
5520 
5521 /*
5522  mspace versions of routines are near-clones of the global
5523  versions. This is not so nice but better than the alternatives.
5524 */
5525 
5526 void* mspace_malloc(mspace msp, size_t bytes) {
5527  mstate ms = (mstate)msp;
5528  if (!ok_magic(ms)) {
5529  USAGE_ERROR_ACTION(ms,ms);
5530  return 0;
5531  }
5532  if (!PREACTION(ms)) {
5533  void* mem;
5534  size_t nb;
5535  if (bytes <= MAX_SMALL_REQUEST) {
5536  bindex_t idx;
5537  binmap_t smallbits;
5538  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5539  idx = small_index(nb);
5540  smallbits = ms->smallmap >> idx;
5541 
5542  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5543  mchunkptr b, p;
5544  idx += ~smallbits & 1; /* Uses next bin if idx empty */
5545  b = smallbin_at(ms, idx);
5546  p = b->fd;
5547  assert(chunksize(p) == small_index2size(idx));
5548  unlink_first_small_chunk(ms, b, p, idx);
5549  set_inuse_and_pinuse(ms, p, small_index2size(idx));
5550  mem = chunk2mem(p);
5551  check_malloced_chunk(ms, mem, nb);
5552  goto postaction;
5553  }
5554 
5555  else if (nb > ms->dvsize) {
5556  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5557  mchunkptr b, p, r;
5558  size_t rsize;
5559  bindex_t i;
5560  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5561  binmap_t leastbit = least_bit(leftbits);
5562  compute_bit2idx(leastbit, i);
5563  b = smallbin_at(ms, i);
5564  p = b->fd;
5565  assert(chunksize(p) == small_index2size(i));
5566  unlink_first_small_chunk(ms, b, p, i);
5567  rsize = small_index2size(i) - nb;
5568  /* Fit here cannot be remainderless if 4byte sizes */
5569  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5570  set_inuse_and_pinuse(ms, p, small_index2size(i));
5571  else {
5572  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5573  r = chunk_plus_offset(p, nb);
5574  set_size_and_pinuse_of_free_chunk(r, rsize);
5575  replace_dv(ms, r, rsize);
5576  }
5577  mem = chunk2mem(p);
5578  check_malloced_chunk(ms, mem, nb);
5579  goto postaction;
5580  }
5581 
5582  else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5583  check_malloced_chunk(ms, mem, nb);
5584  goto postaction;
5585  }
5586  }
5587  }
5588  else if (bytes >= MAX_REQUEST)
5589  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5590  else {
5591  nb = pad_request(bytes);
5592  if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5593  check_malloced_chunk(ms, mem, nb);
5594  goto postaction;
5595  }
5596  }
5597 
5598  if (nb <= ms->dvsize) {
5599  size_t rsize = ms->dvsize - nb;
5600  mchunkptr p = ms->dv;
5601  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5602  mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5603  ms->dvsize = rsize;
5604  set_size_and_pinuse_of_free_chunk(r, rsize);
5605  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5606  }
5607  else { /* exhaust dv */
5608  size_t dvs = ms->dvsize;
5609  ms->dvsize = 0;
5610  ms->dv = 0;
5611  set_inuse_and_pinuse(ms, p, dvs);
5612  }
5613  mem = chunk2mem(p);
5614  check_malloced_chunk(ms, mem, nb);
5615  goto postaction;
5616  }
5617 
5618  else if (nb < ms->topsize) { /* Split top */
5619  size_t rsize = ms->topsize -= nb;
5620  mchunkptr p = ms->top;
5621  mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5622  r->head = rsize | PINUSE_BIT;
5623  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5624  mem = chunk2mem(p);
5625  check_top_chunk(ms, ms->top);
5626  check_malloced_chunk(ms, mem, nb);
5627  goto postaction;
5628  }
5629 
5630  mem = sys_alloc(ms, nb);
5631 
5632  postaction:
5633  POSTACTION(ms);
5634  return mem;
5635  }
5636 
5637  return 0;
5638 }
5639 
5640 void mspace_free(mspace msp, void* mem) {
5641  if (mem != 0) {
5642  mchunkptr p = mem2chunk(mem);
5643 #if FOOTERS
5644  mstate fm = get_mstate_for(p);
5645  (void)msp; /* placate people compiling -Wunused */
5646 #else /* FOOTERS */
5647  mstate fm = (mstate)msp;
5648 #endif /* FOOTERS */
5649  if (!ok_magic(fm)) {
5650  USAGE_ERROR_ACTION(fm, p);
5651  return;
5652  }
5653  if (!PREACTION(fm)) {
5654  check_inuse_chunk(fm, p);
5655  if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5656  size_t psize = chunksize(p);
5657  mchunkptr next = chunk_plus_offset(p, psize);
5658  if (!pinuse(p)) {
5659  size_t prevsize = p->prev_foot;
5660  if (is_mmapped(p)) {
5661  psize += prevsize + MMAP_FOOT_PAD;
5662  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5663  fm->footprint -= psize;
5664  goto postaction;
5665  }
5666  else {
5667  mchunkptr prev = chunk_minus_offset(p, prevsize);
5668  psize += prevsize;
5669  p = prev;
5670  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5671  if (p != fm->dv) {
5672  unlink_chunk(fm, p, prevsize);
5673  }
5674  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5675  fm->dvsize = psize;
5676  set_free_with_pinuse(p, psize, next);
5677  goto postaction;
5678  }
5679  }
5680  else
5681  goto erroraction;
5682  }
5683  }
5684 
5685  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5686  if (!cinuse(next)) { /* consolidate forward */
5687  if (next == fm->top) {
5688  size_t tsize = fm->topsize += psize;
5689  fm->top = p;
5690  p->head = tsize | PINUSE_BIT;
5691  if (p == fm->dv) {
5692  fm->dv = 0;
5693  fm->dvsize = 0;
5694  }
5695  if (should_trim(fm, tsize))
5696  sys_trim(fm, 0);
5697  goto postaction;
5698  }
5699  else if (next == fm->dv) {
5700  size_t dsize = fm->dvsize += psize;
5701  fm->dv = p;
5702  set_size_and_pinuse_of_free_chunk(p, dsize);
5703  goto postaction;
5704  }
5705  else {
5706  size_t nsize = chunksize(next);
5707  psize += nsize;
5708  unlink_chunk(fm, next, nsize);
5709  set_size_and_pinuse_of_free_chunk(p, psize);
5710  if (p == fm->dv) {
5711  fm->dvsize = psize;
5712  goto postaction;
5713  }
5714  }
5715  }
5716  else
5717  set_free_with_pinuse(p, psize, next);
5718 
5719  if (is_small(psize)) {
5720  insert_small_chunk(fm, p, psize);
5721  check_free_chunk(fm, p);
5722  }
5723  else {
5724  tchunkptr tp = (tchunkptr)p;
5725  insert_large_chunk(fm, tp, psize);
5726  check_free_chunk(fm, p);
5727  if (--fm->release_checks == 0)
5728  release_unused_segments(fm);
5729  }
5730  goto postaction;
5731  }
5732  }
5733  erroraction:
5734  USAGE_ERROR_ACTION(fm, p);
5735  postaction:
5736  POSTACTION(fm);
5737  }
5738  }
5739 }
5740 
5741 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5742  void* mem;
5743  size_t req = 0;
5744  mstate ms = (mstate)msp;
5745  if (!ok_magic(ms)) {
5746  USAGE_ERROR_ACTION(ms,ms);
5747  return 0;
5748  }
5749  if (n_elements != 0) {
5750  req = n_elements * elem_size;
5751  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5752  (req / n_elements != elem_size))
5753  req = MAX_SIZE_T; /* force downstream failure on overflow */
5754  }
5755  mem = internal_malloc(ms, req);
5756  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5757  memset(mem, 0, req);
5758  return mem;
5759 }
5760 
5761 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5762  void* mem = 0;
5763  if (oldmem == 0) {
5764  mem = mspace_malloc(msp, bytes);
5765  }
5766  else if (bytes >= MAX_REQUEST) {
5767  MALLOC_FAILURE_ACTION;
5768  }
5769 #ifdef REALLOC_ZERO_BYTES_FREES
5770  else if (bytes == 0) {
5771  mspace_free(msp, oldmem);
5772  }
5773 #endif /* REALLOC_ZERO_BYTES_FREES */
5774  else {
5775  size_t nb = request2size(bytes);
5776  mchunkptr oldp = mem2chunk(oldmem);
5777 #if ! FOOTERS
5778  mstate m = (mstate)msp;
5779 #else /* FOOTERS */
5780  mstate m = get_mstate_for(oldp);
5781  if (!ok_magic(m)) {
5782  USAGE_ERROR_ACTION(m, oldmem);
5783  return 0;
5784  }
5785 #endif /* FOOTERS */
5786  if (!PREACTION(m)) {
5787  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5788  POSTACTION(m);
5789  if (newp != 0) {
5790  check_inuse_chunk(m, newp);
5791  mem = chunk2mem(newp);
5792  }
5793  else {
5794  mem = mspace_malloc(m, bytes);
5795  if (mem != 0) {
5796  size_t oc = chunksize(oldp) - overhead_for(oldp);
5797  memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5798  mspace_free(m, oldmem);
5799  }
5800  }
5801  }
5802  }
5803  return mem;
5804 }
5805 
5806 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5807  void* mem = 0;
5808  if (oldmem != 0) {
5809  if (bytes >= MAX_REQUEST) {
5810  MALLOC_FAILURE_ACTION;
5811  }
5812  else {
5813  size_t nb = request2size(bytes);
5814  mchunkptr oldp = mem2chunk(oldmem);
5815 #if ! FOOTERS
5816  mstate m = (mstate)msp;
5817 #else /* FOOTERS */
5818  mstate m = get_mstate_for(oldp);
5819  (void)msp; /* placate people compiling -Wunused */
5820  if (!ok_magic(m)) {
5821  USAGE_ERROR_ACTION(m, oldmem);
5822  return 0;
5823  }
5824 #endif /* FOOTERS */
5825  if (!PREACTION(m)) {
5826  mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5827  POSTACTION(m);
5828  if (newp == oldp) {
5829  check_inuse_chunk(m, newp);
5830  mem = oldmem;
5831  }
5832  }
5833  }
5834  }
5835  return mem;
5836 }
5837 
5838 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5839  mstate ms = (mstate)msp;
5840  if (!ok_magic(ms)) {
5841  USAGE_ERROR_ACTION(ms,ms);
5842  return 0;
5843  }
5844  if (alignment <= MALLOC_ALIGNMENT)
5845  return mspace_malloc(msp, bytes);
5846  return internal_memalign(ms, alignment, bytes);
5847 }
5848 
5849 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5850  size_t elem_size, void* chunks[]) {
5851  size_t sz = elem_size; /* serves as 1-element array */
5852  mstate ms = (mstate)msp;
5853  if (!ok_magic(ms)) {
5854  USAGE_ERROR_ACTION(ms,ms);
5855  return 0;
5856  }
5857  return ialloc(ms, n_elements, &sz, 3, chunks);
5858 }
5859 
5860 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5861  size_t sizes[], void* chunks[]) {
5862  mstate ms = (mstate)msp;
5863  if (!ok_magic(ms)) {
5864  USAGE_ERROR_ACTION(ms,ms);
5865  return 0;
5866  }
5867  return ialloc(ms, n_elements, sizes, 0, chunks);
5868 }
5869 
5870 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5871  return internal_bulk_free((mstate)msp, array, nelem);
5872 }
5873 
5874 #if MALLOC_INSPECT_ALL
5875 void mspace_inspect_all(mspace msp,
5876  void(*handler)(void *start,
5877  void *end,
5878  size_t used_bytes,
5879  void* callback_arg),
5880  void* arg) {
5881  mstate ms = (mstate)msp;
5882  if (ok_magic(ms)) {
5883  if (!PREACTION(ms)) {
5884  internal_inspect_all(ms, handler, arg);
5885  POSTACTION(ms);
5886  }
5887  }
5888  else {
5889  USAGE_ERROR_ACTION(ms,ms);
5890  }
5891 }
5892 #endif /* MALLOC_INSPECT_ALL */
5893 
5894 int mspace_trim(mspace msp, size_t pad) {
5895  int result = 0;
5896  mstate ms = (mstate)msp;
5897  if (ok_magic(ms)) {
5898  if (!PREACTION(ms)) {
5899  result = sys_trim(ms, pad);
5900  POSTACTION(ms);
5901  }
5902  }
5903  else {
5904  USAGE_ERROR_ACTION(ms,ms);
5905  }
5906  return result;
5907 }
5908 
5909 #if !NO_MALLOC_STATS
5910 void mspace_malloc_stats(mspace msp) {
5911  mstate ms = (mstate)msp;
5912  if (ok_magic(ms)) {
5913  internal_malloc_stats(ms);
5914  }
5915  else {
5916  USAGE_ERROR_ACTION(ms,ms);
5917  }
5918 }
5919 #endif /* NO_MALLOC_STATS */
5920 
5921 size_t mspace_footprint(mspace msp) {
5922  size_t result = 0;
5923  mstate ms = (mstate)msp;
5924  if (ok_magic(ms)) {
5925  result = ms->footprint;
5926  }
5927  else {
5928  USAGE_ERROR_ACTION(ms,ms);
5929  }
5930  return result;
5931 }
5932 
5933 size_t mspace_max_footprint(mspace msp) {
5934  size_t result = 0;
5935  mstate ms = (mstate)msp;
5936  if (ok_magic(ms)) {
5937  result = ms->max_footprint;
5938  }
5939  else {
5940  USAGE_ERROR_ACTION(ms,ms);
5941  }
5942  return result;
5943 }
5944 
5945 size_t mspace_footprint_limit(mspace msp) {
5946  size_t result = 0;
5947  mstate ms = (mstate)msp;
5948  if (ok_magic(ms)) {
5949  size_t maf = ms->footprint_limit;
5950  result = (maf == 0) ? MAX_SIZE_T : maf;
5951  }
5952  else {
5953  USAGE_ERROR_ACTION(ms,ms);
5954  }
5955  return result;
5956 }
5957 
5958 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5959  size_t result = 0;
5960  mstate ms = (mstate)msp;
5961  if (ok_magic(ms)) {
5962  if (bytes == 0)
5963  result = granularity_align(1); /* Use minimal size */
5964  else if (bytes == MAX_SIZE_T)
5965  result = 0; /* disable */
5966  else
5967  result = granularity_align(bytes);
5968  ms->footprint_limit = result;
5969  }
5970  else {
5971  USAGE_ERROR_ACTION(ms,ms);
5972  }
5973  return result;
5974 }
5975 
5976 #if !NO_MALLINFO
5977 struct mallinfo mspace_mallinfo(mspace msp) {
5978  mstate ms = (mstate)msp;
5979  if (!ok_magic(ms)) {
5980  USAGE_ERROR_ACTION(ms,ms);
5981  }
5982  return internal_mallinfo(ms);
5983 }
5984 #endif /* NO_MALLINFO */
5985 
5986 size_t mspace_usable_size(const void* mem) {
5987  if (mem != 0) {
5988  mchunkptr p = mem2chunk(mem);
5989  if (is_inuse(p))
5990  return chunksize(p) - overhead_for(p);
5991  }
5992  return 0;
5993 }
5994 
5995 int mspace_mallopt(int param_number, int value) {
5996  return change_mparam(param_number, value);
5997 }
5998 
5999 #endif /* MSPACES */
6000 
6001 
6002 /* -------------------- Alternative MORECORE functions ------------------- */
6003 
6004 /*
6005  Guidelines for creating a custom version of MORECORE:
6006 
6007  * For best performance, MORECORE should allocate in multiples of pagesize.
6008  * MORECORE may allocate more memory than requested. (Or even less,
6009  but this will usually result in a malloc failure.)
6010  * MORECORE must not allocate memory when given argument zero, but
6011  instead return one past the end address of memory from previous
6012  nonzero call.
6013  * For best performance, consecutive calls to MORECORE with positive
6014  arguments should return increasing addresses, indicating that
6015  space has been contiguously extended.
6016  * Even though consecutive calls to MORECORE need not return contiguous
6017  addresses, it must be OK for malloc'ed chunks to span multiple
6018  regions in those cases where they do happen to be contiguous.
6019  * MORECORE need not handle negative arguments -- it may instead
6020  just return MFAIL when given negative arguments.
6021  Negative arguments are always multiples of pagesize. MORECORE
6022  must not misinterpret negative args as large positive unsigned
6023  args. You can suppress all such calls from even occurring by defining
6024  MORECORE_CANNOT_TRIM.
6025 
6026  As an example alternative MORECORE, here is a custom allocator
6027  kindly contributed for pre-OSX macOS. It uses virtually but not
6028  necessarily physically contiguous non-paged memory (locked in,
6029  present and won't get swapped out). You can use it by uncommenting
6030  this section, adding some #includes, and setting up the appropriate
6031  defines above:
6032 
6033  #define MORECORE osMoreCore
6034 
6035  There is also a shutdown routine that should somehow be called for
6036  cleanup upon program exit.
6037 
6038  #define MAX_POOL_ENTRIES 100
6039  #define MINIMUM_MORECORE_SIZE (64 * 1024U)
6040  static int next_os_pool;
6041  void *our_os_pools[MAX_POOL_ENTRIES];
6042 
6043  void *osMoreCore(int size)
6044  {
6045  void *ptr = 0;
6046  static void *sbrk_top = 0;
6047 
6048  if (size > 0)
6049  {
6050  if (size < MINIMUM_MORECORE_SIZE)
6051  size = MINIMUM_MORECORE_SIZE;
6052  if (CurrentExecutionLevel() == kTaskLevel)
6053  ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6054  if (ptr == 0)
6055  {
6056  return (void *) MFAIL;
6057  }
6058  // save ptrs so they can be freed during cleanup
6059  our_os_pools[next_os_pool] = ptr;
6060  next_os_pool++;
6061  ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6062  sbrk_top = (char *) ptr + size;
6063  return ptr;
6064  }
6065  else if (size < 0)
6066  {
6067  // we don't currently support shrink behavior
6068  return (void *) MFAIL;
6069  }
6070  else
6071  {
6072  return sbrk_top;
6073  }
6074  }
6075 
6076  // cleanup any allocated memory pools
6077  // called as last thing before shutting down driver
6078 
6079  void osCleanupMem(void)
6080  {
6081  void **ptr;
6082 
6083  for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6084  if (*ptr)
6085  {
6086  PoolDeallocate(*ptr);
6087  *ptr = 0;
6088  }
6089  }
6090 
6091 */
6092 
6093 
6094 /* -----------------------------------------------------------------------
6095 History:
6096  v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
6097  * fix bad comparison in dlposix_memalign
6098  * don't reuse adjusted asize in sys_alloc
6099  * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6100  * reduce compiler warnings -- thanks to all who reported/suggested these
6101 
6102  v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
6103  * Always perform unlink checks unless INSECURE
6104  * Add posix_memalign.
6105  * Improve realloc to expand in more cases; expose realloc_in_place.
6106  Thanks to Peter Buhr for the suggestion.
6107  * Add footprint_limit, inspect_all, bulk_free. Thanks
6108  to Barry Hayes and others for the suggestions.
6109  * Internal refactorings to avoid calls while holding locks
6110  * Use non-reentrant locks by default. Thanks to Roland McGrath
6111  for the suggestion.
6112  * Small fixes to mspace_destroy, reset_on_error.
6113  * Various configuration extensions/changes. Thanks
6114  to all who contributed these.
6115 
6116  V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6117  * Update Creative Commons URL
6118 
6119  V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)
6120  * Use zeros instead of prev foot for is_mmapped
6121  * Add mspace_track_large_chunks; thanks to Jean Brouwers
6122  * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6123  * Fix insufficient sys_alloc padding when using 16byte alignment
6124  * Fix bad error check in mspace_footprint
6125  * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6126  * Reentrant spin locks; thanks to Earl Chew and others
6127  * Win32 improvements; thanks to Niall Douglas and Earl Chew
6128  * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6129  * Extension hook in malloc_state
6130  * Various small adjustments to reduce warnings on some compilers
6131  * Various configuration extensions/changes for more platforms. Thanks
6132  to all who contributed these.
6133 
6134  V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
6135  * Add max_footprint functions
6136  * Ensure all appropriate literals are size_t
6137  * Fix conditional compilation problem for some #define settings
6138  * Avoid concatenating segments with the one provided
6139  in create_mspace_with_base
6140  * Rename some variables to avoid compiler shadowing warnings
6141  * Use explicit lock initialization.
6142  * Better handling of sbrk interference.
6143  * Simplify and fix segment insertion, trimming and mspace_destroy
6144  * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6145  * Thanks especially to Dennis Flanagan for help on these.
6146 
6147  V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
6148  * Fix memalign brace error.
6149 
6150  V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
6151  * Fix improper #endif nesting in C++
6152  * Add explicit casts needed for C++
6153 
6154  V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
6155  * Use trees for large bins
6156  * Support mspaces
6157  * Use segments to unify sbrk-based and mmap-based system allocation,
6158  removing need for emulation on most platforms without sbrk.
6159  * Default safety checks
6160  * Optional footer checks. Thanks to William Robertson for the idea.
6161  * Internal code refactoring
6162  * Incorporate suggestions and platform-specific changes.
6163  Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6164  Aaron Bachmann, Emery Berger, and others.
6165  * Speed up non-fastbin processing enough to remove fastbins.
6166  * Remove useless cfree() to avoid conflicts with other apps.
6167  * Remove internal memcpy, memset. Compilers handle builtins better.
6168  * Remove some options that no one ever used and rename others.
6169 
6170  V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
6171  * Fix malloc_state bitmap array misdeclaration
6172 
6173  V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
6174  * Allow tuning of FIRST_SORTED_BIN_SIZE
6175  * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6176  * Better detection and support for non-contiguousness of MORECORE.
6177  Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6178  * Bypass most of malloc if no frees. Thanks To Emery Berger.
6179  * Fix freeing of old top non-contiguous chunk in sysmalloc.
6180  * Raised default trim and map thresholds to 256K.
6181  * Fix mmap-related #defines. Thanks to Lubos Lunak.
6182  * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6183  * Branch-free bin calculation
6184  * Default trim and mmap thresholds now 256K.
6185 
6186  V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
6187  * Introduce independent_comalloc and independent_calloc.
6188  Thanks to Michael Pachos for motivation and help.
6189  * Make optional .h file available
6190  * Allow > 2GB requests on 32bit systems.
6191  * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
6192  Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6193  and Anonymous.
6194  * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6195  helping test this.)
6196  * memalign: check alignment arg
6197  * realloc: don't try to shift chunks backwards, since this
6198  leads to more fragmentation in some programs and doesn't
6199  seem to help in any others.
6200  * Collect all cases in malloc requiring system memory into sysmalloc
6201  * Use mmap as backup to sbrk
6202  * Place all internal state in malloc_state
6203  * Introduce fastbins (although similar to 2.5.1)
6204  * Many minor tunings and cosmetic improvements
6205  * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6206  * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6207  Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
6208  * Include errno.h to support default failure action.
6209 
6210  V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
6211  * return null for negative arguments
6212  * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6213  * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6214  (e.g. WIN32 platforms)
6215  * Cleanup header file inclusion for WIN32 platforms
6216  * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6217  * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6218  memory allocation routines
6219  * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6220  * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6221  usage of 'assert' in non-WIN32 code
6222  * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6223  avoid infinite loop
6224  * Always call 'fREe()' rather than 'free()'
6225 
6226  V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
6227  * Fixed ordering problem with boundary-stamping
6228 
6229  V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
6230  * Added pvalloc, as recommended by H.J. Liu
6231  * Added 64bit pointer support mainly from Wolfram Gloger
6232  * Added anonymously donated WIN32 sbrk emulation
6233  * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6234  * malloc_extend_top: fix mask error that caused wastage after
6235  foreign sbrks
6236  * Add linux mremap support code from HJ Liu
6237 
6238  V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
6239  * Integrated most documentation with the code.
6240  * Add support for mmap, with help from
6241  Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6242  * Use last_remainder in more cases.
6243  * Pack bins using idea from colin@nyx10.cs.du.edu
6244  * Use ordered bins instead of best-fit threshold
6245  * Eliminate block-local decls to simplify tracing and debugging.
6246  * Support another case of realloc via move into top
6247  * Fix error occurring when initial sbrk_base not word-aligned.
6248  * Rely on page size for units instead of SBRK_UNIT to
6249  avoid surprises about sbrk alignment conventions.
6250  * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6251  (raymond@es.ele.tue.nl) for the suggestion.
6252  * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6253  * More precautions for cases where other routines call sbrk,
6254  courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
6255  * Added macros etc., allowing use in linux libc from
6256  H.J. Lu (hjl@gnu.ai.mit.edu)
6257  * Inverted this history list
6258 
6259  V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
6260  * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6261  * Removed all preallocation code since under current scheme
6262  the work required to undo bad preallocations exceeds
6263  the work saved in good cases for most test programs.
6264  * No longer use return list or unconsolidated bins since
6265  no scheme using them consistently outperforms those that don't
6266  given above changes.
6267  * Use best fit for very large chunks to prevent some worst-cases.
6268  * Added some support for debugging
6269 
6270  V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
6271  * Removed footers when chunks are in use. Thanks to
6272  Paul Wilson (wilson@cs.texas.edu) for the suggestion.
6273 
6274  V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
6275  * Added malloc_trim, with help from Wolfram Gloger
6276  (wmglo@Dent.MED.Uni-Muenchen.DE).
6277 
6278  V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
6279 
6280  V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
6281  * realloc: try to expand in both directions
6282  * malloc: swap order of clean-bin strategy;
6283  * realloc: only conditionally expand backwards
6284  * Try not to scavenge used bins
6285  * Use bin counts as a guide to preallocation
6286  * Occasionally bin return list chunks in first scan
6287  * Add a few optimizations from colin@nyx10.cs.du.edu
6288 
6289  V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
6290  * faster bin computation & slightly different binning
6291  * merged all consolidations to one part of malloc proper
6292  (eliminating old malloc_find_space & malloc_clean_bin)
6293  * Scan 2 returns chunks (not just 1)
6294  * Propagate failure in realloc if malloc returns 0
6295  * Add stuff to allow compilation on non-ANSI compilers
6296  from kpv@research.att.com
6297 
6298  V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
6299  * removed potential for odd address access in prev_chunk
6300  * removed dependency on getpagesize.h
6301  * misc cosmetics and a bit more internal documentation
6302  * anticosmetics: mangled names in macros to evade debugger strangeness
6303  * tested on sparc, hp-700, dec-mips, rs6000
6304  with gcc & native cc (hp, dec only) allowing
6305  Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6306 
6307  Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
6308  * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6309  structure of old version, but most details differ.)
6310 
6311 */