\documentclass{article}
\begin{document}
\title{A Memory Allocator}
\author{Cameron Weinfurt, Thomas Johnson}
\maketitle
\section{Introduction}
When implementing abstract data structures, it often becomes necessary to have an array whose size is
not known at compile time. This is problematic in languages like C and C++, where the size of an array
must be known at compile time in order to calculate how big the stack frame must be. The solution is to
forgo placing the array on the stack and to not dedicate any space for it at compile time. Instead, the
array is created at runtime, once its size can be determined. This is known as dynamic memory
allocation. A special region of a process's memory, called the heap, is dedicated to these kinds of
allocations, and a memory allocator keeps track of this space's usage. How the allocator handles memory
that is no longer in use, and situations in which a region must be resized, depends on how it is
implemented. In addition, the memory allocator has to balance how much memory it uses for record
keeping against the number of CPU cycles required to manage the memory region it has been given.

In ANSI C, the standard library provides a memory allocator for general purpose use in programs.
Dynamic memory allocation is performed through the {\tt malloc()} library call, in which the caller
passes the desired size of the allocation and receives a pointer to the allocated memory region. It
must be noted that {\tt malloc()} does not initialize the returned memory region with any value, so the
caller must initialize it itself. However, the standard library also provides the {\tt calloc()}
library call, in which every byte of the allocated memory is initialized to zero. Resizing of an
allocation is performed using the {\tt realloc()} and {\tt reallocarray()} library calls, which will
either grow the allocation in place or move it to a space in which the new size can fit. The program
must signify to the allocator that a memory region is to be marked free through the {\tt free()}
library call. Should it fail to notify the allocator that an allocation is no longer in use before all
of its references go out of scope, it becomes impossible to access the underlying data or reuse the
space it took up: a memory leak.
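
The following minimal example illustrates these standard library calls, with only basic error
handling:
\begin{verbatim}
#include <stdlib.h>

int main(void) {
    /* malloc() returns uninitialized memory for 10 integers. */
    int *values = malloc(10 * sizeof *values);
    if (values == NULL)
        return 1;

    /* realloc() grows the allocation, possibly moving it elsewhere. */
    int *grown = realloc(values, 20 * sizeof *grown);
    if (grown == NULL) {
        free(values);
        return 1;
    }
    values = grown;

    /* calloc() returns memory with every byte set to zero. */
    int *zeroed = calloc(20, sizeof *zeroed);

    /* Every allocation must be released exactly once via free(). */
    free(values);
    free(zeroed);
    return 0;
}
\end{verbatim}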

Other languages' standard libraries provide memory allocators or additional data structures for
dynamic allocation using other techniques. Rather than returning a pointer as a reference to the
allocation, languages like C++ and Rust provide smart references that determine when the underlying
memory allocation can be freed by detecting when all of its references have gone out of scope. Runtime
languages, such as those running on the Java Virtual Machine or Microsoft's Common Language
Infrastructure, also track references automatically, but wait to free unused memory regions until a
scheduled batch process known as garbage collection occurs. Higher-level languages like Python and Lua
completely abstract away the fact that dynamic allocation is occurring by only allowing abstract data
structures to be created by the programmer, allowing their implementations to handle the underlying
allocations transparently.

In practice, abstract data structures do not all allocate in the same way. It is possible to categorize
them into two groups based on how they allocate. The first group consists of data structures like
vectors and smart strings, where a single allocation is made at creation and then resized as data is
added or removed. The second group consists of data structures like linked lists and trees, where many
allocations and frees are requested, but each allocation is fixed in size. Rather than handling all
dynamic allocation requests with one method, abstract data structures could instead give hints to the
allocator about what kind of allocations they will be making. Such an allocator could then be
constructed to take advantage of these hints and optimize accordingly. This is what this paper aims to
demonstrate.
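
As a sketch of this idea, a data structure might pass an allocation hint alongside the requested size.
The names below ({\tt alloc\_hint}, {\tt hinted\_alloc()}, and so on) are illustrative assumptions, not
the interface of the allocator described later:
\begin{verbatim}
#include <stddef.h>

/* Hypothetical hint categories matching the two groups above. */
typedef enum {
    HINT_RESIZABLE,  /* one allocation, grown or shrunk over time  */
    HINT_FIXED_SIZE  /* many allocations, each of a fixed size     */
} alloc_hint;

void *hinted_alloc(size_t size, alloc_hint hint);
void  hinted_free(void *ptr);

/* A linked list would request its nodes with HINT_FIXED_SIZE,
 * letting the allocator route them to a cheaper strategy. */
struct node {
    int          value;
    struct node *next;
};

struct node *node_new(int value) {
    struct node *n = hinted_alloc(sizeof *n, HINT_FIXED_SIZE);
    if (n != NULL) {
        n->value = value;
        n->next  = NULL;
    }
    return n;
}
\end{verbatim}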
\section{Implementation}
\subsection{The Tree Allocator}
Internally, the allocator is made of two self-balancing binary search trees: one keeps records of the
free space available to the allocator, while the other keeps records of allocated memory. Both trees
are sorted by the size of their respective regions, though they can also be searched by location in
memory. Each node is one of three types depending on its purpose. The type is used to determine which
struct represents the node; their similar footprints permit pointer polymorphism. A red-black tree was
chosen to perform the self-balancing due to the minimal cost of adding a color field to each node.
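
A sketch of what such a node header might look like is given below; the field names and the specific
node types are assumptions made for illustration, not the exact layout used by the allocator:
\begin{verbatim}
#include <stddef.h>

enum node_color { RED, BLACK };

/* The three node types are assumed here for illustration. */
enum node_type { NODE_FREE, NODE_ALLOCATED, NODE_WATERMARK };

/* Every node type begins with the same fields, so a pointer to any
 * node can be treated as a pointer to this common header
 * (pointer polymorphism through a shared footprint). */
struct node_header {
    enum node_type      type;
    enum node_color     color;   /* red-black balancing information */
    size_t              size;    /* size of the region described    */
    void               *addr;    /* start of the region in memory   */
    struct node_header *left;
    struct node_header *right;
    struct node_header *parent;
};
\end{verbatim}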

To perform an allocation, the allocator first searches the free-space tree for a memory block of
suitable size. If it cannot find one, it requests additional memory from the operating system to be
mapped into the process, pushes the new space onto the tree, and searches again. Once a node
representing a memory region of sufficient size is found, it is removed from the free-space tree. The
underlying space is then split to fit the new allocation and leave the excess space unallocated. A new
node for the allocated space is pushed onto the allocations tree, while new nodes for the excess space
are pushed back onto the free-space tree. In particular, the allocator attempts to place the new
allocation in the center of the free space in order to minimize the chance that a resize will force a
move. Deallocations are handled in a similar manner. When an address is requested to be freed, the
allocator searches for the corresponding node in the allocations tree. This node is then popped off the
allocations tree and pushed onto the free-space tree. In addition, if the node is found to be
surrounded by unallocated memory after being pushed onto the free-space tree, the allocator merges the
nodes together. This keeps fragmentation to a minimum and speeds up subsequent allocations and
deallocations.
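
The allocation path can be summarized with the following sketch of a {\tt tree\_alloc()} routine. The
{\tt region} struct and the helper functions are placeholders standing in for the real tree operations,
not the implementation itself:
\begin{verbatim}
#include <stddef.h>

/* Placeholder helpers standing in for the real tree operations. */
struct region { void *addr; size_t size; };

struct region free_tree_take_at_least(size_t size);  /* pops a big-enough block */
void          free_tree_insert(struct region r);
void          alloc_tree_insert(struct region r);
struct region request_pages_from_os(size_t size);

void *tree_alloc(size_t size) {
    struct region block = free_tree_take_at_least(size);
    if (block.addr == NULL) {
        /* Nothing large enough: map more memory from the OS,
         * push it onto the free-space tree, and search again. */
        free_tree_insert(request_pages_from_os(size));
        block = free_tree_take_at_least(size);
        if (block.addr == NULL)
            return NULL;
    }

    /* Place the allocation near the center of the block so a later
     * resize is less likely to force a move; the excess on both
     * sides goes back onto the free-space tree. */
    size_t lead = (block.size - size) / 2;
    struct region before = { block.addr, lead };
    struct region alloc  = { (char *)block.addr + lead, size };
    struct region after  = { (char *)alloc.addr + size,
                             block.size - lead - size };
    free_tree_insert(before);
    free_tree_insert(after);
    alloc_tree_insert(alloc);
    return alloc.addr;
}
\end{verbatim}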
\subsection{The Watermark Allocator}
On its own, the watermark allocator presents many problems that make it infeasible to use as an
allocator. This model of allocator is simply a stack that cannot have elements popped off of it,
combined with a reference counter. The obvious problem is that, under a watermark allocator, frees
ultimately are leaks. Unrestricted, a watermark allocator will eventually run out of memory even though
free space may exist behind its stack pointer. This simplistic model is not without its benefits,
however: an allocation requires very little overhead and metadata to handle.

The solution derived to take advantage of this property was to use the tree allocator to manage a
series of finite-sized watermark allocators. The implementation does not create an instance of a
watermark allocator until a request for a fixed-size allocation is made. Each instance is limited to a
space of 4096 bytes, enforcing that it be used only for small, fixed-size allocations. Larger
allocations will either fail or be allocated using the tree allocator instead. Should a watermark
allocator run out of space, a new one is created and the allocation is performed on that new instance.
In addition, each watermark allocator is stored as a node within the tree allocator, meaning the last
reference to the memory region is held by the global allocator itself, which frees the space through
the tree allocator automatically when the reference count on the space drops to zero.
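
A minimal sketch of such a fixed-size pool follows. The struct layout and the function names are
assumptions for illustration; in the actual design the pool itself lives as a node inside the tree
allocator:
\begin{verbatim}
#include <stddef.h>

#define WATERMARK_CAPACITY 4096  /* fixed pool size described above */

struct watermark {
    size_t        offset;    /* top of the bump "stack"                   */
    size_t        refcount;  /* live allocations handed out from the pool */
    unsigned char data[WATERMARK_CAPACITY];
};

/* Bump-allocate from the pool.  Returns NULL when the pool is full,
 * at which point the parent allocator would create a new pool. */
void *watermark_alloc(struct watermark *w, size_t size) {
    if (size > WATERMARK_CAPACITY - w->offset)
        return NULL;
    void *p = w->data + w->offset;
    w->offset += size;
    w->refcount++;
    return p;
}

/* A free only drops the reference count; individual regions are never
 * reused.  Returns 1 when the whole pool can be released back through
 * the tree allocator. */
int watermark_free(struct watermark *w) {
    w->refcount--;
    return w->refcount == 0;
}
\end{verbatim}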
\section{Results}
% TODO: Add figures the LaTeX way
To evaluate the effectiveness of the allocator, glibc's malloc was used as a baseline. The benchmarks
compare:
\begin{itemize}
\item the tree allocator, the watermark allocator, and glibc malloc with repeated allocations of size 20;
\item the tree allocator and glibc malloc with repeated allocations of size 8000.
%\item the tree allocator, the watermark allocator, and glibc malloc with repeated frees of size 20;
%\item the tree allocator and glibc malloc with repeated frees of size 8000;
%\item the tree allocator and glibc realloc with repeatedly resized allocations.
\end{itemize}
For the resizing comparison, a set of three allocations of size 20 was made; each was then repeatedly
doubled in size, in the order of its allocation.
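
A sketch of that resizing workload, written against the standard library calls for clarity, is shown
below; the number of doubling rounds is an assumption, as the original count is not stated:
\begin{verbatim}
#include <stdlib.h>

int main(void) {
    enum { COUNT = 3, ROUNDS = 16 };  /* ROUNDS is an assumed value */
    void  *ptrs[COUNT];
    size_t sizes[COUNT];

    /* Three allocations of size 20, as described above. */
    for (int i = 0; i < COUNT; i++) {
        sizes[i] = 20;
        ptrs[i]  = malloc(sizes[i]);
    }

    /* Repeatedly double each allocation, in the order it was made. */
    for (int r = 0; r < ROUNDS; r++) {
        for (int i = 0; i < COUNT; i++) {
            sizes[i] *= 2;
            void *p = realloc(ptrs[i], sizes[i]);
            if (p == NULL)
                break;
            ptrs[i] = p;
        }
    }

    for (int i = 0; i < COUNT; i++)
        free(ptrs[i]);
    return 0;
}
\end{verbatim}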
\section{Conclusion}
Due to time constraints, we were unable to finalize the allocator as a whole. The tree allocator is in
a mostly working state, but appears to behave differently depending on build parameters and on which
tools the test environment is run under. In most cases, the allocator will leak due to bugs in the
red-black tree implementation. The watermark allocator, being dependent on the tree allocator, was not
tested even though an implementation was written. The choice to use a red-black tree may also have been
suboptimal: even though it is a self-balancing binary search tree that gives logarithmic-time search,
insertion and deletion, other types of self-balancing trees could have been used. A B-tree may have
been the more appropriate choice.
\end{document}