
Paper progress

Cameron Weinfurt, 5 months ago
Parent revision: f361388285
1 file changed with 30 additions and 0 deletions

paper_content.md (+30, -0)

@@ -6,3 +6,33 @@ In ANSI C, the standard library provides a memory allocator for general purpose
 Other languages' standard libraries can provide memory allocators or additional data structures for dynamic allocation using other techniques. Rather than returning a plain pointer as a reference to the allocation, languages like C++ and Rust provide smart references that determine when the underlying memory allocation can be freed by detecting when all of its references have gone out of scope. Runtime languages, like those running on the Java Virtual Machine or Microsoft's Common Language Runtime, also manage references to allocations, but wait to free unused memory regions until a scheduled batch process known as garbage collection occurs. Higher-level languages like Python and Lua completely abstract away the fact that dynamic allocation is occurring by only allowing abstract data structures to be created by the programmer, which lets their implementations handle the underlying allocations transparently. 
 
 In practice, abstract data structures do not all allocate in the same way, and they can be categorized into two groups based on how they allocate. The first group contains data structures like vectors and smart strings, where a single allocation is made at creation and then resized as data is added or removed. The second group contains data structures like linked lists and trees, where many allocations and frees are requested, but each allocation is fixed in size. Rather than handling every dynamic allocation request with a single method, abstract data structures could instead give the allocator hints about what kind of allocations they will be requesting. Such an allocator could then be constructed to take advantage of these hints and optimize accordingly. This is what this paper aims to demonstrate. 
+
+# Implementation
+
+## The Tree Allocator
+
+Internally, the allocator is made of two self-balancing binary search trees: one keeps records of the free space available to the allocator, while the other keeps records of allocated memory. Both trees are sorted by the size of their respective regions, though they can also be searched by location in memory. Each node is one of three types depending on its purpose. The type determines which struct represents the node, and their similar footprints permit pointer polymorphism. A red-black tree was chosen to perform the self-balancing due to the minimal cost of adding a color field to each node. 
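
The commit does not include the node definitions themselves, so the following is only a minimal C sketch of the two-tree layout described above; every type and field name here (`region_node`, `tree_allocator`, and so on) is an assumption rather than the paper's actual code.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical node layout: a shared header (kind, color, links, region)
 * lets the three node types be handled polymorphically through a pointer
 * to the common struct, and the single color field keeps red-black
 * rebalancing cheap. */
enum node_color { RED, BLACK };
enum node_kind  { FREE_REGION, ALLOCATED_REGION, WATERMARK_POOL };

typedef struct region_node {
    enum node_kind      kind;   /* which node type this header belongs to */
    enum node_color     color;  /* red-black balancing information        */
    struct region_node *left;   /* children, ordered primarily by size    */
    struct region_node *right;
    struct region_node *parent;
    uint8_t            *base;   /* start of the memory region             */
    size_t              size;   /* length of the region, the sort key     */
} region_node;

/* The allocator itself is just the two tree roots. */
typedef struct tree_allocator {
    region_node *free_tree;   /* records of free space       */
    region_node *alloc_tree;  /* records of allocated memory */
} tree_allocator;
```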
+
+To perform an allocation, the allocator first searches the free-space tree for a memory block of suitable size. If it cannot find one, it requests additional memory from the operating system to be mapped into the process, pushes the new space onto the tree, and searches again. Once a node representing a memory region of sufficient size is found, it is removed from the free-space tree. The underlying space is then split to fit the new allocation and leave the excess space unallocated. A new node for the allocated space is pushed onto the allocations tree, while new nodes for the excess space are pushed back onto the free-space tree. In particular, the allocator attempts to place the new allocation in the center of the free space in order to minimize the chance that a later resize forces a move. Deallocations are handled in a similar manner. When an address is requested to be freed, the allocator searches for the corresponding node in the allocations tree. That node is popped off the allocations tree and pushed onto the free-space tree. If the node is then found to be surrounded by unallocated memory, the adjacent nodes are merged together. This keeps fragmentation to a minimum and speeds up subsequent allocations and deallocations. 
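
As a rough illustration of the allocation path just described, and building on the hypothetical structs sketched above, the routine below walks through the same steps. The helpers `rb_find_at_least`, `rb_insert`, `rb_remove`, `make_node`, and `request_pages_from_os` are placeholder names for the red-black tree and OS-mapping routines, not functions taken from the paper.

```c
/* Placeholder declarations (assumed, not the paper's API) for the
 * red-black tree operations and the OS mapping request used below. */
region_node *rb_find_at_least(region_node *root, size_t size);
void         rb_insert(region_node **root, region_node *node);
void         rb_remove(region_node **root, region_node *node);
region_node *make_node(enum node_kind kind, uint8_t *base, size_t size);
region_node *request_pages_from_os(size_t at_least);

/* Sketch of the allocation path described above. */
void *tree_alloc(tree_allocator *a, size_t size)
{
    /* 1. Look for a free region large enough for the request. */
    region_node *fit = rb_find_at_least(a->free_tree, size);
    if (fit == NULL) {
        /* 2. Map more memory from the OS, record it, and search again. */
        region_node *fresh = request_pages_from_os(size);
        if (fresh == NULL)
            return NULL;
        rb_insert(&a->free_tree, fresh);
        fit = rb_find_at_least(a->free_tree, size);
        if (fit == NULL)
            return NULL;
    }

    /* 3. Take the chosen region out of the free-space tree. */
    rb_remove(&a->free_tree, fit);

    /* 4. Split it, placing the allocation near the center of the region
     *    so that a later resize is less likely to force a move. */
    size_t   excess     = fit->size - size;
    uint8_t *alloc_base = fit->base + excess / 2;

    if (excess / 2 > 0)            /* free space in front of the allocation */
        rb_insert(&a->free_tree, make_node(FREE_REGION, fit->base, excess / 2));
    if (excess - excess / 2 > 0)   /* free space behind the allocation */
        rb_insert(&a->free_tree,
                  make_node(FREE_REGION, alloc_base + size, excess - excess / 2));

    /* 5. Record the allocation itself in the allocations tree. */
    fit->kind = ALLOCATED_REGION;
    fit->base = alloc_base;
    fit->size = size;
    rb_insert(&a->alloc_tree, fit);

    return alloc_base;
}
```

Splitting around the center of the free block is what gives a later resize room to grow in place on either side, which is the behaviour the realloc comparison in the results section exercises.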
+
+## The Watermark Allocator
+
+On its own, the watermark allocator presents many problems that make it infeasible to use as a general-purpose allocator. This model of allocator is simply a stack that cannot have individual elements popped off of it, paired with a reference counter. The obvious problem with this is that frees are ultimately leaks under a watermark allocator. Left unrestricted, a watermark allocator will eventually run out of memory even though free space may exist behind its stack pointer. This simplistic model is not without its benefits, however: an allocation requires very little overhead and metadata to handle. The solution derived to take advantage of this property was to use the tree allocator to manage a series of finite-sized watermark allocators. The implementation does not create an instance of a watermark allocator until a request for a fixed-size allocation is made. Each instance is limited to a space of 4096 bytes, enforcing that it be used only for small, fixed-size allocations; larger allocations will either fail or be allocated using the tree allocator instead. Should an instance run out of space, a new one is created and the allocation is performed on that new instance. In addition, each instance is stored as a node within the tree allocator, meaning the last reference to the memory region is held by the global allocator itself, which frees the space through the tree allocator automatically when the reference count on the space drops to zero. 
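
The 4096-byte limit and the reference-counting behaviour come from the description above; the struct layout and function names in the sketch below are assumptions.

```c
#include <stddef.h>
#include <stdint.h>

#define WATERMARK_POOL_SIZE 4096   /* fixed pool size stated above */

/* Hypothetical layout: a bump pointer that only moves forward plus a
 * reference count. Individual frees never reclaim space inside the pool;
 * the whole pool is released through the tree allocator node that owns it
 * once the count reaches zero. */
typedef struct watermark_pool {
    size_t  mark;                      /* offset of the next free byte      */
    size_t  live;                      /* number of outstanding allocations */
    uint8_t data[WATERMARK_POOL_SIZE]; /* the fixed 4096-byte region        */
} watermark_pool;

/* Bump-allocate from the pool; returns NULL when the request does not fit,
 * in which case the caller creates a fresh pool (or falls back to the tree
 * allocator for large requests). */
static void *watermark_alloc(watermark_pool *p, size_t size)
{
    if (size > WATERMARK_POOL_SIZE - p->mark)
        return NULL;
    void *out = &p->data[p->mark];
    p->mark += size;
    p->live += 1;
    return out;
}

/* A "free" only drops the reference count; it returns 1 once the pool as a
 * whole can be handed back to the tree allocator. */
static int watermark_free(watermark_pool *p)
{
    p->live -= 1;
    return p->live == 0;
}
```

Because a free never reclaims space inside the pool, the pool is only returned, through the tree allocator node that owns it, once every allocation made from it has been released.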
+
+# Results:
+
+To evaluate the effectiveness of the allocator, glibc's malloc was used as the baseline for comparison.
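
The benchmark harness itself is not part of this commit; a minimal sketch of the kind of measurement the headings below refer to, timing repeated fixed-size allocations against glibc's malloc, might look like the following, where the iteration count is an arbitrary choice.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_ALLOCS   100000   /* arbitrary iteration count for the sketch */
#define ALLOC_SIZE 20       /* one of the sizes used in the comparisons */

/* Time N_ALLOCS calls to malloc and report the total. The same loop would
 * be pointed at the tree or watermark allocator's entry point to produce
 * the other columns of the comparison. */
int main(void)
{
    static void *ptrs[N_ALLOCS];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < N_ALLOCS; i++)
        ptrs[i] = malloc(ALLOC_SIZE);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d allocs of %d bytes: %f s\n", N_ALLOCS, ALLOC_SIZE, elapsed);

    for (int i = 0; i < N_ALLOCS; i++)   /* release everything afterwards */
        free(ptrs[i]);
    return 0;
}
```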
+
+## Tree alloc vs. Watermark alloc vs. Glibc Malloc with repeated allocs of size 20:
+
+## Tree alloc vs. Glibc Malloc with repeated allocs of size 8000:
+
+## Tree alloc vs. Watermark alloc vs. Glibc Malloc with repeated frees of size 20:
+
+## Tree alloc vs. Glibc Malloc with repeated frees of size 8000:
+
+## Tree alloc vs. Glibc Realloc with repeated resizing of allocations:
+
+A set of 3 allocations of size 20 was made. They were then repeatedly doubled in size, in the order of their allocation.
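
A sketch of that resizing workload, using glibc's realloc for concreteness and an arbitrary number of doubling rounds, could be:

```c
#include <stdlib.h>

#define N_PTRS     3    /* three allocations, as described above   */
#define START_SIZE 20   /* initial size of each allocation         */
#define ROUNDS     16   /* number of doublings; chosen arbitrarily */

/* Repeatedly double three allocations in the order they were made,
 * exercising how the allocator behaves when an allocation outgrows the
 * space around it. */
int main(void)
{
    void  *ptrs[N_PTRS];
    size_t sizes[N_PTRS];

    for (int i = 0; i < N_PTRS; i++) {
        ptrs[i]  = malloc(START_SIZE);
        sizes[i] = START_SIZE;
    }

    for (int r = 0; r < ROUNDS; r++) {
        for (int i = 0; i < N_PTRS; i++) {
            sizes[i] *= 2;
            void *grown = realloc(ptrs[i], sizes[i]);
            if (grown == NULL)
                return 1;            /* out of memory; abandon the sketch */
            ptrs[i] = grown;
        }
    }

    for (int i = 0; i < N_PTRS; i++)
        free(ptrs[i]);
    return 0;
}
```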
+
+# Conclusion:
