Exotic Data Structures

Let's consider these conceptual data structures:

  1. Dynamic arrays
  2. Linked lists
  3. Unordered maps
  4. Ordered lists
  5. Ordered maps
  6. Sparse arrays
  7. Ordered maps (over finite keys)
  8. Fully persistent ordered maps

Nothing out of the ordinary, right?

1. Compact dynamic array (compact-arrays)

An indexable deque which is optimal in space and time [3].

This is simply an O(sqrtN) array of O(sqrtN) sub-arrays.

Two lists of sub-arrays are maintained, small and big (big sub-arrays are twice the size of small ones).

Pointers to the head/tail indexes and to the big/small boundary are also maintained.

Conceptually, the virtual array is the concatenation of all small sub-arrays followed by all big sub-arrays, indexed between head and tail.

All operations are straightforward. In order to maintain the global invariants, small sub-arrays are sometimes merged into big arrays, or big arrays are split into small arrays (this happens at the boundary between the two lists). When one of the two lists is empty, the lists are swapped (big arrays become small and vice versa).
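
As a rough illustration, here is a hypothetical Python sketch of the block layout and the O(1) index arithmetic. The class name, the fixed block size B, and the append-only growth are simplifying assumptions; the real structure keeps B = Theta(sqrtN) as N changes, supports updates at both ends, and does the merge/split/swap bookkeeping described above.

    class CompactArray:
        # Small blocks hold B items, big blocks hold 2*B; the virtual array
        # is all small blocks, then all big blocks, then a partial tail.
        def __init__(self, B=4):
            self.B = B
            self.small = []    # full small blocks (lists of B items)
            self.big = []      # full big blocks (lists of 2*B items)
            self.tail = []     # partially filled big block

        def _locate(self, i):
            # O(1) map from a virtual index to (block, offset)
            ns = len(self.small) * self.B
            if i < ns:
                return self.small[i // self.B], i % self.B
            i -= ns
            nb = len(self.big) * 2 * self.B
            if i < nb:
                return self.big[i // (2 * self.B)], i % (2 * self.B)
            return self.tail, i - nb

        def get(self, i):
            block, off = self._locate(i)
            return block[off]

        def set(self, i, v):
            block, off = self._locate(i)
            block[off] = v

        def push(self, v):
            self.tail.append(v)
            if len(self.tail) == 2 * self.B:
                self.big.append(self.tail)   # tail block is full
                self.tail = []
                # a full implementation rebalances here (merging small
                # blocks into big ones) to keep B = Theta(sqrtN)

    a = CompactArray()
    for x in range(20):
        a.push(x)
    assert a.get(13) == 13
    a.set(5, 99)
    assert a.get(5) == 99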

Asymptotic complexity:

  • O(1) worst case queries (get/set)
  • O(1) amortized, O(sqrtN) worst case update (push/pop) at both ends
  • N + O(sqrtN) records of space

Variant: Compact integer arrays

This is implemented by growing the integer range of the sub-arrays dynamically when an update overflows. This makes updates O(sqrtN) worst case (O(1) amortized if O(N) numbers take O(lgM) bits).
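
A hypothetical sketch of a single sub-array illustrating the overflow trick: K fixed-width slots are packed into one Python integer, and the whole block is rebuilt at a wider width when a stored value no longer fits (that rebuild is the O(sqrtN) worst case). IntBlock and its doubling policy are assumptions; a real implementation would pack into machine words.

    class IntBlock:
        # K slots of `width` bits each, packed into a single integer
        def __init__(self, K, width=1):
            self.K, self.width, self.bits = K, width, 0

        def get(self, i):
            return (self.bits >> (i * self.width)) & ((1 << self.width) - 1)

        def set(self, i, v):
            if v >> self.width:                  # overflow: grow the width
                old = [self.get(j) for j in range(self.K)]
                self.width = max(self.width * 2, v.bit_length())
                self.bits = 0                    # repack at the new width
                for j, x in enumerate(old):
                    self.bits |= x << (j * self.width)
            mask = ((1 << self.width) - 1) << (i * self.width)
            self.bits = (self.bits & ~mask) | (v << (i * self.width))

    b = IntBlock(K=8)          # eight slots, 1 bit wide initially
    b.set(3, 1)
    b.set(3, 300)              # overflow: block rebuilt at 9-bit width
    assert b.get(3) == 300 and b.width == 9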

Asymptotic complexity:

  • O(1) amortized, O(sqrtN) worst case queries (get/set)
  • O(1) amortized, O(sqrtN) worst case update (push/pop) at both ends
  • NlgM + O(N) bits of space in the worst case, where M is the biggest integer stored in the array.

2. Monolithic and Succinct lists (monolithic-lists, succinct-lists)

Monolithic lists

Monolithic lists are lists stored without any managed pointers, in a flat array. Each node in the list is identified by a number in [1..N]; 0 is the head and -1 the tail. Insertion allocates nodes, removal reclaims them. Because the links are xor-coded, two consecutive nodes are needed to form a cursor. Given a cursor, you can query the list for the next or previous element. To store values at each node, you can simply use node identifiers to index a [1..N] array. This makes cache behavior even worse, but it doesn't matter much, as indexing lists is slow anyway.
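
For illustration, a hypothetical Python sketch of the xor-link trick. It deviates from the text in two small ways: it uses a single circular sentinel (node 0) instead of separate head/tail sentinels, and it allocates node ids sequentially instead of from a free list.

    class MonolithicList:
        # link[i] = prev(i) xor next(i); node 0 is a circular sentinel.
        # A cursor is an adjacent pair (prev, cur).
        def __init__(self, capacity):
            self.link = [0] * (capacity + 1)
            self.n = 0                        # next unallocated node id

        def next(self, prev, cur):
            # advance the cursor one step, O(1) worst case
            return cur, self.link[cur] ^ prev

        def insert_after(self, prev, cur):
            # allocate a new node between cur and its successor
            self.n += 1
            new, nxt = self.n, self.link[cur] ^ prev
            self.link[cur] ^= nxt ^ new       # cur's next: nxt -> new
            self.link[new] = cur ^ nxt
            self.link[nxt] ^= cur ^ new       # nxt's prev: cur -> new
            return new

        def remove(self, prev, cur):
            # unlink cur and return the cursor (prev, successor)
            nxt = self.link[cur] ^ prev
            self.link[prev] ^= cur ^ nxt
            self.link[nxt] ^= cur ^ prev
            return prev, nxt                  # cur's id would be freed here

    vals = [None] * 17
    l = MonolithicList(16)
    cur = (0, 0)
    for v in "abc":                           # build 0 -> a -> b -> c
        node = l.insert_after(*cur)
        vals[node] = v                        # values in a parallel array
        cur = (cur[1], node)
    out, c = [], l.next(cur[1], 0)            # walk from the head sentinel
    while c[1] != 0:
        out.append(vals[c[1]])
        c = l.next(*c)
    assert out == list("abc")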

Note: unrolled lists are isomorphic to lists of arrays; they offer no improvement.

Asymptotic complexity:

  • O(1) worst case queries (prev/next)
  • O(1) amortized, O(N) worst case update (insertion/removal)
  • N + o(N) words of space

Succinct lists

A succinct data structure has space usage close to the information theoretic lower bound.

In essence, a list is isomorphic to a Hamiltonian cycle in a complete graph of N nodes. There are O(N!) such cycles.

Thus at least lg(N!) bits are needed to encode a list, which gives NlgN as the leading-order lower bound.
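
For reference, the leading term comes from Stirling's approximation (a standard expansion, spelled out here for completeness):

    \lg(N!) = \sum_{k=1}^{N} \lg k = N \lg N - N \lg e + O(\lg N)

so the lower bound agrees with the NlgN + o(N) space of the structure up to an O(N)-bit lower-order term.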

Succinct lists achieve this bound up to lower-order terms and can still operate efficiently. To achieve this, succinct lists are simply monolithic lists whose link array is stored in the previously defined compact integer arrays.
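
Continuing the hypothetical sketches above, the composition is just a change of backing store; node ids need lgN bits each, which is where the NlgN-bit figure comes from. BlockArray is illustrative glue, not the actual implementation:

    class BlockArray:
        # an indexable array backed by growable-width integer blocks
        def __init__(self, n, B=64):
            self.B = B
            self.blocks = [IntBlock(B) for _ in range((n + B - 1) // B)]

        def __getitem__(self, i):
            return self.blocks[i // self.B].get(i % self.B)

        def __setitem__(self, i, v):
            self.blocks[i // self.B].set(i % self.B, v)

    class SuccinctList(MonolithicList):
        def __init__(self, capacity):
            super().__init__(capacity)
            self.link = BlockArray(capacity + 1)   # xor links, bit-packed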

Asymptotic complexity:

  • O(1) worst case queries (prev/next)
  • O(1) amortized, O(sqrtN) worst case update (insertion/removal)
  • NlgN + o(N) bits of space

3. Split Hashtables (split-hashtables)

Split hashtables

Split hashtables are a variation of cuckoo hashing that doesn't move anything around.

In essence, this structure is a pair of arrays of lists of key/value pairs. Lookup is done by splitting the hash of the key in two, using this pair to index the arrays, and then searching each list for the key. The twist is that insertion always goes into the shorter list. Surprisingly, this gives an information-theoretic advantage that reduces the expected worst-case list length significantly [4]. This works so well that it's a waste of time to make any effort to move elements around (as in cuckoo hashing). Also, the structure can achieve near-100% load efficiently.
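
A minimal Python sketch of the two-choice core, under heavy simplifying assumptions: bucket inlining, the branchless paths, and the ((free))/((list)) tagging are all elided, and SplitHashtable with its hash-splitting scheme is illustrative, not the actual implementation.

    import hashlib

    class SplitHashtable:
        def __init__(self, size=256):
            self.size = size
            self.tables = ([[] for _ in range(size)],
                           [[] for _ in range(size)])

        def _buckets(self, key):
            # hash once, split in two halves, one index per table
            h = int.from_bytes(
                hashlib.blake2b(repr(key).encode(), digest_size=8).digest(),
                "big")
            return (self.tables[0][(h >> 32) % self.size],
                    self.tables[1][(h & 0xFFFFFFFF) % self.size])

        def get(self, key, default=None):
            for bucket in self._buckets(key):
                for k, v in bucket:
                    if k == key:
                        return v
            return default

        def put(self, key, value):
            b0, b1 = self._buckets(key)
            for bucket in (b0, b1):              # overwrite if present
                for i, (k, _) in enumerate(bucket):
                    if k == key:
                        bucket[i] = (key, value)
                        return
            # the twist: always insert into the shorter of the two lists
            (b0 if len(b0) <= len(b1) else b1).append((key, value))

    t = SplitHashtable()
    t.put("x", 1); t.put("y", 2)
    assert t.get("x") == 1 and t.get("z") is None

By the power-of-two-choices analysis [4], the expected length of the longest list drops from Theta(lgN/lglgN) with one hash function to Theta(lglgN) with two, which is why moving elements around (as cuckoo hashing does) buys little.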

Of course, to get good performance, some tweaks make a difference.

For space usage: If possible, lists are inlined in the tables (there are two words per position, to accommodate a key/value pair). When needed, full lists are stored as arrays. Full lists occur with very low probability, so they are not important. Recovering inlined lists only requires scanning a single cache line in each table.

For speed: The most critical operation for an unordered map is answering queries. In split hashtables this is divided into three stages:

  1. Short inlined path: Both arrays are looked up for key/value pairs. If any key matches, it's a hit, and the appropriate value is returned with branchless arithmetic. This costs two 2-word loads (2 potential cache misses) and a conditional branch.
  2. Long inlined path: If the short path fails, there's a branch to the escape path if either of the 2 keys is ((list)). If there's no full list, all the key/value pairs in both cache lines are loaded. Using branchless arithmetic again, the matching key/value pair (or f f) is returned after swapping it with the pair in the primary position (which is not a full-list link, because of the previous branch). This costs cached memory accesses and a conditional branch.
  3. Escape: For the inevitable cases where full lists have been allocated, we swap the pair of hashes and use linear probing in parallel on each list (probing all elements of each list in the worst case) and return the match.

Insertions and removal are performed with a variant of query that returns the key/value pair by reference.

  • Removals query for the key and replace the key/value pair with ((free)) f. (Compaction is done when load is low.)
  • Insertions query for a ((free)) key and store the key/value pair over it. If the query fails, ((free)) keys are allocated and the operation is restarted.

Allocation has many possibilities: allocating full lists, growing tables, or even swapping values around (like cuckoo hashing) to achieve better results.

A load of 50% gives expected performance similar to cuckoo hashing, as the short inlined path misses with very low probability. Very high loads (near 100%) can be achieved, as saturation of the inlined lists has very low probability (on the order of one in millions of elements; it grows as lglgN).

Asymptotic complexity:

  • 2 cache misses expected on cold cache for queries and updates
  • O(1) worst case queries with cuckoo allocation
  • O(1) expected queries and updates
  • O(lglgN) worst case queries and updates with probability 1 - O(1/N)
  • 2N words of space for a near-100% load factor
  • 4N words of space for a near-100% fast-path hit rate
  • 5N words of space expected for O(1) worst case queries

4. Rotated lists (rotated-lists)

[ arrays with good random insertion/removal and lookup speed ]

5. B+ trees, Adaptive packed memory arrays (b-trees)

B+ trees

[ boring ordered maps. we can't beat lgN with comparison order. very fast insertion/removal/balancing, very compact. ]

Adaptive packed memory arrays

[ ordered map in a flat array with some bubbles. funny but not really competitive. B+ trees are boring ]

6. Speculative hollow arrays (hollow-arrays)

[ sparse array optimized for maximum speed within a space budget, with speculative relayout to match opportunities in density ]

7. van Emde Boas trees, y-fast trees, approximate ordered sets (y-fast-trees)

van Emde Boas trees

[ ordered map over a finite universe. lglgN search ]

y-fast trees

[ space efficient van Emde Boas trees ]

approximate ordered sets

[ O(1) ordered maps with a weak ordering on keys ]

8. Fully persistent red-black trees

[ basic red-black trees with full persistence in O(1) space per update ]

References

[1] J. R. Driscoll, N. Sarnak, D. D. Sleator, R. E. Tarjan. Making data structures persistent. Journal of Computer and System Sciences. Volume 38, Issue 1, February 1989.

http://www.cs.cmu.edu/~sleator/papers/making-data-structures-persistent.pdf

[2] P. F. Dietz. Fully Persistent Arrays (Extended Abstract). Lecture Notes in Computer Science, Volume 382, 1989.

https://gummo.lib.rochester.edu/retrieve/14496/tr%23290.pdf

[3] A. Brodnik, S. Carlsson, E. D. Demaine, J. I. Munro, and R. Sedgewick. Resizable arrays in optimal time and space. Technical Report CS-99-09, U. Waterloo, 1999.

http://www.cs.uwaterloo.ca/~imunro/resize.ps

[4] M. Mitzenmacher. The Power of Two Choices in Randomized Load Balancing. IEEE Transactions on Parallel and Distributed Systems, Volume 12, Issue 10, October 2001.

http://www.eecs.harvard.edu/~michaelm/postscripts/mythesis.pdf
