Priority Queue - adamtew/CS2420-Exam2 GitHub Wiki
Anything Greedy
Main Idea | Sub idea | Description |
---|---|---|
Binary Heap | | |
D-Heap | | |
Mergeable Heaps | Leftist Heap | |
| Skew Heap | |
| Binomial Queue | |
Examples:
- ordering CPU jobs
- searching for the exit in a maze (or looking for moves in the rotation puzzle game)
- emergency room admission processing
A priority queue does not need to keep its data as fully ordered as other search structures (BSTs, Splay trees, AVL trees); it only needs fast access to the minimum element.
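To make the examples above concrete, here is a minimal sketch using Java's standard-library binary heap, `java.util.PriorityQueue`, to order hypothetical CPU jobs (the `Job` record and the priority values are made up for illustration):

```java
import java.util.PriorityQueue;

public class JobQueue {
    // Hypothetical CPU job: a lower priority value means more urgent.
    record Job(String name, int priority) {}

    public static void main(String[] args) {
        // The queue keeps only heap order, not fully sorted order.
        PriorityQueue<Job> pq = new PriorityQueue<>(
                (a, b) -> Integer.compare(a.priority(), b.priority()));
        pq.add(new Job("backup", 5));
        pq.add(new Job("interrupt", 1));
        pq.add(new Job("compile", 3));

        // poll() is deleteMin: jobs come out in priority order.
        System.out.println(pq.poll().name()); // interrupt
        System.out.println(pq.poll().name()); // compile
        System.out.println(pq.poll().name()); // backup
    }
}
```

The same pattern fits the other examples (maze search frontiers, emergency room admission): only the comparator changes.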
// Delete Code (0-indexed array; size = number of elements)
```
Comparable deleteMin() {
    Comparable x = heap[0];
    heap[0] = heap[--size];   // move the last element into the root hole
    percolateDown(0);
    return x;
}

void percolateDown(int hole) {
    Comparable shiftVal = heap[hole];
    while (2*hole + 1 < size) {
        int left = 2*hole + 1;
        int right = left + 1;
        int target;
        if (right < size && heap[right] < heap[left])
            target = right;
        else
            target = left;
        if (heap[target] < shiftVal) {
            heap[hole] = heap[target];
            hole = target;
        }
        else
            break;
    }
    heap[hole] = shiftVal;
}
```
// Insert Code
```
void insert(Comparable newVal) {
    // Efficiency hack: we won't actually put newVal
    // into the heap until we've located the position
    // it goes in. This avoids having to copy it
    // repeatedly during the percolate up.
    int hole = size++;   // new element starts in the first free slot
    // Percolate up
    for ( ; hole > 0 && newVal < heap[(hole-1)/2]; hole = (hole-1)/2)
        heap[hole] = heap[(hole-1)/2];
    heap[hole] = newVal;
}
```
Floyd's method
```
buildHeap() {
    // last internal node is at index size/2 - 1 (0-indexed)
    for (int i = size/2 - 1; i >= 0; i--)
        percolateDown(i);
}
```
// A naive bound (n percolateDowns at O(log n) each) gives O(n log n) worst case, O(n) average case.
// Can we get a worst-case O(n) bound? Yes: summing the node heights level by level shows buildHeap is O(n).
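The three fragments above can be combined into one runnable sketch of a 0-indexed array min-heap (names follow the pseudocode; the fixed capacity and `int` elements are simplifications for illustration):

```java
import java.util.Arrays;

public class BinaryMinHeap {
    private int[] heap;
    private int size;

    // Build a heap in O(n) from an existing array (Floyd's method).
    public BinaryMinHeap(int[] items) {
        heap = Arrays.copyOf(items, items.length);
        size = items.length;
        for (int i = size / 2 - 1; i >= 0; i--)
            percolateDown(i);
    }

    public void insert(int newVal) {
        int hole = size++;
        // Percolate up: slide parents down until newVal fits.
        for (; hole > 0 && newVal < heap[(hole - 1) / 2]; hole = (hole - 1) / 2)
            heap[hole] = heap[(hole - 1) / 2];
        heap[hole] = newVal;
    }

    public int deleteMin() {
        int min = heap[0];
        heap[0] = heap[--size];   // last element fills the root hole
        percolateDown(0);
        return min;
    }

    private void percolateDown(int hole) {
        int shiftVal = heap[hole];
        while (2 * hole + 1 < size) {
            int left = 2 * hole + 1, right = left + 1;
            // Look at only two kids per step.
            int target = (right < size && heap[right] < heap[left]) ? right : left;
            if (heap[target] < shiftVal) {
                heap[hole] = heap[target];
                hole = target;
            } else break;
        }
        heap[hole] = shiftVal;
    }

    public static void main(String[] args) {
        BinaryMinHeap h = new BinaryMinHeap(new int[]{5, 3, 8, 1, 9, 2});
        StringBuilder out = new StringBuilder();
        while (h.size > 0) out.append(h.deleteMin()).append(' ');
        System.out.println(out.toString().trim()); // 1 2 3 5 8 9
    }
}
```

Note that `insert` grows `size` without growing the array; a real implementation would resize, but that is orthogonal to the heap logic.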
|   | Binary Heap Worst Case | Binary Heap Average Case | AVL Worst | BST Average |
|---|---|---|---|---|
| Insert | O(log n) | O(1) (2.6 compares on average) | O(log n) | O(log n) |
| Delete Min | O(log n) | O(log n) | O(log n) | O(log n) |
- finding a child/parent index is a multiply/divide by two
- each percolate down operation looks at only two kids
- inserts are at least as common as deleteMins
- division and multiplication by powers of two are fast
- with huge data sets (that can’t be stored in main memory), memory accesses dominate
D-Heap
- Each node has d children
- Still representable by array
- optimize performance based on # of inserts/removes
- choose d as a power of two for efficiency
- fit one set of children in a cache line (the block of memory that is transferred to memory cache)
- fit one set of children on a memory page/disk block
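The only thing the d-heap changes is the index arithmetic. A small sketch (0-indexed, with d chosen as a power of two so the multiply/divide compile down to shifts):

```java
public class DHeapIndex {
    static final int D = 4; // power of two: child/parent math is cheap

    // Children of node i occupy indices D*i + 1 .. D*i + D.
    static int firstChild(int i) { return D * i + 1; }

    // Parent of node i (for i > 0).
    static int parent(int i) { return (i - 1) / D; }

    public static void main(String[] args) {
        System.out.println(firstChild(0)); // 1 (children of root: 1..4)
        System.out.println(parent(4));     // 0
        System.out.println(parent(5));     // 1
    }
}
```

With D children, percolateDown must scan all D kids per level, so deleteMin gets slower as D grows while insert (which only compares against the parent) gets faster; that is the insert/remove trade-off noted above.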
Complexity of O(n) with Floyd's method
Leftist Heaps
Binary heap-ordered trees with left subtrees always “longer” than right subtrees: a heap structure that enables fast merges.
- Main idea: Recursively work on right path for Merge/Insert/DeleteMin
- Right path is always short: it has O(log N) nodes
- Merge, Insert, DeleteMin all have O(log N) running time (see text)
the null path length (npl) of a node is the length of the shortest path from it down to a node with fewer than two children (equivalently, the smallest number of nodes strictly between it and a null in the tree)
npl(null) = -1
npl(leaf) = 0
npl(single-child node) = 0
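The definition can be computed directly from the recurrence npl(x) = 1 + min(npl(left), npl(right)) with npl(null) = -1; a sketch, assuming a bare `Node` class with left/right pointers (the class itself is illustrative, not from the notes):

```java
public class NplDemo {
    static class Node {
        Node left, right;
        Node(Node l, Node r) { left = l; right = r; }
    }

    // npl(null) = -1; otherwise one more than the smaller child's npl,
    // i.e. the length of the shortest path down toward a null.
    static int npl(Node n) {
        if (n == null) return -1;
        return 1 + Math.min(npl(n.left), npl(n.right));
    }

    public static void main(String[] args) {
        Node leaf = new Node(null, null);
        Node oneChild = new Node(leaf, null);
        System.out.println(npl(leaf));     // 0
        System.out.println(npl(oneChild)); // 0
    }
}
```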
- parent’s priority value is ≤ childrens’ priority values
- result: minimum element is at the root
- null path length of left subtree is ≥ npl of right subtree
- result: tree is at least as “heavy” on the left as the right
- every subtree of a leftist tree is leftist!
- Basically, the shortest path must be to the right. So, if you always take the shortest path, it can’t be longer than log n.
- If both the left and right sub-trees are leftist heaps but the root does not form a leftist heap, we only need to swap the two sub-trees
- We can use this to merge two leftist heaps
- Merging strategy: given two leftist heaps, recursively merge the heap with the larger root value into the right sub-heap of the heap with the smaller root. Traversing back up to the root, swap children wherever needed to restore the leftist property.
```
Node * merge(Node * t1, Node * t2) // t1 and t2 are merged, new tree is created
{
    Node * small;
    if (t1 == NULL) return t2;
    if (t2 == NULL) return t1;
    if (t1->element < t2->element) {
        t1->right = merge(t1->right, t2);
        small = t1;
    }
    else {
        t2->right = merge(t2->right, t1);
        small = t2;
    }
    if (notLeftist(small)) swapkids(small);
    setNullPathLength(small);
    return small;
}
```
- After the raw merge the result may not be a leftist heap: some node may be “unhappy” (its left npl smaller than its right npl). On the way back out of the recursion, update the null path lengths, find the unhappy nodes, and swap their sub-heaps.
- extra storage for npl
- extra complexity/logic to maintain and check npl
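The merge pseudocode above translates almost line for line into a runnable sketch; here Java object references stand in for the C-style pointers, and `notLeftist`, `swapkids`, and `setNullPathLength` are spelled out inline (an assumed translation, not the notes' own code):

```java
public class LeftistHeap {
    static class Node {
        int element, npl;
        Node left, right;
        Node(int e) { element = e; }
    }

    static int npl(Node t) { return t == null ? -1 : t.npl; }

    static Node merge(Node t1, Node t2) {
        if (t1 == null) return t2;
        if (t2 == null) return t1;
        Node small;
        if (t1.element < t2.element) {
            t1.right = merge(t1.right, t2);
            small = t1;
        } else {
            t2.right = merge(t2.right, t1);
            small = t2;
        }
        // Restore the leftist property: left npl must be >= right npl.
        if (npl(small.left) < npl(small.right)) {
            Node tmp = small.left;
            small.left = small.right;
            small.right = tmp;
        }
        small.npl = npl(small.right) + 1;
        return small;
    }

    static Node insert(Node root, int val) { return merge(root, new Node(val)); }

    static Node deleteMin(Node root) { return merge(root.left, root.right); }

    public static void main(String[] args) {
        Node h = null;
        for (int v : new int[]{7, 2, 9, 4}) h = insert(h, v);
        System.out.println(h.element); // 2
    }
}
```

Note how insert and deleteMin are both one-liners in terms of merge: insert merges a singleton heap, and deleteMin merges the root's two children.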
Self-adjusting version of leftist heaps (a la splay trees)
Motivation? Remember that we “often” get heavy right sides when we merge. So, why not just assume the right side is heavy and move it over automatically?
We can make a simple modification to the leftist heap and get similar results without storing (or computing) the null path length.
We always merge with the right child, but after merging, we swap the left and right children for every node in the resulting right path of the temporary tree.
- Do not actually keep track of path lengths
- Adjust tree by swapping children during each merge
- O(log N) amortized time per operation for a sequence of M operations
Skew Notes
- blind adjusting version of leftist heaps
- amortized time for merge, insert, and deleteMin is O(log n)
- worst case time for all three is O(n)
- merge always switches children when fixing right path
- iterative method has only one pass
```
Node * SkewHeapMerge(Node * t1, Node * t2) // t1 and t2 are merged, a new tree
{
    Node * small;
    if (t1 == NULL) return t2;
    if (t2 == NULL) return t1;
    if (t1->element < t2->element) {
        t1->right = SkewHeapMerge(t1->right, t2);
        small = t1;
    }
    else {
        t2->right = SkewHeapMerge(t2->right, t1);
        small = t2;
    }
    swapkids(small); // always swap -- no npl check
    return small;
}
```