Homework
1. (10 points) Let D be some (mystery) data structure subjected to a sequence of n > 0 operations with the following characteristic:
The ith operation (i > 0) costs i time units if i is a power of 2; otherwise, it costs 1 time unit.
Use the aggregate method of amortized analysis to prove that the total cost of the n operations is O(n). What is the amortized cost per operation?
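Not part of what is asked, but a small Python sketch of the cost model (the helper name total_cost is invented for illustration) may help you check small cases by hand; it computes the actual total cost, not the amortized bound you are asked to prove:

    def total_cost(n):
        # Actual total cost of n operations under the stated model:
        # operation i costs i time units if i is a power of 2, else 1.
        cost = 0
        for i in range(1, n + 1):
            if i & (i - 1) == 0:   # true exactly when i is a power of 2 (i >= 1)
                cost += i
            else:
                cost += 1
        return cost

    # For example, total_cost(8) == 1 + 2 + 1 + 4 + 1 + 1 + 1 + 8 == 19.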
2. (10 points) Consider a binary counter with k bits for some positive integer k. Suppose that in addition to Increment we also wish to support a Decrement operation, which decreases the value of the counter by 1. (Decrement is never done on a zero-valued counter.)
Describe a sequence of n operations, consisting of Increment and Decrement, which takes Θ(nk) time when executed on a counter that is zero initially. Both the description of the sequence and the run time analysis must be done carefully. (Note that a Θ bound is called for, not O.)
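For concreteness, here is a rough Python sketch of a k-bit counter with the textbook bit-flip Increment and a symmetric Decrement; the Decrement code is an assumption (a natural mirror of Increment), and the quantity the time analysis charges for is the number of bits examined or flipped per operation:

    class Counter:
        def __init__(self, k):
            self.A = [0] * k          # A[0] is the low-order bit

        def increment(self):
            i = 0
            while i < len(self.A) and self.A[i] == 1:
                self.A[i] = 0         # flip trailing 1s to 0
                i += 1
            if i < len(self.A):
                self.A[i] = 1         # set the lowest 0 bit

        def decrement(self):
            i = 0
            while i < len(self.A) and self.A[i] == 0:
                self.A[i] = 1         # flip trailing 0s to 1
                i += 1
            if i < len(self.A):
                self.A[i] = 0         # clear the lowest 1 bit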
3. (13 points) Suppose that you wish to do two types of operations on an infinite binary counter:
Increment and Reset, where the latter sets the value of the counter to zero.
The goal is to be able to do any sequence of n Increment and Reset operations on an initially-zero counter in O(n) time.
Assume that it takes 1 time unit to check the value of a bit (i.e., is it 0 or 1?) or to change its value (i.e., from 0 to 1 or from 1 to 0).
(a) Explain carefully in words (pseudocode is not needed) how such a counter can be implemented and how the operations can be done.
Hint: Consider maintaining a pointer (an integer) which tracks the position of the high-order 1-bit in the counter. For simplicity, assume that this pointer can be maintained at zero cost.
(b) Use the accounting (i.e., credits) method to analyze your implementation and establish the O(n) bound on the run time of the sequence.
You must describe carefully the invariant you use and the number of credits assigned to each operation, and show that this assignment works. (Use of any other method will earn no credit—pun intended :)
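As a rough illustration only, here is one possible Python reading of the hint (a sketch under the hint's assumptions, not the careful written explanation for part (a) or the credit analysis for part (b); the class and method names are invented for illustration):

    class ResettableCounter:
        def __init__(self):
            self.A = []        # bit i of the counter; the list grows as needed
            self.high = -1     # position of the high-order 1-bit (-1 when the
                               # value is 0); maintained at zero cost per the hint

        def increment(self):
            i = 0
            while i < len(self.A) and self.A[i] == 1:
                self.A[i] = 0
                i += 1
            if i == len(self.A):
                self.A.append(0)
            self.A[i] = 1
            self.high = max(self.high, i)

        def reset(self):
            # Only bits at or below the high-order 1-bit can be 1, so only
            # those positions need to be examined and cleared.
            for i in range(self.high + 1):
                self.A[i] = 0
            self.high = -1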
4. (13 points) This problem assumes familiarity with Ch. 6. In that chapter, it is proved that a max-heap, A, on n keys can be built in O(n) time via a sequence of Max_Heapify operations. (See Sec. 6.3.) That proof is not based on amortized analysis.
Prove the above result now using the potential method of amortized analysis.
State clearly your potential function and derive the amortized cost of each Max_Heapify operation. Then show that the total amortized cost obtained using this potential function is an upper bound on the total actual cost for building a max-heap. (Use of any other method will earn no credit.)
Hint: Try to relate your potential function to some aspect of the maximal max-heaps present just before the next Max_Heapify operation is done. For instance, in Fig. 6.3(d), just before Max_Heapify is done from the node numbered 2 (shown dark gray), the maximal max-heaps are the ones with roots numbered 3, 4, and 5. They are maximal in the sense that none of them is contained in a larger max-heap at that moment.
This problem illustrates two aspects of amortized analysis: namely, it often leads to simpler proofs (as compared to the proof in Section 6.3) and, furthermore, it “forces” one to really understand how a data structure works.
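For reference, a minimal Python rendering of the Ch. 6 procedures whose total cost you are asked to re-analyze (0-based array indexing, unlike the text; the potential-method proof itself is of course left to you):

    def max_heapify(A, i, n):
        # Sift A[i] down within A[0:n] so that the subtree rooted at i becomes
        # a max-heap, assuming the subtrees rooted at its children already are.
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and A[left] > A[largest]:
            largest = left
        if right < n and A[right] > A[largest]:
            largest = right
        if largest != i:
            A[i], A[largest] = A[largest], A[i]
            max_heapify(A, largest, n)

    def build_max_heap(A):
        n = len(A)
        for i in range(n // 2 - 1, -1, -1):   # internal nodes, bottom up
            max_heapify(A, i, n)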
5. (10 points) In addition to insertions, one can also perform deletions in a dynamic table. Briefly, a Table_delete(T, x) operation deletes item x from a dynamic table T.
A crucial requirement is to not allow the table to become too sparse.
One possible strategy for Table_delete(T, x) is as follows: First x is deleted. If this causes the load factor to drop below 1/4, then a table contraction happens. A contraction allocates a new table, T', that is half the size of T, inserts all the remaining items of T into T', and frees the space for T. (However, if x is the only item in T, then the table becomes empty after x’s deletion and the space for T is simply freed.)
Write pseudocode for Table_delete(T, x). Also, establish a lower bound on the load factor of the new table.
Note that since Table_insert(T, x) doubles the size of a table during expansion and Table_delete(T, x) halves it during contraction, the table size is always a power of 2.
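For context only, a rough Python sketch of the expansion side (the textbook Table_insert with doubling); the representation below is an assumption chosen for illustration, and Table_delete with the contraction rule described above is deliberately omitted, since writing it is the exercise:

    class DynamicTable:
        def __init__(self):
            self.slots = []    # allocated storage; its length is 0 or a power of 2
            self.num = 0       # number of items currently stored

        def insert(self, x):
            if self.num == len(self.slots):              # table is full: expand
                new_size = max(1, 2 * len(self.slots))   # double the size
                new_slots = [None] * new_size
                for j in range(self.num):                # move existing items over
                    new_slots[j] = self.slots[j]
                self.slots = new_slots
            self.slots[self.num] = x                     # place the new item
            self.num += 1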