Talk:Hirschberg's algorithm
This article is rated B-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Untitled
I am starting this article. I need to add a section on the proof of time and space complexity. Vegasprof 01:18, 21 March 2007 (UTC)
I'm not sure what this line in the pseudocode is doing:
ArgMin{Edit[x^left,Pref[y,cut]] + Edit[x^right,Suff[y,n-cut]]}
Are those meant to be two array slices? What's the '+' operator doing there? Kenahoo (talk) 23:47, 24 November 2009 (UTC)
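For what it's worth, the two bracketed terms are edit costs, not slices: the cost of editing the left half of x into a prefix of y, and the cost of editing the right half of x into the matching suffix. The '+' adds those two costs, and the ArgMin picks the cut of y minimizing the sum. A minimal Python sketch of that step (names like nw_score and best_cut are illustrative, not from the article):

```python
def nw_score(a, b):
    """Last row of the edit-distance DP table for a vs b, in O(len(b)) space."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            curr[j] = min(prev[j] + 1,               # deletion
                          curr[j - 1] + 1,           # insertion
                          prev[j - 1] + (ca != cb))  # match / substitution
        prev = curr
    return prev

def best_cut(x, y):
    """ArgMin over cut: Edit[x^left, Pref[y, cut]] + Edit[x^right, Suff[y, n - cut]]."""
    mid = len(x) // 2
    left = nw_score(x[:mid], y)               # costs against every prefix of y
    right = nw_score(x[mid:][::-1], y[::-1])  # costs against every suffix of y
    totals = [l + r for l, r in zip(left, reversed(right))]
    return min(range(len(totals)), key=totals.__getitem__)
```

Note that nw_score keeps only the current and previous rows of the table, which is why the cut can be found in linear space.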
Space necessary for Hirschberg's algorithm
On 3 July 2011, user Juri Master changed the space calculation from O(min{m,n}) to O(n + m), arguing that the stack space for the recursive calls is hidden memory. This should be changed back because:
- It is generally accepted that Hirschberg's algorithm takes linear space.
- Juri Master's reasoning for the O(n + m) claim is mistaken. At most, the claim would need to be O(n + log2(m)).
- The algorithm does not need to put any elements of the two input strings on the stack.
- log2(m) is small enough that it is not significant. If m were 1,000,000,000,000 (1 trillion), then log2(m) rounds up to 40. A common implementation of the algorithm would require passing four integers, two pointers, and a return address on the stack. If they were each 64 bits that amounts to 56 bytes per recursive call, or just over 2KB to handle 1 trillion records. In contrast, Hirschberg's algorithm would require 8TB(!) to run the forward (or reverse) subprogram on 1 trillion records. In any interesting problem log2(m) will be dwarfed by n.
- Hirschberg's algorithm can actually take 2n space, since the output itself uses an additional n space. That 2n could be much larger than n + log2(m).
- Big-O notation is only interested in how computational complexity scales. Hirschberg's algorithm actually takes O(2mn) time but the two is dropped because it doesn't affect how the algorithm scales. For the same reason, O(n + log2(m)) should lose the log2(m). — Preceding unsigned comment added by 204.246.125.93 (talk) 06:15, 31 July 2011 (UTC)
There is a bit of confusion here because we are talking about scaling for an algorithm with 2 inputs. Often you would just concatenate the inputs (q = n + m) and then calculate the scaling, in which case it would take O(q) space. Since we are keeping the inputs separate, would the actual scaling not be O(min{ max{m, log(n)}, max{log(m), n} })? This is because we aren't guaranteed that the log of the longer input is smaller than the shorter input. In short, O(q+log(q))==O(q), but O(m+log(n))!=O(m). I agree that in most cases this is negligible (point 4 above).
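A quick sanity check of point 4 above, under its stated (hypothetical) assumption of a 56-byte stack frame: x is halved at every recursion level, so the stack depth is about ceil(log2(m)).

```python
def stack_bytes(m, bytes_per_frame=56):
    """Worst-case stack usage for Hirschberg's recursion on a string of length m.

    x is split in half at every level, so the depth is ceil(log2(m)) frames.
    The 56-byte frame size is the hypothetical figure from point 4 above.
    Returns (depth, total bytes).
    """
    depth = 0
    while m > 1:
        m = (m + 1) // 2  # the larger half survives to the next level
        depth += 1
    return depth, depth * bytes_per_frame
```

For m = 10^12 this gives 40 frames and 2240 bytes, i.e. just over 2 KB, matching the figure quoted above.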
Pseudocode
I have a question about the pseudocode. For example, in the forwards subprogram, line 4 reads "Edit[Prefix[0,0]] = 0;":
Do I understand correctly that Edit is a two-dimensional array? Why does line 4 use only one index (Prefix[0,0])? It is a bit confusing.
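A hedged reading: Edit is presumably indexed by a pair of prefix lengths, so "Edit[Prefix[0,0]] = 0" would be the entry for the two empty prefixes, i.e. Edit[0][0]. A sketch of the forward table with explicit two-index addressing (an illustration, not the article's pseudocode):

```python
def forward_edit_table(x, y):
    """Full 2-D table: edit[i][j] = edit distance between x[:i] and y[:j].

    edit[0][0] = 0 is the base case for the pair of empty prefixes,
    which is one plausible reading of "Edit[Prefix[0,0]] = 0".
    """
    m, n = len(x), len(y)
    edit = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        edit[i][0] = i  # delete all of x[:i]
    for j in range(n + 1):
        edit[0][j] = j  # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            edit[i][j] = min(edit[i - 1][j] + 1,
                             edit[i][j - 1] + 1,
                             edit[i - 1][j - 1] + (x[i - 1] != y[j - 1]))
    return edit
```

Note that the full 2-D table here is only for illustration; the point of Hirschberg's algorithm is that the forward subprogram needs to retain just one row of it at a time.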