Talk:Baby-step giant-step
This article is rated C-class on Wikipedia's content assessment scale.
Who found it?
Shanks found this algorithm. We called it the "Shanks baby-step giant-step algorithm".
Yes, it was Shanks, and he initially used it to compute group orders, not discrete logarithms, although it can do both.
This page needs work: Shanks should be more properly credited, the use of the algorithm to compute group orders should be explained,
and at least some mention of various modifications to handle unbounded searches and optimizing for distribution should be made.
The versatility of this algorithm makes it a workhorse that is used for all sorts of things besides computing discrete logarithms,
and cryptography is but one of many applications (and not the first; Shanks was interested in computing the order of ideal class groups).
The context of the current article is too narrow.
References
There is currently an unreferenced tag on the page, claiming that the page is not properly referenced. Isn't the reference to Shanks's paper enough?
Several references have been added.
I think there is a slight error in the algorithm's description
Step 3 says to compute a^(-m). But the original algorithm is to compute (a^(-1))^m. In the modular world, a^(-1) means "the multiplicative inverse of a", and is not an exponent that you can actually multiply out. Can an expert on this algorithm please confirm this? 72.93.101.152 (talk) 21:04, 3 December 2008 (UTC)
I am not really sure but I think it's correct: in a group we have the property (a^(-1))^n = (a^n)^(-1) = a^(-n). —Preceding unsigned comment added by 86.64.192.254 (talk) 13:35, 24 July 2009 (UTC)
- You are correct. There is no error in the description of the article. The equations in Theorem 1.9 of the section Elementary_group_theory#Powers are true for all integers. 62.203.32.82 (talk) —Preceding undated comment added 11:47, 25 July 2009 (UTC).
It is Fermat's little theorem that states B^(P-1) == 1 (mod P) for any prime P and some other (fairly rare) numbers known as base-B pseudoprimes. A rarer subset of the base-B pseudoprimes, known as Carmichael numbers, are pseudoprimes for every base B coprime to them. A corollary of Fermat's little theorem is that for any m, B^(-m) == B^(P-1-m) (mod P). —Preceding unsigned comment added by 218.65.102.178 (talk) 11:59, 22 July 2010 (UTC)
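A quick way to see the identity in practice: the following small C++ sketch (not from the article; the values p = 101, a = 7, m = 13 are arbitrary examples) computes a^(-m) mod a prime p both as (a^(-1))^m and as a^(p-1-m), and both give the same residue.
<syntaxhighlight lang="cpp">
// Illustrative check (example values only): a^(-m) computed as (a^(-1))^m
// equals a^(p-1-m) modulo a prime p.
#include <cstdint>
#include <iostream>

// Modular exponentiation by repeated squaring.
static std::uint64_t pow_mod(std::uint64_t base, std::uint64_t exp, std::uint64_t mod) {
    std::uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    const std::uint64_t p = 101, a = 7, m = 13;             // arbitrary example values, p prime
    std::uint64_t a_inv = pow_mod(a, p - 2, p);              // a^(-1) via Fermat's little theorem
    std::uint64_t via_inverse  = pow_mod(a_inv, m, p);       // (a^(-1))^m mod p
    std::uint64_t via_exponent = pow_mod(a, p - 1 - m, p);   // a^(p-1-m) mod p
    std::cout << via_inverse << " == " << via_exponent << '\n';  // both print the same residue
}
</syntaxhighlight>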
GNU_MP code snippet
Is that GNU_MP code snippet useful?
- It's not all that readable, so it doesn't really add to the article content (except in length).
- It's also debatable whether it is correct. The catch is that m is presumed to fit in an unsigned long, since the array of baby-step indices is an array of such, but m is computed as the square root of a bignum, which can be a lot larger. One could argue that there is an implicit assumption that the code is not to be used for input that would anyway take "forever" to process, but then the choice of using multiple-precision arithmetic becomes suspect, because all numbers should fit comfortably in an integer with just twice the number of bits as an unsigned long (i.e., an unsigned long long, at least under some compilers).
- Finally, the table design and lookup mechanism become troublesome asymptotically, if one imagines that the unsigned long bug is fixed. Binary search in a table with m elements requires O(log m) comparisons, but those comparisons need to examine at least log m bits to distinguish elements, so one would expect a time complexity of Ω((log m)^2) for just one table lookup. The bitlength of the group elements is Θ(log m), so any fast algorithm for bignum arithmetic (e.g. the Karatsuba algorithm at O((log m)^1.585)) is asymptotically dominated by the subsequent lookup step. That's pretty silly. 130.243.68.122 (talk) 12:24, 23 May 2017 (UTC)
- I agree with you and just replaced it with a C++ example that should be a great improvement, though maybe still not perfect. It is, however, better at exposing the actual algorithm than the previous piece, which mostly demonstrated that the GMP API sucks. --Florian Weber (talk) 13:52, 23 August 2018 (UTC)
- Note: The C++ example was removed in 2021 (diff). – Tea2min (talk) 09:53, 5 September 2024 (UTC)
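For reference, a minimal illustrative sketch along the lines discussed above might look like the following. It is not the removed example; it uses a hash table for the baby steps (expected constant-time lookups, sidestepping the binary-search cost raised above) and plain 64-bit integers, and it assumes a prime modulus p below 2^32 so that products fit in an unsigned 64-bit integer.
<syntaxhighlight lang="cpp">
// Minimal baby-step giant-step sketch: finds x with alpha^x = beta (mod p), p prime.
// Assumes p < 2^32 so that 64-bit products never overflow.
#include <cstdint>
#include <cmath>
#include <iostream>
#include <optional>
#include <unordered_map>

static std::uint64_t pow_mod(std::uint64_t b, std::uint64_t e, std::uint64_t m) {
    std::uint64_t r = 1 % m;
    b %= m;
    while (e > 0) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

static std::optional<std::uint64_t> bsgs(std::uint64_t alpha, std::uint64_t beta, std::uint64_t p) {
    const std::uint64_t n = p - 1;  // order of the multiplicative group mod a prime p
    const auto m = static_cast<std::uint64_t>(std::ceil(std::sqrt(static_cast<double>(n))));

    // Baby steps: store alpha^j -> j for j = 0 .. m-1, one multiplication per new power.
    std::unordered_map<std::uint64_t, std::uint64_t> table;
    std::uint64_t aj = 1;
    for (std::uint64_t j = 0; j < m; ++j) {
        table.emplace(aj, j);
        aj = aj * alpha % p;
    }

    // Giant steps: multiply beta by alpha^(-m) repeatedly and look for a match.
    const std::uint64_t factor = pow_mod(alpha, n - m, p);  // alpha^(-m) == alpha^(n-m) mod p
    std::uint64_t gamma = beta % p;
    for (std::uint64_t i = 0; i < m; ++i) {
        auto it = table.find(gamma);
        if (it != table.end()) return i * m + it->second;   // beta = alpha^(i*m + j)
        gamma = gamma * factor % p;
    }
    return std::nullopt;  // beta is not a power of alpha
}

int main() {
    if (auto x = bsgs(2, 9, 11)) std::cout << "x = " << *x << '\n';  // prints x = 6, since 2^6 = 9 (mod 11)
}
</syntaxhighlight>
For a bignum group, a real implementation would replace the 64-bit arithmetic with a multiple-precision type and hash a canonical encoding of each group element; this is only meant to expose the structure of the algorithm.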
Diffie-Hellman is supposed to prevent eavesdroppers from learning the shared key
The article says:
> The Baby-step giant-step algorithm is often used to solve for the shared key in the Diffie Hellman key exchange, when the modulus is a prime number.
Why would the shared key "often" be "solved?" The chosen group size is a large prime number so that it can't be "solved." To say it can "often" be "solved" would imply that the protocol is often broken and therefore useless. — Preceding unsigned comment added by 73.239.7.211 (talk) 06:40, 6 June 2022 (UTC)
I propose removing that sentence, and the one that follows it, because we also find this in the "notes" section of the article:
> Usually the baby-step giant-step algorithm is used for groups whose order is prime. If the order of the group is composite then the Pohlig–Hellman algorithm is more efficient. — Preceding unsigned comment added by 73.239.7.211 (talk) 16:19, 6 June 2022 (UTC)
Runtime
The article says:
The space complexity of the algorithm is O(√n), while the time complexity of the algorithm is O(Σ_{j=1..√n} log j) due to the first part of the baby steps of repeated multiplication of the generator before storage (note that each multiplication only takes linear time in terms of the bits, as the generator is either small, or considered a constant of the problem instance).
There are multiple things wrong with this.
(1) The first part of the algorithm requires (in the presently used notation) exactly m − 2 multiplications, because the only sensible way to do this is to successively compute α^0, α^1, α^2, ..., α^(m−1). No multiplications are required for the first two terms, and from the third term on, each of these is computed with a single multiplication: just multiply the previous term by α. That sum seems to be assuming that somebody would compute e.g. α^j from scratch, using repeated squarings, instead of just multiplying the previously computed α^(j−1) by α.
(2) "note that each multiplication takes only linear time in terms of the bits,..." That's just a flat out lie. It's not possible to know a priori how many bit operations are required for each group operation. It would be completely different for e.g., integers modulo , an elliptic curve over a finite field, or a cyclic subgroup of some group of matrices. And no, the generator is not necessarily "small". So the complexity is group operations. One can't say anything at all about the number of bit operations for an arbitrary cyclic group.
— Preceding unsigned comment added by 45.25.113.237 (talk) 12:49, 18 February 2023 (UTC)
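To make point (1) concrete, here is a small illustrative loop (example parameters only, not from the article) that produces the baby steps by keeping a running power, using exactly m − 2 group multiplications, since α^0 = 1 and α^1 = α come for free; computing each power from scratch by square-and-multiply would instead cost on the order of the sum of log j over all j.
<syntaxhighlight lang="cpp">
// Counts the group multiplications needed for the baby steps when each power is
// obtained from the previous one (example parameters only).
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t p = 1000003, alpha = 2, m = 1000;  // arbitrary example values, p prime
    std::uint64_t mults = 0;

    std::uint64_t power = alpha % p;            // alpha^1: no multiplication needed (alpha^0 = 1 is also free)
    for (std::uint64_t j = 2; j < m; ++j) {     // alpha^2 .. alpha^(m-1)
        power = power * alpha % p;              // one multiplication per new power
        ++mults;                                // a real implementation would store (alpha^j, j) here
    }
    std::cout << "multiplications: " << mults << " (m - 2 = " << m - 2 << ")\n";
}
</syntaxhighlight>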