This tests a random 100,000-bit number in 4 seconds on my PC, and a one-million-bit number in about 400 seconds.
```python
def isPerfectSquare(self, num):
    x = xSquared = 0
    for i in range(num.bit_length() >> 1, -1, -1):
        tmp = xSquared + (x << (i+1)) + (1 << (i+i))
        if tmp <= num:
            xSquared = tmp
            x += 1 << i
    return xSquared == num
```
Note that (x + 2^i)^2 = x^2 + 2^(i+1) * x + 2^(2i). So if I keep track of both x and x^2, I can compute x + 2^i as well as its square with just additions and shifts. This is good for really big numbers, where multiplication can't be seen as an O(1) time operation anymore but takes O(log(m) * log(n)) for multiplying numbers m and n, while adding them only takes O(log(m) + log(n)), and shifts are fast, too.
I start with x=0 and then test adding decreasing powers of 2. For example, with num=100, I try adding 8, 4, 2 and 1 to x. I leave out 4 and 1, as adding them would make x^2 too large. So I end up with x = 8 + 2 = 10 and xSquared = 100.
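For anyone who wants to try the walkthrough above, here is a standalone sketch of the same method (hypothetical name `is_perfect_square`, the `self` parameter dropped so it runs outside a class):

```python
def is_perfect_square(num):
    # Build the square root bit by bit, tracking x and x^2
    # with only additions and shifts (no multiplication).
    x = x_squared = 0
    for i in range(num.bit_length() >> 1, -1, -1):
        # Candidate square: (x + 2^i)^2 = x^2 + 2^(i+1)*x + 2^(2i)
        tmp = x_squared + (x << (i + 1)) + (1 << (i + i))
        if tmp <= num:
            x_squared = tmp
            x += 1 << i
    return x_squared == num

print(is_perfect_square(100))  # True  (the num=100 example: x ends up 8+2=10)
print(is_perfect_square(99))   # False
```

For num=100 the loop tries i = 3, 2, 1, 0: adding 8 gives x^2 = 64 (kept), adding 4 would give 144 (skipped), adding 2 gives exactly 100 (kept), and adding 1 would give 121 (skipped).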
O(log^2 n) in what algorithm? Also, by n do you mean the number of bits/digits or the number being multiplied?
With the algorithm that everybody learns in school as a child: multiplying each digit of the first number with each digit of the other number.
By n I mean the number being multiplied, but since there are two numbers, I should have said O(log(m) * log(n)), where m and n are the two numbers being multiplied, and O(log(m) + log(n)) for addition/shift. Fixed the text, thanks.