Let us discuss when n is unbounded and can be arbitrarily large.

• To begin with, I propose two algorithms as follows.

``````public class Solution {
    public boolean isPowerOfThree(int n) {
        while (n > 0 && n % 3 == 0) n /= 3;
        return n == 1;
    }
}
``````

This might be the most straightforward method one could come up with. The runtime is proportional to the number of factors of `3` in `n`, which in the worst case is `O(log n)`. To do better, one could apply the squaring technique (thanks to StefanPochmann for the better name) to find a power of `3` that is no less than `n`. See the code below.

``````public class Solution {
    public boolean isPowerOfThree(int n) {
        long a = 3;
        for (; a < n; a *= a);
        return n > 0 && a % n == 0;
    }
}
``````

What is the runtime of the code above? Well, it is in fact `O(log log n)`. Why? All the powers of `3` that are computed are 3, 3^2, 3^4, 3^8, ..., 3^(log n). If you pay attention to the exponents, they are 1, 2, 4, 8, ..., log n. This is a geometric sequence with the first term being `1`, the last term being `log n`, and the ratio being `2`, which indicates that there are O(log log n) terms in the sequence.
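As a quick sanity check (a sketch of mine, not from the original post), Python's arbitrary-precision ints make it easy to count the squaring steps directly:

```python
def squaring_iterations(n):
    """Count how many times `a *= a` runs before a >= n."""
    a, steps = 3, 0
    while a < n:
        a *= a
        steps += 1
    return steps

# For n = 3**1000 the exponents go 1, 2, 4, ..., 1024,
# so the loop runs only ceil(log2(1000)) = 10 times.
print(squaring_iterations(3**1000))  # -> 10
```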

To be fair, the O(log n) and O(log log n) bounds above are valid only if we assume that integer arithmetic takes O(1) time on the machine. This assumption, in fact, suggests a purely O(1) algorithm (see andrei3's solution), i.e., `return n > 0 && (1162261467 % n == 0)`, where the magic number 1162261467 is the largest power of 3 that fits into a 32-bit integer. But what happens if we do not make such an assumption? Will the two bounds still hold?
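A minimal sketch of that O(1) idea (the magic constant is simply 3^19, the largest power of 3 below 2^31):

```python
MAX_POW3 = 3**19          # 1162261467, the largest power of 3 in a signed 32-bit int
assert MAX_POW3 == 1162261467

def is_power_of_three(n):
    # Since 3 is prime, the divisors of 3**19 are exactly the powers of 3.
    return n > 0 and MAX_POW3 % n == 0
```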

I would like to discuss the potentially optimal solution with you when `n` is no longer constrained by an `int` but can be arbitrarily large. For example, the type of `n` can be `BigInteger` in Java. Note that, under this high-precision setting, representing `n` requires O(log n) bits. We also assume that `Math.pow` is not provided, and only basic arithmetic (i.e., `+, -, *, /, %`) is allowed. Specifically, the sum of an x-bit number and a y-bit number takes O(x + y) time to compute, and the product takes O(xy) time.

Let us analyze the runtime of the first algorithm. Note that `n / 3` in this case takes O(log n) time, since 3 consists of only O(1) bits. In the worst case, the algorithm produces a sequence like `n, n/3, n/9, n/27, ...`. There are O(log n) terms in the sequence, so the runtime is O(log(n) + log(n/3) + log(n/9) + log(n/27) + ...) = O(log^2 n).
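Since Python ints are already arbitrary precision, the first algorithm carries over verbatim; counting the divisions confirms there are O(log n) of them (a sketch of mine, instrumented for illustration):

```python
def is_power_of_three_div(n):
    """First algorithm: strip factors of 3; also report the division count."""
    divisions = 0
    while n > 0 and n % 3 == 0:
        n //= 3
        divisions += 1
    return n == 1, divisions

ok, divs = is_power_of_three_div(3**1000)
# 1000 divisions, each costing O(log n) bit operations -> O(log^2 n) total
```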

How about the second algorithm? Will it still beat the first one? Let us do the analysis.
As we mentioned above, the second algorithm computes a sequence like 3, 3^2, 3^4, 3^8, ..., 3^(log n) of O(log log n) terms. The difference here is that `a *= a` takes O(log^2 a) time. So, the total runtime is
sum(log^2 3^(2^i)) = sum(4^i), for i = 1, 2, ..., log log n, which is O(log^2 n).
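For completeness, the squaring algorithm on arbitrary-precision ints (a sketch; correctness relies on 3 being prime, so `n` divides a power of 3 iff `n` is itself a power of 3):

```python
def is_power_of_three_sq(n):
    """Second algorithm: square up to at least n, then test divisibility."""
    if n <= 0:
        return False
    a = 3
    while a < n:
        a *= a  # doubles the exponent; naive multiplication costs O(log^2 a)
    return a % n == 0
```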

It is surprising to see that the two algorithms have the same runtime when `n` is unbounded.

I know, theoretically speaking, the second algorithm can be further optimized using the Fast Fourier transform. By using FFT, the product of two x-bit integers can be computed in O(x log x) time. Then, the runtime of the second algorithm improves to sum(log 3^(2^i) · log log 3^(2^i)) = sum(2^i · i), for i = 1, 2, ..., log log n, which is O(log n · log log n). But this is a little bit beyond the original purpose of the problem.

• Whoa, great stuff, thanks. I hadn't even really thought about big integers. The "doubling" (wouldn't "squaring" be better?) is smart there. The O(log^2 n) bound is tight for both ways, isn't it? I mean, it's really Θ(log^2 n), no? If it's only O(...) then you don't know whether/which one is faster.

I wonder whether there's a faster way. I think we can agree on the number being stored in binary so that we can access individual bits in O(1), right? Maybe powers of 3 have a pattern that could be checked more efficiently, hopefully in O(log n). The last bit is of course always 1, because powers of 3 are odd. The penultimate bit alternates between 1 and 0, and the bit before that is always 0. Here are the first twenty powers:

``````>>> for e in range(20):
...     print(bin(3**e)[2:].rjust(33))

1
11
1001
11011
1010001
11110011
1011011001
100010001011
1100110100001
100110011100011
1110011010101001
101011001111111011
10000001101111110001
110000101001111010011
10010001111101101111001
110110101111001001101011
10100100001101011101000001
111101100101000010111000011
10111000101111001000101001001
1000101010001101011001111011011
``````

• One idea I had in mind was that if `n` is a power of 3, then `3n = (2^1 + 2^0) * n = (n << 1) + n`, or equivalently `3n = (n << 2) - n`, but I have no idea how to continue.
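The shift identities do check out once parenthesized (`<<` binds more loosely than `+`/`-` in most languages); a quick check:

```python
n = 3**10  # the identities below hold for any n, power of 3 or not
assert 3 * n == (n << 1) + n   # 3n = 2n + n
assert 3 * n == (n << 2) - n   # 3n = 4n - n
```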

• I tried looking for a pattern as well, until I decided to give base 3 a try.
Java's `BigInteger` uses Schönhage's algorithm to convert a number to base 3.
The complexity is O(n log n log log n), where n is the number of digits.
Taking n = log(N), where N is the actual value, we get
O(log(N) · log log(N) · log log log(N)).
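A minimal Python sketch of the ternary test (with a naive base-3 helper of my own, since Python has no built-in radix-3 conversion; Java's `BigInteger.toString(3)` is far faster on big inputs):

```python
def to_base3(n):
    """Naive base-3 conversion; quadratic in the digit count, for illustration only."""
    digits = []
    while n:
        n, r = divmod(n, 3)
        digits.append(str(r))
    return ''.join(reversed(digits)) or '0'

def is_power_of_three_ternary(n):
    # In base 3, a power of 3 is a '1' followed only by '0's.
    s = to_base3(n)
    return n > 0 and s[0] == '1' and set(s[1:]) <= {'0'}
```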

• @qgambit2 Nice. Now if only someone could implement that and also lixx2100's two solutions and show runtimes for large powers of 3... I'm curious about the results, but I don't want to do it :-)

• Thanks @StefanPochmann for the better name. Yes, the runtimes are both tight in the worst case, i.e., Θ(log^2 n). But the first algorithm might have an advantage when `n` is not a power of 3 :-).

I also tried to find a pattern like the one you gave, but I don't have any good ideas.

• Here are the test results.

``````
public boolean isPowerOfThree1(BigInteger n) {
    // a power of 3 in base 3 is a '1' followed only by '0's
    String s = n.toString(3);
    return s.indexOf("1", 0) == 0 && s.indexOf("1", 1) < 0 && s.indexOf("2", 1) < 0;
}

private static BigInteger mult = new BigInteger("3");
public boolean isPowerOfThree2(BigInteger n) {
    BigInteger a = new BigInteger("3");
    for (; a.compareTo(n) < 0; a = a.multiply(mult));
    return a.mod(n).equals(BigInteger.ZERO);
}

public static void main(String[] args) {
    Random r = new Random();
    List<BigInteger> list = new ArrayList<BigInteger>();
    for (int i = 0; i < 10000; i++) {
        list.add(new BigInteger(10000, r)); // 10000 random numbers of 10000 bits each
    }
    PowerOfThree t = new PowerOfThree();
    long l1 = System.currentTimeMillis();
    for (BigInteger b : list) {
        t.isPowerOfThree1(b);
    }
    long l2 = System.currentTimeMillis();
    System.out.println(l2 - l1);

    l1 = System.currentTimeMillis();
    for (BigInteger b : list) {
        t.isPowerOfThree2(b);
    }
    l2 = System.currentTimeMillis();
    System.out.println(l2 - l1);
}
``````

For 1k bits per number, the ternary check is about 40% faster; for 10k bits per number, it is about 5x faster.
