To begin with, I propose two algorithms as follows.

```
public class Solution {
    public boolean isPowerOfThree(int n) {
        // strip factors of 3 while they divide evenly
        while (n > 0 && n % 3 == 0) n /= 3;
        // a power of three reduces to exactly 1
        return n == 1;
    }
}
```

This might be the most straightforward method one could come up with. The runtime is proportional to the number of factors of `3` in `n`, and in the worst case it is `O(log n)`. To do better, one could apply the **doubling squaring technique** (thanks to StefanPochmann for a better name) to find a power of `3` that is no less than `n`. See the code below.

```
public class Solution {
    public boolean isPowerOfThree(int n) {
        // square a = 3, 9, 81, ... until it reaches at least n;
        // a 64-bit long avoids overflow past Integer.MAX_VALUE
        long a = 3;
        for (; a < n; a *= a);
        // a is a power of 3, so n divides a iff n is a power of 3
        return n > 0 && a % n == 0;
    }
}
```

What is the runtime of the code above? Well, it is in fact `O(log log n)`. Why? All the powers of `3` that are computed are 3, 3^{2}, 3^{4}, 3^{8}, ..., 3^{log n}. If you pay attention to the exponents, they are 1, 2, 4, 8, ..., log n. This is a geometric sequence with the first term being `1`, the last term being `log n`, and the ratio being `2`, which indicates that there are O(log log n) terms in it.
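To see the double-log growth concretely, here is a small counter (a hypothetical helper of my own, not part of the solution) that tallies how many times the loop squares `a`:

```java
public class SquaringCount {
    // Count how many times `a *= a` runs before a reaches n.
    // For any 32-bit input this never exceeds 5.
    public static int squarings(int n) {
        long a = 3;
        int count = 0;
        for (; a < n; a *= a) count++;
        return count;
    }
}
```

For n = 2^{31} - 1 the loop squares only 5 times, in line with log_2(log_3 n) ≈ 4.3.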

To be fair, the O(log n) and O(log log n) bounds above are valid only if we assume that integer arithmetic takes O(1) time on the machine. This assumption, in fact, suggests a purely O(1) algorithm (see andrei3's solution), i.e., `return n > 0 && (1162261467 % n == 0)`, where the magic number 1162261467 = 3^{19} is the largest power of 3 that fits into a 32-bit integer. But what happens if we do not make such an assumption? Will the two bounds still hold?
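Spelled out as a full class, that constant-time check is simply:

```java
public class Solution {
    // 1162261467 == 3^19, the largest power of 3 inside a 32-bit int;
    // a positive n divides it exactly when n itself is a power of 3
    public boolean isPowerOfThree(int n) {
        return n > 0 && 1162261467 % n == 0;
    }
}
```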

I would like to discuss the potential optimal solution with you if `n` is no longer constrained by an `int` but can be arbitrarily large. For example, the type of `n` can be `BigInteger` in Java. Note that, under this high-precision computation setting, representing `n` requires O(log n) bits. We also assume that `Math.pow` is not provided, and only basic arithmetic (i.e., `+, -, *, /, %`) is allowed. Specifically, the sum of an x-bit number and a y-bit number takes O(x + y) time to compute, and the product takes O(xy).

Let us analyze the runtime of the first algorithm. Note that `n / 3` in this case takes O(log n) time since 3 consists of only O(1) bits. In the worst case, the algorithm would end up with a sequence like `n, n/3, n/9, n/27, ...`. There are O(log n) terms in the sequence, and the runtime is O(log(n) + log(n/3) + log(n/9) + log(n/27) + ...) = O(log^{2}n).
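Under this model the first algorithm carries over directly; a minimal `BigInteger` sketch (the class and method names here are my own) looks like:

```java
import java.math.BigInteger;

public class DividePowerOfThree {
    private static final BigInteger THREE = BigInteger.valueOf(3);

    // Repeatedly divide out factors of 3; each division of a
    // k-bit number by 3 costs O(k) under the stated cost model.
    public static boolean isPowerOfThree(BigInteger n) {
        while (n.signum() > 0 && n.mod(THREE).signum() == 0) {
            n = n.divide(THREE);
        }
        return n.equals(BigInteger.ONE);
    }
}
```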

How about the second algorithm? Will it still beat the first one? Let us do the analysis.

As we mentioned above, the second algorithm involves a sequence like 3, 3^{2}, 3^{4}, 3^{8}, ..., 3^{log n} of O(log log n) terms. A difference here is that `a *= a` now takes O(log^{2}a) time. So, the total runtime is

sum(log^{2} 3^{2^i}), for i = 1, 2, ..., log log n.

Since log^{2} 3^{2^i} = (2^i log 3)^{2} = Θ(4^{i}), this geometric sum is dominated by its last term, giving O(log^{2}n).
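For completeness, the squaring variant under the same arbitrary-precision model, again as a sketch with names of my own choosing:

```java
import java.math.BigInteger;

public class SquarePowerOfThree {
    private static final BigInteger THREE = BigInteger.valueOf(3);

    // Square a = 3, 9, 81, ... until a >= n; since a is a power of 3,
    // n divides a exactly when n is itself a power of 3.
    public static boolean isPowerOfThree(BigInteger n) {
        if (n.signum() <= 0) return false;
        BigInteger a = THREE;
        while (a.compareTo(n) < 0) a = a.multiply(a);
        return a.mod(n).signum() == 0;
    }
}
```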

It is surprising to see that the two algorithms have the same runtime when `n` is unbounded.

I know, theoretically speaking, the second algorithm can be further optimized using the fast Fourier transform. Using FFT, the product of two x-bit integers can be computed in O(x log x) time. Then the runtime of the second algorithm can be further improved to sum(log 3^{2^i} log log 3^{2^i}), for i = 1, 2, ..., log log n, which is O(log n * log log n). But this is a little bit beyond the original purpose of the problem.