Let us discuss the case when n is unbounded and can be arbitrarily large.

  • 2

    To begin with, I propose two algorithms as follows.

    public class Solution {
        public boolean isPowerOfThree(int n) {
            while (n > 0 && n % 3 == 0) n /= 3;
            return n == 1;
        }
    }

    This might be the most straightforward method one could come up with. The runtime is proportional to the number of factors of 3 in n, which is O(log n) in the worst case. To do better, one could apply the squaring technique (thanks to StefanPochmann for a better name) to find a power of 3 that is no less than n. See the code below.

    public class Solution {
        public boolean isPowerOfThree(int n) {
            long a = 3;
            for (; a < n; a *= a);
            return n > 0 && a % n == 0;
        }
    }

    What is the runtime of the code above? Well, it is in fact O(log log n). Why? The powers of 3 that are computed are 3, 3^2, 3^4, 3^8, ..., 3^(log n). If you pay attention to the exponents, they are 1, 2, 4, 8, ..., log n. This is a geometric sequence with the first term being 1, the last term being log n, and the ratio being 2, which indicates that there are O(log log n) terms in the sequence.
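    As a quick sanity check of the iteration count, here is a small sketch (the class and method names are mine, not from the solution above) that counts how many times the squaring loop runs for the largest 32-bit power of 3:

    ```java
    // Counts the iterations of the squaring loop `a *= a` until a >= n.
    // For n = 3^19 = 1162261467, the exponents visited are 1, 2, 4, 8, 16, 32,
    // so the loop body runs 5 times -- roughly log2(log3(n)) iterations.
    public class SquaringCount {
        public static int countSquarings(long n) {
            int count = 0;
            for (long a = 3; a < n; a *= a) count++;
            return count;
        }

        public static void main(String[] args) {
            System.out.println(countSquarings(1162261467L)); // prints 5
        }
    }
    ```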

    To be fair, the O(log n) and O(log log n) bounds above are valid only if we assume that integer arithmetic takes O(1) time on the machine. This assumption, in fact, suggests a purely O(1) algorithm (see andrei3's solution), i.e., return n > 0 && (1162261467 % n == 0), where the magic number 1162261467 is the largest power of 3 that fits into a 32-bit integer. But what happens if we do not make such an assumption? Will the two bounds still hold?
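    For reference, here is a small check of the magic-number trick (the class name is my own). It works because 3 is prime, so every divisor of 3^19 is itself a power of 3:

    ```java
    import java.math.BigInteger;

    // Verifies that 1162261467 is 3^19 and demonstrates the O(1) divisibility
    // test: n is a power of 3 iff n > 0 and n divides 3^19.
    public class MagicNumber {
        static final int MAX_POW3 = 1162261467; // 3^19, largest power of 3 in int range

        public static boolean isPowerOfThree(int n) {
            return n > 0 && MAX_POW3 % n == 0;
        }

        public static void main(String[] args) {
            System.out.println(BigInteger.valueOf(3).pow(19).intValueExact()); // prints 1162261467
            System.out.println(isPowerOfThree(243)); // true (243 = 3^5)
            System.out.println(isPowerOfThree(245)); // false
        }
    }
    ```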

    I would like to discuss the potential optimal solution with you when n is no longer constrained by an int but can be arbitrarily large. For example, the type of n can be BigInteger in Java. Note that, under this high-precision computation setting, representing n requires O(log n) bits. We also assume that Math.pow is not provided, and only basic arithmetic (i.e., +, -, *, /, %) is allowed. Specifically, the sum of an x-bit number and a y-bit number takes O(x + y) time to compute, and the product takes O(xy) time.

    Let us analyze the runtime of the first algorithm. Note that n / 3 in this case takes O(log n) time, since 3 consists of only O(1) bits. In the worst case, the algorithm would end up with a sequence like n, n/3, n/9, n/27, .... There are O(log n) terms in the sequence, and the runtime is O(log(n) + log(n/3) + log(n/9) + log(n/27) + ...) = O(log^2 n).
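    A BigInteger version of the first algorithm could look like the sketch below (names are mine). Each division by 3 touches all O(log n) bits, which is where the O(log^2 n) total comes from:

    ```java
    import java.math.BigInteger;

    // Sketch of the first algorithm for arbitrarily large n: repeatedly divide
    // by 3 while divisible, then check whether 1 remains. Each mod/divide costs
    // O(log n) bit operations, and there are O(log n) of them in the worst case.
    public class BigPowerOfThree {
        static final BigInteger THREE = BigInteger.valueOf(3);

        public static boolean isPowerOfThree(BigInteger n) {
            if (n.signum() <= 0) return false;
            while (n.mod(THREE).signum() == 0) n = n.divide(THREE);
            return n.equals(BigInteger.ONE);
        }

        public static void main(String[] args) {
            System.out.println(isPowerOfThree(THREE.pow(1000)));                     // true
            System.out.println(isPowerOfThree(THREE.pow(1000).add(BigInteger.ONE))); // false
        }
    }
    ```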

    How about the second algorithm? Will it still beat the first one? Let us do the analysis.
    As we mentioned above, the second algorithm involves a sequence like 3, 3^2, 3^4, 3^8, ..., 3^(log n) of O(log log n) terms. A difference here is that a *= a now takes O(log^2 a) time. So, the total runtime is
    sum(log^2(3^(2^i))), for i = 1, 2, ..., log log n. Since log(3^(2^i)) = 2^i log 3, this is a geometric sum of terms O(4^i), dominated by its last term 4^(log log n) = log^2 n, so the total is O(log^2 n).

    It is surprising to see that the two algorithms have the same runtime when n is unbounded.

    I know, theoretically speaking, the second algorithm can be further optimized using the Fast Fourier Transform. By using FFT, the product of two x-bit integers can be computed in O(x log x) time. Then, the runtime of the second algorithm can be further improved to sum(log(3^(2^i)) * log log(3^(2^i))), for i = 1, 2, ..., log log n, which is O(log n * log log n). But this is a little bit beyond the original purpose of the problem.

  • 2

    Whoa, great stuff, thanks. I hadn't even really thought about big integers. The "doubling" (wouldn't "squaring" be better?) is smart there. The O(log^2 n) bound is tight for both ways, isn't it? I mean, it's really Θ(log^2 n), no? If it's only O(...) then you don't know whether/which one is faster.

    I wonder whether there's a faster way. I think we can agree on the number being stored in binary so that we can access individual bits in O(1), right? Maybe powers of 3 have a pattern that could be checked more efficiently, hopefully in O(log n). The last bit is of course always 1, because powers of 3 are odd. The penultimate bit alternates between 1 and 0, and the bit before that is always 0. Here are the first twenty powers:

    >>> for e in range(20):
    ...     print('{:b}'.format(3**e))

  • 0

    I thought about this yesterday but didn't have a clue yet.

  • 0

    One idea I had in mind was that if n is a power of 3, then 3n = (2^1 + 2^0) * n = (n << 1) + n, or equivalently 3n = (n << 2) - n, but I have no idea how to continue.
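    The two shift identities are easy to confirm; here is a tiny check (class and method names are mine). Note the parentheses matter: in Java, << binds more loosely than -, so (n << 2) - n must be written with explicit grouping:

    ```java
    // Confirms 3n = (n << 1) + n = (n << 2) - n for a sample value.
    public class TripleByShift {
        public static long tripleAdd(long n) { return (n << 1) + n; } // 2n + n
        public static long tripleSub(long n) { return (n << 2) - n; } // 4n - n

        public static void main(String[] args) {
            System.out.println(tripleAdd(27)); // prints 81
            System.out.println(tripleSub(27)); // prints 81
        }
    }
    ```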

  • 0

    I tried looking for a pattern too, until I decided to give base 3 a try.
    Java's BigInteger uses the Schönhage algorithm to convert a number to base 3;
    the complexity is O(n log n log log n), where n is the number of digits.
    Taking n = log(N), where N is the actual value, we get
    O(log(N) * log log(N) * log log log(N)).

  • 0

    @qgambit2 Nice. Now if only someone could implement that and also lixx2100's two solutions and show runtimes for large powers of 3... I'm curious about the results, but I don't want to do it :-)

  • 0

    Thanks @StefanPochmann for a better name. Yes, the runtimes are both tight in the worst case, i.e., Θ(log^2 n). But the first algorithm might have an advantage when n is not a power of 3 :-).

    I also tried to find a pattern similar to the one you gave, but I don't have any good ideas.

  • 0

    Here are the test results:

    import java.math.BigInteger;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class PowerOfThree {
        // base-3 check: a power of 3 in ternary is a single "1" followed by zeros
        public boolean isPowerOfThree1(BigInteger n) {
            String s = n.toString(3);
            if (s.indexOf("1", 0) != 0 || s.indexOf("1", 1) > -1 || s.indexOf("2", 1) > -1)
                return false;
            return true;
        }

        private static BigInteger mult = new BigInteger("3");

        // multiply up to a power of 3 no less than n, then test divisibility
        public boolean isPowerOfThree2(BigInteger n) {
            BigInteger a = new BigInteger("3");
            for (; a.compareTo(n) < 0; a = a.multiply(mult));
            return a.mod(n).equals(BigInteger.ZERO);
        }

        public static void main(String[] args) {
            Random r = new Random();
            List<BigInteger> list = new ArrayList<BigInteger>();
            for (int i = 0; i < 10000; i++) {
                list.add(new BigInteger(10000, r));
            }
            PowerOfThree t = new PowerOfThree();
            long l1 = System.currentTimeMillis();
            for (BigInteger b : list) t.isPowerOfThree1(b);
            long l2 = System.currentTimeMillis();
            System.out.println("ternary:  " + (l2 - l1) + " ms");
            l1 = System.currentTimeMillis();
            for (BigInteger b : list) t.isPowerOfThree2(b);
            l2 = System.currentTimeMillis();
            System.out.println("multiply: " + (l2 - l1) + " ms");
        }
    }

    For 1k bits per number, the ternary method is about 40% faster;
    for 10k bits per number, the ternary method is about 5x faster.
