5-line C++ O(N) solution with comments on time complexity about O(N) vs O(N log max(nums))



    This is my C++ version, inspired by @StefanPochmann's Python solution.

    Note: Only distinct values in the original array nums matter to the final result.

    So there is a minor variation, shown in the commented-out line of the code below: we could use a hash set noDup to store only the distinct values from the original array nums, after which nums itself is no longer needed.

    This variation spends one extra O(N) pass so that each of the 32 inner loops that build the prefix hash set pre iterates over noDup.size() elements instead of N. Depending on the exact test cases, this may or may not reduce the running time.

    For OJ's test cases, I saw the total running time drop from about 318 ms to 282 ms.
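
    For concreteness, here is a sketch of that variant with noDup enabled (the original version, with the line left commented out, is shown in the code further below):

        int findMaximumXOR(vector<int>& nums) {
            unordered_set<int> noDup(nums.begin(), nums.end()); // one extra O(N) pass: keep distinct values only
            for (int i = 31, res = 0; ; res <<= 1) {
                unordered_set<int> pre;
                for (int a : noDup) pre.insert(a >> i);         // inner loop length is now noDup.size()
                for (int p : pre) if (pre.count(res ^ 1 ^ p) && ++res) break;
                if (!i--) return res;
            }
        }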

    Brief Explanation of Algorithm: The goal is to find the maximum value res = n1^n2, where n1, n2 are elements of nums[]. Since the value of res is dominated by its higher-order bits, we try to set each bit of res to its maximum possible value, one at a time, from bit i = 31 down to i = 0.

    In the loop, i is the bit position currently being built; res is the intermediate result whose first 31-i most significant bits have already been built to their maximum, and whose last bit (the one at position i of the final answer) is initialized to 0. Maximizing res is then equivalent to asking whether we can find p1, p2 among the (32-i)-bit prefixes of nums[] (i.e., the elements of pre) such that

    • p1^p2 = res^1 (note that res^1 simply sets the last bit of res to 1 and keeps the other bits unchanged, since that bit is currently 0),

    and this is equivalent to p1 = res^1^p2, i.e., we just check whether res^1^p2 is in the hash set pre for some p2 that is also in pre.

    If such p1, p2 exist in pre, the maximum value of the first 32-i bits is res+1 (hence the ++res); otherwise res stays unchanged. Repeat this process until all 32 bits are built.

        int findMaximumXOR(vector<int>& nums) {
            //unordered_set<int> noDup(nums.begin(), nums.end()); // remove duplicates
            for (int i = 31, res = 0; ; res <<= 1) {              // res: best answer prefix built so far
                unordered_set<int> pre;                           // (32-i)-bit prefixes of all numbers
                for (int a : nums /*noDup*/) pre.insert(a >> i);
                // try to set the next bit: look for p1, p2 in pre with p1 ^ p2 == res ^ 1
                for (int p : pre) if (pre.count(res ^ 1 ^ p) && ++res) break;
                if (!i--) return res;                             // all 32 bits built
            }
        }
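
    As a quick sanity check, here is a minimal driver sketch (not part of the original post; it assumes the method sits in the usual LeetCode Solution class). For the classic sample nums = {3, 10, 5, 25, 2, 8} the answer is 28 = 5 ^ 25, with res passing through 1, 3, 7, 14, 28 over the last five bit positions:

        #include <iostream>
        #include <unordered_set>
        #include <vector>
        using namespace std;

        class Solution {
        public:
            int findMaximumXOR(vector<int>& nums) {
                for (int i = 31, res = 0; ; res <<= 1) {
                    unordered_set<int> pre;
                    for (int a : nums) pre.insert(a >> i);
                    for (int p : pre) if (pre.count(res ^ 1 ^ p) && ++res) break;
                    if (!i--) return res;
                }
            }
        };

        int main() {
            vector<int> nums{3, 10, 5, 25, 2, 8};
            cout << Solution().findMaximumXOR(nums) << endl;  // prints 28 (= 5 ^ 25)
        }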
    

    Comments on Time Complexity: Either way, the algorithm above runs in O(N) if the outer loop length 32 is treated as a constant. Whether that 32 should instead be counted as O(log max(nums[])), contributing an overall O(N log max(nums[])) time complexity, depends, in my opinion, on how the range condition 0 <= nums[i] < 2^31 is perceived:

    • If it is considered an essential constraint of the problem, then 32 should be treated as a constant.
    • If it is perceived as a derived constraint coming from the range of the int type, as in many common programming languages, then the 32 is merely a consequence of the data-type limitation, a non-essential condition from the original problem's point of view, so the factor log max(nums[]) does come into play in the algorithm's complexity.

    For example, if the problem did not explicitly state the condition 0 <= nums[i] < 2^31 but only gave vector<int> nums as the input argument of the method, then I would say the factor log max(nums[]) clearly contributes to the complexity, because the range of int is not imposed by the problem itself but simply by how the data happens to be represented.

    If you consider this problem as a pure math question (i.e., arbitrary non-negative integers as input data), then the log factor becomes clear. Just as in many coding problems where we state O(N), O(N^2), etc. complexity, we certainly do not mean to put an upper bound on the array size N merely because of language or computing-resource limitations.
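
    To make this concrete, here is a sketch of a variant (the function name findMaximumXORGeneral and the bit-width computation are not from the original post) where the number of outer iterations is derived from max(nums[]) instead of being hard-coded to 32, so the log max(nums[]) factor is explicit; it assumes nums is non-empty:

        #include <algorithm>
        #include <unordered_set>
        #include <vector>
        using namespace std;

        // Same prefix idea, but the outer loop runs roughly log2(max(nums[])) times instead of 32.
        int findMaximumXORGeneral(vector<int>& nums) {
            int mx = *max_element(nums.begin(), nums.end());
            int bits = 0;
            while (bits < 31 && (1 << bits) <= mx) ++bits;    // number of bits needed to represent mx
            int res = 0;
            for (int i = bits - 1; i >= 0; --i) {
                res <<= 1;                                    // append a new 0 bit that we try to turn into 1
                unordered_set<int> pre;
                for (int a : nums) pre.insert(a >> i);        // (bits - i)-bit prefixes
                for (int p : pre) if (pre.count(res ^ 1 ^ p)) { ++res; break; }
            }
            return res;
        }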



    Can you explain the algorithm itself in a bit more detail?



    @daxietx I added a brief explanation of the algorithm in the first post. Basically, it builds the 32 bits of res one by one, from the most significant position to the least significant one. (@StefanPochmann has also explained his solution in his original post.)

