# 16 ms C++ O(n) time O(1) space solution

• Very straightforward solution:

1. Count how many 1s should appear in each bit position for the numbers 1 to n.

2. Count how many 1s actually appear in the array.

3. Set the bits that have extra 1s and return the result.

---

Short explanation:

Say you have the numbers 1 to n plus one extra x: then you have an extra 1 in every bit position where x has a 1, and an extra 0 wherever x has a 0.

If x appears more than twice, then each additional copy of x replaces some other number from 1 to n. That replacement can only increase, or at worst leave unchanged, the count of 1s in the positions where x has a 1.

Here I scan the array once for each bit. Since there are at most 32 or 64 bits, I consider this an O(n) solution. I improved it a little by scanning only the number of bits actually needed.

```cpp
int findDuplicate(vector<int>& nums) {
    int result = 0, count, bit, i;

    // Only scan bits up to the largest index; higher bits cannot differ.
    for (bit = 1; bit > 0 && bit < (int)nums.size(); bit <<= 1) {
        count = 0;
        // Surplus of 1s on this bit: actual (from nums) minus expected
        // (from indices 0..n, where index 0 contributes nothing).
        for (i = 0; i < (int)nums.size(); i++)
            count += bool(nums[i] & bit) - bool(i & bit);
        if (count > 0) result |= bit;
    }
    return result;
}
```

• Can you explain why it would work if the duplicated number is repeated more than once?

• Because I only care about extra 1s on each bit. If it is repeated more than once there will be more 1s, which is more than welcome. I added some explanation there.

• I see, this is an interesting solution. But just one more question: isn't maxbit O(log n), so the time complexity will be O(n log n)?

• In that case, you're right. If you want O(n), you can use 32 or 64 as maxbit. You will happily get an O(n) algorithm, with a few extra ms in this case.

I am using maxbit simply to improve performance, since not all the bits are needed. So maxbit will stay below a small constant, and I still consider this O(n).

• I changed maxbit to a constant. Are you happy now? :)

```cpp
class Solution {
public:
    int findDuplicate(vector<int>& nums) {
        int count[32] = {0};
        int result = 0;

        // Subtract the expected 1s contributed by 1..n (indices 1..size-1).
        for (int i = 1; i < (int)nums.size(); i++)
            for (int j = 0; j < 32; j++)
                if (i & (1 << j)) count[j]--;

        // Add the 1s that actually appear in the array.
        for (int a : nums)
            for (int j = 0; j < 32; j++)
                if (a & (1 << j)) count[j]++;

        // Bits with a surplus of 1s belong to the duplicate.
        for (int j = 0; j < 32; j++)
            if (count[j] > 0) result |= 1 << j;

        return result;
    }
};
```


• Nice solution and explanation. I'd just add that by replacing more numbers with x, not only do the "too frequent" bits remain too frequent, but the "not too frequent" bits also remain not too frequent.

• I thought about this question for half an hour but couldn't think of any O(n) solution. Sadly, I came up with an O(n log n) solution. Then I opened your O(n) solution, and it turned out yours is exactly the same as mine... Didn't you realize this is an O(n log n) solution, or am I missing something?

• Oh, I see, you already replied to this question earlier. However, no matter how you change maxbit, it can't change the fact that this is an O(n log n) algorithm. When we are talking about algorithms, we should not take computer limits into account: pure theory. Just like this algorithm, `for (i = 0; i < n; ++i);` , you can't change its O(n) run time to O(1) by writing it like this: `for (i = 0; i < 9999999999999999999999999999999; ++i);`

• It's a WRONG solution, but it gets accepted.

• @Wsl_F2 What's WRONG is YOUR test case. Read the problem again.

• I am talking about the complexity for this given problem, where the numbers are all ints. Yes, if a number is big enough, with no bounded range, then it takes log N to process a single number. But that feels unfair to me, because the computer processes each number bit by bit (or several bits at a time) in any algorithm anyway. If the numbers are big enough, it even takes log N to compare two of them; then no algorithm can perform in O(N).

• Thanks for pointing out the "not too frequent" part. I view that as "too frequent" zeros. Since the argument is the same for 1s and 0s, I was too lazy to prove it.

• You are very wrong about two things:

1. Computers don't process numbers bit by bit.
2. You are confusing the range of the input data with your algorithm's complexity.

So, basically, you are telling me: if the input data range is 2^32, then my algorithm runs in O(32n); if the range is 2^64, then my algo runs in O(64n), see? It's O(n)!! If this claim were true, I could write any algorithm with O(1) complexity. An O(1) sort algorithm:

1. Write a most ugly sorting algorithm that runs in O(n^2).
2. Since it is guaranteed that N can't be larger than 9999999999999999999,
3. I can claim my ugly sorting algorithm runs in O(9999999999999999999^2) = O(1).

See, I invented an O(1) sorting algorithm, cheers!

• Well, if the number is 9999999999999999999, then the computer definitely cannot process all the bits in that number at one time. So your sorting algorithm should be O(N^2 log N)?

• No, you don't know what computer I'm using. I may use one invented in the 1950s, or one that will be invented in the year 3025. You don't know. And this is why what you claimed was so wrong: does the THEORETICAL time complexity depend on which computer you are using?

• Yes, that is why, theoretically, when dealing with BIG numbers it takes log N to process a single number: because you don't know which computer you are using. It takes log N bits to store a number. But still, people won't consider `for(i = 1; i < N; i++)` an O(N log N) thing.

• And if this algorithm can solve THIS given problem in O(32N), it is an O(N) solution. Btw, unfortunately, O(9999999999999999999^2) is by all means an O(1) bound. It might still be slow, though. Please look up the definition of O.
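For reference, this is the definition being invoked (stated here for clarity; it is not part of the original thread): big-O bounds growth up to a constant factor, so any running time bounded by a fixed constant, however large, is O(1).

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } |f(n)| \le c\,|g(n)| \text{ for all } n \ge n_0.
```

In particular, if T(n) \le K for a constant K (e.g. K = 9999999999999999999^2), then taking g(n) = 1 and c = K gives T(n) = O(1). The bound is technically correct; it just says nothing useful about speed in practice, which is the point both sides are circling.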

• OK, so you just admitted O(9999999999999999999^2) is an O(1) solution, right? Then that sorting algorithm I came up with was O(9999999999999999^2) = O(1), right? Then I invented an O(1)-time sorting algorithm; do you agree or not?
