# Two C++ implementations: O(n) 16ms, O(nlogn) 12ms

• Worst-case running time: O(n); in practice: 16 ms.
Space: O(n).

``````
#include <unordered_map>
#include <vector>
using namespace std;

class Solution {
public:
    vector<int> twoSum(vector<int>& nums, int target)
    {
        unordered_map<int, int> numsMap;   // value -> index of the numbers seen so far
        vector<int> result;

        for (int i = 0; i < nums.size(); ++i)
        {
            int addend = target - nums[i];
            auto found = numsMap.find(addend);
            if (found != numsMap.end())    // the needed complement was seen earlier
            {
                result.push_back(found->second + 1);   // 1-based indices, as the old judge expects
                result.push_back(i + 1);

                return result;
            }

            numsMap[nums[i]] = i;
        }

        return result;   // no valid pair found
    }
};
``````

• Worst-case running time: O(nlogn); in practice: 12 ms.
Space: O(n).

``````
#include <algorithm>
#include <vector>
using namespace std;

class Solution {
public:
    vector<int> twoSum(vector<int>& nums, int target) {
        vector<int> numsCopy(nums.begin(), nums.end());
        sort(numsCopy.begin(), numsCopy.end()); // O(nlogn)

        int i = 0;
        int j = numsCopy.size() - 1;

        // Two-pointer scan over the sorted copy: O(n).
        // The judge guarantees exactly one solution, so the loop always breaks.
        while (i < j)
        {
            int sum = numsCopy[i] + numsCopy[j];
            if (sum < target)
                i++;
            else if (sum > target)
                j--;
            else
            {
                break;   // numsCopy[i] + numsCopy[j] == target
            }
        }

        return findIndices(nums, numsCopy[i], numsCopy[j]);
    }

private:
    // Map the two addends back to their 1-based positions in the original array.
    vector<int> findIndices(const vector<int>& nums, int a, int b)
    {
        vector<int> result;
        bool foundA = false, foundB = false;

        for (int i = 0; i < nums.size(); i++)
        {
            if (!foundA && nums[i] == a)
            {
                result.push_back(i + 1);
                foundA = true;
            }
            else if (!foundB && nums[i] == b)   // also handles a == b (duplicate addends)
            {
                result.push_back(i + 1);
                foundB = true;
            }
        }

        return result;
    }
};
``````

• In the first algorithm, inserting each number into the hash table requires allocating a new node to store it, and that allocation takes time. So when n is much larger (and the allocation cost matters relatively less), algorithm 1 should become faster than algorithm 2; a `reserve()`-based sketch that trims part of that overhead follows below.
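
One way to trim part of that overhead, purely as a sketch (the `twoSumReserved` name and the `reserve()` call are not from the original post), is to pre-allocate the hash table's buckets so it never rehashes during the scan:

``````
#include <unordered_map>
#include <vector>
using namespace std;

// Hypothetical variant of the hash-table solution: reserving buckets for n keys
// up front avoids rehashing during the scan (it does not avoid the per-node
// allocation that each insertion still performs).
vector<int> twoSumReserved(vector<int>& nums, int target)
{
    unordered_map<int, int> numsMap;
    numsMap.reserve(nums.size());

    for (int i = 0; i < (int)nums.size(); ++i)
    {
        int addend = target - nums[i];
        auto found = numsMap.find(addend);
        if (found != numsMap.end())
            return { found->second + 1, i + 1 };   // 1-based indices
        numsMap[nums[i]] = i;
    }
    return {};
}
``````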

• In the first algorithm, why does `unordered_map<int, int> numsMap` work correctly without being explicitly initialized?

• When n is big, would algorithm 1 perform terribly?

• It actually calls the default constructor, which creates "an empty unordered_map object, containing no elements and with a size of zero". More details can be found here: http://www.cplusplus.com/reference/unordered_map/unordered_map/unordered_map/. A small check is sketched below.
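
For illustration only (not from the original thread), a minimal standalone check that a default-constructed `unordered_map` starts empty and simply grows as elements are inserted:

``````
#include <cassert>
#include <unordered_map>
using namespace std;

int main()
{
    unordered_map<int, int> numsMap;   // default constructor: empty map, size() == 0
    assert(numsMap.empty());

    numsMap[42] = 0;                   // operator[] inserts a new element on first use
    numsMap[7]  = 1;
    assert(numsMap.size() == 2);
    return 0;
}
``````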

• I think in practice it could perform badly for a big n. When n is huge, the hash table is likely to get frequent collisions, and most of the time would then be spent resolving them.

I compared the two algorithms using the test data from the Stanford online course "Algorithms: Design and Analysis, Part 1" (Programming Question 6). The test data contains 1 million integers, both positive and negative. Both algorithms were implemented in C++ without any optimisation. The result is that Algorithm 2 (using sorting) is nearly twice as fast as Algorithm 1 (using a hash table). If you are interested, I am glad to email you the code and test data; a rough timing harness is sketched below.
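
For anyone who wants to reproduce a comparison like this without the course data, a timing harness along the following lines could be used. This is only a sketch: the `hashVersion`/`sortVersion` stand-ins just report whether a pair exists (enough for timing), and the random input is not the Stanford test set.

``````
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <unordered_map>
#include <vector>
using namespace std;

// Compact stand-in for the hash-table solution.
static bool hashVersion(const vector<int>& nums, int target)
{
    unordered_map<int, int> seen;
    for (int i = 0; i < (int)nums.size(); ++i) {
        if (seen.count(target - nums[i])) return true;
        seen[nums[i]] = i;
    }
    return false;
}

// Compact stand-in for the sort + two-pointer solution (takes a copy, then sorts it).
static bool sortVersion(vector<int> nums, int target)
{
    sort(nums.begin(), nums.end());
    int i = 0, j = (int)nums.size() - 1;
    while (i < j) {
        int sum = nums[i] + nums[j];
        if (sum < target) ++i;
        else if (sum > target) --j;
        else return true;
    }
    return false;
}

int main()
{
    // 1 million random integers, positive and negative (not the Stanford data set).
    mt19937 gen(12345);
    uniform_int_distribution<int> dist(-1000000, 1000000);
    vector<int> nums(1000000);
    for (int& x : nums) x = dist(gen);
    int target = nums[0] + nums[1];              // guarantees at least one pair

    auto timeMs = [&](auto f) {
        auto t0 = chrono::high_resolution_clock::now();
        bool found = f(nums, target);
        auto t1 = chrono::high_resolution_clock::now();
        (void)found;
        return chrono::duration_cast<chrono::milliseconds>(t1 - t0).count();
    };

    cout << "hash table:          " << timeMs(hashVersion) << " ms\n";
    cout << "sort + two pointers: " << timeMs(sortVersion) << " ms\n";
    return 0;
}
``````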

• As I commented above, in practice the hash table may get frequent collisions for a huge n, and resolving those collisions is costly. If we do not tune the hash table, I think Algorithm 2 (using sorting) is likely to have better practical performance than Algorithm 1 (using a hash table). The sketch below shows how to inspect the table's load factor and bucket count.
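
To see how crowded the table actually gets, the standard container exposes its load factor and bucket count. The following is only an illustrative sketch (random keys, not the original benchmark data):

``````
#include <iostream>
#include <random>
#include <unordered_map>
using namespace std;

int main()
{
    mt19937 gen(42);
    uniform_int_distribution<int> dist(-1000000000, 1000000000);

    // Fill the table with roughly a million random keys, as in the benchmark above.
    unordered_map<int, int> numsMap;
    for (int i = 0; i < 1000000; ++i)
        numsMap[dist(gen)] = i;

    // load_factor() == size() / bucket_count(); max_load_factor() defaults to 1.0,
    // so on average each bucket holds about one element before a rehash is triggered.
    cout << "size:            " << numsMap.size() << "\n";
    cout << "buckets:         " << numsMap.bucket_count() << "\n";
    cout << "load factor:     " << numsMap.load_factor() << "\n";
    cout << "max load factor: " << numsMap.max_load_factor() << "\n";
    return 0;
}
``````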
