Two C++ implementations: O(n) 16 ms, O(n log n) 12 ms


  • 9
    X

    Worst-case running time: O(n); in practice: 16 ms.
    Space: O(n).

    class Solution {
    public:
        vector<int> twoSum(vector<int>& nums, int target)
        {
            unordered_map<int, int> numsMap;
            vector<int> result;

            for (int i = 0; i < nums.size(); ++i)
            {
                int addend = target - nums[i];
                if (numsMap.find(addend) != numsMap.end())
                {
                    result.push_back(numsMap[addend] + 1);
                    result.push_back(i + 1);

                    return result;
                }

                numsMap[nums[i]] = i;
            }

            return result; // no pair found: return an empty vector (falling off the end would be undefined behaviour)
        }
    };
    

    Worst-case running time: O(n log n); in practice: 12 ms.
    Space: O(n).

    class Solution {
    public:
        vector<int> twoSum(vector<int>& nums, int target) {
            vector<int> numsCopy(nums.begin(), nums.end());
            sort(numsCopy.begin(), numsCopy.end()); // O(nlogn)
            
            int i = 0;
            int j = numsCopy.size()-1;
            
            while (i < j)
            {
                int sum = numsCopy[i] + numsCopy[j];
                if (sum < target)
                    i++;
                else if (sum > target)
                    j--;
                else
                {
                    break;
                }
            }
            
            // Assumes a solution exists, as the problem statement guarantees.
            return GetIndexOfAddends(nums, numsCopy[i], numsCopy[j]);
        }
        
    private:
        // Recover the 1-based indices of the two addends in the original
        // array. When the addends are equal (e.g. target 6 from {3, 3}),
        // take two distinct occurrences; matching with a plain OR could
        // otherwise return more than two indices when duplicates exist.
        vector<int> GetIndexOfAddends(const vector<int>& nums, int addend1, int addend2)
        {
            int idx1 = -1, idx2 = -1;

            for (int i = 0; i < nums.size(); i++)
            {
                if (idx1 < 0 && nums[i] == addend1)
                    idx1 = i;
                else if (idx2 < 0 && nums[i] == addend2)
                    idx2 = i;
            }

            vector<int> result;
            result.push_back(min(idx1, idx2) + 1);
            result.push_back(max(idx1, idx2) + 1);
            return result;
        }
        
    };

  • 0
    W

    In the first algorithm, every insertion into the hash map needs some time to allocate a new node to store the entry. So when n is much larger (and the allocation overhead matters relatively less), algorithm 1 will be faster than algorithm 2.


  • 0
    H

    In the first algorithm, why is 'unordered_map<int, int> numsMap' not explicitly initialized but still able to work correctly?


  • 0
    F

    When n is big, would algorithm 1 perform terribly?


  • 0
    X

    It actually calls the default constructor, which creates "an empty unordered_map object, containing no elements and with a size of zero". More details can be found here: http://www.cplusplus.com/reference/unordered_map/unordered_map/unordered_map/.


  • 0
    X

    I think in practice it would be terrible for a big n. When n is huge, the hash table is likely to get frequent collisions, and most of the time will be spent resolving them.

    I compared the two algorithms using the test data from the Stanford online course "Algorithms: Design and Analysis, Part 1" (Programming Question 6). The test data contains 1 million integers, both positive and negative. Both algorithms were implemented in C++ without any optimisation. The result is that Algorithm 2 (sorting) is nearly twice as fast as Algorithm 1 (hash table). If you are interested, I would be glad to email you the code and test data.


  • 0
    X

    As I commented above, in practice the hash table may get frequent collisions for a huge n, with a high cost of collision resolution. Without optimising the hash table, I think Algorithm 2 (sorting) is likely to have better practical performance than Algorithm 1 (hash table).

