This is interesting: I tried both an O(n) and an O(n log n) solution, but surprisingly the O(n log n) solution turned out to be the faster one (beats 97.6%), while the O(n) solution only beats 68%! I submitted multiple times and the results were consistent. My guess is that creating and probing a HashMap is actually slower than scanning a plain array when the theoretical running times are close; for inputs of this size, n and n log n differ only by a small factor, so constant-factor overhead dominates. Can anyone confirm my guess?

```
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class Solution {
    public int[] twoSum(int[] nums, int target) {
        /* Sort + two pointers. Space: O(n) because we still need a copy to
           recover the original indices; Time: O(n log n) for the sort plus
           two O(n) passes. */
        int[] original = Arrays.copyOf(nums, nums.length);
        Arrays.sort(nums);
        int head = 0;
        int tail = nums.length - 1;
        while (head < tail) {
            int sum = nums[head] + nums[tail];
            if (sum < target) head++;
            else if (sum > target) tail--;
            else {
                // Map the two values back to their positions in the unsorted input.
                int[] result = new int[2];
                for (int i = 0; i < original.length; i++)
                    if (original[i] == nums[head]) {
                        result[0] = i;
                        break;
                    }
                // Scan from the back so duplicates (e.g. [3, 3] with target 6)
                // resolve to two distinct indices.
                for (int j = original.length - 1; j >= 0; j--)
                    if (original[j] == nums[tail]) {
                        result[1] = j;
                        break;
                    }
                return result;
            }
        }
        throw new IllegalArgumentException();

        /* One-pass HashMap. Space: O(n), Time: O(n) expected.
        Map<Integer, Integer> map = new HashMap<Integer, Integer>();
        for (int i = 0; i < nums.length; i++) {
            if (map.containsKey(target - nums[i])) {
                return new int[] {map.get(target - nums[i]), i};
            }
            map.put(nums[i], i);
        }
        throw new IllegalArgumentException();
        */
    }
}
```
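One way to test the guess locally instead of relying on judge percentiles is a rough micro-benchmark. This is a sketch of my own (`TwoSumBench`, the million-element input, and the fixed shuffle seed are all my choices, not anything from the judge): it runs both versions on a worst-case input where the answer is the pair of largest values, and prints wall-clock times.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class TwoSumBench {
    // Sort a copy, walk two pointers, then recover original indices: O(n log n).
    static int[] twoSumSort(int[] nums, int target) {
        int[] sorted = Arrays.copyOf(nums, nums.length);
        Arrays.sort(sorted);
        int head = 0, tail = sorted.length - 1;
        while (head < tail) {
            int sum = sorted[head] + sorted[tail];
            if (sum < target) head++;
            else if (sum > target) tail--;
            else {
                int[] result = new int[2];
                for (int i = 0; i < nums.length; i++)
                    if (nums[i] == sorted[head]) { result[0] = i; break; }
                for (int j = nums.length - 1; j >= 0; j--)
                    if (nums[j] == sorted[tail]) { result[1] = j; break; }
                return result;
            }
        }
        throw new IllegalArgumentException();
    }

    // One-pass HashMap: O(n) expected, but with boxing and hashing overhead.
    static int[] twoSumMap(int[] nums, int target) {
        Map<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            Integer j = map.get(target - nums[i]);
            if (j != null) return new int[] {j, i};
            map.put(nums[i], i);
        }
        throw new IllegalArgumentException();
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] nums = new int[n];
        // Distinct even values, so exactly one pair sums to the target below.
        for (int i = 0; i < n; i++) nums[i] = i * 2;
        int target = 2 * (n - 2) + 2 * (n - 1);

        // Fisher-Yates shuffle with a fixed seed, so the sort does real work.
        Random rnd = new Random(42);
        for (int i = n - 1; i > 0; i--) {
            int k = rnd.nextInt(i + 1);
            int tmp = nums[i]; nums[i] = nums[k]; nums[k] = tmp;
        }

        long t0 = System.nanoTime();
        int[] a = twoSumSort(nums, target);
        long t1 = System.nanoTime();
        int[] b = twoSumMap(nums, target);
        long t2 = System.nanoTime();

        System.out.printf("sort + two pointers: %.1f ms%n", (t1 - t0) / 1e6);
        System.out.printf("hashmap one-pass:    %.1f ms%n", (t2 - t1) / 1e6);
        System.out.println("both correct: "
                + (nums[a[0]] + nums[a[1]] == target
                && nums[b[0]] + nums[b[1]] == target));
    }
}
```

Numbers from a single timed run like this are only indicative; the JIT hasn't warmed up, so a fairer comparison would repeat each version many times (or use a harness like JMH). Still, it should show whether `Integer` boxing and hash probing really cost more than the sort's extra log-n factor at this input size.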