The basic idea is to build a quick lookup for checking whether an item appears in both arrays. `set` is that lookup; it is built from the shorter array on the assumption that it contains fewer distinct elements. This heuristic may be premature and is obviously easy to trick (`[1000 times 1], [1...100]`), but for randomly distributed input I expect it to be faster most of the time.
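
For concreteness, here is a minimal sketch of that adversarial input (the class name and variable names are mine):

```
import java.util.Arrays;

public class AdversarialInput {
    public static void main(String[] args) {
        int[] manyDuplicates = new int[1000];
        Arrays.fill(manyDuplicates, 1);   // 1000 copies of 1 -> only 1 distinct value

        int[] allDistinct = new int[100];
        for (int i = 0; i < 100; i++) {
            allDistinct[i] = i + 1;       // [1..100] -> 100 distinct values
        }
        // The *shorter* array here has far more distinct values, so the
        // "build the set from the shorter array" heuristic backfires.
    }
}
```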

I've seen other solutions doing `if (contains) remove`, which costs two hash lookups. Using the return value of `remove` instead (see its JavaDoc) "reuses" the result of `contains`: the element is tested and removed in a single lookup.
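
A minimal side-by-side sketch of the two variants (the driver scaffolding is mine):

```
import java.util.HashSet;
import java.util.Set;

public class RemoveComparison {
    public static void main(String[] args) {
        Set<Integer> set = new HashSet<>();
        set.add(42);

        // Variant 1: two hash lookups per hit (contains, then remove).
        if (set.contains(42)) {
            set.remove(42);
            System.out.println("found via contains + remove");
        }

        set.add(42);

        // Variant 2: one hash lookup; Set.remove returns true
        // only if the element was present.
        if (set.remove(42)) {
            System.out.println("found via remove alone");
        }
    }
}
```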

I used a primitive array instead of an `ArrayList` to avoid boxing and the amortized growth of the backing array. Trimming the result with `System.arraycopy` should also be cheaper than converting a `List<Integer>` to a primitive array.
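
For contrast, a sketch of the `ArrayList` variant being avoided (the method name is mine); note the boxing on every `add` and the extra unboxing pass at the end:

```
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ListVariant {
    public static int[] intersectionWithList(int[] smaller, int[] larger) {
        Set<Integer> set = new HashSet<>();
        for (int num : smaller) {
            set.add(num);
        }
        List<Integer> intersection = new ArrayList<>();
        for (int num : larger) {
            if (set.remove(num)) {
                intersection.add(num);   // boxes the int to an Integer
            }
        }
        // Extra pass to unbox back into a primitive array.
        int[] result = new int[intersection.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = intersection.get(i);
        }
        return result;
    }
}
```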

Notice that the temporary `intersection` array is allocated with the size of the smaller array: the intersection cannot contain more distinct items than the smaller input has elements.

```
import java.util.HashSet;
import java.util.Set;

public class Solution {
    public int[] intersection(int[] nums1, int[] nums2) {
        int[] larger = nums1.length < nums2.length ? nums2 : nums1;
        int[] smaller = nums1.length < nums2.length ? nums1 : nums2;

        // Build the lookup from the shorter array (hopefully fewer distinct elements).
        Set<Integer> set = new HashSet<>();
        for (int num : smaller) {
            set.add(num);
        }

        // The intersection cannot have more distinct elements than smaller.length.
        int[] intersection = new int[smaller.length];
        int i = 0;
        for (int num : larger) {
            // remove() returns true only if num was present: one lookup, not two.
            if (set.remove(num)) {
                intersection[i++] = num;
            }
        }

        // Trim the over-allocated buffer to the actual element count.
        int[] result = new int[i];
        System.arraycopy(intersection, 0, result, 0, i);
        return result;
    }
}
```
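
A quick usage sketch (the driver class and sample input are mine):

```
import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        int[] nums1 = {4, 9, 5};
        int[] nums2 = {9, 4, 9, 8, 4};
        // Prints [9, 4]; order follows the traversal of the larger array.
        System.out.println(Arrays.toString(new Solution().intersection(nums1, nums2)));
    }
}
```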

I'm curious to hear thoughts on these optimizations. Do you think they're premature? (Assume evenly distributed random inputs.)