This problem is a variant of 2-sum, taking advantage of two tools: a hash table, and the sliding window / two-pointer algorithm. However, I failed at both methods, and that exposed the shortcomings in my thinking process: rushed and rigid learning.

For this type of problem, the first assumption is that we should return a "count".

1.Hash_Table

One application of a symbol table is preprocessing, but then I realized that would fail in the case of k = 0.
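A minimal sketch of that preprocess-first idea (my reconstruction, with a hypothetical name `findPairsPreprocessed`, not the exact code I wrote), showing why k = 0 breaks it:

```
#include <unordered_set>
#include <vector>
using namespace std;

// Broken sketch: build the whole table first, then query it.
// For k == 0 every number finds *itself* already sitting in the table,
// so e.g. [1, 2, 3] with k = 0 returns 3 instead of the correct 0.
int findPairsPreprocessed(vector<int>& nums, int k) {
    unordered_set<int> table(nums.begin(), nums.end());
    unordered_set<int> smaller;            // smaller element of each pair
    for (int x : nums)
        if (table.count(x + k)) smaller.insert(x);
    return smaller.size();
}
```

Populating the table *while* scanning fixes this, because a number can then only pair with entries that came strictly before it.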

So decision 1 is that we scan and populate the hash table at the same time.

Decision 2: how to detect duplicate pairs. My idea was to use the sum of the pair.

So, regarding the sign of k, I listed two if conditions directly in the for block, without writing out the bodies of the ifs:

```
for (int i = 0; i < nums.size(); i++) {
    if (table.count(target1) && pair.count(target1 + nums[i])) {
        // body left unwritten
    }
    if (table.count(target2) && pair.count(target2 + nums[i])) {
        // body left unwritten
    }
}
```

Then I wondered what would happen if k == 0: the second if block would also get executed, so "count" would be incremented when it should not be. I got stuck: should we add a special if for k == 0? But then the code gets too heavy. After a while, I gave up.

However, after studying the solution, I saw I had made a silly mistake. The second if would not get executed at all, because we were supposed to insert the pair sum inside the first if block. But I was too rushed: after writing out the first if condition, I jumped straight to writing the second if condition. What I should have done is finish the first if block and only then continue with the second if, like below:

```
int findPairs(vector<int>& nums, int k) {
    if (nums.size() < 1 || k < 0) return 0;
    unordered_set<int> pairs, table;
    int target1 = 0, target2 = 0, count = 0;
    for (int i = 0; i < nums.size(); i++) {
        target1 = nums[i] - k;
        target2 = nums[i] + k;
        if (table.count(target1) && !pairs.count(nums[i] + target1)) {
            count++;
            pairs.insert(nums[i] + target1);
        }
        if (table.count(target2) && !pairs.count(nums[i] + target2)) {
            count++;
            pairs.insert(nums[i] + target2);
        }
        table.insert(nums[i]);
    }
    return count;
}
```

Note 1: the unordered_set pairs handles duplicates by itself, and since it already stores the pairs, why not just return its size? We should make full use of the data structure.

https://discuss.leetcode.com/topic/81676/c-clean-code-with-explanation-set-map

We can throw away the pairs.count check and the count variable.
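A sketch of that set-based simplification (my reading of the linked post, not its verbatim code): let the set deduplicate, then return its size.

```
#include <unordered_set>
#include <vector>
using namespace std;

int findPairs(vector<int>& nums, int k) {
    if (k < 0) return 0;
    unordered_set<int> table;   // numbers scanned so far
    unordered_set<int> pairs;   // smaller element of every k-diff pair found
    for (int x : nums) {
        if (table.count(x - k)) pairs.insert(x - k);
        if (table.count(x + k)) pairs.insert(x);
        table.insert(x);
    }
    return pairs.size();        // the set has already removed duplicates
}
```

Keying the set by the smaller element of each pair (instead of the pair sum) is enough for deduplication, because for a fixed k the smaller element determines the pair.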

Root cause: too rushed to finish the code of the previous step.

2.Sliding window / Two pointers

Two things I got wrong, which exposed my rigid learning:

1 I only remembered the two-pointer pattern with pointers moving in opposite directions, like what we do in 2-sum. With that pattern I found we couldn't move a pointer conditionally here, so I gave up on it.

2 I assumed a pointer could only be advanced one step right after comparing the diff with k.

With this wrong assumption, I could not handle both k == 0 and duplicate pairs at the same time.

```
if (nums[fast] - nums[slow] == k) {
    slow++;
    fast++;
} else if (nums[fast] - nums[slow] > k) {
    slow++;
} else {
    fast++;
}
```

To handle "duplicate pairs", I put if (nums[fast] == nums[fast - 1]) continue; hoping this would skip the duplicates. But then I found that on [1,1,1,2,2] it would also skip the [1,1] pair, so that if was wrong, and I kept wondering what to put into it. Unluckily, I gave up.

The solution told me to simply move slow and fast as far as needed, so that after we find a pair they point at different numbers. That can be done with a "while" loop, a very common pattern; we don't have to put anything special in that if condition.
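A sketch of that while-based skip (my reconstruction of the pattern, assuming we sort first; the hypothetical name `findPairsTwoPointers` is mine):

```
#include <algorithm>
#include <vector>
using namespace std;

int findPairsTwoPointers(vector<int>& nums, int k) {
    if (k < 0) return 0;
    sort(nums.begin(), nums.end());
    int n = nums.size(), count = 0;
    int slow = 0, fast = 1;
    while (fast < n) {
        int diff = nums[fast] - nums[slow];
        if (slow == fast || diff < k) {
            fast++;
        } else if (diff > k) {
            slow++;
        } else {
            count++;
            // a "while", not an "if": run fast past every duplicate of
            // the number just matched, so the same pair is counted once
            fast++;
            while (fast < n && nums[fast] == nums[fast - 1]) fast++;
        }
    }
    return count;
}
```

Note how this also handles k == 0 for free: [1,1,1,2,2] counts (1,1) once, then the while skips the remaining 1s.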

Root cause: sticking to the sliding window pattern from a previous example.

- an excellent solution from "anorange0409" in the "Contest"

```
class Solution {
public:
    int findPairs(vector<int>& nums, int K) {
        if (K < 0) return 0;
        int n = nums.size();
        unordered_map<int, int> M;
        for (auto x : nums) M[x]++;
        int ans = 0;
        for (auto t : M) {
            int x = t.first;
            int y = x + K;
            if (y == x) {
                if (M[x] > 1) ans++;
            } else {
                if (M.find(y) != M.end()) ans++;
            }
        }
        return ans;
    }
};
```

Reviewing the problem, there are two tricky parts: k == 0 and duplicate pairs.

"anorange0409" deals with both of them in a smart way.

Originally our search space contains duplicates, so he preprocesses it with a hash map: the unique keys eliminate the duplicate numbers. For the k == 0 case we still need to know whether a duplicate exists, so he uses the mapped value to keep the occurrence count. In each key:value pair, the key is the number and the value is how many times it occurs.

Takeaway: how to optimize the search space. Avoid searching through duplicates while still keeping the information that duplicates exist.