# 7 lines ~1050 ms C++

• ``````
int numberOfBoomerangs(vector<pair<int, int>>& points) {
    int booms = 0;
    for (auto &p : points) {
        unordered_map<double, int> ctr(points.size());
        for (auto &q : points)
            booms += 2 * ctr[hypot(p.first - q.first, p.second - q.second)]++;
    }
    return booms;
}
``````

Try each point as the "axis" of the boomerang, i.e., the "i" part of the triple. Group its distances to all other points by distance, counting the boomerangs as we go. No need to avoid q == p, as it'll be alone in the distance == 0 group and thus won't influence the outcome.

Submitted five times, accepted in 1059, 1022, 1102, 1026 and 1052 ms, average is 1052.2 ms. The initial capacity for `ctr` isn't necessary, just helps make it fast. Without it, I got accepted in 1542, 1309, 1302, 1306 and 1338 ms.

• @StefanPochmann Using double as key: does hypot take care of precision without any floating point errors?
I get 1059 ms using an initial size for the map, fairly consistently +/- 10 ms.

• @kdtree Hmm, good question. I somewhat assumed that `hypot` (1) squares its arguments, (2) adds those results, and then (3) takes the square root of the sum. If it does, then - because we're dealing with small integers - I'm sure the first two steps are accurate, and I think the third step is also safe. But... I guess `hypot` maybe doesn't have to do those steps, and I don't know what's guaranteed. A quick look at some documentation didn't really help, and I'm not in the mood for reading the implementation it links to :-)

• @StefanPochmann

One suggestion:

Since we are only using the distance to group points (not its actual value), calculating the actual Euclidean distance is unnecessary.

In other words, instead of computing `((p1[0]-p2[0]) ** 2 + (p1[1]-p2[1]) ** 2) ** .5`, computing `(p1[0]-p2[0]) ** 2 + (p1[1]-p2[1]) ** 2` should be sufficient.

• Nice! `2+4+6+8+...+2n = n(n+1)`!


• We do not need to handle doubles:

``````
int numberOfBoomerangs(vector<pair<int, int>>& points) {
    int booms = 0;
    for (auto &p : points) {
        unordered_map<int, int> ctr(points.size());
        for (auto &q : points) {
            // dx*dx + dy*dy stays exact in int arithmetic,
            // unlike pow, which returns a double
            int dx = p.first - q.first, dy = p.second - q.second;
            booms += 2 * ctr[dx * dx + dy * dy]++;
        }
    }
    return booms;
}
``````

This got accepted in 379 ms, beating 98%.

• 150ms +/- 10ms:

``````
int numberOfBoomerangs(const vector<pair<int, int>>& points) {
    // note: short len{points.size()} would be a narrowing error and not compile
    int len = static_cast<int>(points.size());
    unordered_map<int, int> mymap;
    int total = 0;
    const pair<int, int> *data = points.data();

    for (int ii = 0; ii < len; ++ii) {
        for (int jj = 0; jj < len; ++jj) {
            if (ii == jj)
                continue;

            int dx = data[ii].first - data[jj].first;
            int dy = data[ii].second - data[jj].second;
            total += mymap[dx * dx + dy * dy]++;
        }
        mymap.clear();  // reuse one map instead of reallocating per axis point
    }

    return 2 * total;
}
``````

• @StefanPochmann `hypot` is expensive because it computes a square root, which is unnecessary here.

• @StefanPochmann You may assume that n will be at most 500 and that the coordinates of points are all in the range [-10000, 10000] (inclusive). That means the squared distance will never overflow an int.

• @dengzhizhang Yes, like I said we're dealing with small integers, and that's part of why I said I'm sure the first two steps are accurate. Not sure what your point is.

• @StefanPochmann Oh, I mean you can use square of distance as metric, then you don't need to use double/float.

``````
class Solution {
public:
    int numberOfBoomerangs(vector<pair<int, int>>& points) {
        int n = points.size();
        int booms = 0;
        for (auto& p : points)
        {
            unordered_map<int, int> mp(n);
            for (auto& q : points)
            {
                int x = p.first - q.first;
                int y = p.second - q.second;
                int d = x*x + y*y;
                booms += 2*mp[d]++;
            }
        }
        return booms;
    }
};
``````

• @dengzhizhang
why do you need to multiply by 2 when counting the booms?

• @robinpark95 Personally I don't care whether we multiply it by 2, but according to the provided example, we should.
Input:
[[0,0],[1,0],[2,0]]

Output:
2

Explanation:
The two boomerangs are [[1,0],[0,0],[2,0]] and [[1,0],[2,0],[0,0]]
