# C++_Greedy_Time: O(nlogn)_73ms_Accepted

• At first I thought this problem should be solved with DP, but that got too complicated, so I tried a greedy algorithm instead. One thing to remember is that when we deliver our cookies, we should always satisfy the child with the smallest greed factor first, because this saves the larger cookies for the children that follow.

First we sort g and s. If some s[j] can satisfy g[i], we add 1 to our result; if we run out of cookies, we stop and return the result, because all the following children have greed factors at least as large as the current g[i], so no remaining cookie can satisfy them either.

```cpp
class Solution {
public:
    int findContentChildren(vector<int>& g, vector<int>& s) {
        if (s.empty()) return 0;
        sort(g.begin(), g.end());
        sort(s.begin(), s.end());
        int mg = g.size(), ns = s.size(), res = 0;
        int i = 0, j = 0;
        while (i < mg && j < ns) {
            // Skip cookies too small for the current (smallest) child.
            while (j < ns && g[i] > s[j]) { j++; }
            if (j >= ns) break;           // out of cookies
            else { res++; i++; j++; }     // child i satisfied by cookie j
        }
        return res;
    }
};
```
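As a concrete sanity check of the greedy above, here is a standalone re-sketch of the same two-pointer idea as a free function (names are mine, not from the post). For g = [1, 2, 3] and s = [1, 1], only one child can be content:

```cpp
#include <algorithm>
#include <vector>

// Same greedy as above: sort both ascending, then match each smallest
// unsatisfied child with the smallest cookie that can satisfy it.
int contentChildren(std::vector<int> g, std::vector<int> s) {
    std::sort(g.begin(), g.end());
    std::sort(s.begin(), s.end());
    int i = 0, j = 0, res = 0;
    while (i < (int)g.size() && j < (int)s.size()) {
        if (s[j] >= g[i]) { ++res; ++i; }  // cookie j satisfies child i
        ++j;                                // cookie j is consumed or too small
    }
    return res;
}
```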

• Here's something I don't understand. I wrote code similar to yours using a greedy algorithm:

```cpp
// Hand-rolled O(n^2) exchange sort; sorts in descending order.
void sort(vector<int>& vec)
{
    for (int i = 0; i < vec.size() - 1; i++)
    {
        for (int j = i + 1; j < vec.size(); j++)
        {
            if (vec.at(i) < vec.at(j))
            {
                // XOR swap (safe here because i != j).
                vec.at(i) = vec.at(i) ^ vec.at(j);
                vec.at(j) = vec.at(j) ^ vec.at(i);
                vec.at(i) = vec.at(i) ^ vec.at(j);
            }
        }
    }
}

int findContentChildren(vector<int>& g, vector<int>& s) {
    if (s.size() == 0 || g.size() == 0)
    {
        return 0;
    }
    int i = 0;
    int count = 0;
    int j = 0;
    sort(g);
    sort(s);
    int ss = s.size();
    int gs = g.size();
    while (i < ss && j < gs)
    {
        if (g[j] > s[i])
        {
            // Arrays are descending: cookie s[i] can't satisfy child g[j],
            // so move on to the next (less greedy) child.
            j++;
        }
        else
        {
            count++;
            j++;
            i++;
        }
    }

    return count;
}
```

There are two differences in my code: first, I use a self-defined sort() function; second, I increment j when g[j] > s[i], since I order g and s from high to low. My code produces the same results as the expected answers, but with the high-to-low order it takes much longer to run (about 1200-1300 ms), which causes a time-limit-exceeded problem.
But when I change the order of g and s to low-to-high and use the C++ standard sort() function, my code is accepted by the OJ. Can someone explain why the two different orders and the two different sort() functions make such a difference in running time?

• It looks like a bubble sort, which is O(n²), whereas C++ uses an O(n log n) quicksort for a better sort time.

• C++'s sort uses quicksort until the recursion goes too deep, then switches to heapsort; this is called introsort.

Also, I should mention I had to hand-calculate your bit swapping to make sure it was correct, and it looks like it is. You should add comments to make it explicitly clear that you thought it through correctly (good coding style).

I think you could probably achieve a cleaner implementation using swap() instead of bit math.
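To sketch both suggestions concretely (function names here are my own, not from the thread): std::sort with std::greater keeps the descending order while giving O(n log n) time, and std::swap replaces the XOR trick:

```cpp
#include <algorithm>  // std::sort, std::swap
#include <functional> // std::greater
#include <vector>

// Descending sort in O(n log n), replacing the hand-rolled O(n^2) pass.
void sortDescending(std::vector<int>& vec) {
    std::sort(vec.begin(), vec.end(), std::greater<int>());
}

// std::swap instead of XOR swapping: clearer, and still correct even if
// the two positions were to alias (XOR swap zeroes the value when i == j).
void swapElements(std::vector<int>& vec, int i, int j) {
    std::swap(vec.at(i), vec.at(j));
}
```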

• I've tried two implementations so far and haven't beaten 73 ms. I figured the second attempt, using heaps, wouldn't improve on it, but I wanted the practice with C++ heaps.

I think our first attempts are identical except for factoring size out of the loop (I think the compiler does this anyway), but our loop logic is in reverse order: you account for more misses than hits, and I account for hits rather than misses. I should have thought about that when designing the loop.

Nice job!

```cpp
#include <queue>
#include <vector>
using namespace std;

class Solution {
public:
    int findContentChildren(vector<int>& g, vector<int>& s) {
        // 119 ms solution ~ heap too slow... extract + down-heap bubble = n log n
        if (g.empty() || s.empty())
            return 0;

        int count = 0;
        // Max-heaps: repeatedly compare the greediest remaining child
        // with the largest remaining cookie.
        priority_queue<int> gp(g.begin(), g.end());
        priority_queue<int> sp(s.begin(), s.end());
        while (!gp.empty() && !sp.empty()) {
            if (gp.top() <= sp.top()) {
                sp.pop();
                count++;
            }
            gp.pop();
        }
        return count;

        /*  93 ms solution (first attempt)
        if (g.empty() || s.empty())
            return 0;

        sort(g.begin(), g.end());
        sort(s.begin(), s.end());
        int count = 0, i = 0, j = 0;
        while (i < s.size() && j < g.size()) {
            if (s[i] >= g[j]) { count++; i++; j++; }
            else i++;
        }
        return count;*/
    }
};
```

• I think the OJ time is not so stable; it can change if you run your code at a different time or on a different PC.

• @jasonshieh
I copied your code and submitted it, but why does it show 452 ms?
That's much longer than 73 ms. If the OJ time is unstable to this degree, the reported times are meaningless.

• @efzccy
Recently I believe there is something wrong with the OJ's handling of C++'s "sort". It seems to produce far longer running times than other methods or languages.
In the problem 'Best Meeting Point', I used sorting rather than finding the median. Yes, that would certainly take longer, but the difference should be like that between O(n log n) and O(n). To my surprise, I got 400+ ms while the median method yields only 30+ ms.
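For reference, the O(n) median selection mentioned above can be done with std::nth_element (this is a generic sketch, not the poster's actual submission):

```cpp
#include <algorithm> // std::nth_element
#include <vector>

// Average-case O(n) selection: places the median at its sorted position
// and partially partitions the rest, without fully sorting the vector.
int medianOf(std::vector<int> v) {
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}
```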
