"BEATS 100%" ???


  • 11

    Has anybody else noticed the huge influx of new "beats 100%" posts? I myself also suddenly got several Python times faster than any I had ever gotten before. I asked and was told that the judge was indeed recently upgraded and is now faster. New times also aren't immediately added to the statistics (only once a week or so?), so currently the times achieved on the new, faster judge are compared with times from the old, slower judge. And as always, times can vary quite a bit even when you submit the exact same code twice, especially in Python, so there's luck involved.
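
    Just to illustrate the variance point, here's a rough sketch (plain timeit on a toy function, nothing to do with the actual judge) that times the exact same code several times; the spread alone can easily be a few percent:

        import timeit

        def solve():
            # stand-in for a real "solution": sum of squares of 0..9999
            return sum(i * i for i in range(10000))

        # five independent measurements of the exact same code
        for t in (timeit.timeit(solve, number=100) for _ in range(5)):
            print("%.4f seconds" % t)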

    I just grabbed the most recent 2607 topics as listed on the Recent page and did some statistics:

     0 days ago, 29 out of 239 titles said "100%"
     1 days ago, 24 out of 168 titles said "100%"
     2 days ago, 26 out of 174 titles said "100%"
     3 days ago, 33 out of 187 titles said "100%"
     4 days ago,  4 out of 143 titles said "100%"
     5 days ago,  5 out of 127 titles said "100%"
     6 days ago,  0 out of 152 titles said "100%"
     7 days ago,  0 out of 115 titles said "100%"
     8 days ago,  1 out of 166 titles said "100%"
     9 days ago,  0 out of 149 titles said "100%"
    10 days ago,  1 out of  96 titles said "100%"
    

    Looks like it started 5 days ago and really got huge 3 days ago.
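
    In case anyone wants to reproduce the counting: here's a rough sketch, assuming you've already got the recent topics as (days_ago, title) pairs (the scraping itself isn't shown):

        from collections import Counter

        def count_titles(topics):
            # topics: iterable of (days_ago, title) pairs
            total, hits = Counter(), Counter()
            for days_ago, title in topics:
                total[days_ago] += 1
                hits[days_ago] += "100%" in title
            for d in sorted(total):
                print('%2d days ago, %2d out of %3d titles said "100%%"'
                      % (d, hits[d], total[d]))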

    Anyway... now that you know that your solution probably isn't really better than everybody else's, you can stop making a fuss about it as if it were.


  • 1

    Mods, please make this a sticky, at least until the statistics stabilize again.


  • 1

    Thanks for the investigation. Now I can also stop making a fuss about my "beats 100%" "Flatten Binary Tree to Linked List" solution :-)

    Actually, as I come from a math background, I never quite understood why (or how) the actual running time on a given set of test cases could be used as the standard for judging a program's efficiency. The running time depends heavily on external resources outside the program, and the test cases can be biased as well (theoretically, any finite set of test cases is biased). In math, once you finish a proof of a problem, it is either right or wrong; there is no measure (or none that I am aware of) of how "good" your proof is. I am wondering whether it is possible to have an automatic complexity analyzer that determines the time and space complexity of a program statically rather than at run time, since run-time measurement requires input data, which will be biased. I would like to see a purely theoretical, automatic way to analyze programs so I can rule out any dependency on external resources.


  • 0

    I have just pushed a change which temporarily disables the runtime distribution graph. Sorry for the confusion caused. Thanks @StefanPochmann for stepping up and voicing this.


  • 0

    @zzg_zzm I don't know how good a complexity analyzer could be in practice, but remember the halting problem. If we can't even decide whether a program finishes at all, how are we supposed to do a more fine-grained analysis? :-). I think carefully designed tests do a pretty good job, plus we have this discussion forum to talk about complexity.
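
    That said, you can get a decent empirical feel for complexity without static analysis: time the code on inputs of doubling size and look at how the runtime grows. A rough sketch (doubling experiment on a toy function, nothing LeetCode-specific; if the time roughly doubles the code is about linear, if it quadruples it's about quadratic):

        import timeit

        def toy(n):
            # stand-in for the algorithm under test (this one is O(n))
            return sum(range(n))

        prev = None
        for n in (10000, 20000, 40000, 80000):
            t = timeit.timeit(lambda: toy(n), number=50)
            ratio = "" if prev is None else "  (x%.2f vs previous)" % (t / prev)
            print("n=%6d: %.4f s%s" % (n, t, ratio))
            prev = t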

    I don't quite agree with your assertion about math and proofs. One person might accept your proof while another might not, maybe because you're using something that feels obvious to you but not to them. Or remember the controversy around Appel and Haken's proof of the four color theorem.


  • 0

    @StefanPochmann Yeah, I also think a "complexity analyzer" is unlikely to work, given the halting problem.
    As for math proofs, I guess what I wanted to say is that a proof doesn't have to worry about resources the way an algorithm does. As you mentioned, there is a "human factor" in judging how good a proof is. For program efficiency, I was wondering how to make the standard as theoretical and quantifiable as possible (but I guess that's probably not feasible in practice :-) ). Thanks a lot for your insight!


  • 1

    Yes, I also noticed that: one day I found my code beat 100%, felt it was amazing, and even wrote a reply... Anyway, in the following days I found that many people's submissions became "beats 100%". Maybe the judge was recently upgraded.

    Many people are very excited that their code seems to be faster than others'. However, most of the test cases are too small in scale, so the runtime statistics themselves are unreliable and the current "BEATS XX%!!!" is misleading.

    I hope LeetCode can add more large-scale test cases before people go crazy over this random timing competition.


  • 0

    Seems that LeetCode has closed the Runtime Competition?


  • 0

    @haiwei624 I am hoping to restore the runtime distribution by this weekend, please stay tuned.


  • 0

    @1337c0d3r said in "BEATS 100%" ???:

    stay tuned

    WOW, thanks


  • 0

    Recently, Python has been running faster than ever again...

