Submitted a C++ solution for a problem - 3ms.
Changed the code a bit, submitted - 6ms.
Restored the original code, submitted - 6ms.
I wonder whether the perf-measuring harness runs the tests several times, averages the results, and checks that the deviation is small. If the tests run on virtual machines, how dedicated are those VMs to our submissions?
@andhddn First, you posted in the wrong category. This category is strictly for posting interview questions only. I have now moved it to "General Discussions" category.
Please read the following for more instructions:
As for your question: the difference between 3ms and 6ms is a 100% change, but 3ms of CPU time is tiny and can vary for many reasons. For example, interrupts may fire during the run, adding overhead to the measured CPU time.
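To get past that kind of noise when timing locally, a common trick is to run the code several times and take the minimum (or median) sample rather than a single reading. This is a minimal sketch of such a harness; the names `time_once_ns` and `best_of_ns` are made up here, and it says nothing about how LeetCode's own grader actually measures submissions:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <vector>

// Time a single invocation of `f` in nanoseconds using a monotonic clock.
template <typename F>
std::int64_t time_once_ns(F&& f) {
    auto start = std::chrono::steady_clock::now();
    f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
}

// Run `f` `runs` times and return the minimum observed time.
// The minimum is far less sensitive to interrupts and scheduler
// noise than a single sample or an arithmetic mean.
template <typename F>
std::int64_t best_of_ns(F&& f, int runs) {
    std::vector<std::int64_t> samples;
    for (int i = 0; i < runs; ++i) samples.push_back(time_once_ns(f));
    return *std::min_element(samples.begin(), samples.end());
}
```

With something like this, a 3ms-vs-6ms gap between two runs of identical code would usually collapse to the same best-of-N figure.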
Thanks for moving the topic and for your answer. Today I have more evidence that the perf comparisons can't be trusted, or maybe that the people making the claims can't be trusted.
Submitted the code for integer palindrome testing. Got a 75ms result.
Went into the discussion and noticed some code claiming 46ms. Copied the "competitor" code into my Visual Studio project.
Ran a comparison on my own numbers (basically a loop testing every integer in [0, 100 000 000), plus a couple of billion other specific numbers).
My time was 1.9s; the "competitor's" was 8.2s.
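For context, a local benchmark like the one described would be built around an integer palindrome check. This is only an assumed sketch of the common reverse-half approach (it avoids overflow by reversing just the lower half of the digits); it is not necessarily the code either party submitted:

```cpp
// Check whether a non-negative int reads the same forwards and
// backwards in decimal, without converting to a string.
bool isPalindrome(int x) {
    // Negatives never match; a trailing zero can't match a leading digit.
    if (x < 0 || (x % 10 == 0 && x != 0)) return false;
    int reversed = 0;
    // Peel digits off x onto `reversed` until we've reversed half of them.
    while (x > reversed) {
        reversed = reversed * 10 + x % 10;
        x /= 10;
    }
    // Even digit count: halves match exactly; odd: drop the middle digit.
    return x == reversed || x == reversed / 10;
}
```

Timing `isPalindrome` over all of [0, 100 000 000) in a tight loop, with the repeated-run technique above, is the kind of comparison where a 4x gap between two implementations becomes hard to dismiss as noise.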
I wonder how a submission made today compares against code submitted ~2 years ago. Is the hardware always the same? Are the tests rerun periodically? They can't all be rerun for every submission.
@andhddn No, the hardware does not stay the same. We recently upgraded our machines; that's why the distribution graph disappeared for a few weeks and came back recently.