# My Java 0 ms (not always, luckily! "You are here! Your runtime beats 97.90% of java submissions.")

•

```java
import java.util.ArrayList;
import java.util.List;

public class Solution {
    public List<String> summaryRanges(int[] nums) {
        List<String> list = new ArrayList<>();
        for (int i = 0, len = nums.length, k; i < len; i = k + 1) {
            // k is the last index of the consecutive run starting at i.
            k = help(nums, i, len);
            if (i != k)
                list.add(nums[i] + "->" + nums[k]);
            else
                list.add(String.valueOf(nums[i]));   // single-element range
        }
        return list;
    }

    // Binary search for the last index k in [l, r) such that
    // nums[l..k] are consecutive integers (nums[k] - nums[l] == k - l).
    private int help(int[] nums, int l, int r) {
        while (l + 1 < r) {
            int m = l + (r - l) / 2;                 // avoids overflow of l + r
            if (nums[m] - nums[l] == m - l)
                l = m;                               // run is intact up to m
            else
                r = m;                               // break point is before m
        }
        return l;
    }
}
```

• This is very impressive! Bravo!

• Correct me if I'm wrong, but isn't it O(n log n) in the worst case?

• No, O(n) is the worst case; O(log n) is the best.

• What about the array {0, 2, 4, 6, 8, 10, 12}? Each call to the binary-search helper takes O(log n) in the worst case, and each call returns k == i.
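To make that concrete, here is a standalone sketch (mine, not from the thread) that instruments the post's binary-search helper with a probe counter and runs it over a large all-gap array, where every run has length 1. The total probe count comes out well above linear:

```java
public class WorstCase {
    static int probes;

    // The same binary search as in the post, plus a probe counter.
    static int help(int[] nums, int l, int r) {
        while (l + 1 < r) {
            int m = l + (r - l) / 2;
            probes++;
            if (nums[m] - nums[l] == m - l)
                l = m;
            else
                r = m;
        }
        return l;
    }

    public static void main(String[] args) {
        int n = 1 << 16;
        int[] nums = new int[n];
        for (int i = 0; i < n; i++) nums[i] = 2 * i;  // all gaps: every run has length 1
        for (int i = 0; i < n; i = help(nums, i, n) + 1) { }
        // Each call narrows [i, n) by halving, about log2(n - i) probes,
        // so the total is Theta(n log n) rather than O(n).
        System.out.println("probes = " + probes + " for n = " + n);
    }
}
```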

• Worst case it is O(n log n), though only a carefully constructed list (as in zexi's post) will trigger it.

One way to avoid being hit by these worst cases is to use an increasingly greedy search zone: first check whether the value at position i + 2 is 2 larger, then whether i + 4 is 4 larger, then i + 8 is 8 larger, and so on, until you know the break point lies within a certain zone, inside which you do the binary search.
By varying the greediness, you can rule out the possibility that someone constructs a worst-case input specific to your algorithm.
Overall, this is the best algorithm I have seen for this question.
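The "increasingly greedy" idea described above is essentially galloping (exponential) search. Here is a minimal self-contained sketch of how it could replace the binary-search helper, keeping the same summary loop as in the post; the class and method names (`GallopSolution`, `gallop`) are mine, not the OP's:

```java
import java.util.ArrayList;
import java.util.List;

public class GallopSolution {
    public List<String> summaryRanges(int[] nums) {
        List<String> list = new ArrayList<>();
        for (int i = 0, len = nums.length, k; i < len; i = k + 1) {
            k = gallop(nums, i, len);
            if (i != k)
                list.add(nums[i] + "->" + nums[k]);
            else
                list.add(String.valueOf(nums[i]));
        }
        return list;
    }

    // Galloping search: double the step while the run stays consecutive,
    // then binary-search inside the last doubled window. Finding a run of
    // length L costs O(log L), so the whole scan is O(n) even on all-gap input.
    private int gallop(int[] nums, int l, int r) {
        int step = 1;
        while (l + step < r && nums[l + step] - nums[l] == step) {
            l += step;       // nums[l..l+step] was one consecutive run
            step <<= 1;      // be twice as greedy next time
        }
        int hi = Math.min(l + step, r);  // the break point is now in [l, hi)
        while (l + 1 < hi) {
            int m = l + (hi - l) / 2;
            if (nums[m] - nums[l] == m - l)
                l = m;
            else
                hi = m;
        }
        return l;
    }
}
```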

Technically, since we cannot assign a probability distribution to increasing arrays, it is pointless to talk about average performance. But in the real world I would adopt the OP's algorithm.
