# What is the complexity of this Java solution?

• The idea is to recursively compare the target with the element in the middle of the matrix and eliminate the quarter of the matrix that can't contain the target, then recursively search the remaining elements. At first it seemed to me that the complexity is O(log(N*M)), but it looks like I'm wrong. Can anyone show how to calculate the complexity of this algorithm?

```java
public class Solution {
    public boolean searchMatrix(int[][] matrix, int target) {
        if (matrix.length == 0) return false;
        return search(matrix, target, 0, matrix.length - 1, 0, matrix[0].length - 1);
    }

    private boolean search(int[][] matrix, int target, int minRow, int maxRow, int minCol, int maxCol) {
        if (minRow > maxRow || minCol > maxCol) return false;
        int midRow = (minRow + maxRow) / 2;
        int midCol = (minCol + maxCol) / 2;
        int val = matrix[midRow][midCol];
        if (val == target) return true;
        if (target < val) {
            // target is smaller: the bottom-right quarter can't contain it
            return search(matrix, target, minRow, maxRow, minCol, midCol - 1) ||
                   search(matrix, target, minRow, midRow - 1, midCol, maxCol);
        } else {
            // target is larger: the top-left quarter can't contain it
            return search(matrix, target, midRow + 1, maxRow, minCol, maxCol) ||
                   search(matrix, target, minRow, midRow, midCol + 1, maxCol);
        }
    }
}
```
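
For reference, here is a small driver that exercises the method above on a row- and column-sorted matrix (the sample data and the `SolutionDemo` class name are mine):

```java
// Copy of the Solution class from the post, plus a small driver.
class Solution {
    public boolean searchMatrix(int[][] matrix, int target) {
        if (matrix.length == 0) return false;
        return search(matrix, target, 0, matrix.length - 1, 0, matrix[0].length - 1);
    }

    private boolean search(int[][] matrix, int target, int minRow, int maxRow, int minCol, int maxCol) {
        if (minRow > maxRow || minCol > maxCol) return false;
        int midRow = (minRow + maxRow) / 2;
        int midCol = (minCol + maxCol) / 2;
        int val = matrix[midRow][midCol];
        if (val == target) return true;
        if (target < val) {
            return search(matrix, target, minRow, maxRow, minCol, midCol - 1) ||
                   search(matrix, target, minRow, midRow - 1, midCol, maxCol);
        } else {
            return search(matrix, target, midRow + 1, maxRow, minCol, maxCol) ||
                   search(matrix, target, minRow, midRow, midCol + 1, maxCol);
        }
    }
}

public class SolutionDemo {
    public static void main(String[] args) {
        // Rows and columns are both sorted in ascending order.
        int[][] matrix = {
            { 1,  4,  7, 11},
            { 2,  5,  8, 12},
            { 3,  6,  9, 16},
            {10, 13, 14, 17}
        };
        Solution s = new Solution();
        System.out.println(s.searchMatrix(matrix, 5));   // true
        System.out.println(s.searchMatrix(matrix, 17));  // true
        System.out.println(s.searchMatrix(matrix, 15));  // false
        System.out.println(s.searchMatrix(matrix, 0));   // false
    }
}
```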


• Here T(n) = 2T(n/2) + O(1), which solves to T(n) = O(n).

• Apparently it is trickier than that.
It's not correct that T(n) = 2T(n/2) + O(1). The recursion step eliminates only a quarter of the entries, not half, and splits the remaining three quarters into two subproblems of roughly n/2 and n/4 entries. So it's more like T(n) = T(n/2) + T(n/4) + O(1).

The best analysis I could find is in the first comment of http://twistedoakstudios.com/blog/Post5365_searching-a-sorted-matrix-faster:

For a matrix of w x h dimensions, with w <= h and w, h powers of 2, the following holds:
T(w, h) = 1 + T(w/2, h/2) + T(w/2, h).
Binary search is performed at the base cases: T(1, h) = log(h), T(w, 1) = log(w).

One can inductively verify that the solution to this recurrence is:
T(w, h) = w*(log(h) - (1/2)*log(w) + 1) - 1, which is Θ(w*log(h/w)).

— Stergios
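
The quoted closed form can be sanity-checked numerically against the recurrence. Here is a sketch (the class name and the memoization key scheme are mine; sizes are restricted to powers of 2 with w <= h, as in the quote):

```java
import java.util.HashMap;
import java.util.Map;

public class RecurrenceCheck {
    static Map<Long, Double> memo = new HashMap<>();

    // T(w, h) = 1 + T(w/2, h/2) + T(w/2, h), with base cases
    // T(1, h) = log2(h) and T(w, 1) = log2(w).
    static double t(long w, long h) {
        if (w == 1) return log2(h);
        if (h == 1) return log2(w);
        long key = w * 1_000_000L + h;   // ad-hoc pair encoding, fine for these sizes
        Double cached = memo.get(key);
        if (cached != null) return cached;
        double val = 1 + t(w / 2, h / 2) + t(w / 2, h);
        memo.put(key, val);
        return val;
    }

    static double log2(long x) { return Math.log(x) / Math.log(2); }

    // Claimed closed form: w*(log2(h) - (1/2)*log2(w) + 1) - 1
    static double closedForm(long w, long h) {
        return w * (log2(h) - 0.5 * log2(w) + 1) - 1;
    }

    public static void main(String[] args) {
        for (long w = 1; w <= 256; w *= 2) {
            for (long h = w; h <= 1024; h *= 2) {
                double diff = Math.abs(t(w, h) - closedForm(w, h));
                if (diff > 1e-9) throw new AssertionError(w + "x" + h + " differs by " + diff);
            }
        }
        System.out.println("closed form matches the recurrence for all tested sizes");
    }
}
```

For example, T(4, 4) = 1 + T(2, 2) + T(2, 4) = 1 + 2 + 4 = 7, and the closed form gives 4*(2 - 1 + 1) - 1 = 7.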

• One recursive call eliminates half of the entries, the other eliminates 3/4 of them. Given n = h*w, my O(n)-producing recurrence and calculation seem fine to me: since I don't know whether one or two recursive calls will be made, I assumed the worst case (two calls, each eliminating only half of the entries), and the work inside the method is O(1). Still, I can do better than my original upper bound: T(n) = T(n/2) + T(n/4) + O(1)

Going further, we can apply the Akra–Bazzi method and get p ≈ 0.6942, which leads to an upper bound of O(n^0.6942).
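
Here Akra–Bazzi asks for the exponent p satisfying (1/2)^p + (1/4)^p = 1. A quick bisection (the class name is mine) confirms the value above, and that it equals log(Φ)/log(2) for the golden ratio Φ:

```java
public class AkraBazziExponent {
    public static void main(String[] args) {
        // Solve (1/2)^p + (1/4)^p = 1 by bisection; the left side decreases in p.
        double lo = 0, hi = 1;
        for (int i = 0; i < 100; i++) {
            double mid = (lo + hi) / 2;
            double f = Math.pow(0.5, mid) + Math.pow(0.25, mid);
            if (f > 1) lo = mid; else hi = mid;
        }
        double p = (lo + hi) / 2;
        double phi = (1 + Math.sqrt(5)) / 2;

        System.out.println("p = " + Math.round(p * 1e4) / 1e4);                                // 0.6942
        System.out.println("log2(phi) = " + Math.round(Math.log(phi) / Math.log(2) * 1e4) / 1e4); // 0.6942
    }
}
```

Substituting x = 2^(-p) turns the equation into x + x^2 = 1, whose positive root is 1/Φ, hence p = log2(Φ).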

Thanks for the link you provided. log(Φ)/log(2) is exactly the number I got.

• I didn't realize that your n = h*w. It means your original estimate is O(h*w). If that were the best possible estimate, it would mean the algorithm is no faster than checking every single entry in the matrix.

• Yes, while correct, it was not a very impressive analysis. I came up with a much nicer upper bound just now.

• I agree, this is a better upper bound. What do you think about the calculations that I pasted in my previous post? I think they're accurate up to the point where

T(w, h) = w*(log(h) - (1/2)*log(w) + 1) - 1,

but it doesn't follow that the complexity is

Θ(w*log(h/w))

because O(log(h) - (1/2)*log(w)) is O(log(h/sqrt(w))) and the upper bound there is

O(w*log(h/sqrt(w)))

When h == w it becomes O(w*log(w)).

Would you agree?

In your calculations, if h == w the upper bound is O(w^1.3884). I'm not sure which estimate is better.
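
For what it's worth, comparing the two square-matrix bounds numerically (constant factors ignored, class name is mine) suggests that w*log(w) grows more slowly than w^1.3884 once w is large enough, i.e. it is the asymptotically tighter estimate:

```java
public class BoundComparison {
    public static void main(String[] args) {
        // Compare w*log2(w) against w^1.3884 for square matrices (h == w).
        for (long w = 16; w <= 1L << 20; w *= 16) {
            double logBound = w * (Math.log(w) / Math.log(2));
            double powBound = Math.pow(w, 1.3884);
            System.out.println("w=" + w + "  w*log2(w)=" + (long) logBound
                    + "  w^1.3884=" + (long) powBound);
        }
    }
}
```

For small w the power bound is actually the smaller of the two; the crossover happens around w = 256, after which w*log2(w) stays below w^1.3884.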

• Can you show the steps to reach T(w, h) = w*(log(h) - (1/2)*log(w) + 1) - 1?

• I'm not sure how to reach this formula. However, I can show how to verify that it is correct. I apologize in advance for the amount of math below.

The formula

T(w, h) = w*(log(h) - (1/2)*log(w) + 1) - 1

is supposed to be a solution for this recurrence:

T(w, h) = 1 + T(w/2, h/2) + T(w/2, h)

It is true for T(1, 1):

1 * (log(1) - (1/2)*log(1) + 1) - 1 = 0

For any h it is also true for T(1, h):

1 * (log(h) - (1/2)*log(1) + 1) - 1 = log(h)

Using these two base cases we can do an inductive proof. Assuming the formula holds for T(w, h/2) and T(w, h), we need to prove that it also holds for T(2w, h), where 2w <= h. If we prove this, it holds for all (w, h) combinations, because we can start with (1, h) and repeatedly double the width until we reach (w, h).

Here is the proof:

T(2w, h) = 1 + T(w, h/2) + T(w, h)

After replacing both sides with the formula we get:

2w*(log(h) - (1/2)*log(2w) + 1) - 1 = 1 + (w*(log(h/2) - (1/2)*log(w) + 1) - 1) + (w*(log(h) - (1/2)*log(w) + 1) - 1)

The goal is to prove that the left side is equal to the right side.

The right side transformations:

1 + (w*(log(h/2) - (1/2)*log(w) + 1) - 1) + (w*(log(h) - (1/2)*log(w) + 1) - 1) =
1 + (w*(log(h) - 1 - (1/2)*log(w) + 1) - 1) + (w*(log(h) - (1/2)*log(w) + 1) - 1) =
1 + (w*(log(h) - (1/2)*log(w) + 1) - 1 - w) + (w*(log(h) - (1/2)*log(w) + 1) - 1) =
1 + (w*(log(h) - (1/2)*log(w) + 1) - 1) - w + (w*(log(h) - (1/2)*log(w) + 1) - 1) =
1 + 2*(w*(log(h) - (1/2)*log(w) + 1) - 1) - w

The left side transformations:

2w*(log(h) - (1/2)*log(2w) + 1) - 1 =
2w*(log(h) - (1/2)*(1 + log(w)) + 1) - 1 =
2w*(log(h) - (1/2)*log(w) + 1 - 1/2) - 1 =
2w*(log(h) - (1/2)*log(w) + 1) - w - 1 =
2w*(log(h) - (1/2)*log(w) + 1) - 2 - w - 1 + 2 =
2*(w*(log(h) - (1/2)*log(w) + 1) - 1) - w + 1 =
1 + 2*(w*(log(h) - (1/2)*log(w) + 1) - 1) - w

After applying the two transformations above, we can see that the left side is equal to the right side. End of proof.

• The math makes sense. O(w*log(w)) is a tighter upper bound.
