Simple formal proof of why little-o(n) running time is impossible

• Suppose there exists an algorithm A whose worst-case running time is o(n) (little-o, not big-O). Since reading all n numbers in the input requires Omega(n) time, A cannot read all of them within an o(n) worst-case budget once n is sufficiently large.

(In fact, for large n most of the numbers will not be read by A, because lim(o(n) / n) = 0 as n -> infinity.)

Let num = [1, 1, ..., 1]. If we run A on num, A will output 1. Suppose that during this computation A does not read num[i]. Now change num[i] to 0 and feed the new array to A: A will still output 1, because A never reads num[i] at all!

This is clearly a wrong answer: the minimum is now 0, not 1.

(P.S. Note that the new array is still a rotated sorted array with duplicates.)
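The adversary argument above can be checked concretely. This is just an illustrative sketch (the helper `is_rotated_sorted` is my own, not from the post): flipping one unread 1 to 0 still yields a valid rotated sorted array with duplicates, but changes the true minimum.

```python
def is_rotated_sorted(a):
    # True if some rotation of a equals its sorted order,
    # i.e. a is a valid "rotated sorted array" (duplicates allowed).
    return any(a[k:] + a[:k] == sorted(a) for k in range(len(a)))

n = 8
all_ones = [1] * n
assert min(all_ones) == 1

# Flip one position A supposedly never read.
i = 5
adversarial = all_ones.copy()
adversarial[i] = 0

# Still a legal input for the problem...
assert is_rotated_sorted(adversarial)
# ...but the correct answer has changed, so A's unchanged output 1 is wrong.
assert min(adversarial) == 0
```

Any algorithm that skips even one position can be fooled this way, which is the heart of the Omega(n) lower bound.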

• I am very sorry, my friend, but I think your post is not quite correct, since it rests on an unconvincing step:

Suppose during this computation, A does not read num[i]

and on an incorrect assumption:

Suppose there exists an algorithm A whose worst-case running time is
o(n). Since reading all the numbers in the input requires Omega(n)
time, A cannot read all the numbers in the input with o(n) worst-case
running time (when n is sufficiently large).

If an algorithm runs in O(n) time, how can you claim it will not read all n inputs?

Even a naive solution, such as comparing consecutive numbers until one is found that is lower than its predecessor (if there is no such pair anywhere in the array, the first element is the minimum), runs in O(n) time.
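The naive linear scan described above could look like this sketch (function name `find_min` is my own choice):

```python
def find_min(nums):
    # Scan consecutive pairs; in a rotated sorted array the minimum
    # is the first element that is smaller than its predecessor.
    for i in range(1, len(nums)):
        if nums[i] < nums[i - 1]:
            return nums[i]
    # No drop found: the array was never rotated (or all elements
    # are equal), so the first element is the minimum.
    return nums[0]

print(find_min([4, 5, 6, 1, 2, 3]))  # 1
print(find_min([1, 1, 0, 1, 1]))     # 0
print(find_min([2, 2, 2]))           # 2
```

This reads every element once in the worst case, matching both the O(n) upper bound and the Omega(n) lower bound argued above.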