
Deriving time complexity of a function

The following program searches a two-dimensional square array of positive integers whose rows and columns are sorted in non-decreasing order. The program returns true if the target value exists in the array, false otherwise. I need to design an algorithm for this task that is as efficient as possible. I wrote this code, but I do not know how to derive its worst-case running-time function using summations. I assume my solution is O(n) in the worst case, but I do not know how to show that mathematically (using summations, etc.).
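The question's code isn't shown, but the answers below analyze the standard "staircase" search for such arrays. A minimal sketch of that algorithm (the function name and the assumption that the OP's code has this shape are mine) looks like:

```python
def search_sorted_matrix(matrix, target):
    """Staircase search over a matrix whose rows and columns are
    sorted in non-decreasing order. Start at the top-right corner;
    step left when the current value is too big, down when too small."""
    if not matrix or not matrix[0]:
        return False
    x, y = len(matrix[0]) - 1, 0  # x: column index, y: row index
    while x >= 0 and y < len(matrix):
        current = matrix[y][x]
        if current == target:
            return True
        elif current > target:
            x -= 1  # everything below in this column is >= current
        else:
            y += 1  # everything to the left in this row is <= current
    return False
```

Each iteration eliminates a full row or a full column, which is what the O(n) arguments below rely on.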

Putting the actual code aside, your worst case is probably a two-dimensional array where one of the dimensions has size 1 — say, 100 columns and 1 row. Then, if you want the largest number, walking to the end is linear in the total number of elements in the array.

On every iteration, either x decreases or y increases. In the worst case, we don't terminate the loop until x == 0 and y == n-1. Therefore, we've walked from top-right (assuming we start at x == n-1 and y == 0) to bottom-left.

Assuming the array is n-by-n in size, this will require 2n iterations in the worst case. Therefore, this is O(n), worst-case.
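The 2n bound can be checked empirically with an instrumented version of the search that counts loop iterations (the helper and the adversarial matrix below are my own construction, chosen so the search is forced to zig-zag all the way from top-right to bottom-left):

```python
def staircase_steps(matrix, target):
    """Run the staircase search on an n-by-n matrix and return the
    number of loop iterations instead of the boolean result."""
    n = len(matrix)
    x, y = n - 1, 0
    steps = 0
    while x >= 0 and y < n:
        steps += 1
        if matrix[y][x] == target:
            return steps
        if matrix[y][x] > target:
            x -= 1
        else:
            y += 1
    return steps

# Even values 2(r+c+1); searching for an odd target that falls between
# two antidiagonals forces one left step and one down step per row.
n = 100
m = [[2 * (r + c + 1) for c in range(n)] for r in range(n)]
print(staircase_steps(m, 2 * n - 1))  # 2n - 1 iterations, the worst case
```

No run of the loop can exceed 2n - 1 iterations, because each iteration consumes one unit of the x-budget (n) or the y-budget (n), and the loop exits as soon as either budget is spent.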

Well, we're talking upper bounds (that's what O(f(N)) measures), so we're talking worst-case scenarios — which means traversing the whole array.

If we examine the algorithm, we see that it traverses a path across the array with no backtracking: x is never increased, y is never decreased, and we never spin on one spot. That means that the time cost will be linear in the sum of the dimensions of the array: O(N+M). For a square array (or any array where either dimension is a fixed multiple of the other) that simplifies to O(N) through constant-factor removal.

Showing it using summations… well, without loss of generality you're going to do a bunch of across steps followed by a bunch of down steps (or some reordering of them) which gives you a cost of:

K_across × N_across + K_down × N_down + K_post

(that's pretty trivial). But all those constant K terms drop out in big-O analysis, giving you O(N_across + N_down), which simplifies to O(2×N) for square arrays (with slightly different constants for other shapes) and hence to O(N). You only get other cost functions when one of the dimensions is a super-linear function of the other, but that's rather exotic.
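Written out as explicit summations (with one constant cost per loop iteration), the argument above is:

```latex
T \;=\; \sum_{i=1}^{N_{\text{across}}} K_{\text{across}}
   \;+\; \sum_{j=1}^{N_{\text{down}}} K_{\text{down}}
   \;+\; K_{\text{post}}
 \;=\; K_{\text{across}}\,N_{\text{across}} + K_{\text{down}}\,N_{\text{down}} + K_{\text{post}}
```

For a square n-by-n array, N_across = N_down = n, so T ≤ (K_across + K_down)·n + K_post = O(n).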

The key insight for deriving the cost function is that although you may have to cross the whole array, you only take a (Manhattan-style) path across it — you do not visit every cell.

Since a multi-dimensional array is really another form of a one-dimensional array (where n = x*y), this is on the order of O(n). The total number of elements it will actually search is less than that, but only by a fractional amount — and even if it's a large fraction, this is still thought of as O(n).

Start by figuring out your worst-case scenario. For an x-by-y array, that would be something like x + y steps.

