Worst-case time complexity to search for an element in a closely sorted array of elements?

This was asked in my interview. The actual meaning of the question is to find the time complexity, specifically the worst-case time complexity, of searching for an element in an array whose elements are already in sorted order.

The main point to note here is that the difference between any two adjacent numbers in the array is very small, or insignificant.

I approached this problem as a simple binary search, which requires the array to be in sorted order, and thought the worst-case time complexity is O(log n). But will this answer change if the array elements are very close to each other, as mentioned in the question?
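
For reference, here is a minimal sketch of the plain binary search I had in mind, in JavaScript (binarySearch is just an illustrative name):

    function binarySearch(arr, val) {
        // Standard binary search on a sorted array: O(log n) comparisons worst case.
        var low = 0, high = arr.length - 1;
        while (low <= high) {
            var mid = Math.floor((low + high) / 2);
            if (arr[mid] === val) return mid;
            if (arr[mid] < val) low = mid + 1;
            else high = mid - 1;
        }
        return -1; // not found
    }

    console.log(binarySearch([1.00, 1.01, 1.02, 1.03, 1.05], 1.03)); // 3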

What is the correct approach to solve this problem?

According to the question, we can assume an array in which the difference between adjacent elements is very small (for example, 1.01, 1.02, 1.03, ...).

What I am asking about is definitely not the opposite case, where the elements differ widely from each other; for such an array we can simply use binary search.

The O(log n) binary search complexity will not change even if all the elements are equal (or "very close to each other"), as long as the array is sorted. Perhaps we can improve performance by taking advantage of the distribution of the array values and using interpolation search: https://en.wikipedia.org/wiki/Interpolation_search. However, interpolation search can degrade to O(n) complexity on unfavorable value distributions or if it is implemented poorly.
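
To make that O(n) failure mode concrete, here is a small sketch (interpolationProbes is an illustrative helper, not part of the answer). A single extreme outlier at the end of the array makes every interpolated guess land right next to low, so the search range shrinks by only one element per probe:

    function interpolationProbes(arr, val) {
        // Basic interpolation search that counts how many indices it probes.
        var low = 0, high = arr.length - 1, probes = 0;
        while (low <= high && val >= arr[low] && val <= arr[high]) {
            var guess = Math.round(low + (high - low) * (val - arr[low]) / (arr[high] - arr[low]));
            probes++;
            if (arr[guess] === val) return probes;
            if (arr[guess] < val) low = guess + 1;
            else high = guess - 1;
        }
        return probes; // val not found
    }

    var skewed = [];
    for (var i = 0; i < 1000; i++) skewed.push(i); // 0, 1, ..., 999
    skewed.push(1e9);                              // one extreme outlier
    console.log(interpolationProbes(skewed, 999)); // ~999 probes: linear, not logarithmic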

If the array has an almost linear slope, meaning that the difference between two consecutive elements is almost constant across the array, you could use linear interpolation to make a guess for the index where the value could be stored:

Here is an implementation in JavaScript, without much language-specific syntax, so it should be clear what is happening:

    function search(arr, val) {
        var low = 0;
        var high = arr.length - 1;
        var guess;
        while (low <= high && val >= arr[low] && val <= arr[high]) {
            // Guard against division by zero when all remaining values are equal;
            // val is within [arr[low], arr[high]] here, so it must match:
            if (arr[high] === arr[low]) return low;
            // Use linear interpolation to make a guess for the index:
            guess = Math.round(low + (high - low) * (val - arr[low]) / (arr[high] - arr[low]));
            if (arr[guess] == val) return guess;
            console.log('Tried index ' + guess + '. No match yet for ' + val);
            if (arr[guess] < val) {
                low = guess + 1;
            } else {
                high = guess - 1;
            }
        }
        return -1; // not found
    }

    var arr = [1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 16];
    var index = search(arr, 7);
    console.log('Search result: index ' + index);

If the array were perfectly linear, the algorithm would find the element on the first guess, i.e. in O(1) time. Depending on how much deviation there is in the intervals, the time will be somewhere between O(1) and O(log n), degrading towards O(n) when the distribution is heavily skewed.

In a normal binary search, at each step you split the remaining range into two halves and check the element in the middle. But if you know the elements are very close together, you can add a simple tweak to the procedure:

instead of:

  1. check median
  2. if median is bigger: go left
  3. else if median is smaller: go right
  4. else return median
  5. repeat

you can modify steps 2 and 3 into: go right/left by abs(median - searched_number). This should shorten the average-case running time, though I am not sure how to quantify the improvement; see the sketch below.
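
One way to make this concrete, under an assumption the answer does not state: if the values are distinct integers, sorted order implies arr[j] - arr[i] >= j - i whenever j > i, so after comparing against the middle element we can tighten the far boundary by abs(arr[mid] - val) on top of the usual halving. A sketch (searchWithJumps is an illustrative name):

    // Sketch only: assumes a sorted array of DISTINCT INTEGERS, so that
    // arr[j] - arr[i] >= j - i for j > i. Without that guarantee the
    // extra jumps below could skip over the target.
    function searchWithJumps(arr, val) {
        var low = 0, high = arr.length - 1;
        while (low <= high) {
            var mid = Math.floor((low + high) / 2);
            if (arr[mid] === val) return mid;
            if (arr[mid] < val) {
                low = mid + 1;
                // Target index can be at most mid + (val - arr[mid]).
                high = Math.min(high, mid + (val - arr[mid]));
            } else {
                high = mid - 1;
                // Target index must be at least mid - (arr[mid] - val).
                low = Math.max(low, mid - (arr[mid] - val));
            }
        }
        return -1; // not found
    }

    console.log(searchWithJumps([10, 11, 12, 13, 14, 16, 17], 16)); // 5
    console.log(searchWithJumps([10, 11, 12, 13, 14, 16, 17], 15)); // -1

Note that the jump is only safe because of the distinct-integer assumption; for real-valued data that is merely "very close together", as in the question, the inequality does not hold and plain binary search's O(log n) remains the right answer.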
