
Remove duplicates from Array without using Hash Table

I have an array which might contain duplicate elements (an element may appear more than twice). I wonder if it's possible to find and remove the duplicates in the array:

  • without using Hash Table (strict requirement)
  • without using a temporary secondary array. No restrictions on complexity.

P.S.: This is not a homework question. It was asked of my friend in a Yahoo technical interview.

Sort the source array, then find consecutive elements that are equal (i.e., what std::unique does in C++ land). Total complexity is O(N lg N), or merely O(N) if the input is already sorted.

To remove the duplicates, you can copy elements from later in the array over elements earlier in the array, also in linear time. Simply keep a pointer to the new logical end of the container, and copy the next distinct element to that new logical end at each step. (Again, exactly like std::unique does. In fact, why not just download an implementation of std::unique and do exactly what it does? :P)
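As a rough C++ sketch of that idea (uniqueInPlace is my own name for the hand-rolled loop; in real code you would simply call std::sort followed by the usual erase/std::unique idiom):

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// What std::unique does on a sorted range: 'end' tracks the new logical
// end of the container; each next distinct element is copied onto it.
std::size_t uniqueInPlace(std::vector<int>& a) {
    if (a.empty()) return 0;
    std::size_t end = 1;                      // a[0] is always kept
    for (std::size_t i = 1; i < a.size(); ++i)
        if (a[i] != a[end - 1])               // next distinct element
            a[end++] = a[i];                  // copy it to the logical end
    return end;                               // new logical length
}

int main() {
    std::vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5};
    std::sort(a.begin(), a.end());            // duplicates become adjacent
    a.resize(uniqueInPlace(a));
    for (int x : a) std::cout << x << ' ';    // prints: 1 2 3 4 5 6 9
    std::cout << '\n';
}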

O(N log N): Sort, then replace each run of consecutive equal elements with a single copy.

O(N²): Run a nested loop comparing each element with the remaining elements in the array; if a duplicate is found, swap it with the element at the end of the array and decrease the array size by 1.
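A quick C++ sketch of that swap-with-the-end approach (dedupUnordered is a hypothetical name; note that this variant does not preserve the original order of the elements):

#include <cstddef>
#include <iostream>

// Removes duplicates in place by overwriting each duplicate with the
// last element and shrinking the logical size. Returns the new length.
std::size_t dedupUnordered(int a[], std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = i + 1; j < n; ) {
            if (a[j] == a[i]) {
                a[j] = a[n - 1];  // "swap" the duplicate with the last element
                --n;              // shrink the logical array
                // do not advance j: the element just moved in must be re-checked
            } else {
                ++j;
            }
        }
    }
    return n;
}

int main() {
    int a[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5};
    std::size_t n = dedupUnordered(a, sizeof a / sizeof a[0]);
    for (std::size_t i = 0; i < n; ++i) std::cout << a[i] << ' ';
    std::cout << '\n';   // prints the 7 distinct values; order depends on the swaps
}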

No restrictions on complexity.

So this is a piece of cake.

// A[1], A[2], A[3], ... A[i], ... A[n]

// O(n^2)
for(i=2; i<=n; i++)
{
    duplicate = false;
    for(j=1; j<i; j++)
        if(A[i] == A[j])
            {duplicate = true; break;}
    if(duplicate)
    {
        // "remove" A[i] by shifting every element to its right one step left
        for(j=i; j<n; j++)
            A[j] = A[j+1];
        n--;
        i--; // re-examine position i: an unchecked element was just shifted into it
    }
}

In-place duplicate removal that preserves the existing order of the list, in quadratic time:

for (var i = 0; i < list.length; i++) {
  for (var j = i + 1; j < list.length;) {
    if (list[i] == list[j]) {
      list.splice(j, 1);
    } else {
      j++;
    }
  }
}

The trick is to start the inner loop on i + 1 and not increment the inner counter when you remove an element.

The code is JavaScript; splice(x, 1) removes the element at index x.

If order preservation isn't an issue, then you can do it quicker:

list.sort(function (a, b) { return a - b; }); // numeric sort; the default sort compares lexicographically

for (var i = 1; i < list.length;) {
  if (list[i] == list[i - 1]) {
    list.splice(i, 1);
  } else {
    i++;
  }
}

Which is linear, unless you count the sort, which you should, so it's of the order of the sort -- in most cases n × log(n).

In functional languages you can combine sorting and uniquification (is that a real word?) in one pass. Let's take the standard quick sort algorithm:

- Take the first element of the input (x) and the remaining elements (xs)
- Make two new lists:
  - left: all elements in xs smaller than or equal to x
  - right: all elements in xs larger than x
- Apply quick sort to the left and right lists
- Return the concatenation of the sorted left list, x, and the sorted right list
- P.S. Quick sort on an empty list is an empty list (don't forget the base case!)

If you want only unique entries, replace

left: all elements in xs smaller than or equal to x

with

left: all elements in xs smaller than x

This gives a one-pass algorithm that is O(n log n) on average (like quick sort itself, it degrades to O(n²) on pathological inputs).

Example implementation in F#:

let rec qsort = function
    | [] -> []
    | x::xs -> let left,right = List.partition (fun el -> el <= x) xs
               qsort left @ [x] @ qsort right

let rec qsortu = function
    | [] -> []
    | x::xs -> let left = List.filter (fun el -> el < x) xs
               let right = List.filter (fun el -> el > x) xs
               qsortu left @ [x] @ qsortu right

And a test in interactive mode:

> qsortu [42;42;42;42;42];;
val it : int list = [42]
> qsortu [5;4;4;3;3;3;2;2;2;2;1];;
val it : int list = [1; 2; 3; 4; 5]
> qsortu [3;1;4;1;5;9;2;6;5;3;5;8;9];;
val it : int list = [1; 2; 3; 4; 5; 6; 8; 9]

This doesn't use a hash table per se, but I know that behind the scenes it's an implementation of one. Nevertheless, I thought I might post it in case it can help. It's in JavaScript and uses an associative array to record which duplicates to pass over.

function removeDuplicates(arr) {
    var results = [], dups = {}; // plain object used as the associative array

    for (var i = 0; i < arr.length; i++) {

        // check if not a duplicate
        if (dups[arr[i]] === undefined) {

            // save for next check to indicate duplicate
            dups[arr[i]] = 1; 

            // is unique. append to output array
            results.push(arr[i]);
        }
    }

    return results;
}

Since it's an interview question, the interviewer usually expects you to ask for clarifications about the problem.

With no alternative storage allowed (that is, O(1) storage, in that you'll probably use some counters/pointers), it seems obvious that a destructive operation is expected; it might be worth pointing this out to the interviewer.

Now the real question is: do you want to preserve the relative order of the elements? That is, is this operation supposed to be stable?

Stability hugely impacts the available algorithms (and thus the complexity).

The most obvious choice is to consider sorting algorithms; after all, once the data is sorted, it's pretty easy to get the unique elements.

But if you want stability, you cannot actually sort the data (since you could not get the "right" order back afterwards), and thus I wonder whether it is solvable in less than O(N²) when stability is required.
