Dividing the array into K parts

Let the array be:

  A[0] = 2
  A[1] = 1
  A[2] = 5
  A[3] = 1
  A[4] = 2
  A[5] = 2
  A[6] = 2

I have to divide the array into K parts such that the largest part sum is as small as possible.
For example, with k = 3:

[2, 1, 5, 1, 2, 2, 2], [], [] with a largest sum of 15;
[2], [1, 5, 1, 2], [2, 2] with a largest sum of 9;
[2, 1, 5], [], [1, 2, 2, 2] with a largest sum of 8;
[2, 1], [5, 1], [2, 2, 2] with a largest sum of 6.

So the algorithm should return 6.
Which algorithm should I use? Is this a modified version of the KMP algorithm?
Please suggest an approach.

I believe that you can use dynamic programming for this. Think of it this way: if you peel off the last m elements of the array into their own piece, then the resulting maximum value is the larger of the sum of those m elements and the best value you can get by splitting the remaining n - m elements into k - 1 pieces; the answer is the minimum of this over all choices of m. As base cases, if you run out of elements, any remaining pieces are empty and contribute nothing, while if you run out of pieces but still have elements left, the value is ∞.

More formally, let S[n, k] be the minimum possible largest piece sum when splitting the first n elements of the array into k pieces. You can write out this recurrence:

S[0, k] = 0 for any k (if there are no elements left, the remaining pieces are empty and contribute nothing), and S[n, 0] = ∞ for n > 0 (if there are elements left but no pieces to put them in, the value is infinite).

S[n, k] = min over 0 ≤ m ≤ n of max(A[n - 1] + A[n - 2] + ... + A[n - m], S[n - m, k - 1]) for n > 0 and k > 0 (the last piece takes the final m elements; you try every m and keep the best choice).

You'll need to fill in a table of Θ(nk) total entries, and filling in the entry at position (n', k') will take time Θ(n'). Therefore, the total runtime will be Θ(n²k).

As a heuristic, you may be able to speed this up a bit. In particular, as soon as A[n - 1] + A[n - 2] + ... + A[n - m] becomes bigger than S[n - m, k - 1], you know that you've peeled off too many elements into the last piece and can therefore stop evaluating larger values of m. This is just a heuristic, but it may give you some performance benefits in practice.
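For concreteness, here is a minimal memoized sketch of this recurrence in Python, including the early-stopping heuristic; the function and variable names (minmax_split, prefix, etc.) are my own and not part of the original answer:

from functools import lru_cache

def minmax_split(A, k):
    # prefix[i] = A[0] + ... + A[i - 1]
    prefix = [0]
    for x in A:
        prefix.append(prefix[-1] + x)

    @lru_cache(maxsize=None)
    def S(n, parts):
        if n == 0:
            return 0                 # no elements left: remaining pieces stay empty
        if parts == 0:
            return float('inf')      # elements left but no pieces to hold them
        best = float('inf')
        for m in range(n + 1):       # the last piece takes the final m elements
            last = prefix[n] - prefix[n - m]
            rest = S(n - m, parts - 1)
            best = min(best, max(last, rest))
            if last >= rest:
                break                # larger m can only grow the last piece's sum
        return best

    return S(len(A), k)

On the array from the question, minmax_split([2, 1, 5, 1, 2, 2, 2], 3) returns 6.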

Hope this helps!

There's an O(kn)-time optimization of templatetypedef's DP that works as follows. Let P(i, j) be the greatest optimal position of the last boundary between subarrays when dividing the first j elements into i parts. Then P(i, j) is nondecreasing in j. We exploit this fact as follows (lightly tested Python).

def minmaxsum(A, k):
    S = [0]  # prefix sums: S[i] = A[0] + ... + A[i - 1]
    for x in A:
        S.append(S[-1] + x)
    n = len(A)
    # V0[i1] is the optimal objective value for splitting the first i1
    # elements into the number of parts processed so far (initially zero)
    V0 = [1e309] * (n + 1)  # 1e309 overflows to float infinity
    V0[0] = 0
    for j in range(k):
        # V1[i1] will be the optimal objective value using j + 1 parts
        V1 = []
        i0 = 0  # candidate position of the last boundary; nondecreasing in i1
        for i1 in range(n + 1):
            # advance i0 while doing so does not worsen the objective;
            # the while loop is amortized constant-time
            while i0 < n and \
                    max(V0[i0], S[i1] - S[i0]) >= \
                        max(V0[i0 + 1], S[i1] - S[i0 + 1]):
                i0 += 1
            V1.append(max(V0[i0], S[i1] - S[i0]))
        V0 = V1
    return V0[n]
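
For example, on the array from the question:

print(minmaxsum([2, 1, 5, 1, 2, 2, 2], 3))  # prints 6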

This solution can likely be improved further, to close to linear time on sufficiently "unlumpy" inputs, by observing that sum(A) / k is a lower bound on the cost of any solution. Thus, for a solution whose last part starts at index i, we have the lower bound max(sum(A[:i]) / (k - 1), sum(A[i:])). Using this bound, we may be able to use binary search to find the likeliest split points and then evaluate only a handful of the DP entries, using the bound to rule out the rest.
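As a small illustrative sketch of that bound (the helper name split_lower_bounds and its exact shape are my own, hypothetical additions; it assumes k > 1 and nonnegative elements), one could precompute a per-split lower bound and use it to discard split positions whose bound already exceeds the best value found so far:

def split_lower_bounds(A, k):
    # bounds[i] is a lower bound on the objective of any solution
    # whose last part is exactly A[i:]
    total = sum(A)
    bounds = []
    prefix = 0  # sum of A[:i]
    for i in range(len(A) + 1):
        # the first i elements must fit into k - 1 parts, the rest into one part
        bounds.append(max(prefix / (k - 1), total - prefix))
        if i < len(A):
            prefix += A[i]
    return bounds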
