
Pick a subset of items minimizing the count of the most frequent of the selected items' labels

Problem

I want to pick a fixed-size subset from a list of items such that the count of the most frequently occurring label among the selected items is minimized. In concrete terms, I have a DataFrame of 10,000 items, generated as follows.

import random
import pandas as pd

def RandLet():
    # Pick one of the 7 label letters uniformly at random
    alphabet = "ABCDEFG"
    return random.choice(alphabet)

items = pd.DataFrame([{"ID": i, "Label1": RandLet(), "Label2": RandLet(), "Label3": RandLet()} for i in range(10000)])
items.head(3)

Each item has 3 labels. The labels are letters within ABCDEFG, and the order of the labels doesn't matter. An item may be tagged multiple times with the same label.
[Example of the first 3 rows]

   ID Label1 Label2 Label3
0   0      G      B      D
1   1      C      B      C
2   2      C      A      B

From this list, I want to pick 1000 items in a way that minimizes the number of occurrences of the most frequently appearing label within those items.

For example, if my DataFrame consisted only of the above 3 items and I wanted to pick 2 of them: picking items #1 and #2 makes the label 'C' appear 3 times, 'B' 2 times, 'A' once, and all other labels 0 times, so the maximum is 3. I could have done better by picking items #0 and #2, where the most frequent label, 'B', appears only 2 times. Since 2 is less than 3, picking items #0 and #2 is better than picking items #1 and #2.
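To make the objective concrete, here is a small scoring helper; the name max_label_count and the use of collections.Counter are my own additions for illustration, not part of the original question.

from collections import Counter

def max_label_count(selected):
    # Tally every label across the Label1..Label3 columns of the
    # selected rows and return the count of the most frequent one.
    counts = Counter()
    for col in ("Label1", "Label2", "Label3"):
        counts.update(selected[col])
    return max(counts.values())

# With the three example rows shown above:
# max_label_count(items.loc[[1, 2]])  -> 3  ('C' appears 3 times)
# max_label_count(items.loc[[0, 2]])  -> 2  ('B' appears 2 times)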

In the case where there are multiple ways to pick 1000 items such that the count of the maximum label occurrence is minimized, returning any of those selections is fine.

What I've got

To me, this feels similar to a knapsack problem in len("ABCDEFG") = 7 dimensions. I want to put 1000 items in the knapsack, and each item's size along a given dimension is the number of times that letter appears among the item's labels. To that end, I've built this function to convert my list of items into a list of sizes for the knapsack.

def ReshapeItems(items):
    alphabet = "ABCDEFG"
    label_cols = [c for c in items.columns if "Label" in c]
    item_rebuilder = []
    for i, row in items.iterrows():
        letter_counter = {}
        for letter in alphabet:
            # Number of this item's labels equal to the current letter
            letter_counter[letter] = (row[label_cols] == letter).sum()
        letter_counter["ID"] = row["ID"]
        item_rebuilder.append(letter_counter)
    items2 = pd.DataFrame(item_rebuilder)
    return items2

items2 = ReshapeItems(items)
items2.head(3)

[Example of the first 3 rows of items2]

     A  B  C  D  E  F  G   ID
0    0  1  0  1  0  0  1    0
1    0  1  2  0  0  0  0    1
2    1  1  1  0  0  0  0    2
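As an aside, the same count table can be built without a Python-level loop; a vectorized sketch of the same reshape (my own addition, not from the original post):

label_cols = [c for c in items.columns if "Label" in c]
items2_fast = (
    pd.get_dummies(items[label_cols].stack())   # one-hot encode each label occurrence
    .groupby(level=0).sum()                     # sum the dummies per original row
    .reindex(columns=list("ABCDEFG"), fill_value=0)  # make sure all 7 letters appear
    .assign(ID=items["ID"])
)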

Unfortunately, at that point, I am completely stuck. I think the point of knapsack problems is to maximize some sort of value while keeping the sum of the selected items' sizes under some limit. Here, however, my problem is the opposite: I want to minimize the largest of the per-label sums of the selected sizes, subject to picking a fixed number of items.
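For what it's worth, this min-max objective can be written down exactly as a small integer program: choose exactly n items (one binary variable per item) and minimize an auxiliary bound t that every label's total count must stay under. A minimal sketch, assuming the third-party PuLP package and its bundled CBC solver; the helper name pick_min_max_subset and its parameters are illustrative, not from the original post.

import pulp

def pick_min_max_subset(items2, n_pick, labels="ABCDEFG"):
    # items2 is the per-letter count table produced by ReshapeItems
    n = len(items2)
    prob = pulp.LpProblem("min_max_label_count", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", range(n), cat="Binary")  # x[i] = 1 if item i is picked
    t = pulp.LpVariable("t", lowBound=0)  # upper bound on every label's total count
    prob += t  # objective: minimize the most frequent label's count
    prob += pulp.lpSum(x[i] for i in range(n)) == n_pick
    for letter in labels:
        counts = items2[letter].tolist()
        prob += pulp.lpSum(counts[i] * x[i] for i in range(n)) <= t
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [i for i in range(n) if x[i].value() > 0.5]
    return int(t.value()), items2.iloc[chosen]

Whether a model with 10,000 binary variables solves quickly depends on the solver; the randomized answer below trades optimality for speed.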

What I'm looking for

Although a function that takes in items or items2 and returns a subset of these items that meets my specifications would be ideal, I'd be happy to accept any sufficiently detailed answer that points me in the right direction.

Answer

Using a different approach, here is my take on your interesting question.

def get_best_subset(df, n_rows, key_cols, iterations=10_000):
    """Subset df in such a way that the frequency of the most
    frequent values in the key columns is minimized.

    Args:
        df: input dataframe.
        n_rows: number of rows in the subset.
        key_cols: columns to consider.
        iterations: number of random samples to try. Defaults to 10_000.

    Returns:
        Tuple of the lowest frequency found and the corresponding
        subset of n_rows rows of the input dataframe.

    """
    lowest_frequency = df.shape[0] * df.shape[1]  # unreachable upper bound
    best_df = pd.DataFrame([])

    # Draw random subsets and keep the best one found so far
    for _ in range(iterations):
        sample_df = df.sample(n=n_rows)
        # Count values in each column, concat and sum counts, get max count
        frequency = (
            pd.concat([sample_df[col].value_counts() for col in key_cols])
            .pipe(lambda df_: df_.groupby(df_.index).sum())
            .max()
        )
        if frequency < lowest_frequency:
            lowest_frequency = frequency
            best_df = sample_df
    return lowest_frequency, best_df.sort_values(by=["ID"]).reset_index(drop=True)

And so, with the toy dataframe constructor you provided:

lowest_frequency, best_df = get_best_subset(
    items, 1_000, ["Label1", "Label2", "Label3"]
)

print(lowest_frequency)
# 433

For context: any choice of 1,000 items carries 3 × 1,000 = 3,000 label occurrences spread across 7 letters, so the most frequent label must appear at least ⌈3000/7⌉ = 429 times; 433 is already close to that lower bound.

print(best_df)
# Output
       ID Label1 Label2 Label3
0       0      F      D      G
1      17      D      G      B
2      19      B      A      B
3      34      A      F      A
4      38      F      E      D
..    ...    ...    ...    ...
995  9965      G      C      F
996  9967      B      A      D
997  9969      E      F      G
998  9988      D      D      C
999  9998      C      D      E

[1000 rows x 4 columns]
