
Market basket analysis in Python for a large transaction dataset

When applying the apriori (support >= 0.01) and association_rules functions from Python's mlxtend package to transaction data with 4.2 lakh+ (420,000+) rows, stored as a sparse matrix, generating the frequent itemsets and association rules takes too much time.

Sample transaction sparse matrix (pandas DataFrame), input data for MBA:

Invoice no.  Shirt  T-shirt  Jeans  Footwear
          1      1        1      0         0
          2      0        0      1         0
          3      0        1      0         1

a) Is there any way to optimize the representation of the transaction data sparse matrix before applying MBA?

b) Are there any alternate, more efficient representations of transaction data?

The apriori algorithm receives a list of lists, where each inner list is a transaction. Are you passing a list of transactions? For example:

transactions = [['milk', 'bread', 'water'], ['coffee', 'sugar'], ['burgers', 'eggs']]

Here you have a list of transactions (lists), which you can then pass to apriori:

import logging
import time

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules

support_threshold = 0.004

# one-hot encode the list of transactions into a boolean DataFrame
te = TransactionEncoder()
te_ary = te.fit(transactions).transform(transactions)
df = pd.DataFrame(te_ary, columns=te.columns_)

logging.debug("Calculating itemsets according to support...")
# time.clock() was removed in Python 3.8; use perf_counter() instead
start_time = time.perf_counter()
# apriori
frequent_itemsets = apriori(df, min_support=support_threshold, use_colnames=True)
end_time = time.perf_counter()
time_apriori = (end_time - start_time) / 60
apriori_decimals = "%.2f" % round(time_apriori, 2)
print("\n\nCompleted in %s minutes\n" % apriori_decimals)

print(frequent_itemsets)  # DataFrame with the frequent itemsets

lift = association_rules(frequent_itemsets, metric="lift", min_threshold=1)
print(lift)  # DataFrame with confidence, lift, leverage and conviction metrics

Regarding the minimum support threshold and the time apriori takes: with small min_support values we get many frequent itemsets and association rules, and the algorithm needs time to compute them. This is one of the well-known limitations of the algorithm.

You can find an overall explanation of how the apriori algorithm works here; some highlights:

Apriori uses a "bottom-up" approach, where frequent subsets are extended one item at a time (known as candidate generation). Then groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found.

Apriori uses breadth-first search and a hash tree structure to count candidate itemsets efficiently. It generates candidate itemsets of length k from itemsets of length k-1, then prunes the candidates that have an infrequent sub-pattern. By the downward closure lemma, the candidate set contains all frequent k-length itemsets. After that, it scans the transaction database to determine which candidates are actually frequent.

As we can see, for a dataset with a large number of frequent items, or with a low support value, the candidate itemsets become very large.

These large candidate sets require a lot of memory to store. Moreover, apriori scans the entire database multiple times to compute the frequency of each k-itemset. So apriori can be very slow and inefficient, especially when memory is limited and the number of transactions is large.

For example, I ran the apriori algorithm on a list of 25,900 transactions with a min_support value of 0.004, and it took about 2.5 hours to produce the output.
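The level-wise candidate generation, downward-closure pruning, and repeated database scans described above can be sketched in plain Python. This is a didactic sketch only, not mlxtend's optimized implementation:

```python
from itertools import combinations

def apriori_sketch(transactions, min_support):
    """Level-wise ("bottom-up") frequent itemset mining.

    transactions: list of sets; min_support: minimum fraction of transactions.
    Returns a dict mapping frozenset -> support.
    """
    n = len(transactions)

    def support(itemset):
        # full database scan: this is what apriori repeats at every level
        return sum(itemset <= t for t in transactions) / n

    # level 1: frequent single items
    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]): s for i in items
                if (s := support(frozenset([i]))) >= min_support}
    result = dict(frequent)

    k = 2
    while frequent:
        # candidate generation: join frequent (k-1)-itemsets one item at a time
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # downward-closure pruning: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        # scan the database to count the surviving candidates
        frequent = {c: s for c in candidates if (s := support(c)) >= min_support}
        result.update(frequent)
        k += 1
    return result
```

Counting every candidate with a full pass over the transactions is exactly why the algorithm degrades on large databases with low support thresholds.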

For a more detailed explanation of the code, see the mlxtend apriori user guide.

Use the fpgrowth algorithm, which is roughly 5x faster than the original apriori on large datasets.

I tried both on 1.4 million transactions with 200 unique items: apriori took more than 4 hours, while fpgrowth generated the frequent itemsets in under 5 minutes at the same low minimum support value.

mlxtend version >= 0.17 provides an fpgrowth implementation that generates the same results as apriori while saving time and memory. Your one-hot-encoded input is already the accepted input format. Link: http://rasbt.github.io/mlxtend/user_guide/frequent_patterns/fpgrowth/

from mlxtend.frequent_patterns import fpgrowth
from mlxtend.frequent_patterns import association_rules

# same one-hot DataFrame as before; fpgrowth is a drop-in replacement for apriori
frequent_itemsets = fpgrowth(df, min_support=0.6, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
