
How to efficiently serialize a scikit-learn classifier

What's the most efficient way to serialize a scikit-learn classifier?

I'm currently using Python's standard pickle module to serialize a text classifier, but this results in a monstrously large pickle. The serialized object can be 100MB or more, which seems excessive and takes a while to generate and store. I've done similar work with Weka, and the equivalent serialized classifier is usually just a couple of MB.

Is scikit-learn possibly caching the training data, or other extraneous info, in the pickle? If so, how can I speed up and reduce the size of serialized scikit-learn classifiers?

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

classifier = Pipeline([
    ('vectorizer', CountVectorizer(ngram_range=(1, 4))),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC())),
])
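A minimal sketch of the kind of pickling step described above; train_texts and train_labels are hypothetical placeholders for the actual training data:

import pickle

# Fit on the (placeholder) training data before serializing.
classifier.fit(train_texts, train_labels)

# Serialize with the standard pickle module and check the payload size.
payload = pickle.dumps(classifier)
print(f"pickle size: {len(payload) / 1e6:.1f} MB")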

For large text datasets, use the hashing trick: replace the CountVectorizer (or TfidfVectorizer) with a HashingVectorizer, potentially stacked with a TfidfTransformer in the pipeline. The model will be much faster to pickle because it no longer has to store the vocabulary dict, as discussed recently in this question:

How can I reduce memory usage of Scikit-Learn Vectorizers?
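A minimal sketch of that swap, keeping the rest of the pipeline unchanged (n_features is shown at its default of 2**20; because HashingVectorizer is stateless, there is no vocabulary_ dict to pickle):

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

classifier = Pipeline([
    # Stateless hashing vectorizer: no vocabulary dict is stored,
    # so the fitted pipeline pickles much smaller and faster.
    ('vectorizer', HashingVectorizer(ngram_range=(1, 4), n_features=2**20)),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC())),
])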

You can also use joblib.dump and pass in a compression level. I noticed my classifier pickle dumps shrinking by a factor of ~16 with compress=3.
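For example, assuming a standalone joblib install (on older scikit-learn versions it was also available as sklearn.externals.joblib):

import joblib

# Dump with zlib compression level 3; compression shrinks the file
# considerably at the cost of slightly slower save/load.
joblib.dump(classifier, 'classifier.joblib', compress=3)

# Later, load it back:
classifier = joblib.load('classifier.joblib')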
