
Using DictVectorizer with sklearn DecisionTreeClassifier

I'm trying to build a decision tree with Python and sklearn. My working approach was like this:

import pandas as pd
import numpy as np
from sklearn import tree

for col in set(train.columns):
    if train[col].dtype == np.dtype('object'):
        s = np.unique(train[col].values)
        mapping = pd.Series([x[0] for x in enumerate(s)], index = s)
        train_fea = train_fea.join(train[col].map(mapping))
    else:
        train_fea = train_fea.join(train[col])

# note: compute_importances was removed in later scikit-learn releases;
# feature_importances_ is available after fitting without it
dt = tree.DecisionTreeClassifier(min_samples_split=3,
                                 compute_importances=True, max_depth=5)
dt.fit(train_fea, labels)

Now I'm trying to do the same thing with DictVectorizer, but my code doesn't work:

from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer(sparse=False)
train_fea = vec.fit_transform([dict(enumerate(sample)) for sample in train])

dt = tree.DecisionTreeClassifier(min_samples_split=3,
                                 compute_importances=True, max_depth=5)
dt.fit(train_fea, labels)

I get an error on the last line: "ValueError: Number of labels=332448 does not match number of samples=55". As I learned from the documentation, DictVectorizer is designed to transform nominal features into numerical ones. What am I doing wrong?

Corrected (thanks to ogrisel for pushing me to make a full example):

import pandas as pd
import numpy as np
from sklearn import tree

##################################
#  working example
train = pd.DataFrame({'a' : ['a', 'b', 'a'], 'd' : ['e', 'e', 'f'],
                      'b' : [0, 1, 1], 'c' : ['b', 'c', 'b']})
columns = set(train.columns)
columns.remove('b')
train_fea = train[['b']]

for col in columns:
    if train[col].dtype == np.dtype('object'):
        s = np.unique(train[col].values)
        mapping = pd.Series([x[0] for x in enumerate(s)], index = s)
        train_fea = train_fea.join(train[col].map(mapping))
    else:
        train_fea = train_fea.join(train[col])

dt = tree.DecisionTreeClassifier(min_samples_split=3,
                                 compute_importances=True, max_depth=5)
dt.fit(train_fea, train['c'])
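
For reference, printing train_fea shows the integer codes this loop produces; the exact column order may vary between runs because columns is a set:

print(train_fea)
#    b  a  c  d
# 0  0  0  0  0
# 1  1  1  1  0
# 2  1  0  0  1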

##########################################
# example with DictVectorizer and error

from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer(sparse=False)
train_fea = vec.fit_transform([dict(enumerate(sample)) for sample in train])

dt = tree.DecisionTreeClassifier(min_samples_split=3,
                                 compute_importances=True, max_depth=5)
dt.fit(train_fea, train['c'])
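
For reference, checking the shapes in this toy example makes the mismatch explicit: iterating over the DataFrame yields its four column names rather than its three rows, so the vectorizer sees four "samples":

print(train_fea.shape)   # (4, 4): one "sample" per column name
print(train['c'].shape)  # (3,): one label per row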

The last code block was fixed with the help of ogrisel:

import pandas as pd
from sklearn import tree
from sklearn.feature_extraction import DictVectorizer
from sklearn import preprocessing

train = pd.DataFrame({'a' : ['a', 'b', 'a'], 'd' : ['e', 'x', 'f'],
                      'b' : [0, 1, 1], 'c' : ['b', 'c', 'b']})

# encode labels
labels = train['c']
le = preprocessing.LabelEncoder()
labels_fea = le.fit_transform(labels)
# vectorize training data
del train['c']
train_as_dicts = [dict(r.items()) for _, r in train.iterrows()]
train_fea = DictVectorizer(sparse=False).fit_transform(train_as_dicts)
# use decision tree
dt = tree.DecisionTreeClassifier()
dt.fit(train_fea, labels_fea)
# map encoded predictions back to the original label strings
predictions = le.inverse_transform(dt.predict(train_fea).astype(int))
predictions_as_dataframe = train.join(pd.DataFrame({"Prediction": predictions}))
print(predictions_as_dataframe)
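
One caveat with this version: the DictVectorizer instance is thrown away after fit_transform, but the same fitted vectorizer (and label encoder) is needed to score unseen rows. A minimal sketch, continuing from the script above with a hypothetical new row:

# keep the fitted vectorizer so new rows get the same columns
vec = DictVectorizer(sparse=False)
train_fea = vec.fit_transform(train_as_dicts)
dt.fit(train_fea, labels_fea)

new_row = {'a': 'b', 'b': 0, 'd': 'f'}  # hypothetical unseen sample
new_fea = vec.transform([new_row])      # transform only, don't refit
print(le.inverse_transform(dt.predict(new_fea).astype(int)))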

Everything works now.

The way you enumerate your samples is not meaningful. Just print them to make it obvious:

>>> import pandas as pd
>>> train = pd.DataFrame({'a' : ['a', 'b', 'a'], 'd' : ['e', 'e', 'f'],
...                       'b' : [0, 1, 1], 'c' : ['b', 'c', 'b']})
>>> samples = [dict(enumerate(sample)) for sample in train]
>>> samples
[{0: 'a'}, {0: 'b'}, {0: 'c'}, {0: 'd'}]

Syntactically this is a list of dicts, but nothing like what you would expect: iterating over a pandas DataFrame yields its column names, not its rows. Try this instead:

>>> train_as_dicts = [dict(r.items()) for _, r in train.iterrows()]
>>> train_as_dicts
[{'a': 'a', 'c': 'b', 'b': 0, 'd': 'e'},
 {'a': 'b', 'c': 'c', 'b': 1, 'd': 'e'},
 {'a': 'a', 'c': 'b', 'b': 1, 'd': 'f'}]
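
Equivalently, pandas can build the same list of dicts in one call:

>>> train_as_dicts = train.to_dict(orient='records')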

This looks much better; now let's try to vectorize those dicts:

>>> from sklearn.feature_extraction import DictVectorizer

>>> vectorizer = DictVectorizer()
>>> vectorized_sparse = vectorizer.fit_transform(train_as_dicts)
>>> vectorized_sparse
<3x7 sparse matrix of type '<type 'numpy.float64'>'
    with 12 stored elements in Compressed Sparse Row format>

>>> vectorized_array = vectorized_sparse.toarray()
>>> vectorized_array
array([[ 1.,  0.,  0.,  1.,  0.,  1.,  0.],
       [ 0.,  1.,  1.,  0.,  1.,  1.,  0.],
       [ 1.,  0.,  1.,  1.,  0.,  0.,  1.]])

To get the meaning of each column, ask the vectorizer:

>>> vectorizer.get_feature_names()
['a=a', 'a=b', 'b', 'c=b', 'c=c', 'd=e', 'd=f']
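
To line the array up with those names, you can wrap it in a DataFrame (note: get_feature_names() was renamed to get_feature_names_out() in scikit-learn 1.0):

>>> pd.DataFrame(vectorized_array, columns=vectorizer.get_feature_names())

This shows, for example, that the third row decodes back to a=a, b=1, c=b, d=f.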

vec.fit_transform returns a sparse matrix by default, and IIRC DecisionTreeClassifier doesn't play well with that.

Try train_fea = train_fea.toarray() before passing it to DecisionTreeClassifier.
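
A minimal sketch of that fix, reusing train_as_dicts and labels_fea from the corrected script above (newer scikit-learn releases can also fit trees on sparse input directly, so this mainly matters on older versions):

vec = DictVectorizer()  # sparse=True is the default
train_fea = vec.fit_transform(train_as_dicts).toarray()  # densify for the tree
dt = tree.DecisionTreeClassifier()
dt.fit(train_fea, labels_fea)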
