
Multi-label classification

I have a dataset that looks like this:

      A         B         C         D       sex        weight
  0.955136  0.802256  0.317182 -0.708615  female       normal
  0.463615 -0.860053 -0.136408 -0.892888    male        obese
 -0.855532 -0.181905 -1.175605  1.396793  female   overweight
 -1.236216 -1.329982  0.531241  2.064822    male  underweight
 -0.970420 -0.481791 -0.995313  0.672131    male        obese

Given the features X = [A, B, C, D] and the labels y = [sex, weight], I would like to train a machine learning model that predicts a person's sex and weight from the features A, B, C and D. How can this be done? Can you recommend any libraries or reading material that would help me achieve this? For testing purposes, the dataset can be generated artificially with the following code:

import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
df['sex']  = [np.random.choice(['male', 'female']) for x in range(len(df))]
df['weight'] = [np.random.choice(['underweight', 
        'normal', 'overweight', 'obese']) for x in range(len(df)) ]
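
One way to train a single model that predicts both labels at once is scikit-learn's MultiOutputClassifier, which fits one copy of a base classifier per target column and accepts the string labels directly. The sketch below is only an illustration against the generated DataFrame; the choice of RandomForestClassifier, the 60/40 split and the random_state are assumptions, not part of the original question:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

X = df[list('ABCD')]
y = df[['sex', 'weight']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

# one RandomForestClassifier is fitted per target column (sex, weight)
clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X_train, y_train)

pred = clf.predict(X_test)   # shape (n_samples, 2); columns follow y's column order
for j, col in enumerate(y.columns):
    print(col, accuracy_score(y_test.iloc[:, j], pred[:, j]))

MultiOutputClassifier treats the two targets as independent problems; scikit-learn's ClassifierChain is the related estimator if you want later targets to condition on earlier predictions.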

You need to fix the labels, converting them from string values to integers:

import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
#regenerate the labels directly as integer codes
df['sex'] = [np.random.choice([0, 1]) for x in range(len(df))]
df['weight'] = [np.random.choice(list(range(4))) for x in range(len(df))]
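
If you would rather keep the original string-valued columns and convert them in place instead of regenerating the data, scikit-learn's LabelEncoder does the same mapping. This is only a sketch of that alternative (pandas' factorize would work just as well):

from sklearn.preprocessing import LabelEncoder

encoders = {}
for col in ['sex', 'weight']:
    le = LabelEncoder()
    df[col] = le.fit_transform(df[col])   # e.g. female/male -> 0/1
    encoders[col] = le                    # keep it to map predictions back

# later: encoders['weight'].inverse_transform(predicted_ints) restores the strings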

%matplotlib inline
from pandas import DataFrame, concat
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
trg = df[['sex','weight']]
trn = df.drop(['sex','weight'], axis=1)
#list of different models
models = [LinearRegression(),
          RandomForestRegressor(n_estimators=100, max_features ='sqrt'),
          SVR(kernel='linear'),
          LogisticRegression()
          ]

Xtrn, Xtest, Ytrn, Ytest = train_test_split(trn, trg, test_size=0.4)
TestModels = DataFrame()
tmp = {}
#for each model in list
for model in models:
    #get name
    m = str(model)
    tmp['Model'] = m[:m.index('(')]    
    #for each column of the target frame
    for i in range(Ytrn.shape[1]):
        #fit the model on this target column
        model.fit(Xtrn, Ytrn.iloc[:,i])
        #coefficient of determination on the held-out test data
        tmp['R2_Y%s'%str(i+1)] = r2_score(Ytest.iloc[:,i], model.predict(Xtest))
    #store the results (DataFrame.append was removed in pandas 2.0, so use concat)
    TestModels = concat([TestModels, DataFrame([tmp])], ignore_index=True)
#make an index by model name
TestModels.set_index('Model', inplace=True)

fig, axes = plt.subplots(ncols=2, figsize=(10,4))
TestModels.R2_Y1.plot(ax=axes[0], kind='bar', title='R2_Y1')
TestModels.R2_Y2.plot(ax=axes[1], kind='bar', color='green', title='R2_Y2')

