
Logistic Regression - can't use categorical variables to train my model

I want to train my model using these categorical variables, with lifequality as my target variable:

SelectedColumns = ['workOrganiz', 'education', 'maritalSt', 'jobType', 'ageGroup', 'workHoursPeriod', 'sex', 'lifequality']

I try to run a logistic regression like this:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

dfML = df[SelectedColumns]
list_of_results = []
# stratified train/test split
X = dfML.iloc[:, :-1]   # all features except the last column
y = dfML.iloc[:, -1]    # target is the last column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=15, stratify=y)
clf = LogisticRegression()
lrm = clf.fit(X_train, y_train)
y_pred = lrm.predict(X_test)

but I get the following error:

ValueError: could not convert string to float: 'Private'

What am I doing wrong? Also, using dummies makes my model report 100% precision and accuracy:

dfML = df[SelectedColumns]
dfML = pd.get_dummies(dfML)

If I remove the `dfML=df[SelectedColumns]` line, the 100% doesn't happen.

Regression algorithms can only work with numbers when computing a categorical prediction, so you need a workaround to use categorical variables as predictors. There are different ways, but a simple one is called 'dummy coding'. You can use pandas' get_dummies() to convert each categorical column into multiple 0/1 columns. See https://www.geeksforgeeks.org/how-to-create-dummy-variables-in-python-with-pandas/amp/
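A minimal sketch of the dummy-coding approach, using toy data with the column names from the question (the values are made up for illustration). Note that the features are encoded separately from the target: applying get_dummies() to the whole frame, target included, would put dummy-coded copies of the label into X, which is a likely explanation for the suspicious 100% scores.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Toy stand-in for the real df; column names taken from the question.
df = pd.DataFrame({
    'workOrganiz': rng.choice(['Private', 'Public'], n),
    'education': rng.choice(['HS', 'BSc', 'MSc'], n),
    'sex': rng.choice(['M', 'F'], n),
    'lifequality': rng.choice(['good', 'bad'], n),
})

# Dummy-encode ONLY the feature columns; the target stays as labels,
# so no dummy-coded copy of it can leak into X.
X = pd.get_dummies(df.drop(columns=['lifequality']))
y = df['lifequality']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=15, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

With purely random toy labels the test accuracy should hover near chance level, not 100%, which is what you would expect once the target is kept out of the encoded features.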
