I am using Python 2.7 on Windows. I want to fit a logistic regression model for a classification problem, using T1 and T2 as the features and T3 as the target.
The values of T1 and T2 are shown below, along with my code. The question is: since each sample's T1 feature has dimension 4 and its T2 feature has dimension 1, how should I pre-process them so that scikit-learn's logistic regression can be trained on them correctly?
To be clear: training sample 1 has T1 feature [ 0 -1 -2 -3] and T2 feature [0], training sample 2 has T1 feature [ 1 0 -1 -2] and T2 feature [1], and so on.
import numpy as np
from sklearn import linear_model, datasets
arc = lambda r,c: r-c
T1 = np.array([[arc(r,c) for c in xrange(4)] for r in xrange(5)])
print T1
print type(T1)
T2 = np.array([[arc(r,c) for c in xrange(1)] for r in xrange(5)])
print T2
print type(T2)
T3 = np.array([0,0,1,1,1])
logreg = linear_model.LogisticRegression(C=1e5)
# fit a logistic regression model, using T1 and T2 as features and T3 as the target
logreg.fit(T1+T2, T3)
T1:
[[ 0 -1 -2 -3]
 [ 1  0 -1 -2]
 [ 2  1  0 -1]
 [ 3  2  1  0]
 [ 4  3  2  1]]
T2:
[[0]
 [1]
 [2]
 [3]
 [4]]
You need to concatenate the feature matrices along the column axis with numpy.concatenate; T1 + T2 does not join them, it adds them element-wise via broadcasting.
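To see why T1 + T2 is not what you want: with shapes (5, 4) and (5, 1), NumPy broadcasts T2's single column across the four columns of T1 and adds element-wise, so you still get only 4 columns of corrupted features. A quick self-contained check (the question's arrays are rebuilt here):

```python
import numpy as np

# Rebuild the question's arrays
T1 = np.array([[r - c for c in range(4)] for r in range(5)])  # shape (5, 4)
T2 = np.array([[r - c for c in range(1)] for r in range(5)])  # shape (5, 1)

S = T1 + T2        # broadcasting: T2's column is added to every column of T1
print(S.shape)     # (5, 4) -- still only 4 columns, not 5
print(S[1])        # [ 2  1  0 -1] -- sample 2's T1 features shifted by its T2 value
```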
import numpy as np
from sklearn import linear_model, datasets
arc = lambda r,c: r-c
T1 = np.array([[arc(r,c) for c in xrange(4)] for r in xrange(5)])
T2 = np.array([[arc(r,c) for c in xrange(1)] for r in xrange(5)])
T3 = np.array([0,0,1,1,1])
X = np.concatenate((T1,T2), axis=1)
Y = T3
logreg = linear_model.LogisticRegression(C=1e5)
# fit a logistic regression model, using T1 and T2 as features and T3 as the target
logreg.fit(X, Y)
X_test = np.array([[1, 0, -1, -1, 1],
                   [0, 1, 2, 3, 4]])
print logreg.predict(X_test)
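As a side note, np.hstack (or np.column_stack) builds the same (5, 5) design matrix as np.concatenate with axis=1; which one you use is a matter of taste. A self-contained sketch:

```python
import numpy as np

T1 = np.array([[r - c for c in range(4)] for r in range(5)])  # shape (5, 4)
T2 = np.array([[r - c for c in range(1)] for r in range(5)])  # shape (5, 1)

X = np.hstack((T1, T2))   # equivalent to np.concatenate((T1, T2), axis=1)
print(X.shape)            # (5, 5): 4 columns from T1 plus 1 column from T2
print(X[0])               # [ 0 -1 -2 -3  0]
```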