
How to do regression as opposed to classification using logistic regression and scikit learn

The target variable that I need to predict is a probability (as opposed to a label). The corresponding column in my training data is also in this form. I do not want to lose information by thresholding the targets to turn this into a classification problem.

If I train the logistic regression classifier with binary labels, scikit-learn's logistic regression API allows obtaining the probabilities at prediction time. However, I need to train it with probabilities. Is there a way to do this in scikit-learn, or with another suitable Python package that scales to 100K data points of 1K dimensions?

This is an excellent question because (contrary to what people might believe) there are many legitimate uses of logistic regression as.... regression!

There are three basic approaches you can use if you insist on true logistic regression, and two additional options that should give similar results. They all assume your target output is between 0 and 1. Most of the time you will have to generate training/test sets "manually," unless you are lucky enough to be using a platform that supports SGD-R with custom kernels and X-validation support out-of-the-box.

Note that given your particular use case, the "not quite true logistic regression" options may be necessary. The downside of these approaches is that it takes more work to see the weight/importance of each feature, in case you want to reduce your feature space by removing weak features.

Direct Approach using Optimization

If you don't mind doing a bit of coding, you can just use scipy's optimize functions. This is dead simple:

  1. Create a function of the following type: y_o = inverse-logit(a_0 + a_1*x_1 + a_2*x_2 + ...)

where inverse-logit(z) = exp(z) / (1 + exp(z))

  2. Use scipy minimize to minimize -1 * [y_t*log(y_o) + (1-y_t)*log(1 - y_o)], summed over all data points. To do this you have to set up a function that takes (a_0, a_1, ...) as parameters, builds the prediction above, and then calculates the loss.
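A minimal sketch of these two steps, using toy data in place of your 100K x 1K matrix (the feature matrix `X` and probability targets `y_t` below are placeholders, not part of the original answer); `expit` is scipy's numerically stable inverse-logit:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # toy features
y_t = expit(X @ np.array([1.0, -2.0, 0.5]))            # toy probability targets

def neg_log_likelihood(params, X, y_t):
    a0, a = params[0], params[1:]
    y_o = expit(a0 + X @ a)                            # step 1: inverse-logit of a linear model
    eps = 1e-12                                        # guard against log(0)
    return -np.sum(y_t * np.log(y_o + eps) + (1 - y_t) * np.log(1 - y_o + eps))

x0 = np.zeros(X.shape[1] + 1)                          # intercept + one weight per feature
res = minimize(neg_log_likelihood, x0, args=(X, y_t), method="L-BFGS-B")  # step 2
print(res.x)                                           # fitted intercept and coefficients
```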

Stochastic Gradient Descent with Custom Loss

If you happen to be using a platform that has SGD regression with a custom loss, then you can just use that, specifying a loss of -[y_t*log(y_o) + (1-y_t)*log(1 - y_o)].

One way to do this is just to fork scikit-learn and add log loss to the regression SGD solver.
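To make the idea concrete, here is a hand-rolled sketch of what an SGD solver with this log loss would do internally; the toy data, learning rate, and epoch count are placeholders of mine, not the answerer's code:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # toy features
y_t = expit(X @ np.array([1.0, -2.0, 0.5]))            # toy probability targets

w, b = np.zeros(X.shape[1]), 0.0
lr, n_epochs = 0.1, 50
for _ in range(n_epochs):
    for i in rng.permutation(len(X)):
        y_o = expit(b + X[i] @ w)
        grad = y_o - y_t[i]            # gradient of the cross-entropy w.r.t. the logit
        w -= lr * grad * X[i]
        b -= lr * grad
print(b, w)                            # should roughly recover (0, [1, -2, 0.5])
```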

Convert to Classification Problem

You can convert your problem to a classification problem by oversampling, as described by @jo9k. But note that even in this case you should not use standard X-validation because the data are not independent anymore. You will need to break up your data manually into train/test sets and oversample only after you have broken them apart.
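A hedged sketch of that "split first, then oversample" workflow, with toy data and a hypothetical `oversample()` helper that replicates each row in proportion to its probability target (none of these names come from the original answer):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # toy features
p = rng.uniform(size=500)                     # toy probability targets

def oversample(X, p, n_copies=10):
    """Replicate each row n_copies times; label round(n_copies * p) of the copies 1."""
    ones = np.round(n_copies * p).astype(int)
    X_rep = np.repeat(X, n_copies, axis=0)
    y_rep = np.concatenate([[1] * k + [0] * (n_copies - k) for k in ones])
    return X_rep, y_rep

X_tr, X_te, p_tr, p_te = train_test_split(X, p, test_size=0.2, random_state=0)
X_os, y_os = oversample(X_tr, p_tr)           # oversample the training split only
clf = LogisticRegression(max_iter=1000).fit(X_os, y_os)
pred = clf.predict_proba(X_te)[:, 1]          # compare these against p_te directly
```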

Convert to SVM

(Edit: I did some testing and found that on my test sets sigmoid kernels were not behaving well. I think they require some special pre-processing to work as expected. An SVM with a sigmoid kernel is equivalent to a 2-layer tanh neural network, which should be amenable to a regression task where the training outputs are probabilities. I might come back to this after further review.)

You should get similar results to logistic regression using an SVM with a sigmoid kernel. You can use scikit-learn's SVR and specify the kernel as sigmoid. You may run into performance difficulties with 100,000s of data points across 1000 features.... which leads me to my final suggestion:
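A brief sketch of this route with toy data; the scaling step and the C value are arbitrary choices of mine, and (per the caveat above) the sigmoid kernel may need extra pre-processing to behave well:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                          # toy features
y_t = 1 / (1 + np.exp(-(X @ rng.normal(size=5))))      # toy probability targets

model = make_pipeline(StandardScaler(), SVR(kernel="sigmoid", C=1.0))
model.fit(X, y_t)
pred = np.clip(model.predict(X), 0, 1)                 # SVR output is unbounded; clip to [0, 1]
```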

Convert to SVM using Approximated Kernels

This method will give results a bit further away from true logistic regression, but it is extremely performant. The process is the following:

  1. Use scikit-learn's RBFSampler to explicitly construct an approximate RBF-kernel feature map for your dataset.

  2. Process your data through that feature map and then use scikit-learn's SGDRegressor with an epsilon-insensitive loss (its SVM-style regression loss) to realize a super-performant SVM on the transformed data.

The above is laid out with code here
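As the original code link is not preserved here, the following is a hedged reconstruction of the pipeline with toy data; gamma, n_components, and the other hyperparameters are placeholders:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                      # stand-in for the 100K x 1K data
y_t = 1 / (1 + np.exp(-X[:, 0]))                     # toy probability targets

model = make_pipeline(
    StandardScaler(),
    RBFSampler(gamma=0.1, n_components=300, random_state=0),   # explicit approximate RBF map
    SGDRegressor(loss="epsilon_insensitive", penalty="l2", max_iter=1000),
)
model.fit(X, y_t)
pred = np.clip(model.predict(X), 0, 1)               # clip predictions back to [0, 1]
```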

I want the regressor to use the structure of the problem. One such structure is that the targets are probabilities.

You can't have cross-entropy loss with non-indicator probabilities in scikit-learn; this is not implemented and not supported by the API. It is a limitation of scikit-learn.

In general, according to scikit-learn's docs a loss function is of the form Loss(prediction, target), where prediction is the model's output, and target is the ground-truth value.

In the case of logistic regression, prediction is a value on (0,1) (i.e., a "soft label"), while target is 0 or 1 (i.e., a "hard label").


For logistic regression you can approximate probability targets by oversampling instances according to the probabilities of their labels. E.g. if for a given sample class_1 has probability 0.2 and class_2 has probability 0.8, then generate 10 training instances (copies of the sample): 8 with class_2 as the "ground truth target label" and 2 with class_1.

Obviously this is a workaround and not extremely efficient, but it should work properly.
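A literal rendering of the example above: one sample whose class_1 probability is 0.2 (class_2 probability 0.8) becomes 10 copies, 8 labelled class_2 (1) and 2 labelled class_1 (0); the feature values are arbitrary toy numbers:

```python
import numpy as np

x = np.array([1.3, -0.7, 2.1])                        # one feature vector
p_class_2 = 0.8
n_copies = 10
n_class_2 = int(round(n_copies * p_class_2))          # 8 copies get label 1

X_rep = np.tile(x, (n_copies, 1))                     # 10 identical rows
y_rep = np.array([1] * n_class_2 + [0] * (n_copies - n_class_2))  # 8 ones, 2 zeros
```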

If you're OK with the upsampling approach, you can pip install eli5 and use eli5.lime.utils.fit_proba with a LogisticRegression classifier from scikit-learn.


An alternative solution is to implement (or find an implementation of) logistic regression in TensorFlow, where you can define the loss function as you like.
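A minimal TensorFlow/Keras sketch of this idea (my own illustration, not the answerer's code): Keras's binary cross-entropy accepts soft targets, so logistic regression can be trained directly on probabilities; the data, layer size, and optimizer settings are placeholders:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")          # toy features
y_t = (1 / (1 + np.exp(-X[:, 0]))).astype("float32")       # toy probability targets

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),         # logistic regression layer
])
model.compile(optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy())
model.fit(X, y_t, epochs=10, batch_size=32, verbose=0)      # soft labels used directly
pred = model.predict(X, verbose=0).ravel()                  # predicted probabilities
```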


In compiling this solution I drew on the answers to "scikit-learn - multinomial logistic regression with probabilities as a target variable" and "scikit-learn classification on soft labels". I recommend those for more insight.
