
Combining different support vector machines into one classifier using a stacked ensemble method in Python

I am developing a model that classifies whether a patient has lung cancer or not. The data are currently divided into right lower, right upper, left upper and left lower segments, and I have trained an SVM for each segment, like

model1 = SVM for right lower
model2 = SVM for right upper
model3 = SVM for left lower
model4 = SVM for left upper.

I have applied normalization techniques and obtained accuracy, precision and recall for each model, and I have also used k-fold cross-validation on each model for evaluation. As the next step, I need to combine all these models into one classifier using a stacked ensemble method. Since the data set differs for each model subset, how do I combine them, and how do I evaluate the final classifier?

Thanks in advance

Maybe this solution could help you.

You can create 4 pipelines that preprocess the same data in different ways, and then combine them using stacking.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier


# One preprocessing function per SVM; each should apply the
# segment-specific preprocessing. Shown here as identity placeholders.
def preprocessing1(X):
  return X  # right lower segment preprocessing goes here

def preprocessing2(X):
  return X  # right upper

def preprocessing3(X):
  return X  # left lower

def preprocessing4(X):
  return X  # left upper

# One pipeline per segment: its preprocessing followed by an SVM
pipeline1 = Pipeline([
  ('prepr', FunctionTransformer(preprocessing1)),
  ('svm', SVC())
])
pipeline2 = Pipeline([
  ('prepr', FunctionTransformer(preprocessing2)),
  ('svm', SVC())
])
pipeline3 = Pipeline([
  ('prepr', FunctionTransformer(preprocessing3)),
  ('svm', SVC())
])
pipeline4 = Pipeline([
  ('prepr', FunctionTransformer(preprocessing4)),
  ('svm', SVC())
])

# Stack the four pipelines; a logistic regression combines their predictions
clf = StackingClassifier(
  estimators=[
    ('pipe1', pipeline1),
    ('pipe2', pipeline2),
    ('pipe3', pipeline3),
    ('pipe4', pipeline4),
  ],
  final_estimator=LogisticRegression(),
)

PS: That's just an example. You can also try a different final estimator.
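
Since each pipeline applies its own preprocessing to the same input, the stacked classifier can be evaluated on that shared input the same way you evaluated the individual SVMs. Here is a minimal sketch of fitting and evaluating it, assuming the combined feature matrix and label vector are named X and y (illustrative names, not part of the original code):

from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import classification_report

# k-fold cross-validation of the whole stacked classifier (X, y are assumed to exist)
scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print(scores.mean(), scores.std())

# or use a held-out test set to get precision/recall of the final classifier
X_train, X_test, y_train, y_test = train_test_split(
  X, y, test_size=0.2, stratify=y, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))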
