
SVM and Random Forest with recall = 0

I am trying to predict one of the two possible values of an "exit" column. I have clean data (about 20 columns and 4k rows of typical customer information such as "sex", "age", ...). In the training dataset, about 20% of customers are labelled "1". I built two models, an SVM and a random forest, but both predict mostly "0" on the test dataset (almost every time). Recall of both models is 0. I attach the code below, where I may be making some silly mistake. Any idea why recall is so low while accuracy is around 80%?

import pandas as pd
import sklearn
from scipy import stats
from scipy.stats import randint as sp_randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, make_scorer, roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, cross_val_score, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def ml_model():
    print('sklearn: %s' % sklearn.__version__)
    df = pd.read_csv('clean_data.csv')
    df.head()
    feat = df.drop(columns=['target'], axis=1)
    label = df["target"]
    x_train, x_test, y_train, y_test = train_test_split(feat, label, test_size=0.3)
    sc_x = StandardScaler()
    x_train = sc_x.fit_transform(x_train)

    # SVC method
    support_vector_classifier = SVC(probability=True)
    # Grid search
    rand_list = {"C": stats.uniform(0.1, 10),
                 "gamma": stats.uniform(0.1, 1)}
    auc = make_scorer(roc_auc_score)
    rand_search_svc = RandomizedSearchCV(support_vector_classifier, param_distributions=rand_list,
                                         n_iter=100, n_jobs=4, cv=3, random_state=42, scoring=auc)
    rand_search_svc.fit(x_train, y_train)
    support_vector_classifier = rand_search_svc.best_estimator_
    cross_val_svc = cross_val_score(estimator=support_vector_classifier, X=x_train, y=y_train, cv=10, n_jobs=-1)
    print("Cross Validation Accuracy for SVM: ", round(cross_val_svc.mean() * 100, 2), "%")
    predicted_y = support_vector_classifier.predict(x_test)
    tn, fp, fn, tp = confusion_matrix(y_test, predicted_y).ravel()
    precision_score = tp / (tp + fp)
    recall_score = tp / (tp + fn)
    print("Recall score SVC: ", recall_score)


    # Random forests
    random_forest_classifier = RandomForestClassifier()
    # Grid search
    param_dist = {"max_depth": [3, None],
                  "max_features": sp_randint(1, 11),
                  "min_samples_split": sp_randint(2, 11),
                  "bootstrap": [True, False],
                  "criterion": ["gini", "entropy"]}
    rand_search_rf = RandomizedSearchCV(random_forest_classifier, param_distributions=param_dist,
                                       n_iter=100, cv=5, iid=False)
    rand_search_rf.fit(x_train, y_train)
    random_forest_classifier = rand_search_rf.best_estimator_
    cross_val_rfc = cross_val_score(estimator=random_forest_classifier, X=x_train, y=y_train, cv=10, n_jobs=-1)
    print("Cross Validation Accuracy for RF: ", round(cross_val_rfc.mean() * 100, 2), "%")
    predicted_y = random_forest_classifier.predict(x_test)
    tn, fp, fn, tp = confusion_matrix(y_test, predicted_y).ravel()
    precision_score = tp / (tp + fp)
    recall_score = tp / (tp + fn)
    print("Recall score RF: ", recall_score)

    new_data = pd.read_csv('new_data.csv')
    new_data = cleaning_data_to_predict(new_data)
    if round(cross_val_svc.mean() * 100, 2) > round(cross_val_rfc.mean() * 100, 2):
        predictions = support_vector_classifier.predict(new_data)
        predictions_proba = support_vector_classifier.predict_proba(new_data)
    else:
        predictions = random_forest_classifier.predict(new_data)
        predictions_proba = random_forest_classifier.predict_proba(new_data)

    with open("output.txt", "w") as f:
        for i in range(len(predictions)):
            print("id: ", i, "probability: ", predictions_proba[i][1], "exit: ", predictions[i], file=f)

If I haven't missed it, you forgot to scale your test set; you need to scale it as well. Note that you should only transform it, not fit it again. See below.

x_test = sc_x.transform(x_test)
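A way to avoid this class of mistake entirely is to wrap the scaler and classifier in a scikit-learn Pipeline, so the same fitted transform is applied to anything passed to predict(). A minimal sketch, using synthetic data from make_classification as a stand-in for the asker's CSV (which is not available), with roughly the same shape and 20% positive rate:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: ~4k rows, 20 features, ~20% positives
X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# The pipeline fits StandardScaler on the training data only and
# automatically applies the same transform inside predict()
model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X_train, y_train)

rec = recall_score(y_test, model.predict(X_test))
print("recall:", rec)
```

With a pipeline, cross_val_score and RandomizedSearchCV also refit the scaler per fold, which avoids leaking test-fold statistics into the scaling step.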

I agree with @e_kapti; also check the formulas for recall and accuracy. You might consider using the F1 score instead ( https://en.wikipedia.org/wiki/F1_score ).
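scikit-learn already ships these metrics, so there is no need to compute them by hand from the confusion matrix. A small sketch with made-up labels (not the asker's data):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 0, 0]  # TP=2, FP=1, FN=2

p = precision_score(y_true, y_pred)   # TP / (TP + FP)
r = recall_score(y_true, y_pred)      # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
print(p, r, f1)
```

The F1 score penalises a model that buys accuracy by ignoring the minority class, which is exactly the failure mode described in the question.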

Recall = TP / (TP + FN), Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP, FP, TN, and FN are the numbers of true positives, false positives, true negatives, and false negatives, respectively.
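These formulas explain the question's numbers directly: with ~20% positives, a classifier that always predicts "0" gets 80% accuracy and 0 recall. A toy illustration with made-up labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # 20% positive class
y_pred = [0] * 10                         # model always predicts "0"

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
recall = tp / (tp + fn)                       # 0 / 2 = 0.0
accuracy = (tp + tn) / (tp + tn + fp + fn)    # 8 / 10 = 0.8
print("recall:", recall, "accuracy:", accuracy)
```

Passing labels=[0, 1] keeps the confusion matrix 2x2 even though the predictions contain only one class.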
