
RandomForestRegressor in sklearn giving negative scores

I'm surprised that my predictions with RandomForestRegressor get a negative score, and I'm using the default scorer (the coefficient of determination, R^2). Any help would be appreciated. My dataset looks like this. [dataset screenshot here]

from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score,RandomizedSearchCV,train_test_split
import numpy as np,pandas as pd,pickle
dataframe = pd.read_csv("../../notebook/car-sales.csv")
y = dataframe["Price"].str.replace(r"[\$,.]", "", regex=True).astype(int)
x = dataframe.drop("Price" , axis = 1)
cat_features = [
    "Make",
    "Colour",
    "Doors",
]
oneencoder = OneHotEncoder()
transformer = ColumnTransformer([
("onehot" ,oneencoder, cat_features)
],remainder="passthrough")
transformered_x = transformer.fit_transform(x)
# note: the next line overwrites the ColumnTransformer output above
transformered_x = pd.get_dummies(dataframe[cat_features])
x_train , x_test , y_train,y_test = train_test_split(transformered_x , y , test_size = .2)
regressor = RandomForestRegressor(n_estimators=100)
regressor.fit(x_train , y_train)
regressor.score(x_test , y_test)

I modified your code slightly and was able to reach a score of 89%. You were so close. Nice work. Not too shabby!

from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
import pandas as pd
dataframe = pd.read_csv("car-sales.csv")
dataframe.head()
y = dataframe["Price"].str.replace(r"[\$,.]", "", regex=True).astype(int)
x = dataframe.drop("Price", axis=1)
cat_features = ["Make", "Colour", "Odometer", "Doors", ]
oneencoder = OneHotEncoder()
transformer = ColumnTransformer([("onehot", oneencoder, cat_features)], remainder="passthrough")
transformered_x = transformer.fit_transform(x)
# note: the next line overwrites the ColumnTransformer output above;
# get_dummies one-hot encodes the object columns and keeps Odometer numeric
transformered_x = pd.get_dummies(dataframe[cat_features])

x_train, x_test, y_train, y_test = train_test_split(transformered_x, y, test_size=.2, random_state=3)

# "mse" was renamed to "squared_error" in sklearn 1.0 and removed in 1.2
forest = RandomForestRegressor(n_estimators=200, criterion="squared_error", min_samples_leaf=3, min_samples_split=3, max_depth=10)

forest.fit(x_train, y_train)

# Explained variance score: 1 is perfect prediction
print('Score: %.2f' % forest.score(x_test, y_test, sample_weight=None))
print(forest.score(x_test, y_test))

I think the score is negative because of overfitting caused by the extremely small amount of data.
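This effect is easy to reproduce on a tiny synthetic dataset (the sizes and seeds below are assumptions for the demo, not the asker's data): with only a handful of rows and a noise target, the held-out R^2 from `score()` routinely dips below zero.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))            # only 10 rows, like the original dataset
y = rng.normal(size=10) * 1000          # target unrelated to the features (pure noise)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

score = model.score(X_te, y_te)         # R^2 on just 2 held-out rows
print(score)                            # typically negative on noise this small
```

With so few test points, the forest memorizes the training rows and predicts worse than the constant mean baseline, which is exactly what a negative R^2 means.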

This comes straight from the sklearn documentation:

https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of 
squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares 
((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it 
can be negative (because the model can be arbitrarily worse). A constant model 
that always predicts the expected value of y, disregarding the input features, 
would get a R^2 score of 0.0.
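The quoted definition can be checked by hand. The numbers below are made up for illustration: when the predictions are worse than just predicting the mean of `y_true`, the residual sum of squares `u` exceeds the total sum of squares `v`, so `1 - u/v` goes negative.

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([100.0, 200.0, 300.0])   # mean = 200
y_pred = np.array([300.0, 100.0, 500.0])   # worse than predicting 200 everywhere

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares = 90000
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares = 20000

print(1 - u / v)                 # -3.5, matches the formula in the docs
print(r2_score(y_true, y_pred))  # -3.5, same value from sklearn
```

So a negative score is not an error in sklearn; it just says the model underperforms the constant-mean baseline on the test split.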

I enlarged the dataset to 100 rows and removed the surrogate key (the first column, an int id from 0-99); here it is:

[enlarged dataset screenshot]
