
How do I get my support vector regression to work to plot my polynomial graph?

I have written my code for a polynomial graph, but it is not plotting. I am using SVR (support vector regression) from scikit-learn, and my code is below. It is not showing any error message; it is just printing my data. It is not even showing anything in the variable console describing my data. Does anyone know what is going on?

import pandas as pd
import numpy as np
from sklearn.svm import SVR
from sklearn import cross_validation
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt



df = pd.read_csv('coffee.csv')
print(df)

df = df[['Date','Amount_prod','Beverage_index']]

x = np.array(df.Amount_prod)
y = np.array(df.Beverage_index)

x_train, x_test, y_train, y_test = cross_validation.train_test_split(
    x, y, test_size=0.2)

x_train = np.pad(x, [(0,0)], mode='constant')
x_train.reshape((26,1))

y_train = np.pad(y, [(0,0)], mode='constant')
y_train.reshape((26,1))

x_train = np.arange(26).reshape((26, 1))
x_train = x.reshape((26, 1))
c = x.T
np.all(x_train == c)

x_test = np.arange(6).reshape((-1,1))
x_test = x.reshape((-1,1))
c2 = x.T
np.all(x_test == c2)

y_test = np.arange(6).reshape((-1,1))
y_test = y.reshape((-1,1))
c2 = y.T
np.all(y_test ==c2)

svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_poly = svr_poly.fit(x_train,y_train).predict(x_train)




plt.scatter(x_train, y_train, color='black')
plt.plot(x_train,  y_poly)

plt.show()

Data sample:

Date    Amount_prod    Beverage_index
1990    83000          78
1991    102000         78
1992    94567          86
1993    101340         88
1994    96909          123
1995    92987          101
1996    103489         99
1997    99650          109
1998    107849         110
1999    123467         90
2000    112586         67
2001    113485         67
2002    108765         90

Try the code below. Support Vector Machines expect their input to have zero mean and unit variance. It's not the plot that's blocking; it's the call to fit.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# StandardScaler standardizes the features before they reach the SVR
svr_poly = make_pipeline(StandardScaler(), SVR(kernel='poly', C=1e3, degree=2))
y_poly = svr_poly.fit(x_train, y_train).predict(x_train)
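
To see how the pipeline slots into the original script, here is a minimal end-to-end sketch (it assumes the coffee.csv file and column names from the question; sorting x before plotting is only so the fitted curve is drawn as a single left-to-right line):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('coffee.csv')

# scikit-learn estimators expect features with shape (n_samples, n_features)
x = df['Amount_prod'].values.reshape(-1, 1)
y = df['Beverage_index'].values

# StandardScaler brings x to zero mean / unit variance before the SVR sees it
svr_poly = make_pipeline(StandardScaler(), SVR(kernel='poly', C=1e3, degree=2))
svr_poly.fit(x, y)

# Sort x so the fitted curve is plotted left to right instead of zig-zagging
order = np.argsort(x[:, 0])
x_sorted = x[order]

plt.scatter(x, y, color='black', label='data')
plt.plot(x_sorted, svr_poly.predict(x_sorted), label='SVR (poly, degree 2)')
plt.legend()
plt.show()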

Just to build on Matt's answer a little: nothing about your plotting is in error. When you call svr_poly.fit with 'unreasonably' large numbers, no error is thrown (but I still had to kill my kernel). By tinkering with the exponent in the code below, I reckon you can get up to about 1e5 before it breaks, but not more. Hence your problem. As Matt says, applying the StandardScaler will solve it.

import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt

x_train = np.random.rand(10, 1)        # features between 0 and 1
y_train = np.random.rand(10,)          # targets between 0 and 1
x_train = np.multiply(x_train, 1e5)    # scale the features up to ~1e5

svr_poly = SVR(kernel='poly', C=1e3, degree=1)
svr_poly.fit(x_train, y_train)         # this is the call that hangs on unscaled data
y_poly = svr_poly.predict(x_train)

plt.scatter(x_train, y_train, color='black')
plt.plot(x_train, y_poly)

plt.show()
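
To confirm that scaling is the fix in the same toy setting, here is a short sketch (reusing the x_train and y_train above) where the only change is wrapping the SVR in the scaling pipeline from Matt's answer; the fit then returns almost immediately even with the 1e5-sized inputs:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Same toy data as above, but standardized inside the pipeline before fitting
svr_poly_scaled = make_pipeline(StandardScaler(), SVR(kernel='poly', C=1e3, degree=1))
svr_poly_scaled.fit(x_train, y_train)      # completes quickly, no need to kill the kernel
y_poly_scaled = svr_poly_scaled.predict(x_train)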
