
First gradient descent: how to normalize X and Y?

I'm doing my first gradient descent ever, following a course about machine learning.
But it doesn't seem to work correctly: it oscillates (converges, then diverges, then converges again...), and in the end the result is not good.

Maybe it's because I didn't normalize my X and Y, but I don't know how to do it... I've tried using sklearn's StandardScaler, but got an error. I don't know what is going wrong.

I'm using TensorFlow 1.3.0 and Jupyter.

Here's my code:

    #from sklearn.preprocessing import StandardScaler
    #scaler=StandardScaler()

    n_epochs=1000
    learning_rate=0.01
    X=tf.constant(housing_data_plus_bias,dtype=tf.float32,name="X")
    #X_norm=scaler.fit_transform(X)
    Y=tf.constant(housing.target.reshape(-1,1),dtype=tf.float32,name="Y")
    theta=tf.Variable(tf.random_uniform([n+1,1],-1.0,1.0),name="theta")
    y_pred=tf.matmul(X,theta,name="predictions")  #eq 1.4
    error=y_pred - Y

    mse=tf.reduce_mean(tf.square(error),name="mse") #eq 1.5
    gradients= (2/(m*mse) ) * tf.matmul(tf.transpose(X),error) 
    training_op = tf.assign(theta,theta - learning_rate * gradients)

    init=tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)
        print("   Y   ")
        print(Y.eval())
        print("   X   ")
        print(X.eval())
        for epoch in range(n_epochs):
            if epoch%100==0:
                print("Epoch",epoch,"MSE =",mse.eval())
            sess.run(training_op)

        best_theta=theta.eval()

And here's what I get:

Y   
[[4.526]
 [3.585]
 [3.521]
 ...
 [0.923]
 [0.847]
 [0.894]]
   X   
[[   1.           8.3252      41.        ...    2.5555556   37.88
  -122.23     ]
 [   1.           8.3014      21.        ...    2.1098418   37.86
  -122.22     ]
 [   1.           7.2574      52.        ...    2.80226     37.85
  -122.24     ]
 ...
 [   1.           1.7         17.        ...    2.3256352   39.43
  -121.22     ]
 [   1.           1.8672      18.        ...    2.1232092   39.43
  -121.32     ]
 [   1.           2.3886      16.        ...    2.616981    39.37
  -121.24     ]]
Epoch 0 MSE = 511820.7
Epoch 100 MSE = 775760.0
Epoch 200 MSE = 2181710.8
Epoch 300 MSE = 115924.266
Epoch 400 MSE = 7663049.0
Epoch 500 MSE = 2283198.2
Epoch 600 MSE = 586127.75
Epoch 700 MSE = 7143360.5
Epoch 800 MSE = 15567712.0
Epoch 900 MSE = 2333040.0

But what's going wrong? I thought that normalization would only make it converge faster.
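
For reference, here is a minimal sketch of one way to do the scaling (assuming housing_data_plus_bias and housing come from the code above, with a leading column of ones as the printed X shows). StandardScaler.fit_transform expects a numpy array rather than a tf.Tensor, which is likely why calling it on X raised an error, so the scaling has to happen before the data is wrapped in tf.constant; the target usually doesn't need to be scaled for this to converge.

    import numpy as np
    import tensorflow as tf
    from sklearn.preprocessing import StandardScaler

    # Scale the raw numpy features, not the tf.constant: fit_transform expects
    # an array-like, which is likely why scaling the tf.Tensor X failed.
    scaler = StandardScaler()

    # Keep the first column (the bias of ones) as-is and standardize the rest.
    scaled_features = scaler.fit_transform(housing_data_plus_bias[:, 1:])
    scaled_housing_data_plus_bias = np.c_[
        np.ones((housing_data_plus_bias.shape[0], 1)), scaled_features]

    X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
    Y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="Y")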

From the looks of your code, you aren't using an optimizer for the gradient descent algorithm. I suggest using an optimizer and then checking the MSE again. Theoretically, it should improve. Here is an example with one gradient descent optimizer:

n_epochs=1000
learning_rate=0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) # Play around with learning rates and check the accuracy
X=tf.constant(housing_data_plus_bias,dtype=tf.float32,name="X")
#X_norm=scaler.fit_transform(X)
Y=tf.constant(housing.target.reshape(-1,1),dtype=tf.float32,name="Y")
theta=tf.Variable(tf.random_uniform([n+1,1],-1.0,1.0),name="theta")
y_pred=tf.matmul(X,theta,name="predictions")  #eq 1.4
error=y_pred - Y

mse=tf.reduce_mean(tf.square(error),name="mse") #eq 1.5
training_op = optimizer.minimize(mse)

This is using a built-in optimizer from TensorFlow. You can opt to manually code the update step for your gradient descent algorithm, as sketched below.
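
For comparison, a manually coded update step might look like the sketch below (reusing the same X, error, theta, m, and learning_rate as in the question's code). Note that the analytic gradient of the MSE with respect to theta is (2/m) * Xᵀ * error; the extra division by mse in the question's gradients line is not part of that formula.

    # Hand-coded batch gradient descent step (sketch, reusing the tensors above).
    # d(MSE)/d(theta) = (2/m) * X^T * (X @ theta - y)
    gradients = (2 / m) * tf.matmul(tf.transpose(X), error)
    # Equivalently, TensorFlow can derive the gradient symbolically:
    # gradients = tf.gradients(mse, [theta])[0]
    training_op = tf.assign(theta, theta - learning_rate * gradients)

Even with the correct gradient, unscaled features can still make a fixed learning rate of 0.01 oscillate, so scaling X as sketched earlier is likely still needed.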

Here is a link to a blog post that explains gradient descent and the different optimizers in detail: http://ruder.io/optimizing-gradient-descent/
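
As an illustration, swapping in one of the other optimizers discussed there is a one-line change (a sketch; the learning rate and momentum values are common defaults, not tuned for this problem):

    # Alternatives to the plain GradientDescentOptimizer used above:
    optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
    # optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    training_op = optimizer.minimize(mse)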
