I'm working through some exercises in TensorFlow 2.0 to make sure I understand the basics properly. My current task is to fit a regression with an MSE loss function using Keras optimizers. However, when I try to run the code below I get this error:
AttributeError: Tensor.name is meaningless when eager execution is enabled.
You can find the code directly below:
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
data = pd.read_csv( 'cars.csv' )
continuous_features = data[ [ "Identification.Year","Engine Information.Engine Statistics.Horsepower","Engine Information.Engine Statistics.Torque"] ].values / 100
X = np.concatenate( [ continuous_features ] , axis=1 )
X = np.append(np.ones((X.shape[0],1) ) , X, axis=1)
Y = data[ [ 'Fuel Information.City mpg' ] ].values
# Basic train/test split:
train_features , test_features ,train_labels, test_labels = train_test_split( X , Y , test_size=0.2 )
# Training data.
X = tf.Variable( train_features , dtype=tf.float32 )
Y = tf.Variable( train_labels , dtype=tf.float32 )
# Testing data
test_X = tf.Variable( test_features , dtype=tf.float32 )
test_Y = tf.Variable( test_labels , dtype=tf.float32 )
num_features = X.shape[1]
# Define the coefficients that we'll be starting with:
weights = tf.Variable(tf.random.normal((num_features,1)))
def MSE(y_val ,x_val,weights):
    output = tf.Variable(tf.tensordot(X, weights, axes=1 ),name = "output" )
    mse_val = tf.reduce_mean( tf.square( output - y_val ) )
    return(mse_val)
mse_opt = tf.keras.optimizers.SGD(learning_rate = 1e-7,momentum = .9)
weight_vals = weights
def temp_mse(weight_vals):
    return(MSE(Y,X,weight_vals))
func_for_opt = lambda: tf.abs(temp_mse(weight_vals))
mse_opt.minimize(func_for_opt, [weight_vals]).numpy()
I've been able to use this sort of framework for a few other, simpler numerical experiments, but this one seems to be having issues. Can anyone suggest how to fix it?
Edit: I removed the name = "output" value and got this error below:
ValueError: No gradients provided for any variable: ['Variable:0'].
I checked whether anything in the implementation could keep it from being differentiated, but beyond the tf.abs call (which I removed; the issue remained) I don't see anything.
You can just do what the error is telling you to do: remove the name.
output = tf.Variable(tf.tensordot(X, weights, axes=1 ),name = "output" )
Change it to
output = tf.Variable(tf.tensordot(X, weights, axes=1 ) )
You were not actually referencing that name anywhere anyway, so there was no reason to set it.
******************* UPDATED *****************
I did not have time to run your code, but the error you are getting now is because there is no path between your loss function and your variables.
This line is wrong:
output = tf.Variable(tf.tensordot(X, weights, axes=1 ),name = "output" )
Why are you creating a variable inside your MSE function? You should just have
output = tf.tensordot(X, weights, axes=1 )
Wrapping the result in tf.Variable is what breaks the path between your cost and your variables, so no gradients can be computed for them.
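Putting the fix together, here is a minimal sketch of the corrected loss plus an explicit GradientTape training loop. It uses random stand-in data (I don't have cars.csv), and the learning rate and step count are arbitrary choices for illustration:

```python
import tensorflow as tf

tf.random.set_seed(0)

# Stand-in data: 100 samples with a bias column plus 3 scaled continuous
# features, mirroring the shape of the data in the question.
X = tf.random.normal((100, 4))
Y = tf.random.normal((100, 1))

weights = tf.Variable(tf.random.normal((4, 1)))

def MSE(y_val, x_val, weights):
    # Plain tensor op: no tf.Variable wrapper, so the gradient path from
    # the loss back to `weights` stays intact.
    output = tf.tensordot(x_val, weights, axes=1)
    return tf.reduce_mean(tf.square(output - y_val))

opt = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)

initial_loss = MSE(Y, X, weights).numpy()
for _ in range(50):
    with tf.GradientTape() as tape:
        loss = MSE(Y, X, weights)
    # Gradients now exist because the loss is a differentiable function
    # of `weights`.
    grads = tape.gradient(loss, [weights])
    opt.apply_gradients(zip(grads, [weights]))
final_loss = MSE(Y, X, weights).numpy()
```

With `output` left as a plain tensor, your original `mse_opt.minimize(func_for_opt, [weight_vals])` call with a callable loss should also work; the explicit tape above just makes the gradient flow visible.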