
Keras TypeError: Expected float32, got <tf.Tensor ...> of type 'Tensor' instead

I have a Keras model that gives me the following error:

TypeError: Expected float32, got <tf.Tensor 'recommender_dnn_25/strided_slice_5:0' shape=(None, 1) dtype=float32> of type 'Tensor' instead.

I am feeding my Keras model train/validation data of type numpy.ndarray. The data comes from the MovieLens dataset, and the columns are movie_id, user_id, zip_code, age, and gender. A sample row:

x_train[0]
array(['195', '241', 415, 3, 1], dtype=object)

The first two inputs are mapped to embeddings that are learned as part of model training. The last three (zip_code, age, gender) go through the conversion below before all five features are concatenated:

  1. converted to float
  2. reshaped to (None, 1)
  3. converted to a tensor using zip_code = K.constant(zip_code) ; without this step I see the error ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int) (see the note just below)
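
A note on the ValueError in step 3: it typically stems from the dtype=object array shown above, which TensorFlow cannot auto-convert. Casting the NumPy array to a numeric dtype before it ever reaches the model avoids the in-graph conversion entirely; a minimal sketch, assuming every column is numeric or a numeric string:

import numpy as np

# x_train has dtype=object because it mixes str and int columns;
# one upfront cast gives Keras a plain float32 array
x_train = np.asarray(x_train, dtype=np.float32)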

Now when I run this model, I get the error TypeError: Expected float32, got <tf.Tensor 'recommender_dnn_25/strided_slice_5:0' shape=(None, 1) dtype=float32> of type 'Tensor' instead.

The error happens at zip_code = K.constant(zip_code) , before execution even reaches the concatenation step.
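
For context: K.constant builds a constant from concrete NumPy/Python values, so handing it a symbolic tensor (which is what inputs[:, 2] produces inside call()) raises exactly this TypeError. A sketch of tensor-friendly equivalents of the three steps (my illustration, assuming import tensorflow as tf; not the accepted fix below):

# inside call(), the slices are symbolic tensors, so use graph ops:
zip_code = tf.cast(inputs[:, 2], tf.float32)  # step 1: cast, instead of float(...)
zip_code = tf.reshape(zip_code, (-1, 1))      # step 2: shape becomes (None, 1)
# step 3 is unnecessary: zip_code is already a tensor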

The model code is below:

x_train.shape
(90000, 5)

EMBEDDING_SIZE = 50
NUM_USERS = movielens['user_id'].nunique()
NUM_MOVIES = movielens['movie_id'].nunique()

class RecommenderDNN(keras.Model):
    def __init__(self, num_users, num_movies, embedding_size, **kwargs):
        super(RecommenderDNN, self).__init__(**kwargs)
        self.num_users = num_users
        self.num_movies = num_movies
        self.embedding_size = embedding_size
        self.user_embedding = layers.Embedding(
            num_users,
            embedding_size,
            embeddings_initializer="he_normal",
            embeddings_regularizer=keras.regularizers.l2(1e-6),
        )
        self.movie_embedding = layers.Embedding(
            num_movies,
            embedding_size,
            embeddings_initializer="he_normal",
            embeddings_regularizer=keras.regularizers.l2(1e-6),
        )


    def call(self, inputs):
        user_vector = self.user_embedding(inputs[:, 0])

        movie_vector = self.movie_embedding(inputs[:, 1])

        zip_code = float(inputs[:, 2])
        age = float(inputs[:, 3])
        gender = float(inputs[:, 4])

        zip_code = zip_code[:, None]
        age = age[:, None]
        gender = gender[:, None]

        zip_code = K.constant(zip_code)  # <-- the TypeError is raised here
        age = K.constant(age)
        gender = K.constant(gender)


        print(user_vector.shape)
        print(movie_vector.shape)
        print(zip_code.shape)
        print(age.shape)
        print(gender.shape)


        concat = layers.concatenate([user_vector, movie_vector, zip_code, age, gender], axis=1)
        concat_dropout = layers.Dropout(0.2)(concat)
        # rest of the layers ...
        result = layers.Dense(1, activation='softmax', name='Activation')(dense_4)
        return result


model = RecommenderDNN(NUM_USERS, NUM_MOVIES, EMBEDDING_SIZE)
model.compile(
    loss=keras.losses.BinaryCrossentropy(), optimizer=keras.optimizers.Adam(lr=0.001)
)

Any suggestions?

What I was doing had some fundamental problems. I was concatenating the embedding layer output directly with the raw categorical input. With an Input of shape (1,), the embedding layer outputs a 3-D tensor of shape (batch, 1, embedding_size); in my subclassed call(), .shape only showed 2-D because inputs[:, 0] is a rank-1 slice. Combining that with a categorical input that has not passed through any layer of its own does not really make sense. So I flattened the embedding layer output and passed the categorical input through a Dense layer before concatenating them.
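
A quick standalone check of that shape behaviour (my snippet, not from the original post):

import tensorflow as tf

emb = tf.keras.layers.Embedding(input_dim=10, output_dim=4)
print(emb(tf.zeros((2,), dtype=tf.int32)).shape)    # (2, 4): rank-1 indices, as with inputs[:, 0] in call()
print(emb(tf.zeros((2, 1), dtype=tf.int32)).shape)  # (2, 1, 4): rank-3, as with Input(shape=(1,)) below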

For simplicity, let's assume we have two features: user_id and age. We want to learn an embedding for user_id during training, while age is a categorical variable passed as a model input after label encoding.
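
The label-encoding step could look like this (an assumed sketch using scikit-learn; the column name is illustrative):

from sklearn.preprocessing import LabelEncoder

# map raw age values to consecutive integer codes 0..n_classes-1
movielens['age'] = LabelEncoder().fit_transform(movielens['age'])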

Below is the code that resolved the issue:

EMBEDDING_SIZE = 50
NUM_USERS = movielens['user_id'].nunique()

user_input = keras.Input(shape=(1,), name='user')
age_input = keras.Input(shape=(1,), name='age')
# input_length is the per-sample sequence length (a single user id), not the row count
user_emb = layers.Embedding(output_dim=EMBEDDING_SIZE, input_dim=NUM_USERS + 1,
                            input_length=1, name='user_emb')(user_input)
user_vec = layers.Flatten(name='FlattenUser')(user_emb)   # (None, 1, 50) -> (None, 50)
dense_1 = layers.Dense(20, activation='relu')(age_input)  # project the categorical input
concat = layers.concatenate([user_vec, dense_1], axis=1)
dense = layers.Dense(10, name='FullyConnected')(concat)
outputs = layers.Dense(10, activation='softmax')(dense)
adam = keras.optimizers.Adam(lr=0.005)
model = keras.Model([user_input, age_input], outputs)
model.compile(optimizer=adam, loss='mean_absolute_error')
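
Training then feeds each named input separately; a hedged usage sketch, assuming train holds the encoded columns and y_train the targets:

model.fit(
    {'user': train['user_id'].values, 'age': train['age'].values},
    y_train,
    batch_size=128,
    epochs=5,
)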

I solved mine like this:

kernel_constraint=tf.keras.constraints.min_max_norm

=>

kernel_constraint=tf.keras.constraints.MinMaxNorm()

You probably just need to add parentheses.
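
In other words, the fix is passing a constraint instance rather than the class (or its lowercase alias); in a layer it would look like this (illustrative values):

from tensorflow import keras

dense = keras.layers.Dense(
    16,
    # an instantiated constraint object, with explicit bounds
    kernel_constraint=keras.constraints.MinMaxNorm(min_value=0.0, max_value=1.0),
)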
