No attribute 'compile': how can I modify the class so that it works?

The NeuMF class is not a Keras class and therefore does not provide a compile method. It would be better to use keras.Model instead of nn.Block.

Unfortunately, I do not really understand what nn.Block is or how I could replace it in the class. How should I modify my code so that it works with keras.Model and can use the Keras methods?

Here is my code:

from d2l import mxnet as d2l
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


    class NeuMF(nn.Block):
        def __init__(self, num_factors, num_users, num_items, nums_hiddens,
                     **kwargs):
            super(NeuMF, self).__init__(**kwargs)
            self.P = nn.Embedding(num_users, num_factors)
            self.Q = nn.Embedding(num_items, num_factors)
            self.U = nn.Embedding(num_users, num_factors)
            self.V = nn.Embedding(num_items, num_factors)
            self.mlp = nn.Sequential()
            for num_hiddens in nums_hiddens:
                self.mlp.add(nn.Dense(num_hiddens, activation='relu',
                                      use_bias=True))
            self.prediction_layer = nn.Dense(1, activation='sigmoid', use_bias=False)
    
        def forward(self, user_id, item_id):
            p_mf = self.P(user_id)
            q_mf = self.Q(item_id)
            gmf = p_mf * q_mf
            p_mlp = self.U(user_id)
            q_mlp = self.V(item_id)
            mlp = self.mlp(np.concatenate([p_mlp, q_mlp], axis=1))
            con_res = np.concatenate([gmf, mlp], axis=1)
            return self.prediction_layer(con_res)
    
    
    hidden = [5,5,5]
    
    model = NeuMF(5, num_users, num_items, hidden)
    model.compile(
         #loss=tf.keras.losses.BinaryCrossentropy(),
        loss=tf.keras.losses.MeanSquaredError(),
        optimizer=keras.optimizers.Adam(lr=0.001)
    )

And I get the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-21-5979072369bd> in <module>()
      2 
      3 model = NeuMF(5, num_users, num_items, hidden)
----> 4 model.compile(
      5      #loss=tf.keras.losses.BinaryCrossentropy(),
      6     loss=tf.keras.losses.MeanSquaredError(),

AttributeError: 'NeuMF' object has no attribute 'compile'

Thank you very much in advance!

Edit:

I replaced nn with layers:

class NeuMF(keras.Model):
    def __init__(self, num_factors, num_users, num_items, nums_hiddens,
                 **kwargs):
        super(NeuMF, self).__init__(**kwargs)
        self.P = layers.Embedding(num_users, num_factors)
        self.Q = layers.Embedding(num_items, num_factors)
        self.U = layers.Embedding(num_users, num_factors)
        self.V = layers.Embedding(num_items, num_factors)
        self.mlp = layers.Sequential()
        for num_hiddens in nums_hiddens:
            self.mlp.add(layers.Dense(num_hiddens, activation='relu',
                                  use_bias=True))
        self.prediction_layer = layers.Dense(1, activation='sigmoid', use_bias=False)

    def forward(self, user_id, item_id):
        p_mf = self.P(user_id)
        q_mf = self.Q(item_id)
        gmf = p_mf * q_mf
        p_mlp = self.U(user_id)
        q_mlp = self.V(item_id)
        mlp = self.mlp(np.concatenate([p_mlp, q_mlp], axis=1))
        con_res = np.concatenate([gmf, mlp], axis=1)
        return self.prediction_layer(con_res)

Then I got a new error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-26-7e09b0f80300> in <module>()
      1 hidden = [1,1,1]
      2 
----> 3 model = NeuMF(1, num_users, num_items, hidden)
      4 model.compile(
      5      #loss=tf.keras.losses.BinaryCrossentropy(),

1 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/embeddings.py in __init__(self, input_dim, output_dim, embeddings_initializer, embeddings_regularizer, activity_regularizer, embeddings_constraint, mask_zero, input_length, **kwargs)
    102       else:
    103         kwargs['input_shape'] = (None,)
--> 104     if input_dim <= 0 or output_dim <= 0:
    105       raise ValueError('Both `input_dim` and `output_dim` should be positive, '
    106                        'found input_dim {} and output_dim {}'.format(

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

After quite some discussion in the comments, there are still several problems with your code, and some clarification is needed from your side:

  1. subclasses of keras.Model should implement the call method (which Keras invokes through __call__), not a forward method.
  2. you can't use plain NumPy operations like np.concatenate inside your model; always use Keras layers such as tf.keras.layers.Concatenate.
  3. as already commented, the error you posted most likely comes from num_factors, num_users, or num_items not being integers, though I can only guess here, since you did not provide those values; see the short reproduction after this list.
  4. also, I can currently only guess what you are trying to achieve, since it is not at all clear from what you posted.
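
To illustrate point 3: with the TensorFlow version shown in your traceback, passing a NumPy array instead of an integer for input_dim reproduces exactly the ValueError you posted (the array value below is made up purely for illustration):

import numpy as np
import tensorflow as tf

num_users = np.array([3, 4])  # accidentally an array instead of an int

# the `input_dim <= 0` check inside Embedding.__init__ then raises:
# "The truth value of an array with more than one element is ambiguous."
emb = tf.keras.layers.Embedding(input_dim=num_users, output_dim=2)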

Let us approach the issues in a different way. The following code snippet runs without error and might be a good starting point for you:

import tensorflow as tf

class NeuMF(tf.keras.Model):
    def __init__(self, num_factors, num_users, num_items, nums_hiddens,
                 **kwargs):
        super(NeuMF, self).__init__(**kwargs)
        self.P = tf.keras.layers.Embedding(num_users, num_factors)
        self.Q = tf.keras.layers.Embedding(num_items, num_factors)
        self.U = tf.keras.layers.Embedding(num_users, num_factors)
        self.V = tf.keras.layers.Embedding(num_items, num_factors)
        self.mlp = tf.keras.Sequential()
        for num_hiddens in nums_hiddens:
            self.mlp.add(
                tf.keras.layers.Dense(
                    num_hiddens,
                    activation='relu',
                    use_bias=True
                    )
                )
        self.prediction_layer = tf.keras.layers.Dense(1, activation='sigmoid', use_bias=False)

    def call(self, inputs):
        # GMF branch: element-wise product of the user and item embeddings
        x  = self.P(inputs[0])
        x1 = self.Q(inputs[1])
        x  = tf.keras.layers.Multiply()([x, x1])

        # MLP branch: concatenated embeddings passed through the hidden layers
        y = self.U(inputs[0])
        y1 = self.V(inputs[1])
        y = tf.keras.layers.Concatenate()([y, y1])
        y = self.mlp(y)
        # merge both branches and predict a score in [0, 1]
        x = tf.keras.layers.Concatenate()([x, y])
        return self.prediction_layer(x)

if __name__ == '__main__':
    #replace these with values of your choice:
    num_factors = 2
    num_users   = 3
    num_items   = 4 
    nums_hidden = [5,5,5]

    model = NeuMF(num_factors, num_users, num_items, nums_hidden)
    model.compile(
        loss = tf.keras.losses.MeanSquaredError(),
        optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
        )
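
Once the model compiles, a quick sanity check (the id values below are arbitrary examples, not taken from your data) is to call it on a small batch of user and item ids and inspect the output shape:

users = tf.constant([0, 1, 2])  # user ids must be < num_users
items = tf.constant([0, 1, 3])  # item ids must be < num_items
preds = model([users, items])
print(preds.shape)  # (3, 1): one sigmoid score per user-item pair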
