Passing a Tensor as input to a Keras functional API model

I have a functional API model which works fine with NumPy arrays as input. A simplified version of my model is as follows.

inputLayerU = Input(shape=(10,))
denseLayerU = Dense(10, activation='relu')(inputLayerU)

inputLayerM = Input(shape=(10,))    
denseLayerM = Dense(10, activation='relu')(inputLayerM)

concatLayerUM = concatenate([denseLayerU, denseLayerM], axis = 1)
outputLayer = Dense(1,activation='linear')(concatLayerUM)

model = Model(inputs=[inputLayerU, inputLayerM], outputs=outputLayer)

model.compile('adam', 'mse')   # compile before fitting; optimizer/loss chosen for this simplified example

model.fit_generator(dataGenerator(train, matA, matB, matC, batchSize),
    epochs=3,
    steps_per_epoch=10)

I use a very large data set which does not fit in memory, so I use a generator, which is as follows:

def dataGenerator(data, matA, matB, matC, batchSize):

    sampleIndex = range(len(data))    
    batchNumber = int(len(data)/batchSize)  #count of batches

    counter=0
    while 1:
        U = np.zeros((batchSize,N))    # N is the feature dimension (number of columns of matA/matB)
        M = np.zeros((batchSize,N))
        outY = np.zeros((batchSize))

        for i in range(0,batchSize):
            ind = sampleIndex[i+counter*batchSize]
            U[i,:] = matA[ind,:]
            M[i,:] = matB[ind,:]
            outY[i] = data.iloc[ind]['y']

        matU = np.dot(U,matC)            
        matM = np.dot(M,matC)

        yield ([matU, matM], outY)

        # increase the counter and reset it so data is also yielded in the following epochs
        counter += 1    
        if counter >= batchNumber:
            counter = 0  

As you can see, I take the dot product of two 2D arrays in the dataGenerator function. I run my code on a GPU, and to make it faster I want to replace the dot product with tf.matmul, which produces the same result in tensor format. So it would look like this:

matU = tf.matmul(U, matC)
matM = tf.matmul(M, matC)

However, it raises this error:

InvalidArgumentError: Requested tensor connection from unknown node: "input_4:0".

The input_4:0 is the first inputLayerU node in the model. So it seems I can't pass a tensor to an Input layer. How should I pass it then?

I also tried converting the tensors matU and matM to NumPy arrays before passing them to the input layers:

matU = tf.Session().run(tf.matmul(U, matC))
matM = tf.Session().run(tf.matmul(M, matC))

but it was 10 times slower than using the dot product in the first place.
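
Most of that slowdown presumably comes from building a new graph op and a new tf.Session on every batch. A minimal sketch of reusing a single Session instead, assuming TF 1.x (the names U_ph, matmul_op and project are only for illustration):

import numpy as np
import tensorflow as tf

N = 10                                              # feature dimension (illustrative)
matC = np.random.rand(N, 20).astype(np.float32)

U_ph = tf.placeholder(tf.float32, shape=(None, N))  # batch placeholder
matmul_op = tf.matmul(U_ph, tf.constant(matC))      # graph op built once

sess = tf.Session()                                 # one Session reused for every batch

def project(U_batch):
    # runs the pre-built op; no new Session or graph nodes per call
    return sess.run(matmul_op, feed_dict={U_ph: U_batch})

Even with a single Session, this keeps the matmul outside the model; the answer below moves it inside the model instead.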

I checked this post; however, it was for a Sequential model, and I don't have my tensors before starting to train the model.

You could pass U and M as inputs and then apply a Lambda layer inside the model:

Lambda(lambda x: tf.matmul(x, tf.constant(constant_matrix)))

assuming that constant_matrix is a constant in your model.

Using the Functional API:

import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

const_matrix = np.random.rand(10, 20)

def apply_const_matrix(x):
  """
      x: shape=(batch_size, input_dims)
      const_matrix: shape=(input_dims, output_dims)
      output: (batch_size, output_dims)
  """
  return K.dot(x, K.constant(const_matrix))

def make_model():
  inp_M = Input(shape=(10,))
  inp_U = Input(shape=(10,))
  Mp = Lambda(apply_const_matrix)(inp_M)
  Up = Lambda(apply_const_matrix)(inp_U)
  join = Concatenate(axis=1)([Mp, Up])
  h1 = Dense(32, activation='relu')(join)
  out = Dense(1, activation='sigmoid')(h1)
  model = Model([inp_M, inp_U], out)
  model.compile('adam', 'mse')
  return model

model = make_model()
model.summary()

The assumption here is that the inputs to the model are the M and U vectors before the matmul operation, and that the transformation uses a constant matrix.
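
Since the multiplication by matC now happens inside the model (as const_matrix baked into the Lambda layers), the generator only needs to yield the raw U and M batches. A minimal sketch of the adapted generator, reusing the names from the question (the contiguous slicing is an assumption for brevity):

def dataGenerator(data, matA, matB, batchSize):
    # yields raw M/U batches; the constant-matrix multiply is done by the Lambda layers
    batchNumber = int(len(data)/batchSize)   # count of batches
    counter = 0
    while 1:
        start = counter*batchSize
        U = matA[start:start+batchSize, :]
        M = matB[start:start+batchSize, :]
        outY = data.iloc[start:start+batchSize]['y'].values
        yield ([M, U], outY)   # order must match Model([inp_M, inp_U], out)
        counter += 1
        if counter >= batchNumber:
            counter = 0

Training then proceeds as before, e.g. model.fit_generator(dataGenerator(train, matA, matB, batchSize), epochs=3, steps_per_epoch=10).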
