
Output tensors to a Model must be Keras tensors

I was trying to build a model that learns from the difference between the outputs of two models, so I wrote the code below. But it raised this error:

TypeError: Output tensors to a Model must be Keras tensors. Found: Tensor("sub:0", shape=(?, 10), dtype=float32)

I have found related answers involving Lambda, but I couldn't solve the issue with them. Does anyone know what's going on? It seems to be about converting a plain tensor into a Keras tensor.

Thanks in advance.

from keras.layers import Dense
from keras.models import Model
from keras.models import Sequential

left_branch = Sequential()
left_branch.add(Dense(10, input_dim=784))

right_branch = Sequential()
right_branch.add(Dense(10, input_dim=784))

diff = left_branch.output - right_branch.output

model = Model(inputs=[left_branch.input, right_branch.input], outputs=[diff])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])

model.summary(line_length=150)

It's better to keep all operations inside layers; don't subtract the outputs directly like that (I wouldn't risk hidden errors by doing things differently from what the documentation expects):

from keras.layers import Dense, Activation, Add
from keras.models import Model, Sequential

def negativeActivation(x):
    return -x

left_branch = Sequential()
left_branch.add(Dense(10, input_dim=784))

right_branch = Sequential()
right_branch.add(Dense(10, input_dim=784))

negativeRight = Activation(negativeActivation)(right_branch.output)  # negate the right output inside a layer
diff = Add()([left_branch.output, negativeRight])  # left + (-right) == left - right

model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])
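
The related answers about Lambda point in the same direction: wrapping the subtraction in a Lambda layer attaches the Keras metadata that the raw backend operation lacks. A minimal sketch of that variant, reusing the two branches above:

from keras.layers import Lambda

# Lambda turns an arbitrary expression into a proper Keras layer,
# so its output is a Keras tensor and can be used as a Model output
diff = Lambda(lambda tensors: tensors[0] - tensors[1])(
    [left_branch.output, right_branch.output])

model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')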

When joining models like that, I prefer to use the functional Model API, with layers, instead of Sequential:

from keras.layers import Input, Dense, Activation, Add
from keras.models import Model

def negativeActivation(x):
    return -x

leftInput = Input((784,))
rightInput = Input((784,))

left_branch = Dense(10)(leftInput) #Dense(10) creates a layer
right_branch = Dense(10)(rightInput) #passing the input creates the output

negativeRight = Activation(negativeActivation)(right_branch) 
diff = Add()([left_branch,negativeRight])

model = Model(inputs=[leftInput, rightInput], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])

With this, you can create other models from the same layers; they will share the same weights:

leftModel = Model(leftInput,left_branch)
rightModel = Model(rightInput,right_branch)
fullModel = Model([leftInput,rightInput],diff)

Training one of them will affect the others since they share the same layers. You can train just the right part of the full model by setting trainable = False on the left branch's layers before compiling (or recompiling for training), for instance.
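
Note that in the functional example above, left_branch is an output tensor, not a layer, so you need to keep a reference to the layer object itself in order to freeze it. A minimal sketch (left_dense is a name introduced here for illustration):

left_dense = Dense(10)               # keep a handle to the layer object
left_branch = left_dense(leftInput)  # the output tensor, as before

# ... build right_branch, diff and fullModel exactly as above ...

left_dense.trainable = False         # freeze the left branch's weights
fullModel.compile(optimizer='rmsprop', loss='binary_crossentropy')
# fitting fullModel now only updates the right branch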

I think I solved this, though it may not be the exact solution. I added some code like below:

diff = left_branch.output - right_branch.output
# copy the Keras bookkeeping attributes from one of the branch outputs onto diff
setattr(diff, '_keras_history', getattr(right_branch.output, '_keras_history'))
setattr(diff, '_keras_shape', getattr(right_branch.output, '_keras_shape'))
setattr(diff, '_uses_learning_phase', getattr(right_branch.output, '_uses_learning_phase'))

The reason the error occurs is that the diff tensor doesn't have attributes such as _keras_history. Intentionally adding them to the diff tensor prevents the error above. I checked that the original code ran and was able to learn.
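
If your Keras version ships it (the built-in merge layers grew over the 2.x releases, so treat availability as an assumption), keras.layers.Subtract does the same thing without touching private attributes like _keras_history:

from keras.layers import Subtract

# Subtract is a real Keras layer, so diff carries the required metadata
diff = Subtract()([left_branch.output, right_branch.output])

model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')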
