
Keras intermediate layer outputs

I'm trying to get the outputs of the intermediate layers when using the functional API of Keras. I'm able to get them with the standard Sequential API, but not with the functional API.

I'm working with this toy example, which works as expected:

from keras.models import Sequential, Model
from keras.layers import Input, Dense, LSTM, Bidirectional, Masking, TimeDistributed

inputs = [[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]],
          [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],
          [[10, 20, 30], [40, 50, 60], [70, 80, 90], [100, 110, 120]]]

model = Sequential()
model.add(Masking(mask_value=0., input_shape=(4, 3)))
model.add(Bidirectional(LSTM(3, return_sequences=True), merge_mode='concat'))
model.add(TimeDistributed(Dense(3, activation='softmax')))


print "First layer:"
intermediate_layer_model = Model(input=model.input,output=model.layers[0].output)
print intermediate_layer_model.predict(inputs)
print ""
print "Second layer:"
intermediate_layer_model = Model(input=model.input,output=model.layers[1].output)
print intermediate_layer_model.predict(inputs)
print ""
print "Third layer:"
intermediate_layer_model = Model(input=model.input,output=model.layers[2].output)
print intermediate_layer_model.predict(inputs)

But if I use the functional API, it doesn't work: the outputs are not correct. For example, the second layer appears to output the initial input:

inputs_ = Input(shape=(4, 3))
x = Masking(mask_value=0., input_shape=(4, 3))(inputs_)
x = Bidirectional(LSTM(3, return_sequences=True), merge_mode='concat')(x)
predictions = TimeDistributed(Dense(3, activation='softmax'))(x)
model2 = Model(inputs=inputs_, outputs=predictions)

print "First layer:"
intermediate_layer_model = Model(input=model2.input,output=model2.layers[0].output)
print intermediate_layer_model.predict(inputs)
print ""
print "Second layer:"
intermediate_layer_model = Model(input=model2.input,output=model2.layers[1].output)
print intermediate_layer_model.predict(inputs)
print ""
print "Third layer:"
intermediate_layer_model = Model(input=model2.input,output=model2.layers[2].output)
print intermediate_layer_model.predict(inputs)

ANSWER: Apparently, when using the functional API, layer 0 is the input itself, so everything is shifted one position forward.
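
A quick way to check this, assuming the model2 built above, is to enumerate model2.layers and print each layer's class; the entry at index 0 is the InputLayer rather than the Masking layer:

# Sketch: list the layers of the functional model to make the extra InputLayer visible.
for i, layer in enumerate(model2.layers):
    print(i, layer.__class__.__name__)
# expected order: 0 InputLayer, 1 Masking, 2 Bidirectional, 3 TimeDistributed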

The issue arises from the fact, as the OP suggested, that the layer with index 0 (i.e. model2.layers[0]) corresponds to the input layer: "when using the functional API, layer 0 is the input itself, and so everything is shifted one position forward."
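
Under that assumption, the same three intermediate outputs can be read from the functional model by shifting each layer index up by one. A minimal sketch, reusing model2 and the test data from the question (the name inputs_array is just an illustrative variable):

import numpy as np

# Convert the nested Python list into a proper batch array before calling predict.
inputs_array = np.asarray(inputs, dtype='float32')

# model2.layers: [InputLayer, Masking, Bidirectional, TimeDistributed]
print("Masking layer:")
print(Model(inputs=model2.input, outputs=model2.layers[1].output).predict(inputs_array))
print("Bidirectional LSTM layer:")
print(Model(inputs=model2.input, outputs=model2.layers[2].output).predict(inputs_array))
print("TimeDistributed Dense layer:")
print(Model(inputs=model2.input, outputs=model2.layers[3].output).predict(inputs_array))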

Note: this answer is posted as community wiki, as suggested in the accepted answer of "Question with no answers, but issue solved in the comments (or extended in chat)".
