Keras w/ Tensorflow intermediate layer extraction in batches
I am currently trying to use an intermediate layer of my already trained DL model as an embedding for a given input. The code below works for getting the layer I want, but it is extremely slow to run iteratively over a large number of inputs.
import numpy as np
from keras import backend as K
from keras.models import load_model

model = load_model('model.h5')
inp = model.input
outputs = [layer.output for layer in model.layers]
functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs]
from keras.preprocessing.sequence import pad_sequences

def text2tensor(text):
    """Convert a string to a padded sequence tensor."""
    tensor = tokenizer.texts_to_sequences([text])
    tensor = pad_sequences(tensor, maxlen=10, padding='pre')
    return tensor

def get_embedding(tensor, at_layer):
    """Get the output at a particular layer in the network."""
    functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs][at_layer - 1]
    layer_outs = [func([tensor, 1.]) for func in [functors]]
    return layer_outs[0][0]
texts = ['this is my first text',
         'this is my second text',
         'this is my third text',
         .....nth text]

embeddings = np.empty((0, 256))
for t in texts:
    tensor = text2tensor(t)
    embedding = get_embedding(tensor, at_layer=4)
    embeddings = np.append(embeddings, [embedding[0]], axis=0)
How do I make use of batch processing so that I don't have to do this one by one? The implementation above works, but it is extremely slow.
In addition to the point I mentioned in my comment, I suggest you create a model instead of a backend function:
input_tensor = Input(shape=(10,)) # assuming maxlen=10
new_model = Model(input_tensor, my_desired_layer.output)
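`my_desired_layer` above is a placeholder for a handle on the intermediate layer you want. A minimal sketch of two equivalent ways to obtain such a handle (the toy model and the layer name `hidden` here are assumptions, not from the original post):

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

# Toy model just to illustrate layer lookup.
inp = Input(shape=(10,))
h = Dense(256, name='hidden')(inp)   # the intermediate layer we want
out = Dense(1)(h)
model = Model(inp, out)

# By position: note that model.layers includes the InputLayer at index 0.
by_index = model.layers[1]
# By name, if you named the layer at construction time.
by_name = model.get_layer('hidden')
assert by_index is by_name
```

Looking a layer up by name is usually safer than by index, since indices shift if the architecture changes.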
Then, first pre-process your text data to form an input array (i.e. my_data below), and afterwards use the predict method, passing a batch_size argument to it to exploit batch processing:
out = new_model.predict(my_data) # the default batch size is 32
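Putting the pieces together, here is a self-contained sketch of the whole batched pipeline. The small Embedding/LSTM model stands in for the question's `load_model('model.h5')`, and `maxlen=10` and the layer index are assumptions carried over from the question:

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Stand-in for the trained model loaded from model.h5.
inp = Input(shape=(10,))
x = Embedding(input_dim=1000, output_dim=32)(inp)
x = LSTM(256)(x)                       # plays the role of the desired layer
out = Dense(1, activation='sigmoid')(x)
model = Model(inp, out)

# Sub-model that stops at the intermediate layer we want embeddings from.
embedding_model = Model(model.input, model.layers[2].output)

# Pre-process ALL texts in one go: both texts_to_sequences and
# pad_sequences accept lists, so no Python-level loop is needed.
texts = ['this is my first text', 'this is my second text']
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
my_data = pad_sequences(tokenizer.texts_to_sequences(texts),
                        maxlen=10, padding='pre')

# One predict call; Keras splits the work into batches internally.
embeddings = embedding_model.predict(my_data, batch_size=64)
print(embeddings.shape)   # (2, 256)
```

Because `predict` batches internally, the per-sample Python loop (and the repeated `K.function` construction) from the question disappears entirely, which is where the speedup comes from.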