What is the difference between these two ways of building a model in Keras?

I am new to Keras, and after going through a few tutorials I started building a model and found these two styles of implementation. However, I am getting an error with the first one, while the second one works fine. Can someone explain the difference between the two?

First Method:


visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)
encoder = LSTM(100,activation='relu')(visible)

Second Method:


model = Sequential()
model.add(Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True))
model.add(LSTM(100,activation ='relu'))

This is the error I get:

ValueError: Layer lstm_59 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.embeddings.Embedding'>. Full input: [<keras.layers.embeddings.Embedding object at 0x00000207BC7DBCC0>]. All inputs to the layer should be tensors.

They are two ways of creating deep learning models in Keras. The first code snippet follows the functional API style. This style is used for creating complex models, such as those with multiple inputs/outputs or shared layers.

https://keras.io/getting-started/functional-api-guide/
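
For illustration, here is a minimal functional-API sketch with two inputs sharing one Embedding layer (the layer sizes and input names are hypothetical, not taken from the question):

from keras.layers import Input, Embedding, LSTM, Dense, concatenate
from keras.models import Model

# A single Embedding instance reused (shared) across two inputs
shared_embedding = Embedding(input_dim=10000, output_dim=128)

input_a = Input(shape=(20,), dtype='int32')
input_b = Input(shape=(20,), dtype='int32')

# Each layer is called on a tensor and returns a tensor
encoded_a = LSTM(32)(shared_embedding(input_a))
encoded_b = LSTM(32)(shared_embedding(input_b))

merged = concatenate([encoded_a, encoded_b])
output = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[input_a, input_b], outputs=output)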

The second code snippet uses the Sequential style. It lets you create simple models that involve just stacking layers.

https://keras.io/getting-started/sequential-model-guide/
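
A complete Sequential sketch might look like this (the layer sizes, the final Dense head, and the compile settings are hypothetical, just to show the stacking pattern):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# Layers are stacked in order; Sequential wires each layer to the previous one
model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=128, input_length=20))
model.add(LSTM(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')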

If you read the functional API guide, you'll notice the following point:

'A layer instance is callable (on a tensor), and it returns a tensor'

Now the error you're seeing makes sense. This line only creates the layer and doesn't invoke it by passing a tensor:

visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)

Subsequently, passing this Embedding object to the LSTM layer throws an error, because the LSTM layer is expecting a tensor, not a layer object.

This is an example from the functional API guide. Notice the output tensors getting passed from one layer to another.

main_input = Input(shape=(100,), dtype='int32', name='main_input')
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
lstm_out = LSTM(32)(x)
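
Applying the same pattern to the first snippet from the question, a sketch of the fix would be to create an Input tensor and call the Embedding layer on it (QsVocabSize and max_length_inp are the question's own variables):

from keras.layers import Input, Embedding, LSTM
from keras.models import Model

# Create a symbolic input tensor first, then call each layer on a tensor
inp = Input(shape=(max_length_inp,), dtype='int32')
visible = Embedding(QsVocabSize, 1024, input_length=max_length_inp, mask_zero=True)(inp)
encoder = LSTM(100, activation='relu')(visible)

model = Model(inputs=inp, outputs=encoder)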

