
Keras LSTM Input and Output Dimension Issue

I am trying to create an LSTM model for multi-step forecasting. While testing the network setup, I found that it has a dimension problem.

Here is my test dataset:

import numpy as np
import pandas as pd

length = 100
df = pd.DataFrame()
df['x1'] = [i/float(length) for i in range(length)]
df['x2'] = [i**2 for i in range(length)]
df['y'] = df['x1'] + df['x2'] 

x_value = df.drop(columns = 'y').values
y_value = df['y'].values.reshape(-1,1)

Here is my function that builds the windowed data:

def build_data(x_value, y_value ,n_input, n_output):
    X, Y = list(), list()
    in_start = 0
    data_len = len(x_value)
    # step over the entire history one time step at a time
    for _ in range(data_len):
        # define the end of the input sequence
        in_end = in_start + n_input
        out_end = in_end + n_output
        if out_end <= data_len:
            x_input = x_value[in_start:in_end] # e.g. t0-t3
            X.append(x_input)
            y_output = y_value[in_end:out_end] # e.g. t4-t5
            Y.append(y_output)
        # move along one time step
        in_start += 1
    return np.array(X), np.array(Y)            

X, Y = build_data(x_value, y_value, 1, 2)

The shapes of X and Y:

X.shape
### (98, 1, 2)
Y.shape
### (98, 2, 1)
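
As a quick sanity check of the windowing (the prints below are illustrative additions, not part of the original post): with n_input=1 and n_output=2, each sample pairs one input timestep with the next two target timesteps.

print(X[0])  # [[0. 0.]] -- x1, x2 at t0, shape (1, 2)
print(Y[0])  # [[1.01], [4.02]] -- y at t1 and t2, shape (2, 1)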

For the model part:

from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

verbose, epochs, batch_size = 1, 20, 16
n_neurons =  100
n_inputs, n_features  = X.shape[1], X.shape[2]
n_outputs = Y.shape[1]

model = Sequential()
model.add(LSTM(n_neurons, input_shape = (n_inputs, n_features), return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=verbose)

The following error occurs: ValueError: Error when checking target: expected time_distributed_41 to have shape (1, 1) but got array with shape (2, 1)

It works if I use X, Y = build_data(x_value, y_value, 2, 2), i.e. input window == output window. But I don't think this constraint should be necessary.
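
For reference, a minimal check of the mismatch (a sketch, reusing the Sequential/LSTM/TimeDistributed/Dense imports shown above): with return_sequences=True the LSTM keeps the input's single timestep, so the TimeDistributed(Dense(1)) head outputs one timestep while the target Y has two.

m = Sequential()
m.add(LSTM(100, input_shape=(1, 2), return_sequences=True))
m.add(TimeDistributed(Dense(1)))
print(m.output_shape)  # (None, 1, 1) -- but Y has shape (98, 2, 1)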

How can I overcome this problem so that input window != output window works?

Or is there any layer or setting I should use instead?

You have a shape mismatch in the time dimension: your input has a time dimension of 1, while you are trying to predict something with a time dimension of 2. So you need something in your network that can increase the time dimension from 1 to 2. I used an UpSampling1D layer; below is a complete example.

from keras.models import Sequential
from keras.layers import LSTM, UpSampling1D, TimeDistributed, Dense
import numpy as np

# create fake data
X = np.random.uniform(0,1, (98,1,2))
Y = np.random.uniform(0,1, (98,2,1))

verbose, epochs, batch_size = 1, 20, 16
n_neurons =  100
n_inputs, n_features  = X.shape[1], X.shape[2]
n_outputs = Y.shape[1]

model = Sequential()
model.add(LSTM(n_neurons, input_shape = (n_inputs, n_features), return_sequences=True))
model.add(UpSampling1D(n_outputs))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=verbose)
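
A note on what UpSampling1D is doing here (a sketch, reusing the imports from the example above): it repeats each timestep n_outputs times along the time axis, so the LSTM's single timestep becomes two timesteps that line up with Y before the TimeDistributed(Dense(1)) head.

m = Sequential()
m.add(LSTM(100, input_shape=(1, 2), return_sequences=True))
print(m.output_shape)   # (None, 1, 100)
m.add(UpSampling1D(2))
print(m.output_shape)   # (None, 2, 100) -- timesteps now match Y
m.add(TimeDistributed(Dense(1)))
print(m.output_shape)   # (None, 2, 1)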

If the input time dimension is greater than the output time dimension, you can use a Lambda or a pooling operation (if the dimensions match). Below is an example with Lambda.

from keras.models import Sequential
from keras.layers import LSTM, Lambda, TimeDistributed, Dense
import numpy as np

X = np.random.uniform(0,1, (98,3,2))
Y = np.random.uniform(0,1, (98,2,1))

verbose, epochs, batch_size = 1, 20, 16
n_neurons =  100
n_inputs, n_features  = X.shape[1], X.shape[2]
n_outputs = Y.shape[1]

model = Sequential()
model.add(LSTM(n_neurons, input_shape = (n_inputs, n_features), return_sequences=True))
model.add(Lambda(lambda x: x[:,-n_outputs:,:]))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=epochs, batch_size=batch_size, verbose=verbose)
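
Similarly, the Lambda simply slices out the last n_outputs timesteps of the LSTM's sequence output (a sketch, reusing the imports from the example above):

m = Sequential()
m.add(LSTM(100, input_shape=(3, 2), return_sequences=True))
print(m.output_shape)   # (None, 3, 100)
m.add(Lambda(lambda t: t[:, -2:, :]))
print(m.output_shape)   # (None, 2, 100) -- last 2 timesteps kept
m.add(TimeDistributed(Dense(1)))
print(m.output_shape)   # (None, 2, 1), matching Y with shape (98, 2, 1)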
