
Create an RNN with TensorFlow 2

I'm trying to create an RNN with Python and TensorFlow 2, but I'm really not sure about what I did... The prediction results are constant. What do you think about the data preparation?

### Create the data ###
training_data =    [[1,2], [4,5], [7,8]...] # here, input_size = 2
training_targets = [3,     6,     9...]
predict_data =     [[9,10], [12,13], [15,16]...] # predictions should be [11, 14, 17...]

### Imports ###
import numpy as np
import tensorflow as tf
from tensorflow import keras as tfk

### Parameters ###
batch_size = 8
time_steps = 64

### Create the model ###
model = tfk.Sequential()
model.add(tfk.layers.Bidirectional(tfk.layers.LSTM(128, return_sequences=True, input_shape=(time_steps, input_size))))
model.add(tfk.layers.Bidirectional(tfk.layers.LSTM(64, return_sequences=True)))
model.add(tfk.layers.Dropout(rate=0.05))
model.add(tfk.layers.Dense(32, activation='relu'))
model.add(tfk.layers.Dense(1, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

### Create the training dataset ###
# Separate data in time steps
data = np.array([training_data[i: i + time_steps] for i in range(len(training_data) - time_steps)])
targets = np.array([training_targets[i: i + time_steps] for i in range(len(training_targets) - time_steps)])
# Create the tensors and dataset
data = tf.convert_to_tensor(data)
targets = tf.convert_to_tensor(targets)
dataset = tf.data.Dataset.from_tensor_slices((data, targets))
# Batch data, the data shape is : (batch_size, time_steps, input_size)
dataset = dataset.batch(batch_size)
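As a sanity check on the windowing above, here is a standalone NumPy sketch with hypothetical toy data (`time_steps = 5`, 100 samples, `input_size = 2` — not your real series) showing the shapes this slicing produces:

```python
import numpy as np

# Hypothetical toy series: input_size = 2, 100 samples
time_steps = 5
training_data = [[i, i + 1] for i in range(1, 101)]   # [[1,2], [2,3], ...]
training_targets = [i + 2 for i in range(1, 101)]     # [3, 4, ...]

# Same sliding-window slicing as in the question
data = np.array([training_data[i: i + time_steps]
                 for i in range(len(training_data) - time_steps)])
targets = np.array([training_targets[i: i + time_steps]
                    for i in range(len(training_targets) - time_steps)])

print(data.shape)     # → (95, 5, 2): (num_windows, time_steps, input_size)
print(targets.shape)  # → (95, 5):    one target per time step, per window
```

Note that windowing the targets the same way gives one target per time step, which only matches a model whose final layer keeps `return_sequences=True` semantics; with a single target per window you would take `training_targets[i + time_steps]` instead.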

### Train the model ###
model.fit(dataset, epochs=10, validation_data=validation_dataset, shuffle=False)

### Create the predict data ###
data = np.array([predict_data[i: i + time_steps] for i in range(len(predict_data) - time_steps)])
data = tf.convert_to_tensor(data)

### Try the model ###
results = model.predict(data, steps=time_steps)

Predictions should be [11, 14, 17...], but the output is constant and has a weird shape:

[
[[1], [1], [1], [1] ...],
[[1], [1], [1], [1] ...],
...
]

Thanks for your help!

You only have one neuron in your final layer. Think about it: all your network can do is return a single number, and that number is meaningless, because softmax over a single value is always 1. The problem is that you are framing this as a classification problem when it's much better suited to regression. Take out the softmax on the last layer (use a linear activation) and use MSE as the loss function.
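To see why the output is stuck at 1: softmax normalizes a vector so its entries sum to 1, and over a single logit that normalization is exp(x)/exp(x) = 1 for any x. A quick standalone NumPy check (not using your model):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# A Dense(1, activation='softmax') layer emits one logit per step;
# softmax over a single element is exp(x)/exp(x) == 1, whatever x is.
logits = np.array([[-3.2], [0.0], [7.5]])
print(softmax(logits))  # → [[1.], [1.], [1.]]
```

That is exactly the `[[1], [1], [1], ...]` pattern you are seeing.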

Also, your samples only seem to have 2 values each, but your code implies there are 64 time steps. That doesn't make sense to me.

Also, where do you define 'input_size'? It's not included in your code above.

Have a go at thinking about the problem a bit more and make those changes; hopefully it will work. I'd try to run it myself, but I don't want to make assumptions about your training data and targets.
