Keras: dictionaries as validation_data
From the Keras manual I learn that the variable validation_data could be:
- A tuple (x_val, y_val) of NumPy arrays or tensors.
- A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.
- A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

My question is: since I am using multiple named inputs, could I use a tuple (x_val, y_val) as validation_data, where x_val is a dictionary of NumPy arrays (with keys equal to the names of the model's inputs) and y_val is a simple NumPy array?

Thank you for your help.
Since you are using multiple named inputs, you cannot pass a tuple (x_val, y_val) for the validation_data parameter (at least, currently, Keras does not support that). As per the TensorFlow and Keras documentation:

validation_data will override validation_split. validation_data could be:
- A tuple (x_val, y_val) of NumPy arrays or tensors.
- A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.
- A tf.data.Dataset.
- A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.
Potential solution:

One potential solution is to concatenate the training and validation datasets and pass them to the fit method as the arguments for x and y, while specifying the validation part using validation_split. Note that:

The validation data is selected from the last samples in the x and y data provided, before shuffling.
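As a quick sanity check (pure NumPy, no Keras required): if the validation rows are appended after the training rows, setting validation_split = len_val / (len_val + len_train) carves off exactly the appended validation rows, because validation_split takes the last fraction of the samples. The sketch below mimics the split-index computation Keras uses internally (roughly int(n * (1 - validation_split))); the array names are illustrative.

```python
import numpy as np

# 80 "training" rows followed by 20 "validation" rows of a single feature
train_x = np.arange(80, dtype=float).reshape(-1, 1)
val_x = np.arange(100, 120, dtype=float).reshape(-1, 1)

# Validation rows are appended LAST, as the answer recommends
x = np.concatenate((train_x, val_x), axis=0)
validation_split = len(val_x) / (len(val_x) + len(train_x))  # 0.2

# Keras selects the last `validation_split` fraction (before any shuffling);
# the split index is equivalent to:
split_at = int(x.shape[0] * (1.0 - validation_split))

assert split_at == len(train_x)             # first 80 rows -> training
assert np.array_equal(x[split_at:], val_x)  # last 20 rows -> validation
```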
More details

Let's say your dataset has two inputs, e.g. in1 and in2, and two outputs, e.g. out1 and out2.

Optional reading

You can first shuffle your training and validation datasets as needed:
concat_xy_train = np.concatenate((train_in1, train_in2, train_out1, train_out2), axis=1)
concat_xy_val = np.concatenate((val_in1, val_in2, val_out1, val_out2), axis=1)
np.random.shuffle(concat_xy_train)
np.random.shuffle(concat_xy_val)
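Shuffling the column-wise concatenation works because NumPy's shuffle only permutes along the first axis, so each row keeps its own features and labels together. A minimal check with toy data (the array names and the label rule out1 = sum of in1's row are mine, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: one 3-column input and a label defined as the row sum of in1
train_in1 = rng.normal(size=(10, 3))
train_out1 = train_in1.sum(axis=1, keepdims=True)

# Concatenate features and labels column-wise, then shuffle the rows in place;
# Generator.shuffle permutes only along axis 0, so feature/label alignment
# within each row is preserved.
concat = np.concatenate((train_in1, train_out1), axis=1)
rng.shuffle(concat)

shuf_in1, shuf_out1 = concat[:, :3], concat[:, 3:]
assert np.allclose(shuf_in1.sum(axis=1, keepdims=True), shuf_out1)
```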
You can then retrieve your features and labels:
shuf_train_in1 = concat_xy_train[:, :len_in1]
shuf_train_in2 = concat_xy_train[:, len_in1:len_in1+len_in2]
shuf_train_out1 = concat_xy_train[:, len_in1+len_in2:len_in1+len_in2+len_out1]
shuf_train_out2 = concat_xy_train[:, len_in1+len_in2+len_out1:]
shuf_val_in1 = concat_xy_val[:, :len_in1]
shuf_val_in2 = concat_xy_val[:, len_in1:len_in1+len_in2]
shuf_val_out1 = concat_xy_val[:, len_in1+len_in2:len_in1+len_in2+len_out1]
shuf_val_out2 = concat_xy_val[:, len_in1+len_in2+len_out1:]
Concatenation of training and validation datasets
train_val_in1 = np.concatenate((shuf_train_in1, shuf_val_in1), axis=0)
train_val_in2 = np.concatenate((shuf_train_in2, shuf_val_in2), axis=0)
train_val_out1 = np.concatenate((shuf_train_out1, shuf_val_out1), axis=0)
train_val_out2 = np.concatenate((shuf_train_out2, shuf_val_out2), axis=0)
Fitting the model

When fitting the model:
model.fit(
    {"in1": train_val_in1, "in2": train_val_in2},
    {"out1": train_val_out1, "out2": train_val_out2},
    validation_split=len_val/(len_val+len_train),
    ...
)
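To keep the bookkeeping in one place, the concatenation and the validation_split fraction can be wrapped in a small helper. This is a sketch of my own (the name build_fit_args is not from Keras); it assumes all arrays for a given split have the same number of rows, and appends the validation arrays last so that validation_split picks exactly those rows.

```python
import numpy as np

def build_fit_args(train_inputs, train_outputs, val_inputs, val_outputs):
    """Concatenate per-name train/val arrays and compute validation_split.

    train_inputs / val_inputs: dicts mapping input names to arrays;
    train_outputs / val_outputs: dicts mapping output names to arrays.
    Validation rows are appended LAST, so validation_split selects them.
    """
    x = {name: np.concatenate((train_inputs[name], val_inputs[name]), axis=0)
         for name in train_inputs}
    y = {name: np.concatenate((train_outputs[name], val_outputs[name]), axis=0)
         for name in train_outputs}
    len_train = next(iter(train_inputs.values())).shape[0]
    len_val = next(iter(val_inputs.values())).shape[0]
    return x, y, len_val / (len_val + len_train)

# Usage sketch:
#   x, y, split = build_fit_args(train_in, train_out, val_in, val_out)
#   model.fit(x, y, validation_split=split, ...)
```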