TensorFlow / Keras splitting training and validation data
I'm learning how to use TensorFlow and have been given a working model built in Keras. It runs, but the results are a bit of a mystery to me. I'm attempting to copy it, simplify it to its bare essence, and then build it back up again. The part I cannot understand at all is how/where it splits the training data into training and validation sets. I've checked the model code, initial parameters, etc. Is there a built-in function in TensorFlow's convolutional neural networks that does this automatically?
The call to Talos looks like this. The first two values are the x-training and y-training values; nowhere are x_val or y_val passed to the Talos function. Could Talos have an automatic way of producing x_val and y_val?
jam1 = talos.Scan(features3,
                  label2[0,],
                  model=DLAt,
                  params=ParamsJam1,
                  experiment_name="toy1",
                  fraction_limit=.2)
from tensorflow import keras
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

def DLAt(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    convLayer = Conv1D(filters=params['numFilters'],
                       kernel_size=params['kernalLen'], strides=1,
                       activation='relu', input_shape=(300, 4), use_bias=True)
    model.add(convLayer)
    model.add(MaxPooling1D(pool_size=params['maxpool']))
    model.add(Flatten())
    firstHidden = Dense(params['neuronsInLayerOne'], activation='relu',
                        kernel_regularizer=regularizers.l1_l2(l1=params['l1'], l2=0))
    model.add(firstHidden)
    model.add(Dropout(params['dropoutLevel']))
    model.add(Dense(params['neuronsInLayerTwo'], activation='relu'))
    model.add(Dropout(params['dropoutLevel']))
    model.add(Dense(1, activation='sigmoid'))
    opt = keras.optimizers.Adam(learning_rate=params['lr'])
    # 'loss' is not a valid Keras loss name; binary_crossentropy matches
    # the single sigmoid output.
    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['mse'])
    out = model.fit(x_train, y_train, epochs=params['epoch'],
                    batch_size=params['batches'],
                    validation_data=(x_val, y_val))
    return out, model
It is not splitting the training data at all; you are explicitly passing validation data to model.fit via the validation_data parameter:
out = model.fit(x_train, y_train, epochs=params['epoch'],
                batch_size=params['batches'],
                validation_data=(x_val, y_val))
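When validation_data is passed like this, the caller has usually built the validation set beforehand. A minimal numpy sketch of such a manual split (the data and 80/20 fraction here are made up; only the (300, 4) sample shape matches the model above):

```python
import numpy as np

# Hypothetical data: 1000 samples of shape (300, 4), binary labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 300, 4))
labels = rng.integers(0, 2, size=1000)

# Shuffle the indices once, then hold out the last 20% as validation.
idx = rng.permutation(len(features))
split = int(0.8 * len(features))
train_idx, val_idx = idx[:split], idx[split:]

x_train, y_train = features[train_idx], labels[train_idx]
x_val, y_val = features[val_idx], labels[val_idx]

# These arrays can then be passed as validation_data=(x_val, y_val).
print(x_train.shape, x_val.shape)  # (800, 300, 4) (200, 300, 4)
```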
If you want to split your training data and do not want to provide validation data, you can use the validation_split parameter in model.fit(...), which is the fraction of the training data to be used as validation data. By default, it is set to 0.0.
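As a rough sketch of what validation_split does: Keras takes the validation samples from the end of the arrays, before any shuffling. Replicated with plain numpy on a toy array:

```python
import numpy as np

x = np.arange(10)

# validation_split=0.2 keeps the first 80% of samples for training and
# the *last* 20% for validation, taken before any shuffling.
val_frac = 0.2
n_val = int(len(x) * val_frac)
x_train_part, x_val_part = x[:-n_val], x[-n_val:]
print(x_train_part)  # [0 1 2 3 4 5 6 7]
print(x_val_part)    # [8 9]
```

This is why validation_split on unshuffled, ordered data (e.g. all positives last) can produce a badly skewed validation set.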
Update 1: Check the source code of talos.Scan; it uses a validation_split of 0.3 by default. Also, check this. It should then be self-explanatory.
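To make the update concrete, here is an illustrative sketch (not Talos's actual source; the function name and internals are invented) of how a scanner given only x and y could carve out the x_val/y_val it passes to your model function, using a default 0.3 fraction:

```python
import numpy as np

def scan_like_split(x, y, val_split=0.3, seed=0):
    """Illustrative stand-in for how a hyperparameter scanner such as
    talos.Scan could produce x_val/y_val from the data it was handed."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))          # shuffle sample indices
    n_val = int(len(x) * val_split)        # size of the held-out set
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return x[train_idx], y[train_idx], x[val_idx], y[val_idx]

x = np.arange(100).reshape(100, 1)
y = np.arange(100)
x_tr, y_tr, x_v, y_v = scan_like_split(x, y)
print(len(x_tr), len(x_v))  # 70 30
```

So even though the call to talos.Scan never mentions x_val or y_val, the model function still receives both: they are carved out of the training data internally.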