Reading and Batching Sequence data in Tensorflow with TFRecords
batching huge data in tensorflow
I am trying to perform binary classification using the code/tutorial from https://github.com/eisenjulian/nlp_estimator_tutorial/blob/master/nlp_estimators.py
print("Loading data...")
(x_train_variable, y_train), (x_test_variable, y_test) = imdb.load_data(num_words=vocab_size)
print(len(y_train), "train sequences")
print(len(y_test), "test sequences")
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train_variable,
                                 maxlen=sentence_size,
                                 padding='post',
                                 value=0)
x_test = sequence.pad_sequences(x_test_variable,
                                maxlen=sentence_size,
                                padding='post',
                                value=0)
print("x_train shape:", x_train.shape)
print("x_test shape:", x_test.shape)
def train_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((x_train, x_len_train, y_train))
    dataset = dataset.shuffle(buffer_size=len(x_train_variable))
    dataset = dataset.batch(100)
    dataset = dataset.map(parser)
    dataset = dataset.repeat()
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()
def eval_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((x_test, x_len_test, y_test))
    dataset = dataset.batch(100)
    dataset = dataset.map(parser)
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()
def cnn_model_fn(features, labels, mode, params):
    input_layer = tf.contrib.layers.embed_sequence(
        features['x'], vocab_size, embedding_size,
        initializer=params['embedding_initializer'])
    training = mode == tf.estimator.ModeKeys.TRAIN
    dropout_emb = tf.layers.dropout(inputs=input_layer,
                                    rate=0.2,
                                    training=training)
    conv = tf.layers.conv1d(
        inputs=dropout_emb,
        filters=32,
        kernel_size=3,
        padding="same",
        activation=tf.nn.relu)
    # Global Max Pooling
    pool = tf.reduce_max(input_tensor=conv, axis=1)
    hidden = tf.layers.dense(inputs=pool, units=250, activation=tf.nn.relu)
    dropout_hidden = tf.layers.dropout(inputs=hidden,
                                       rate=0.2,
                                       training=training)
    logits = tf.layers.dense(inputs=dropout_hidden, units=1)
    # This will be None when predicting
    if labels is not None:
        labels = tf.reshape(labels, [-1, 1])
    optimizer = tf.train.AdamOptimizer()

    def _train_op_fn(loss):
        return optimizer.minimize(
            loss=loss,
            global_step=tf.train.get_global_step())

    return head.create_estimator_spec(
        features=features,
        labels=labels,
        mode=mode,
        logits=logits,
        train_op_fn=_train_op_fn)

cnn_classifier = tf.estimator.Estimator(model_fn=cnn_model_fn,
                                        model_dir=os.path.join(model_dir, 'cnn'),
                                        params=params)
train_and_evaluate(cnn_classifier)
The example here loads its data from the IMDB movie reviews. I have my own dataset in text form, roughly 2 GB in size. Now, in this example the line (x_train_variable, y_train), (x_test_variable, y_test) = imdb.load_data(num_words=vocab_size)
tries to load the entire dataset into memory. If I try to do the same, I run out of memory. How can I restructure this logic to read the data from disk in batches?
You want to change the dataset = tf.data.Dataset.from_tensor_slices((x_train, x_len_train, y_train))
line. There are many ways to create a dataset; from_tensor_slices
is the easiest, but it won't work on its own if you can't load the entire dataset into memory.
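One alternative that avoids materializing the whole corpus is tf.data.Dataset.from_generator, which pulls one example at a time from a Python generator. The sketch below is a hedged illustration: sentence_size, num_examples, and the load_ith_example helper (here a zero-filled stand-in) are assumptions standing in for your real disk-reading code.

```python
import numpy as np
import tensorflow as tf

sentence_size = 200   # assumed, matching the question's padding length
num_examples = 1000   # assumed size of the training set

def load_ith_example(i):
    # Hypothetical stand-in: in practice this would read example i from
    # disk (e.g. a memory-mapped .npy file or one file per example).
    x = np.zeros(sentence_size, dtype=np.int32)
    return x, np.int32(sentence_size), np.int32(0)

def gen():
    # Yield one (padded_review, length, label) triple at a time, so the
    # full 2 GB corpus never has to sit in memory at once.
    for i in range(num_examples):
        yield load_ith_example(i)

dataset = tf.data.Dataset.from_generator(
    gen,
    output_types=(tf.int32, tf.int32, tf.int32),
    output_shapes=((sentence_size,), (), ()))
dataset = dataset.shuffle(1000).batch(100).repeat()
```

Note that from_generator runs the generator in Python, so it cannot be parallelized the way a map over indices can.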
The best approach depends on how your data is stored, or how you want to store/manipulate it. In my opinion the simplest way, with very few downsides (unless you're running on multiple GPUs), is to have the initial dataset just supply indices and write a plain numpy function that loads the i-th example.
dataset = tf.data.Dataset.from_tensor_slices(tf.range(epoch_size))
def tf_map_fn(i):
    def np_map_fn(i):
        return load_ith_example(i)

    inp1, inp2 = tf.py_func(np_map_fn, (i,), Tout=(tf.float32, tf.float32), stateful=False)
    # other preprocessing/data augmentation goes here.

    # unbatched sizes
    inp1.set_shape(shape1)
    inp2.set_shape(shape2)
    return inp1, inp2

dataset = dataset.repeat().shuffle(epoch_size).map(tf_map_fn, 8)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(1)  # start loading data as GPU trains on previous batch
inp1, inp2 = dataset.make_one_shot_iterator().get_next()
Here I've assumed your outputs are float32
tensors ( Tout=...
). The set_shape
calls aren't strictly necessary, but if you know the shapes they allow better error checking.
As long as your preprocessing doesn't take longer than your network takes to run, this should run as fast as any other method on a single-GPU machine.
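A minimal load_ith_example helper along these lines might look like the following. This is a sketch under assumptions: it presumes you saved the padded reviews and their true lengths once up front with np.save (the file names are hypothetical); mmap_mode="r" keeps the arrays on disk so each call reads only row i.

```python
import numpy as np

def load_ith_example(i, x_path="x_train.npy", len_path="x_len_train.npy"):
    # mmap_mode="r" memory-maps the file instead of loading it, so only
    # the requested row is actually pulled off disk.
    x = np.load(x_path, mmap_mode="r")
    lens = np.load(len_path, mmap_mode="r")
    # Cast to float32 to match the Tout=(tf.float32, tf.float32) above.
    return x[i].astype(np.float32), lens[i].astype(np.float32)
```

The per-call np.load is cheap because memory-mapping only reads the header; the OS page cache handles the rest.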
The other obvious approach is to convert your data to tfrecords
, but that will take up more disk space and, if you ask me, is harder to manage.
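For completeness, here is a hedged sketch of that tfrecords route: each (tokens, label) pair is serialized as a tf.train.Example, written with tf.io.TFRecordWriter, and streamed back through tf.data.TFRecordDataset. The feature keys, file name, and the fixed length of 200 are assumptions matching the question's sentence_size.

```python
import tensorflow as tf

def make_example(tokens, label):
    # Serialize one padded review and its label as a tf.train.Example.
    return tf.train.Example(features=tf.train.Features(feature={
        "tokens": tf.train.Feature(
            int64_list=tf.train.Int64List(value=tokens)),
        "label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
    }))

def write_tfrecords(path, examples):
    # One pass over the corpus; memory use stays constant per example.
    with tf.io.TFRecordWriter(path) as writer:
        for tokens, label in examples:
            writer.write(make_example(tokens, label).SerializeToString())

def parse_example(record):
    features = {
        "tokens": tf.io.FixedLenFeature([200], tf.int64),  # sentence_size
        "label": tf.io.FixedLenFeature([1], tf.int64),
    }
    return tf.io.parse_single_example(record, features)

# Reading side (file name is a placeholder):
# dataset = tf.data.TFRecordDataset("train.tfrecord").map(parse_example)
```

Once written, the tfrecord file streams straight off disk, so nothing bigger than one batch ever needs to be in memory.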