
Tensorflow: How to build an efficient NLP pipeline using tf Dataset

I am using TensorFlow and trying to create an efficient training and inference pipeline with the tf.data API, but I am running into some errors:

For example, a simple RNN network structure looks like this:

import tensorflow as tf
import numpy as np

# hyperparameters
vocab_size          = 20
word_embedding_dim  = 100
batch_size          = 2


tf.reset_default_graph()

# placeholders
sentences = tf.placeholder(tf.int32, [None, None], name='sentences')
targets   = tf.placeholder(tf.int32, [None, None], name='labels')
# scalar dropout probability (0.0 disables dropout)
keep_prob = tf.placeholder(tf.float32, shape=(), name='dropout')


# embedding
word_embedding = tf.get_variable(name='word_embedding_',
                                 shape=[vocab_size, word_embedding_dim],
                                 dtype=tf.float32,
                                 initializer=tf.contrib.layers.xavier_initializer())
embedding_lookup = tf.nn.embedding_lookup(word_embedding, sentences)


# bi-lstm model
with tf.variable_scope('forward'):
    fr_cell    = tf.contrib.rnn.LSTMCell(num_units=15)
    dropout_fr = tf.contrib.rnn.DropoutWrapper(fr_cell, output_keep_prob=1. - keep_prob)

with tf.variable_scope('backward'):
    bw_cell    = tf.contrib.rnn.LSTMCell(num_units=15)
    dropout_bw = tf.contrib.rnn.DropoutWrapper(bw_cell, output_keep_prob=1. - keep_prob)

with tf.variable_scope('bi-lstm') as scope:
    model, last_state = tf.nn.bidirectional_dynamic_rnn(dropout_fr,
                                                        dropout_bw,
                                                        inputs=embedding_lookup,
                                                        dtype=tf.float32)

# take the output of the last time step as the sentence representation
logits            = tf.transpose(tf.concat(model, 2), [1, 0, 2])[-1]
linear_projection = tf.layers.dense(logits, 5)


# multi-label sigmoid loss
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=linear_projection,
                                                        labels=tf.cast(targets, tf.float32))
loss      = tf.reduce_mean(tf.reduce_sum(cross_entropy, axis=1))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

The dummy data is:

dummy_data   = [[1,3,4,5,5,12], [1,3,4,4,12,0], [12,4,12,0,0,0], [1,3,4,5,5,12]]
dummy_labels = [[1,0,0,0,0], [0,1,0,1,0], [1,0,0,0,0], [0,1,0,1,0]]

This is how I normally train the network, by manually slicing and padding the sequences:

# pad and slice

def get_train_data(batch_size, slice_no):
    batch_data   = dummy_data[slice_no * batch_size:(slice_no + 1) * batch_size]
    batch_labels = np.array(dummy_labels[slice_no * batch_size:(slice_no + 1) * batch_size])

    # pad every sequence in the batch to the length of the longest one
    max_sequence    = max(map(len, batch_data))
    padded_sequence = [seq + [0] * (max_sequence - len(seq)) for seq in batch_data]
    return padded_sequence, batch_labels




# dropout 0.2 during training and 0.0 during inference
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    iteration = len(dummy_data) // batch_size

    for iter_ in range(iteration):
        sentences_, labels_ = get_train_data(batch_size, iter_)
        loss_, _ = sess.run([loss, optimizer],
                            feed_dict={sentences: sentences_, targets: labels_, keep_prob: 0.2})
        print(loss_)

Now I want to use the tf.data pipeline to build an efficient training and inference pipeline. I went through some tutorials but could not find a good answer.

I tried using tf.data like this:

dataset = tf.data.Dataset.from_tensor_slices((sentences,targets,keep_prob))
dataset = dataset.batch(batch_size)

iterator = tf.data.Iterator.from_structure(dataset.output_types)
iterator_initializer_ = iterator.make_initializer(dataset, name='initializer')
sentec, labels, drop_  = iterator.get_next()



def initialize_iterator(sess, sentences_, labels_, drops_):

        feed_dict = {sentences: sentences_, targets: labels_, keep_prob : [np.random.randint(0,2,[1,]).astype(np.float32)]}

        return sess.run(iterator_initializer_, feed_dict = feed_dict)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    iteration = len(dummy_data) // batch_size

    for iter_ in range(iteration):
initialize_iterator(sess, dummy_data, dummy_labels, [0.0])
        los, _ = sess.run([loss, optimizer])
        print(los)

But I am getting an error.

So what should an efficient pipeline look like for training an RNN with the Dataset API, including encoding, padding, and dropout for the sequences?

I would suggest preparing your data in another format, for example CSV or TFRecord. You can then use tf.data.experimental.make_csv_dataset or tf.data.TFRecordDataset to read the data directly into a tf.data object.

There is a tutorial on this topic here.
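For the CSV route, a minimal sketch could look like the following; the file name train.csv and the label column name are assumptions, and CSV is most natural when the feature columns are fixed-length:

import tensorflow as tf

# read a CSV file straight into a batched tf.data.Dataset;
# 'train.csv' and the 'label' column are hypothetical
dataset = tf.data.experimental.make_csv_dataset(
    file_pattern='train.csv',
    batch_size=2,
    label_name='label',
    num_epochs=1)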

If you are using TFRecord, one example (in the tensorflow.Example proto buffer text format) would look like this:

features {
  feature {
    key: "sentences"
    value {
      int64_list {
        value: 0
        value: 55
        value: 128
      }
    }
  }
  feature {
    key: "targets"
    value {
      int64_list {
        value: 10001
        value: 10002
      }
    }
  }
}
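As a minimal sketch, such examples could be written from Python like this, using the question's dummy_data and dummy_labels; the file name data.tfrecord is hypothetical:

import tensorflow as tf

def make_example(sentence, target):
    # build a tensorflow.Example matching the text format above
    return tf.train.Example(features=tf.train.Features(feature={
        'sentences': tf.train.Feature(int64_list=tf.train.Int64List(value=sentence)),
        'targets':   tf.train.Feature(int64_list=tf.train.Int64List(value=target)),
    }))

with tf.python_io.TFRecordWriter('data.tfrecord') as writer:  # hypothetical file name
    for sent, tgt in zip(dummy_data, dummy_labels):
        writer.write(make_example(sent, tgt).SerializeToString())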

I would use keep_prob and batch_size as model configuration parameters. You don't need to embed them in the examples.

Once you have created and serialized training and evaluation examples in the TFRecord format above, building your data pipeline is straightforward.

dataset = tf.data.TFRecordDataset(filenames = [your_tf_record_file])
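Decoding and padding can then happen inside the pipeline. One way to do it (a sketch, assuming the hypothetical data.tfrecord written above) is to parse each record with VarLenFeature and let padded_batch pad every batch to its longest sequence:

def parse_fn(record):
    # variable-length integer sequences come back as sparse tensors
    parsed = tf.parse_single_example(record, features={
        'sentences': tf.VarLenFeature(tf.int64),
        'targets':   tf.VarLenFeature(tf.int64),
    })
    sentence = tf.cast(tf.sparse_tensor_to_dense(parsed['sentences']), tf.int32)
    target   = tf.cast(tf.sparse_tensor_to_dense(parsed['targets']), tf.int32)
    return sentence, target

dataset = (tf.data.TFRecordDataset(filenames=['data.tfrecord'])
           .map(parse_fn)
           .shuffle(buffer_size=100)
           # pad each batch to the longest sequence in that batch
           .padded_batch(batch_size=2, padded_shapes=([None], [None])))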

With the dataset you can build an input_fn, and then proceed with the tf.Estimator API. An example tutorial is here.
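A hedged sketch of such an input_fn, reusing parse_fn from above and keeping batch_size and keep_prob as configuration rather than data; names like my_model_fn are placeholders:

def train_input_fn(params):
    # the Estimator passes its params dict here, so batch_size (and keep_prob,
    # inside model_fn) stay configuration instead of being baked into the data
    return (tf.data.TFRecordDataset(filenames=['data.tfrecord'])  # hypothetical file
            .map(parse_fn)
            .repeat()
            .shuffle(buffer_size=100)
            .padded_batch(params['batch_size'], padded_shapes=([None], [None])))

# estimator = tf.estimator.Estimator(model_fn=my_model_fn,  # my_model_fn is assumed
#                                    params={'batch_size': 2, 'keep_prob': 0.2})
# estimator.train(input_fn=train_input_fn, steps=100)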
