Tensorflow Reading CSV - What's the best approach
So I've been trying out different ways of reading a CSV file with 97K rows, each row with 500 features (roughly 100 MB).
My first approach was to read all the data into memory using numpy:
raw_data = genfromtxt(filename, dtype=numpy.int32, delimiter=',')
This command ran for a very long time, and I need to find a better way to read my file.
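One way to avoid the long up-front load is to parse the file lazily instead of materializing all 97K rows at once. A minimal stdlib-only sketch of that idea (the helper names `iter_rows` and `batches` are illustrative, not from the original post):

```python
import csv

def iter_rows(path):
    """Yield one parsed row at a time instead of loading the
    whole file into memory (hypothetical helper)."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            yield [int(v) for v in row]

def batches(path, batch_size):
    """Group the streamed rows into mini-batches, e.g. for SGD."""
    batch = []
    for row in iter_rows(path):
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch
```

This trades one slow bulk parse for many cheap incremental ones, which is essentially what the queue-based TensorFlow pipeline below does as well.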
My second approach was to follow this guide: https://www.tensorflow.org/programmers_guide/reading_data
The first thing I noticed is that every epoch takes much longer to run. Since I'm using stochastic gradient descent, this can be explained by the fact that every batch needs to be read from the file.
Is there a way to optimize the second approach?
My code (second approach):
reader = tf.TextLineReader()
filename_queue = tf.train.string_input_producer([filename])
_, csv_row = reader.read(filename_queue) # read one line
data = tf.decode_csv(csv_row, record_defaults=rDefaults) # use defaults for this line (in case of missing data)
labels = data[0]
features = data[labelsSize:labelsSize+featuresSize]
# minimum number elements in the queue after a dequeue, used to ensure
# that the samples are sufficiently mixed
# I think 10 times the BATCH_SIZE is sufficient
min_after_dequeue = 10 * batch_size
# the maximum number of elements in the queue
capacity = 20 * batch_size
# shuffle the data to generate BATCH_SIZE sample pairs
features_batch, labels_batch = tf.train.shuffle_batch([features, labels], batch_size=batch_size, num_threads=10, capacity=capacity, min_after_dequeue=min_after_dequeue)
* * * *
coordinator = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coordinator)
try:
    # And then after everything is built, start the training loop.
    for step in xrange(max_steps):
        global_step = step + offset_step
        start_time = time.time()
        # Run one step of the model. The return values are the activations
        # from the `train_op` (which is discarded) and the `loss` Op. To
        # inspect the values of your Ops or variables, you may include them
        # in the list passed to sess.run() and the value tensors will be
        # returned in the tuple from the call.
        _, __, loss_value, summary_str = sess.run([eval_op_train, train_op, loss_op, summary_op])
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
finally:
    coordinator.request_stop()
# Wait for threads to finish.
coordinator.join(threads)
sess.close()
A solution can be to convert the data into TensorFlow's binary format using TFRecords.
See TensorFlow Data Input (Part 1): Placeholders, Protobufs & Queues.
To convert a CSV file into TFRecords, take a look at the following snippet:
csv = pandas.read_csv("your.csv").values
with tf.python_io.TFRecordWriter("csv.tfrecords") as writer:
    for row in csv:
        features, label = row[:-1], row[-1]
        example = tf.train.Example()
        example.features.feature["features"].float_list.value.extend(features)
        example.features.feature["label"].int64_list.value.append(label)
        writer.write(example.SerializeToString())
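Part of why TFRecords read faster than CSV is the file framing: each serialized `Example` is stored as a length-prefixed binary record, so the reader performs fixed-size reads instead of scanning for newlines and parsing text. A simplified sketch of that framing idea (the real TFRecord format additionally stores masked CRC32C checksums of the length and the data, which this sketch omits; the function names are illustrative):

```python
import struct

def write_records(path, payloads):
    # Simplified length-prefixed framing: an 8-byte little-endian
    # length, then the raw bytes. (Real TFRecord framing also adds
    # CRC32C checksums; omitted here for brevity.)
    with open(path, "wb") as f:
        for data in payloads:
            f.write(struct.pack("<Q", len(data)))
            f.write(data)

def read_records(path):
    # Read records back with fixed-size header reads -- no text
    # parsing or delimiter scanning required.
    records = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                break
            (length,) = struct.unpack("<Q", header)
            records.append(f.read(length))
    return records
```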
And for streaming (very) large files, either from the local file system or, in a more realistic use case, from remote storage such as AWS S3 or HDFS, the Gensim smart_open Python library could be helpful:
import smart_open

# stream lines from an S3 object
for line in smart_open.smart_open('s3://mybucket/mykey.txt'):
    print(line)
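The same line-streaming pattern also works for local files with only the standard library, including gzip-compressed ones, which smart_open decompresses transparently as well; smart_open itself is only needed for remote schemes like `s3://` or `hdfs://`. A stdlib-only sketch (the `stream_lines` helper is illustrative):

```python
import gzip

def stream_lines(path):
    # Stream a local file one line at a time, decompressing
    # transparently if it is gzip-compressed.
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as f:
        for line in f:
            yield line.rstrip("\n")
```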