Tensorflow: Numpy equivalent of tf batches and reshape

I am trying to use Tensorflow to classify some images with an LSTM, using one-hot encoded outputs and a softmax classifier on the last LSTM output. My dataset comes as a CSV, so I had to figure out how to do some of the manipulations in NumPy and Tensorflow. I still get this error:

AttributeError: 'numpy.ndarray' object has no attribute 'next_batch'

As you can see, I cannot call next_batch(batch_size) on my dataset, and the tf.reshape that follows needs to be replaced by its NumPy equivalent.

My question: how can I correct these two problems?

'''
Tensorflow LSTM classification of 16x30 images.
'''

from __future__ import print_function

import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell
import numpy as np
from numpy import genfromtxt
from sklearn.cross_validation import train_test_split
import pandas as pd

'''
A Tensorflow LSTM that sequentially feeds in several rows from each single image,
i.e. the graph takes a flat (1, 480) feature vector per image, as in the
multi-layer perceptron MNIST Tensorflow tutorial, but then reshapes it into a
sequence of 30 time steps with 16 features each.
'''

blaine = genfromtxt('./Desktop/Blaine_CSV_lstm.csv', delimiter=',')  # load the CSV into a NumPy array
target = [row[0] for row in blaine]              # 1st column of the CSV holds the targets
data = blaine[:, 1:480]                          # flat feature vectors
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.05, random_state=42)

f = open('cs-training.csv', 'w')    # 1st split, for training
for i, j in enumerate(X_train):
    k = np.append(np.array(y_train[i]), j)
    f.write(",".join([str(s) for s in k]) + '\n')
f.close()
f = open('cs-testing.csv', 'w')     # 2nd split, for testing
for i, j in enumerate(X_test):
    k = np.append(np.array(y_test[i]), j)
    f.write(",".join([str(s) for s in k]) + '\n')
f.close()

ss = pd.Series(y_train)     # Series needed later for pandas get_dummies one-hot vectors
gg = pd.Series(y_test)

new_data = genfromtxt('cs-training.csv',delimiter=',')  # Training data
new_test_data = genfromtxt('cs-testing.csv',delimiter=',')  # Test data

x_train = np.array([i[1:] for i in new_data])       # drop the label column
y_train_onehot = pd.get_dummies(ss)

x_test = np.array([i[1:] for i in new_test_data])
y_test_onehot = pd.get_dummies(gg)


# General Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10

# Tensorflow LSTM Network Parameters
n_input = 16   # features per time step (image width)
n_steps = 30   # time steps (image height)
n_hidden = 128 # hidden layer num of features
n_classes = 20 # total number of classes

# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])

# Define weights
weights = {
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([n_classes]))
}


def RNN(x, weights, biases):

    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

    # Permuting batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshaping to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(0, n_steps, x)

    # Define a lstm cell with tensorflow
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)

    # Get lstm cell output
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

pred = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        x_train, y_train = new_data.next_batch(batch_size)
        # Reshape data to get 30 seq of 16 elements
        x_train = x_train.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: x_train, y: y_train})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: x_train, y: y_train})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: x_train, y: y_train})
            print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc))
        step += 1
    print("Optimization Finished!")

You can create your own function, call it nextbatch, that given a NumPy array and a pair of indices returns that slice of the array for you:

def nextbatch(x, i, j):
    # rows i..j-1 along the first axis, keeping all remaining dimensions
    return x[i:j, ...]
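For example, nextbatch(x_train, 0, batch_size) returns the first batch_size rows of x_train.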

You could also pass in the step you are on and perhaps take a modulo, but this is the basic idea needed to make it work.
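A minimal sketch of that step/modulo variant (the helper name, its signature, and the wrap-around are my own additions, not part of the answer; it reuses the numpy import from the question's code):

def nextbatch_step(x, y, step, batch_size):
    # Hypothetical helper: select the batch for a given training step,
    # wrapping around the dataset with a modulo so the loop can run for
    # more iterations than there are samples.
    n = x.shape[0]
    idx = np.arange(step * batch_size, (step + 1) * batch_size) % n
    return x[idx], y[idx]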

As for the reshape, use:

x_train = np.reshape(x_train,(batch_size, n_steps, n_input))
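Putting both fixes into the question's training loop might look like this (a sketch, assuming the pandas one-hot DataFrame is first converted to a NumPy array via .values, and ignoring the partial batch at the end of the data):

y_train_np = y_train_onehot.values   # pandas DataFrame -> NumPy array

while step * batch_size < training_iters:
    start = (step - 1) * batch_size            # step starts at 1 in the question's loop
    batch_x = nextbatch(x_train, start, start + batch_size)
    batch_y = nextbatch(y_train_np, start, start + batch_size)
    # NumPy equivalent of the tf.reshape: 30 time steps of 16 features
    batch_x = np.reshape(batch_x, (batch_size, n_steps, n_input))
    sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
    step += 1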
