python / csv / pickle / theano

I am trying to make a .pkl file that can be loaded into Theano, starting from a CSV file.

import numpy as np
import csv
import gzip, cPickle
from numpy import genfromtxt
import theano
import theano.tensor as T

#Open csv file and read in data
csvFile = "filename.csv"
my_data = genfromtxt(csvFile, delimiter=',', skip_header=1)
data_shape = "There are " + repr(my_data.shape[0]) + " samples of vector length " + repr(my_data.shape[1])

num_rows = my_data.shape[0] # Number of data samples
num_cols = my_data.shape[1] # Length of Data Vector

total_size = (num_cols-1) * num_rows


data = np.arange(total_size)
data = data.reshape(num_rows, num_cols-1) # 2D Matrix of data points
data = data.astype('float32')

label = np.arange(num_rows)
print label.shape
#label = label.reshape(num_rows, 1) # 2D Matrix of data points
label = label.astype('float32')

print data.shape

#Read through data file, assume label is in last col
for i in range(my_data.shape[0]):
    label[i] = my_data[i][num_cols-1]

    for j in range(num_cols-1):
        data[i][j] = my_data[i][j]


#Split data in terms of 70% train, 10% val, 20% test

train_num = int(num_rows * 0.7)
val_num = int(num_rows * 0.1)
test_num = int(num_rows * 0.2)

DataSetState = "This dataset has " + repr(data.shape[0]) + " samples of length " + repr(data.shape[1]) + ". The number of training examples is " + repr(train_num)
print DataSetState



train_set_x = data[:train_num]
train_set_y = label[:train_num]

val_set_x = data[train_num+1:train_num+val_num]
val_set_y = label[train_num+1:train_num+val_num]

test_set_x = data[train_num+val_num+1:]
test_set_y = label[train_num+val_num+1:]


# Dataset divided into 3 parts, split by percentage.

train_set = train_set_x, train_set_y
val_set = val_set_x, val_set_y
test_set = test_set_x, val_set_y


dataset = [train_set, val_set, test_set]

f = gzip.open(csvFile+'.pkl.gz','wb')
cPickle.dump(dataset, f, protocol=2)
f.close()
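
As a quick sanity check on what actually ends up in the archive (a minimal sketch, assuming the file name used above; this is not part of the original script), the pickle can be loaded back and the shape and dtype of each split printed:

import gzip, cPickle

csvFile = "filename.csv"          # same name as in the script above
with gzip.open(csvFile + '.pkl.gz', 'rb') as f:
    train_set, val_set, test_set = cPickle.load(f)

for name, (x, y) in zip(('train', 'val', 'test'),
                        (train_set, val_set, test_set)):
    # x and y should have the same number of rows in every split,
    # and y should be a 1-D vector of class labels
    print name, x.shape, x.dtype, y.shape, y.dtype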

When I run the resulting .pkl file through Theano (as a DBN or SdA), it pretrains just fine, which makes me think the data is stored correctly.

However, when it comes to fine-tuning I get the following error:

epoch 1, minibatch 2775/2775, validation error 0.000000 %

    Traceback (most recent call last):
      File "SdA_custom.py", line 489, in <module>
        test_SdA()
      File "SdA_custom.py", line 463, in test_SdA
        test_losses = test_model()
      File "SdA_custom.py", line 321, in test_score
        return [test_score_i(i) for i in xrange(n_test_batches)]

      File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 606, in __call__
        storage_map=self.fn.storage_map)
      File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 595, in __call__
        outputs = self.fn()
    ValueError: Input dimension mis-match. (input[0].shape[0] = 10, input[1].shape[0] = 3)
    Apply node that caused the error: Elemwise{neq,no_inplace}(argmax, Subtensor{int64:int64:}.0)
    Inputs types: [TensorType(int64, vector), TensorType(int32, vector)]
    Inputs shapes: [(10,), (3,)]
    Inputs strides: [(8,), (4,)]
    Inputs values: ['not shown', array([0, 0, 0], dtype=int32)]

    Backtrace when the node is created:
      File "/home/dean/Documents/DeepLearningRepo/DeepLearningTutorials-master/code/logistic_sgd.py", line 164, in errors
        return T.mean(T.neq(self.y_pred, y))

    HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
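
For what it's worth, the op in that node, Elemwise{neq}, is the T.neq(self.y_pred, y) from logistic_sgd.errors(): it compares one prediction per example in the minibatch (10 here) element-wise against the label slice for that minibatch, which apparently held only 3 entries. A minimal standalone reproduction of the same mismatch (just to make the shapes concrete; not taken from my script) might look like:

import numpy as np
import theano
import theano.tensor as T

# y_pred stands in for the classifier's predictions for a minibatch of 10,
# y for a label slice that only holds 3 entries
y_pred = T.ivector('y_pred')
y = T.ivector('y')
err = T.mean(T.neq(y_pred, y))      # same expression as logistic_sgd.errors()
f = theano.function([y_pred, y], err)

f(np.zeros(10, dtype='int32'), np.zeros(3, dtype='int32'))
# raises ValueError: Input dimension mis-match. (10 vs 3), as in the traceback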

10 is my batch size; if I change to a batch size of 1, I get the following:

ValueError: Input dimension mis-match. (input[0].shape[0] = 1, input[1].shape[0] = 0)

I think I am storing the labels wrong when I make the pkl, but I can't spot what is happening or why changing the batch size alters the error.

Hope you can help!

I saw this just now while looking for a similar error I was getting, so I'm posting a reply in case it helps someone else. For me the error was resolved when I changed n_out from 1 to 2 in the dbn_test() parameter list. n_out is the number of labels (classes), not the number of output layers.
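
Put differently, the final logistic-regression layer needs one output unit per class, and the labels should be a vector of integer class indices, so with two classes n_out must be 2, not 1. A small sketch of deriving it from the label array instead of hard-coding it (the example label vector and the use of np.unique are illustrative, not part of the original script):

import numpy as np

# illustrative label vector: binary class indices 0/1
label = np.array([0, 1, 1, 0, 1], dtype='int32')

# one output unit per distinct class: 2 for a binary problem, not 1
n_out = len(np.unique(label))
print n_out   # prints 2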
