
How can I properly create my Dataset?

I have the following code:

imagepaths = tf.convert_to_tensor(imagepaths, dtype=tf.string)
labels = tf.convert_to_tensor(labels, dtype=tf.int32)

# Build a TF Queue, shuffle data
image, label = tf.data.Dataset.from_tensor_slices((imagepaths, labels))

and am getting the following error:

image, label = tf.data.Dataset.from_tensor_slices((imagepaths, labels))
ValueError: too many values to unpack (expected 2)

Shouldn't Dataset.from_tensor_slices see this as the length of the tensors, not the number of inputs? How can I fix this issue, or combine the data tensors into the same variable more effectively? Just for reference: there are 1800 imagepaths and 1800 labels corresponding to each other. To be clear, the imagepaths are paths to the files where the JPEG images are located. My goal after this is to shuffle the dataset and build the neural network model.
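For what it's worth, the error comes from Python, not TensorFlow: from_tensor_slices returns a single Dataset object, and iterating it yields one (path, label) pair per row, so unpacking it into two variables fails whenever there are more than two rows. A minimal reproduction (assuming TF 2.x eager mode, with made-up paths):

```python
import tensorflow as tf

paths = tf.constant(["a.jpg", "b.jpg", "c.jpg"])
labels = tf.constant([0, 1, 2])

# from_tensor_slices returns ONE Dataset object; iterating it yields
# one (path, label) element per row -- 3 elements here, 1800 in the
# question -- so `image, label = dataset` raises
# "too many values to unpack (expected 2)".
dataset = tf.data.Dataset.from_tensor_slices((paths, labels))
print(int(dataset.cardinality().numpy()))  # 3
```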

That code is right here:

# Read images from disk
image = tf.read_file(image)
image = tf.image.decode_jpeg(image, channels=CHANNELS)

# Resize images to a common size
image = tf.image.resize_images(image, [IMG_HEIGHT, IMG_WIDTH])

# Normalize
image = image * 1.0/127.5 - 1.0

# Create batches
X, Y = tf.train.batch([image, label], batch_size=batch_size,
                      capacity=batch_size * 8,
                      num_threads=4)

Try this instead:

def transform(path, label):
    # image loading / augmentation goes here; passed through unchanged for now
    return path, label

# Pass the two tensors as a tuple; the dataset then yields (path, label) pairs
dataset = tf.data.Dataset.from_tensor_slices((imagepaths, labels))
dataset = dataset.map(transform)

Note that zipping the paths and labels into a single list of (string, int) tuples and passing that to from_tensor_slices fails, because the mixed dtypes cannot be converted into one tensor; passing the two tensors as a tuple keeps them as separate components.

If you want to have a look at your dataset, you can do it like this:

for e in dataset.take(1):
    print(e)

You can add multiple map functions, and after that use shuffle and batch on your dataset to prepare it for training ;)
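For instance, shuffling and batching might look like this (a sketch with made-up paths and a hypothetical batch size of 2; with the real data you would use all 1800 paths and a larger buffer):

```python
import tensorflow as tf

# Hypothetical stand-ins for the real 1800 paths and labels
imagepaths = tf.constant(["img0.jpg", "img1.jpg", "img2.jpg", "img3.jpg"])
labels = tf.constant([0, 1, 2, 3])

dataset = tf.data.Dataset.from_tensor_slices((imagepaths, labels))
# A buffer at least as large as the dataset gives a full shuffle
dataset = dataset.shuffle(buffer_size=4)
dataset = dataset.batch(2)

for paths, lbls in dataset:
    print(paths.numpy(), lbls.numpy())
```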

