
What happens after data augmentation is done?

I am using Kaggle's "Dogs vs. Cats" dataset and following TensorFlow's CIFAR-10 tutorial (for convenience I left out weight decay, moving averages, and L2 loss). I trained my network successfully, but when I added the data-augmentation part to my code, something strange happened: the loss never decreases, even after several thousand steps (before I added it, everything was fine). The code looks like this:

import tensorflow as tf

# RESIZED_IMG is assumed to be a module-level constant: the padded size
# the image is resized to before random cropping.

def get_batch(image, label, image_w, image_h, batch_size, capacity, test_flag=False):
  '''
  Args:
      image: list type
      label: list type
      image_w: image width
      image_h: image height
      batch_size: batch size
      capacity: the maximum number of elements in the queue
      test_flag: create a training batch or a test batch
  Returns:
      image_batch: 4D tensor [batch_size, height, width, 3], dtype=tf.float32
      label_batch: 1D tensor [batch_size], dtype=tf.int32
  '''

  image = tf.cast(image, tf.string)
  label = tf.cast(label, tf.int32)

  # make an input queue
  input_queue = tf.train.slice_input_producer([image, label])

  label = input_queue[1]
  image_contents = tf.read_file(input_queue[0])
  image = tf.image.decode_jpeg(image_contents, channels=3)

  ####################################################################
  # Data augmentation goes here,
  # but at test time, leave the images as they are.

  if not test_flag:
      image = tf.image.resize_image_with_crop_or_pad(image, RESIZED_IMG, RESIZED_IMG)
      # Randomly crop a [height, width] section of the image.
      # Note: the crop size is [height, width, channels]; it must match
      # the set_shape call below (the original had [image_w, image_h, 3]).
      distorted_image = tf.random_crop(image, [image_h, image_w, 3])

      # Randomly flip the image horizontally.
      distorted_image = tf.image.random_flip_left_right(distorted_image)

      # Because these operations are not commutative, consider randomizing
      # the order of their operation.
      # NOTE: since per_image_standardization zeros the mean and makes
      # the stddev unit, this likely has no effect; see tensorflow#1458.
      distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)

      image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
  else:
      image = tf.image.resize_image_with_crop_or_pad(image, image_w, image_h)

  ######################################################################

  # Subtract off the mean and divide by the variance of the pixels.
  image = tf.image.per_image_standardization(image)
  # Set the shapes of tensors.
  image.set_shape([image_h, image_w, 3])
  # label.set_shape([1])

  image_batch, label_batch = tf.train.batch([image, label],
                                            batch_size=batch_size,
                                            num_threads=64,
                                            capacity=capacity)

  label_batch = tf.reshape(label_batch, [batch_size])
  image_batch = tf.cast(image_batch, tf.float32)

  return image_batch, label_batch
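For reference, tf.image.per_image_standardization rescales each image to zero mean and (approximately) unit variance; per the TensorFlow documentation it divides by max(stddev, 1/sqrt(num_elements)) to avoid blowing up on near-uniform images. A minimal numpy sketch of that computation:

```python
import numpy as np

def per_image_standardization(img):
    """Zero-mean, unit-variance scaling of a single image, mirroring
    what tf.image.per_image_standardization does per the TF docs:
    (x - mean) / max(stddev, 1/sqrt(num_elements))."""
    img = img.astype(np.float64)
    num_elements = img.size
    mean = img.mean()
    # The stddev floor prevents division by zero on uniform images.
    adjusted_stddev = max(img.std(), 1.0 / np.sqrt(num_elements))
    return (img - mean) / adjusted_stddev
```

Because this runs after the augmentation ops, any constant brightness shift is cancelled out, which is exactly why the tutorial comment above notes that random_brightness "likely has no effect".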

Make sure the limits you use (e.g. max_delta=63 for brightness, upper=1.8 for contrast) are low enough that the images remain recognizable. Another possible problem is that the augmentation is applied over and over again, so after a few iterations the image is completely distorted (although I did not spot that mistake in your snippet).
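The "applied over and over" failure mode mentioned above can be illustrated with a toy numpy sketch (this is my own stand-in for tf.image.random_brightness, not the TF op itself): one application of a delta up to 63 on a 0-255 scale keeps the image recognizable, but re-augmenting an already-augmented image performs a random walk that compounds the distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_brightness(img, max_delta):
    """Toy stand-in for tf.image.random_brightness on 0-255 data:
    add one uniform delta to the whole image, then clip to [0, 255]."""
    delta = rng.uniform(-max_delta, max_delta)
    return np.clip(img + delta, 0.0, 255.0)

img = np.full((8, 8), 128.0)          # mid-gray test image

# Single application: the pixel shift is bounded by max_delta.
once = random_brightness(img, max_delta=63)

# Repeated application: the shifts accumulate like a random walk,
# which is what happens if the same tensor is augmented every epoch.
many = img.copy()
for _ in range(50):
    many = random_brightness(many, max_delta=63)
```

Augmentation should therefore always be applied to the freshly decoded image, as the graph in the question does, never to the output of a previous augmentation step.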

What I suggest you do is add image visualization to TensorBoard. To visualize the images, use the tf.summary.image method; you will be able to see the results of the augmentation clearly.

tf.summary.image('input', image_batch, 10)

This gist can serve as an example.
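If TensorBoard is inconvenient, a quick alternative (my own sketch, not from the answer) is to evaluate a batch and dump a few images to binary PPM files, which need nothing beyond numpy and the standard library:

```python
import numpy as np

def save_ppm(img, path):
    """Write an HxWx3 uint8 array as a binary PPM (P6) file so
    augmented images can be eyeballed in any image viewer."""
    h, w, c = img.shape
    assert c == 3 and img.dtype == np.uint8
    with open(path, 'wb') as f:
        f.write(b'P6\n%d %d\n255\n' % (w, h))
        f.write(img.tobytes())

# Hypothetical usage inside a session: after
#   imgs = sess.run(image_batch)
# rescale the standardized floats back to displayable bytes, e.g.
#   x = imgs[i]
#   vis = np.clip((x - x.min()) / np.ptp(x) * 255, 0, 255).astype(np.uint8)
#   save_ppm(vis, 'sample_%d.ppm' % i)
```

Note that because per_image_standardization has already been applied, the evaluated batch is zero-mean float data and must be rescaled (as in the usage comment) before it is viewable.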


Disclaimer: The technical posts on this site follow the CC BY-SA 4.0 license. If you need to repost, please credit this site's URL or the original article. For any questions, contact: yoyou2525@163.com.

© 2020-2024 STACKOOM.COM