Reset kernel with generator

Why does my code with a generator, with any batch_size, reset the kernel and fill up my RAM?

Import some required libraries

import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt

Load and split the data

cifar10_data = tf.keras.datasets.cifar10

(train_images, train_labels), (test_images, test_labels) = cifar10_data.load_data()

CLASS_NAMES= ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

validation_images, validation_labels = train_images[:5000], train_labels[:5000]
train_images, train_labels = train_images[5000:], train_labels[5000:]

Build tf.data datasets from the (image, label) pairs

train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
validation_ds = tf.data.Dataset.from_tensor_slices((validation_images, validation_labels))

Define a preprocessing function

def process_images(image, label, size=227):
    # Normalize images to have a mean of 0 and standard deviation of 1
    image = tf.image.per_image_standardization(image)
    # Resize images from 32x32 to 227x227
    image = tf.image.resize(image, (size, size))
    return image, label

Use tf.data to check the size of each dataset

train_ds_size = tf.data.experimental.cardinality(train_ds).numpy()
test_ds_size = tf.data.experimental.cardinality(test_ds).numpy()
validation_ds_size = tf.data.experimental.cardinality(validation_ds).numpy()

print("Training data size:", train_ds_size)
print("Test data size:", test_ds_size)
print("Validation data size:", validation_ds_size)

Use tf.data methods to generate data with batch size = 64

train_ds = (train_ds
                  .map(process_images)
                  .shuffle(buffer_size=train_ds_size)
                  .batch(batch_size=64, drop_remainder=True))
test_ds = (test_ds
                  .map(process_images)
                  .shuffle(buffer_size=train_ds_size)
                  .batch(batch_size=64, drop_remainder=True))
validation_ds = (validation_ds
                  .map(process_images)
                  .shuffle(buffer_size=train_ds_size)
                  .batch(batch_size=64, drop_remainder=True))

Define the model

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=96, kernel_size=(11,11), strides=(4,4), activation='relu', input_shape=(227,227,3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    tf.keras.layers.Conv2D(filters=256, kernel_size=(5,5), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    tf.keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])

Compile the model

model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.optimizers.SGD(learning_rate=0.001), metrics=['accuracy'])
# model.summary()

Fit the model on the dataset

history = model.fit(train_ds,
          epochs=1,
          validation_data=validation_ds, verbose=1,
          validation_freq=1)

How can I use a generator like in this code without this problem? I actually need to use a generator in my code to solve the memory problem, but I don't know how to use this type of generator.

You must reduce the shuffle buffer size.
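As a minimal sketch (keeping the rest of the question's pipeline unchanged; the buffer size of 1000 is an arbitrary example, not a tuned value), the training pipeline could become:

train_ds = (train_ds
                  .map(process_images)
                  .shuffle(buffer_size=1000)   # small fixed buffer instead of buffer_size=train_ds_size
                  .batch(batch_size=64, drop_remainder=True)
                  .prefetch(tf.data.experimental.AUTOTUNE))   # optional: overlap preprocessing and training

With buffer_size=train_ds_size (45,000), the shuffle buffer holds 45,000 already-resized 227x227x3 float32 images, which is roughly 28 GB on its own; a buffer of 1000 needs about 0.6 GB. Shuffling before the map, so the buffer holds the raw 32x32 images, would reduce this even further.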

It is also caused by the stack of dense layers with so many units (neurons), which leads to overflow and OOM. As estimated for this model, the dense layers contain 37,752,832 and 16,781,312 trainable parameters, which makes it a really enormous model.
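Those counts follow directly from the layer shapes: the final MaxPool2D output is 6x6x256, so Flatten yields 6*6*256 = 9216 features, giving

9216 * 4096 + 4096 = 37,752,832   (first Dense layer: weights + biases)
4096 * 4096 + 4096 = 16,781,312   (second Dense layer)

trainable parameters in the two Dense(4096) layers alone.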

So try again with fewer units in the dense layers. Note that the most important point in convolutional models is that the dense layers only classify the extracted feature maps, so there is no need to define dense layers with so many units; put the emphasis on building a good convolutional base instead, as sketched below.
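A minimal sketch of that idea, keeping the convolutional base from the question and shrinking only the classifier head (the 512 units below are an illustrative choice, not a tuned value):

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(96, (11,11), strides=(4,4), activation='relu', input_shape=(227,227,3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    tf.keras.layers.Conv2D(256, (5,5), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    tf.keras.layers.Conv2D(384, (3,3), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(384, (3,3), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2D(256, (3,3), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(pool_size=(3,3), strides=(2,2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),   # 9216*512 + 512 ≈ 4.7M params instead of ~37.8M
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])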
