
Changing the number of convolutional and pooling layers in TensorFlow using the MNIST dataset

I am using Windows 10 Pro, Python 3.6.2rc1, Visual Studio 2017, and TensorFlow. I am working through the TensorFlow CNN tutorial at the following link:

https://www.tensorflow.org/tutorials/layers

I have added another convolution-and-pooling layer (a 3rd layer) before the flattening step to see whether the accuracy changes.

The code I have added is as follows:

## Input Tensor Shape: [batch_size, 7, 7, 64]
## Output Tensor Shape: [batch_size, 7, 7, 64]
conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[3, 3],
    padding=1,
    activation=tf.nn.relu)

pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=1)
pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])

I changed the padding to 1 and the stride to 1 to make sure the size of the output is the same as the input. But after adding this new layer, I get the following warnings, and the program ends without showing any result:

WARNING:tensorflow:From E:\Apps\DA2CNNTest\TFHWDetection WIth More Layers\TFClassification\TFClassification\TFClassification.py:179: calling BaseEstimator.fit (from tensorflow.contrib.learn.python.learn.estimators.estimator) with batch_size is deprecated and will be removed after 2016-12-01.
Instructions for updating:
Estimator is decoupled from Scikit Learn interface by moving into separate class SKCompat. Arguments x, y and batch_size are only available in the SKCompat class, Estimator will only accept input_fn.
Example conversion:
  est = Estimator(...) -> est = SKCompat(Estimator(...))
The thread 'MainThread' (0x5c8) has exited with code 0 (0x0).
The program '[13468] python.exe' has exited with code 1 (0x1).
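As far as I can tell, tf.layers.conv2d only accepts the strings "valid" or "same" for its padding argument, so padding=1 fails when the graph is built (the deprecation warnings above are unrelated). Also, a 2x2 max-pool with strides=1 and the default "valid" padding shrinks 7x7 to 6x6, which would not match the 7 * 7 * 64 reshape either. A minimal sketch of a version that actually keeps the 7x7 size:

conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[3, 3],
    padding="same",       # padding must be a string, not an integer
    activation=tf.nn.relu)

# A 2x2 pool with strides=1 and padding="same" also keeps the 7x7 size
pool3 = tf.layers.max_pooling2d(
    inputs=conv3, pool_size=[2, 2], strides=1, padding="same")
pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])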

Without this added layer, it works properly. To solve the problem, I changed conv3 and pool3 as follows:

conv3 = tf.layers.conv2d(
    inputs=pool2,
    filters=64,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)

# Input Tensor Shape: [batch_size, 7, 7, 64]
# Output Tensor Shape: [batch_size, 3, 3, 64]
pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)
pool3_flat = tf.reshape(pool3, [-1, 3 * 3 * 64])
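(For reference: with the default "valid" padding, a 2x2 pool with stride 2 on a 7x7 input produces floor((7 - 2)/2) + 1 = 3, so the [batch_size, 3, 3, 64] comment and the 3 * 3 * 64 reshape are consistent.)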

But then I got a different error at

mnist_classifier.fit(
    x=train_data,
    y=train_labels,
    batch_size=100,
    steps=20000,
    monitors=[logging_hook])

which is as follows:

tensorflow.python.framework.errors_impl.NotFoundError: Key conv2d_2/bias not found in checkpoint [[Node: save/RestoreV2_5 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_5/tensor_names, save/RestoreV2_5/shape_and_slices)]]

The error points exactly at monitors=[logging_hook].

My whole code is below; as you can see, I have commented out the previous version with padding=1.

I would really appreciate it if you could point out what my mistake is and why it happens. Also, am I correct about the dimensions of the inputs and outputs in the 3rd layer?

Complete code:

"""Convolutional Neural Network Estimator for MNIST, built with tf.layers."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

from tensorflow.contrib import learn
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib



tf.logging.set_verbosity(tf.logging.INFO)

def cnn_model_fn(features, labels, mode):
    """Model function for CNN."""

    input_layer = tf.reshape(features, [-1, 28, 28, 1])

    # Input Tensor Shape: [batch_size, 28, 28, 1]
    # Output Tensor Shape: [batch_size, 28, 28, 32]
    conv1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=32,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    # Input Tensor Shape: [batch_size, 28, 28, 32]
    # Output Tensor Shape: [batch_size, 14, 14, 32]
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

    # Convolutional Layer #2
    # Input Tensor Shape: [batch_size, 14, 14, 32]
    # Output Tensor Shape: [batch_size, 14, 14, 64]
    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    # Pooling Layer #2
    # Input Tensor Shape: [batch_size, 14, 14, 64]
    # Output Tensor Shape: [batch_size, 7, 7, 64]
    pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

    '''Adding a new layer of conv and pool'''
    ## Input Tensor Shape: [batch_size, 7, 7, 64]
    ## Output Tensor Shape: [batch_size, 7, 7, 64]
    #conv3 = tf.layers.conv2d(
    #    inputs=pool2,
    #    filters=64,
    #    kernel_size=[3, 3],
    #    padding=1,
    #    activation=tf.nn.relu)

    ## Input Tensor Shape: [batch_size, 7, 7, 64]
    ## Output Tensor Shape: [batch_size, 7, 7, 64]
    #pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=1)
    #pool3_flat = tf.reshape(pool3, [-1, 7 * 7 * 64])

    # Input Tensor Shape: [batch_size, 7, 7, 64]
    # Output Tensor Shape: [batch_size, 7, 7, 64]
    conv3 = tf.layers.conv2d(
        inputs=pool2,
        filters=64,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    # Input Tensor Shape: [batch_size, 7, 7, 64]
    # Output Tensor Shape: [batch_size, 3, 3, 64]
    pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2)

    '''End of manipulation'''

    # Input Tensor Shape: [batch_size, 3, 3, 64]
    # Output Tensor Shape: [batch_size, 3 * 3 * 64]
    pool3_flat = tf.reshape(pool3, [-1, 3 * 3 * 64])

    # Input Tensor Shape: [batch_size, 3 * 3 * 64]
    # Output Tensor Shape: [batch_size, 1024]
    # dense() constructs a fully connected layer; it takes the number of
    # neurons and the activation function as arguments.
    dense = tf.layers.dense(inputs=pool3_flat, units=1024, activation=tf.nn.relu)

    # Add dropout operation; 0.6 probability that each element will be kept
    dropout = tf.layers.dropout(
        inputs=dense, rate=0.4, training=mode == learn.ModeKeys.TRAIN)

    logits = tf.layers.dense(inputs=dropout, units=10)

    loss = None
    train_op = None

    # Calculate loss (for both TRAIN and EVAL modes)
    if mode != learn.ModeKeys.INFER:
        onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
        loss = tf.losses.softmax_cross_entropy(
            onehot_labels=onehot_labels, logits=logits)

    # Configure the training op (for TRAIN mode)
    if mode == learn.ModeKeys.TRAIN:
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss,
            global_step=tf.contrib.framework.get_global_step(),
            learning_rate=0.001,
            optimizer="SGD")

    # Generate predictions
    # The logits layer returns the predictions as raw values in a
    # [batch_size, 10]-dimensional tensor.
    predictions = {
        "classes": tf.argmax(input=logits, axis=1),
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
    }

    # Return a ModelFnOps object
    return model_fn_lib.ModelFnOps(
        mode=mode, predictions=predictions, loss=loss, train_op=train_op)

def main(unused_argv):
    # Load training and eval data
    mnist = learn.datasets.load_dataset("mnist")
    train_data = mnist.train.images  # Returns np.array
    train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
    eval_data = mnist.test.images  # Returns np.array
    eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)

    # Create the Estimator
    mnist_classifier = learn.Estimator(
        model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model")

    # Set up logging for predictions
    # Log the values in the "softmax_tensor" tensor with label "probabilities"
    tensors_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
        tensors=tensors_to_log, every_n_iter=50)

    # Train the model
    mnist_classifier.fit(
        x=train_data,
        y=train_labels,
        batch_size=100,
        steps=20000,
        monitors=[logging_hook])

    # Configure the accuracy metric for evaluation
    # (variable renamed from metrics)
    metricss = {
        "accuracy":
            learn.MetricSpec(
                metric_fn=tf.metrics.accuracy, prediction_key="classes"),
    }

    # Evaluate the model on the first 100 test examples and print the results
    eval_results = mnist_classifier.evaluate(
        x=eval_data[0:100], y=eval_labels[0:100], metrics=metricss)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()

It looks like the model previously trained and saved in model_dir conflicts with the changed graph: the new conv3 layer creates variables (such as conv2d_2/bias) that do not exist in the old checkpoint. The Estimator loads checkpoints from the saved model directory and continues training from the previously saved model, so whenever you change the model architecture, you need to delete the old model and start training again.
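A minimal sketch of that clean-up, assuming the /tmp/mnist_convnet_model path from the question (the _v2 directory name is just an example):

import shutil

# Remove the stale checkpoints so no incompatible variables are restored
shutil.rmtree("/tmp/mnist_convnet_model", ignore_errors=True)

# ...or, alternatively, train into a fresh, empty model_dir
mnist_classifier = learn.Estimator(
    model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model_v2")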

A simple fix would be to define a custom checkpoint directory for the model, as follows:

tf.train.generate_checkpoint_state_proto("/tmp/","/tmp/mnist_convnet_model")

This fixes the problem with the MNIST example and also gives you access to a location where you can control checkpoints.
