
How to correctly implement backpropagation for machine learning the MNIST dataset?

So, I'm using Michael Nielsen's machine learning book as a reference for my code (it is basically identical): http://neuralnetworksanddeeplearning.com/chap1.html

The code in question:

    def backpropagate(self, image, image_value) :


        # declare two new numpy arrays for the updated weights & biases
        new_biases = [np.zeros(bias.shape) for bias in self.biases]
        new_weights = [np.zeros(weight_matrix.shape) for weight_matrix in self.weights]

        # -------- feed forward --------
        # store all the activations in a list
        activations = [image]

        # declare empty list that will contain all the z vectors
        zs = []
        for bias, weight in zip(self.biases, self.weights) :
            print(bias.shape)
            print(weight.shape)
            print(image.shape)
            z = np.dot(weight, image) + bias
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)

        # -------- backward pass --------
        # transpose() returns the numpy array with the rows as columns and columns as rows
        delta = self.cost_derivative(activations[-1], image_value) * sigmoid_prime(zs[-1])
        new_biases[-1] = delta
        new_weights[-1] = np.dot(delta, activations[-2].transpose())

        # l = 1 means the last layer of neurons, l = 2 is the second-last, etc.
        # this takes advantage of Python's ability to use negative indices in lists
        for l in range(2, self.num_layers) :
            z = zs[-1]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            new_biases[-l] = delta
            new_weights[-l] = np.dot(delta, activations[-l-1].transpose())
        return (new_biases, new_weights)

My algorithm only gets through the first round of backpropagation before this error occurs:

  File "D:/Programming/Python/DPUDS/DPUDS_Projects/Fall_2017/MNIST/network.py", line 97, in stochastic_gradient_descent
    self.update_mini_batch(mini_batch, learning_rate)
  File "D:/Programming/Python/DPUDS/DPUDS_Projects/Fall_2017/MNIST/network.py", line 117, in update_mini_batch
    delta_biases, delta_weights = self.backpropagate(image, image_value)
  File "D:/Programming/Python/DPUDS/DPUDS_Projects/Fall_2017/MNIST/network.py", line 160, in backpropagate
    z = np.dot(weight, activation) + bias
ValueError: shapes (30,50000) and (784,1) not aligned: 50000 (dim 1) != 784 (dim 0)

I get why it's an error. The number of columns in the weights doesn't match the number of rows in the pixel image, so I can't do the matrix multiplication. Here's where I'm confused -- there are 30 neurons used in the backpropagation, each with 50,000 images being evaluated. My understanding is that each of the 50,000 should have 784 weights attached, one for each pixel. But when I modify the code accordingly:

        count = 0
        for bias, weight in zip(self.biases, self.weights) :
            print(bias.shape)
            print(weight[count].shape)
            print(image.shape)
            z = np.dot(weight[count], image) + bias
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
            count += 1

I still get a similar error:

ValueError: shapes (50000,) and (784,1) not aligned: 50000 (dim 0) != 784 (dim 0)

I'm just really confused by all the linear algebra involved, and I think I'm missing something about the structure of the weight matrix. Any help at all would be greatly appreciated.

It looks like the issue is in your changes to the original code.

I've downloaded the example from the link you provided, and it works without any errors.

Here is the full source code I used:

import cPickle
import gzip
import numpy as np
import random

def load_data():
    """Return the MNIST data as a tuple containing the training data,
    the validation data, and the test data.
    The ``training_data`` is returned as a tuple with two entries.
    The first entry contains the actual training images.  This is a
    numpy ndarray with 50,000 entries.  Each entry is, in turn, a
    numpy ndarray with 784 values, representing the 28 * 28 = 784
    pixels in a single MNIST image.
    The second entry in the ``training_data`` tuple is a numpy ndarray
    containing 50,000 entries.  Those entries are just the digit
    values (0...9) for the corresponding images contained in the first
    entry of the tuple.
    The ``validation_data`` and ``test_data`` are similar, except
    each contains only 10,000 images.
    This is a nice data format, but for use in neural networks it's
    helpful to modify the format of the ``training_data`` a little.
    That's done in the wrapper function ``load_data_wrapper()``, see
    below.
    """
    f = gzip.open('../data/mnist.pkl.gz', 'rb')
    training_data, validation_data, test_data = cPickle.load(f)
    f.close()
    return (training_data, validation_data, test_data)

def load_data_wrapper():
    """Return a tuple containing ``(training_data, validation_data,
    test_data)``. Based on ``load_data``, but the format is more
    convenient for use in our implementation of neural networks.
    In particular, ``training_data`` is a list containing 50,000
    2-tuples ``(x, y)``.  ``x`` is a 784-dimensional numpy.ndarray
    containing the input image.  ``y`` is a 10-dimensional
    numpy.ndarray representing the unit vector corresponding to the
    correct digit for ``x``.
    ``validation_data`` and ``test_data`` are lists containing 10,000
    2-tuples ``(x, y)``.  In each case, ``x`` is a 784-dimensional
    numpy.ndarry containing the input image, and ``y`` is the
    corresponding classification, i.e., the digit values (integers)
    corresponding to ``x``.
    Obviously, this means we're using slightly different formats for
    the training data and the validation / test data.  These formats
    turn out to be the most convenient for use in our neural network
    code."""
    tr_d, va_d, te_d = load_data()
    training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]]
    training_results = [vectorized_result(y) for y in tr_d[1]]
    training_data = zip(training_inputs, training_results)
    validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]]
    validation_data = zip(validation_inputs, va_d[1])
    test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]
    test_data = zip(test_inputs, te_d[1])
    return (training_data, validation_data, test_data)

def vectorized_result(j):
    """Return a 10-dimensional unit vector with a 1.0 in the jth
    position and zeroes elsewhere.  This is used to convert a digit
    (0...9) into a corresponding desired output from the neural
    network."""
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e

class Network(object):

    def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network.  For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron.  The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1.  Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs from later layers."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        """Return the output of the network if ``a`` is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent.  The ``training_data`` is a list of tuples
        ``(x, y)`` representing the training inputs and the desired
        outputs.  The other non-optional parameters are
        self-explanatory.  If ``test_data`` is provided then the
        network will be evaluated against the test data after each
        epoch, and partial progress printed out.  This is useful for
        tracking progress, but slows things down substantially."""
        if test_data: n_test = len(test_data)
        n = len(training_data)
        for j in xrange(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in xrange(0, n, mini_batch_size)]
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
            if test_data:
                print "Epoch {0}: {1} / {2}".format(
                    j, self.evaluate(test_data), n_test)
            else:
                print "Epoch {0} complete".format(j)

    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
        is the learning rate."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

    def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x.  ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x] # list to store all the activations, layer by layer
        zs = [] # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Note that the variable l in the loop below is used a little
        # differently to the notation in Chapter 2 of the book.  Here,
        # l = 1 means the last layer of neurons, l = 2 is the
        # second-last layer, and so on.  It's a renumbering of the
        # scheme in the book, used here to take advantage of the fact
        # that Python can use negative indices in lists.
        for l in xrange(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)

    def evaluate(self, test_data):
        """Return the number of test inputs for which the neural
        network outputs the correct result. Note that the neural
        network's output is assumed to be the index of whichever
        neuron in the final layer has the highest activation."""
        test_results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in test_data]
        return sum(int(x == y) for (x, y) in test_results)

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations-y)

#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))

training_data, validation_data, test_data = load_data_wrapper()
net = Network([784, 30, 10])
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)

Additional info:

However, I would recommend using one of the existing frameworks, for example Keras, so you don't have to reinvent the wheel.
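For instance, a minimal Keras sketch for the same task might look roughly like the following. This is only an illustration, not the code checked above: it assumes a tensorflow.keras installation, mirrors the [784, 30, 10] layout and the chapter-1 settings (sigmoid activations, quadratic cost, learning rate 3.0), and the optimizer argument names can differ between Keras versions.

import numpy as np
from tensorflow import keras

# Load MNIST and flatten each 28x28 image into a 784-vector scaled to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# One-hot encode the digit labels (0..9), like vectorized_result() above
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# 784 -> 30 -> 10, sigmoid activations as in Nielsen's chapter 1 network
model = keras.Sequential([
    keras.layers.Dense(30, activation="sigmoid", input_shape=(784,)),
    keras.layers.Dense(10, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=3.0),
              loss="mse", metrics=["accuracy"])

# Mini-batch SGD: 30 epochs, batch size 10, evaluated on the test set
model.fit(x_train, y_train, epochs=30, batch_size=10,
          validation_data=(x_test, y_test))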

Also, it was checked with Python 3.6 (note that the listing above is Python 2 code, so running it under 3.6 requires the usual cPickle/xrange/print adjustments).

Kudos on digging into Nielsen's code. It's a great resource for developing a thorough understanding of NN principles. Too many people leap ahead to Keras without knowing what goes on under the hood.

Each training example doesn't get its own weights; each of the 784 features does. If each example got its own weights, then each weight set would overfit to its corresponding training example. Also, if you later used your trained network to run inference on a single test example, what would it do with 50,000 sets of weights when presented with just one handwritten digit? Instead, each of the 30 neurons in your hidden layer learns a set of 784 weights, one for each pixel, that offers high predictive accuracy when generalized to any handwritten digit.
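To make that concrete, here is a small numpy sketch (hypothetical, using the same [784, 30, 10] sizes as below) of how the weights are organized: one row of 784 pixel weights per hidden neuron, no matter how many training images there are.

import numpy as np

sizes = [784, 30, 10]       # input pixels, hidden neurons, output digits
# same initialization scheme as Nielsen's Network.__init__
weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

print(weights[0].shape)     # (30, 784): 30 hidden neurons, 784 pixel weights each
print(weights[1].shape)     # (10, 30):  10 output neurons, 30 hidden weights each
print(weights[0][0].shape)  # (784,):    one hidden neuron's weights, one per pixel

# The 50,000 training images all share these weights; the number of training
# examples never appears in any weight matrix shape.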

Import network.py and instantiate a Network class like this without modifying any code:

net = network.Network([784, 30, 10])

...which gives you a network with 784 input neurons, 30 hidden neurons, and 10 output neurons. Your weight matrices will have dimensions [30, 784] and [10, 30], respectively. When you feed the network an input array of dimensions [784, 1], the matrix multiplication that gave you an error is valid, because dim 1 of the weight matrix equals dim 0 of the input array (both 784).
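As a quick sanity check, here is a sketch (under the same assumed [784, 30, 10] layout) showing that the feedforward products line up, while a (30, 50000) weight matrix like the one in your traceback cannot be multiplied by a (784, 1) image.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = np.random.randn(30, 784), np.random.randn(30, 1)
W2, b2 = np.random.randn(10, 30), np.random.randn(10, 1)
image = np.random.rand(784, 1)             # one flattened MNIST image

hidden = sigmoid(np.dot(W1, image) + b1)   # (30, 784) . (784, 1) -> (30, 1)
output = sigmoid(np.dot(W2, hidden) + b2)  # (10, 30) . (30, 1)  -> (10, 1)
print(hidden.shape, output.shape)          # (30, 1) (10, 1)

bad_W = np.random.randn(30, 50000)         # the shape from your traceback
# np.dot(bad_W, image) would raise the same ValueError: 50000 != 784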

Your problem is not the implementation of backprop, but rather setting up a network architecture appropriate for the shape of your input data. If memory serves, Nielsen leaves backprop as a black box in chapter 1 and doesn't dive into it until chapter 2. Keep at it, and good luck!
