
Add bias to Lasagne neural network layers

I am wondering if there is a way to add a bias node to each layer in the Lasagne neural network toolkit. I have been trying to find related information in the documentation.

This is the network I built, but I don't know how to add a bias node to each layer.

import lasagne

def build_mlp(input_var=None):
    # This creates an MLP of two hidden layers of 800 units each, followed by
    # a softmax output layer of 2 units. It applies 20% dropout to the input
    # data and 50% dropout to the hidden layers.

    # Input layer, specifying the expected input shape of the network
    # (unspecified batch size, 60 input features) and linking it to the
    # given Theano variable `input_var`, if any:
    l_in = lasagne.layers.InputLayer(shape=(None, 60),
                                     input_var=input_var)

    # Apply 20% dropout to the input data:
    l_in_drop = lasagne.layers.DropoutLayer(l_in, p=0.2)

    # Add a fully-connected layer of 800 units, using the linear rectifier, and
    # initializing weights from a uniform distribution:
    l_hid1 = lasagne.layers.DenseLayer(
            l_in_drop, num_units=800,
            nonlinearity=lasagne.nonlinearities.rectify,
            W=lasagne.init.Uniform())

    # We'll now add dropout of 50%:
    l_hid1_drop = lasagne.layers.DropoutLayer(l_hid1, p=0.5)

    # Another 800-unit layer:
    l_hid2 = lasagne.layers.DenseLayer(
            l_hid1_drop, num_units=800,
            nonlinearity=lasagne.nonlinearities.rectify)

    # 50% dropout again:
    l_hid2_drop = lasagne.layers.DropoutLayer(l_hid2, p=0.5)

    # Finally, we'll add the fully-connected output layer, of 2 softmax units:
    l_out = lasagne.layers.DenseLayer(
            l_hid2_drop, num_units=2,
            nonlinearity=lasagne.nonlinearities.softmax)

    # Each layer is linked to its incoming layer(s), so we only need to pass
    # the output layer to give access to a network in Lasagne:
    return l_out

Actually, you don't have to explicitly create biases, because DenseLayer() (and the convolution layers too) has a default keyword argument:

b=lasagne.init.Constant(0.)

So you can avoid creating the bias, if you don't want one, by explicitly passing b=None; but that is not the case here.

In brief, every such layer does have bias parameters unless you pass None to the b argument, e.g.:

hidden = lasagne.layers.DenseLayer(..., b=None)
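
As a quick check (a minimal sketch, assuming lasagne and its Theano backend are installed), you can list a layer's parameters with lasagne.layers.get_all_params() to confirm that a bias vector is created by default and disappears when b=None is passed:

import lasagne

l_in = lasagne.layers.InputLayer(shape=(None, 60))

# Default b=lasagne.init.Constant(0.): the layer owns a bias vector 'b'.
l_with_bias = lasagne.layers.DenseLayer(l_in, num_units=800)

# Passing b=None removes the bias parameter entirely.
l_no_bias = lasagne.layers.DenseLayer(l_in, num_units=800, b=None)

print([p.name for p in lasagne.layers.get_all_params(l_with_bias)])  # ['W', 'b']
print([p.name for p in lasagne.layers.get_all_params(l_no_bias)])    # ['W']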
