
How do I implement a constant neuron in Keras?

I have the following neural network in Python/Keras:

from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))

encoded = Dense(1000, activation='relu')(input_img)  # L1
encoded = Dense(500, activation='relu')(encoded)     # L2
encoded = Dense(250, activation='relu')(encoded)     # L3
encoded = Dense(2, activation='relu')(encoded)       # L4

decoded = Dense(20, activation='relu')(encoded)      # L5
decoded = Dense(400, activation='relu')(decoded)     # L6
decoded = Dense(100, activation='relu')(decoded)     # L7
decoded = Dense(10, activation='softmax')(decoded)   # L8

mymodel = Model(input_img, decoded)

What I'd like to do is have one neuron in each of layers 4 to 7 be a constant 1 (to implement the bias term), i.e. it has no input, has a fixed value of 1, and is fully connected to the next layer. Is there a simple way to do this? Thanks a lot!

You could create constant input tensors:

import numpy as np
from keras import backend as K
from keras.layers import Input

constant_values = np.ones(shape)  # `shape` is whatever shape you need
constant = Input(tensor=K.variable(constant_values))

That said, your use case (a bias term) sounds like you should simply use use_bias=True, which is already the default for Dense layers, as @gionni noted.
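To see why use_bias=True already gives you the constant-1 neuron, here is a minimal NumPy sketch (names and shapes are illustrative, not from the original post): appending a constant-1 column to a layer's activations and learning a weight for it produces exactly the same output as a dense layer with a bias vector.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # activations of a layer (batch of 4)
W = rng.normal(size=(3, 2))   # weights into the next layer
b = rng.normal(size=(2,))     # bias of the next layer

# Standard dense layer with a bias term (what use_bias=True does):
out_bias = x @ W + b

# Same computation via a "constant neuron": augment x with a ones
# column and fold b into the weight matrix as an extra row.
x_aug = np.hstack([x, np.ones((4, 1))])
W_aug = np.vstack([W, b])
out_const = x_aug @ W_aug

assert np.allclose(out_bias, out_const)
```

So the extra constant-1 neuron feeding the next layer is mathematically identical to that layer's bias, which is why no custom wiring is needed here.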
