Weights in logistic regression in Keras layers

Following is the logistic regression code that I am using to establish an association between a dose value (shape 672,1) and a disease outcome (shape 672,1; binary outcome 0/1) using Keras. My objective is to calculate the odds ratio, which I figured out to be exp(weight), and compare it with the odds ratio that I calculated using Fisher's exact test.

from keras.models import Sequential
from keras.layers import Dense

class logit:
    def lg_keras(self, input_dim, output_dim, ep, X, y):
        model = Sequential()
        # A single Dense unit with a sigmoid activation is logistic regression
        model.add(Dense(output_dim, input_dim=input_dim, activation='sigmoid'))
        model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
        model.fit(X, y, epochs=ep, verbose=0)  # 'nb_epoch' was renamed to 'epochs'
        print("Done")
        return model

My question concerns the weights I extract from the Keras model. I was hoping to get just one weight for the single output node, but I received two. Below are the code and the output.

lgd = logit()
model = lgd.lg_keras(X.shape[1], y.shape[1], 20, X, y)
for layer in model.layers:
    weights = layer.get_weights()  # list of numpy arrays
print(weights)

[array([[-0.00019858]], dtype=float32), array([-0.06999612], dtype=float32)]
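To make the output above concrete, here is a small sketch (NumPy only, with the two printed arrays hard-coded as hypothetical values) showing how a `Dense(1, activation='sigmoid')` layer uses them: the first array is the kernel of shape `(input_dim, units)` and the second is the bias of shape `(units,)`.

```python
import numpy as np

# Hypothetical values copied from the printed output above.
kernel = np.array([[-0.00019858]], dtype=np.float32)  # shape (input_dim, units) = (1, 1)
bias = np.array([-0.06999612], dtype=np.float32)      # shape (units,) = (1,)

def predict_proba(x, kernel, bias):
    """Reproduce what Dense(1, activation='sigmoid') computes: sigmoid(x @ W + b)."""
    z = x @ kernel + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[10.0]])  # one sample with a single dose feature
p = predict_proba(x, kernel, bias)  # predicted probability of disease
```

With this kernel and bias, `z` is slightly negative, so the predicted probability is just under 0.5.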

What are these two weight values for?

I think I have found the answer to my own question: the first array holds the weight (kernel) term and the second array holds the bias term. If I add two columns to my feature table, I get two values in the weight array and still a single value in the bias array, which confirms this.
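For the odds-ratio comparison mentioned at the top, here is a minimal sketch, assuming the dose has been dichotomized into exposed/unexposed so that a 2x2 table exists. The counts below are invented for illustration; the sample odds ratio computed here is the same quantity `scipy.stats.fisher_exact` reports alongside its p-value.

```python
import numpy as np

# Hypothetical 2x2 contingency table (counts invented for illustration):
# rows = dose group (exposed / unexposed), cols = outcome (diseased / healthy).
a, b = 30, 70   # exposed:   30 diseased, 70 healthy
c, d = 10, 90   # unexposed: 10 diseased, 90 healthy

# Sample odds ratio: (a/b) / (c/d) = (a*d) / (b*c)
sample_or = (a * d) / (b * c)

# From the fitted Keras model, the corresponding estimate for a binary
# dose column would be exp(weight):
#   kernel, bias = model.layers[0].get_weights()
#   keras_or = float(np.exp(kernel[0, 0]))
```

The two estimates will generally be close but not identical, since the Keras value comes from gradient-based optimization rather than a closed-form count ratio.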
