
Deep learning using vectors

I've built a network with no hidden layer. As shown below, I'm using a sigmoid activation function and mean squared error (MSE) as the error function. The dataset is the Titanic dataset from Kaggle, and I have pre-processed the data.
The error of the model keeps increasing. How can I update the weights properly so that the error decreases? I am a beginner, so please bear with me.

Link for the dataset:
https://drive.google.com/file/d/1RZjCvJ732uHx-IAlhPAVRHDwabYsHJJl/view?usp=sharing

My colab notebook:
https://colab.research.google.com/drive/1HD9RA1RXQcaBvUANz3vR_6iW0BSr9kDp?usp=sharing

import numpy as np
# td (the pre-processed DataFrame), nf (number of features), ne (number of
# examples), and the helpers summation(), sigmoid(), error(), and updW()
# are defined earlier in the notebook.
X=np.array(td.drop(columns=['Survived','Fare'],axis=1))
Y=np.array(td['Survived'])
X=X.T
print('X shape :',X.shape)
print('Y shape :',Y.shape)
W=np.zeros((nf,1))
print('W shape :',W.shape)
B=0
for _ in range(100):
  z=summation(W,X,B)
  print('shape of z : ',z.shape)
  y=sigmoid(z.T)
  cost=(error(Y,y))/ne
  print('cost = ',cost)
  updatefactor=((Y-y)*(Y-1))*y*(1-y)*X/ne
  print(updatefactor.squeeze())
  print(updatefactor.shape)
  W=updW(W,0.001,updatefactor)

I assume you are building a perceptron model.

As @Rahul Vishwakarma commented, when computing z you need to multiply the weights W with the features X and then add the bias B. Simply summing the three terms will not give a proper pre-activation. So you could do something like

z = np.dot(W.T, X) + B

Make sure W and X are aligned for the matrix multiplication: with W of shape (nf, 1) and X of shape (nf, ne), np.dot(W.T, X) gives z of shape (1, ne), one pre-activation per example.
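Putting the corrected z together with a sign-consistent MSE/sigmoid gradient, a minimal sketch of the whole training loop might look like the following. Note this is an illustration, not the asker's notebook: the data here is a randomly generated toy stand-in for the pre-processed Titanic features, and the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for the Titanic features:
# nf features, ne examples, laid out as (nf, ne) like X.T in the question.
nf, ne = 5, 200
X = rng.normal(size=(nf, ne))
true_w = rng.normal(size=(nf, 1))
Y = (true_w.T @ X > 0).astype(float)        # labels, shape (1, ne)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.zeros((nf, 1))
B = 0.0
lr = 0.1

for _ in range(500):
    z = np.dot(W.T, X) + B                  # (1, ne): W'X + B, not a plain sum
    y = sigmoid(z)
    cost = np.mean((y - Y) ** 2)            # MSE
    # Chain rule through the sigmoid: dC/dz = 2*(y - Y) * y*(1 - y) / ne
    dz = 2.0 * (y - Y) * y * (1.0 - y) / ne # (1, ne)
    dW = np.dot(X, dz.T)                    # (nf, 1), matches W's shape
    dB = np.sum(dz)
    W -= lr * dW                            # gradient *descent*: subtract
    B -= lr * dB

print('final cost:', cost)
```

With W initialized to zeros, the sigmoid outputs 0.5 everywhere and the initial MSE is 0.25; as long as the gradient is subtracted (not added) with a small enough learning rate, the cost falls from there instead of increasing.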
