
Weight Vectors in Perceptron Algorithm in Machine learning

I'm researching the perceptron algorithm in machine learning. So far, I have understood the following about the perceptron:

1) It's a supervised learning technique.
2) It tries to create a hyperplane that linearly separates the class
   labels, which is when the perceptron converges.
3) If the predicted output does not match the expected output, the
   algorithm adjusts its weight vector and bias.

However, I could not understand what happens to the weight vector if the
perceptron does not achieve convergence. Does the algorithm keep
updating the weight vector?

The perceptron converges if and only if the classes are linearly separable. If they are, the algorithm will converge to a solution, but there are no guarantees about the quality of that solution. The perceptron's hyperplane does not minimize any particular objective, and there is no unique solution: any line that separates the two classes is equally valid to the perceptron.
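To make this concrete, here is a minimal sketch of the classic perceptron update rule (labels in {-1, +1}); the function name and data are my own illustration, not from any particular library. On linearly separable data it stops as soon as one full pass produces no mistakes:

```python
import numpy as np

def perceptron(X, y, max_epochs=1000):
    """Classic perceptron. y must be in {-1, +1}.
    Returns (w, b, converged)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on boundary)
                w += yi * xi                   # nudge hyperplane toward xi
                b += yi
                errors += 1
        if errors == 0:        # one clean pass over the data: converged
            return w, b, True
    return w, b, False         # hit the epoch cap without converging

# Linearly separable example (AND-like labels): the loop terminates early.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b, converged = perceptron(X, y)
print(converged)  # True
```

Note that the returned `w, b` is just *some* separating hyperplane; depending on the data ordering and initialization you would get a different, equally valid one (unlike, say, an SVM, which picks the max-margin separator).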

If the classes are not linearly separable, the perceptron will never converge to a final solution. It may bounce around a reasonable region, but even that is not guaranteed; it could wander off to a useless place. It will simply keep updating its weights until you force some stopping condition (most commonly a maximum number of passes through the data).
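You can see this non-convergent behavior with XOR labels, the standard non-separable example. The sketch below (my own illustration, with an assumed epoch cap as the stopping condition) never finds a mistake-free pass, so it only stops because of `max_epochs`:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classic perceptron with an epoch cap as the stopping condition."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified
                w += yi * xi
                b += yi
                errors += 1
        if errors == 0:
            return w, b, True
    # Weights were still being updated on the final pass; we simply stop.
    return w, b, False

# XOR labels are not linearly separable: no hyperplane classifies all four
# points correctly, so the weight updates never cease on their own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([-1, 1, 1, -1])
w, b, converged = perceptron(X, y_xor)
print(converged)  # False
```

Without the `max_epochs` cap, the inner loop would run forever, which is exactly why practical implementations always impose such a limit (or track the best weights seen so far, as in the pocket algorithm).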

