
Understanding Perceptrons

I just started a machine learning class and we went over Perceptrons. For homework we are supposed to: "Choose appropriate training and test data sets of two dimensions (plane). Use 10 data points for training and 5 for testing." Then we are supposed to write a program that will use a perceptron algorithm and output:

  • a comment on whether the training data points are linearly separable
  • a comment on whether the test points are linearly separable
  • your initial choice of the weights and constants
  • the final solution equation (decision boundary)
  • the total number of weight updates that your algorithm made
  • the total number of iterations made over the training set
  • the final misclassification error, if any, on the training data and also on the test data

I have read the first chapter of my book several times and I am still having trouble fully understanding perceptrons.

I understand that you change the weights if a point is misclassified until none are misclassified anymore, I guess what I'm having trouble understanding is

  1. What do I use the test data for and how does that relate to the training data?
  2. How do I know if a point is misclassified?
  3. How do I go about choosing test points, training points, threshold or a bias?

It's really hard for me to know how to make up one of these without my book providing good examples. As you can tell I am pretty lost, any help would be so much appreciated.

What do I use the test data for and how does that relate to the training data?

Think about a Perceptron as a young child. You want to teach a child how to distinguish apples from oranges. You show it 5 different apples (all red/yellow) and 5 oranges (of different shapes) while telling it what it sees at every turn ("this is an apple, this is an orange"). Assuming the child has perfect memory, it will learn what makes an apple an apple and an orange an orange if you show it enough examples. It will eventually start to use meta-features (like shape) without you actually telling it. This is what a Perceptron does. After you have shown it all the examples, you start again at the beginning; this is called a new epoch.

What happens when you want to test the child's knowledge? You show it something new: a green apple (not just yellow/red), a grapefruit, maybe a watermelon. Why not show the child the exact same data as during training? Because the child has perfect memory, it will only tell you what you told it. You won't see how well it generalizes from known to unseen data unless you have separate test data that you never showed it during training. If the child has a horrible performance on the test data but 100% performance on the training data, you will know that it has learned nothing: it is simply repeating what it was told during training. You trained it too long, and it only memorized your examples without understanding what makes an apple an apple because you gave it too many details. This is called overfitting. To prevent your Perceptron from only (!) recognizing training data, you'll have to stop training at a reasonable time and find a good balance between the sizes of the training and test sets.

How do I know if a point is misclassified?

If its output is different from what it should be. Let's say an apple has class 0 and an orange has class 1 (here you should start reading about single-/multi-layer Perceptrons and how neural networks of multiple Perceptrons work). The network takes your input; how it's encoded is irrelevant here, so let's say the input is the string "apple". Your training set then is {(apple1,0), (apple2,0), (apple3,0), (orange1,1), (orange2,1), ...}. Since you know the class beforehand, you can check the network's output (0 or 1) for the input "apple1". If it outputs 1, you compute (targetValue - actualValue) = (0 - 1) = -1. A non-zero difference means the network gave a wrong output. Compare this to the delta rule and you will see that this small expression is part of the larger update equation. Whenever the difference is non-zero, you perform a weight update. If target and actual value are the same, the difference is 0 and you know the network didn't misclassify that point.
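A minimal sketch of this check and the resulting update in Python, assuming numeric feature vectors, a step activation at 0, and a learning rate of 0.1 (the names `predict` and `train_step` are just for illustration, not from the original answer):

```python
import numpy as np

def predict(weights, x):
    # Step activation: output 1 if the weighted sum reaches 0, else output 0.
    return 1 if np.dot(weights, x) >= 0 else 0

def train_step(weights, x, target, learning_rate=0.1):
    """One perceptron step; returns the new weights and whether x was misclassified."""
    actual = predict(weights, x)
    error = target - actual          # 0 if correct, +1 or -1 if misclassified
    if error != 0:
        weights = weights + learning_rate * error * x
    return weights, error != 0
```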

How do I go about choosing test points, training points, threshold or a bias?

Practically, the bias and threshold aren't "chosen" per se. The bias is trained like any other weight using a simple "trick": treat the bias as an additional input unit with a constant value of 1. The actual bias value is then encoded in that extra unit's weight, and the learning algorithm will learn the bias for us automatically.
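For instance, a short sketch of that trick on some made-up 2-D points (as in the assignment):

```python
import numpy as np

# Two made-up 2-D training points (x1, x2).
X = np.array([[2.0, 3.0],
              [1.0, 1.5]])

# Prepend a constant 1 to every point; the weight attached to this first
# column acts as the bias and is learned like any other weight.
X_with_bias = np.hstack([np.ones((X.shape[0], 1)), X])
# X_with_bias is now [[1., 2., 3.], [1., 1., 1.5]]
```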

Depending on your activation function, the threshold is predetermined. For a simple perceptron, the classification will occur as follows:

output = 1 if (w1*x1 + w2*x2 + ... + wn*xn) ≥ threshold, otherwise output = 0

Since we use a binary output (between 0 and 1), it's a good start to put the threshold at 0.5 since that's exactly the middle of the range [0,1].
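As a rough sketch of that classification step (assuming the weighted sum itself is compared against the threshold; with the bias trick above, 0 is the usual cut-off, and 0.5 is the midpoint only if your activation first squashes the sum into [0, 1]):

```python
import numpy as np

def classify(weights, x_with_bias, threshold=0.0):
    # Compare the weighted sum against the threshold; with the bias folded
    # into the weights, 0 is the common choice.
    return 1 if np.dot(weights, x_with_bias) >= threshold else 0
```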

Now to your last question about choosing training and test points: this is quite difficult, and you get better at it with experience. Where you're at, you start off by implementing simple logical functions like AND, OR, XOR, etc. There it's trivial: you put everything in your training set and test with the same values as your training set (since for x AND y, x XOR y, etc. there are only 4 possible inputs: 00, 10, 01, 11). For complex data like images, audio, etc. you'll have to try and tweak your data and features until you feel the network can work with them as well as you want it to. A training run on AND is sketched below.
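Here is a minimal sketch of such a run on AND, assuming a step activation at 0, a learning rate of 0.1, and small random initial weights; the counters mirror the quantities your assignment asks you to report (AND is linearly separable, so the loop terminates with zero misclassifications):

```python
import numpy as np

# Truth table for AND: each row is (1, x1, x2), the leading 1 being the bias input.
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(0)
w = rng.uniform(-0.5, 0.5, size=3)   # initial weights (report these as your initial choice)
lr = 0.1
updates, epochs = 0, 0

converged = False
while not converged and epochs < 100:
    epochs += 1
    misclassified = 0
    for x_i, t in zip(X, y):
        out = 1 if np.dot(w, x_i) >= 0 else 0
        if out != t:
            w += lr * (t - out) * x_i    # perceptron update rule
            updates += 1
            misclassified += 1
    converged = (misclassified == 0)

print("final weights:", w, "| weight updates:", updates, "| epochs:", epochs)
```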

What do I use the test data for and how does that relate to the training data?

Usually, to assess how well a particular algorithm performs, you first train it and then use different data to test how well it does on data it has never seen before.

How do I know if a point is misclassified?

Your training data has labels, which means that for each point in the training set, you know what class it belongs to.

How do I go about choosing test points, training points, threshold or a bias?

For simple problems, you usually take all the training data and split it around 80/20. You train on the 80% and test against the remaining 20%.
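For example, a rough sketch of such a split (the points and labels here are made up; each row is (x1, x2, label)):

```python
import numpy as np

# Made-up labelled data set: each row is (x1, x2, label).
points = np.array([[1.0, 2.0, 0],
                   [2.5, 0.5, 1],
                   [0.3, 1.7, 0],
                   [3.1, 2.2, 1],
                   [1.8, 0.9, 1]])

rng = np.random.default_rng(42)
shuffled = rng.permutation(points)      # shuffle before splitting
split = int(0.8 * len(shuffled))
train, test = shuffled[:split], shuffled[split:]
```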
