
Python Neural Nets Batch Size

I have two questions about batch sizes in neural networks. First of all, is it better to use a larger or a smaller batch size to get a better model?

And the second one is about this code:

model.evaluate(x=X_test, y=y_test, batch_size=iBatchSize)

I get two results with this code:

[0.2565826796926558, 0.8687499761581421]

But what is the meaning behind these two numbers?

Thank you very much. :)

According to https://www.tensorflow.org/guide/keras/train_and_evaluate, the first number is the test loss. So if you were using, for example, cross-entropy, that's the cross-entropy loss calculated on your test data. The second number is the accuracy, indicating how well your model did on the test data.
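As a minimal sketch of where a loss number like this comes from, here is binary cross-entropy computed by hand (the labels and predictions below are invented for illustration; Keras does the same averaging internally when you compile with `loss='binary_crossentropy'`):

```python
import math

def binary_cross_entropy(y_true, y_pred):
    # Mean binary cross-entropy over all samples
    eps = 1e-7  # clip probabilities to avoid log(0)
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Hypothetical labels and predicted probabilities
y_true = [1, 0, 1, 1]
y_pred = [0.9, 0.2, 0.8, 0.6]
loss = binary_cross_entropy(y_true, y_pred)
```

The closer the predicted probabilities are to the true labels, the smaller this number gets, which is why a lower test loss generally means a better fit.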

accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP, TN, FP, FN are the counts of true positive, true negative, false positive, and false negative predictions.
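In code, that formula is just a ratio of counts (the confusion-matrix counts below are made up for illustration):

```python
def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that were correct
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. a hypothetical test set of 160 samples with 139 correct predictions
acc = accuracy(tp=100, tn=39, fp=11, fn=10)  # 139 / 160 = 0.86875
```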

When it comes to batch size, the general rule of thumb is to use a batch size large enough that, when the gradients are propagated backwards through the network, they change the weights in a meaningful way. A small batch size yields gradients computed from a small sample of data, so there is a chance they are not representative of your entire dataset. A very large batch size, on the other hand, may cause memory issues.
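A toy illustration of that trade-off, using a one-parameter least-squares model (all data and names here are invented): the gradient estimated from a small batch is a noisy version of the gradient over the full dataset.

```python
import random

def grad_estimate(data, w, batch_size):
    # Gradient of mean squared error d/dw (w*x - y)^2, averaged over one batch
    batch = random.sample(data, batch_size)
    return sum(2 * (w * x - y) * x for x, y in batch) / batch_size

random.seed(0)
# Synthetic data: y = 3x plus noise, so the loss-minimising weight is near w = 3
data = [(x, 3 * x + random.gauss(0, 1)) for x in [i / 10 for i in range(100)]]

full_grad = grad_estimate(data, w=0.0, batch_size=len(data))  # exact gradient
small_grad = grad_estimate(data, w=0.0, batch_size=4)         # noisy estimate
```

Averaging over the full dataset always points the same way, while repeated small-batch estimates scatter around that value; larger batches reduce this scatter at the cost of more memory per step.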

