
Batch_size in tensorflow? Understanding the concept

My question is simple and straightforward. What does the batch size specify while training and predicting with a neural network? How can I visualize it so as to get a clear picture of how data is fed to the network?

Suppose I have an autoencoder

encoder = tflearn.input_data(shape=[None, 41])
encoder = tflearn.fully_connected(encoder, 41, activation='relu')

and I am taking the input from a CSV file with 41 features. As far as I understand, when my batch size is 1 it will take each row of 41 features from the CSV file and feed it to the 41 neurons of the first layer.

But when I increase the batch size to 100, how are the 41 features of these 100 samples going to be fed to the network?

model.fit(test_set, test_labels_set, n_epoch=1, validation_set=(valid_set, valid_labels_set),
          run_id="auto_encoder", batch_size=100, show_metric=True, snapshot_epoch=False)
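To picture how the data is fed in, here is a minimal NumPy sketch (the array shapes are illustrative, not taken from your CSV): with `batch_size=100`, the trainer slices the dataset into chunks of shape `(100, 41)`, and each forward pass receives a whole matrix rather than one row at a time.

```python
import numpy as np

# Hypothetical dataset: 500 samples, 41 features each.
data = np.random.rand(500, 41)

batch_size = 100
# Each chunk handed to the network has shape (batch_size, 41);
# all 100 rows go through the layer in one forward pass.
batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

print(len(batches))      # 5 batches per epoch
print(batches[0].shape)  # (100, 41)
```

So each of the 100 samples still maps its 41 features onto the 41 input neurons; the batch dimension just stacks the samples so they are processed together.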

Will there be any normalization of the batches, or other operations performed on them?

The number of epochs is the same in both cases.

The batch size is the number of samples you feed into your network at once. For your input encoder you specify that you enter an unspecified (None) number of samples, with 41 values per sample.

The advantage of using None is that you can now train with batches of 100 samples at once (which is good for your gradient estimate), and test with a batch of only one sample (the single sample for which you want a prediction).
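The point above can be sketched without TensorFlow at all; here is a toy dense layer shaped like your 41-to-41 `fully_connected` layer (the weight matrix and inputs are random placeholders, not your trained model). Because matrix multiplication leaves the leading dimension free, the same layer accepts any batch size, which is exactly what `shape=[None, 41]` expresses:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((41, 41))  # stand-in for the layer's weights

def forward(x):
    # x has shape (batch, 41); the batch dimension is unconstrained,
    # mirroring the None in shape=[None, 41].
    return np.maximum(x @ W, 0)  # relu activation

train_batch = rng.standard_normal((100, 41))    # training with batch_size=100
predict_batch = rng.standard_normal((1, 41))    # predicting a single sample

print(forward(train_batch).shape)    # (100, 41)
print(forward(predict_batch).shape)  # (1, 41)
```

The same weights serve both calls; only the number of rows flowing through changes.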

If you don't specify normalization per batch, there is no normalization per batch ;)

Hope I explained it well enough! If you have more questions, feel free to ask!
