Differing results when using a model to infer on a batch vs. on individual elements with PyTorch
I have a neural network which takes an input tensor of dimension (batch_size, 100, 1, 1) and produces an output tensor of dimension (batch_size, 3, 64, 64). I get differing results when using the model to infer on a batch of two elements versus inferring on each element individually.
With the code below I initialize a PyTorch tensor of dimension (2, 100, 1, 1). I pass this tensor through the model, take the first element of the model output, and store it in the variable result1. For result2 I just run the first element of my original input tensor through the model directly.
inputbatch=torch.randn(2, Z_DIM, 1, 1, device=device)
inputElement=inputbatch[0].unsqueeze(0)
result1=model(inputbatch)[0]
result2=model(inputElement)
My expectation was that result1 and result2 would be the same, but they are entirely different. Could anyone explain why the two outputs differ?
This is probably because your model has some random or batch-dependent processes that are either training-specific and you have not disabled them (e.g. by using model.eval()
), or are genuinely needed by the model during inference.
To test the above, use:
model = model.eval()
before obtaining result1
.
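A minimal sketch of the effect described above, using a hypothetical stand-in model (not the asker's actual network): a BatchNorm layer in training mode normalizes with per-batch statistics, so the output for the first element depends on what else is in the batch. After model.eval(), BatchNorm switches to its running statistics and the batch/individual results agree.

```python
import torch
import torch.nn as nn

Z_DIM = 100
torch.manual_seed(0)

# Hypothetical minimal model: maps (N, 100, 1, 1) -> (N, 3, 64, 64).
model = nn.Sequential(
    nn.ConvTranspose2d(Z_DIM, 3, kernel_size=64),
    nn.BatchNorm2d(3),  # batch-dependent in training mode
)

inputbatch = torch.randn(2, Z_DIM, 1, 1)
inputElement = inputbatch[0].unsqueeze(0)

# Training mode (the default): BatchNorm uses statistics of the
# current batch, so the two results differ.
model.train()
result1 = model(inputbatch)[0]
result2 = model(inputElement)[0]
print(torch.allclose(result1, result2))  # False

# Eval mode: BatchNorm uses its running statistics, so inferring
# on a batch or on a single element gives the same output.
model.eval()
result1 = model(inputbatch)[0]
result2 = model(inputElement)[0]
print(torch.allclose(result1, result2))  # True
```

The same reasoning applies to nn.Dropout layers, which randomly zero activations in training mode and become identity functions under model.eval().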