
Differing results when using a model to infer on a batch vs. on individual elements with PyTorch

I have a neural network that takes an input tensor of dimension (batch_size, 100, 1, 1) and produces an output tensor of dimension (batch_size, 3, 64, 64). I get different results when running the model on a batch of two elements versus running it on each element individually.

With the code below I initialize a PyTorch tensor of dimension (2, 100, 1, 1). I pass this tensor through the model, take the first element of the model output, and store it in the variable result1. For result2 I run the first element of the original input tensor through the model directly.

import torch

Z_DIM = 100  # latent dimension expected by the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

inputbatch = torch.randn(2, Z_DIM, 1, 1, device=device)  # batch of two inputs
inputElement = inputbatch[0].unsqueeze(0)                # shape (1, 100, 1, 1)

result1 = model(inputbatch)[0]   # first output of the batched forward pass
result2 = model(inputElement)    # output of the single-element forward pass

My expectation was that result1 and result2 would be the same, but they are entirely different. Could anyone explain why the two outputs differ?
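
A minimal check to quantify how far apart they are (note that result2 keeps its batch dimension of 1, so result2[0] is compared against result1):

print(torch.allclose(result1, result2[0]))        # False in this scenario
print((result1 - result2[0]).abs().max().item())  # large maximum difference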

This is probably because your model contains layers whose behaviour in training mode is either random (e.g. dropout, which zeroes a random subset of activations on every forward pass) or batch-dependent (e.g. batch normalization, which normalizes using statistics computed over the whole batch, so the second element of the batch influences the output for the first), and you have not disabled training mode (e.g. by calling model.eval()).

To test this, call:

model.eval()

before computing result1 and result2. (eval() switches the model to inference mode in place and also returns the model, so the assignment form model = model.eval() is equivalent.)
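
As a minimal sketch of the effect, using a hypothetical stand-in model with a BatchNorm layer (the original model definition is not shown in the question):

import torch
import torch.nn as nn

Z_DIM = 100  # same latent dimension as in the question

# Hypothetical stand-in for the real model: the BatchNorm layer makes the
# training-mode output for each element depend on the rest of the batch
model = nn.Sequential(
    nn.ConvTranspose2d(Z_DIM, 3, kernel_size=4),
    nn.BatchNorm2d(3),
)

inputbatch = torch.randn(2, Z_DIM, 1, 1)
inputElement = inputbatch[0].unsqueeze(0)

model.train()  # training mode: BatchNorm normalizes with per-batch statistics
print(torch.allclose(model(inputbatch)[0], model(inputElement)[0]))  # False

model.eval()   # inference mode: BatchNorm uses its fixed running statistics
print(torch.allclose(model(inputbatch)[0], model(inputElement)[0]))  # True

If the model also contains dropout layers, the training-mode outputs would additionally be random from call to call, so model.eval() is needed for reproducible inference in either case.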
