PyTorch input tensor has the wrong number of dimensions for Conv1d
def train(epoch):
    model.train()
    train_loss = 0
    for batch_idx, (data, _) in enumerate(train_loader):
        data = data[None, :, :]
        print(data.size())  # something seems to change between here
        data = data.to(device)
        optimizer.zero_grad()
        recon_batch, mu, logvar = model(data)  # and here???
        loss = loss_function(recon_batch, data, mu, logvar)
        loss.backward()
        train_loss += loss.item()
        optimizer.step()
        if batch_idx % 1000 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item() / len(data)))
    print('====> Epoch: {} Average loss: {:.4f}'.format(epoch, train_loss / len(train_loader.dataset)))

for epoch in range(1, 4):
    train(epoch)
This is very strange: looking at the training loop, it does recognize that the size is [1, 1, 1998], but then something changes after it is sent to the device?
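As a sanity check, moving a tensor to a device never changes its shape. A minimal sketch (the zero tensor is just a stand-in for data):

import torch

x = torch.zeros(1, 1, 1998)  # stand-in for data after data[None, :, :]
print(x.size())              # torch.Size([1, 1, 1998])
x = x.to('cpu')              # .to(device) only moves/copies the data
print(x.size())              # still torch.Size([1, 1, 1998])

So the shape change has to be happening inside model(data) itself, not in the transfer to the device.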
torch.Size([1, 1, 1998])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-138-70cca679f91a> in <module>()
     27 
     28 for epoch in range(1, 4):
---> 29     train(epoch)

5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    255                         _single(0), self.dilation, self.groups)
    256         return F.conv1d(input, self.weight, self.bias, self.stride,
--> 257                         self.padding, self.dilation, self.groups)
    258 
    259 

RuntimeError: Expected 3-dimensional input for 3-dimensional weight [12, 1, 1], but got 2-dimensional input of size [1, 1998] instead
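The error message itself points at the cause: F.conv1d expects a 3-dimensional input of shape (batch, channels, length), and it received the 2-dimensional [1, 1998] produced by x.view(-1, 1998). A minimal reproduction with stand-in tensors, matching the sizes in the traceback:

import torch
import torch.nn as nn

conv = nn.Conv1d(1, 12, kernel_size=1, stride=5, padding=0)  # weight [12, 1, 1]
x = torch.zeros(1, 1, 1998)   # (batch, channels, length)
print(conv(x).size())         # torch.Size([1, 12, 400])
conv(x.view(-1, 1998))        # [1, 1998] is 2-D: raises the RuntimeError above

(Recent PyTorch versions may instead accept a 2-D tensor as an unbatched (channels, length) input, but the version in this traceback raises.)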
Also, here is my model (I recognize there are likely a couple of other issues here, but I am asking about the tensor size not registering):
class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        self.conv1 = nn.Conv1d(1, 12, kernel_size=1, stride=5, padding=0)
        self.conv1_drop = nn.Dropout2d()
        self.pool1 = nn.MaxPool1d(kernel_size=3, stride=2)
        self.fc21 = nn.Linear(198, 1)
        self.fc22 = nn.Linear(198, 1)
        self.fc3 = nn.Linear(1, 198)
        self.fc4 = nn.Linear(198, 1998)

    def encode(self, x):
        h1 = self.conv1(x)
        h1 = self.conv1_drop(h1)
        h1 = self.pool1(h1)
        h1 = F.relu(h1)
        h1 = h1.view(1, -1)  # 1 is the batch size
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.rand_like(std)
        return mu + eps * std

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 1998))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
So why doesn't PyTorch keep the dimensions after reshaping, and would that be the correct tensor size if it did?
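As to whether [1, 1, 1998] would be the right size if it were kept: tracing the encoder with the standard output-length formula, L_out = floor((L_in + 2*padding - kernel_size) / stride) + 1 (dilation 1), shows the sizes the conv stack would produce. A sketch, assuming an input of [1, 1, 1998]:

import torch
import torch.nn as nn

x = torch.zeros(1, 1, 1998)
conv1 = nn.Conv1d(1, 12, kernel_size=1, stride=5, padding=0)
pool1 = nn.MaxPool1d(kernel_size=3, stride=2)

h = conv1(x)                 # floor((1998 - 1) / 5) + 1 = 400  -> [1, 12, 400]
h = pool1(h)                 # floor((400 - 3) / 2) + 1 = 199   -> [1, 12, 199]
print(h.size())              # torch.Size([1, 12, 199])
print(h.view(1, -1).size())  # torch.Size([1, 2388])

So even with a correct 3-D input, the flattened encoder output (12 * 199 = 2388 features) would not match the 198 in_features of fc21/fc22; presumably that is one of the "other issues" acknowledged above.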
I just found my mistake: in forward() I was calling self.encode(x.view(-1, 1998)), which was reshaping the tensor.
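A minimal sketch of the fix implied by that discovery (not a complete repair; the Linear sizes would still need to match the flattened encoder output, per the trace above):

def forward(self, x):
    mu, logvar = self.encode(x)  # keep the 3-D (batch, channels, length) tensor; no view(-1, 1998)
    z = self.reparameterize(mu, logvar)
    return self.decode(z), mu, logvar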