Reshaping a PyTorch tensor to 3 dimensions when it is originally 2 dimensions?

I would like to take a PyTorch tensor that I have, originally of shape torch.Size([15000, 23]), and reshape it so that it is compatible with running in a spiking neural network (snnTorch is the framework I am using in PyTorch). The shape of the tensor to input into the SNN should be [time x batch_size x feature_dimensions] (more information on this can be found here).

Right now, I am using the following code:

    # Create data of dimensions [time x batch_size x feature_dimensions]
    time_steps = 200
    batch_size = 1
    feature_dimensions = torch_input_tensor.size(dim = 1)
    torch_input_tensor_reshaped = torch.reshape(torch_input_tensor, (time_steps, batch_size, feature_dimensions))
    print(torch_input_tensor_reshaped.size())
    print(torch_input_tensor_reshaped)

When I run this code, I get the following error:

    RuntimeError: shape '[200, 1, 23]' is invalid for input of size 345000

I may be using the wrong function to do this, but the idea is that I currently have 15000 data points and 23 input features. I essentially want to feed in the same data point (23 features, 1 data point) 200 times (200 time steps).
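(The reshape above fails because reshape cannot change the total number of elements: 200 × 1 × 23 = 4,600, while the original tensor holds 15,000 × 23 = 345,000 values.) A minimal sketch of the shape I am after, taking the first data point as an example:

    import torch

    torch_input_tensor = torch.rand(15000, 23)  # stand-in for my real data
    sample = torch_input_tensor[0]               # one data point, shape [23]
    # repeat it for 200 time steps and add a batch dimension of 1
    snn_input = sample.repeat(200, 1).unsqueeze(1)
    print(snn_input.size())                      # torch.Size([200, 1, 23])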

In the example provided in the link, they use the following code:

    spk_in = spikegen.rate_conv(torch.rand((200, 784))).unsqueeze(1)

The unsqueeze function is applied to the input along dim=1 to indicate 'one batch' of data.
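To make the shapes in that example explicit (a sketch; rate_conv performs Bernoulli sampling on values in [0, 1] and keeps the input shape):

    import torch
    from snntorch import spikegen

    raw = torch.rand((200, 784))   # 200 time steps x 784 features
    spk = spikegen.rate_conv(raw)  # Bernoulli spike train, same shape [200, 784]
    spk_in = spk.unsqueeze(1)      # add a batch dimension -> [200, 1, 784]
    print(spk_in.size())           # torch.Size([200, 1, 784])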

How can I make my data shape compatible to run in an SNN?

The thing with SNNs is that they are time-varying, so if your data is time-static, then your options are either to:

  1. pass the same sample at every time step to the network, or
  2. convert it into a spike-train before passing it in.

You appear to be going for (2), although (1) might be easier.
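For option (2), snnTorch also provides spikegen.rate, which takes a static sample and returns a rate-coded spike train over num_steps time steps. A minimal sketch, assuming a single 23-feature data point (values must lie in [0, 1], since they are treated as spike probabilities):

    import torch
    from snntorch import spikegen

    num_steps = 200
    sample = torch.rand(1, 23)  # stand-in for one data point: [batch=1, features=23]
    spk_in = spikegen.rate(sample, num_steps=num_steps)
    print(spk_in.size())        # torch.Size([200, 1, 23])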

During training, you would pass the same sample to the network over and over again:

    for step in range(num_steps):
        cur1 = self.fc1(x)  # the same static input x is fed at every time step

If your input were time-varying, you would have to change x to x[step] to iterate through each time step. An example of this with MNIST is given here.
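That is, with input of shape [time x batch_size x feature_dimensions], the loop body becomes (a sketch):

    for step in range(num_steps):
        cur1 = self.fc1(x[step])  # x[step] has shape [batch_size, feature_dimensions]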

If the above code doesn't help, then it'd be useful to see how you define your network. Try something like:

    import torch
    import torch.nn as nn
    import snntorch as snn

    num_steps = 200   # number of simulation time steps
    num_outputs = 2   # change num_outputs to your number of classes

    # Define Network
    class Net(nn.Module):
        def __init__(self):
            super().__init__()

            # Initialize layers
            self.fc1 = nn.Linear(23, 100)    # 23 inputs, 100 hidden neurons
            self.lif1 = snn.Leaky(beta=0.9)  # randomly chose 0.9
            self.fc2 = nn.Linear(100, num_outputs)
            self.lif2 = snn.Leaky(beta=0.9)

        def forward(self, x):

            # Initialize hidden states at t=0
            mem1 = self.lif1.init_leaky()
            mem2 = self.lif2.init_leaky()

            # Record the final layer
            spk2_rec = []
            mem2_rec = []

            for step in range(num_steps):
                cur1 = self.fc1(x)
                spk1, mem1 = self.lif1(cur1, mem1)
                cur2 = self.fc2(spk1)
                spk2, mem2 = self.lif2(cur2, mem2)
                spk2_rec.append(spk2)
                mem2_rec.append(mem2)

            return torch.stack(spk2_rec, dim=0), torch.stack(mem2_rec, dim=0)
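A quick usage sketch (assuming the Net class and constants above; the input is random stand-in data):

    net = Net()
    sample = torch.rand(1, 23)  # one static data point: [batch=1, features=23]
    spk_rec, mem_rec = net(sample)
    print(spk_rec.size())       # torch.Size([200, 1, 2]) with num_outputs = 2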
