
Difficulty in implementing a simple single-layer RNN using PyTorch's nn.Linear class

I am working on making a simple RNN using PyTorch's nn.Linear. First I initialized my weights as:

self.W_x = nn.Linear(self.input_dim, self.hidden_dim, bias=True)
self.W_h = nn.Linear(self.hidden_dim, self.hidden_dim, bias=True)

Now, in the main step, I compute the current state from the previous state and the weights with this statement:

h_t = np.tanh((inp * self.W_x) + (prev_h * self.W_h))

Here I get the Python error shown below:

TypeError: mul(): argument 'other' (position 1) must be Tensor, not Linear

Can anyone help me with this?

Your W_x and W_h are not weights but linear layers, which use a weight and a bias (since bias=True). They need to be called as functions.

Furthermore, you cannot use NumPy operations on PyTorch tensors, and if you convert your tensors to NumPy arrays you cannot backpropagate through them, since only PyTorch operations are tracked in the computational graph. There is no need for np.tanh anyway, as PyTorch also has torch.tanh.

h_t = torch.tanh(self.W_x(inp) + self.W_h(prev_h))
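
Putting both fixes together, a minimal sketch of such an RNN cell might look like the following. The class name SimpleRNNCell, the example shapes, and the usage lines are illustrative assumptions, not the asker's actual code.

import torch
import torch.nn as nn

class SimpleRNNCell(nn.Module):  # illustrative name, not from the original post
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # nn.Linear objects are layers (modules), not raw weight tensors
        self.W_x = nn.Linear(input_dim, hidden_dim, bias=True)
        self.W_h = nn.Linear(hidden_dim, hidden_dim, bias=True)

    def forward(self, inp, prev_h):
        # Call the layers as functions and use torch.tanh so autograd tracks the ops
        return torch.tanh(self.W_x(inp) + self.W_h(prev_h))

# Example usage (assumed shapes: batch of 4, input_dim=8, hidden_dim=16)
cell = SimpleRNNCell(8, 16)
h = torch.zeros(4, 16)
x = torch.randn(4, 8)
h = cell(x, h)

Calling self.W_x(inp) applies the layer's weight and bias (x @ W.T + b) under the hood, which is what the * in the original snippet was incorrectly trying to do with the layer object itself.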

