
`from_numpy()` leads to Expected vec.is_mps() to be true, but got false

I get the error below when I try to convert my data (a NumPy ndarray) to a tensor with `from_numpy()` while using the mps backend in PyTorch.

I initialize the model as follows:

device = "mps" if torch.has_mps else "cpu"
model = NeuralNetwork().to(device)

and it confirms that it is using the mps backend:

Using mps device
NeuralNetwork(...)

Then I use it as follows:

observations = env.reset()
X = torch.from_numpy(observations)
logits = model(X)

and the model throws this error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [9], in <cell line: 3>()
      1 observations = env.reset()
      2 X = torch.from_numpy(observations)
----> 3 logits = model(X)

File lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

Input In [2], in NeuralNetwork.forward(self, x)
     13 def forward(self, x):
---> 14     logits = self.linear_relu_stack(x)
     15     return logits

File lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File lib/python3.8/site-packages/torch/nn/modules/container.py:139, in Sequential.forward(self, input)
    137 def forward(self, input):
    138     for module in self:
--> 139         input = module(input)
    140     return input

File lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File lib/python3.8/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
    113 def forward(self, input: Tensor) -> Tensor:
--> 114     return F.linear(input, self.weight, self.bias)

RuntimeError: Expected vec.is_mps() to be true, but got false.  (Could this error message be improved?  If so, please report an enhancement request to PyTorch.)

If I change device to cpu instead of mps, it works. How do I use NumPy arrays with the mps backend?

I am running it on an M1 chip, and torch.has_mps is True.

You have to first send your model to the mps device, and then explicitly send your input to the mps device as well, because torch.from_numpy() always produces a CPU tensor. In code:

model.to('mps')
logits = model(X.to('mps'))

Something like this worked for me with a torch nightly build on an M1 Pro, using a model that does not contain ops or data types that mps does not currently support.
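A minimal end-to-end sketch of the pattern, with a small stand-in `nn.Sequential` model and a random array in place of the question's `NeuralNetwork` and `env.reset()` (both are assumptions for illustration). Note that mps only supports float32, so float64 NumPy arrays must be cast first; the device check falls back to cpu so the same code runs anywhere:

```python
import numpy as np
import torch
import torch.nn as nn

# Fall back to cpu when mps is unavailable.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Hypothetical small model standing in for the question's NeuralNetwork.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).to(device)

# Stand-in for env.reset(); cast to float32, since mps does not support float64.
observations = np.random.rand(4).astype(np.float32)

# from_numpy() always returns a CPU tensor; move it to the model's device.
X = torch.from_numpy(observations).to(device)
logits = model(X)
print(logits.shape)  # torch.Size([2])
```

Moving the tensor (rather than creating it on the device) is the idiomatic fix here, because `from_numpy()` shares memory with the NumPy array and NumPy arrays always live in CPU memory.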
