PyTorch: AssertionError ("Torch not compiled with CUDA enabled")
The instructions for installing PyTorch on MacOS say:

```shell
conda install pytorch torchvision -c pytorch
# MacOS Binaries dont support CUDA, install from source if CUDA is needed
```

Why would PyTorch be installed without CUDA enabled? The reason I ask is that I am getting this error:
```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
     78 #predicted = output.data.max(1)[1]
     79
---> 80 output = model(torch.tensor([[1,1]]).float().cuda())
     81 predicted = output.data.max(1)[1]
     82

~/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_init()
    159         raise RuntimeError(
    160             "Cannot re-initialize CUDA in forked subprocess. " + msg)
--> 161     _check_driver()
    162

~/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py in _check_driver()
     73 def _check_driver():
     74     if not hasattr(torch._C, '_cuda_isDriverSufficient'):
---> 75         raise AssertionError("Torch not compiled with CUDA enabled")
     76     if not torch._C._cuda_isDriverSufficient():
     77         if torch._C._cuda_getDriverVersion() == 0:

AssertionError: Torch not compiled with CUDA enabled
```
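As a quick diagnostic (a hypothetical snippet, not from the original post), you can ask the installed build whether it was compiled with CUDA support before calling any `.cuda()` methods:

```python
import torch

# torch.version.cuda is None when the wheel was built without CUDA support,
# e.g. the standard MacOS conda binaries mentioned above.
print(torch.version.cuda)

# torch.cuda.is_available() returns False when either the build lacks CUDA
# or no usable NVIDIA driver/GPU is present; it never raises.
print(torch.cuda.is_available())
```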
The error occurs when trying to run this code:
```python
import torch
import torch.nn as nn
import torch.utils.data as data_utils

x = torch.tensor([[0,0] , [0,1] , [1,0]]).float()
print(x)
y = torch.tensor([0,1,1]).long()
print(y)

my_train = data_utils.TensorDataset(x, y)
my_train_loader = data_utils.DataLoader(my_train, batch_size=2, shuffle=True)

# Device configuration
device = 'cpu'
print(device)

# Hyper-parameters
input_size = 2
hidden_size = 100
num_classes = 2
learning_rate = 0.001

train_dataset = my_train
train_loader = my_train_loader

pred = []
for i in range(0, model_iters):  # model_iters, num_epochs defined elsewhere in the script (not shown)

    # Fully connected neural network with one hidden layer
    class NeuralNet(nn.Module):
        def __init__(self, input_size, hidden_size, num_classes):
            super(NeuralNet, self).__init__()
            self.fc1 = nn.Linear(input_size, hidden_size)
            self.relu = nn.ReLU()
            self.fc2 = nn.Linear(hidden_size, num_classes)

        def forward(self, x):
            out = self.fc1(x)
            out = self.relu(out)
            out = self.fc2(out)
            return out

    model = NeuralNet(input_size, hidden_size, num_classes).to(device)

    # Loss and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    # Train the model
    total_step = len(train_loader)
    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(train_loader):
            # Move tensors to the configured device
            images = images.reshape(-1, 2).to(device)
            labels = labels.to(device)

            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)

            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

    output = model(torch.tensor([[1,1]]).float().cuda())  # this .cuda() call raises the AssertionError
```
To fix this error, do I need to install PyTorch from source, compiled with CUDA?
Summarizing and expanding on the comments:
This PyTorch GitHub issue mentions that very few Macs have Nvidia processors: https://github.com/pytorch/pytorch/issues/30664
If your Mac does have a CUDA-capable GPU, then to use CUDA commands on MacOS you will need to recompile PyTorch from source with the correct command-line options.
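On a CPU-only build, the usual workaround is not to hard-code `.cuda()` at all. A common device-agnostic pattern (a sketch, reusing the shapes from the question's code) selects the device at runtime and moves everything with `.to(device)`, so the same script runs on both CUDA and CPU-only installs:

```python
import torch
import torch.nn as nn

# Fall back to CPU automatically when CUDA is unavailable or not compiled in.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(2, 2).to(device)

# .to(device) is a no-op on a CPU-only build, unlike .cuda(), which raises
# "Torch not compiled with CUDA enabled".
x = torch.tensor([[1., 1.]]).to(device)
output = model(x)
print(output.shape)  # torch.Size([1, 2])
```

Applied to the question's code, replacing `torch.tensor([[1,1]]).float().cuda()` with `torch.tensor([[1,1]]).float().to(device)` avoids the assertion, since `device` is already `'cpu'` there.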