
Process stuck when training on multiple nodes using PyTorch DistributedDataParallel

I am trying to run the script mnist-distributed.py from Distributed data parallel training in Pytorch. I have also pasted the same code here. (I have replaced my actual MASTER_ADDR with a.b.c.d for posting here.)

import os
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torch.distributed as dist

class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-n', '--nodes', default=1, type=int, metavar='N')
    parser.add_argument('-g', '--gpus', default=1, type=int,
                        help='number of gpus per node')
    parser.add_argument('-nr', '--nr', default=0, type=int,
                        help='ranking within the nodes')
    parser.add_argument('--epochs', default=2, type=int, metavar='N',
                        help='number of total epochs to run')
    args = parser.parse_args()
    args.world_size = args.gpus * args.nodes               
    os.environ['MASTER_ADDR'] = 'a.b.c.d'              
    os.environ['MASTER_PORT'] = '8890'                    
    mp.spawn(train, nprocs=args.gpus, args=(args,))       

def train(gpu, args):
    rank = args.nr * args.gpus + gpu                              
    dist.init_process_group(                                   
        backend='nccl',                                         
        init_method='env://',                                   
        world_size=args.world_size,                              
        rank=rank                                               
    )                                                          
    
    torch.manual_seed(0)
    model = ConvNet()
    torch.cuda.set_device(gpu)
    model.cuda(gpu)
    batch_size = 100
    # define loss function (criterion) and optimizer
    criterion = nn.CrossEntropyLoss().cuda(gpu)
    optimizer = torch.optim.SGD(model.parameters(), 1e-4)
    
    # Wrap the model
    model = nn.parallel.DistributedDataParallel(model,
                                                device_ids=[gpu])

    # Data loading code
    train_dataset = torchvision.datasets.MNIST(
        root='./data',
        train=True,
        transform=transforms.ToTensor(),
        download=True
    )                                               
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset,
        num_replicas=args.world_size,
        rank=rank
    )

    train_loader = torch.utils.data.DataLoader(
        dataset=train_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=0,
        pin_memory=True,
        sampler=train_sampler)

    total_step = len(train_loader)
    for epoch in range(args.epochs):
        for i, (images, labels) in enumerate(train_loader):
            images = images.cuda(non_blocking=True)
            labels = labels.cuda(non_blocking=True)
            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)

            # Backward and optimize
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if (i + 1) % 100 == 0 and gpu == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(
                    epoch + 1, 
                    args.epochs, 
                    i + 1, 
                    total_step,
                    loss.item())
                   )

if __name__ == '__main__':
    main()

There are 2 nodes with 2 GPUs each. I run this command from the terminal of the master node:

python mnist-distributed.py -n 2 -g 2 -nr 0

and then this from the terminal of the other node:

python mnist-distributed.py -n 2 -g 2 -nr 1

But then my process gets stuck with no output on either terminal.

Running the same code on a single node using the following command works perfectly fine:

python mnist-distributed.py -n 1 -g 2 -nr 0
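
A minimal sketch like the one below (assuming the same placeholder MASTER_ADDR/MASTER_PORT and the NCCL backend used in the script above, with 2 nodes of 2 GPUs each) can narrow the hang down: if a single all_reduce already blocks, the problem is inter-node rendezvous/NCCL communication rather than the model or data loading code. Setting the environment variable NCCL_DEBUG=INFO before launching also makes NCCL print its setup steps, which usually shows where it stops.

# smoke_test.py -- minimal rendezvous check (a sketch; the address, port and
# node/GPU counts are assumptions, adjust them to match your setup)
import os
import sys
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(gpu, node_rank, gpus_per_node, world_size):
    rank = node_rank * gpus_per_node + gpu
    os.environ['MASTER_ADDR'] = 'a.b.c.d'    # master node IP (placeholder)
    os.environ['MASTER_PORT'] = '8890'
    dist.init_process_group(backend='nccl', init_method='env://',
                            world_size=world_size, rank=rank)
    torch.cuda.set_device(gpu)
    t = torch.ones(1, device='cuda')
    dist.all_reduce(t)                       # blocks here if inter-node NCCL is broken
    print('rank {}: sum = {}'.format(rank, t.item()))  # expect world_size on every rank
    dist.destroy_process_group()

if __name__ == '__main__':
    node_rank = int(sys.argv[1])             # 0 on the master node, 1 on the other
    gpus_per_node, nodes = 2, 2
    mp.spawn(run, nprocs=gpus_per_node,
             args=(node_rank, gpus_per_node, gpus_per_node * nodes))

Run it as python smoke_test.py 0 on the master node and python smoke_test.py 1 on the other node; every rank should print the world size.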

I met a similar problem. The problem is solved by

sudo vi /etc/default/grub

Edit it:

#GRUB_CMDLINE_LINUX=""                    <----- original, commented out
GRUB_CMDLINE_LINUX="iommu=soft"           <----- change to this

Then run:

sudo update-grub

Reboot to see the change.
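
After rebooting, one quick sanity check (a sketch, assuming a standard Linux system where /proc/cmdline reflects the boot parameters) is to confirm the new kernel flag is actually active:

# Prints True if the running kernel was booted with iommu=soft
with open('/proc/cmdline') as f:
    print('iommu=soft' in f.read().split())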

Ref: https://github.com/pytorch/pytorch/issues/1637#issuecomment-338268158
