
Weights & Biases sweep cannot import modules with pytorch lightning

I am training a variational autoencoder with pytorch-lightning. My code works with a Weights & Biases logger, and I am now trying to run a hyperparameter search using a W&B sweep.

The hyperparameter search procedure is based on the approach I followed from this repo.

The runs initialise correctly, but when my training script is run with the first set of hyperparameters, I get the following error:

2020-08-14 14:09:07,109 - wandb.wandb_agent - INFO - About to run command: /usr/bin/env python train_sweep.py --LR=0.02537477586974176
Traceback (most recent call last):
  File "train_sweep.py", line 1, in <module>
    import yaml
ImportError: No module named yaml

yaml is installed and is working correctly. I can train the network by setting the parameters manually, but not with the parameter sweep.
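A quick way to rule out an environment mismatch is to print which interpreter the script is actually launched with, for example at the top of train_sweep.py (a small diagnostic sketch, not part of the training code itself):

import sys

# Show which interpreter and version this script was launched with. If the
# wandb agent resolves `python` to a different interpreter than your shell,
# these will disagree with what `which python` reports.
print(sys.executable)
print(sys.version)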

Here is my sweep script to train the VAE:

import yaml
import numpy as np
import ipdb
import torch
from vae_experiment import VAEXperiment
import torch.backends.cudnn as cudnn
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.callbacks import EarlyStopping
from vae_network import VanillaVAE
import os
import wandb
from utils import get_config, log_to_wandb

# Sweep parameters
hyperparameter_defaults = dict(
    root='data_semantics',
    gpus=1,
    batch_size=2,
    lr=1e-3,
    num_layers=5,
    features_start=64,
    bilinear=False,
    grad_batches=1,
    epochs=20
)

wandb.init(config=hyperparameter_defaults)
config = wandb.config

def main(hparams):

    model = VanillaVAE(hparams['exp_params']['img_size'], **hparams['model_params'])
    model.build_layers()
    experiment = VAEXperiment(model, hparams['exp_params'], hparams['parameters'])

    logger = WandbLogger(
        project='vae',
        name=config['logging_params']['name'],
        version=config['logging_params']['version'],
        save_dir=config['logging_params']['save_dir']
        )

    logger.watch(model.net)

    early_stopping = EarlyStopping(
       monitor='val_loss',
       min_delta=0.00,
       patience=3,
       verbose=False,
       mode='min'
    )

    runner = Trainer(weights_save_path="../../Logs/",
     min_epochs=1,
     logger=logger,
     log_save_interval=10,
     train_percent_check=1.,
     val_percent_check=1.,
     num_sanity_val_steps=5,
     early_stop_callback = early_stopping,
     **config['trainer_params']
     )

    runner.fit(experiment)

if __name__ == '__main__':
    main(config)

Why am I getting this error?

The problem was the structure of my code: I was not running the wandb commands in the correct order. This pytorch-lightning with wandb example shows the correct structure to follow: define the sweep, then pass a training function to wandb.agent, which executes that function once per run, so all of the training imports have to live inside it.

Here is my refactored code:

#!/usr/bin/env python
import wandb
from utils import get_config

#---------------------------------------------------------------------------------------------

def main():

    """
    The training function used in each sweep of the model.
    For every sweep, this function will be executed as if it is a script on its own.
    """

    import wandb
    import yaml
    import numpy as np
    import torch
    from vae_experiment import VAEXperiment
    import torch.backends.cudnn as cudnn
    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import WandbLogger
    from pytorch_lightning.callbacks import EarlyStopping
    from vae_network import VanillaVAE
    import os
    from utils import log_to_wandb, format_config

    path_to_config = 'sweep.yaml'
    config = get_config(path_to_config)

    path_to_defaults = 'defaults.yaml'
    param_defaults = get_config(path_to_defaults)

    wandb.init(config=param_defaults)

    config = format_config(config, wandb.config)
    model = VanillaVAE(config['meta']['img_size'],
                       hidden_dims=config['hidden_dims'],
                       latent_dim=config['latent_dim'])
    model.build_layers()

    experiment = VAEXperiment(model, config)

    early_stopping = EarlyStopping(
       monitor='val_loss',
       min_delta=0.00,
       patience=3,
       verbose=False,
       mode='min'
    )

    runner = Trainer(weights_save_path=config['meta']['save_dir'],
                     min_epochs=1,
                     train_percent_check=1.,
                     val_percent_check=1.,
                     num_sanity_val_steps=5,
                     early_stop_callback=early_stopping,
                     **config['trainer_params'])

    runner.fit(experiment)
    log_to_wandb(config, runner, experiment, path_to_config)

#---------------------------------------------------------------------------------------------

path_to_yaml = 'sweep.yaml'
sweep_config = get_config(path_to_yaml)
sweep_id = wandb.sweep(sweep_config)
wandb.agent(sweep_id, function=main)

#---------------------------------------------------------------------------------------------
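The get_config and format_config helpers come from my utils module and aren't shown above. Minimal sketches of what they do, assuming get_config simply parses a YAML file into a dict and format_config overlays the values wandb chose for this run onto the base config:

import yaml

def get_config(path):
    # Parse a YAML file into a plain Python dict.
    with open(path) as f:
        return yaml.safe_load(f)

def format_config(base_config, wandb_config):
    # Overlay the hyperparameters wandb chose for this run (wandb.config,
    # which behaves like a mapping) onto the base config loaded from disk.
    merged = dict(base_config)
    for key in wandb_config.keys():
        merged[key] = wandb_config[key]
    return merged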

Do you launch Python in your shell by typing python or python3? Your script could be calling Python 2 instead of Python 3.

If this is the case, you can explicitly tell wandb to use Python 3. See this section of the documentation, in particular "Running Sweeps with Python 3".
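For example, the sweep configuration accepts a command section that controls how the agent launches the training script. A minimal sketch, passing the configuration as a dict (the LR range here is made up for illustration; the ${env}/${program}/${args} macros are expanded by wandb at launch time):

import wandb

# Sketch: make the agent launch the training script with python3 instead of
# whatever `python` resolves to, by overriding the default command.
sweep_config = {
    'program': 'train_sweep.py',
    'method': 'random',
    'parameters': {
        'LR': {'min': 0.0001, 'max': 0.1},  # illustrative range
    },
    'command': ['${env}', 'python3', '${program}', '${args}'],
}
sweep_id = wandb.sweep(sweep_config, project='vae')
wandb.agent(sweep_id)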
