scvi.train.PyroTrainingPlan#

class scvi.train.PyroTrainingPlan(pyro_module, loss_fn=None, optim=None, optim_kwargs=None, n_steps_kl_warmup=None, n_epochs_kl_warmup=400, scale_elbo=1.0)[source]#

Bases: LowLevelPyroTrainingPlan

Lightning module task to train Pyro scvi-tools modules.

Parameters:
  • pyro_module (PyroBaseModuleClass) – An instance of PyroBaseModuleClass. This object should have callable model and guide attributes or methods.

  • loss_fn (Optional[ELBO] (default: None)) – A Pyro loss instance; should be a subclass of ELBO. If None, defaults to Trace_ELBO.

  • optim (Optional[PyroOptim] (default: None)) – A Pyro optimizer instance, e.g., Adam. If None, defaults to the pyro.optim.Adam optimizer with a learning rate of 1e-3.

  • optim_kwargs (Optional[dict] (default: None)) – Keyword arguments for the default optimizer, pyro.optim.Adam.

  • n_steps_kl_warmup (Optional[int] (default: None)) – Number of training steps (minibatches) over which to scale the weight on the KL divergence from 0 to 1. Only activated when n_epochs_kl_warmup is set to None (see the schedule sketch after this list).

  • n_epochs_kl_warmup (Optional[int] (default: 400)) – Number of epochs over which to scale the weight on the KL divergence from 0 to 1. Overrides n_steps_kl_warmup when both are not None.

  • scale_elbo (float (default: 1.0)) – Factor by which to scale the ELBO. Potentially useful for avoiding numerical inaccuracy when working with a very large ELBO.
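
The warmup parameters describe a linear ramp on the KL term's weight. A minimal sketch of that schedule as described above (not necessarily the library's exact code):

def kl_weight(current_epoch, global_step, n_epochs_kl_warmup=400, n_steps_kl_warmup=None):
    # epoch-based warmup takes precedence when both are set
    if n_epochs_kl_warmup is not None:
        return min(1.0, current_epoch / n_epochs_kl_warmup)
    if n_steps_kl_warmup is not None:
        return min(1.0, global_step / n_steps_kl_warmup)
    return 1.0  # no warmup configured

A minimal usage sketch, assuming a hypothetical PyroBaseModuleClass subclass MyPyroModule and a prepared DataLoader train_dataloader; the explicit arguments mirror the defaults listed above:

import pyro
from pyro.infer import Trace_ELBO
from scvi.train import PyroTrainingPlan

# MyPyroModule is a hypothetical PyroBaseModuleClass subclass with
# callable `model` and `guide` attributes
module = MyPyroModule(n_input=100, n_latent=10)

plan = PyroTrainingPlan(
    pyro_module=module,
    loss_fn=Trace_ELBO(),                 # what None defaults to
    optim=pyro.optim.Adam({"lr": 1e-3}),  # what None defaults to
    n_epochs_kl_warmup=400,
)

The plan is a LightningModule, so it can be fit with a Lightning Trainer (the import path may be pytorch_lightning on older versions):

import lightning.pytorch as pl

trainer = pl.Trainer(max_epochs=400)
trainer.fit(plan, train_dataloader)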

Attributes table#

training

Methods table#

backward(*args, **kwargs)

Called to perform backward on the loss returned in training_step().

configure_optimizers()

Shim optimizer for PyTorch Lightning.

optimizer_step(*args, **kwargs)

Override this method to adjust the default way the Trainer calls the optimizer.

training_step(batch, batch_idx)

Training step for Pyro training.

Attributes#

PyroTrainingPlan.training: bool#

Methods#

PyroTrainingPlan.backward(*args, **kwargs)[source]#

Called to perform backward on the loss returned in training_step(). Override this hook with your own implementation if you need to.

Parameters:

loss – The loss tensor returned by training_step(). If gradient accumulation is used, the loss here holds the normalized value (scaled by 1 / accumulation steps).

Example:

def backward(self, loss):
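    # default behavior: backpropagate the training loss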
    loss.backward()

PyroTrainingPlan.configure_optimizers()[source]#

Shim optimizer for PyTorch Lightning.

PyTorch Lightning wants to take steps on an optimizer returned by this function in order to increment the global step count. See the PyTorch Lightning documentation on the manual optimization loop.

Here we provide a shim optimizer that we can take steps on at minimal computational cost in order to keep Lightning happy :).
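
A minimal sketch of the shim idea, not the library's actual code: return a cheap torch optimizer over a single dummy parameter so Lightning has something to step, while Pyro's optimizer performs the real parameter updates during the training step.

import torch

def configure_optimizers(self):
    # hypothetical shim: a throwaway parameter gives Lightning a valid
    # optimizer to step at negligible cost; model gradients are handled
    # entirely by Pyro's optimizer
    self._dummy_param = torch.nn.Parameter(torch.zeros(1))
    return torch.optim.Adam([self._dummy_param], lr=0.0)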

PyroTrainingPlan.optimizer_step(*args, **kwargs)[source]#

Override this method to adjust the default way the Trainer calls the optimizer.

By default, Lightning calls step() and zero_grad() as shown in the example. This method (and zero_grad()) won’t be called during the accumulation phase when Trainer(accumulate_grad_batches != 1). Overriding this hook has no benefit with manual optimization.

Parameters:
  • epoch – Current epoch

  • batch_idx – Index of current batch

  • optimizer – A PyTorch optimizer

  • optimizer_closure – The optimizer closure. This closure must be executed as it includes the calls to training_step(), optimizer.zero_grad(), and backward().

Examples:

# DEFAULT
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure):
    optimizer.step(closure=optimizer_closure)

# Learning rate warm-up
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure):
    # update params
    optimizer.step(closure=optimizer_closure)

    # manually warm up lr without a scheduler
    if self.trainer.global_step < 500:
        lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
        for pg in optimizer.param_groups:
            pg["lr"] = lr_scale * self.learning_rate

PyroTrainingPlan.training_step(batch, batch_idx)[source]#

Training step for Pyro training.
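
Conceptually, the step hands the minibatch to Pyro's SVI, which evaluates the ELBO loss and updates the parameters via the Pyro optimizer. A hedged sketch under those assumptions (the attribute and helper names here are illustrative, not necessarily the library's exact implementation):

def training_step(self, batch, batch_idx):
    # hypothetical sketch: unpack the minibatch into model/guide arguments,
    # then let Pyro's SVI take one optimization step and report the loss
    args, kwargs = self.module._get_fn_args_from_batch(batch)
    loss = self.svi.step(*args, **kwargs)
    return {"loss": loss}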