scvi.train.PyroTrainingPlan#

class scvi.train.PyroTrainingPlan(pyro_module, loss_fn=None, optim=None, optim_kwargs=None, n_steps_kl_warmup=None, n_epochs_kl_warmup=400, scale_elbo=1.0)[source]#

Bases: LightningModule

Lightning module task to train Pyro scvi-tools modules.

Parameters:
  • pyro_module (PyroBaseModuleClass) – An instance of PyroBaseModuleClass. This object should have callable model and guide attributes or methods.

  • loss_fn (Optional[ELBO] (default: None)) – A Pyro loss. Should be a subclass of ELBO. If None, defaults to Trace_ELBO.

  • optim (Optional[PyroOptim] (default: None)) – A Pyro optimizer instance, e.g., Adam. If None, defaults to pyro.optim.Adam optimizer with a learning rate of 1e-3.

  • optim_kwargs (Optional[dict] (default: None)) – Keyword arguments for the default optimizer, pyro.optim.Adam.

  • n_steps_kl_warmup (Optional[int] (default: None)) – Number of training steps (minibatches) to scale weight on KL divergences from 0 to 1. Only activated when n_epochs_kl_warmup is set to None.

  • n_epochs_kl_warmup (Optional[int] (default: 400)) – Number of epochs to scale weight on KL divergences from 0 to 1. Overrides n_steps_kl_warmup when both are not None.

  • scale_elbo (float (default: 1.0)) – Scale the ELBO by this factor. Potentially useful for avoiding numerical inaccuracy when working with a very large ELBO.
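
A rough usage sketch, assuming my_module is an existing PyroBaseModuleClass instance (e.g. the module of a Pyro-based scvi-tools model); the loss and optimizer arguments are optional and fall back to Trace_ELBO and pyro.optim.Adam when left as None:

from pyro.infer import Trace_ELBO
from pyro.optim import Adam
from scvi.train import PyroTrainingPlan

# `my_module` is assumed to be an existing PyroBaseModuleClass instance.
training_plan = PyroTrainingPlan(
    pyro_module=my_module,
    loss_fn=Trace_ELBO(),        # default when None
    optim=Adam({"lr": 1e-3}),    # default when None
    n_epochs_kl_warmup=400,
)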

Attributes table#

kl_weight

Scaling factor on KL divergence during training.

n_obs_training

Number of training examples.

Methods table#

backward(*args, **kwargs)

Called to perform backward on the loss returned in training_step().

configure_optimizers()

Shim optimizer for PyTorch Lightning.

forward(*args, **kwargs)

Passthrough to the model's forward method.

optimizer_step(*args, **kwargs)

Override this method to adjust the default way the Trainer calls each optimizer.

training_epoch_end(outputs)

Training epoch end for Pyro training.

training_step(batch, batch_idx)

Training step for Pyro training.

Attributes#

kl_weight

PyroTrainingPlan.kl_weight[source]#

Scaling factor on KL divergence during training.
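
The exact schedule is implementation-defined; a minimal sketch of the kind of linear warm-up described by n_epochs_kl_warmup and n_steps_kl_warmup (a hypothetical helper, not the library's code) could look like this:

def linear_kl_weight(current_epoch, global_step,
                     n_epochs_kl_warmup=None, n_steps_kl_warmup=None):
    """Hypothetical linear warm-up of the KL weight from 0 to 1."""
    if n_epochs_kl_warmup is not None:
        # epoch-based warm-up takes precedence when both are set
        return min(1.0, current_epoch / n_epochs_kl_warmup)
    if n_steps_kl_warmup is not None:
        return min(1.0, global_step / n_steps_kl_warmup)
    return 1.0  # no warm-up configured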

n_obs_training

PyroTrainingPlan.n_obs_training[source]#

Number of training examples.

If not None, updates the n_obs attribute of the Pyro module's model and guide, if they exist.
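
A hypothetical sketch of what the setter's effect amounts to, assuming the module exposes model and guide objects:

# Hypothetical sketch, not the library's exact code.
def set_n_obs_training(pyro_module, n_obs):
    if n_obs is not None:
        for component in (pyro_module.model, pyro_module.guide):
            if hasattr(component, "n_obs"):
                component.n_obs = n_obs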

training

PyroTrainingPlan.training: bool#

Methods#

backward

PyroTrainingPlan.backward(*args, **kwargs)[source]#

Called to perform backward on the loss returned in training_step(). Override this hook with your own implementation if you need to.

Parameters:
  • loss – The loss tensor returned by training_step(). If gradient accumulation is used, the loss here holds the normalized value (scaled by 1 / accumulation steps).

  • optimizer – Current optimizer being used. None if using manual optimization.

  • optimizer_idx – Index of the current optimizer being used. None if using manual optimization.

Example:

def backward(self, loss, optimizer, optimizer_idx):
    loss.backward()

configure_optimizers

PyroTrainingPlan.configure_optimizers()[source]#

Shim optimizer for PyTorch Lightning.

PyTorch Lightning wants to take steps on an optimizer returned by this function in order to increment the global step count. See the PyTorch Lightning manual optimization loop.

Here we provide a shim optimizer that we can take steps on at minimal computational cost in order to keep Lightning happy :).
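
One way such a shim can be realized (a hedged sketch, not necessarily the library's exact implementation) is to register a single dummy parameter and return a standard torch optimizer over it, so Lightning has something to call step() on while Pyro's optimizer performs the real updates:

import torch

# Hypothetical shim: a single throwaway parameter keeps Lightning's
# optimizer loop (and therefore the global step counter) running,
# while Pyro's own optimizer updates the actual model parameters.
class _ShimOptimizerMixin:
    def configure_optimizers(self):
        self._dummy_param = torch.nn.Parameter(torch.zeros(1))
        return torch.optim.Adam([self._dummy_param], lr=1e-3)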

forward

PyroTrainingPlan.forward(*args, **kwargs)[source]#

Passthrough to the model’s forward method.

optimizer_step

PyroTrainingPlan.optimizer_step(*args, **kwargs)[source]#

Override this method to adjust the default way the Trainer calls each optimizer.

By default, Lightning calls step() and zero_grad() as shown in the example once per optimizer. This method (and zero_grad()) won’t be called during the accumulation phase when Trainer(accumulate_grad_batches != 1). Overriding this hook has no benefit with manual optimization.

Parameters:
  • epoch – Current epoch

  • batch_idx – Index of current batch

  • optimizer – A PyTorch optimizer

  • optimizer_idx – If you used multiple optimizers, this indexes into that list.

  • optimizer_closure – The optimizer closure. This closure must be executed as it includes the calls to training_step(), optimizer.zero_grad(), and backward().

  • on_tpu – True if TPU backward is required

  • using_native_amp – True if using native amp

  • using_lbfgs – True if the matching optimizer is torch.optim.LBFGS

Examples:

# DEFAULT
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    optimizer.step(closure=optimizer_closure)

# Alternating schedule for optimizer steps (i.e.: GANs)
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    # update generator opt every step
    if optimizer_idx == 0:
        optimizer.step(closure=optimizer_closure)

    # update discriminator opt every 2 steps
    if optimizer_idx == 1:
        if (batch_idx + 1) % 2 == 0:
            optimizer.step(closure=optimizer_closure)
        else:
            # call the closure by itself to run `training_step` + `backward` without an optimizer step
            optimizer_closure()

    # ...
    # add as many optimizers as you want

Here’s another example showing how to use this for more advanced things such as learning rate warm-up:

# learning rate warm-up
def optimizer_step(
    self,
    epoch,
    batch_idx,
    optimizer,
    optimizer_idx,
    optimizer_closure,
    on_tpu,
    using_native_amp,
    using_lbfgs,
):
    # update params
    optimizer.step(closure=optimizer_closure)

    # manually warm up lr without a scheduler
    if self.trainer.global_step < 500:
        lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
        for pg in optimizer.param_groups:
            pg["lr"] = lr_scale * self.learning_rate

training_epoch_end

PyroTrainingPlan.training_epoch_end(outputs)[source]#

Training epoch end for Pyro training.

training_step

PyroTrainingPlan.training_step(batch, batch_idx)[source]#

Training step for Pyro training.
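
A rough sketch of how such a step can be organized (not the library's exact code): it assumes self.svi is a pyro.infer.SVI built from the module's model, guide, the Pyro optimizer, and the ELBO loss, and that a hypothetical helper unpacks a minibatch into model/guide arguments.

import torch

def training_step(self, batch, batch_idx):
    # `_get_fn_args_from_batch` is a hypothetical helper that unpacks a
    # minibatch into the positional/keyword arguments of model and guide.
    args, kwargs = self.module._get_fn_args_from_batch(batch)
    # One SVI step: evaluates the ELBO and updates Pyro's parameters.
    loss = self.svi.step(*args, **kwargs)
    # Lightning expects a tensor-like loss value for logging.
    self.log("elbo_train", torch.as_tensor(loss), prog_bar=True)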