scvi.train.AdversarialTrainingPlan

class scvi.train.AdversarialTrainingPlan(module, *, optimizer='Adam', optimizer_creator=None, lr=0.001, weight_decay=1e-06, n_steps_kl_warmup=None, n_epochs_kl_warmup=400, reduce_lr_on_plateau=False, lr_factor=0.6, lr_patience=30, lr_threshold=0.0, lr_scheduler_metric='elbo_validation', lr_min=0, adversarial_classifier=False, scale_adversarial_loss='auto', **loss_kwargs)

Bases: TrainingPlan

Train VAEs with an optional adversarial loss to encourage latent space mixing.

Parameters:
  • module (BaseModuleClass) – An instance of BaseModuleClass.

  • optimizer (Tunable_[Literal['Adam', 'AdamW', 'Custom']] (default: 'Adam')) – One of “Adam” (torch.optim.Adam), “AdamW” (torch.optim.AdamW), or “Custom”, which requires a custom optimizer creator callable to be passed via optimizer_creator.

  • optimizer_creator (Optional[Callable[[Iterable[Tensor]], Optimizer]] (default: None)) – A callable taking in parameters and returning an Optimizer. This allows using any PyTorch optimizer with custom hyperparameters; see the construction sketch after this parameter list.

  • lr (Tunable_[float] (default: 0.001)) – Learning rate used for optimization, when optimizer_creator is None.

  • weight_decay (Tunable_[float] (default: 1e-06)) – Weight decay used in optimization, when optimizer_creator is None.

  • eps – Numerical stability constant (eps) for the optimizer, when optimizer_creator is None.

  • n_steps_kl_warmup (Tunable_[int] (default: None)) – Number of training steps (minibatches) to scale weight on KL divergences from 0 to 1. Only activated when n_epochs_kl_warmup is set to None.

  • n_epochs_kl_warmup (Tunable_[int] (default: 400)) – Number of epochs to scale weight on KL divergences from 0 to 1. Overrides n_steps_kl_warmup when both are not None.

  • reduce_lr_on_plateau (Tunable_[bool] (default: False)) – Whether to monitor validation progress and reduce the learning rate when the validation lr_scheduler_metric plateaus.

  • lr_factor (Tunable_[float] (default: 0.6)) – Factor by which to reduce the learning rate.

  • lr_patience (Tunable_[int] (default: 30)) – Number of epochs with no improvement after which the learning rate will be reduced.

  • lr_threshold (Tunable_[float] (default: 0.0)) – Threshold for measuring the new optimum.

  • lr_scheduler_metric (Literal['elbo_validation', 'reconstruction_loss_validation', 'kl_local_validation'] (default: 'elbo_validation')) – Which metric to track for learning rate reduction.

  • lr_min (float (default: 0)) – Minimum learning rate allowed.

  • adversarial_classifier (Union[bool, Classifier] (default: False)) – Whether to use an adversarial classifier in the latent space; a Classifier instance may also be passed and used directly.

  • scale_adversarial_loss (Union[float, Literal['auto']] (default: 'auto')) – Scaling factor on the adversarial components of the loss. By default ('auto'), the adversarial loss is scaled from 1 to 0 following the opposite of the KL warmup schedule.

  • **loss_kwargs – Keyword args to pass to the loss method of the module. kl_weight should not be passed here and is handled automatically.
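Example – a minimal construction sketch based only on the signature documented above. Here module is a placeholder for a concrete BaseModuleClass instance (for example, the module of an initialized scvi-tools model), and the hyperparameter values are illustrative, not recommendations.

```python
from torch.optim import Adam

from scvi.train import AdversarialTrainingPlan

# Default setup: built-in Adam optimizer, adversarial classifier enabled.
plan = AdversarialTrainingPlan(
    module,  # placeholder: any BaseModuleClass instance
    lr=1e-3,
    n_epochs_kl_warmup=400,
    adversarial_classifier=True,
    scale_adversarial_loss="auto",
)

# Custom optimizer: set optimizer="Custom" and supply a callable that
# maps an iterable of parameters to a torch.optim.Optimizer.
plan = AdversarialTrainingPlan(
    module,
    optimizer="Custom",
    optimizer_creator=lambda params: Adam(params, lr=1e-2, eps=1e-8),
    adversarial_classifier=True,
)
```

Note that optimizer="Custom" is only meaningful together with optimizer_creator; as documented above, the custom path requires the creator callable, and lr, weight_decay, and eps are ignored when optimizer_creator is provided.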

Attributes table

training

Methods table

configure_optimizers() – Configure optimizers for adversarial training.

loss_adversarial_classifier(z, batch_index) – Loss for the adversarial classifier.

on_train_epoch_end() – Update the learning rate via scheduler steps.

on_validation_epoch_end() – Update the learning rate via scheduler steps.

training_step(batch, batch_idx) – Training step for adversarial training.

Attributes

AdversarialTrainingPlan.training: bool

Methods

AdversarialTrainingPlan.configure_optimizers()

Configure optimizers for adversarial training.
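In Lightning terms, adversarial training needs two parameter groups. The sketch below illustrates the idea and is not the library's exact code: one optimizer for the VAE module and, when adversarial_classifier is enabled, a second optimizer for the classifier.

```python
import torch

def configure_optimizers_sketch(plan):
    # Optimizer over the VAE module's parameters.
    vae_opt = torch.optim.Adam(
        plan.module.parameters(), lr=plan.lr, weight_decay=plan.weight_decay
    )
    if plan.adversarial_classifier is not False:
        # Separate optimizer for the adversarial classifier, so the two
        # players of the minimax game are updated independently.
        cls_opt = torch.optim.Adam(plan.adversarial_classifier.parameters(), lr=plan.lr)
        return [vae_opt, cls_opt]  # Lightning accepts a list of optimizers
    return vae_opt
```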

AdversarialTrainingPlan.loss_adversarial_classifier(z, batch_index, predict_true_class=True)

Loss for the adversarial classifier.
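The sketch below illustrates one common form of such a loss; it is an assumption for illustration, not the exact scvi-tools implementation. The classifier predicts the batch from the latent z, and flipping predict_true_class inverts the targets so that minimizing the same loss instead trains the encoder to fool the classifier.

```python
import torch
import torch.nn.functional as F

def adversarial_loss_sketch(classifier, z, batch_index, predict_true_class=True):
    # `classifier` maps z of shape (n_cells, n_latent) to batch logits.
    logits = classifier(z)                                  # (n_cells, n_batch)
    n_batch = logits.shape[-1]
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(batch_index.ravel().long(), n_batch).float()
    if predict_true_class:
        # Classifier update: reward predicting the true batch label.
        target = one_hot
    else:
        # Encoder update: reward spreading mass over the *other* batches,
        # i.e. fooling the classifier and mixing batches in z.
        target = (1.0 - one_hot) / (n_batch - 1)
    return -(target * log_probs).sum(dim=-1).mean()
```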

AdversarialTrainingPlan.on_train_epoch_end()

Update the learning rate via scheduler steps.

AdversarialTrainingPlan.on_validation_epoch_end()

Update the learning rate via scheduler steps.
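When reduce_lr_on_plateau=True, these hooks amount to stepping a plateau scheduler with the tracked metric at epoch end. The sketch below maps the constructor arguments onto torch's ReduceLROnPlateau; vae_opt and current_elbo_validation are placeholders for the module optimizer and the latest value of lr_scheduler_metric.

```python
from torch.optim.lr_scheduler import ReduceLROnPlateau

scheduler = ReduceLROnPlateau(
    vae_opt,           # the module optimizer from configure_optimizers
    mode="min",        # lower ELBO / reconstruction loss is better
    factor=0.6,        # lr_factor
    patience=30,       # lr_patience
    threshold=0.0,     # lr_threshold
    min_lr=0,          # lr_min
)

# At validation epoch end, step with the value of lr_scheduler_metric:
scheduler.step(current_elbo_validation)
```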

Return type: None

AdversarialTrainingPlan.training_step(batch, batch_idx)

Training step for adversarial training.
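Conceptually, the step alternates between updating the VAE (penalized when the batch is predictable from z) and updating the classifier on a detached latent sample. The sketch below is illustrative and assumes details not documented on this page (a kl_weight attribute tracking the warmup schedule, Lightning manual optimization via optimizers()/manual_backward(), a three-tuple forward output, and a BATCH_KEY minibatch key); it is not the library's exact code.

```python
BATCH_KEY = "batch"  # placeholder key for the batch covariate in the minibatch

def training_step_sketch(plan, batch, batch_idx):
    # With scale_adversarial_loss="auto", the adversarial weight runs
    # opposite to the KL warmup weight: 1 -> 0 as kl_weight goes 0 -> 1.
    if plan.scale_adversarial_loss == "auto":
        kappa = 1.0 - plan.kl_weight
    else:
        kappa = plan.scale_adversarial_loss
    vae_opt, cls_opt = plan.optimizers()

    # 1) VAE update: ELBO loss plus a term that fools the classifier.
    #    forward is assumed to return (inference_outputs, generative_outputs, losses).
    inference_outputs, _, scvi_loss = plan.forward(batch, loss_kwargs=plan.loss_kwargs)
    z = inference_outputs["z"]
    loss = scvi_loss.loss
    if kappa > 0 and plan.adversarial_classifier is not False:
        loss = loss + kappa * plan.loss_adversarial_classifier(
            z, batch[BATCH_KEY], predict_true_class=False
        )
    vae_opt.zero_grad()
    plan.manual_backward(loss)
    vae_opt.step()

    # 2) Classifier update: predict the true batch from a detached z,
    #    so gradients do not flow back into the encoder.
    if kappa > 0 and plan.adversarial_classifier is not False:
        cls_loss = kappa * plan.loss_adversarial_classifier(
            z.detach(), batch[BATCH_KEY], predict_true_class=True
        )
        cls_opt.zero_grad()
        plan.manual_backward(cls_loss)
        cls_opt.step()
```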