TotalTrainer

class scvi.inference.TotalTrainer(model, dataset, train_size=0.9, test_size=0.1, pro_recons_weight=1.0, n_epochs_kl_warmup=None, n_iter_kl_warmup='auto', discriminator=None, use_adversarial_loss=False, kappa=None, early_stopping_kwargs='auto', **kwargs)[source]

Bases: scvi.inference.inference.UnsupervisedTrainer

Unsupervised training for totalVI using variational inference. A minimal usage sketch follows the parameter list below.

Parameters
  • model (TOTALVI) – A model instance from class TOTALVI

  • dataset – A gene dataset instance, such as CbmcDataset(), with a protein_expression attribute

  • train_size (float) – The train size, a float between 0 and 1 representing the proportion of the dataset to use for training. Default: 0.90.

  • test_size (float) – The test size, a float between 0 and 1 representing the proportion of the dataset to use for testing. Default: 0.10. Note that if train_size and test_size do not add up to 1, the remainder is placed in a validation set.

  • pro_recons_weight (float) – Scaling factor on the reconstruction loss for proteins. Default: 1.0.

  • n_epochs_kl_warmup (Optional[int]) – Number of epochs for annealing the KL terms for z and mu of the ELBO (from 0 to 1). If None, no warmup is performed, unless n_iter_kl_warmup is set.

  • n_iter_kl_warmup (Union[str, int]) – Number of minibatches for annealing the KL terms for z and mu of the ELBO (from 0 to 1). If set to “auto”, the number of iterations is equal to 75% of the number of cells. n_epochs_kl_warmup takes precedence if it is not None. If both are None, then no warmup is performed.

  • discriminator (Optional[Classifier]) – Classifier used for the adversarial training scheme

  • use_adversarial_loss (bool) – Whether to use an adversarial classifier to improve mixing

  • kappa (Optional[float]) – Scaling factor for the adversarial loss. If None, the scale follows the inverse of the KL warmup schedule.

  • early_stopping_kwargs (Union[dict, str, None]) – Keyword args for early stopping. If “auto”, use totalVI defaults. If None, disable early stopping.
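
A minimal usage sketch (a sketch under stated assumptions, not the canonical tutorial: it presumes the classic scvi package layout with CbmcDataset in scvi.dataset and TOTALVI in scvi.models, plus the base Trainer keywords use_cuda and frequency; adjust imports and keywords to your scvi version):

    from scvi.dataset import CbmcDataset
    from scvi.models import TOTALVI
    from scvi.inference import TotalTrainer

    # CITE-seq dataset carrying the required protein_expression attribute.
    dataset = CbmcDataset()

    # totalVI generative model over genes and proteins.
    totalvae = TOTALVI(
        dataset.nb_genes,
        len(dataset.protein_names),
        n_batch=dataset.n_batches,
    )

    # Trainer with the documented defaults spelled out explicitly.
    # Setting use_adversarial_loss=True would additionally train the
    # adversarial classifier described above to improve mixing.
    trainer = TotalTrainer(
        totalvae,
        dataset,
        train_size=0.90,
        test_size=0.10,
        early_stopping_kwargs="auto",
        use_cuda=True,   # base Trainer keyword (assumed available)
        frequency=1,     # record monitored metrics each epoch (assumed available)
    )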

Attributes Summary

default_metrics_to_monitor

Methods Summary

loss(tensors)

loss_discriminator(z, batch_index[, …])

on_training_loop(tensors_list)

train([n_epochs, lr, eps, params])

training_extras_end()

Place to put extra models in eval mode, etc.

training_extras_init([lr_d, eps])

Other necessary models to simultaneously train

Attributes Documentation

default_metrics_to_monitor = ['elbo']

Methods Documentation

loss(tensors)[source]
loss_discriminator(z, batch_index, predict_true_class=True, return_details=True)[source]
on_training_loop(tensors_list)[source]
train(n_epochs=500, lr=0.004, eps=0.01, params=None)[source]
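
A hedged sketch of a typical training call for the trainer constructed above (the trainer.history key below follows the base trainer's "<metric>_<set name>" convention and is an assumption; it may differ between scvi versions):

    # Train with the documented defaults; with early_stopping_kwargs="auto",
    # training may stop before n_epochs is reached.
    trainer.train(n_epochs=500, lr=0.004, eps=0.01)

    # Monitored metrics (see default_metrics_to_monitor) are stored per epoch
    # in trainer.history, e.g. the ELBO on the held-out test set.
    test_elbo = trainer.history["elbo_test_set"]
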
training_extras_end()[source]

Place to put extra models in eval mode, etc.

training_extras_init(lr_d=0.001, eps=0.01)[source]

Other necessary models to simultaneously train

Parameters

  • **extras_kwargs