class scvi.module.SCANVAE(n_input, n_batch=0, n_labels=0, n_hidden=128, n_latent=10, n_layers=1, n_continuous_cov=0, n_cats_per_cov=None, dropout_rate=0.1, dispersion='gene', log_variational=True, gene_likelihood='zinb', y_prior=None, labels_groups=None, use_labels_groups=False, classifier_parameters={}, use_batch_norm='both', use_layer_norm='none', **vae_kwargs)[source]#

Bases: scvi.module._vae.VAE

Single-cell annotation using variational inference.

This is an implementation of the scANVI model described in [Xu21], inspired by the M1 + M2 semi-supervised model of Kingma et al. (2014).

n_input : int

Number of input genes

n_batch : int (default: 0)

Number of batches

n_labels : int (default: 0)

Number of labels

n_hidden : int (default: 128)

Number of nodes per hidden layer

n_latent : int (default: 10)

Dimensionality of the latent space

n_layers : int (default: 1)

Number of hidden layers used for encoder and decoder NNs

n_continuous_cov : int (default: 0)

Number of continuous covariates

n_cats_per_cov : Iterable[int] | None (default: None)

Number of categories for each extra categorical covariate

dropout_rate : float (default: 0.1)

Dropout rate for neural networks

dispersion : str (default: 'gene')

One of the following

  • 'gene' - dispersion parameter of NB is constant per gene across cells

  • 'gene-batch' - dispersion can differ between different batches

  • 'gene-label' - dispersion can differ between different labels

  • 'gene-cell' - dispersion can differ for every gene in every cell
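The four settings can be read as increasingly fine-grained sharing of the dispersion parameter. As a rough sketch (illustrative only, not scvi-tools internals; the counts `n_genes`, `n_batches`, and `n_labels` below are hypothetical), each option implies a different shape for the learned parameter:

```python
# Illustrative shapes implied by each `dispersion` setting.
def dispersion_shape(dispersion, n_genes, n_batches, n_labels):
    shapes = {
        "gene": (n_genes,),                 # one value per gene, shared by all cells
        "gene-batch": (n_genes, n_batches), # per gene, varying across batches
        "gene-label": (n_genes, n_labels),  # per gene, varying across labels
        "gene-cell": None,                  # predicted per cell by the decoder
    }
    return shapes[dispersion]

print(dispersion_shape("gene-batch", n_genes=2000, n_batches=4, n_labels=10))
# → (2000, 4)
```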

log_variational : bool (default: True)

Log(data+1) prior to encoding for numerical stability. Not normalization.
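The motivation is easy to see with a toy example: raw counts span several orders of magnitude and contain many zeros, and `log1p` compresses the range while keeping zeros finite (unlike a plain `log`):

```python
import math

# Raw scRNA-seq-like counts: many zeros, a heavy right tail.
counts = [0, 1, 10, 10_000]
transformed = [math.log1p(x) for x in counts]
print(transformed[0])  # → 0.0 (zeros stay finite, unlike log(0))
```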

gene_likelihood : str (default: 'zinb')

One of

  • 'nb' - Negative binomial distribution

  • 'zinb' - Zero-inflated negative binomial distribution
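The practical difference shows up at zero: ZINB mixes a point mass at zero (weight `pi`) with an NB component, so it assigns more probability to zero counts. A hedged sketch using the standard mean/inverse-dispersion parameterization (the names `mu`, `theta`, `pi` here are illustrative, not scvi-tools API):

```python
def nb_zero_prob(mu, theta):
    # P(X = 0) under NB with mean mu and inverse-dispersion theta.
    return (theta / (theta + mu)) ** theta

def zinb_zero_prob(mu, theta, pi):
    # ZINB adds a point mass at zero with weight pi.
    return pi + (1 - pi) * nb_zero_prob(mu, theta)

print(round(nb_zero_prob(mu=5.0, theta=2.0), 4))            # → 0.0816
print(round(zinb_zero_prob(mu=5.0, theta=2.0, pi=0.3), 4))  # → 0.3571
```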

y_prior (default: None)

If None, initialized to uniform probability over cell types

labels_groups : Sequence[int] | None (default: None)

Label group designations

use_labels_groups : bool (default: False)

Whether to use the label groups

use_batch_norm : Literal['encoder', 'decoder', 'none', 'both'] (default: 'both')

Whether to use batch norm in layers

use_layer_norm : Literal['encoder', 'decoder', 'none', 'both'] (default: 'none')

Whether to use layer norm in layers
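A common way such a four-valued switch is resolved into per-network behavior is to derive one boolean per network. The sketch below mirrors the documented options; the function name and flags are assumptions for illustration, not scvi-tools internals:

```python
# Translate 'encoder'/'decoder'/'none'/'both' into per-network flags.
def norm_flags(option):
    if option not in ("encoder", "decoder", "none", "both"):
        raise ValueError(f"unknown option: {option}")
    use_in_encoder = option in ("encoder", "both")
    use_in_decoder = option in ("decoder", "both")
    return use_in_encoder, use_in_decoder

print(norm_flags("both"))     # → (True, True)
print(norm_flags("decoder"))  # → (False, True)
```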


**vae_kwargs

Keyword args for VAE

Attributes table#

Methods table#


classify(x[, batch_index])

loss(tensors, inference_outputs, ...[, ...])

Compute the loss for a minibatch of data.




SCANVAE.T_destination#

alias of TypeVar(‘T_destination’, bound=Mapping[str, torch.Tensor])




SCANVAE.dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary whose keys follow the naming convention of the state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and make appropriate changes if the state dict is from before the change.
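The migration pattern described above can be sketched in plain Python (simplified and assumed for illustration, not torch internals; the key names `running_var`/`old_var` are hypothetical):

```python
# A loader compares the stored version against the current one and
# migrates old checkpoints, e.g. when a buffer was renamed.
def load_with_migration(state_dict, metadata, current_version=2):
    version = metadata.get("version", 1)
    if version < current_version:
        if "running_var" not in state_dict and "old_var" in state_dict:
            state_dict["running_var"] = state_dict.pop("old_var")
    return state_dict

sd = load_with_migration({"old_var": [1.0]}, {"version": 1})
print(sorted(sd))  # → ['running_var']
```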

SCANVAE.training: bool#





SCANVAE.classify(x, batch_index=None)[source]#


SCANVAE.loss(tensors, inference_outputs, generative_ouputs, feed_labels=False, kl_weight=1, labelled_tensors=None, classification_ratio=None)[source]#

Compute the loss for a minibatch of data.

This function uses the outputs of the inference and generative functions to compute a loss. This may optionally include other penalty terms, which should be computed here.

This function should return an object of type LossRecorder.
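Schematically, the signature suggests a loss that combines a reconstruction term, a weighted KL term, and, for labelled minibatches, a weighted classification term. A hedged sketch of how such terms might combine (the function and weights below are illustrative; the actual scvi-tools computation differs in detail):

```python
# Combine ELBO-style terms with the weights from the signature above.
def total_loss(reconstruction_loss, kl_divergence, kl_weight=1.0,
               classification_loss=None, classification_ratio=None):
    loss = reconstruction_loss + kl_weight * kl_divergence
    if classification_loss is not None and classification_ratio is not None:
        # Labelled tensors contribute a weighted classifier penalty.
        loss += classification_ratio * classification_loss
    return loss

print(total_loss(100.0, 10.0, kl_weight=0.5))  # → 105.0
print(total_loss(100.0, 10.0, 1.0,
                 classification_loss=2.0, classification_ratio=50.0))  # → 210.0
```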