class scvi.module.SCANVAE(n_input, n_batch=0, n_labels=0, n_hidden=128, n_latent=10, n_layers=1, n_continuous_cov=0, n_cats_per_cov=None, dropout_rate=0.1, dispersion='gene', log_variational=True, gene_likelihood='zinb', y_prior=None, labels_groups=None, use_labels_groups=False, classifier_parameters={}, use_batch_norm='both', use_layer_norm='none', **kwargs)[source]

Bases: scvi.module._vae.VAE

Single-cell annotation using variational inference.

This is an implementation of the scANVI model described in [Xu21], inspired by the semi-supervised M1 + M2 model.

n_input : int

Number of input genes

n_batch : int (default: 0)

Number of batches

n_labels : int (default: 0)

Number of labels

n_hidden : int (default: 128)

Number of nodes per hidden layer

n_latent : int (default: 10)

Dimensionality of the latent space

n_layers : int (default: 1)

Number of hidden layers used for encoder and decoder NNs

n_continuous_cov : int (default: 0)

Number of continuous covariates

n_cats_per_cov : Iterable[int] | None (default: None)

Number of categories for each extra categorical covariate

dropout_rate : float (default: 0.1)

Dropout rate for neural networks

dispersion : str (default: 'gene')

One of the following

  • 'gene' - dispersion parameter of NB is constant per gene across cells

  • 'gene-batch' - dispersion can differ between different batches

  • 'gene-label' - dispersion can differ between different labels

  • 'gene-cell' - dispersion can differ for every gene in every cell
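Each mode corresponds to a dispersion parameter of a different shape. The sketch below illustrates the shapes these modes imply; the sizes and dictionary are illustrative, not scvi-tools internals:

```python
import numpy as np

# Illustrative sizes, not taken from the library
n_genes, n_batches, n_label_types, n_cells = 200, 4, 6, 512

# Approximate parameter shapes implied by each dispersion mode
dispersion_shapes = {
    "gene": (n_genes,),                      # one value per gene
    "gene-batch": (n_genes, n_batches),      # varies across batches
    "gene-label": (n_genes, n_label_types),  # varies across labels
    "gene-cell": (n_cells, n_genes),         # a value per gene per cell
}

params = {name: np.zeros(shape) for name, shape in dispersion_shapes.items()}
```

The per-gene modes learn a small number of shared parameters, while 'gene-cell' lets the dispersion vary with every observation.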

log_variational : bool (default: True)

Log(data+1) prior to encoding for numerical stability. Not normalization.
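With log_variational=True the encoder sees log(1 + x) rather than raw counts; this is only a variance-stabilizing transform of the encoder input, not library-size normalization. A minimal illustration:

```python
import numpy as np

counts = np.array([0.0, 1.0, 10.0, 10_000.0])
encoded_input = np.log1p(counts)  # log(data + 1), numerically stable at zero
```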

gene_likelihood : str (default: 'zinb')

One of

  • 'nb' - Negative binomial distribution

  • 'zinb' - Zero-inflated negative binomial distribution
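The difference between the two likelihoods is the extra zero-inflation component. A sketch using the Gamma-Poisson representation of the negative binomial (the values of mu, theta, and pi here are illustrative, not library defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, theta, pi = 5.0, 2.0, 0.3  # NB mean, NB inverse dispersion, dropout prob

# Negative binomial draws via the Gamma-Poisson mixture
nb = rng.poisson(rng.gamma(shape=theta, scale=mu / theta, size=10_000))

# Zero-inflated NB: with probability pi, the count is forced to zero
zinb = np.where(rng.random(10_000) < pi, 0, nb)
```

For the same (mu, theta), ZINB produces strictly more zeros than NB, which is why it is often preferred for sparse scRNA-seq counts.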


y_prior : Tensor | None (default: None)

If None, initialized to uniform probability over cell types

labels_groups : Sequence[int] | None (default: None)

Label group designations

use_labels_groups : bool (default: False)

Whether to use the label groups
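labels_groups assigns each label index to a coarser group and takes effect when use_labels_groups=True. A hypothetical encoding (the labels and grouping below are invented for illustration):

```python
# Hypothetical mapping: 5 cell-type labels collapsed into 2 broader groups;
# label i belongs to group labels_groups[i]
labels_groups = [0, 0, 1, 1, 1]
n_groups = len(set(labels_groups))
```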

use_batch_norm : Literal['encoder', 'decoder', 'none', 'both'] (default: 'both')

Whether to use batch norm in layers

use_layer_norm : Literal['encoder', 'decoder', 'none', 'both'] (default: 'none')

Whether to use layer norm in layers


**kwargs

Keyword args for VAE
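Putting the parameters together, a construction sketch (the sizes are illustrative, and the import is guarded because it assumes scvi-tools is installed):

```python
# Illustrative parameter choices, not recommended defaults
module_kwargs = dict(
    n_input=2000,            # number of genes
    n_batch=3,
    n_labels=8,
    n_latent=10,
    dispersion="gene",
    gene_likelihood="zinb",
    use_batch_norm="both",
    use_layer_norm="none",
)

try:
    from scvi.module import SCANVAE
    module = SCANVAE(**module_kwargs)
except ImportError:
    module = None  # scvi-tools not installed in this environment
```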



classify(x[, batch_index])

loss(tensors, inference_outputs, …[, …])

Compute the loss for a minibatch of data.