scvi.nn.EncoderTOTALVI#

class scvi.nn.EncoderTOTALVI(n_input, n_output, n_cat_list=None, n_layers=2, n_hidden=256, dropout_rate=0.1, distribution='ln', use_batch_norm=True, use_layer_norm=False)[source]#

Bases: Module

Encodes data of n_input dimensions into a latent space of n_output dimensions.

Uses a fully-connected neural network with n_layers hidden layers of n_hidden nodes each.

Parameters:
  • n_input (int) – The dimensionality of the input (data space)

  • n_output (int) – The dimensionality of the output (latent space)

  • n_cat_list (Iterable[int] (default: None)) – A list containing the number of categories for each categorical covariate of interest. Each covariate will be included in the network via one-hot encoding

  • n_layers (int (default: 2)) – The number of fully-connected hidden layers

  • n_hidden (int (default: 256)) – The number of nodes per hidden layer

  • dropout_rate (float (default: 0.1)) – Dropout rate to apply to each of the hidden layers

  • distribution (str (default: 'ln')) –

    Distribution of the latent space, one of

    • 'normal' - Normal distribution

    • 'ln' - Logistic normal

  • use_batch_norm (bool (default: True)) – Whether to use batch norm in layers

  • use_layer_norm (bool (default: False)) – Whether to use layer norm
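
A minimal construction sketch follows; the sizes, the single covariate, and the dummy inputs are illustrative assumptions rather than values prescribed by the API::

    import torch

    from scvi.nn import EncoderTOTALVI

    # Illustrative input size: e.g. 4,000 genes + 14 proteins concatenated per cell.
    encoder = EncoderTOTALVI(
        n_input=4014,
        n_output=20,        # dimensionality of the latent space
        n_cat_list=[2],     # one categorical covariate (e.g. batch) with 2 categories
        n_layers=2,
        n_hidden=256,
        distribution="ln",  # logistic-normal latent space (the default)
    )

    x = torch.rand(128, 4014)          # minibatch of 128 cells (dummy data)
    batch_index = torch.zeros(128, 1)  # category membership for the single covariate

    outputs = encoder(x, batch_index)  # see EncoderTOTALVI.forward below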

Attributes table#

training

Methods table#

forward(data, *cat_list)

The forward computation for a single sample.

reparameterize_transformation(mu, var)

Reparameterization trick to sample from a normal distribution.

Attributes#

EncoderTOTALVI.training: bool#

Methods#

EncoderTOTALVI.forward(data, *cat_list)[source]#

The forward computation for a single sample.

  1. Encodes the data into latent space using the encoder network

  2. Generates a mean \( q_m \) and variance \( q_v \)

  3. Samples a new value from an i.i.d. latent distribution

The dictionary latent contains the samples of the latent variables, while untran_latent contains the untransformed versions of these latent variables. For example, the library size is log normally distributed, so untran_latent["l"] gives the normal sample that was later exponentiated to become latent["l"]. The logistic normal distribution is equivalent to applying softmax to a normal sample.

Parameters:
  • data (Tensor) – tensor with shape (n_input,)

  • cat_list (int) – list of category membership(s) for this sample

Returns:

6-tuple. The first four elements are torch.Tensor; the last two are dicts of torch.Tensor. Together they hold the posterior means, variances, and samples, with tensors of shape (n_latent,).
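
A short sketch of unpacking the return value and checking the transformations described above, assuming the encoder and inputs from the construction example, the dict key "z" for the latent variable, and a softmax over the last dimension; the exact tuple layout may differ between scvi-tools versions::

    # Unpack per the Returns description: posterior means/variances, then the two dicts.
    qz_m, qz_v, ql_m, ql_v, latent, untran_latent = encoder(x, batch_index)

    # With distribution="ln", the latent variable is the softmax of its untransformed
    # normal sample, and the library size is the exponential of its untransformed sample.
    assert torch.allclose(latent["z"], torch.softmax(untran_latent["z"], dim=-1))
    assert torch.allclose(latent["l"], untran_latent["l"].exp())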

EncoderTOTALVI.reparameterize_transformation(mu, var)[source]#

Reparameterization trick to sample from a normal distribution.
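
A conceptual sketch of the trick for the 'ln' case, with the softmax transformation assumed; the module's actual internals may differ::

    import torch
    from torch.distributions import Normal

    def reparameterize_transformation(mu, var):
        # rsample() draws eps ~ N(0, 1) and returns mu + sqrt(var) * eps, so the
        # sample stays differentiable with respect to mu and var.
        untran_z = Normal(mu, var.sqrt()).rsample()
        # For distribution="ln", the sample is mapped to the simplex via softmax.
        z = torch.softmax(untran_z, dim=-1)
        return z, untran_z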