scvi.module.AutoZIVAE#

class scvi.module.AutoZIVAE(n_input, alpha_prior=0.5, beta_prior=0.5, minimal_dropout=0.01, zero_inflation='gene', **kwargs)[source]#

Bases: VAE

Implementation of the AutoZI model [Clivio et al., 2019].

Parameters:
  • n_input (int) – Number of input genes

  • alpha_prior (Tunable_[float] (default: 0.5)) – Float denoting the alpha parameter of the prior Beta distribution of the zero-inflation Bernoulli parameter. Must lie strictly between 0 and 1. When set to None, it is set to 1 - beta_prior if beta_prior is not None; otherwise the prior Beta distribution is learned in an empirical Bayes fashion.

  • beta_prior (Tunable_[float] (default: 0.5)) – Float denoting the beta parameter of the prior Beta distribution of the zero-inflation Bernoulli parameter. Must lie strictly between 0 and 1. When set to None, it is set to 1 - alpha_prior if alpha_prior is not None; otherwise the prior Beta distribution is learned in an empirical Bayes fashion.

  • minimal_dropout (Tunable_[float] (default: 0.01)) – Float denoting the lower bound of the cell-gene ZI rate in the ZINB component. Must be non-negative. It can be set to 0, but this is not recommended, as it may make the mixture problem ill-defined.

  • zero_inflation (One of the following) –

    • 'gene' - zero-inflation Bernoulli parameter of AutoZI is constant per gene across cells

    • 'gene-batch' - zero-inflation Bernoulli parameter can differ between different batches

    • 'gene-label' - zero-inflation Bernoulli parameter can differ between different labels

    • 'gene-cell' - zero-inflation Bernoulli parameter can differ for every gene in every cell

See the VAE docstring (scvi/models/vae.py) for additional parameters. reconstruction_loss should not be specified.

Examples

>>> gene_dataset = CortexDataset()
>>> autozivae = AutoZIVAE(gene_dataset.nb_genes, alpha_prior=0.5, beta_prior=0.5, minimal_dropout=0.01)
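
As a hedged follow-up to the example above (not part of the original snippet): the Beta posterior over the zero-inflation Bernoulli parameter exists as soon as the module is built, so a quick smoke test is to draw from it.

>>> probs = autozivae.sample_bernoulli_params()  # one draw of the per-gene zero-inflation probabilities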

Attributes table#

training

Methods table#

compute_global_kl_divergence()

Compute global KL divergence.

generative(z, library[, batch_index, y, ...])

Run the generative model.

get_alphas_betas([as_numpy])

Get the parameters of the Beta prior and posterior distributions of the zero-inflation Bernoulli parameter.

get_reconstruction_loss(x, px_rate, px_r, ...)

Compute the reconstruction loss.

loss(tensors, inference_outputs, ...[, ...])

Compute the loss.

rescale_dropout(px_dropout[, eps_log])

Rescale dropout rate.

reshape_bernoulli(bernoulli_params[, ...])

Reshape Bernoulli parameters to match the input tensor.

sample_bernoulli_params([batch_index, y, ...])

Sample Bernoulli parameters from the posterior distribution.

sample_from_beta_distribution(alpha, beta[, ...])

Sample from a beta distribution.

Attributes#

AutoZIVAE.training: bool#

Methods#

AutoZIVAE.compute_global_kl_divergence()[source]#

Compute global KL divergence.

Return type:

Tensor
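
As an illustration only (not the method's actual code), the global term can be understood as the KL divergence between the approximate Beta posterior and the Beta prior over the per-gene zero-inflation Bernoulli parameter, summed over genes. A minimal sketch with illustrative values:

>>> import torch
>>> from torch.distributions import Beta, kl_divergence
>>> alpha_post = torch.full((100,), 0.6)   # illustrative posterior parameters, one per gene
>>> beta_post = torch.full((100,), 0.4)
>>> alpha_prior = torch.full((100,), 0.5)  # matches the default alpha_prior / beta_prior
>>> beta_prior = torch.full((100,), 0.5)
>>> global_kl = kl_divergence(Beta(alpha_post, beta_post), Beta(alpha_prior, beta_prior)).sum()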

AutoZIVAE.generative(z, library, batch_index=None, y=None, size_factor=None, cont_covs=None, cat_covs=None, n_samples=1, eps_log=1e-08)[source]#

Run the generative model.

Return type:

dict[str, Tensor]
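
A hedged usage sketch, assuming the default 'gene' mode with no batch or label covariates and a log-scale library value; the exact output keys vary across scvi-tools versions:

>>> import torch
>>> from scvi.module import AutoZIVAE
>>> module = AutoZIVAE(n_input=100)
>>> z = torch.randn(8, module.n_latent)   # latent samples for 8 cells
>>> library = torch.full((8, 1), 4.0)     # log library size (illustrative value)
>>> outputs = module.generative(z, library)  # dict of generative outputs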

AutoZIVAE.get_alphas_betas(as_numpy=True)[source]#

Get the parameters of the Beta prior and posterior distributions of the zero-inflation Bernoulli parameter.

Return type:

dict[str, Union[Tensor, ndarray]]
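
A short usage sketch using only the documented signature; the key names of the returned dict are version-dependent:

>>> from scvi.module import AutoZIVAE
>>> module = AutoZIVAE(n_input=100)
>>> params = module.get_alphas_betas(as_numpy=True)    # dict of numpy arrays
>>> tensors = module.get_alphas_betas(as_numpy=False)  # same values as torch tensors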

AutoZIVAE.get_reconstruction_loss(x, px_rate, px_r, px_dropout, bernoulli_params, eps_log=1e-08, **kwargs)[source]#

Compute the reconstruction loss.

Return type:

Tensor
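
A standalone sketch of the mixture reconstruction term AutoZI is built around: each gene's likelihood interpolates between a ZINB and an NB component according to the sampled Bernoulli parameter. This is an illustration with made-up tensors, not the method's actual code:

>>> import torch
>>> from scvi.distributions import NegativeBinomial, ZeroInflatedNegativeBinomial
>>> x = torch.randint(0, 20, (8, 100)).float()  # observed counts (8 cells, 100 genes)
>>> px_rate = torch.rand(8, 100) * 10 + 1e-3    # NB mean
>>> px_r = torch.rand(100) * 5 + 1e-3           # NB inverse dispersion
>>> px_dropout = torch.randn(8, 100)            # zero-inflation logits
>>> bernoulli_params = torch.rand(100)          # sampled P(gene is zero-inflated)
>>> eps_log = 1e-8
>>> ll_zinb = ZeroInflatedNegativeBinomial(mu=px_rate, theta=px_r, zi_logits=px_dropout).log_prob(x)
>>> ll_nb = NegativeBinomial(mu=px_rate, theta=px_r).log_prob(x)
>>> ll = torch.logaddexp(torch.log(bernoulli_params + eps_log) + ll_zinb,
...                      torch.log(1.0 - bernoulli_params + eps_log) + ll_nb)
>>> reconst_loss = -ll.sum(dim=-1)              # negative log-likelihood per cell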

AutoZIVAE.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0, n_obs=1.0)[source]#

Compute the loss.

Return type:

tuple[Tensor, Tensor, Tensor]
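
A heavily hedged sketch of one plausible assembly of the terms: per-cell reconstruction and KL contributions scaled by n_obs, plus the global KL computed once per dataset. The exact weighting is an assumption, not the documented formula:

>>> import torch
>>> reconst_loss = torch.rand(8)   # per-cell reconstruction NLL (illustrative)
>>> kl_local = torch.rand(8)       # per-cell KL for the latent variables
>>> global_kl = torch.tensor(3.2)  # dataset-level KL, computed once
>>> kl_weight, n_obs = 1.0, 1000.0
>>> loss = n_obs * torch.mean(reconst_loss + kl_weight * kl_local) + global_kl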

AutoZIVAE.rescale_dropout(px_dropout, eps_log=1e-08)[source]#

Rescale dropout rate.

Return type:

Tensor
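
A hedged sketch of one way such a rescaling can be done, assuming px_dropout holds zero-inflation logits and the goal is to bound the implied probability below by minimal_dropout (assumed form, not the documented implementation):

>>> import torch
>>> def rescale_dropout_sketch(px_dropout, minimal_dropout=0.01, eps_log=1e-08):
...     prob = torch.sigmoid(px_dropout)  # logits -> probabilities
...     prob = minimal_dropout + (1.0 - minimal_dropout) * prob  # enforce the lower bound
...     return torch.log(prob / (1.0 - prob + eps_log))  # back to logit space
>>> rescaled = rescale_dropout_sketch(torch.randn(4, 100))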

AutoZIVAE.reshape_bernoulli(bernoulli_params, batch_index=None, y=None)[source]#

Reshape Bernoulli parameters to match the input tensor.

Return type:

Tensor
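
A hedged sketch of the 'gene-batch' case, assuming the parameters are stored with one column per batch and each cell selects the column of its batch; shapes are hypothetical, not the method's actual code:

>>> import torch
>>> import torch.nn.functional as F
>>> bernoulli_params = torch.rand(100, 3)      # (n_genes, n_batch)
>>> batch_index = torch.randint(0, 3, (8, 1))  # one batch id per cell
>>> one_hot = F.one_hot(batch_index.squeeze(-1), num_classes=3).float()  # (n_cells, n_batch)
>>> per_cell = one_hot @ bernoulli_params.T    # (n_cells, n_genes), matches the input tensor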

AutoZIVAE.sample_bernoulli_params(batch_index=None, y=None, n_samples=1)[source]#

Sample Bernoulli parameters from the posterior distribution.

Return type:

Tensor
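
A hedged sketch of a typical downstream use: Monte Carlo draws of the zero-inflation Bernoulli parameter, thresholded at 0.5, estimate per gene how plausible zero inflation is (cf. the decision rule in [Clivio et al., 2019]); the loop and threshold here are illustrative only:

>>> import torch
>>> from scvi.module import AutoZIVAE
>>> module = AutoZIVAE(n_input=100)
>>> draws = torch.stack([module.sample_bernoulli_params() for _ in range(100)])
>>> q_below_half = (draws < 0.5).float().mean(dim=0)  # per-gene Monte Carlo estimate of q(delta_g < 0.5)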

AutoZIVAE.sample_from_beta_distribution(alpha, beta, eps_gamma=1e-30, eps_sample=1e-07)[source]#

Sample from a beta distribution.

Return type:

Tensor
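
A hedged sketch of a reparameterized Beta sampler of the kind this method suggests, built from two Gamma draws; the roles of eps_gamma and eps_sample are guesses based on the signature, not the documented implementation:

>>> import torch
>>> def sample_beta_sketch(alpha, beta, eps_gamma=1e-30, eps_sample=1e-07):
...     g1 = torch.distributions.Gamma(alpha, torch.ones_like(alpha)).rsample() + eps_gamma
...     g2 = torch.distributions.Gamma(beta, torch.ones_like(beta)).rsample() + eps_gamma
...     sample = g1 / (g1 + g2)  # Beta(alpha, beta) via the two-Gamma construction
...     return torch.clamp(sample, min=eps_sample, max=1.0 - eps_sample)
>>> s = sample_beta_sketch(torch.full((10,), 0.5), torch.full((10,), 0.5))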