scvi.module.AutoZIVAE#

class scvi.module.AutoZIVAE(n_input, alpha_prior=0.5, beta_prior=0.5, minimal_dropout=0.01, zero_inflation='gene', **kwargs)[source]#

Bases: VAE

Implementation of the AutoZI model [Clivio19].

Parameters:
n_input : int

Number of input genes

alpha_prior : float | None (default: 0.5)

Alpha parameter of the prior Beta distribution of the zero-inflation Bernoulli parameter. Should be strictly between 0 and 1. When set to ``None``, it is set to 1 - beta_prior if beta_prior is not ``None``; otherwise, the prior Beta distribution is learned in an Empirical Bayes fashion.

beta_prior : float | None (default: 0.5)

Beta parameter of the prior Beta distribution of the zero-inflation Bernoulli parameter. Should be strictly between 0 and 1. When set to ``None``, it is set to 1 - alpha_prior if alpha_prior is not ``None``; otherwise, the prior Beta distribution is learned in an Empirical Bayes fashion.
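The interaction between alpha_prior and beta_prior can be sketched as follows. This is an illustrative helper, not part of the scvi-tools API; the resolution logic is inferred from the parameter descriptions above.

```python
def resolve_beta_prior(alpha_prior, beta_prior):
    """Sketch of how the Beta prior hyperparameters are resolved.

    If exactly one of the two is None, it is filled in as 1 minus the
    other; if both are None, the prior is learned (Empirical Bayes).
    Returns (alpha, beta, learn_prior).
    """
    if alpha_prior is None and beta_prior is None:
        return None, None, True  # learn the prior from the data
    if alpha_prior is None:
        alpha_prior = 1.0 - beta_prior
    elif beta_prior is None:
        beta_prior = 1.0 - alpha_prior
    return alpha_prior, beta_prior, False
```

With the defaults, this yields a fixed symmetric Beta(0.5, 0.5) prior.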

minimal_dropout : float (default: 0.01)

Lower bound of the cell-gene ZI rate in the ZINB component. Must be non-negative. Can be set to 0, but this is not recommended, as it may make the mixture problem ill-defined.

zero_inflation : One of the following

  • 'gene' - zero-inflation Bernoulli parameter of AutoZI is constant per gene across cells

  • 'gene-batch' - zero-inflation Bernoulli parameter can differ between different batches

  • 'gene-label' - zero-inflation Bernoulli parameter can differ between different labels

  • 'gene-cell' - zero-inflation Bernoulli parameter can differ for every gene in every cell
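The four modes differ only in how the zero-inflation Bernoulli parameter is shared. A plausible sketch of the resulting parameter shapes (names and layout are illustrative, not the library's internal representation):

```python
def bernoulli_param_shape(zero_inflation, n_genes, n_batch=1, n_labels=1, n_cells=1):
    """Plausible shape of the zero-inflation Bernoulli parameter per mode."""
    shapes = {
        "gene": (n_genes,),                 # one parameter per gene, shared across cells
        "gene-batch": (n_genes, n_batch),   # varies across batches
        "gene-label": (n_genes, n_labels),  # varies across labels
        "gene-cell": (n_genes, n_cells),    # varies for every gene in every cell
    }
    return shapes[zero_inflation]
```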

See VAE docstring (scvi/models/vae.py) for more parameters. reconstruction_loss should not be specified.

Examples

>>> gene_dataset = CortexDataset()
>>> autozivae = AutoZIVAE(gene_dataset.nb_genes, alpha_prior=0.5, beta_prior=0.5, minimal_dropout=0.01)

Attributes table#

Methods table#

compute_global_kl_divergence()

rtype:

Tensor

generative(z, library[, batch_index, y, ...])

Runs the generative model.

get_alphas_betas([as_numpy])

rtype:

Dict[str, Union[Tensor, ndarray]]

get_reconstruction_loss(x, px_rate, px_r, ...)

rtype:

Tensor

loss(tensors, inference_outputs, ...[, ...])

Compute the loss for a minibatch of data.

rescale_dropout(px_dropout[, eps_log])

rtype:

Tensor

reshape_bernoulli(bernoulli_params[, ...])

rtype:

Tensor

sample_bernoulli_params([batch_index, y, ...])

rtype:

Tensor

sample_from_beta_distribution(alpha, beta[, ...])

rtype:

Tensor

Attributes#

T_destination#

AutoZIVAE.T_destination#

alias of TypeVar(‘T_destination’, bound=Mapping[str, Tensor])

device#

AutoZIVAE.device#

dump_patches#

AutoZIVAE.dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.

training#

AutoZIVAE.training: bool#

Methods#

compute_global_kl_divergence#

AutoZIVAE.compute_global_kl_divergence()[source]#
Return type:

Tensor
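The global KL term penalizes the divergence between the variational Beta posterior over the zero-inflation Bernoulli parameter and the Beta prior. Assuming the standard closed form for the KL divergence between two Beta distributions (with $\psi$ the digamma function), this is:

```latex
\mathrm{KL}\big(\mathrm{Beta}(\alpha, \beta)\,\|\,\mathrm{Beta}(\alpha_0, \beta_0)\big)
= \ln \frac{B(\alpha_0, \beta_0)}{B(\alpha, \beta)}
+ (\alpha - \alpha_0)\,\psi(\alpha)
+ (\beta - \beta_0)\,\psi(\beta)
+ (\alpha_0 - \alpha + \beta_0 - \beta)\,\psi(\alpha + \beta)
```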

generative#

AutoZIVAE.generative(z, library, batch_index=None, y=None, size_factor=None, cont_covs=None, cat_covs=None, n_samples=1, eps_log=1e-08)[source]#

Runs the generative model.

Return type:

Dict[str, Tensor]

get_alphas_betas#

AutoZIVAE.get_alphas_betas(as_numpy=True)[source]#
Return type:

Dict[str, Union[Tensor, ndarray]]

get_reconstruction_loss#

AutoZIVAE.get_reconstruction_loss(x, px_rate, px_r, px_dropout, bernoulli_params, eps_log=1e-08, **kwargs)[source]#
Return type:

Tensor
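Conceptually, AutoZI's reconstruction loss mixes a zero-inflated negative binomial (ZINB) and a plain negative binomial (NB) likelihood per gene, weighted by the sampled Bernoulli parameter. The scalar sketch below illustrates the idea for a single count; the actual method operates on tensors and uses eps_log for numerical stability:

```python
import math

def nb_logpmf(x, mu, r):
    """Negative binomial log-pmf with mean mu and inverse-dispersion r."""
    return (math.lgamma(x + r) - math.lgamma(r) - math.lgamma(x + 1)
            + r * math.log(r / (r + mu)) + x * math.log(mu / (r + mu)))

def zinb_logpmf(x, mu, r, dropout):
    """Zero-inflated NB: zeros arise from dropout or from the NB itself."""
    if x == 0:
        return math.log(dropout + (1 - dropout) * math.exp(nb_logpmf(0, mu, r)))
    return math.log(1 - dropout) + nb_logpmf(x, mu, r)

def autozi_mixture_logpmf(x, mu, r, dropout, q):
    """Mixture: ZINB with probability q, plain NB with probability 1 - q.

    q is the sampled zero-inflation Bernoulli probability for this gene.
    Uses a log-sum-exp to combine the two components stably.
    """
    a = zinb_logpmf(x, mu, r, dropout) + math.log(q)
    b = nb_logpmf(x, mu, r) + math.log(1 - q)
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))
```

The reconstruction loss is then the negative of this mixture log-likelihood, summed over genes.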

loss#

AutoZIVAE.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0, n_obs=1.0)[source]#

Compute the loss for a minibatch of data.

This function uses the outputs of the inference and generative functions to compute a loss. This may optionally include other penalty terms, which should be computed here.

This function should return an object of type LossRecorder.

Return type:

Tuple[Tensor, Tensor, Tensor]
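A plausible composition of these three terms into a scalar objective (names are hypothetical; the exact weighting inside scvi-tools may differ):

```python
def autozi_objective(recon_loss, kl_local, kl_global, kl_weight=1.0, n_obs=1.0):
    """Illustrative sketch of combining the loss components.

    kl_global (the Beta-prior KL) is shared across the whole dataset,
    so it is scaled down by the number of observations n_obs, while the
    per-cell KL is annealed via kl_weight.
    """
    return recon_loss + kl_weight * kl_local + kl_global / n_obs
```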

rescale_dropout#

AutoZIVAE.rescale_dropout(px_dropout, eps_log=1e-08)[source]#
Return type:

Tensor
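The role of this method is to keep the ZINB dropout rate above minimal_dropout (see the constructor parameters above). An affine rescaling onto [minimal_dropout, 1] is a natural way to do this; the sketch below is illustrative, and the exact formula inside scvi-tools may differ:

```python
def rescale_dropout_sketch(px_dropout, minimal_dropout=0.01, eps_log=1e-8):
    """Affinely map a dropout probability in [0, 1] onto
    [minimal_dropout, 1], so the ZINB component never degenerates
    into a plain NB. eps_log would guard downstream log() calls."""
    return minimal_dropout + (1.0 - minimal_dropout) * px_dropout
```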

reshape_bernoulli#

AutoZIVAE.reshape_bernoulli(bernoulli_params, batch_index=None, y=None)[source]#
Return type:

Tensor

sample_bernoulli_params#

AutoZIVAE.sample_bernoulli_params(batch_index=None, y=None, n_samples=1)[source]#
Return type:

Tensor

sample_from_beta_distribution#

AutoZIVAE.sample_from_beta_distribution(alpha, beta, eps_gamma=1e-30, eps_sample=1e-07)[source]#
Return type:

Tensor
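Beta samples can be generated via the classical two-Gamma construction: if X ~ Gamma(alpha, 1) and Y ~ Gamma(beta, 1), then X / (X + Y) ~ Beta(alpha, beta). The sketch below mirrors the role of the eps_gamma and eps_sample guards (clamping Gamma draws away from zero and samples away from the boundary, so downstream logs stay finite); it is a scalar illustration, not the tensor implementation:

```python
import random

def sample_beta(alpha, beta, eps_gamma=1e-30, eps_sample=1e-7):
    """Sample from Beta(alpha, beta) via two Gamma draws, with
    numerical guards analogous to eps_gamma and eps_sample."""
    x = max(random.gammavariate(alpha, 1.0), eps_gamma)
    y = max(random.gammavariate(beta, 1.0), eps_gamma)
    s = x / (x + y)
    # clamp away from 0 and 1 so log(s) and log(1 - s) stay finite
    return min(max(s, eps_sample), 1.0 - eps_sample)
```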