scvi.module.VAE#
- class scvi.module.VAE(n_input, n_batch=0, n_labels=0, n_hidden=128, n_latent=10, n_layers=1, n_continuous_cov=0, n_cats_per_cov=None, dropout_rate=0.1, dispersion='gene', log_variational=True, gene_likelihood='zinb', latent_distribution='normal', encode_covariates=False, deeply_inject_covariates=True, use_batch_norm='both', use_layer_norm='none', use_size_factor_key=False, use_observed_lib_size=True, library_log_means=None, library_log_vars=None, var_activation=None, latent_data_type=None)[source]#
Bases:
BaseLatentModeModuleClass
Variational auto-encoder model.
This is an implementation of the scVI model described in [Lopez et al., 2018].
- Parameters:
  - n_input (int) – Number of input genes
  - n_batch (int, default: 0) – Number of batches; if 0, no batch correction is performed
  - n_labels (int, default: 0) – Number of labels
  - n_hidden (int, default: 128) – Number of nodes per hidden layer
  - n_latent (int, default: 10) – Dimensionality of the latent space
  - n_layers (int, default: 1) – Number of hidden layers used for encoder and decoder NNs
  - n_continuous_cov (int, default: 0) – Number of continuous covariates
  - n_cats_per_cov (Optional[Iterable[int]], default: None) – Number of categories for each extra categorical covariate
  - dropout_rate (float, default: 0.1) – Dropout rate for neural networks
  - dispersion (str, default: 'gene') – One of the following:
    - 'gene' – dispersion parameter of NB is constant per gene across cells
    - 'gene-batch' – dispersion can differ between different batches
    - 'gene-label' – dispersion can differ between different labels
    - 'gene-cell' – dispersion can differ for every gene in every cell
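The dispersion options above determine the shape of the learned negative-binomial dispersion parameter. The helper below is a hypothetical illustration of those shapes, not part of scvi-tools (which stores them as torch parameters):

```python
def dispersion_shape(dispersion, n_genes, n_batches=1, n_labels=1):
    """Shape of the NB dispersion parameter for each option (illustrative)."""
    shapes = {
        "gene": (n_genes,),                  # one value per gene, shared across cells
        "gene-batch": (n_genes, n_batches),  # one value per gene and batch
        "gene-label": (n_genes, n_labels),   # one value per gene and label
        "gene-cell": None,                   # predicted per cell by the decoder
    }
    return shapes[dispersion]

print(dispersion_shape("gene-batch", n_genes=2000, n_batches=3))  # (2000, 3)
```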
  - log_variational (bool, default: True) – Log(data + 1) prior to encoding for numerical stability. Not normalization.
  - gene_likelihood (Literal['zinb', 'nb', 'poisson'], default: 'zinb') – One of:
    - 'nb' – Negative binomial distribution
    - 'zinb' – Zero-inflated negative binomial distribution
    - 'poisson' – Poisson distribution
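These three likelihoods have standard closed-form log-probabilities. A pure-Python sketch (mean/inverse-dispersion parameterization; not the scvi-tools implementation, which uses torch distributions):

```python
import math

def poisson_logpmf(x, mu):
    """log P(x) under Poisson with mean mu."""
    return x * math.log(mu) - mu - math.lgamma(x + 1)

def nb_logpmf(x, mu, theta):
    """log P(x) under NB with mean mu and inverse dispersion theta."""
    return (math.lgamma(x + theta) - math.lgamma(theta) - math.lgamma(x + 1)
            + theta * math.log(theta / (theta + mu))
            + x * math.log(mu / (theta + mu)))

def zinb_logpmf(x, mu, theta, pi):
    """log P(x) under ZINB with zero-inflation probability pi."""
    if x == 0:
        # A zero can come from the inflation component or from the NB itself.
        return math.log(pi + (1 - pi) * math.exp(nb_logpmf(0, mu, theta)))
    return math.log(1 - pi) + nb_logpmf(x, mu, theta)
```

As theta grows, the NB converges to the Poisson; the zero-inflation term raises the probability of observing a zero, which is why ZINB is the default for sparse scRNA-seq counts.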
  - latent_distribution (str, default: 'normal') – One of:
    - 'normal' – Isotropic normal
    - 'ln' – Logistic normal with normal params N(0, 1)
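The 'ln' option maps a normal sample onto the probability simplex by applying a softmax, so every latent vector has non-negative components summing to one. A minimal stdlib sketch (the function name is illustrative, not an scvi-tools API):

```python
import math
import random

def sample_logistic_normal(n_latent, seed=0):
    """Draw z ~ N(0, 1) per dimension, then softmax onto the simplex."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n_latent)]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

sample = sample_logistic_normal(10)
# components are non-negative and sum to 1
```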
  - encode_covariates (bool, default: False) – Whether to concatenate covariates to expression in the encoder
  - deeply_inject_covariates (bool, default: True) – Whether to concatenate covariates into the output of hidden layers in the encoder/decoder. This option only applies when n_layers > 1; the covariates are concatenated to the input of subsequent hidden layers.
  - use_layer_norm (Literal['encoder', 'decoder', 'none', 'both'], default: 'none') – Whether to use layer norm in layers
  - use_size_factor_key (bool, default: False) – Use the size_factor AnnDataField defined by the user as the scaling factor in the mean of the conditional distribution. Takes priority over use_observed_lib_size.
  - use_observed_lib_size (bool, default: True) – Use the observed library size for RNA as the scaling factor in the mean of the conditional distribution
  - library_log_means (Optional[ndarray], default: None) – 1 x n_batch array of means of the log library sizes. Parameterizes the prior on library size if not using the observed library size.
  - library_log_vars (Optional[ndarray], default: None) – 1 x n_batch array of variances of the log library sizes. Parameterizes the prior on library size if not using the observed library size.
  - var_activation (Optional[Callable], default: None) – Callable used to ensure positivity of the variational distributions' variance. When None, defaults to torch.exp.
  - latent_data_type (Optional[str], default: None) – None or the type of latent data
Attributes table#
- training

Methods table#
- generative – Runs the generative model.
- loss – Computes the loss function for the model.
- marginal_ll – Computes the marginal log likelihood of the model.
- sample – Generate observation samples from the posterior predictive distribution.
Attributes#
training
Methods#
generative
- VAE.generative(z, library, batch_index, cont_covs=None, cat_covs=None, size_factor=None, y=None, transform_batch=None)[source]#
Runs the generative model.
loss
- VAE.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0)[source]#
Computes the loss function for the model.
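The loss is the negative ELBO: a reconstruction term plus kl_weight times the KL divergence between the variational posterior and the prior. For a diagonal Gaussian posterior against a standard normal prior the KL has a closed form; a minimal sketch (function names are illustrative, not the scvi-tools implementation):

```python
import math

def kl_normal_stdnormal(mu, var):
    """KL( N(mu, diag(var)) || N(0, I) ), summed over latent dimensions."""
    return sum(0.5 * (v + m * m - 1.0 - math.log(v)) for m, v in zip(mu, var))

def elbo_loss(reconstruction_nll, mu, var, kl_weight=1.0):
    # kl_weight < 1 down-weights the KL term, e.g. during warm-up annealing.
    return reconstruction_nll + kl_weight * kl_normal_stdnormal(mu, var)
```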
marginal_ll
Computes the marginal log likelihood of the model.
sample
- VAE.sample(tensors, n_samples=1, library_size=1)[source]#
Generate observation samples from the posterior predictive distribution.
The posterior predictive distribution is written as \(p(\hat{x} \mid x)\).
- Parameters:
tensors – Tensors dict
n_samples – Number of required samples for each cell
library_size – Library size to scale samples to
- Return type:
torch.Tensor
- Returns:
x_new – tensor with shape (n_cells, n_genes, n_samples)
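A pure-Python sketch of the output structure: for each cell, draw n_samples counts per gene from the fitted likelihood (a Poisson here for simplicity), giving the (n_cells, n_genes, n_samples) shape. The sampler and the rates are illustrative, not the scvi-tools implementation:

```python
import math
import random

def poisson_draw(rate, rng):
    """Knuth's algorithm: multiply uniforms until the product drops below exp(-rate)."""
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_posterior_predictive(rates, n_samples=1, seed=0):
    """rates: (n_cells, n_genes) nested list of per-gene Poisson means."""
    rng = random.Random(seed)
    return [[[poisson_draw(r, rng) for _ in range(n_samples)]
             for r in cell] for cell in rates]

x_new = sample_posterior_predictive([[0.5, 2.0]] * 3, n_samples=4)
# nested shape: 3 cells x 2 genes x 4 samples
```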