scvi.module.MULTIVAE#

class scvi.module.MULTIVAE(n_input_regions=0, n_input_genes=0, n_batch=0, gene_likelihood='zinb', n_hidden=None, n_latent=None, n_layers_encoder=2, n_layers_decoder=2, n_continuous_cov=0, n_cats_per_cov=None, dropout_rate=0.1, region_factors=True, use_batch_norm='none', use_layer_norm='both', latent_distribution='normal', deeply_inject_covariates=False, encode_covariates=False, use_size_factor_key=False)[source]#

Bases: BaseModuleClass

Variational auto-encoder model for joint paired + unpaired RNA-seq and ATAC-seq data.

Parameters:
n_input_regions : int (default: 0)

Number of input regions.

n_input_genes : int (default: 0)

Number of input genes.

n_batch : int (default: 0)

Number of batches. If 0, no batch correction is performed.

gene_likelihood : Literal['zinb', 'nb', 'poisson'] (default: 'zinb')

The distribution to use for gene expression data. One of the following:

* 'zinb' - Zero-Inflated Negative Binomial
* 'nb' - Negative Binomial
* 'poisson' - Poisson

n_hidden : int | None (default: None)

Number of nodes per hidden layer. If None, defaults to the square root of the number of regions.

n_latent : int | None (default: None)

Dimensionality of the latent space. If None, defaults to the square root of n_hidden.

n_layers_encoder : int (default: 2)

Number of hidden layers used for encoder NN.

n_layers_decoder : int (default: 2)

Number of hidden layers used for decoder NN.

n_continuous_cov : int (default: 0)

Number of continuous covariates.

n_cats_per_cov : Iterable[int] | None (default: None)

Number of categories for each extra categorical covariate.

dropout_rate : float (default: 0.1)

Dropout rate for neural networks.

region_factors : bool (default: True)

Include region-specific factors in the model.

use_batch_norm : Literal['encoder', 'decoder', 'none', 'both'] (default: 'none')

One of the following:

* 'encoder' - use batch normalization in the encoder only
* 'decoder' - use batch normalization in the decoder only
* 'none' - do not use batch normalization
* 'both' - use batch normalization in both the encoder and decoder

use_layer_norm : Literal['encoder', 'decoder', 'none', 'both'] (default: 'both')

One of the following:

* 'encoder' - use layer normalization in the encoder only
* 'decoder' - use layer normalization in the decoder only
* 'none' - do not use layer normalization
* 'both' - use layer normalization in both the encoder and decoder

latent_distribution : str (default: 'normal')

Which latent distribution to use. One of the following:

* 'normal' - Normal distribution
* 'ln' - Logistic normal distribution (Normal(0, I) transformed by softmax)

deeply_inject_covariates : bool (default: False)

Whether to deeply inject covariates into all layers of the decoder. If False, covariates will only be included in the input layer.

encode_covariates : bool (default: False)

If True, include covariates in the input to the encoder.

use_size_factor_key : bool (default: False)

If True, use the size_factor AnnDataField defined by the user as the scaling factor in the mean of the conditional RNA distribution.
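
A minimal construction sketch; the sizes below are illustrative placeholders for a paired multiome dataset, not defaults:

>>> from scvi.module import MULTIVAE
>>> # hypothetical dimensions: 20,000 accessibility regions, 3,000 genes, 2 batches
>>> module = MULTIVAE(
...     n_input_regions=20_000,
...     n_input_genes=3_000,
...     n_batch=2,
...     gene_likelihood="zinb",
...     latent_distribution="normal",
... )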

Attributes table#

Methods table#

generative(z, qz_m, batch_index[, ...])

Runs the generative model.

get_reconstruction_loss_accessibility(x, p, d)

get_reconstruction_loss_expression(x, ...)

inference(x, batch_index, cont_covs, cat_covs)

Run the inference (recognition) model.

loss(tensors, inference_outputs, ...[, ...])

Compute the loss for a minibatch of data.

Attributes#

T_destination#

MULTIVAE.T_destination#

alias of TypeVar(‘T_destination’, bound=Mapping[str, Tensor])

device#

MULTIVAE.device#

dump_patches#

MULTIVAE.dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of the state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.
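
As a sketch of how this surfaces in practice (it relies on private PyTorch internals, so it is subject to change): the state dict's _metadata attribute maps submodule prefixes to dictionaries carrying the saved version number, with '' denoting the root module.

>>> sd = module.state_dict()
>>> sd._metadata[""]  # doctest: +SKIP
{'version': 1}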

training#

MULTIVAE.training: bool#

Methods#

generative#

MULTIVAE.generative(z, qz_m, batch_index, cont_covs=None, cat_covs=None, libsize_expr=None, size_factor=None, use_z_mean=False)[source]#

Runs the generative model.

get_reconstruction_loss_accessibility#

MULTIVAE.get_reconstruction_loss_accessibility(x, p, d)[source]#

get_reconstruction_loss_expression#

MULTIVAE.get_reconstruction_loss_expression(x, px_rate, px_r, px_dropout)[source]#

inference#

MULTIVAE.inference(x, batch_index, cont_covs, cat_covs, n_samples=1)[source]#

Run the inference (recognition) model.

In the case of variational inference, this function will perform steps related to computing variational distribution parameters. In a VAE, this will involve running data through encoder networks.

This function should return a dictionary with str keys and Tensor values.

Return type:

Dict[str, Tensor]
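
A hypothetical direct call, assuming x concatenates gene counts followed by region counts along the feature axis (in normal use these tensors come from the registered AnnData fields rather than being built by hand):

>>> import torch
>>> n_cells = 128
>>> x = torch.randint(0, 5, (n_cells, 3_000 + 20_000)).float()  # fake counts
>>> batch_index = torch.zeros(n_cells, 1, dtype=torch.long)
>>> outputs = module.inference(x, batch_index, cont_covs=None, cat_covs=None)
>>> sorted(outputs.keys())  # doctest: +SKIP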

loss#

MULTIVAE.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0)[source]#

Compute the loss for a minibatch of data.

This function uses the outputs of the inference and generative functions to compute a loss. This may optionally include other penalty terms, which should be computed here.

This function should return an object of type LossRecorder.
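
A sketch of how the three steps chain together during a training step; it assumes the inference dictionary exposes 'z' and 'qz_m' entries (as the generative() signature suggests) and that tensors is the raw minibatch dictionary produced by the data loader:

>>> inference_outputs = module.inference(
...     x, batch_index, cont_covs=None, cat_covs=None
... )
>>> generative_outputs = module.generative(
...     inference_outputs["z"], inference_outputs["qz_m"], batch_index
... )  # doctest: +SKIP
>>> loss_recorder = module.loss(
...     tensors, inference_outputs, generative_outputs, kl_weight=1.0
... )  # doctest: +SKIP
>>> loss_recorder.loss  # total objective for the minibatch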