scvi.external.gimvi.JVAE

class scvi.external.gimvi.JVAE(dim_input_list, total_genes, indices_mappings, gene_likelihoods, model_library_bools, n_latent=10, n_layers_encoder_individual=1, n_layers_encoder_shared=1, dim_hidden_encoder=64, n_layers_decoder_individual=0, n_layers_decoder_shared=0, dim_hidden_decoder_individual=64, dim_hidden_decoder_shared=64, dropout_rate_encoder=0.2, dropout_rate_decoder=0.2, n_batch=0, n_labels=0, dispersion='gene-batch', log_variational=True)[source]

Bases: scvi.module.base._base_module.BaseModuleClass

Joint variational auto-encoder for imputing missing genes in spatial data.

Implementation of gimVI [Lopez19].

Parameters
dim_input_list : List[int]

List of the number of input genes for each dataset. If the datasets have different sizes, the dataloader loops over the smallest one until it reaches the size of the largest.

total_genes : int

Total number of distinct genes

indices_mappings : List[Union[ndarray, slice]]

List of mappings from each dataset's input genes to positions in the model output. E.g. [[0, 2], [0, 1, 3, 2]] means the first dataset has 2 genes that are reconstructed at positions [0, 2], and the second dataset has 4 genes that are reconstructed at positions [0, 1, 3, 2].

gene_likelihoods : List[str]

List of distributions to use in the generative process, one per dataset: 'zinb', 'nb', or 'poisson'

model_library_bools : List[bool]

For each dataset, whether to model the library size with a latent variable (True) or to use the observed value (False)

n_latent : int (default: 10)

Dimension of the latent space

n_layers_encoder_individual : int (default: 1)

Number of individual (dataset-specific) layers in the encoder

n_layers_encoder_shared : int (default: 1)

Number of shared layers in the encoder

dim_hidden_encoder : int (default: 64)

Dimension of the hidden layers in the encoder

n_layers_decoder_individual : int (default: 0)

Number of conditionally batch-normalized (dataset-specific) layers in the decoder

n_layers_decoder_shared : int (default: 0)

Number of shared layers in the decoder

dim_hidden_decoder_individual : int (default: 64)

Dimension of the individual hidden layers in the decoder

dim_hidden_decoder_shared : int (default: 64)

Dimension of the shared hidden layers in the decoder

dropout_rate_encoder : float (default: 0.2)

Dropout rate for the encoder

dropout_rate_decoder : float (default: 0.2)

Dropout rate for the decoder

n_batch : int (default: 0)

Total number of batches

n_labels : int (default: 0)

Total number of labels

dispersion : str (default: 'gene-batch')

How the dispersion parameter is shared across cells ('gene', 'gene-batch', 'gene-label', or 'gene-cell'); see vae.py

log_variational : bool (default: True)

Log(data + 1) prior to encoding for numerical stability. This is not a normalization step.
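To make the indices_mappings convention concrete, here is a small NumPy sketch (with hypothetical values, not scvi-tools internals) of how each dataset's reconstructed genes are scattered into a shared output of size total_genes:

```python
import numpy as np

# Hypothetical example: total_genes = 4; dataset 0 measures 2 genes,
# dataset 1 measures all 4 but in a permuted order.
total_genes = 4
indices_mappings = [np.array([0, 2]), np.array([0, 1, 3, 2])]

# Fake per-dataset decoder outputs (1 cell each), stand-ins for px_scale.
decoded = [np.array([[0.5, 0.9]]), np.array([[0.1, 0.2, 0.3, 0.4]])]

full_outputs = []
for mapping, px in zip(indices_mappings, decoded):
    full = np.zeros((px.shape[0], total_genes))
    full[:, mapping] = px  # scatter into the shared gene space
    full_outputs.append(full)

print(full_outputs[0])  # [[0.5 0.  0.9 0. ]] -- unmeasured genes stay 0
print(full_outputs[1])  # [[0.1 0.2 0.4 0.3]] -- order permuted by the mapping
```

Genes a dataset does not measure simply receive no value from that dataset's decoder, which is what allows the shared latent space to impute them from the other dataset.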

Methods

generative(z, library[, batch_index, y, mode])

Run the generative model.

get_sample_rate(x, batch_index, *_, **__)

inference(x[, mode])

Run the inference (recognition) model.

loss(tensors, inference_outputs, …[, …])

Return the reconstruction loss and the Kullback-Leibler divergences.

reconstruction_loss(x, px_rate, px_r, …)

rtype

Tensor

sample_from_posterior_l(x[, mode, deterministic])

Sample the tensor of library sizes from the posterior.

sample_from_posterior_z(x[, mode, deterministic])

Sample the tensor of latent values from the posterior.

sample_rate(x, mode, batch_index[, y, …])

Returns the tensor of scaled frequencies of expression.

sample_scale(x, mode, batch_index[, y, …])

Return the tensor of predicted frequencies of expression.
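For intuition about the reconstruction term when a dataset uses the 'poisson' likelihood, the per-cell negative log-likelihood of counts under a Poisson rate can be computed as below. This is an illustrative NumPy sketch, not the module's actual reconstruction_loss implementation:

```python
import numpy as np
from math import lgamma

# log(x!) elementwise, via the log-gamma function: log(x!) = lgamma(x + 1)
log_fact = np.vectorize(lambda k: lgamma(k + 1.0))

def poisson_nll(x, rate):
    """-log p(x | rate) under a Poisson likelihood, summed over genes.

    Mirrors the shape of a per-cell reconstruction loss; illustrative only.
    """
    return np.sum(rate - x * np.log(rate) + log_fact(x), axis=-1)

x = np.array([[2.0, 0.0, 1.0]])      # observed counts for one cell
rate = np.array([[1.5, 0.5, 1.0]])   # predicted Poisson rates (cf. px_rate)
print(poisson_nll(x, rate))          # approximately [2.8822]
```

The 'nb' and 'zinb' likelihoods follow the same pattern with extra dispersion (px_r) and dropout parameters.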