scvi.module.MRDeconv#

class scvi.module.MRDeconv(n_spots, n_labels, n_hidden, n_layers, n_latent, n_genes, decoder_state_dict, px_decoder_state_dict, px_r, dropout_decoder, dropout_amortization=0.05, mean_vprior=None, var_vprior=None, mp_vprior=None, amortization='both', l1_reg=0.0, beta_reg=5.0, eta_reg=0.0001, extra_encoder_kwargs=None, extra_decoder_kwargs=None)[source]#

Bases: BaseModuleClass

Model for multi-resolution deconvolution of spatial transcriptomics.

Parameters:
  • n_spots (int) – Number of input spots

  • n_labels (int) – Number of cell types

  • n_hidden (Tunable_[int]) – Number of neurons in the hidden layers

  • n_layers (Tunable_[int]) – Number of layers used in the encoder networks

  • n_latent (Tunable_[int]) – Number of dimensions used in the latent variables

  • n_genes (int) – Number of genes used in the decoder

  • dropout_decoder (float) – Dropout rate for the decoder neural network (same dropout as in CondSCVI decoder)

  • dropout_amortization (float (default: 0.05)) – Dropout rate for the amortization neural network

  • decoder_state_dict (OrderedDict) – state_dict from the decoder of the CondSCVI model

  • px_decoder_state_dict (OrderedDict) – state_dict from the px_decoder of the CondSCVI model

  • px_r (ndarray) – parameters for the px_r tensor in the CondSCVI model

  • mean_vprior (ndarray (default: None)) – Mean parameter for each component in the empirical prior over the latent space

  • var_vprior (ndarray (default: None)) – Diagonal variance parameter for each component in the empirical prior over the latent space

  • mp_vprior (ndarray (default: None)) – Mixture proportion in cell type sub-clustering of each component in the empirical prior

  • amortization (Literal['none', 'latent', 'proportion', 'both'] (default: 'both')) – Which of the latent variables to amortize inference over (gamma, proportions, both, or none)

  • l1_reg (Tunable_[float] (default: 0.0)) – Scalar parameter indicating the strength of L1 regularization on cell type proportions. A value of 50 leads to sparser results.

  • beta_reg (Tunable_[float] (default: 5.0)) – Scalar parameter indicating the strength of the variance penalty for the multiplicative offset in gene expression values (beta parameter). Default is 5 (setting it to 0.5 may help if the single-cell reference and the spatial assay differ, e.g. UMI vs. non-UMI).

  • eta_reg (Tunable_[float] (default: 0.0001)) – Scalar parameter indicating the strength of the prior for the noise term (eta parameter). Default is 1e-4 (changing this value is discouraged).

  • extra_encoder_kwargs (Optional[dict] (default: None)) – Extra keyword arguments passed into FCLayers.

  • extra_decoder_kwargs (Optional[dict] (default: None)) – Extra keyword arguments passed into FCLayers.
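
In practice this module is rarely constructed by hand: DestVI.from_rna_model() builds it from a trained CondSCVI reference model, filling in decoder_state_dict, px_decoder_state_dict, px_r, and the empirical prior (mean_vprior, var_vprior, mp_vprior). The sketch below is illustrative only (synthetic data and single training epochs, purely to show the call sequence):

import scvi
from scvi.model import CondSCVI, DestVI

# Illustrative data; substitute a real single-cell reference and a spatial
# dataset that share the same genes.
sc_adata = scvi.data.synthetic_iid()
st_adata = scvi.data.synthetic_iid()

# Train the single-cell reference; its decoder weights and px_r are what
# MRDeconv re-uses (decoder_state_dict, px_decoder_state_dict, px_r).
CondSCVI.setup_anndata(sc_adata, labels_key="labels")
sc_model = CondSCVI(sc_adata)
sc_model.train(max_epochs=1)

# Build the spatial model; internally this instantiates MRDeconv with the
# state dicts above and the empirical prior over the latent space.
DestVI.setup_anndata(st_adata)
st_model = DestVI.from_rna_model(st_adata, sc_model)
st_model.train(max_epochs=1)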

Attributes table#

training

Methods table#

generative(x, ind_x)

Build the deconvolution model for every cell in the minibatch.

get_ct_specific_expression([x, ind_x, y])

Returns cell type specific gene expression at the queried spots.

get_gamma([x])

Returns the gamma latent variable loadings.

get_proportions([x, keep_noise])

Returns the cell type proportions.

inference()

Run the inference model.

loss(tensors, inference_outputs, ...[, ...])

Compute the loss.

sample(tensors[, n_samples, library_size])

Sample from the posterior.

Attributes#

MRDeconv.training: bool#

Methods#

MRDeconv.generative(x, ind_x)[source]#

Build the deconvolution model for every cell in the minibatch.

MRDeconv.get_ct_specific_expression(x=None, ind_x=None, y=None)[source]#

Returns cell type specific gene expression at the queried spots.

Parameters:
  • x (Tensor (default: None)) – tensor of data

  • ind_x (Tensor (default: None)) – tensor of indices

  • y (int (default: None)) – integer index of the cell type to query
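
Calling this method directly requires assembling the x and ind_x tensors yourself; in practice the model-level wrapper DestVI.get_scale_for_ct() does this for you. A hedged sketch, continuing the setup above (the label lookup and spot indices are illustrative):

# Expression attributable to a single cell type at the first 10 spots,
# via the DestVI wrapper around this module method.
labels = st_model.cell_type_mapping              # cell type names known to the model
ct_expr = st_model.get_scale_for_ct(labels[0], indices=list(range(10)))
print(ct_expr.shape)                             # (n_queried_spots, n_genes)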

MRDeconv.get_gamma(x=None)[source]#

Returns the gamma latent variable loadings.

Return type:

Tensor

MRDeconv.get_proportions(x=None, keep_noise=False)[source]#

Returns the cell type proportions.

Return type:

ndarray
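
A short usage sketch, continuing the setup above and going through the model-level wrapper. With keep_noise=True the returned matrix includes an extra column for the dummy noise component:

# Spot-by-cell-type proportion estimates.
props = st_model.get_proportions()
props_noise = st_model.get_proportions(keep_noise=True)
print(props.shape, props_noise.shape)   # (n_spots, n_labels) vs. (n_spots, n_labels + 1)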

MRDeconv.inference()[source]#

Run the inference model.

MRDeconv.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0, n_obs=1.0)[source]#

Compute the loss.
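
Schematically, and only as a hedged reading of the constructor parameters above (the exact weights, constants, and the form of the prior term on gamma follow the implementation, not this formula), the objective combines a negative binomial reconstruction term with the regularizers controlled by beta_reg, eta_reg, and l1_reg, plus the empirical prior on gamma:

\mathcal{L} \;\approx\; -\log \mathrm{NB}(x \mid \mu, r)
  \;+\; \beta_{\mathrm{reg}} \, \mathrm{Var}(\beta)
  \;-\; \eta_{\mathrm{reg}} \, \log p(\eta)
  \;-\; \log p(\gamma \mid \text{empirical prior})
  \;+\; l1_{\mathrm{reg}} \, \lVert V \rVert_1

where V holds the per-spot cell type proportions, beta is the multiplicative gene expression offset, eta is the noise term, and gamma are the cell-type-specific latent variables.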

MRDeconv.sample(tensors, n_samples=1, library_size=1)[source]#

Sample from the posterior.