scvi.module.MRDeconv#

class scvi.module.MRDeconv(n_spots, n_labels, n_hidden, n_layers, n_latent, n_genes, decoder_state_dict, px_decoder_state_dict, px_r, dropout_decoder, dropout_amortization=0.05, mean_vprior=None, var_vprior=None, mp_vprior=None, amortization='both', l1_reg=0.0, beta_reg=5.0, eta_reg=0.0001)[source]#

Bases: BaseModuleClass

Model for multi-resolution deconvolution of spatial transcriptomics data.

Parameters:
n_spots : int

Number of input spots

n_labels : int

Number of cell types

n_hidden : int

Number of neurons in the hidden layers

n_layers : int

Number of layers used in the encoder networks

n_latent : int

Number of dimensions used in the latent variables

n_genes : int

Number of genes used in the decoder

decoder_state_dict : OrderedDict

state_dict from the decoder of the CondSCVI model

px_decoder_state_dict : OrderedDict

state_dict from the px_decoder of the CondSCVI model

px_r : ndarray

Parameters for the px_r tensor in the CondSCVI model

dropout_decoder : float

Dropout rate for the decoder neural network (same dropout rate as in the CondSCVI decoder)

dropout_amortization : float (default: 0.05)

Dropout rate for the amortization neural network

mean_vprior : ndarray | None (default: None)

Mean parameter for each component in the empirical prior over the latent space

var_vprior : ndarray | None (default: None)

Diagonal variance parameter for each component in the empirical prior over the latent space

mp_vprior : ndarray | None (default: None)

Mixture proportion in cell type sub-clustering of each component in the empirical prior

amortization : {‘none’, ‘latent’, ‘proportion’, ‘both’} (default: 'both')

which of the latent variables to amortize inference over (gamma, proportions, both or none)

l1_reg : float (default: 0.0)

Scalar parameter indicating the strength of L1 regularization on cell type proportions. A value of 50 leads to sparser results.

beta_reg : float (default: 5.0)

Scalar parameter indicating the strength of the variance penalty for the multiplicative offset in gene expression values (beta parameter). Default is 5 (setting it to 0.5 may help when the single-cell reference and the spatial assay differ, e.g. UMI vs. non-UMI).

eta_reg : float (default: 0.0001)

Scalar parameter indicating the strength of the prior for the noise term (eta parameter). Default is 1e-4 (changing this value is discouraged).
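In practice this module is rarely constructed by hand: in the scvi-tools workflow, scvi.model.DestVI assembles it from a trained scvi.model.CondSCVI reference, copying the decoder weights (decoder_state_dict, px_decoder_state_dict, px_r) and the empirical prior into the constructor above. A minimal sketch, assuming annotated AnnData objects sc_adata (single-cell, with a "cell_type" label column) and st_adata (spatial):

>>> import scvi
>>> scvi.model.CondSCVI.setup_anndata(sc_adata, labels_key="cell_type")
>>> sc_model = scvi.model.CondSCVI(sc_adata)
>>> sc_model.train()
>>> # DestVI copies the trained CondSCVI decoder weights into a new MRDeconv module
>>> scvi.model.DestVI.setup_anndata(st_adata)
>>> st_model = scvi.model.DestVI.from_rna_model(st_adata, sc_model)
>>> st_model.train()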

Attributes table#

T_destination

device

dump_patches

training

Methods table#

generative(x, ind_x)

Build the deconvolution model for every spot in the minibatch.

get_ct_specific_expression([x, ind_x, y])

Returns cell type specific gene expression at the queried spots.

get_gamma([x])

Returns the gamma loadings.

get_proportions([x, keep_noise])

Returns the estimated cell type proportions.

inference()

Run the inference (recognition) model.

loss(tensors, inference_outputs, ...[, ...])

Compute the loss for a minibatch of data.

sample(tensors[, n_samples, library_size])

Generate samples from the learned model.

Attributes#

T_destination#

MRDeconv.T_destination#

alias of TypeVar(‘T_destination’, bound=Mapping[str, Tensor])

device#

MRDeconv.device#

dump_patches#

MRDeconv.dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.

training#

MRDeconv.training: bool#

Methods#

generative#

MRDeconv.generative(x, ind_x)[source]#

Build the deconvolution model for every spot in the minibatch.

get_ct_specific_expression#

MRDeconv.get_ct_specific_expression(x=None, ind_x=None, y=None)[source]#

Returns cell type specific gene expression at the queried spots.

Parameters:
x : Tensor | None (default: None)

tensor of data

ind_x : Tensor | None (default: None)

tensor of indices

y : int | None (default: None)

integer index of the cell type to query
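A hedged usage sketch at the module level (the trained module, st_adata, and the minibatch construction here are illustrative assumptions; the higher-level DestVI model API is the usual entry point):

>>> import torch
>>> x = torch.as_tensor(st_adata.X[:128].toarray(), dtype=torch.float32)  # spot-by-gene counts
>>> ind_x = torch.arange(128)  # indices of the same spots
>>> with torch.no_grad():
...     expr = module.get_ct_specific_expression(x=x, ind_x=ind_x, y=0)  # y: integer cell type code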

get_gamma#

MRDeconv.get_gamma(x=None)[source]#

Returns the gamma loadings (cell-type-specific latent values).

Return type:

Tensor

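When amortization is ‘latent’ or ‘both’, gamma is predicted from the data and x must be supplied; otherwise the per-spot free parameters are returned directly. A short sketch, reusing the assumed module and x from the example above:

>>> gamma = module.get_gamma(x=x)  # x is only required under amortized inference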

get_proportions#

MRDeconv.get_proportions(x=None, keep_noise=False)[source]#

Returns the estimated cell type proportions.

Return type:

ndarray
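A sketch of typical use (same assumptions as above; when amortization is ‘proportion’ or ‘both’, pass the data through x in the same way):

>>> props = module.get_proportions(keep_noise=True)
>>> # with keep_noise=True, the trailing column holds the proportion
>>> # assigned to the dummy noise component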

inference#

MRDeconv.inference()[source]#

Run the inference (recognition) model.

In the case of variational inference, this function will perform steps related to computing variational distribution parameters. In a VAE, this will involve running data through encoder networks.

This function should return a dictionary with str keys and Tensor values.
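For orientation, BaseModuleClass.forward() chains the three steps for every minibatch; a schematic sketch, assuming a trained module and a minibatch dictionary tensors produced by the data loader:

>>> # forward() runs inference(), then generative(), then loss()
>>> inference_outputs, generative_outputs, losses = module(tensors)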

loss#

MRDeconv.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0, n_obs=1.0)[source]#

Compute the loss for a minibatch of data.

This function uses the outputs of the inference and generative functions to compute a loss. This may optionally include other penalty terms, which should be computed here.

This function should return an object of type LossRecorder.
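A minimal sketch of the return contract, assuming the LossRecorder class at scvi.module.base; the tensors below are placeholders, not the actual MRDeconv loss terms:

>>> import torch
>>> from scvi.module.base import LossRecorder
>>> reconst_loss = torch.tensor([1.0, 2.0])  # per-spot reconstruction term (placeholder)
>>> kl_local = torch.tensor([0.1, 0.2])      # per-spot regularization term (placeholder)
>>> loss = torch.mean(reconst_loss + kl_local)
>>> record = LossRecorder(loss, reconst_loss, kl_local)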

sample#

MRDeconv.sample(tensors, n_samples=1, library_size=1)[source]#

Generate samples from the learned model.