scvi.module.MRDeconv#
- class scvi.module.MRDeconv(n_spots, n_labels, n_hidden, n_layers, n_latent, n_genes, decoder_state_dict, px_decoder_state_dict, px_r, dropout_decoder, dropout_amortization=0.05, mean_vprior=None, var_vprior=None, mp_vprior=None, amortization='both', l1_reg=0.0, beta_reg=5.0, eta_reg=0.0001)[source]#
Bases: BaseModuleClass
Model for multi-resolution deconvolution of spatial transcriptomics.
- Parameters:
  - n_spots : int
    Number of input spots
  - n_labels : int
    Number of cell types
  - n_hidden : int
    Number of neurons in the hidden layers
  - n_layers : int
    Number of layers used in the encoder networks
  - n_latent : int
    Number of dimensions used in the latent variables
  - n_genes : int
    Number of genes used in the decoder
  - dropout_decoder : float
    Dropout rate for the decoder neural network (same dropout as in the CondSCVI decoder)
  - dropout_amortization : float (default: 0.05)
    Dropout rate for the amortization neural network
  - decoder_state_dict : OrderedDict
    state_dict from the decoder of the CondSCVI model
  - px_decoder_state_dict : OrderedDict
    state_dict from the px_decoder of the CondSCVI model
  - px_r : ndarray
    Parameters for the px_r tensor in the CondSCVI model
  - mean_vprior : ndarray | None (default: None)
    Mean parameter for each component in the empirical prior over the latent space
  - var_vprior : ndarray | None (default: None)
    Diagonal variance parameter for each component in the empirical prior over the latent space
  - mp_vprior : ndarray | None (default: None)
    Mixture proportion in cell type sub-clustering of each component in the empirical prior
  - amortization : {'none', 'latent', 'proportion', 'both'} (default: 'both')
    Which of the latent variables to amortize inference over (gamma, proportions, both, or none)
  - l1_reg : float (default: 0.0)
    Scalar parameter indicating the strength of L1 regularization on cell type proportions. A value of 50 leads to sparser results.
  - beta_reg : float (default: 5.0)
    Scalar parameter indicating the strength of the variance penalty for the multiplicative offset in gene expression values (the beta parameter). Setting it to 0.5 might help if the single-cell reference and spatial assay differ, e.g. UMI vs. non-UMI.
  - eta_reg : float (default: 0.0001)
    Scalar parameter indicating the strength of the prior for the noise term (the eta parameter). Changing this value is discouraged.
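As a hedged illustration of how the amortization and l1_reg settings interact, the sketch below uses plain NumPy. It is not scvi's actual implementation and all variable names are hypothetical: per-spot cell-type proportions are derived from unconstrained weights via softplus and normalization, and an L1 penalty on the non-negative weights encourages sparsity.

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

rng = np.random.default_rng(0)
n_spots, n_labels = 4, 3

# Unconstrained per-spot weights. With amortization='none' these would be
# free parameters; with 'proportion' or 'both' a network would predict them.
v = rng.normal(size=(n_spots, n_labels))

weights = softplus(v)                                  # non-negative abundances
props = weights / weights.sum(axis=1, keepdims=True)   # each row sums to 1

l1_reg = 50.0                        # larger values lead to sparser results
l1_penalty = l1_reg * weights.sum()  # L1 penalty on the non-negative weights
```

The penalty is applied to the unnormalized weights rather than the normalized proportions, since the L1 norm of a point on the simplex is constant.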
Attributes table#
Methods table#
- generative
  Build the deconvolution model for every cell in the minibatch.
- get_ct_specific_expression
  Returns cell type specific gene expression at the queried spots.
- get_gamma
  Returns the loadings.
- get_proportions
  Returns the loadings.
- inference
  Run the inference (recognition) model.
- loss
  Compute the loss for a minibatch of data.
- sample
  Generate samples from the learned model.
Attributes#
T_destination#
alias of TypeVar('T_destination', bound=Mapping[str, Tensor])
device#
- MRDeconv.device#
dump_patches#
- MRDeconv.dump_patches: bool = False#
This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of the state dict. See _load_from_state_dict on how to use this information in loading.
If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module's _load_from_state_dict method can compare the version number and make appropriate changes if the state dict is from before the change.
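The versioning mechanism described above can be illustrated with a framework-free sketch. The helper names here are hypothetical; PyTorch itself records the version inside the _metadata attribute of the state dict:

```python
# Minimal sketch of the _metadata versioning idea behind dump_patches.
# A state dict maps parameter names to values; _metadata records a
# per-module version so loading code can migrate old checkpoints.

def save_state(params, version):
    state = dict(params)
    state["_metadata"] = {"": {"version": version}}
    return state

def load_state(state, current_version):
    meta = state.get("_metadata", {}).get("", {})
    saved_version = meta.get("version", 0)
    if saved_version < current_version:
        # a real module's _load_from_state_dict would rename or
        # convert entries here to match the new layout
        pass
    return {k: v for k, v in state.items() if k != "_metadata"}

state = save_state({"weight": [1.0, 2.0]}, version=1)
params = load_state(state, current_version=2)
```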
training#
Methods#
generative#
get_ct_specific_expression#
get_gamma#
get_proportions#
inference#
- MRDeconv.inference()[source]#
Run the inference (recognition) model.
In the case of variational inference, this function will perform steps related to computing variational distribution parameters. In a VAE, this will involve running data through encoder networks.
This function should return a dictionary with str keys and Tensor values.
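As a schematic of this contract only (not the real method, which runs spatial data through the amortization networks), a conforming inference function maps a minibatch to a dictionary of string keys and array values, here the mean, variance, and a sample of a Gaussian variational posterior:

```python
import numpy as np

def inference(x, n_latent=2):
    # Toy "encoder": derive the mean/variance of a Gaussian variational
    # posterior over the latent space from each spot's expression.
    n_spots = x.shape[0]
    qz_m = np.log1p(x).mean(axis=1, keepdims=True) * np.ones((n_spots, n_latent))
    qz_v = 0.1 * np.ones((n_spots, n_latent))
    # reparameterized sample from N(qz_m, qz_v)
    eps = np.random.default_rng(0).normal(size=qz_m.shape)
    z = qz_m + np.sqrt(qz_v) * eps
    return {"qz_m": qz_m, "qz_v": qz_v, "z": z}

x = np.random.default_rng(1).poisson(5.0, size=(3, 10)).astype(float)
out = inference(x)
```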
loss#
- MRDeconv.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0, n_obs=1.0)[source]#
Compute the loss for a minibatch of data.
This function uses the outputs of the inference and generative functions to compute a loss. This may optionally include other penalty terms, which should be computed here.
This function should return an object of type LossRecorder.
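A hedged sketch of how such a loss might be assembled, using a Poisson-style reconstruction term as a stand-in for the model's actual likelihood (all names are illustrative; the real method also folds in the l1_reg, beta_reg, and eta_reg penalties described above):

```python
import numpy as np

def loss(x, x_rate, kl_local, kl_weight=1.0, n_obs=1.0,
         l1_penalty=0.0, beta_penalty=0.0, eta_penalty=0.0):
    # Poisson negative log-likelihood (up to a constant) per spot
    recon = (x_rate - x * np.log(x_rate + 1e-8)).sum(axis=1)
    # minibatch objective: reconstruction + annealed KL, rescaled by
    # n_obs, plus the scalar regularization penalties
    return ((recon + kl_weight * kl_local).mean() * n_obs
            + l1_penalty + beta_penalty + eta_penalty)

rng = np.random.default_rng(0)
x = rng.poisson(5.0, size=(8, 20)).astype(float)  # minibatch of spots
x_rate = np.full_like(x, 5.0)                     # decoder mean
kl_local = np.full(x.shape[0], 0.3)               # per-spot KL term
total = loss(x, x_rate, kl_local, kl_weight=0.5)
```

The kl_weight argument mirrors the common KL-annealing pattern, where the weight is ramped up over training.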