scvi.external.methylvi.METHYLVAE#
- class scvi.external.methylvi.METHYLVAE(n_input, contexts, num_features_per_context, n_batch=0, n_cats_per_cov=None, n_hidden=128, n_latent=10, n_layers=1, dropout_rate=0.1, likelihood='betabinomial', dispersion='region')[source]#
Bases:
BaseModuleClass
PyTorch module for methylVI.
- Parameters:
  - n_input (int) – Total number of input genomic regions
  - contexts (Iterable[str]) – List of methylation contexts (e.g. ["mCG", "mCH"])
  - num_features_per_context (Iterable[int]) – Number of features corresponding to each context
  - n_batch (int, default: 0) – Number of batches; if 0, no batch correction is performed
  - n_cats_per_cov (Iterable[int] | None, default: None) – Number of categories for each extra categorical covariate
  - n_hidden (int, default: 128) – Number of nodes per hidden layer
  - n_latent (int, default: 10) – Dimensionality of the latent space
  - n_layers (int, default: 1) – Number of hidden layers used for encoder and decoder NNs
  - dropout_rate (float, default: 0.1) – Dropout rate for neural networks
  - likelihood (Literal['betabinomial', 'binomial'], default: 'betabinomial') – One of:
    - 'betabinomial' – BetaBinomial distribution
    - 'binomial' – Binomial distribution
  - dispersion (Literal['region', 'region-cell'], default: 'region') – One of:
    - 'region' – dispersion parameter of the BetaBinomial is constant per region across cells
    - 'region-cell' – dispersion can differ for every region in every cell
Attributes table#
- training
Methods table#
- generative – Runs the generative model.
- inference – High level inference method.
- loss – Loss function.
- sample – Generate observation samples from the posterior predictive distribution.
Attributes#
- METHYLVAE.training: bool#
Methods#
- METHYLVAE.inference(mc, cov, batch_index, cat_covs=None, n_samples=1)[source]#
High level inference method.
Runs the inference (encoder) model.
- METHYLVAE.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0)[source]#
Loss function.
- METHYLVAE.sample(tensors, n_samples=1)[source]#
Generate observation samples from the posterior predictive distribution.
The posterior predictive distribution is written as \(p(\hat{x} \mid x)\).
- Parameters:
  - tensors – Tensors dict
  - n_samples (int, default: 1) – Number of required samples for each cell
- Return type:
  dict[Tensor]
- Returns:
  x_new tensor with shape (n_cells, n_regions, n_samples)
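The idea behind posterior-predictive sampling here can be sketched without scvi-tools: for each region, draw a methylation level from a Beta distribution and then draw methylated counts given the observed coverage, yielding one sample of new observations per cell. The code below is a stdlib-only illustration, not the actual `sample` implementation; the coverages and Beta parameters are assumed values standing in for what the decoder would produce.

```python
import random

def sample_beta_binomial(coverage, alpha, beta, rng=random):
    # Draw a methylation probability from Beta(alpha, beta), then draw
    # methylated read counts as coverage Bernoulli trials at that probability.
    p = rng.betavariate(alpha, beta)
    return sum(1 for _ in range(coverage) if rng.random() < p)

random.seed(0)
coverages = [30, 12, 50]    # observed total reads per region (assumed)
alphas = [8.0, 1.0, 20.0]   # hypothetical decoder-derived Beta parameters
betas = [2.0, 9.0, 20.0]

# One posterior-predictive draw of methylated counts across regions for one cell;
# stacking n_samples such draws per cell gives the (n_cells, n_regions, n_samples) shape.
x_new = [sample_beta_binomial(n, a, b) for n, a, b in zip(coverages, alphas, betas)]
```

Each sampled count necessarily falls between 0 and that region's coverage, mirroring how the model's predictive samples are bounded by the observed number of reads.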