scvi.module.VAEC#
- class scvi.module.VAEC(n_input, n_batch=0, n_labels=0, n_hidden=128, n_latent=5, n_layers=2, log_variational=True, ct_weight=None, dropout_rate=0.05, encode_covariates=False, extra_encoder_kwargs=None, extra_decoder_kwargs=None)[source]#
Bases:
BaseModuleClass
Conditional variational auto-encoder model.
This is an implementation of the CondSCVI model.
- Parameters:
  - **n_input** (int) – Number of input genes
  - **n_batch** (int, default: 0) – Number of batches. If 0, no batch correction is performed.
  - **n_labels** (int, default: 0) – Number of labels
  - **n_hidden** (int, default: 128) – Number of nodes per hidden layer
  - **n_latent** (int, default: 5) – Dimensionality of the latent space
  - **n_layers** (int, default: 2) – Number of hidden layers used for encoder and decoder NNs
  - **log_variational** (bool, default: True) – Log(data + 1) prior to encoding for numerical stability. Not normalization.
  - **ct_weight** (ndarray | None, default: None) – Multiplicative weight for cell-type-specific latent space.
  - **dropout_rate** (float, default: 0.05) – Dropout rate for the encoder and decoder neural networks.
  - **encode_covariates** (bool, default: False) – If True, covariates are concatenated to gene expression prior to passing through the encoder(s). Otherwise, only gene expression is used.
  - **extra_encoder_kwargs** (dict | None, default: None) – Keyword arguments passed into Encoder.
  - **extra_decoder_kwargs** (dict | None, default: None) – Keyword arguments passed into FCLayers.
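The `log_variational` and `encode_covariates` options above can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration of the documented behavior, not scvi-tools' actual implementation:

```python
import numpy as np

def prepare_encoder_input(x, covariates=None, log_variational=True,
                          encode_covariates=False):
    """Hypothetical sketch of VAEC's documented input handling:
    optional log(data + 1) transform and covariate concatenation."""
    x = np.asarray(x, dtype=float)
    if log_variational:
        # log(data + 1) tames large count values for numerical stability;
        # it is not a normalization step
        x = np.log1p(x)
    if encode_covariates and covariates is not None:
        # concatenate covariates (e.g. a one-hot batch encoding)
        # to gene expression before the encoder sees the input
        x = np.concatenate([x, covariates], axis=-1)
    return x

counts = np.array([[0.0, 10.0, 1000.0]])   # raw counts for 3 genes, 1 cell
batch_onehot = np.array([[1.0, 0.0]])      # 2 batches; this cell is in batch 0
out = prepare_encoder_input(counts, batch_onehot,
                            log_variational=True, encode_covariates=True)
print(out.shape)  # (1, 5): 3 log-transformed genes + 2 covariate columns
```

With `encode_covariates=False` (the default), only the (possibly log-transformed) expression matrix would reach the encoder.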
Attributes table#
Methods table#
- generative – Runs the generative model.
- inference – High-level inference method.
- loss – Loss computation.
- sample – Generate observation samples from the posterior predictive distribution.
Attributes#
- VAEC.training: bool#
Methods#
- VAEC.generative(z, library, y, batch_index=None, transform_batch=None)[source]#
Runs the generative model.
- VAEC.inference(x, y, batch_index=None, n_samples=1)[source]#
High-level inference method.
Runs the inference (encoder) model.
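Conceptually, the inference step maps expression to a Normal variational posterior q(z | x, y), from which latent samples are drawn via the reparameterization trick. The sketch below illustrates only that sampling idea in NumPy; the names `sample_latent`, `qz_mean`, and `qz_var` are hypothetical, not scvi-tools' API:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(qz_mean, qz_var, n_samples=1):
    """Reparameterized sampling: z = mu + sigma * eps, with eps ~ N(0, I).
    Hypothetical sketch; not scvi-tools' actual code."""
    qz_mean = np.asarray(qz_mean, dtype=float)
    sigma = np.sqrt(np.asarray(qz_var, dtype=float))
    eps = rng.standard_normal((n_samples,) + qz_mean.shape)
    return qz_mean + sigma * eps

# toy posterior for 2 cells in a 5-dimensional latent space (n_latent=5)
mu = np.zeros((2, 5))
var = np.full((2, 5), 0.1)
z = sample_latent(mu, var, n_samples=3)
print(z.shape)  # (3, 2, 5): n_samples x cells x latent dims
```

Sampling through a deterministic function of (mu, sigma, eps) keeps the draw differentiable with respect to the encoder outputs, which is what makes gradient-based training of the VAE possible.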