Tutorials#
The easiest way to get familiar with scvi-tools is to follow along with our tutorials. Many are also designed to work seamlessly in Google Colab, a free cloud computing platform. Tutorials by default work with the latest installable version of scvi-tools. To view older tutorials, change the documentation version using the tab at the bottom of the left sidebar.
Note
For questions about using scvi-tools, or broader questions about modeling data, please use our forum. Check out the ecosystem for additional models powered by scvi-tools.
Quick start#
User tutorials#
scRNA-seq#
- Atlas-level integration of lung data
- Integrating datasets with scVI in R
- Integration and label transfer with Tabula Muris
- Reference mapping with scvi-tools
- Querying the Human Lung Cell Atlas
- Seed labeling with scANVI
- Linearly decoded VAE
- Identification of zero-inflated genes
- Annotation with CellAssign
- Topic Modeling with Amortized LDA
ATAC-seq#
Spatial transcriptomics#
Multimodal#
scvi-hub#
Model hyperparameter tuning#
Contributed tutorials#
Developer tutorials#
Here we feature tutorials to guide you through the construction of a model with scvi-tools. For an example of how scvi-tools can be used in an independent package, see our simple-scvi example.
Glossary#
A Model class inherits BaseModelClass and is the user-facing object for interacting with a module. The model has a train method that learns the parameters of the module, and it also contains methods for users to retrieve information from the module, like the latent representation of cells in a VAE. Conventionally, the post-inference model methods should not store data in the AnnData object, but instead return "standard" Python objects, like numpy arrays or pandas dataframes.
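This division of labor can be illustrated with a minimal, framework-free sketch. The names here (`ToyModel`, `ToyModule`, and the random "training") are purely illustrative stand-ins, not the real scvi-tools API; the point is only the pattern of a user-facing model wrapping a parameter-holding module and returning plain numpy arrays:

```python
import numpy as np

class ToyModule:
    """Stand-in for a module: holds the learnable parameters."""
    def __init__(self, n_latent):
        self.n_latent = n_latent
        self.weights = None

class ToyModel:
    """User-facing wrapper: trains the module and exposes query methods."""
    def __init__(self, data, n_latent=2):
        self.data = data
        self.module = ToyModule(n_latent)

    def train(self):
        # A real model would optimize the module's parameters here;
        # this toy just draws a random projection.
        rng = np.random.default_rng(0)
        self.module.weights = rng.normal(
            size=(self.data.shape[1], self.module.n_latent)
        )

    def get_latent_representation(self):
        # Returns a plain numpy array rather than writing into the data object.
        return self.data @ self.module.weights

data = np.ones((5, 10))
model = ToyModel(data, n_latent=2)
model.train()
z = model.get_latent_representation()
print(z.shape)  # (5, 2)
```

Note that `get_latent_representation` returns an array and leaves `data` untouched, mirroring the convention described above.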
A module is the lower-level object that defines a generative model and inference scheme. A module will inherit either BaseModuleClass or PyroBaseModuleClass. Consequently, a module can be implemented either with PyTorch alone or with Pyro. In the PyTorch-only case, the generative process and inference scheme are implemented in the generative and inference methods, respectively, while the loss method computes the loss, e.g., the ELBO in the case of variational inference.
The training plan is a PyTorch Lightning Module that is initialized with a scvi-tools module object. It configures the optimizers, defines the training step and validation step, and computes metrics to be recorded during training. The training step and validation step are functions that take data, run it through the module, and return the loss, which is then used to optimize the module's parameters in the Trainer. Overall, custom training plans can be used to develop complex inference schemes on top of modules.
The Trainer is a lightweight wrapper of the PyTorch Lightning Trainer. It takes as input the training plan, a training dataloader, and a validation dataloader. It performs the actual training loop, in which parameters are optimized, as well as the validation loop to monitor metrics. It automatically handles moving data to the correct device (CPU/GPU).
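The division of labor between a training plan and a trainer can be sketched without PyTorch Lightning. Everything here is a hypothetical toy (a least-squares `ToyPlan` and a bare `fit` loop), not the real TrainingPlan or Trainer API; it only shows that the plan owns the parameter updates and metrics while the trainer owns the loop:

```python
import numpy as np

class ToyPlan:
    """Toy training plan: owns the parameters and defines the two steps."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def training_step(self, x, y):
        # Compute the gradient of the mean squared error and update parameters.
        grad = 2 * x.T @ (x @ self.w - y) / len(y)
        self.w -= self.lr * grad
        return float(np.mean((x @ self.w - y) ** 2))

    def validation_step(self, x, y):
        # Only compute the metric; no parameter update.
        return float(np.mean((x @ self.w - y) ** 2))

def fit(plan, train_data, val_data, max_epochs=50):
    """Toy trainer: loops over epochs, alternating train and validation."""
    for _ in range(max_epochs):
        plan.training_step(*train_data)
        val_loss = plan.validation_step(*val_data)
    return val_loss

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.0, -2.0, 0.5])
plan = ToyPlan(n_features=3)
final_val = fit(plan, (x[:80], y[:80]), (x[80:], y[80:]))
print(final_val)  # small: the plan has fit the linear relationship
```

The real Trainer additionally handles device placement and batching via the dataloaders, but the control flow, an outer loop calling the plan's steps, is the same shape.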