Note

This page was generated from model_user_guide.ipynb.

Constructing a high-level model

[1]:
%%capture
import sys
IN_COLAB = "google.colab" in sys.modules
if IN_COLAB:
    !pip install --quiet scvi-tools
[2]:
import numpy as np
import scvi
import torch

At this point we have covered

  1. Data registration via scvi.data.setup_anndata and dataloaders via AnnDataLoader

  2. Building a probabilistic model by subclassing BaseModuleClass

In this tutorial, we will cover the highest-level classes in scvi-tools: the model classes. The main purpose of these classes (e.g., scvi.model.SCVI) is to wrap module instantiation, training, and subsequent posterior queries into a convenient interface. These model classes are the fundamental objects driving scientific analysis of data with scvi-tools. By convention, we will refer to these objects as “models” and the lower-level objects presented in the previous tutorial as “modules”.
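To make the end goal concrete, here is the workflow these model classes enable, using the public scvi-tools API (a preview; each step is unpacked over the course of this tutorial):

adata = scvi.data.synthetic_iid()        # an AnnData already run through setup_anndata
model = scvi.model.SCVI(adata)           # wraps module instantiation
model.train()                            # wraps training
z = model.get_latent_representation()    # posterior query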

A simple model class

Here we will walk through an example of building the scvi.model.SCVI class, progressively adding functionality to it.

Sketch of BaseModelClass

Let us start with a high-level overview of the BaseModelClass that we will inherit. Note that this is pseudocode meant to provide intuition. We see that BaseModelClass contains some universally applicable methods, and some private methods (conventionally starting with _ in Python) that will become useful after training the model.

class MyModel(UnsupervisedTrainingMixin, BaseModelClass):

    def __init__(self, adata):
        # sets some basic attributes like is_trained_
        # record the setup_dict registered in the adata
        self.adata = adata
        self.scvi_setup_dict_ = adata.uns["_scvi"]
        self.summary_stats = self.scvi_setup_dict_["summary_stats"]

    def _validate_anndata(self, adata):
        # check that anndata is equivalent by comparing
        # to the initial setup_dict

    def _make_dataloader(self, adata):
        # return a dataloader to iterate over adata

    def train(...):
        # Universal train method provided by UnsupervisedTrainingMixin
        # BaseModelClass does not come with train
        # In general train methods are straightforward to compose manually

    def save(...):
        # universal save method
        # saves modules, anndata setup dict, and attributes ending with _

    def load(...):
        # universal load method

Baseline version of SCVI class

Let’s now create the simplest possible version of the SCVI class. We inherit BaseModelClass (together with UnsupervisedTrainingMixin) and write our __init__ method.

We take care to do the following:

  1. Set the module attribute to be equal to our VAE module, which here is the torch-level version of scVI.

  2. Add a _model_summary_string attribute, which will be used as a representation for the model.

  3. Run self.init_params_ = self._get_init_params(locals()), which stores the arguments used to initialize the model, facilitating saving/loading of the model later.

To initialize the VAE, we can use the information in self.summary_stats, which was stored in the AnnData object at setup_anndata() time. In this example, we have only exposed n_latent to users through SCVI. In practice, we try to expose only the most relevant parameters; any other module parameter can be passed through model_kwargs.

[3]:
from anndata import AnnData
from scvi.module import VAE
from scvi.model.base import BaseModelClass, UnsupervisedTrainingMixin

class SCVI(UnsupervisedTrainingMixin, BaseModelClass):
    """
    single-cell Variational Inference [Lopez18]_.
    """

    def __init__(
        self,
        adata: AnnData,
        n_latent: int = 10,
        **model_kwargs,
    ):
        super().__init__(adata)

        self.module = VAE(
            n_input=self.summary_stats["n_vars"],
            n_batch=self.summary_stats["n_batch"],
            n_latent=n_latent,
            **model_kwargs,
        )
        self._model_summary_string = (
            "SCVI Model with the following params: \nn_latent: {}"
        ).format(
            n_latent,
        )
        self.init_params_ = self._get_init_params(locals())

Now we explore what we can and cannot do with this model. Let’s get some data and initialize a SCVI instance. Of note, for testing purposes we like to use scvi.data.synthetic_iid(), which returns a simple, small AnnData object that has already been run through setup_anndata().

[4]:
adata = scvi.data.synthetic_iid()
adata
INFO     Using batches from adata.obs["batch"]
INFO     Using labels from adata.obs["labels"]
INFO     Using data from adata.X
INFO     Computing library size prior per batch
INFO     Using protein expression from adata.obsm['protein_expression']
INFO     Using protein names from adata.uns['protein_names']
INFO     Successfully registered anndata object containing 400 cells, 100 vars, 2 batches, 3
         labels, and 100 proteins. Also registered 0 extra categorical covariates and 0 extra
         continuous covariates.
INFO     Please do not further modify adata until model is trained.
[4]:
AnnData object with n_obs × n_vars = 400 × 100
    obs: 'batch', 'labels', '_scvi_batch', '_scvi_labels', '_scvi_local_l_mean', '_scvi_local_l_var'
    uns: 'protein_names', '_scvi'
    obsm: 'protein_expression'
[5]:
model = SCVI(adata)
model
SCVI Model with the following params:
n_latent: 10
Training status: Not Trained

To print summary of associated AnnData, use: scvi.data.view_anndata_setup(model.adata)

[6]:
model.train(max_epochs=20)
GPU available: True, used: True
TPU available: None, using: 0 TPU cores
Epoch 20/20: 100%|██████████| 20/20 [00:00<00:00, 42.08it/s, loss=305, v_num=1]

The train method

We were able to train this model because the train method is inherited through UnsupervisedTrainingMixin. Let us now take a look at pseudocode of the train method of UnsupervisedTrainingMixin. The role of each of these objects is described in the API reference.

def train(
    self,
    max_epochs: Optional[int] = 100,
    use_gpu: Optional[bool] = None,
    train_size: float = 0.9,
    validation_size: Optional[float] = None,
    batch_size: int = 128,
    **kwargs,
):
    """
    Train the model.
    """
    # object to make train/test/val dataloaders
    data_splitter = DataSplitter(
        self.adata,
        train_size=train_size,
        validation_size=validation_size,
        batch_size=batch_size,
        use_gpu=use_gpu,
    )
    # defines optimizers, training step, val step, logged metrics
    training_plan = TrainingPlan(
        self.module, len(data_splitter.train_idx),
    )
    # creates Trainer, pre and post training procedures (Trainer.fit())
    runner = TrainRunner(
        self,
        training_plan=training_plan,
        data_splitter=data_splitter,
        max_epochs=max_epochs,
        use_gpu=use_gpu,
        **kwargs,
    )
    return runner()
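Concretely, the arguments exposed in this signature can be passed straight through from our model (the values below are illustrative):

model.train(
    max_epochs=50,
    train_size=0.8,      # hold out 20% of cells for validation
    batch_size=256,
)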

We notice two new things in the pseudocode above:

  1. A training plan (training_plan)

  2. A train runner (runner)

The TrainRunner is a lightweight wrapper of PyTorch Lightning’s Trainer (https://pytorch-lightning.readthedocs.io/en/stable/trainer.html#trainer-class-api), which can be treated as a black box once a TrainingPlan is defined. So what does the TrainingPlan do?

  1. Configures optimizers (e.g., Adam), learning rate schedulers.

  2. Defines the training step, which runs a minibatch of data through the model and records the loss.

  3. Defines the validation step, same as training step, but for validation data.

  4. Records relevant metrics, such as the ELBO.

In scvi-tools we have scvi.lightning.TrainingPlan, which should cover many use cases, from VAEs and variational inference to MLE and MAP estimation. Developers may find that they need a custom TrainingPlan for, e.g., multiple optimizers or a complex training scheme. These can be written and used by the model class.
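As a minimal sketch of such a customization, the subclass below overrides the standard PyTorch Lightning configure_optimizers hook. We assume here that TrainingPlan behaves as a LightningModule and exposes the wrapped module as self.module; the optimizer choice and hyperparameters are purely illustrative:

import torch
from scvi.lightning import TrainingPlan

class CustomTrainingPlan(TrainingPlan):
    def configure_optimizers(self):
        # gather trainable parameters and swap in AdamW for the default optimizer
        params = filter(lambda p: p.requires_grad, self.module.parameters())
        return torch.optim.AdamW(params, lr=1e-3, weight_decay=1e-6)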

Developers may also override this train method to add custom functionality like early stopping (see TOTALVI’s train method). In most cases the higher-level train method can call super().train(), which dispatches to the inherited train method sketched above.
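A sketch of this override pattern (the pre-training logic here is a placeholder, and the exact keyword arguments accepted depend on the scvi-tools version):

class MySCVI(SCVI):
    def train(self, max_epochs: Optional[int] = 100, **kwargs):
        # custom setup, e.g. assembling early-stopping arguments,
        # would go here before delegating to the inherited method
        return super().train(max_epochs=max_epochs, **kwargs)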

Save and load

We can also save and load this model object, as it follows the expected structure.

[7]:
model.save("saved_model/", save_anndata=True)
model = SCVI.load("saved_model/")
INFO     Using data from adata.X
INFO     Computing library size prior per batch
INFO     Registered keys:['X', 'batch_indices', 'local_l_mean', 'local_l_var', 'labels',
         'protein_expression']
INFO     Successfully registered anndata object containing 400 cells, 100 vars, 2 batches, 3
         labels, and 100 proteins. Also registered 0 extra categorical covariates and 0 extra
         continuous covariates.

Writing methods to query the model

So we have a model that wraps a trained module. How can we get information out of the module and present it cleanly to our users? Let’s implement a simple example: getting the latent representation out of the VAE.

This method has the following structure:

  1. Validate the user-supplied data

  2. Create a data loader

  3. Iterate over the data loader and feed into the VAE, getting the tensor of interest out of the VAE.

[8]:
from typing import Optional, Sequence

@torch.no_grad()
def get_latent_representation(
    self,
    adata: Optional[AnnData] = None,
    indices: Optional[Sequence[int]] = None,
    batch_size: Optional[int] = None,
) -> np.ndarray:
    r"""
    Return the latent representation for each cell.

    Parameters
    ----------
    adata
        AnnData object with equivalent structure to initial AnnData. If `None`, defaults to the
        AnnData object used to initialize the model.
    indices
        Indices of cells in adata to use. If `None`, all cells are used.
    batch_size
        Minibatch size for data loading into model. Defaults to `scvi.settings.batch_size`.

    Returns
    -------
    latent_representation : np.ndarray
        Low-dimensional representation for each cell
    """
    if not self.is_trained_:
        raise RuntimeError("Please train the model first.")

    adata = self._validate_anndata(adata)
    dataloader = self._make_dataloader(adata=adata, indices=indices, batch_size=batch_size)
    latent = []
    for tensors in dataloader:
        inference_inputs = self.module._get_inference_input(tensors)
        outputs = self.module.inference(**inference_inputs)
        qz_m = outputs["qz_m"]

        latent += [qz_m.cpu()]
    return torch.cat(latent).numpy()

Note

Validating the anndata is critical to the user experience. If None is passed, it simply returns the anndata used to initialize the model; if a different object is passed, it checks that this new object is structurally equivalent to the anndata the model was set up with. We took great care in engineering this function so as to allow passing anndata objects with potentially missing categories (e.g., the model was trained on batches ["A", "B", "C"], but the passed anndata only has ["B", "C"]). These sorts of checks ensure that your module sees data in the format it expects, and that users get the results they expect without advanced data manipulation.
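For example, because the function above takes self explicitly, we can exercise this validation path by calling it directly on the trained model with a one-batch subset (a sketch; the "batch_0" value comes from the synthetic data registered earlier):

adata_subset = adata[adata.obs["batch"] == "batch_0"].copy()
z = get_latent_representation(model, adata_subset)
print(z.shape)  # (n_cells_in_subset, n_latent)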

As a convention, we like to keep the module code as bare as possible and leave all posterior manipulation of module tensors to the model class methods. However, it would have been possible to write a get_z method in the module and have the model class simply call that method, as sketched below.
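A hypothetical sketch of that alternative, purely for illustration (this is not the scvi-tools convention):

class VAEWithHelper(VAE):
    @torch.no_grad()
    def get_z(self, tensors) -> torch.Tensor:
        # run the encoder on one minibatch and return the mean of q(z | x)
        inference_inputs = self._get_inference_input(tensors)
        outputs = self.inference(**inference_inputs)
        return outputs["qz_m"]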

Mixing in pre-coded features

We have a number of Mixin classes that can add functionality to your model through inheritance. Here we demonstrate the VAEMixin class (https://www.scvi-tools.org/en/stable/api/reference/scvi.model.base.VAEMixin.html#scvi.model.base.VAEMixin).

Let’s try to get the latent representation from the object we already created.

[9]:
try:
    model.get_latent_representation()
except AttributeError:
    print("This function does not exist")
This function does not exist

This method becomes available once the VAEMixin is inherited. Here’s an overview of the mixin’s methods, which are coded generally enough that they should be broadly useful to those building VAEs.

from typing import Dict, Optional, Sequence, Union

class VAEMixin:
    @torch.no_grad()
    def get_elbo(
        self,
        adata: Optional[AnnData] = None,
        indices: Optional[Sequence[int]] = None,
        batch_size: Optional[int] = None,
    ) -> float:
        pass

    @torch.no_grad()
    def get_marginal_ll(
        self,
        adata: Optional[AnnData] = None,
        indices: Optional[Sequence[int]] = None,
        n_mc_samples: int = 1000,
        batch_size: Optional[int] = None,
    ) -> float:
        pass

    @torch.no_grad()
    def get_reconstruction_error(
        self,
        adata: Optional[AnnData] = None,
        indices: Optional[Sequence[int]] = None,
        batch_size: Optional[int] = None,
    ) -> Union[float, Dict[str, float]]:
        pass

    @torch.no_grad()
    def get_latent_representation(
        self,
        adata: Optional[AnnData] = None,
        indices: Optional[Sequence[int]] = None,
        give_mean: bool = True,
        mc_samples: int = 5000,
        batch_size: Optional[int] = None,
    ) -> np.ndarray:
        pass

Let’s now inherit the mixin into our SCVI class.

[12]:
from scvi.model.base import VAEMixin, UnsupervisedTrainingMixin

class SCVI(VAEMixin, UnsupervisedTrainingMixin, BaseModelClass):
    """
    single-cell Variational Inference [Lopez18]_.
    """

    def __init__(
        self,
        adata: AnnData,
        n_latent: int = 10,
        **model_kwargs,
    ):
        super().__init__(adata)

        self.module = VAE(
            n_input=self.summary_stats["n_vars"],
            n_batch=self.summary_stats["n_batch"],
            n_latent=n_latent,
            **model_kwargs,
        )
        self._model_summary_string = (
            "SCVI Model with the following params: \nn_latent: {}"
        ).format(
            n_latent,
        )
        self.init_params_ = self._get_init_params(locals())
[13]:
model = SCVI(adata)
model.train(10)
model.get_latent_representation()
GPU available: True, used: True
TPU available: None, using: 0 TPU cores
Epoch 10/10: 100%|██████████| 10/10 [00:00<00:00, 43.13it/s, loss=312, v_num=1]
[13]:
array([[ 7.24648237e-01, -6.79293945e-02,  2.24610746e-01, ...,
        -1.60631642e-01, -4.38390583e-01, -1.13472319e+00],
       [-8.44153583e-01, -4.75247622e-01,  2.80124843e-02, ...,
        -1.94080174e-04,  8.48569334e-01,  3.32585931e-01],
       [-1.54564455e-01, -2.80499250e-01, -1.11564890e-01, ...,
        -9.88644898e-01,  6.64949536e-01, -7.96886533e-02],
       ...,
       [ 4.71180618e-01, -5.71987391e-01, -4.22892049e-02, ...,
        -3.70038971e-02,  2.39081487e-01, -2.67369717e-01],
       [ 6.74252212e-01, -7.91834950e-01, -1.84910953e-01, ...,
        -6.05610073e-01,  1.00632414e-01,  3.61604303e-01],
       [ 1.94981873e-01, -7.31289536e-02, -8.98141861e-02, ...,
        -3.45393956e-01, -4.50782865e-01, -3.28205645e-01]], dtype=float32)

Summary

We learned the structure of the high-level model classes in scvi-tools and saw how a simple version of SCVI is implemented.

Questions? Comments? Keep the discussion going on our forum