This page was generated from harmonization.ipynb. An interactive version is available online in Colab.

Atlas-level integration and label transfer

An important task of single-cell analysis is the integration of several datasets. scVI can be used for this purpose. We can also use scANVI, an end-to-end framework for transfer of annotations. Here we demonstrate this functionality with an integrated analysis of cells from Tabula Muris. The same pipeline would generally be used to analyze a collection of scRNA-seq datasets.

import sys

# If the branch is "stable", install from PyPI; otherwise install from source
branch = "stable"
IN_COLAB = "google.colab" in sys.modules

if IN_COLAB and branch == "stable":
    !pip install --quiet scvi-tools[tutorials]
elif IN_COLAB and branch != "stable":
    !pip install --quiet --upgrade jsonschema
    !pip install --quiet git+$branch#egg=scvi-tools[tutorials]
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd

import scanpy as sc
import scvi

sc.set_figure_params(figsize=(4, 4))
Global seed set to 0
dataset1 =
dataset2 =
dataset1.obs = pd.read_csv(
dataset2.obs = pd.read_csv(

We subset to labelled bone marrow cells because it is a reasonably sized dataset that will allow us to validate our method’s ability to transfer labels from one dataset to another.

dataset1 = dataset1[
    (dataset1.obs.tissue == "Marrow") & (~dataset1.obs.cell_ontology_class.isna())
]
dataset2 = dataset2[
    (dataset2.obs.tissue == "Marrow") & (~dataset2.obs.cell_ontology_class.isna())
]
dataset1.shape, dataset2.shape
((3652, 23433), (5037, 23433))

Dataset preprocessing

Normalize Smartseq2 matrix by gene length

We apply gene-length normalization because, unless UMIs are used, the number of reads representing a transcript is proportional to the length of the transcript. A discussion of this phenomenon can be found in this 2017 paper by Phipson et al. Other than gene-length normalization, no other normalization is needed: scVI and scANVI are designed to handle sequencing depth and do not need cell-wise normalization. Normalizing and scaling the data would be detrimental to the performance of scVI and scANVI, since they explicitly model count data.

The gene length file here is computed by taking the average length of all transcripts corresponding to a mouse gene recorded in the Ensembl database.

gene_len = pd.read_csv(
    delimiter=" ",
0610007C21Rik 94.571429
0610007L01Rik 156.000000
0610007P08Rik 202.272727
0610007P14Rik 104.000000
0610007P22Rik 158.750000
gene_len = gene_len.reindex(dataset2.var.index).dropna()
dataset2 = dataset2[:, gene_len.index]
assert (dataset2.var.index == gene_len.index).sum() == dataset2.shape[1]
dataset2.X = dataset2.X / gene_len[1].values * np.median(gene_len[1].values)
# round to integer
dataset2.X = np.rint(dataset2.X)
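The arithmetic of this normalization can be checked on a toy matrix (all values here are hypothetical, not taken from Tabula Muris):

```python
import numpy as np

# Toy read-count matrix: 2 cells x 3 genes (hypothetical values).
X = np.array([[100.0, 50.0, 30.0],
              [200.0, 10.0, 60.0]])
# Hypothetical average transcript lengths for the 3 genes.
lengths = np.array([500.0, 1000.0, 250.0])

# Divide each gene by its length, rescale by the median length so the
# values stay on a count-like scale, and round back to integers.
X_norm = np.rint(X / lengths * np.median(lengths))
print(X_norm)
```

Long genes are scaled down and short genes are scaled up, so the corrected values approximate what a UMI protocol would have measured.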

Dataset concatenation and gene selection

Another important thing to keep in mind is highly variable gene selection. While scVI and scANVI both accommodate large gene sets in terms of runtime, we usually recommend filtering genes for best performance when the dataset has a small number of cells. As a rule of thumb, performance starts to decrease when the number of cells and the number of genes are comparable. This point is emphasized in this comparative analysis of data integration algorithms for scRNA-seq data.

We perform this gene selection using the Scanpy pipeline while keeping the raw data in the adata.raw object. We obtain variable genes from each dataset and take their intersection.

adata = dataset1.concatenate(dataset2)
adata.layers["counts"] = adata.X.copy()
sc.pp.normalize_total(adata, target_sum=1e4)
adata.raw = adata  # keep full dimension safe

Integration with scVI

As a first step, we assume that the data is completely unlabelled and we wish to find common axes of variation between the two datasets. There are many methods available in scanpy for this purpose (BBKNN, Scanorama, etc.). In this notebook we present scVI. To run scVI, we simply need to:

  • Register the AnnData object with the correct key to identify the sample.

  • Create an SCVI model object.

scvi.model.SCVI.setup_anndata(adata, layer="counts", batch_key="batch")
INFO     Using batches from adata.obs["batch"]
INFO     No label_key inputted, assuming all cells have same label
INFO     Using data from adata.layers["counts"]
INFO     Successfully registered anndata object containing 8689 cells, 2000 vars, 2 batches,
         1 labels, and 0 proteins. Also registered 0 extra categorical covariates and 0 extra
         continuous covariates.
INFO     Please do not further modify adata until model is trained.
vae = scvi.model.SCVI(adata, n_layers=2, n_latent=30)

Now we train scVI. This should take a couple of minutes on a Colab session.

vae.train()

GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Epoch 400/400: 100%|██████████| 400/400 [03:53<00:00,  1.71it/s, loss=1.49e+03, v_num=1]

Once the training is done, we can evaluate the latent representation of each cell in the dataset and add it to the AnnData object

adata.obsm["X_scVI"] = vae.get_latent_representation()

Finally, we can cluster the dataset and visualize it with UMAP

sc.pp.neighbors(adata, use_rep="X_scVI")
sc.tl.leiden(adata)
sc.tl.umap(adata)
sc.pl.umap(
    adata,
    color=["batch", "leiden"],
    frameon=False,
)

Because this combination of datasets is used for benchmarking purposes, we have access here to curated annotations. We can use those to assess whether the harmonization worked reasonably well.

sc.pl.umap(adata, color="cell_ontology_class", frameon=False)

From a quick glance, it looks like the integration worked well. Indeed, the two datasets are relatively mixed in latent space and the cell types cluster together. A more refined analysis may be done at the level of markers.

Transfer of annotations with scANVI

We now investigate a different although complementary problem. Previously, we used scVI as we assumed we did not have any cell type annotation available to guide us. Consequently, one would need to use marker genes in order to annotate the clusters from the previous analysis.

Now, we assume that one dataset plays the role of the reference data, with known labels, and one is the query. We will use scANVI to transfer our cell type knowledge from the reference to the query data. For this, we simply need to indicate to scANVI:

  • the sample identifier for each cell (as in scVI)

  • the cell type, or an unassigned label, for each cell

We assume that the Smartseq2 data is annotated and the 10X data is not. Only the labels of the cells from the annotated Smartseq2 dataset will be kept in the adata.obs column "celltype_scanvi"; all 10X cells will have the value "Unknown" in "celltype_scanvi".

adata.obs["celltype_scanvi"] = "Unknown"
ss2_idx = adata.obs["batch"] == "1"
# Assign with .loc to avoid pandas' SettingWithCopyWarning
adata.obs.loc[ss2_idx, "celltype_scanvi"] = adata.obs.cell_ontology_class[ss2_idx]
np.unique(adata.obs["celltype_scanvi"], return_counts=True)
(array(['B cell', 'Slamf1-negative multipotent progenitor cell',
        'Slamf1-positive multipotent progenitor cell', 'Unknown',
        'basophil', 'common lymphoid progenitor', 'granulocyte',
        'granulocyte monocyte progenitor cell', 'granulocytopoietic cell',
        'hematopoietic precursor cell', 'immature B cell',
        'immature NK T cell', 'immature T cell',
        'immature natural killer cell', 'late pro-B cell', 'macrophage',
        'mature natural killer cell',
        'megakaryocyte-erythroid progenitor cell', 'monocyte',
        'naive B cell', 'pre-natural killer cell', 'precursor B cell',
        'regulatory T cell'], dtype=object),
 array([  44,  713,  134, 3652,   25,  156,  761,  134,  221,  265,  344,
          37,   60,   36,  306,  173,   49,   55,  266,  692,   22,  517,

Now we may register the AnnData object and run scANVI.

INFO     Using batches from adata.obs["batch"]
INFO     Using labels from adata.obs["celltype_scanvi"]
INFO     Using data from adata.layers["counts"]
INFO     Successfully registered anndata object containing 8689 cells, 2000 vars, 2 batches,
         23 labels, and 0 proteins. Also registered 0 extra categorical covariates and 0
         extra continuous covariates.
INFO     Please do not further modify adata until model is trained.

Since we have already trained an scVI model on our data, we will use it to initialize scANVI.

lvae = scvi.model.SCANVI.from_scvi_model(vae, "Unknown", adata=adata)
lvae.train(max_epochs=20, n_samples_per_label=100)
INFO     Training for 20 epochs.
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Epoch 20/20: 100%|██████████| 20/20 [00:26<00:00,  1.33s/it, loss=1.6e+03, v_num=1]

Now we can predict the missing cell types and get the latent space.

adata.obs["C_scANVI"] = lvae.predict(adata)
adata.obsm["X_scANVI"] = lvae.get_latent_representation(adata)

Again, we may visualize the latent space as well as the inferred labels

sc.pp.neighbors(adata, use_rep="X_scANVI")
sc.tl.umap(adata)
sc.pl.umap(adata, color=["cell_ontology_class", "C_scANVI"], ncols=1, frameon=False)

Now we can observe scANVI’s performance using a confusion matrix.

df = adata.obs.groupby(["cell_ontology_class", "C_scANVI"]).size().unstack(fill_value=0)
df = df / df.sum(axis=0)

plt.figure(figsize=(8, 8))
_ = plt.pcolor(df)
_ = plt.xticks(np.arange(0.5, len(df.columns), 1), df.columns, rotation=90)
_ = plt.yticks(np.arange(0.5, len(df.index), 1), df.index)
_ = plt.xlabel("Predicted")
_ = plt.ylabel("Observed")
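The groupby/unstack/normalize pattern used above can be sanity-checked on a toy set of labels (the cells and labels here are hypothetical):

```python
import pandas as pd

# Hypothetical observed and predicted labels for six cells.
obs = pd.DataFrame(
    {
        "observed": ["B cell", "B cell", "monocyte", "monocyte", "monocyte", "B cell"],
        "predicted": ["B cell", "B cell", "monocyte", "B cell", "monocyte", "B cell"],
    }
)

# Count (observed, predicted) pairs, pivot predictions into columns,
# then normalize each column so it sums to one.
cm = obs.groupby(["observed", "predicted"]).size().unstack(fill_value=0)
cm = cm / cm.sum(axis=0)
print(cm)
```

Each column then gives, for one predicted label, the fraction of those predictions coming from each observed label.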

As this confusion matrix shows, scANVI's latent space separates all cell types and performs well at classifying the major cell types. Since the 10X data is labelled at a lower resolution, the transferred labels are not always identical to the original labels; biologically, however, the transferred labels are subsets of the original labels.