scvi.external.scbasset.ScBassetModule#

class scvi.external.scbasset.ScBassetModule(n_cells, batch_ids=None, n_filters_init=288, n_repeat_blocks_tower=6, filters_mult=1.122, n_filters_pre_bottleneck=256, n_bottleneck_layer=32, batch_norm=True, dropout=0.0, l2_reg_cell_embedding=0.0)[source]#

Bases: BaseModuleClass

PyTorch implementation of ScBasset [Yuan and Kelley, 2022]

Original implementation in Keras: https://github.com/calico/scBasset

Parameters:
  • n_cells (int) – Number of cells for which region accessibility is predicted

  • batch_ids (Optional[ndarray] (default: None)) – Array of shape (n_cells,) with the batch id of each cell

  • n_filters_init (int (default: 288)) – Number of filters for the initial conv layer

  • n_repeat_blocks_tower (int (default: 6)) – Number of layers in the convolutional tower

  • filters_mult (float (default: 1.122)) – Factor by which the number of filters increases in each block of the convolutional tower

  • n_filters_pre_bottleneck (int (default: 256)) – Number of filters in the conv layer preceding the bottleneck

  • n_bottleneck_layer (int (default: 32)) – Size of the bottleneck layer

  • batch_norm (bool (default: True)) – Whether to apply batch norm across model layers

  • dropout (float (default: 0.0)) – Dropout rate; applied to the dense layers but not to the convolutional layers

  • l2_reg_cell_embedding (float (default: 0.0)) – L2 regularization for the cell embedding layer
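To illustrate how n_filters_init, filters_mult, and n_repeat_blocks_tower interact, the sketch below computes the filter count of each tower block by repeated multiplication. This is a stdlib-only illustration of the growth schedule; the exact rounding rule applied to the fractional filter counts is an assumption, not taken from the scvi-tools source.

```python
def tower_filter_counts(n_filters_init=288, filters_mult=1.122, n_blocks=6):
    """Illustrative filter counts per convolutional tower block.

    Each block scales the previous filter count by `filters_mult`;
    rounding to the nearest integer is an assumption for this sketch.
    """
    counts = []
    filters = float(n_filters_init)
    for _ in range(n_blocks):
        filters *= filters_mult
        counts.append(round(filters))
    return counts

print(tower_filter_counts())
```

With the defaults, the filter count grows geometrically from 288 across the six tower blocks, roughly doubling by the final block.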

Attributes table#

Methods table#

generative(region_embedding)

Generative method for the model.

inference(dna_code)

Inference method for the model.

loss(tensors, inference_outputs, ...)

Loss function for the model.

Attributes#

training

ScBassetModule.training: bool#

Methods#

generative

ScBassetModule.generative(region_embedding)[source]#

Generative method for the model.

Return type:

Dict[str, Tensor]
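In the scBasset design, per-cell accessibility logits come from combining a region embedding with a learned embedding vector per cell. The stdlib sketch below shows that combination as a dot product plus a per-cell bias; the function name, argument names, and shapes are illustrative assumptions, not the module's actual internals.

```python
def accessibility_logits(region_embedding, cell_embeddings, cell_bias):
    """Illustrative generative step (names and shapes are assumptions).

    region_embedding: length-d list (d = bottleneck size)
    cell_embeddings:  n_cells lists, each of length d
    cell_bias:        length n_cells list

    Returns one accessibility logit per cell: the dot product of the
    region embedding with that cell's embedding, plus the cell's bias.
    """
    return [
        sum(r * c for r, c in zip(region_embedding, emb)) + bias
        for emb, bias in zip(cell_embeddings, cell_bias)
    ]
```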

inference

ScBassetModule.inference(dna_code)[source]#

Inference method for the model.

Return type:

Dict[str, Tensor]
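The inference method consumes an encoded DNA sequence. As a stdlib-only illustration of preparing such an input, the sketch below maps a nucleotide string to integer codes; the specific base-to-integer mapping (A=0, C=1, G=2, T=3) is an assumption for this example, not necessarily the encoding scvi-tools uses internally.

```python
# Assumed base-to-integer mapping for this sketch only.
BASE_TO_CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_dna(seq):
    """Map a DNA string to a list of integer codes (case-insensitive)."""
    return [BASE_TO_CODE[base] for base in seq.upper()]

print(encode_dna("acgt"))
```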

loss

ScBassetModule.loss(tensors, inference_outputs, generative_outputs)[source]#

Loss function for the model.

Return type:

LossOutput
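The original scBasset paper trains the model as a binary classifier of per-cell region accessibility with a cross-entropy objective. As a stdlib sketch of that objective for a single (region, cell) entry, the function below computes binary cross-entropy from a raw logit; this is an illustration of the loss family, not the module's exact loss implementation.

```python
import math

def binary_cross_entropy_from_logit(logit, target):
    """Binary cross-entropy for one prediction, given a raw logit.

    `target` is 0.0 or 1.0 (region closed/open in that cell).
    Sketch of the objective family scBasset trains with.
    """
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))
```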