class scvi.nn.FCLayers(n_in, n_out, n_cat_list=None, n_layers=1, n_hidden=128, dropout_rate=0.1, use_batch_norm=True, use_layer_norm=False, use_activation=True, bias=True, inject_covariates=True, activation_fn=<class 'torch.nn.modules.activation.ReLU'>)[source]

Bases: torch.nn.modules.module.Module

A helper class to build fully-connected layers for a neural network.

n_in : int

The dimensionality of the input

n_out : int

The dimensionality of the output

n_cat_list : Optional[Iterable[int]] (default: None)

A list containing, for each category of interest, the number of categories. Each category will be included using a one-hot encoding.

n_layers : int (default: 1)

The number of fully-connected hidden layers

n_hidden : int (default: 128)

The number of nodes per hidden layer

dropout_rate : float (default: 0.1)

Dropout rate to apply to each of the hidden layers

use_batch_norm : bool (default: True)

Whether to have BatchNorm layers or not

use_layer_norm : bool (default: False)

Whether to have LayerNorm layers or not

use_activation : bool (default: True)

Whether to have layer activation or not

bias : bool (default: True)

Whether to learn bias in linear layers or not

inject_covariates : bool (default: True)

Whether to inject covariates into every hidden layer (default), or only into the first layer

activation_fn : Module (default: <class 'torch.nn.modules.activation.ReLU'>)

Which activation function to use
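The role of n_cat_list can be illustrated with a minimal sketch in plain PyTorch (this is not the scvi implementation; the tensor names and sizes are made up for illustration): each categorical covariate is one-hot encoded and concatenated to the layer input, so the linear layer's input width grows by the total number of categories.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch, not scvi code: one-hot encode each covariate in
# n_cat_list and concatenate it to x before a fully-connected layer.
n_in, n_out = 10, 5
n_cat_list = [3, 2]  # e.g. 3 batches and 2 experimental conditions

# The layer's input grows by the total number of one-hot columns.
layer = torch.nn.Linear(n_in + sum(n_cat_list), n_out)

x = torch.randn(4, n_in)            # minibatch of 4 samples
batch = torch.tensor([0, 1, 2, 0])  # first covariate (3 categories)
cond = torch.tensor([1, 0, 1, 1])   # second covariate (2 categories)

one_hots = [
    F.one_hot(c, num_classes=n).float()
    for c, n in zip((batch, cond), n_cat_list)
]
out = layer(torch.cat([x, *one_hots], dim=-1))
print(out.shape)  # torch.Size([4, 5])
```

Passing the covariates as category indices (rather than pre-encoded matrices) keeps the interface compact; the one-hot expansion happens inside the layer stack.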



forward(x, *cat_list)

Forward computation on x.


inject_into_layer(layer_num)

Helper to determine if covariates should be injected.
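The injection decision reduces to a small predicate. Below is a hypothetical standalone version of that logic (the real helper is a method that reads the instance's inject_covariates attribute): covariates always enter the first layer, and enter deeper layers only when inject_covariates is True.

```python
def inject_into_layer(layer_num: int, inject_covariates: bool) -> bool:
    # Covariates are always concatenated to the input of layer 0;
    # for deeper layers this depends on the inject_covariates flag.
    return layer_num == 0 or inject_covariates

print(inject_into_layer(0, False))  # True
print(inject_into_layer(1, False))  # False
print(inject_into_layer(1, True))   # True
```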