AmortizedLDA.train(max_epochs=None, use_gpu=None, train_size=0.9, validation_size=None, batch_size=128, early_stopping=False, lr=None, plan_kwargs=None, **trainer_kwargs)

Train the model.

max_epochs : int | None (default: None)

Number of passes through the dataset. If None, defaults to np.min([round((20000 / n_cells) * 400), 400]), where n_cells is the number of cells in the dataset.
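As a sketch of that default, the heuristic scales the epoch count inversely with dataset size and caps it at 400 (the helper name below is illustrative, not part of the API):

```python
import numpy as np

def default_max_epochs(n_cells: int) -> int:
    # Heuristic from the docstring: more cells -> fewer passes,
    # capped at 400 epochs.
    return int(np.min([round((20000 / n_cells) * 400), 400]))

print(default_max_epochs(10_000))     # hits the 400-epoch cap
print(default_max_epochs(1_000_000))  # large dataset: only 8 passes
```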

use_gpu : str | int | bool | None (default: None)

Use default GPU if available (if None or True), or index of GPU to use (if int), or name of GPU (if str, e.g., 'cuda:0'), or use CPU (if False).

train_size : float (default: 0.9)

Size of training set in the range [0.0, 1.0].

validation_size : float | None (default: None)

Size of the validation set. If None, defaults to 1 - train_size. If train_size + validation_size < 1, the remaining cells belong to a test set.

batch_size : int (default: 128)

Minibatch size to use during training. If None, no minibatching occurs and all data is copied to device (e.g., GPU).

early_stopping : bool (default: False)

Perform early stopping. Additional arguments can be passed in **trainer_kwargs. See Trainer for further options.

lr : float | None (default: None)

Optimiser learning rate (the default optimiser is ClippedAdam). Specifying an optimiser via plan_kwargs overrides this choice of lr.

plan_kwargs : dict | None (default: None)

Keyword args for TrainingPlan. Keyword arguments passed to train() will overwrite values present in plan_kwargs, when appropriate.

**trainer_kwargs

Other keyword args for Trainer.