class scvi.module.base.PyroBaseModuleClass[source]#

Bases: torch.nn.modules.module.Module

Base module class for Pyro models.

In Pyro, the model and guide should have the same signature. For convenience, the forward function of this class passes through to the forward of the model.

There are two ways this class can be equipped with a model and a guide. First, model and guide can be class attributes that are PyroModule instances. The implemented model and guide class methods can then return the (private) attributes. Second, model and guide methods can be written directly (see the Pyro scANVI example).

The model and guide may also be equipped with n_obs attributes, which can be set to None (e.g., self.n_obs = None). This attribute may be helpful in designating the size of observation-specific Pyro plates. The value will be updated automatically by PyroTrainingPlan, provided that it is given the number of training examples upon initialization.

Attributes table#



list_obs_plate_vars

Model annotation for minibatch training with pyro plate.


Methods table#

create_predictive([model, ...])

Creates a Predictive object.

forward(*args, **kwargs)

Passthrough to Pyro model.




PyroBaseModuleClass.T_destination#

alias of TypeVar('T_destination', bound=Mapping[str, torch.Tensor])

PyroBaseModuleClass.dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved as in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.




PyroBaseModuleClass.list_obs_plate_vars#

Model annotation for minibatch training with pyro plate.

A dictionary with:

1. "name" - the name of the observation/minibatch plate;
2. "in" - indexes of model args to provide to the encoder network when using amortised inference;
3. "sites" - a dictionary with:

   keys - names of variables that belong to the observation plate (used to recognise and merge posterior samples for minibatch variables);

   values - the dimensions in the non-plate axis of each variable (used to construct the output layer of the encoder network when using amortised inference).
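The structure described above can be written out as a plain dictionary. The plate name, site names, and dimensions below are hypothetical, chosen only to illustrate the shape of the annotation:

```python
# Hypothetical example of the plate-annotation dictionary; site names and
# dimensions are illustrative, not taken from any real model.
list_obs_plate_vars = {
    "name": "obs_plate",  # name of the observation/minibatch plate
    "in": [0],            # indexes of model args fed to the encoder (amortised inference)
    "sites": {
        # variable name -> size of its non-plate axis, used to build the
        # encoder's output layer and to merge minibatch posterior samples
        "per_cell_weights": 1,
        "detection_efficiency": 1,
    },
}
```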



PyroBaseModuleClass.training: bool#



PyroBaseModuleClass.create_predictive(model=None, posterior_samples=None, guide=None, num_samples=None, return_sites=(), parallel=False)[source]#

Creates a Predictive object.

model : Callable | None (default: None)

Python callable containing Pyro primitives. Defaults to self.model.

posterior_samples : dict | None (default: None)

Dictionary of samples from the posterior.

guide : Callable | None (default: None)

Optional guide to get posterior samples of sites not present in posterior_samples. Defaults to self.guide.

num_samples : int | None (default: None)

Number of samples to draw from the predictive distribution. This argument has no effect if posterior_samples is non-empty, in which case, the leading dimension size of samples in posterior_samples is used.

return_sites : Tuple[str] (default: ())

Sites to return; by default only sample sites not present in posterior_samples are returned.

parallel : bool (default: False)

Predict in parallel by wrapping the existing model in an outermost plate messenger. Note that this requires that the model has all batch dims correctly annotated via plate. Default is False.

Return type

Predictive

PyroBaseModuleClass.forward(*args, **kwargs)[source]#

Passthrough to Pyro model.