scvi.distributions.BetaBinomial#
- class scvi.distributions.BetaBinomial(total_count, alpha=None, beta=None, mu=None, gamma=None, validate_args=False, eps=1e-08)[source]#
Beta-binomial distribution. One of the following parameterizations must be provided:
1. (alpha, beta, total_count), where alpha and beta are the shape parameters of the beta distribution and total_count is the number of trials.
2. (mu, gamma, total_count), which is the parameterization used by methylVI. These parameters respectively control the mean and dispersion of the distribution.
In the (mu, gamma) parameterization, samples from the beta-binomial are generated as follows:
\(p_i \sim \textrm{Beta}(\mu, \gamma)\)
\(y_i \sim \textrm{Ber}(p_i)\)
\(y = \sum_{i} y_i\)
- Parameters:
  - total_count (Tensor) – Number of trials. Must be a non-negative integer.
  - alpha (Tensor | None (default: None)) – First shape parameter of the underlying beta distribution. Must be greater than 0.
  - beta (Tensor | None (default: None)) – Second shape parameter of the underlying beta distribution. Must be greater than 0.
  - mu (Tensor | None (default: None)) – Mean of the distribution. Must be within the interval (0, 1).
  - gamma (Tensor | None (default: None)) – Dispersion of the distribution. Must be within the interval (0, 1).
  - validate_args (bool (default: False)) – Raise ValueError if arguments do not match the constraints.
  - eps (float (default: 1e-08)) – Numerical stability constant (see Notes).
Notes
Under the hood we use Pyro's BetaBinomial to implement the beta-binomial distribution. Thus, when the user specifies a (mu, gamma) parameterization, we must convert to the (alpha, beta) parameterization used by the underlying Pyro distribution class. During this process, numerical stability issues sometimes cause alpha or beta to be exactly zero. This is not allowed (alpha and beta must be strictly greater than 0), so we clamp these values to be greater than a small constant eps.
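For orientation, the following minimal sketch is not part of the original reference and uses purely illustrative tensor values; it shows both documented parameterizations and a few inherited distribution methods:
import torch
from scvi.distributions import BetaBinomial

total_count = torch.tensor([10.0, 10.0])  # number of trials per batch element

# (alpha, beta) parameterization: shape parameters of the beta prior
d_ab = BetaBinomial(total_count=total_count,
                    alpha=torch.tensor([2.0, 5.0]),
                    beta=torch.tensor([3.0, 1.0]))

# (mu, gamma) parameterization: mean and dispersion, as used by methylVI
d_mg = BetaBinomial(total_count=total_count,
                    mu=torch.tensor([0.4, 0.8]),
                    gamma=torch.tensor([0.3, 0.1]))

x = d_mg.sample()          # one draw per batch element, shape (2,)
print(d_mg.log_prob(x))    # log-probability of the draw, shape (2,)
print(d_ab.mean)           # expected counts under the (alpha, beta) parameterization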
Attributes table#
- batch_shape – Returns the shape over which parameters are batched.
- event_dim – Number of dimensions of individual events.
- event_shape – Returns the shape of a single sample (without batching).
- mean – Returns the mean of the distribution.
- mode – Returns the mode of the distribution.
- rv – EXPERIMENTAL Switch to the Random Variable DSL for applying transformations to random variables.
- stddev – Returns the standard deviation of the distribution.
- variance – Returns the variance of the distribution.
Methods table#
- cdf – Returns the cumulative density/mass function evaluated at value.
- conjugate_update – EXPERIMENTAL Creates an updated distribution fusing information from another compatible distribution.
- entropy – Returns entropy of distribution, batched over batch_shape.
- enumerate_support – Returns tensor containing all values supported by a discrete distribution.
- expand – Returns a new ExpandedDistribution instance with batch dimensions expanded to batch_shape.
- expand_by – Expands a distribution by adding sample_shape to the left side of its batch_shape.
- has_rsample_ – Force reparameterized or detached sampling on a single distribution instance.
- icdf – Returns the inverse cumulative density/mass function evaluated at value.
- infer_shapes – Infers batch_shape and event_shape given shapes of args to __init__().
- log_prob – Returns the log of the probability density/mass function evaluated at value.
- mask – Masks a distribution by a boolean or boolean-valued tensor that is broadcastable to the distribution's batch_shape.
- perplexity – Returns perplexity of distribution, batched over batch_shape.
- rsample – Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched.
- sample – Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.
- sample_n – Generates n samples or n batches of samples if the distribution parameters are batched.
- score_parts – Computes ingredients for stochastic gradient estimators of ELBO.
- set_default_validate_args – Sets whether validation is enabled or disabled.
- shape – The tensor shape of samples from this distribution.
- to_event – Reinterprets the n rightmost dimensions of this distribution's batch_shape as event dims.
Attributes#
- BetaBinomial.approx_log_prob_tol = 0.0#
- BetaBinomial.arg_constraints = {'alpha': Optional(GreaterThan(lower_bound=0)), 'beta': Optional(GreaterThan(lower_bound=0)), 'gamma': Optional(OpenInterval(lower_bound=0, upper_bound=1)), 'mu': Optional(OpenInterval(lower_bound=0, upper_bound=1)), 'total_count': IntegerGreaterThan(lower_bound=0)}#
- BetaBinomial.has_enumerate_support = True#
- BetaBinomial.has_rsample = False#
- BetaBinomial.rv[source]#
EXPERIMENTAL Switch to the Random Variable DSL for applying transformations to random variables. Supports either chaining operations or arithmetic operator overloading.
Example usage:
# This should be equivalent to an Exponential distribution.
Uniform(0, 1).rv.log().neg().dist

# These two distributions Y1, Y2 should be the same
X = Uniform(0, 1).rv
Y1 = X.mul(4).pow(0.5).sub(1).abs().neg().dist
Y2 = (-abs((4*X)**(0.5) - 1)).dist
- Returns:
A pyro.contrib.randomvariable.random_variable.RandomVariable object wrapping this distribution.
- Return type:
RandomVariable
- BetaBinomial.support = IntegerGreaterThan(lower_bound=0)#
Methods#
- BetaBinomial.cdf(value)[source]#
Returns the cumulative density/mass function evaluated at value.
- Parameters:
value (Tensor)
- Return type:
Tensor
- BetaBinomial.conjugate_update(other)[source]#
EXPERIMENTAL Creates an updated distribution fusing information from another compatible distribution. This is supported by only a few conjugate distributions.
This should satisfy the equation:
fg, log_normalizer = f.conjugate_update(g)
assert f.log_prob(x) + g.log_prob(x) == fg.log_prob(x) + log_normalizer
Note this is equivalent to funsor.ops.add on Funsor distributions, but we return a lazy sum (updated, log_normalizer) because PyTorch distributions must be normalized. Thus conjugate_update() should commute with dist_to_funsor() and tensor_to_funsor():
dist_to_funsor(f) + dist_to_funsor(g) == dist_to_funsor(fg) + tensor_to_funsor(log_normalizer)
- Parameters:
other – A distribution representing p(data|latent) but normalized over latent rather than data. Here latent is a candidate sample from self and data is a ground observation of unrelated type.
- Returns:
A pair (updated, log_normalizer), where updated is an updated distribution of type type(self), and log_normalizer is a Tensor representing the normalization factor.
- BetaBinomial.entropy()[source]#
Returns entropy of distribution, batched over batch_shape.
- Return type:
Tensor
- Returns:
Tensor of shape batch_shape.
- BetaBinomial.enumerate_support(expand=True)[source]#
Returns tensor containing all values supported by a discrete distribution. The result will enumerate over dimension 0, so the shape of the result will be (cardinality,) + batch_shape + event_shape (where event_shape = () for univariate distributions).
Note that this enumerates over all batched tensors in lock-step [[0, 0], [1, 1], …]. With expand=False, enumeration happens along dim 0, but with the remaining batch dimensions being singleton dimensions, [[0], [1], ...
To iterate over the full Cartesian product use itertools.product(m.enumerate_support()).
- Parameters:
expand (bool) – whether to expand the support over the batch dims to match the distribution’s batch_shape.
- Returns:
Tensor iterating over dimension 0.
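A small illustrative sketch (not from the original reference; values are hypothetical) of enumerate_support with a scalar total_count, where the enumerated values land on dimension 0 as described above:
import torch
from scvi.distributions import BetaBinomial

d = BetaBinomial(total_count=torch.tensor(3.0),
                 alpha=torch.tensor(2.0),
                 beta=torch.tensor(5.0))
vals = d.enumerate_support()       # tensor([0., 1., 2., 3.]); cardinality along dim 0
probs = d.log_prob(vals).exp()     # probability mass of each supported value
print(probs.sum())                 # should be close to 1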
- BetaBinomial.expand(batch_shape, _instance=None)[source]#
Returns a new ExpandedDistribution instance with batch dimensions expanded to batch_shape.
- Parameters:
  - batch_shape (tuple) – batch shape to expand to.
  - _instance (default: None) – unused argument for compatibility with torch.distributions.Distribution.expand().
- Returns:
an instance of ExpandedDistribution.
- Return type:
ExpandedDistribution
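As a hedged illustration (hypothetical values), expanding a scalar-parameterized distribution to a larger batch shape:
import torch
from scvi.distributions import BetaBinomial

d = BetaBinomial(total_count=torch.tensor(10.0),
                 mu=torch.tensor(0.5),
                 gamma=torch.tensor(0.2))
d_batched = d.expand(torch.Size([3, 2]))   # batch dimensions expanded to (3, 2)
print(d_batched.batch_shape)               # torch.Size([3, 2])
print(d_batched.sample().shape)            # torch.Size([3, 2])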
- BetaBinomial.expand_by(sample_shape)[source]#
Expands a distribution by adding sample_shape to the left side of its batch_shape.
To expand internal dims of self.batch_shape from 1 to something larger, use expand() instead.
- Parameters:
sample_shape (torch.Size) – The size of the iid batch to be drawn from the distribution.
- Returns:
An expanded version of this distribution.
- Return type:
ExpandedDistribution
- BetaBinomial.has_rsample_(value)[source]#
Force reparameterized or detached sampling on a single distribution instance. This sets the .has_rsample attribute in-place.
This is useful to instruct inference algorithms to avoid reparameterized gradients for variables that discontinuously determine downstream control flow.
- Parameters:
value (bool) – Whether samples will be pathwise differentiable.
- Returns:
self
- Return type:
Distribution
- BetaBinomial.icdf(value)[source]#
Returns the inverse cumulative density/mass function evaluated at value.
- Parameters:
value (Tensor)
- Return type:
Tensor
- classmethod BetaBinomial.infer_shapes(**arg_shapes)[source]#
Infers batch_shape and event_shape given shapes of args to __init__().
Note
This assumes distribution shape depends only on the shapes of tensor inputs, not on the data contained in those inputs.
- Parameters:
**arg_shapes – Keywords mapping name of input arg to torch.Size or tuple representing the sizes of each tensor input.
- Returns:
A pair (batch_shape, event_shape) of the shapes of a distribution that would be created with input args of the given shapes.
- Return type:
tuple
- BetaBinomial.log_prob(value)[source]#
Returns the log of the probability density/mass function evaluated at value.
- Parameters:
value (Tensor)
- BetaBinomial.mask(mask)[source]#
Masks a distribution by a boolean or boolean-valued tensor that is broadcastable to the distribution's batch_shape.
- Parameters:
mask (bool or torch.Tensor) – A boolean or boolean valued tensor.
- Returns:
A masked copy of this distribution.
- Return type:
MaskedDistribution
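An illustrative sketch (hypothetical values): masked-out batch entries contribute zero to log_prob, which is how missing observations are typically handled:
import torch
from scvi.distributions import BetaBinomial

d = BetaBinomial(total_count=torch.tensor([5.0, 5.0]),
                 mu=torch.tensor([0.3, 0.7]),
                 gamma=torch.tensor([0.2, 0.2]))
masked = d.mask(torch.tensor([True, False]))   # second batch entry is masked out
x = torch.tensor([2.0, 4.0])
print(masked.log_prob(x))                      # second entry is 0.0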
- BetaBinomial.perplexity()[source]#
Returns perplexity of distribution, batched over batch_shape.
- Return type:
Tensor
- Returns:
Tensor of shape batch_shape.
- BetaBinomial.rsample(sample_shape=())[source]#
Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched.
- Return type:
Tensor
- BetaBinomial.sample(sample_shape=())[source]#
Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched.
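A brief sketch (hypothetical values) of how sample_shape composes with the batch shape when sampling:
import torch
from scvi.distributions import BetaBinomial

d = BetaBinomial(total_count=torch.tensor([10.0, 10.0]),
                 mu=torch.tensor([0.4, 0.8]),
                 gamma=torch.tensor([0.3, 0.1]))
draws = d.sample(sample_shape=torch.Size([5]))
print(draws.shape)   # torch.Size([5, 2]) == sample_shape + batch_shape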
- BetaBinomial.sample_n(n)[source]#
Generates n samples or n batches of samples if the distribution parameters are batched.
- Return type:
Tensor
- BetaBinomial.score_parts(x, *args, **kwargs)[source]#
Computes ingredients for stochastic gradient estimators of ELBO.
The default implementation is correct both for non-reparameterized and for fully reparameterized distributions. Partially reparameterized distributions should override this method to compute correct .score_function and .entropy_term parts.
Setting .has_rsample on a distribution instance will determine whether inference engines like SVI use reparameterized samplers or the score function estimator.
- Parameters:
x (torch.Tensor) – A single value or batch of values.
- Returns:
A ScoreParts object containing parts of the ELBO estimator.
- Return type:
ScoreParts
- static BetaBinomial.set_default_validate_args(value)[source]#
Sets whether validation is enabled or disabled.
The default behavior mimics Python's assert statement: validation is on by default, but is disabled if Python is run in optimized mode (via python -O). Validation may be expensive, so you may want to disable it once a model is working.
- BetaBinomial.shape(sample_shape=())[source]#
The tensor shape of samples from this distribution.
Samples are of shape:
d.shape(sample_shape) == sample_shape + d.batch_shape + d.event_shape
- Parameters:
sample_shape (torch.Size) – the size of the iid batch to be drawn from the distribution.
- Returns:
Tensor shape of samples.
- Return type:
torch.Size
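As an illustrative check (not from the original reference; values are hypothetical), the relation above can be verified directly:
import torch
from scvi.distributions import BetaBinomial

d = BetaBinomial(total_count=torch.tensor([10.0, 10.0]),
                 mu=torch.tensor([0.4, 0.8]),
                 gamma=torch.tensor([0.3, 0.1]))
# batch_shape is (2,) and event_shape is () for this univariate distribution
assert d.shape(torch.Size([7])) == torch.Size([7]) + d.batch_shape + d.event_shape
print(d.shape(torch.Size([7])))   # torch.Size([7, 2])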
- BetaBinomial.to_event(reinterpreted_batch_ndims=None)[source]#
Reinterprets the n rightmost dimensions of this distribution's batch_shape as event dims, adding them to the left side of event_shape.
Example
>>> [d1.batch_shape, d1.event_shape]
[torch.Size([2, 3]), torch.Size([4, 5])]
>>> d2 = d1.to_event(1)
>>> [d2.batch_shape, d2.event_shape]
[torch.Size([2]), torch.Size([3, 4, 5])]
>>> d3 = d1.to_event(2)
>>> [d3.batch_shape, d3.event_shape]
[torch.Size([]), torch.Size([2, 3, 4, 5])]
- Parameters:
reinterpreted_batch_ndims (int) – The number of batch dimensions to reinterpret as event dimensions. May be negative to remove dimensions from a pyro.distributions.torch.Independent. If None, convert all dimensions to event dimensions.
- Returns:
A reshaped version of this distribution.
- Return type:
pyro.distributions.torch.Independent