glearn.priors.BetaPrime.log_pdf_jacobian

BetaPrime.log_pdf_jacobian(x)

Jacobian of the logarithm of the probability density function of the prior distribution.

Parameters:
x : float or array_like[float]

Input hyperparameter or an array of hyperparameters.

Returns:
jac : float or array_like[float]

The Jacobian of the logarithm of the probability density function, evaluated at the input hyperparameter(s).

Notes

Multiple hyperparameters:

Given an array of hyperparameters \(\boldsymbol{\theta} = (\theta_1, \dots, \theta_n)\), this function returns the Jacobian vector \(\boldsymbol{J}\) with components \(J_i\) given by

\[J_i= \frac{\partial}{\partial \theta_i} \log p(\theta_i) = \frac{1}{p(\theta_i)} \frac{\partial p(\theta_i)}{\partial \theta_i}.\]
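
A minimal numerical sketch of this relation (not part of glearn; it uses scipy.stats.betaprime as a stand-in for the density \(p\), with illustrative shape parameters and hyperparameter points) approximates each component \(J_i\) by a central finite difference of \(\log p\):

>>> import numpy
>>> from scipy.stats import betaprime

>>> # Illustrative shape parameters and hyperparameter points
>>> alpha, beta = 4.0, 2.0
>>> theta = numpy.array([0.5, 1.0, 2.0])
>>> eps = 1e-6

>>> # J_i ~= (log p(theta_i + eps) - log p(theta_i - eps)) / (2 eps)
>>> jac = (betaprime.logpdf(theta + eps, alpha, beta) -
...        betaprime.logpdf(theta - eps, alpha, beta)) / (2.0 * eps)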

Using Log Scale:

If the attribute use_log_scale is True, the input argument \(\theta\) is assumed to be the base-10 logarithm of the hyperparameter. To convert back to the original hyperparameter, the following transformation is performed:

\[\theta \gets 10^{\theta}.\]

As a result, the Jacobian is transformed by

\[J_i \gets \log_e(10) \theta_i J_i.\]
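
A minimal sketch of this chain rule (again using the stand-in scipy.stats.betaprime density and finite differences; all values are illustrative): differentiating \(\log p\) directly with respect to the log-scale variable agrees with rescaling the Jacobian by \(\log_e(10)\, \theta\).

>>> import numpy
>>> from scipy.stats import betaprime

>>> alpha, beta = 4.0, 2.0
>>> phi = numpy.array([-0.3, 0.0, 0.3])   # hyperparameters in log10 scale
>>> theta = 10.0**phi                     # convert back: theta <- 10**theta
>>> eps = 1e-6

>>> # Jacobian with respect to theta
>>> jac_theta = (betaprime.logpdf(theta + eps, alpha, beta) -
...              betaprime.logpdf(theta - eps, alpha, beta)) / (2.0 * eps)

>>> # Transformed Jacobian: J_i <- ln(10) * theta_i * J_i
>>> jac_phi = numpy.log(10.0) * theta * jac_theta

>>> # Direct finite difference with respect to phi gives the same result
>>> jac_phi_direct = (betaprime.logpdf(10.0**(phi + eps), alpha, beta) -
...                   betaprime.logpdf(10.0**(phi - eps), alpha, beta)) / (2.0 * eps)
>>> numpy.allclose(jac_phi, jac_phi_direct, rtol=1e-4)
True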

Examples

Create the beta prime distribution with the shape parameters \(\alpha=4\) and \(\beta=2\).

>>> from glearn import priors
>>> prior = priors.BetaPrime(4, 2)

>>> # Evaluate the Jacobian of the log-PDF at an array of hyperparameter points t
>>> prior.log_pdf_jacobian(t)
array([ -6.90775528, -10.05664278, -11.05240845])
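
Here, t is an array of hyperparameter points at which the Jacobian is evaluated, assumed to have been defined beforehand. A self-contained sketch follows; the values of t are illustrative and do not reproduce the output shown above.

>>> import numpy
>>> from glearn import priors

>>> # Illustrative hyperparameter points
>>> t = numpy.array([0.5, 1.0, 2.0])

>>> prior = priors.BetaPrime(4, 2)
>>> jac = prior.log_pdf_jacobian(t)   # array of three Jacobian components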