glearn.priors.Prior.log_pdf_hessian#

Prior.log_pdf_hessian(hyperparam)#

Hessian of the logarithm of the probability density function of the prior distribution.

Parameters:
hyperparam : float or array_like[float]

Input hyperparameter or an array of hyperparameters.

Returns:
hess : float or array_like[float]

The Hessian of the logarithm of the probability density function of the input hyperparameter(s).

Notes

Multiple hyperparameters:

Given an array of hyperparameters \(\boldsymbol{\theta} = (\theta_1, \dots, \theta_n)\), this function returns the Hessian matrix \(\mathbf{H}\) with the components \(H_{ij} = 0\) if \(i \neq j\) and

\[H_{ii} = \frac{\partial^2}{\partial \theta_i^2} \log p(\theta_i) = \frac{1}{p(\theta_i)} \frac{\partial^2 p(\theta_i)}{\partial \theta_i^2} - J_i^2,\]

where \(J_i\) is the Jacobian

\[J_i = \frac{\partial}{\partial \theta_i} \log p(\theta_i).\]
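The identity above can be checked numerically. The sketch below (a standalone check, not glearn's implementation) writes the inverse-gamma log-PDF with shape \(\alpha=4\) and rate \(\beta=2\) by hand, approximates \(p''(\theta)\) with a central finite difference, and confirms that \(p''/p - J_i^2\) agrees with the analytic second derivative of the log-PDF:

```python
import math

# Standalone numerical check (not glearn's implementation): inverse-gamma
# log-PDF with shape alpha=4 and rate beta=2.
def log_pdf(theta, alpha=4.0, beta=2.0):
    return (alpha * math.log(beta) - math.lgamma(alpha)
            - (alpha + 1.0) * math.log(theta) - beta / theta)

def pdf(theta):
    return math.exp(log_pdf(theta))

def jacobian(theta, alpha=4.0, beta=2.0):
    # J = d/dtheta log p(theta), differentiated by hand
    return -(alpha + 1.0) / theta + beta / theta**2

def hessian(theta, alpha=4.0, beta=2.0):
    # H = d^2/dtheta^2 log p(theta), differentiated by hand
    return (alpha + 1.0) / theta**2 - 2.0 * beta / theta**3

# Approximate p''(theta) by a central finite difference and verify the
# identity H = p''/p - J^2 at theta = 1.
theta, h = 1.0, 1e-4
p2 = (pdf(theta + h) - 2.0 * pdf(theta) + pdf(theta - h)) / h**2
identity = p2 / pdf(theta) - jacobian(theta)**2
print(abs(identity - hessian(theta)) < 1e-4)  # True
```

At \(\theta=1\) the analytic Hessian is \((\alpha+1) - 2\beta = 1\), and the finite-difference reconstruction matches it to the expected truncation accuracy.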

Using Log Scale:

If the attribute use_log_scale is True, the input argument \(\theta\) is assumed to be the base-10 logarithm of the hyperparameter, so it is first converted back to the original hyperparameter by the transformation

\[\theta \gets 10^{\theta}.\]

As a result, the Hessian is transformed by

\[\begin{split}H_{ij} \gets \begin{cases} H_{ij} \theta_i^2 (\log_e(10))^2 + J_i \log_e(10), & i=j, \\ H_{ij} \theta_i \theta_j (\log_e(10))^2, & i \neq j. \end{cases}\end{split}\]
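The diagonal case of this transformation is just the chain rule for \(\theta = 10^t\), with \(J_i\) taken as the already log-scale Jacobian (\(J_i \gets J_i \theta_i \log_e 10\)). A sketch with assumed helper names (not glearn's API), again for the inverse-gamma prior with \(\alpha=4\), \(\beta=2\), cross-checked against a finite difference:

```python
import math

LN10 = math.log(10.0)

# Assumed helper functions (not glearn's API): inverse-gamma log-PDF with
# shape alpha=4, rate beta=2, and its hand-derived derivatives in theta.
def log_pdf(theta, alpha=4.0, beta=2.0):
    return (alpha * math.log(beta) - math.lgamma(alpha)
            - (alpha + 1.0) * math.log(theta) - beta / theta)

def jacobian(theta, alpha=4.0, beta=2.0):
    # d/dtheta log p(theta)
    return -(alpha + 1.0) / theta + beta / theta**2

def hessian(theta, alpha=4.0, beta=2.0):
    # d^2/dtheta^2 log p(theta)
    return (alpha + 1.0) / theta**2 - 2.0 * beta / theta**3

def hess_log_scale(t):
    # Diagonal case of the transformation: first map the Jacobian to the
    # log scale (J <- J * theta * ln 10), then apply
    # H <- H * theta^2 * (ln 10)^2 + J * ln 10.
    theta = 10.0**t
    j_log = jacobian(theta) * theta * LN10
    return hessian(theta) * theta**2 * LN10**2 + j_log * LN10

# Cross-check against a finite-difference second derivative of
# t -> log p(10**t).
t, h = 0.5, 1e-4
g = lambda s: log_pdf(10.0**s)
fd = (g(t + h) - 2.0 * g(t) + g(t - h)) / h**2
print(abs(hess_log_scale(t) - fd) < 1e-4)  # True
```

At \(t = 0.5\) this evaluates to approximately \(-3.3532\), the middle diagonal entry of the example below.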

Examples

Create the inverse Gamma distribution with the shape parameter \(\alpha=4\) and rate parameter \(\beta=2\).

>>> from glearn import priors
>>> prior = priors.InverseGamma(4, 2)

>>> # Initialize an array of hyperparameters
>>> t = [0, 0.5, 1]

>>> # Evaluate the Hessian of the log-PDF
>>> prior.log_pdf_hessian(t)
array([[-10.60379622,   0.        ,   0.        ],
       [  0.        ,  -3.35321479,   0.        ],
       [  0.        ,   0.        ,  -1.06037962]])
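For this particular prior the example's numbers can be reproduced by hand: substituting the inverse-gamma derivatives into the log-scale formula gives \(H_{ii}\theta_i^2 + J_i\theta_i = (\alpha+1) - 2\beta/\theta_i - (\alpha+1) + \beta/\theta_i = -\beta/\theta_i\), so with \(\beta=2\) each diagonal entry reduces to \(-2(\log_e 10)^2/\theta_i\). The sketch below (assuming the example evaluates at t = [0, 0.5, 1] with use_log_scale in effect, so \(\theta_i = 10^{t_i}\)) rebuilds the diagonal matrix shown above:

```python
import numpy as np

LN10 = np.log(10.0)

# Assumed evaluation points (theta = 10**t under the log-scale convention).
t = np.array([0.0, 0.5, 1.0])
theta = 10.0**t

# Hand-derived reduction for the inverse gamma with alpha=4, beta=2:
# each diagonal Hessian entry in the log scale is -2 * (ln 10)^2 / theta.
diag = -2.0 * LN10**2 / theta

# Independent hyperparameters give zero off-diagonal entries, so the full
# Hessian is a diagonal matrix.
hess = np.diag(diag)
print(np.round(hess, 8))
```

The printed diagonal matches the array in the example above to the displayed precision.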