glearn.Covariance.logdet

Covariance.logdet(sigma=None, sigma0=None, scale=None, p=1, derivative=[], imate_options={})

Compute the log-determinant of the powers of the covariance matrix or its derivatives.

Parameters:
sigmafloat, default=None

The hyperparameter \(\sigma\) of the covariance model where \(\sigma^2\) represents the variance of the correlated errors of the model. \(\sigma\) should be positive and cannot be None.

sigma0float, default=None

The hyperparameter \(\varsigma\) of the covariance model where \(\varsigma^2\) represents the variance of the input noise to the model. \(\varsigma\) should be positive and cannot be None.

scalefloat or array_like[float], default=None

The scale hyperparameters \(\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_d)\) scale the distance between data points in \(\mathbb{R}^d\). If an array of size \(d\) is given, each \(\alpha_i\) scales the distance in the \(i\)-th dimension. If a scalar value \(\alpha\) is given, all dimensions are scaled isometrically. \(\boldsymbol{\alpha}\) cannot be None.

pfloat, default=1

The real exponent \(p\) (negative or positive) of the covariance matrix \(\boldsymbol{\Sigma}^{p}\) (see Notes below).

derivativelist, default=[]

Specifies a list of derivatives of the covariance matrix with respect to the hyperparameters \(\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_d)\). A list of size \(q\) with components [i, j, ..., k] corresponds to taking the derivative

\[\left. \frac{\partial^q}{\partial \alpha_{i+1} \partial \alpha_{j+1} \dots \partial \alpha_{k+1}} \boldsymbol{\Sigma}^{p}(\boldsymbol{\alpha} \vert \sigma^2, \varsigma^2) \right|_{\boldsymbol{\alpha}}.\]

Note

The derivative with respect to each hyperparameter \(\alpha_i\) can be at most of order two, \(\partial^2 / \partial \alpha_i^2\). That is, each index in the derivative list can appear at most twice. For instance, derivative=[1, 1] (second-order derivative with respect to \(\alpha_{2}\)) is a valid input argument, whereas derivative=[1, 1, 1] (third-order derivative) is invalid.

Note

When the derivative order is non-zero (meaning that derivative is not []), the exponent \(p\) should be 1.

Returns:
ldfloat

The log-determinant of the \(p\)-th power of the covariance matrix, or of its derivative when derivative is non-empty.

Notes

This function computes

\[\log \det \left( \frac{\partial^q}{\partial \alpha_{i} \partial \alpha_{j} \dots \partial \alpha_{k}} \boldsymbol{\Sigma}^{p}(\boldsymbol{\alpha} \vert \sigma, \varsigma) \right),\]

where the covariance matrix \(\boldsymbol{\Sigma}\) is defined by

\[\boldsymbol{\Sigma}(\boldsymbol{\alpha}, \sigma, \varsigma) = \sigma^2 \mathbf{K}(\boldsymbol{\alpha}) + \varsigma^2 \mathbf{I}.\]

In the above, \(\mathbf{I}\) is the identity matrix and \(\mathbf{K}\) is the correlation matrix that depends on a set of scale hyperparameters \(\boldsymbol{\alpha}=(\alpha_1, \dots, \alpha_d)\).
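As a plain-NumPy illustration of this formula (a minimal sketch with a hypothetical \(2 \times 2\) correlation matrix, not glearn's implementation), \(\boldsymbol{\Sigma}\) can be formed directly; since \(\boldsymbol{\Sigma}\) is positive-definite, \(\log \det (\boldsymbol{\Sigma}^{p}) = p \log \det \boldsymbol{\Sigma}\):

>>> import numpy as np

>>> # A hypothetical 2x2 correlation matrix (unit diagonal, SPD)
>>> K = np.array([[1.0, 0.3],
...               [0.3, 1.0]])

>>> # Sigma = sigma^2 K + varsigma^2 I, here with sigma=2 and varsigma=3
>>> sigma, sigma0, p = 2.0, 3.0, 2
>>> Sigma = sigma**2 * K + sigma0**2 * np.eye(2)

>>> # log det(Sigma^p) = p * log det(Sigma) for positive-definite Sigma
>>> logdet_p = p * np.linalg.slogdet(Sigma)[1]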

Derivatives:

Note that the indices in the list derivative=[i, j, ..., k] are zero-based, meaning that the index i corresponds to taking the derivative with respect to the hyperparameter \(\alpha_{i+1}\). For instance (a call sketch follows this list):

  • [] corresponds to no derivative.

  • [0] corresponds to \(\partial / \partial \alpha_1\) and [1] corresponds to \(\partial / \partial \alpha_2\).

  • [0, 2] corresponds to \(\partial^2 / \partial \alpha_1 \partial \alpha_3\).

  • [0, 0] corresponds to \(\partial^2 / \partial \alpha_1^2\).

  • [0, 2, 2, 4] corresponds to \(\partial^4 / \partial \alpha_1 \partial \alpha_{3}^2 \partial \alpha_5\).
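The following sketch illustrates such a call on a hypothetical three-dimensional dataset (the sample points and hyperparameter values are illustrative only; recall that derivative requires p=1):

>>> # Generate an illustrative set of points in d=3 dimensions
>>> from glearn.sample_data import generate_points
>>> from glearn import Covariance
>>> x3 = generate_points(num_points=4, dimension=3)
>>> cov3 = Covariance(x3)

>>> # Log-determinant of the mixed derivative
>>> # d^2 Sigma / (d alpha_1 d alpha_3), requested via derivative=[0, 2]
>>> ld = cov3.logdet(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0, 1.0],
...                  derivative=[0, 2])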

Configuring Computation Settings:

This function delegates the computation of the log-determinant to the function imate.logdet(). To configure the latter function, create a dictionary of input arguments to imate.logdet() and pass the dictionary to glearn.Covariance.set_imate_options(). See the examples below for details.

Examples

Basic Usage:

Create a sample dataset with four points in \(d=2\) dimensional space. Then, compute the log-determinant of \(\boldsymbol{\Sigma}^{2}(\boldsymbol{\alpha}, \sigma, \varsigma)\) for \(\boldsymbol{\alpha} = (1, 2)\), \(\sigma=2\), and \(\varsigma=3\).

>>> # Generate a set of points
>>> from glearn.sample_data import generate_points
>>> x = generate_points(num_points=4, dimension=2)

>>> # Create a covariance object
>>> from glearn import Covariance
>>> cov = Covariance(x)

>>> # Compute log-determinant
>>> cov.logdet(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=2)
19.843781574740206
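Since \(\boldsymbol{\Sigma}\) is positive-definite, \(\log \det (\boldsymbol{\Sigma}^{2}) = 2 \log \det \boldsymbol{\Sigma}\), which gives a quick sanity check of the result above against the \(p=1\) case:

>>> # Compute the p=1 log-determinant with the same hyperparameters
>>> ld1 = cov.logdet(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=1)
>>> # 2.0 * ld1 should reproduce the p=2 value above (up to roundoff)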

Configure Computation:

The following example shows how to compute the log-determinant of \(\boldsymbol{\Sigma}^{\frac{3}{2}}(\boldsymbol{\alpha}, \sigma, \varsigma)\). First, we check the default method:

>>> # Check the default imate option
>>> cov.get_imate_options()
{
    'method': 'cholesky'
}

In the following, we change the method to the eigenvalue method.

>>> # Change the default imate option
>>> options = {
...    'method' : 'eigenvalue'
... }
>>> cov.set_imate_options(options)

>>> # Compute log-determinant with eigenvalue method
>>> cov.logdet(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=1.5)
14.882836181055152

Taking Derivatives:

Compute the log-determinant of the second-order derivative

\[\frac{\partial^2}{\partial \alpha_2^2} \boldsymbol{\Sigma} (\alpha_1, \alpha_2 \vert \sigma, \varsigma).\]
>>> # Compute the second derivative with respect to alpha_2
>>> cov.logdet(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=1,
...            derivative=[1, 1])
8.095686613549319
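As a cross-check, one could form the derivative matrix explicitly and take its log-determinant with NumPy. This is a sketch that assumes the matrix is small enough to build densely and that glearn.Covariance.get_matrix accepts the same hyperparameter arguments:

>>> # Build the derivative matrix explicitly (assumed API)
>>> import numpy as np
>>> S = cov.get_matrix(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0],
...                    derivative=[1, 1])
>>> # slogdet returns (sign, log|det|); the second entry should
>>> # reproduce the value above
>>> np.linalg.slogdet(S)[1]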