glearn.Covariance.trace
- Covariance.trace(sigma=None, sigma0=None, scale=None, p=1, derivative=[], imate_options={})
Compute the trace of the positive powers of the covariance matrix or its derivatives.
- Parameters:
- sigma : float, default=None
The hyperparameter \(\sigma\) of the covariance model where \(\sigma^2\) represents the variance of the correlated errors of the model. \(\sigma\) should be positive and cannot be None.
- sigma0 : float, default=None
The hyperparameter \(\varsigma\) of the covariance model where \(\varsigma^2\) represents the variance of the input noise of the model. \(\varsigma\) should be positive and cannot be None.
- scale : float or array_like[float], default=None
The scale hyperparameters \(\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_d)\) that scale the distance between data points in \(\mathbb{R}^d\). If an array of size \(d\) is given, each \(\alpha_i\) scales the distance in the \(i\)-th dimension. If a scalar value \(\alpha\) is given, all dimensions are scaled isometrically. \(\boldsymbol{\alpha}\) cannot be None.
- p : float, default=1
The exponent \(p\) of the covariance matrix \(\boldsymbol{\Sigma}^p\) (see Notes below). The exponent should be a non-negative real number. Note that if \(p \neq 1\), the derivative order should be zero, meaning that no derivative should be taken by setting derivative=[].
Note
For \(\boldsymbol{\Sigma}^{-p}\) with \(p > 0\), see glearn.Covariance.traceinv().
- derivative : list, default=[]
Specifies a list of derivatives of the covariance matrix with respect to the hyperparameters \(\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_d)\). A list of size \(q\) with the components [i, j, ..., k] corresponds to taking the derivative
\[\left. \frac{\partial^q}{\partial \alpha_{i+1} \partial \alpha_{j+1} \dots \partial \alpha_{k+1}} \boldsymbol{\Sigma}^p(\boldsymbol{\alpha} \vert \sigma^2, \varsigma^2) \right|_{\boldsymbol{\alpha}}.\]
Note
The derivative with respect to each hyperparameter \(\alpha_i\) can be at most of order two, \(\partial^2 / \partial \alpha_i^2\). That is, each index in the derivative list can appear at most twice. For instance, derivative=[1, 1] (second-order derivative with respect to \(\alpha_{2}\)) is a valid input argument, however derivative=[1, 1, 1] (third-order derivative) is an invalid input.
Note
When the derivative order is non-zero (meaning that derivative is not []), the exponent \(p\) should be 1.
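The compatibility rules between p and derivative described above can be sketched as a small validation helper. This is an illustrative sketch, not part of glearn; the function name validate_trace_args is hypothetical.

```python
# Hypothetical sketch of the argument-compatibility rules for trace():
# a non-negative exponent, derivatives only for p = 1, and each index
# appearing at most twice (derivative order at most two per hyperparameter).
from collections import Counter


def validate_trace_args(p, derivative):
    """Check the constraints on the exponent p and the derivative list."""
    if p < 0:
        raise ValueError('The exponent p should be a non-negative number.')
    if derivative and p != 1:
        raise ValueError('When derivative is not [], the exponent p '
                         'should be 1.')
    # Each index may appear at most twice (second-order derivative at most).
    for index, order in Counter(derivative).items():
        if order > 2:
            raise ValueError(f'Index {index} appears {order} times; each '
                             'index can appear at most twice.')
```

For example, derivative=[1, 1] with p=1 passes, while derivative=[1, 1, 1] or a non-empty derivative with p=2 raises ValueError.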
- Returns:
- trace : float
The trace of the exponentiated covariance matrix \(\boldsymbol{\Sigma}^p\), or of its derivative.
See also
glearn.Covariance.traceinv()
Notes
This function computes
\[\mathrm{trace} \left( \frac{\partial^q}{\partial \alpha_{i} \partial \alpha_{j} \dots \partial \alpha_{k}} \boldsymbol{\Sigma}^p(\boldsymbol{\alpha} \vert \sigma, \varsigma) \right),\]where the covariance matrix \(\boldsymbol{\Sigma}\) is defined by
\[\boldsymbol{\Sigma}(\boldsymbol{\alpha}, \sigma, \varsigma) = \sigma^2 \mathbf{K}(\boldsymbol{\alpha}) + \varsigma^2 \mathbf{I}.\]In the above, \(\mathbf{I}\) is the identity matrix and \(\mathbf{K}\) is the correlation matrix that depends on a set of scale hyperparameters \(\boldsymbol{\alpha}=(\alpha_1, \dots, \alpha_d)\).
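The definition above can be sketched directly with numpy. The sketch below assumes a squared exponential correlation kernel for \(\mathbf{K}\) (glearn supports several kernels; this particular choice, and the sample values, are illustrative assumptions), and computes \(\mathrm{trace}(\boldsymbol{\Sigma}^p)\) from the eigenvalues of \(\boldsymbol{\Sigma}\), which also works for non-integer \(p\).

```python
# Minimal sketch (assumptions: squared exponential kernel, sample values).
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 2))              # n = 4 points in d = 2 dimensions
sigma, sigma0 = 2.0, 3.0            # sigma and varsigma
alpha = np.array([1.0, 2.0])        # scale hyperparameters

# Correlation matrix K from scaled pairwise distances (unit diagonal)
diff = (x[:, None, :] - x[None, :, :]) * alpha
K = np.exp(-0.5 * np.sum(diff**2, axis=-1))

# Covariance matrix: Sigma = sigma^2 K + varsigma^2 I
Sigma = sigma**2 * K + sigma0**2 * np.eye(x.shape[0])

# trace(Sigma^p) as the sum of eigenvalues raised to the power p
eigvals = np.linalg.eigvalsh(Sigma)
p = 2
trace_p = np.sum(eigvals**p)
```

Since \(\mathbf{K}\) has unit diagonal, for \(p = 1\) the trace reduces to \(n (\sigma^2 + \varsigma^2)\), which is a quick sanity check on the construction.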
Derivatives:
Note that the indices in the list derivative=[i, j, ..., k] are zero-based, meaning that the index i corresponds to taking the derivative with respect to the hyperparameter \(\alpha_{i+1}\). For instance:
- [] corresponds to no derivative.
- [0] corresponds to \(\partial / \partial \alpha_1\) and [1] corresponds to \(\partial / \partial \alpha_2\).
- [0, 2] corresponds to \(\partial^2 / \partial \alpha_1 \partial \alpha_3\).
- [0, 0] corresponds to \(\partial^2 / \partial \alpha_1^2\).
- [0, 2, 2, 4] corresponds to \(\partial^4 / \partial \alpha_1 \partial \alpha_{3}^2 \partial \alpha_5\).
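The zero-based indexing convention above can be sketched as a small helper that maps a derivative list to per-hyperparameter derivative orders. This helper is illustrative only and not part of glearn.

```python
# Hypothetical sketch: map a zero-based derivative list to the order of
# differentiation with respect to each (one-based) hyperparameter alpha_i.
from collections import Counter


def derivative_orders(derivative):
    """Return {one-based hyperparameter index: derivative order}."""
    return {i + 1: order for i, order in sorted(Counter(derivative).items())}
```

For example, derivative_orders([0, 2, 2, 4]) returns {1: 1, 3: 2, 5: 1}, which reads as \(\partial^4 / \partial \alpha_1 \partial \alpha_{3}^2 \partial \alpha_5\).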
Configuring Computation Settings:
This function passes the computation of the trace to the function imate.trace(). To configure the latter function, create a dictionary of input arguments to that function and pass the dictionary with glearn.Covariance.set_imate_options(). See examples below for details.
Examples
Basic Usage:
Create a sample dataset with four points in \(d=2\) dimensional space. Then, compute the trace of \(\boldsymbol{\Sigma}^{2}(\boldsymbol{\alpha}, \sigma, \varsigma)\) for \(\boldsymbol{\alpha} = (1, 2)\), \(\sigma=2\), and \(\varsigma=3\).
>>> # Generate a set of points
>>> from glearn.sample_data import generate_points
>>> x = generate_points(num_points=4, dimension=2)

>>> # Create a covariance object
>>> from glearn import Covariance
>>> cov = Covariance(x)

>>> # Compute trace
>>> cov.trace(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=2)
817.7863657241508
Configure Computation:
The following example shows how to compute the trace of \(\boldsymbol{\Sigma}^{\frac{3}{2}}(\boldsymbol{\alpha}, \sigma, \varsigma)\). Note that the exponent \(p\) is not an integer. To compute the trace for non-integer exponents, the backend function imate.trace() should be configured to use either the eigenvalue or slq method. In the following example, the eigenvalue method is used.
>>> # Check the default imate option
>>> cov.get_imate_options()
{
    'method': 'cholesky'
}
The above method (Cholesky) cannot compute the trace for non-integer exponents. In the following, we change the method to the eigenvalue method.
>>> # Change the default imate option
>>> options = {
...     'method' : 'eigenvalue'
... }
>>> cov.set_imate_options(options)

>>> # Compute trace with non-integer exponent
>>> cov.trace(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=1.5)
201.2755406790841
Taking Derivatives:
Compute the trace of the second mixed derivative
\[\frac{\partial^2}{\partial \alpha_1 \partial \alpha_2} \boldsymbol{\Sigma}(\alpha_1, \alpha_2 \vert \sigma, \varsigma).\]Note
When taking the derivative, the exponent \(p\) should be 1.
>>> # Compute second mixed derivative
>>> cov.trace(sigma=2.0, sigma0=3.0, scale=[1.0, 2.0], p=1,
...           derivative=[0, 1])
0.0