glearn.Covariance.auto_covariance

Covariance.auto_covariance(test_points)

Compute the auto-covariance among a set of test points.

Parameters:
test_points : numpy.ndarray

An array of the size \(n^{\ast} \times d\) representing the coordinates of \(n^{\ast}\) test points. Each row of the array is the coordinates of a point \(\boldsymbol{x} = (x_1, \dots, x_d)\).

Returns:
S_star_star : numpy.ndarray

The covariance array \(\boldsymbol{\Sigma}^{\ast \ast}\) of the size \(n^{\ast} \times n^{\ast}\).

Notes

Auto-Covariance:

Given a set of test points \(\{ \boldsymbol{x}^{\ast}_i \}_{i=1}^{n^{\ast}}\), this function generates the \(n^{\ast} \times n^{\ast}\) auto-covariance matrix \(\boldsymbol{\Sigma}^{\ast \ast}\), where each element \(\Sigma^{\ast \ast}_{ij}\) is the covariance between the \(i\)-th and \(j\)-th test points, namely,

\[\Sigma^{\ast \ast}_{ij} = \mathrm{cov}( \boldsymbol{x}^{\ast}_i, \boldsymbol{x}^{\ast}_j).\]

Specifying Hyperparameters:

The auto-covariance matrix \(\boldsymbol{\Sigma}^{\ast \ast}\) depends on a set of hyperparameters as it is defined by

\[\boldsymbol{\Sigma}^{\ast \ast}(\boldsymbol{\alpha}, \sigma, \varsigma) = \sigma^2 \mathbf{K}^{\ast \ast}(\boldsymbol{\alpha}) + \varsigma^2 \mathbf{I}.\]

In the above, \(\mathbf{I}\) is the identity matrix and \(\mathbf{K}^{\ast \ast}\) is the auto-correlation matrix that depends on a set of scale hyperparameters \(\boldsymbol{\alpha}=(\alpha_1, \dots, \alpha_d)\).
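As a rough illustration of this decomposition, the sketch below assembles such a matrix in plain NumPy. The exponential correlation kernel, the per-dimension scaling convention for \(\boldsymbol{\alpha}\), and the helper name auto_covariance_sketch are assumptions made purely for this sketch; they are not necessarily the kernel or conventions used internally by glearn. Here sigma0 plays the role of \(\varsigma\), matching the constructor argument used in the Examples below.

>>> import numpy
>>> from scipy.spatial.distance import cdist

>>> def auto_covariance_sketch(x_star, scale, sigma, sigma0):
...     # Scale each coordinate by its hyperparameter alpha_i (assumed convention)
...     x_scaled = numpy.asarray(x_star) / numpy.asarray(scale)
...     # Pairwise Euclidean distances between the scaled test points
...     distance = cdist(x_scaled, x_scaled)
...     # Assumed exponential correlation matrix K**(alpha)
...     K_star_star = numpy.exp(-distance)
...     # Sigma** = sigma^2 K** + varsigma^2 I
...     return sigma**2 * K_star_star + sigma0**2 * numpy.eye(x_scaled.shape[0])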

Note

Before using glearn.Covariance.auto_covariance(), the hyperparameters \(\sigma\), \(\varsigma\), and \(\boldsymbol{\alpha}\) of the covariance object should be defined. These hyperparameters can either be defined at the time of instantiation of glearn.Covariance, or set on the covariance object afterwards.

Summary of Covariance Functions:

Suppose \(\{ \boldsymbol{x}_i \}_{i=1}^{n}\) and \(\{ \boldsymbol{x}^{\ast}_i \}_{i=1}^{n^{\ast}}\) are the training and test points, respectively. Three covariance matrices can be generated (a shape comparison is sketched at the end of the Examples below):

  • glearn.Covariance.get_matrix() returns the auto-covariance between the training points as the \(n \times n\) matrix \(\boldsymbol{\Sigma}\) with the components

    \[\Sigma_{ij} = \mathrm{cov}(\boldsymbol{x}_i, \boldsymbol{x}_j).\]
  • glearn.Covariance.cross_covariance() returns the cross-covariance between the training points and the test points as the \(n \times n^{\ast}\) matrix \(\boldsymbol{\Sigma}^{\ast}\) with the components

    \[\Sigma_{ij}^{\ast} = \mathrm{cov}(\boldsymbol{x}_i, \boldsymbol{x}^{\ast}_j).\]
  • glearn.Covariance.auto_covariance() returns the auto-covariance between the test points as the \(n^{\ast} \times n^{\ast}\) matrix \(\boldsymbol{\Sigma}^{\ast \ast}\) with the components

    \[\Sigma_{ij}^{\ast \ast} = \mathrm{cov}( \boldsymbol{x}^{\ast}_i, \boldsymbol{x}^{\ast}_j).\]

Examples

Create a covariance object based on a set of sample data with four points in a \(d=2\) dimensional space.

>>> # Generate a set of points
>>> from glearn.sample_data import generate_points
>>> x = generate_points(num_points=4, dimension=2)

>>> # Create a covariance object
>>> from glearn import Covariance
>>> cov = Covariance(x, sigma=2.0, sigma0=3.0, scale=[1.0, 2.0])

Now, create a set of test points \(\boldsymbol{x}^{\ast}\), and compute the auto-covariance between the test points.

>>> # Generate a random set of points
>>> x_star = generate_points(num_points=4, dimension=2, seed=42)

>>> # Auto-covariance between test points
>>> cov.auto_covariance(x_star)
array([[4.        , 2.68545065, 2.54164549, 2.9067261 ],
       [2.68545065, 4.        , 2.15816164, 2.01221865],
       [2.54164549, 2.15816164, 4.        , 2.76750357],
       [2.9067261 , 2.01221865, 2.76750357, 4.        ]])
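
As a follow-up to the Summary of Covariance Functions above, the shapes of the matrices can be compared on the same covariance object. The call to glearn.Covariance.cross_covariance() below assumes it accepts the test points in the same way as auto_covariance() does; see its documentation page for the exact signature.

>>> # Generate a larger set of test points so that n and n* differ
>>> x_star_6 = generate_points(num_points=6, dimension=2, seed=42)

>>> # Cross-covariance between the training and test points
>>> # (assuming cross_covariance() accepts the test points like auto_covariance())
>>> cov.cross_covariance(x_star_6).shape
(4, 6)

>>> # Auto-covariance between the test points
>>> S_star_star = cov.auto_covariance(x_star_6)
>>> S_star_star.shape
(6, 6)

>>> # Sigma** is symmetric since cov(x_i*, x_j*) = cov(x_j*, x_i*)
>>> import numpy
>>> numpy.allclose(S_star_star, S_star_star.T)
True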