g-learn Documentation#
g-learn is a modular, high-performance Python package for machine learning with Gaussian process regression, featuring novel algorithms capable of petascale computation on multi-GPU devices.
Supported Platforms#
Successful installation and tests have been performed on the following operating systems, architectures, and Python versions:
| Platform | Arch | Device | Python 3.9 | Python 3.10 | Python 3.11 | Python 3.12 | PyPy 3.8 | PyPy 3.9 | PyPy 3.10 |
|---|---|---|---|---|---|---|---|---|---|
| Linux | X86-64 | CPU | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Linux | X86-64 | GPU | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Linux | AARCH-64 | CPU | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Linux | AARCH-64 | GPU | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| macOS | X86-64 | CPU | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| macOS | X86-64 | GPU | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ |
| macOS | ARM-64 | CPU | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| macOS | ARM-64 | GPU | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ |
| Windows | X86-64 | CPU | ✔ | ✔ | ✔ | ✔ | ✖ | ✖ | ✖ |
| Windows | X86-64 | GPU | ✔ | ✔ | ✔ | ✔ | ✖ | ✖ | ✖ |
Python wheels for g-learn are available through PyPI and Anaconda Cloud for all of the supported platforms and versions listed above. If you need g-learn on another platform, architecture, or Python version, raise an issue on GitHub and we will build a Python wheel for it.
Install#
Install with pip from PyPI:
pip install glearn
Install with conda from Anaconda Cloud:
conda install -c s-ameli glearn
For a complete installation guide, see the installation section of this documentation.
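After installing with either method, the installation can be verified with a quick import check. This is a minimal sketch; the `__version__` attribute is assumed to follow the usual Python packaging convention.

```python
# Minimal post-install check (assumes the conventional __version__ attribute).
import glearn

print(glearn.__version__)
```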
Docker#
The Docker image comes with g-learn pre-installed, an NVIDIA graphics driver, and a compatible version of the CUDA Toolkit libraries.
Pull docker image from Docker Hub:
docker pull sameli/glearn
For a complete guide, see the Docker section of this documentation.
GPU#
g-learn can run on CUDA-capable multi-GPU devices, which can be set up in several ways. Using the Docker container is the easiest way to run g-learn on GPU devices. For a comprehensive guide, see the GPU section of this documentation.
The supported GPU micro-architectures and CUDA versions are as follows:
| Version \ Arch | Fermi | Kepler | Maxwell | Pascal | Volta | Turing | Ampere | Hopper |
|---|---|---|---|---|---|---|---|---|
| CUDA 9 | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ | ✖ |
| CUDA 10 | ✖ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| CUDA 11 | ✖ | ✖ | ✖ | ✔ | ✔ | ✔ | ✔ | ✔ |
| CUDA 12 | ✖ | ✖ | ✖ | ✔ | ✔ | ✔ | ✔ | ✔ |
API Reference#
Check the list of functions, classes, and modules of g-learn with their usage, options, and examples.
Tutorials#
Launch an online interactive notebook with Binder. You can also explore the Jupyter notebooks below to get started using g-learn.
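The sketch below outlines what a typical Gaussian process workflow looks like. The class names (`LinearModel`, `Covariance`, `GaussianProcess`, `kernels.Matern`) and their arguments are assumptions based on a typical GP regression API rather than the verified g-learn interface; consult the API Reference and the notebooks for the exact usage.

```python
# Hypothetical quick-start sketch; names and arguments below are assumptions,
# not the verified g-learn API. See the API Reference for exact signatures.
import numpy as np
import glearn

# Noisy training data sampled from a one-dimensional function.
x = np.linspace(0, 1, 30).reshape(-1, 1)
z = np.sin(4 * np.pi * x).ravel() + 0.1 * np.random.randn(30)

# Prior mean and a kernel-based covariance (assumed class names).
mean = glearn.LinearModel(x)
cov = glearn.Covariance(x, kernel=glearn.kernels.Matern())

# Train the hyperparameters and predict at new points (assumed methods).
gp = glearn.GaussianProcess(mean, cov)
gp.train(z)
x_star = np.linspace(0, 1, 100).reshape(-1, 1)
z_star = gp.predict(x_star)
```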
Features#
- Randomized algorithms using Hutchinson and stochastic Lanczos quadrature estimators (see Overview; a generic sketch of the Hutchinson estimator follows this list).
- Novel method to interpolate matrix functions (see Interpolation of Affine Matrix Functions).
- Parallel processing, both on shared-memory CPUs and on CUDA-capable multi-GPU devices.
- Sparse covariance matrices.
- Mixed covariance model objects.
- Automatic Relevance Determination (ARD).
- Jacobian- and Hessian-based optimization.
- Learning hyperparameters in a reduced space (profile likelihood).
- Prediction in dual space with \(\mathcal{O}(n)\) complexity.
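The randomized algorithms listed in the first feature rely on stochastic trace estimation. Below is a generic NumPy sketch of the Hutchinson estimator for the trace of a matrix; it illustrates the idea only and is not g-learn's optimized, compiled implementation (which also uses stochastic Lanczos quadrature for matrix functions).

```python
import numpy as np

def hutchinson_trace(A, num_samples=100, seed=None):
    """Estimate trace(A) with the Hutchinson randomized estimator.

    The trace is approximated by averaging w @ A @ w over random Rademacher
    vectors w, since E[w w^T] = I implies E[w @ A @ w] = trace(A).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(num_samples):
        w = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += w @ (A @ w)
    return total / num_samples

# Example: compare the estimate with the exact trace of a random SPD matrix.
X = np.random.randn(200, 50)
A = (X @ X.T) / 50
print(hutchinson_trace(A), np.trace(A))
```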
Technical Notes#
Some notable implementation techniques used to develop g-learn are:
- OS-independent, customized dynamic loading of CUDA libraries.
- Static dispatching, so the same pre-compiled g-learn installation runs both with and without CUDA on the user's machine (a generic sketch of this idea follows this list).
- Completely GIL-free Cython implementation.
- manylinux wheels built upon customized Docker images with CUDA support, available on DockerHub.
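The first two notes amount to detecting and loading the CUDA runtime at run time, so that a single pre-compiled package works on machines with or without a GPU. The snippet below is a simplified, generic illustration of that idea using ctypes; it is not g-learn's actual loader, and the library names searched are assumptions.

```python
import ctypes
import ctypes.util

def load_cuda_runtime():
    """Try to load the CUDA runtime library dynamically.

    Returns a library handle, or None if CUDA is unavailable, in which case
    the caller falls back to the CPU code path.
    """
    # Candidate library names (assumed, for illustration only).
    candidates = ["libcudart.so", "cudart64_12.dll"]
    found = ctypes.util.find_library("cudart")
    if found:
        candidates.insert(0, found)
    for name in candidates:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    return None

cudart = load_cuda_runtime()
print("CUDA runtime loaded" if cudart else "CUDA not found; using the CPU path")
```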
How to Contribute#
We welcome contributions via GitHub pull requests. If you do not feel comfortable modifying the code, we also welcome feature requests and bug reports as GitHub issues.
Publications#
For information on how to cite g-learn, related publications, and software packages that use g-learn, see the publications section of this documentation.
License#
This project uses a BSD 3-clause license, in hopes that it will be accessible to most projects. If you require a different license, please raise an issue and we will consider a dual license.