glearn.info

glearn.info(print_only=True)

Provides general information about hardware devices, package versions, and memory usage.

Parameters:
print_only : bool, default=True

If True, it prints the output. If False, it returns the output as a dictionary.

Returns:
info_dict : dict

(Only if print_only is False). A dictionary with the following keys:

  • glearn_version: str, the version of the glearn package in the format "major_version.minor_version.patch_number".

  • imate_version: str, the version of the imate package in the format "major_version.minor_version.patch_number".

  • processor: str, the model name of the CPU processor.

  • num_threads: int, the number of CPU threads that are available and allocated to the user.

  • gpu_name: str, model name of the GPU devices.

  • num_gpu_devices: int, number of GPU devices in multi-GPU platforms.

  • cuda_version: str, the version of CUDA Toolkit installed on the machine in the format "major_version.minor_version.patch_number".

  • nvidia_driver: str, the version of NVIDIA graphic driver.

  • mem_used: int, resident memory usage for the current Python process.

  • mem_unit: str, the unit in which mem_used is reported. This can be "b" for byte, "KB" for kilobyte, "MB" for megabyte, "GB" for gigabyte, or "TB" for terabyte.

Notes

CUDA Version:

In order to find CUDA Toolkit information properly, one of the environment variables CUDA_HOME, CUDA_ROOT, or CUDA_PATH should be set to the directory where the CUDA Toolkit is installed. On UNIX operating systems, this path is usually /usr/local/cuda. In that case, set CUDA_HOME (or either of the other variables mentioned above) as follows:

export CUDA_HOME=/usr/local/cuda

To set this variable permanently, place the above line in a profile file, such as ~/.bashrc or ~/.profile, and source that file, for instance by

source ~/.bashrc

If no CUDA Toolkit is installed, then the key cuda_version shows not found.

Note

It is possible that the CUDA Toolkit is installed on the machine, but the cuda_version key still shows not found. This happens when none of the environment variables mentioned above has been set.
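
As an illustrative check (not part of the glearn API), the following sketch verifies from Python whether any of these environment variables is set, and then reads the cuda_version key documented above; the printed values are examples only:

>>> # Check whether one of the CUDA environment variables is set
>>> import os
>>> any(os.environ.get(var) for var in ('CUDA_HOME', 'CUDA_ROOT', 'CUDA_PATH'))
True

>>> # Query the detected CUDA version ('not found' if undetected)
>>> from glearn import info
>>> info(print_only=False)['cuda_version']
'11.2.0'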

GPU Devices:

If the key gpu_name shows not found, this is due to one of the following:

  • No GPU device is detected on the machine.

  • A GPU device exists, but the NVIDIA graphic driver is not installed. See Install NVIDIA Graphic Driver for further details.

  • The NVIDIA graphic driver is installed, but the executable nvidia-smi is not available on the PATH. To fix this, add the directory containing nvidia-smi to the PATH variable (see the sketch after this list).
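
As a quick diagnostic (not part of the glearn API), shutil.which reports whether nvidia-smi is reachable from the current PATH; the path printed below is only an example:

>>> # Returns None if nvidia-smi is not on the PATH
>>> from shutil import which
>>> which('nvidia-smi')
'/usr/bin/nvidia-smi'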

Memory:

The key mem_used shows the resident set size (RSS) memory, that is, the portion of the process memory held in RAM. The unit of the reported memory size is given by mem_unit, which can be b for bytes, KB for kilobytes, MB for megabytes, and so on.
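
For instance, a short sketch (assuming the documented mem_used and mem_unit keys) that reports the memory usage as a single string; the printed value is illustrative:

>>> from glearn import info
>>> info_dict = info(print_only=False)
>>> print(info_dict['mem_used'], info_dict['mem_unit'])
1.7 Gb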

Examples

Print information:

>>> from glearn import info
>>> info()
glearn version  : 0.17.0
imate version   : 0.18.0
processor       : Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz
num threads     : 8
gpu device      : 'GeForce GTX 1080 Ti'
num gpu devices : 4
cuda version    : 11.2.0
nvidia driver   : 460.84
process memory  : 1.7 (Gb)

Return information as a dictionary:

>>> from glearn import info
>>> info_dict = info(print_only=False)

>>> # Neatly print dictionary using pprint
>>> from pprint import pprint
>>> pprint(info_dict)
{'cuda_version': '11.2.0',
 'glearn_version': '0.17.0',
 'gpu_name': 'GeForce GTX 1080 Ti',
 'imate_version': '0.18.0',
 'mem_unit': 'Gb',
 'mem_used': 1.7,
 'nvidia_driver': '460.84',
 'num_gpu_devices': 4,
 'num_threads': 8,
 'processor': 'Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz'}
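
Individual entries can then be read by key, for instance (assuming the documented key names):

>>> info_dict['num_gpu_devices']
4
>>> info_dict['processor']
'Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz'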