
Add ?gemm_batch routines #916

Open · foxtran wants to merge 1 commit into base: master
Conversation

@foxtran commented Oct 24, 2023

Description

This PR adds ?gemm_batch routines and a new xerblai error handler for batched routines. With an implementation tuned for multicore machines, ?gemm_batch routines can be extremely useful for large sets of small matrices.
See more about the performance improvement at the following link: https://www.intel.com/content/www/us/en/developer/articles/technical/introducing-batch-gemm-operations.html

Intel MKL and cuBLAS provide corresponding routines:

  1. Intel MKL https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-fortran/2023-2/gemm-batch.html
  2. cuBLAS https://docs.nvidia.com/cuda/cublas/#cublas-t-gemmbatched

A good article about a batched API for BLAS/LAPACK can be found here: https://eprints.maths.manchester.ac.uk/2464/1/batched_api.pdf

I took the API from Intel MKL, since it covers ?gemm_batch both for matrices of a fixed size and for matrices of different sizes: GROUP_COUNT specifies the number of matrix shapes, while GROUP_SIZE specifies the number of matrices with a given shape.
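
For illustration, a minimal sketch of a two-group call with an MKL-style interface is shown below. The way the matrices are passed here (arrays of C addresses built with C_LOC) is an assumption for the sketch and may differ from the argument types actually used in this PR.

```fortran
program dgemm_batch_demo
  use, intrinsic :: iso_c_binding, only: c_ptr, c_loc
  implicit none
  ! Two shape groups: group 1 holds two 4x4 products, group 2 holds one 8x8
  ! product, so GROUP_COUNT = 2 and the total batch count is 2 + 1 = 3.
  integer, parameter :: group_count = 2
  integer          :: group_size(group_count)
  character        :: transa(group_count), transb(group_count)
  integer          :: m(group_count), n(group_count), k(group_count)
  integer          :: lda(group_count), ldb(group_count), ldc(group_count)
  double precision :: alpha(group_count), beta(group_count)
  type(c_ptr)      :: a_ptrs(3), b_ptrs(3), c_ptrs(3)
  double precision, target :: a1(4,4), b1(4,4), c1(4,4)
  double precision, target :: a2(4,4), b2(4,4), c2(4,4)
  double precision, target :: a3(8,8), b3(8,8), c3(8,8)

  group_size = [2, 1]
  transa = 'N';  transb = 'N'
  m = [4, 8];  n = [4, 8];  k = [4, 8]
  lda = m;  ldb = k;  ldc = m
  alpha = 1.0d0;  beta = 0.0d0

  call random_number(a1); call random_number(b1); c1 = 0.0d0
  call random_number(a2); call random_number(b2); c2 = 0.0d0
  call random_number(a3); call random_number(b3); c3 = 0.0d0

  ! Flattened, group-by-group list of matrix addresses.
  a_ptrs = [c_loc(a1), c_loc(a2), c_loc(a3)]
  b_ptrs = [c_loc(b1), c_loc(b2), c_loc(b3)]
  c_ptrs = [c_loc(c1), c_loc(c2), c_loc(c3)]

  ! One call performs all three multiplications C_i = alpha*A_i*B_i + beta*C_i.
  call dgemm_batch(transa, transb, m, n, k, alpha, a_ptrs, lda, &
                   b_ptrs, ldb, beta, c_ptrs, ldc, group_count, group_size)
end program dgemm_batch_demo
```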

Note that the current implementation uses some features from Fortran 2003. The other option would be the LOC operation, which is an extension to the Fortran standard, so I would prefer not to use it.
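
To make the standard-conformance point concrete, the sketch below contrasts the nonstandard LOC extension with the Fortran 2003 route through ISO_C_BINDING. The description above does not spell out which Fortran 2003 features are used, so this pairing is only an illustrative assumption.

```fortran
program address_taking_demo
  ! C_LOC from ISO_C_BINDING (Fortran 2003) yields a portable address
  ! (TYPE(C_PTR)) of a TARGET object.
  use, intrinsic :: iso_c_binding, only: c_ptr, c_loc
  implicit none
  double precision, target :: a(4,4)
  type(c_ptr) :: pa

  a  = 0.0d0
  pa = c_loc(a)      ! standard-conforming (Fortran 2003)
  ! pa = loc(a)      ! LOC returns an integer address and is a vendor
  !                    extension, so it is avoided per the note above.
  print *, 'address of a captured via c_loc'
end program address_taking_demo
```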

Checklist

  • The documentation has been updated.
