Matrix Multiplication in Multiword Arithmetic: Error Analysis and Application to GPU Tensor Cores

Fasi, Massimiliano and Higham, Nicholas J. and Lopez, Florent and Mary, Theo and Mikaitis, Mantas (2022) Matrix Multiplication in Multiword Arithmetic: Error Analysis and Application to GPU Tensor Cores. [MIMS Preprint] (Submitted)

This is the latest version of this item.

Full text: fhlm22-R1.pdf (574 kB)

Abstract

In multiword arithmetic, a matrix is represented as the unevaluated sum of two or more lower-precision matrices, and a matrix product is formed by multiplying the constituents in low precision. We investigate the use of multiword arithmetic for improving the performance-accuracy tradeoff of matrix multiplication with mixed-precision block fused multiply-add (FMA) hardware, focusing especially on the tensor cores available on NVIDIA GPUs. Building on a general block FMA framework, we develop a comprehensive error analysis of multiword matrix multiplication. After confirming the theoretical error bounds experimentally by simulating low precision in software, we use the cuBLAS and CUTLASS libraries to implement a number of matrix multiplication algorithms using double-fp16 (double-binary16) arithmetic. When running the algorithms on NVIDIA V100 and A100 GPUs, we find that double-fp16 is not as accurate as fp32 (binary32) arithmetic despite satisfying the same worst-case error bound. Using probabilistic error analysis, we explain why this issue is likely to be caused by the rounding mode used by the NVIDIA tensor cores, and propose a parameterized blocked summation algorithm that alleviates the problem and significantly improves the performance-accuracy tradeoff.
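A minimal NumPy sketch of the double-fp16 idea summarized above. The splitting scheme, the choice to drop the lowest-order cross term, and all function names here are illustrative assumptions for exposition, not the paper's exact algorithms, which run on tensor cores through cuBLAS and CUTLASS and include the parameterized blocked summation.

    import numpy as np

    def split_fp16(A):
        # Represent A as an unevaluated sum A ~ A1 + A2 of two fp16
        # matrices: A1 carries the leading significand bits of each entry
        # and A2 the residual, so A1 + A2 reproduces A to roughly twice
        # fp16 precision (illustrative splitting, not the paper's code).
        A1 = A.astype(np.float16)
        A2 = (A - A1.astype(A.dtype)).astype(np.float16)
        return A1, A2

    def double_fp16_matmul(A, B):
        # Double-fp16 product of fp32 matrices A and B. Each fp16-by-fp16
        # partial product is accumulated in fp32, mimicking a block FMA
        # unit (e.g. a tensor core) with fp16 inputs and fp32 output.
        # The A2 @ B2 term is dropped, as it contributes below fp32 level.
        A1, A2 = split_fp16(A)
        B1, B2 = split_fp16(B)
        f32 = np.float32
        return (A1.astype(f32) @ B1.astype(f32)
                + A1.astype(f32) @ B2.astype(f32)
                + A2.astype(f32) @ B1.astype(f32))

    rng = np.random.default_rng(42)
    A = rng.standard_normal((512, 512), dtype=np.float32)
    B = rng.standard_normal((512, 512), dtype=np.float32)

    C_ref = A.astype(np.float64) @ B.astype(np.float64)
    C_dw = double_fp16_matmul(A, B).astype(np.float64)
    # Plain fp16 baseline. Note that NumPy may accumulate this product in
    # higher precision internally, so it only illustrates the effect of
    # rounding the inputs and output to fp16.
    C_16 = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float64)

    rel = lambda C: np.linalg.norm(C - C_ref) / np.linalg.norm(C_ref)
    print(f"plain fp16   relative error: {rel(C_16):.2e}")
    print(f"double-fp16  relative error: {rel(C_dw):.2e}")

On random matrices a run like this typically shows the double-fp16 error several orders of magnitude below the plain fp16 error, consistent with the worst-case bounds discussed in the abstract; this software simulation cannot, however, reproduce the tensor-core rounding-mode effect that the paper's probabilistic analysis and blocked summation algorithm address.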

Item Type: MIMS Preprint
Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 65 Numerical analysis
Depositing User: Mr Mantas Mikaitis
Date Deposited: 16 Jul 2022 08:36
Last Modified: 16 Jul 2022 08:36
URI: https://eprints.maths.manchester.ac.uk/id/eprint/2862
