Simulating Low Precision Floating-Point Arithmetic

Higham, Nicholas J. and Pranesh, Srikara (2019) Simulating Low Precision Floating-Point Arithmetic. [MIMS Preprint]

There is a more recent version of this item available.
Abstract

The half precision (fp16) floating-point format, defined in the 2008 revision of the IEEE standard for floating-point arithmetic, and a more recently proposed half precision format, bfloat16, are increasingly available in GPUs and other accelerators. While the support for low precision arithmetic is mainly motivated by machine learning applications, general purpose numerical algorithms can benefit from it too, gaining in speed and reducing both energy usage and communication costs. Since the appropriate hardware is not always available, and one may wish to experiment with new arithmetics not yet implemented in hardware, software simulations of low precision arithmetic are needed. We discuss how to simulate low precision arithmetic using arithmetic of higher precision. We examine the correctness of such simulations and explain via rounding error analysis why a natural method of simulation can provide results that are more accurate than actual computations at low precision. We provide a MATLAB function chop that can be used to efficiently simulate fp16 and bfloat16 arithmetics, with or without the representation of subnormal numbers and with the options of round to nearest, directed rounding, stochastic rounding, and random bit flips in the significand. We demonstrate the advantages of this approach over defining a new MATLAB class and overloading operators.
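The "natural method" of simulation the abstract refers to is to compute in a higher precision and then round each result to a target number of significand bits. The sketch below is our own minimal Python illustration of that idea; it borrows the name chop from the authors' MATLAB function but is not their implementation: it handles round-to-nearest only and ignores the exponent range, overflow, and subnormals.

```python
import math

def chop(x, t=11):
    """Round a binary64 value to the nearest number with a t-bit
    significand (t = 11 mimics fp16; t = 8 would mimic bfloat16).
    Minimal sketch only: round-to-nearest-even, with the exponent
    range, overflow, and subnormals ignored."""
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        return x
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    # Scale the significand to a t-bit integer, round, then scale back.
    return math.ldexp(round(m * 2.0**t), e - t)

# The fp16 unit roundoff is u = 2**-11: perturbations of 1.0 below u vanish.
print(chop(1.0 + 2.0**-11))   # halfway case, ties to even -> 1.0
print(chop(1.0 + 2.0**-10))   # representable with an 11-bit significand
```

Note that rounding only the final double precision result of each operation, as here, is exactly the double-rounding situation whose accuracy the paper's rounding error analysis addresses: it can yield results more accurate than genuine low precision hardware would produce.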

Item Type: MIMS Preprint
Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 65 Numerical analysis
Depositing User: Dr Srikara Pranesh
Date Deposited: 20 Mar 2019 11:44
Last Modified: 20 Mar 2019 11:44
URI: https://eprints.maths.manchester.ac.uk/id/eprint/2692

