Accurately Computing the Log-Sum-Exp and Softmax Functions

Blanchard, Pierre and Higham, Desmond J. and Higham, Nicholas J. (2019) Accurately Computing the Log-Sum-Exp and Softmax Functions. [MIMS Preprint]



Evaluating the log-sum-exp function or the softmax function is a key step in many modern data science algorithms, notably in inference and classification. Because of the exponentials that these functions contain, the evaluation is prone to overflow and underflow, especially in low precision arithmetic. Software implementations commonly use alternative formulas that avoid overflow and reduce the chance of harmful underflow, employing a shift or another rewriting. Although mathematically equivalent, these variants behave differently in floating-point arithmetic, and shifting can introduce subtractive cancellation. We give rounding error analyses of different evaluation algorithms and interpret the error bounds using condition numbers for the functions. We conclude, based on the analysis and numerical experiments, that the shifted formulas are of similar accuracy to the unshifted ones, so can safely be used, but that a division-free variant of softmax can suffer from loss of accuracy.
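As a minimal sketch of the shifted formulas the abstract refers to (not the authors' reference implementation), the standard trick subtracts the maximum entry before exponentiating, so that the largest argument to exp is zero and overflow cannot occur:

```python
import numpy as np

def logsumexp(x):
    # Shifted formula: lse(x) = a + log(sum_i exp(x_i - a)), a = max_i x_i.
    # Subtracting the max makes every exponent <= 0, preventing overflow.
    a = np.max(x)
    return a + np.log(np.sum(np.exp(x - a)))

def softmax(x):
    # Shifted softmax with an explicit division by the sum. The abstract
    # notes that a division-free variant (exp(x_i - lse(x))) can lose
    # accuracy, so the division form is used here.
    a = np.max(x)
    e = np.exp(x - a)
    return e / np.sum(e)
```

For example, `logsumexp(np.array([1000.0, 1000.0]))` returns a finite value close to 1000 + log 2, whereas the naive formula `np.log(np.sum(np.exp(x)))` overflows in double precision.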

Item Type: MIMS Preprint
Uncontrolled Keywords: log-sum-exp, softmax, floating-point arithmetic, rounding error analysis, overflow, underflow, condition number
Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 65 Numerical analysis
Depositing User: Nick Higham
Date Deposited: 17 May 2020 19:27
Last Modified: 17 May 2020 19:27
