# Topics in Matrix Computations: Stability and Efficiency of Algorithms

Hargreaves, Gareth (2005) Topics in Matrix Computations: Stability and Efficiency of Algorithms. Doctoral thesis, University of Manchester.

Numerical algorithms are considered for three distinct areas of numerical linear algebra: hyperbolic matrix computations, condition numbers of structured matrices, and trigonometric matrix functions.

We first consider hyperbolic rotations and show how to construct them accurately. A new accurate representation is devised which also avoids overflow. We show how to apply hyperbolic rotations directly, in mixed form, and by the OD procedure, giving a rounding error analysis which shows that the latter two methods are stable. A rounding error analysis is then given for combining a sequence of nonoverlapping hyperbolic rotations applied in mixed form or by the OD procedure. Applying a hyperbolic rotation directly is generally thought to be unstable, but no proof has previously been given; here, numerical experiments are used to prove that it is indeed unstable. We describe several methods of applying fast hyperbolic rotations and unified rotations, giving a rounding error analysis and numerical experiments to show which are stable and which are not. Hyperbolic Householder transformations are briefly discussed.

We then consider the hyperbolic QR factorization, for which we present new results on the existence of the closely related HR factorization, and use these to prove new theorems on the existence of the hyperbolic QR factorization. We describe how nonoverlapping hyperbolic rotations can be used to compute the hyperbolic QR factorization, with a rounding error analysis showing that this method is stable. Two applications of the hyperbolic QR factorization are also discussed.

For an $n \times n$ tridiagonal matrix we exploit the structure of its QR factorization to devise two new algorithms for computing the 1-norm condition number in $O(n)$ operations. The algorithms avoid underflow and overflow, and are simpler than existing algorithms since no tests for degenerate cases are required.
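As a simplified illustration of the hyperbolic rotation construction discussed above (a sketch under stated assumptions, not the thesis's algorithm): a $2 \times 2$ hyperbolic rotation $H = \begin{bmatrix} c & -s \\ -s & c \end{bmatrix}$ with $c^2 - s^2 = 1$ can be chosen to zero the second entry of a vector. Working with the ratio $t = x_2/x_1$ rather than forming $x_1^2 - x_2^2$ directly avoids overflow; the function names below are illustrative only.

```python
import math

def hyperbolic_rotation(x1, x2):
    """Construct c, s with c**2 - s**2 == 1 such that the hyperbolic
    rotation H = [[c, -s], [-s, c]] maps (x1, x2) to (r, 0).
    Requires |x1| > |x2|.  Uses t = x2/x1 so that x1**2 - x2**2
    is never formed explicitly, avoiding overflow for large x1, x2."""
    t = x2 / x1                                   # |t| < 1 by assumption
    c = 1.0 / math.sqrt((1.0 - t) * (1.0 + t))    # cosh(theta) >= 1
    s = t * c                                     # sinh(theta)
    return c, s

def apply_direct(c, s, x1, x2):
    """Apply H to (x1, x2) directly, entry by entry (the straightforward
    application whose instability in general is discussed above)."""
    return c * x1 - s * x2, -s * x1 + c * x2
```

For example, applying the rotation built from $(5, 3)$ to that same vector yields $(r, 0)$ with $r = \sqrt{5^2 - 3^2} = 4$; the factored form $(1-t)(1+t)$ also loses less accuracy than $1 - t^2$ when $|t|$ is close to 1.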
An error analysis of the first algorithm is given, and the second algorithm is shown to be competitive in speed with existing algorithms. We then turn our attention to an $n \times n$ diagonal-plus-semiseparable matrix $A$, for which several algorithms have recently been developed to solve $Ax=b$ in $O(n)$ operations. We again exploit the QR factorization of the matrix to present an algorithm that computes the 1-norm condition number in $O(n)$ operations.

Finally, we consider algorithms for computing the matrix cosine. The algorithms scale a matrix by a power of two to make the norm of the scaled matrix small, use a Padé approximation to compute the cosine of the scaled matrix, and recover the cosine of the original matrix using the double-angle formula $\cos(2A) = 2\cos^2(A)-I$. We make several improvements to an algorithm of Higham and Smith to derive new algorithms, which are shown by theory and numerical experiments to bring increased efficiency and accuracy. We also consider an algorithm for simultaneously computing $\cos(A)$ and $\sin(A)$ that extends the ideas for the cosine and intertwines the cosine and sine double-angle recurrences.
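The scaling, approximation, and double-angle recovery scheme for the matrix cosine can be sketched in simplified form. This sketch substitutes a truncated Taylor series for the Padé approximants used in the thesis's algorithms, works on plain nested lists rather than an optimized matrix type, and uses illustrative function names throughout.

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def cos_matrix(A, s=4, terms=8):
    """Compute cos(A) via scaling, a series approximation, and the
    double-angle recurrence cos(2X) = 2*cos(X)**2 - I.  A truncated
    Taylor series stands in here for the Pade approximant."""
    n = len(A)
    # Step 1: scale A by 2**s so the scaled matrix has small norm.
    B = [[a / 2 ** s for a in row] for row in A]
    # Step 2: truncated Taylor series cos(B) = sum (-1)^k B^(2k) / (2k)!.
    B2 = matmul(B, B)
    C = identity(n)
    P = identity(n)
    for k in range(1, terms):
        P = matmul(P, B2)
        coeff = (-1) ** k / math.factorial(2 * k)
        C = [[C[i][j] + coeff * P[i][j] for j in range(n)] for i in range(n)]
    # Step 3: undo the scaling with s double-angle steps.
    for _ in range(s):
        C2 = matmul(C, C)
        C = [[2 * C2[i][j] - (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    return C
```

As a quick check of the recurrence, for the Jordan block $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ the exact value is $\cos(A) = \begin{bmatrix} \cos 1 & -\sin 1 \\ 0 & \cos 1 \end{bmatrix}$, which the sketch reproduces to high accuracy. The trade-off the thesis's algorithms optimize is visible here: a larger scaling parameter $s$ lets a cheaper approximant suffice, but each double-angle step can amplify rounding errors.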