The growing availability of lower precision arithmetics has led to the
development of mixed precision algorithms in numerical linear algebra.
Most recently, we have seen the emergence of AI-specialized hardware, such
as GPU accelerators implementing extremely fast matrix
multiply--accumulate operations that operate only in very low precisions
such as 16-, 8-, or even 4-bit floating-point or integer formats. This
motivates a new class of mixed precision algorithms that aim to emulate
high accuracy using low precision computations. In this talk, we will
review the dense and rapidly growing literature on precision emulation.
In particular, we will compare two competing classes of methods, based
respectively on multiword and multimodular decompositions, and discuss in
which situations each method is preferable.
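As a rough illustration of the multiword idea (the notation below is ours,
not drawn from the talk): a working-precision matrix is split into a sum of
low precision words, and the product is then recovered from low precision
partial products accumulated at higher precision, e.g.,
\[
  A = A^{(1)} + A^{(2)}, \qquad B = B^{(1)} + B^{(2)}, \qquad
  AB \approx \sum_{i=1}^{2} \sum_{j=1}^{2} A^{(i)} B^{(j)},
\]
where, for instance, $A^{(1)} = \mathrm{fl}_{\mathrm{low}}(A)$ and
$A^{(2)} = \mathrm{fl}_{\mathrm{low}}(A - A^{(1)})$, each partial product
$A^{(i)} B^{(j)}$ is formed by the fast low precision multiply--accumulate
unit, and the final sum is accumulated in the working precision.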