My interest in random matrix theory is two-fold. On the one hand, it provides a very successful model in statistical mechanics, with precise notions of universality and a rich internal mathematical structure unifying integrable systems, operator theory, orthogonal polynomials, probability theory, and representation theory. On the other hand, random matrix theory meshes beautifully with numerical linear algebra. Some of the best constructions of matrix ensembles are due to numerical linear algebraists. Further, random matrix theory provides a way to quantify the performance of several workhorse algorithms of applied mathematics, including techniques for the solution of linear systems, eigenvalue problems, and linear and semidefinite programming.
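A minimal sketch of that interplay (using NumPy; the function name is mine) is the Dumitriu–Edelman tridiagonal β-Hermite ensemble, a construction found by numerical linear algebraists: Householder tridiagonalization of a Gaussian ensemble yields a sparse matrix model with the same eigenvalue statistics, sampled here with only O(n) random variables.

```python
import numpy as np

def beta_hermite_eigs(n, beta, seed=None):
    """Eigenvalues of the Dumitriu-Edelman tridiagonal beta-Hermite ensemble.

    Diagonal entries are N(0, 2); the k-th off-diagonal entry is a chi
    random variable with beta*(n - k) degrees of freedom; the matrix is
    scaled by 1/sqrt(2).  For beta = 1, 2, 4 the eigenvalue statistics
    match the GOE, GUE, and GSE respectively.
    """
    rng = np.random.default_rng(seed)
    diag = rng.normal(0.0, np.sqrt(2.0), size=n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    H = (np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)) / np.sqrt(2.0)
    return np.linalg.eigvalsh(H)

# Example: 500 GUE-distributed eigenvalues from a sparse tridiagonal model.
eigs = beta_hermite_eigs(500, beta=2, seed=0)
```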
My goal is to tie these ends together to provide precise answers to one of the central questions in applied mathematics: what are the structure and performance of its fundamental algorithms? The kind of question in this area that keeps me up at night is simple to state: what are the basic bounds on the average-case and worst-case performance of Gaussian elimination with partial pivoting? It has been almost seventy years since von Neumann and his co-workers began to look at such questions, but we still don't know the answer.
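To make the question concrete: the classical measure here is the growth factor of the elimination, and while the sharp bounds remain open, the average-case behavior can at least be probed experimentally. Below is a hedged Monte Carlo sketch (assuming NumPy and SciPy; the function name is mine) estimating the standard computable proxy max|U| / max|A| over iid Gaussian matrices.

```python
import numpy as np
from scipy.linalg import lu

def gepp_growth_proxy(A):
    """Proxy for the GEPP growth factor: max_ij |U_ij| / max_ij |A_ij|.

    scipy.linalg.lu returns P, L, U with A = P @ L @ U, where the
    permutation P records the partial-pivoting row exchanges.
    """
    _, _, U = lu(A)
    return np.abs(U).max() / np.abs(A).max()

rng = np.random.default_rng(0)
n, trials = 200, 100
rhos = [gepp_growth_proxy(rng.standard_normal((n, n))) for _ in range(trials)]
print(f"n = {n}: mean growth {np.mean(rhos):.2f}, max {np.max(rhos):.2f}")
```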
The explorations below began with Christian Pfrang's thesis and have grown into a thriving line of work, pursued mainly by Percy Deift and Tom Trogdon. I view the asymptotic performance of algorithms as a beautiful problem in statistical mechanics and have many problems for Ph.D. students in the area. The rise of deep learning has also meant that completely integrable flows and gradient flows on spaces of matrices provide useful benchmarks for the optimization under the hood of deep learning.
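One concrete bridge between integrable flows and iterative algorithms is Symes' theorem, studied by Deift, Nanda, and Tomei: a QR step applied to exp(T) advances the Toda lattice flow on symmetric tridiagonal matrices by exactly one unit of time. The sketch below (NumPy/SciPy; the function name is mine) illustrates that time-one map as an instance of the classical result, not of any particular benchmark from the work above.

```python
import numpy as np
from scipy.linalg import expm

def toda_time_one(T):
    """Advance the Toda flow on symmetric T by time 1 (Symes' theorem).

    If exp(T) = Q R is a QR factorization, then Q.T @ T @ Q is the Toda
    flow of T at time 1: an isospectral step that drives the off-diagonal
    entries to zero, exposing the eigenvalues on the diagonal.
    """
    Q, _ = np.linalg.qr(expm(T))
    return Q.T @ T @ Q

rng = np.random.default_rng(0)
a, b = rng.standard_normal(6), np.abs(rng.standard_normal(5))
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
for _ in range(30):
    T = toda_time_one(T)
print(np.round(np.diag(T), 4))  # approximates the eigenvalues of the original T
```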