Solving large linear systems with multiple right-hand sides

- Tuesday June 10th, 2003, 17h00/19h00, Saint Girons (Ariège, France) -
- Julien Langou -
- PhD thesis defence -

- defence for a PhD in Applied Mathematics of the ``Institut National des Sciences Appliquées de Toulouse'' -
Jury :
Guillaume Alléon EADS-CCR (France) guest
Åke Björck Linköping University (Sweden) external examiner
Iain S. Duff CERFACS (France) & RAL (UK) jury member
Luc Giraud CERFACS (France) thesis advisor
Gene H. Golub Stanford University (CA, USA) jury member
Gérard Meurant CEA (France) external examiner
Chris C. Paige McGill University (Canada) external examiner
Yousef Saad University of Minnesota (USA) guest
 
Abstract

  The starting point of this thesis is a problem posed by the electromagnetism group at EADS-CCR: how to solve several linear systems with the same coefficient matrix but different right-hand sides? For the targeted application, the matrices are complex, dense and huge (of order a few million). Because such matrices can neither be computed nor stored in the numerical simulations involved in a design process, the only alternative is an iterative scheme with an approximate matrix-vector product, here performed using the Fast Multipole Method. In this context, the goal of this thesis is to adapt Krylov solvers so that they handle multiple right-hand sides efficiently. Preliminary work on a single right-hand side showed that GMRES is an efficient and robust solver for this application. Consequently, this thesis mainly focuses on variants of GMRES.

  The orthogonalization schemes that we implemented for GMRES are variants of the Gram-Schmidt algorithm. In a first part, we investigate the effect of rounding errors in the Gram-Schmidt algorithms. Our results answer questions that have been open for the last 25 years, giving theoretical explanations of what was commonly observed and accepted, namely:

  • the modified Gram-Schmidt algorithm generates a well-conditioned set of vectors,
  • and the Gram-Schmidt algorithm iterated twice produces an orthogonal set of vectors.
These two statements hold when the initial matrix is ``not too ill-conditioned'' in a sense that is precisely defined. In a third part, we consider the Gram-Schmidt algorithm iterated with selective reorthogonalization and give a new reorthogonalization criterion. We prove that the resulting algorithm is robust, whereas the most commonly used criterion may have weaknesses. In a fourth part, we generalize standard results for modified Gram-Schmidt from norms to singular values. This enables us to propose an a-posteriori reorthogonalization procedure for the modified Gram-Schmidt algorithm based on a low-rank update. These results have several direct applications; we give examples for Krylov methods for solving linear systems and for eigenvalue computations. Finally, instead of the Euclidean scalar product, we derive our results on the Gram-Schmidt algorithm with the A-scalar product, where A is a Hermitian positive definite matrix. The relevance of such a study is illustrated in a collaboration with the Global Change team at CERFACS, where the Lanczos algorithm with reorthogonalization is used for data assimilation in a climate modelling problem.
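The two schemes compared above can be sketched in a few lines. This is a minimal illustration (not the thesis implementation, and with the Euclidean scalar product only) of modified Gram-Schmidt and of classical Gram-Schmidt iterated twice:

```python
from math import sqrt

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def mgs(vectors):
    """Modified Gram-Schmidt: subtract each projection as soon as it is computed."""
    q = []
    for v in vectors:
        w = list(v)
        for qj in q:
            c = dot(qj, w)  # coefficient computed from the *updated* vector w
            w = [wi - c * qi for wi, qi in zip(w, qj)]
        n = sqrt(dot(w, w))
        q.append([wi / n for wi in w])
    return q

def cgs2(vectors):
    """Classical Gram-Schmidt iterated twice: two full projection sweeps."""
    q = []
    for v in vectors:
        w = list(v)
        for _ in range(2):  # the second sweep restores orthogonality
            coeffs = [dot(qj, w) for qj in q]  # all coefficients from the same w
            for qj, c in zip(q, coeffs):
                w = [wi - c * qi for wi, qi in zip(w, qj)]
        n = sqrt(dot(w, w))
        q.append([wi / n for wi in w])
    return q

# Mildly ill-conditioned test vectors (nearly parallel, condition ~ 1e4)
vs = [[1.0, 1e-4, 0.0], [1.0, 0.0, 1e-4], [1.0, 1e-4, 1e-4]]
Q = cgs2(vs)
loss_cgs2 = max(abs(dot(Q[i], Q[j])) for i in range(3) for j in range(i))
```

In exact arithmetic the two routines return the same basis; the point of the rounding-error analysis is precisely how they differ in floating point, where MGS loses orthogonality gradually with the condition number while the twice-iterated scheme keeps it near machine precision.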

  In a second part, we implement variants of the GMRES algorithm, in both real and complex, single and double precision arithmetic, suitable for serial, shared-memory and distributed-memory computers. This software implementation, which complies with the quality standards of scientific libraries, is described in detail. For the sake of simplicity, flexibility and efficiency, the GMRES solvers are implemented using a reverse communication mechanism for the matrix-vector product, the preconditioning step and the dot-product computations. Several orthogonalization procedures are implemented to reduce the cost of the dot-product computations, a well-known bottleneck for the efficiency of Krylov methods in a parallel distributed environment. The implemented stopping criterion is based on a normwise backward error. The variants available are GMRES-DR, seed-GMRES and block-GMRES (in addition to standard GMRES, flexible GMRES and SQMR). In GMRES-DR, an LU matrix-vector product step is performed in order to store the approximate eigenvectors in the first Krylov vectors. In seed-GMRES, implicit restart and implicit preconditioning avoid one matrix-vector product and one preconditioning step per right-hand side and per GMRES cycle. The block-GMRES version lets the user select either no deflation or deflation based on the singular value decomposition of the Krylov vectors. Finally, we extend from GMRES to block-GMRES an existing theoretical result that relates the norm of the residual to the smallest singular value of the space constructed by the Krylov solver.
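The reverse communication mechanism mentioned above can be illustrated with a toy kernel: the solver never calls user code directly; it returns a request (matrix-vector product or preconditioning), the caller performs that operation with its own data structures, then re-enters the solver with the result. The sketch below is hypothetical Python showing only the control flow — the actual library is a Fortran implementation, and a preconditioned Richardson iteration stands in for the GMRES kernel to keep the example short:

```python
MATVEC, PRECOND, DONE = 1, 2, 0

def richardson_rc(b, tol=1e-12, maxit=200):
    """Toy solver kernel in reverse communication style: it yields
    (request, vector) and the caller sends back the computed result."""
    x = [0.0] * len(b)
    for _ in range(maxit):
        ax = yield (MATVEC, x)                   # request: compute A @ x
        r = [bi - ai for bi, ai in zip(b, ax)]   # residual b - A x
        if max(abs(ri) for ri in r) < tol:
            break
        z = yield (PRECOND, r)                   # request: compute M^{-1} r
        x = [xi + zi for xi, zi in zip(x, z)]    # update x <- x + M^{-1} r
    yield (DONE, x)

# Caller side: the driver loop owns the matrix and the preconditioner,
# so the kernel never needs to know how they are stored or applied.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
matvec = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
precond = lambda v: [vi / A[i][i] for i, vi in enumerate(v)]  # Jacobi

solver = richardson_rc(b)
req, vec = next(solver)
while req != DONE:
    req, vec = solver.send(matvec(vec) if req == MATVEC else precond(vec))
x = vec  # approximate solution of A x = b
```

A production reverse-communication interface additionally returns control for dot products, which is what allows the caller to substitute its own (possibly distributed) reduction, as described in the text.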

  The third part is dedicated to the improvement of these standard methods for the solution of linear systems arising in electromagnetic applications. After a detailed presentation of the code we use, we study the influence of the level of non-symmetry on the SQMR algorithm and illustrate the behaviour of the GMRES-DR method on our examples. We then focus mainly on solves with multiple right-hand sides. First, we examine in detail techniques for adapting single right-hand side methods to the multiple right-hand side case: improving the quality of the preconditioner, initial-guess strategies, or gathering strategies. In the context of monostatic radar cross section calculations, we prove that the number of independent right-hand sides is finite. The dimension of the space of right-hand sides predicted by our theory and the dimension observed numerically correspond well to each other. This property enables us to reduce considerably the number of linear systems to solve. In this context, a particular implementation of the block-GMRES method is given. Then, some specific issues about the seed and block methods are discussed. Finally, more prospective results are given. First, different strategies to extract and reuse spectral information in a GMRES cycle are presented and compared. Then, we exploit the fact that the Fast Multipole Method is an inexact matrix-vector product whose accuracy can be tuned: the less accurate the matrix-vector product, the faster the computation. We show how to take advantage of this by using relaxed schemes (inexact Krylov methods) or inner-outer schemes (flexible GMRES). Finally, we study the relevance of the normwise backward error as a stopping criterion for iterative solvers in the monostatic radar cross section problem.
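As an illustration of the relaxed schemes mentioned above, the accuracy requested from the inexact matrix-vector product can be loosened as the outer residual decreases. The function below is a hypothetical sketch in the spirit of relaxed inexact Krylov strategies; the name and the clamping bounds are illustrative, not those of the thesis:

```python
def relaxed_matvec_tolerance(target_tol, residual_norm, eta_min=1e-16, eta_max=1e-1):
    """Perturbation allowed in the inexact matrix-vector product (e.g. a tunable
    FMM) at the current step: roughly target_tol / ||r_k||, clamped to bounds.
    Early on (large residual) an accurate product is needed; near convergence
    a much looser -- hence faster -- product suffices."""
    eta = target_tol / max(residual_norm, target_tol)
    return min(max(eta, eta_min), eta_max)

# As the outer residual norm decreases, the requested accuracy relaxes:
schedule = [relaxed_matvec_tolerance(1e-8, r) for r in (1.0, 1e-2, 1e-4, 1e-6)]
```

The monotone loosening of the schedule is the whole point: the expensive, highly accurate multipole evaluations are concentrated in the first iterations, where they matter for convergence.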