Speaker:
Speaker Link:
Institution:
Time:
Location:
Recently, a class of algorithms combining classical fixed-point iterations with repeated random sparsification of the approximate solution vector has been successfully applied to eigenproblems with matrices as large as 10^{108} x 10^{108}. So far, a complete mathematical explanation of their success has proven elusive, and the family of random sparsification methods has not yet been extended to the important case of solving linear systems. This talk proposes a new algorithm based on repeated random sparsification that can solve linear systems in extremely high dimensions, together with a complete mathematical analysis of its convergence. The analysis establishes a faster-than-Monte Carlo convergence rate and justifies the use of the scheme even when the solution vector is too large to store.
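
For readers unfamiliar with the idea, the sketch below illustrates (in Python) how repeated random sparsification can be combined with a classical fixed-point iteration, here a Richardson iteration for Ax = b. It is not the algorithm presented in the talk: the function names, the keep-the-largest-then-sample sparsification rule, the running average of iterates, and all parameter values are illustrative assumptions made for this example.

```python
import numpy as np

def sparsify(x, m, rng):
    """Unbiased random sparsification (illustrative rule): keep the largest
    entries of x exactly, sample the rest with probability proportional to
    |x_i|, and reweight so the output has expectation x and roughly m
    nonzero entries."""
    if np.count_nonzero(x) <= m:
        return x.copy()
    out = np.zeros_like(x)
    order = np.argsort(-np.abs(x))        # indices by decreasing magnitude
    keep, rest = order[: m // 2], order[m // 2:]
    out[keep] = x[keep]                   # largest entries kept exactly
    weights = np.abs(x[rest])
    total = weights.sum()
    if total > 0.0:
        k = m - m // 2
        probs = np.minimum(1.0, k * weights / total)
        hit = rng.random(rest.size) < probs
        out[rest[hit]] = x[rest[hit]] / probs[hit]   # reweight for unbiasedness
    return out

def sparsified_richardson(A, b, m, steps=500, omega=0.5, seed=0):
    """Richardson fixed-point iteration x <- x + omega*(b - A x), with the
    iterate randomly sparsified to ~m nonzeros after every step.  The
    deterministic iteration converges when the spectral radius of
    I - omega*A is below 1; the running average damps sampling noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = sparsify(x + omega * (b - A @ x), m, rng)
        avg += (x - avg) / t
    return avg

if __name__ == "__main__":
    # Small synthetic test problem (illustrative only).
    rng = np.random.default_rng(1)
    n = 200
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
    b = rng.standard_normal(n)
    x_hat = sparsified_richardson(A, b, m=50, steps=2000, omega=0.9)
    print("relative residual:", np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```

In this toy setting each iterate carries only about m nonzero entries, which is what makes the approach attractive when the full solution vector is too large to store; the matrix A here is dense and small purely for demonstration.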