Project description
Many linear inverse problems can be formulated as convex optimization problems. These can be solved using iterative schemes that involve the gradients and proximal operators of the constituent functionals of the objective.
Computing the gradient requires evaluating the adjoint of the forward operator. In many large-scale applications, however, the adjoint cannot be evaluated exactly. The result is a biased approximation of the gradient, which can cause the iterative algorithm to fail to converge.
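As an illustration, the following minimal sketch runs proximal gradient descent (ISTA) on a small lasso problem, once with the exact adjoint and once with a perturbed adjoint standing in for an inexact one. The problem sizes, the random-perturbation model of the inexact adjoint, and all parameter values are assumptions made for this example, not part of the project.

```python
# Proximal gradient descent (ISTA) for the lasso problem
#   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
# with the adjoint passed in explicitly so that an approximate
# adjoint can be substituted for the exact one.
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, adjoint, b, lam, step, n_iter=300):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = adjoint(A @ x - b)   # the exact gradient is A^T (A x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = 1.0
b = A @ x_true + 0.05 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / ||A||^2

# Exact adjoint: the gradient is unbiased.
x_exact = ista(A, lambda r: A.T @ r, b, lam, step)

# Inexact adjoint, modelled here (an assumption for illustration) as a
# small random perturbation of A^T: every gradient evaluation is biased.
B = A.T + 0.002 * rng.standard_normal((n, m))
x_inexact = ista(A, lambda r: B @ r, b, lam, step)

print(np.linalg.norm(x_exact - x_true))    # reconstruction error, exact adjoint
print(np.linalg.norm(x_inexact - x_true))  # reconstruction error, biased gradient
```

Passing the adjoint as a separate argument mirrors the practical situation: the code computing `A @ x` and the code approximating `A.T @ r` are independent implementations, and nothing forces them to be consistent.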
In this project, you will develop iterative methods that are robust to such errors. You will also investigate the regularizing effect of iterative methods (semi-convergence) in the presence of noisy data.
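Semi-convergence can be seen in the following minimal sketch, assuming a simple synthetic setup: Landweber iteration applied to noisy data from an ill-conditioned operator constructed from a prescribed SVD. The reconstruction error first decreases and then increases as the iterates begin to fit the noise, so stopping the iteration early acts as a regularizer. All sizes, the noise level, and the singular-value decay are assumptions chosen for the demonstration.

```python
# Landweber iteration on noisy data, illustrating semi-convergence.
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Ill-conditioned forward operator built from a chosen SVD:
# singular values decay from 1 down to 1e-2 (illustrative choice).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0.0, -2.0, n)
A = U @ (s[:, None] * V.T)

x_true = rng.standard_normal(n)
b_noisy = A @ x_true + 0.1 * rng.standard_normal(n)

# Landweber: x_{k+1} = x_k + w * A^T (b - A x_k), with w < 2 / ||A||^2
w = 1.0
x = np.zeros(n)
errors = []
for _ in range(20000):
    x = x + w * A.T @ (b_noisy - A @ x)
    errors.append(np.linalg.norm(x - x_true))

k_best = int(np.argmin(errors))
print(k_best)            # iteration with the smallest reconstruction error
print(errors[k_best])    # error at the optimal stopping index
print(errors[-1])        # error after all iterations: larger, noise dominates
```

The error history is minimized at an intermediate iteration rather than at the end; choosing a principled stopping index (for example via a discrepancy principle) is exactly the kind of question the project addresses.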