PICOF'12

Sparsity and Inverse Problems in Image Processing

J. Fadili (ENSICAEN), J. Starck (CEA)

Divide and Conquer: Splitting Algorithms for Inverse Problems

Jalal Fadili (ENSICAEN)

This talk focuses on several nonsmooth convex optimization problems arising in inverse problems. Such problems appear in many fields, including image and signal processing, and have recently attracted considerable interest. We formalize many of these problems within a unified and versatile framework drawn from convex optimization theory, and invoke tools from convex analysis (e.g. duality, proximity operators) and maximal monotone operator splitting. We show how fast iterative splitting algorithms can be used to solve these problems in their primal or dual versions, which makes them very competitive for large-scale problems. This framework includes several previously proposed algorithms as special cases. Applications to inverse problems such as denoising, component separation, inpainting and compressed sensing are also reported to demonstrate the wide range of applicability of the proposed framework.
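As a concrete illustration of the proximal machinery behind such splitting schemes, the Python sketch below implements plain forward-backward (proximal gradient) iterations for an L1-regularized least-squares inverse problem. This is a minimal example in our own notation, not the algorithms of the talk; the operator Phi, the fixed step-size rule and the iteration count are illustrative assumptions.

    import numpy as np

    def soft_threshold(z, t):
        # Proximity operator of t * ||.||_1: component-wise soft-thresholding.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def forward_backward(Phi, y, lam, n_iter=200):
        # Forward-backward splitting for min_x 0.5*||Phi x - y||^2 + lam*||x||_1.
        # The step size 1/L uses L = ||Phi||_2^2, the Lipschitz constant of the
        # gradient of the smooth data-fidelity term.
        L = np.linalg.norm(Phi, 2) ** 2
        step = 1.0 / L
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            grad = Phi.T @ (Phi @ x - y)                     # forward (gradient) step
            x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
        return x

    # Toy usage: recover a sparse vector from random Gaussian measurements.
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 256)) / np.sqrt(64)
    x0 = np.zeros(256)
    x0[rng.choice(256, size=8, replace=False)] = 1.0
    y = Phi @ x0
    x_hat = forward_backward(Phi, y, lam=0.02)

Accelerated (FISTA-type) and dual variants follow the same forward/backward pattern, differing only in their step and extrapolation rules.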

Robust Sparse Analysis Regularization

Gabriel Peyré (CNRS, Univ. Paris-Dauphine)

In this talk I will detail several key properties of L1-analysis regularization for the resolution of linear inverse problems [5]. With the notable exception of [1,3], most previous theoretical works consider sparse synthesis priors, where sparsity is measured by the L1 norm of the coefficients that synthesize the signal in a given dictionary; see for instance [2,4]. In contrast, the more general analysis regularization minimizes the L1 norm of the correlations between the signal and the atoms of the dictionary. The corresponding variational problem includes several well-known regularizations such as the discrete total variation, the fused lasso and sparse correlation with translation-invariant wavelets. I will first study the variations of the solution with respect to the observations and the regularization parameter, which enables the computation of the degrees-of-freedom estimator. I will then give a sufficient condition ensuring that a signal is the unique solution of the analysis regularization when there is no noise in the observations. The same criterion ensures the robustness of the sparse analysis solution to a small noise in the observations. Lastly, I will define a stronger condition that ensures robustness to an arbitrary bounded noise. In the special case of synthesis regularization, our contributions recover already known results [2,4], which are hence generalized to the analysis setting. I will illustrate these theoretical results on practical examples to study the robustness of the total variation, fused lasso and translation-invariant wavelet regularizations. (This is joint work with S. Vaiter, C. Dossal and J. Fadili.)
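For reference, and in our own notation rather than the speaker's (Phi the forward operator, D the dictionary, y the observations, lambda > 0 the regularization parameter), the two formulations contrasted above read

    % synthesis-sparsity regularization
    \min_{\alpha}\ \tfrac{1}{2}\|y - \Phi D \alpha\|_2^2 + \lambda \|\alpha\|_1,
    \qquad x^\star = D \alpha^\star
    % analysis-sparsity regularization
    \min_{x}\ \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \|D^* x\|_1

The two problems coincide when D is orthogonal but differ for redundant dictionaries; the discrete total variation corresponds to D^* being a finite-difference operator.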

Bibliography

  1. E. Candes, Y.C. Eldar, D. Needell, and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31(1):59–73, 2010.
  2. J.J. Fuchs. On sparse representations in arbitrary redundant bases. IEEE Transactions on Information Theory, 50(6):1341–1344, 2004.
  3. S. Nam, M.E. Davies, M. Elad, and R. Gribonval. The cosparse analysis model and algorithms. Preprint arXiv:1106.4987, 2011.
  4. J.A. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory, 52 (3):1030–1051, 2006.
  5. S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. Preprint arXiv:1109.6222v1, 2011.

Bayesian Reconstruction of Biomedical Images Within the Framework of Sparse Stochastic Processes

Michael Unser (EPFL)

We introduce a general statistical framework for image reconstruction from noisy linear measurements based on the theory of continuous-domain sparse stochastic processes. These processes are defined in terms of a generalized innovation model: they are characterized by a whitening operator, which shapes their Fourier spectrum, and a Lévy exponent, which controls the sparsity of the (non-Gaussian) innovations (white Lévy noise). Starting from the characteristic form of these processes, we derive an extended family of MAP estimators. While this family includes the traditional methods of Tikhonov and total-variation (TV) regularization as particular cases, it opens the door to a much broader class of potential functions (associated with infinitely divisible priors) that are inherently sparse and typically nonconvex. By introducing a discrete counterpart of the innovation variables, we are able to develop an alternating minimization scheme that can handle arbitrary potential functions. We apply our framework to the reconstruction of magnetic resonance images and phase-contrast tomograms; we also present simulation examples where the proposed scheme outperforms the more traditional convex optimization techniques (in particular, TV). (This is joint work with Emrah Bostan, Ulugbek Kamilov and Masih Nilchian.)
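In generic notation (ours, inferred from the abstract rather than taken from the authors' formulas): with H the linear measurement operator, sigma^2 the noise variance, L a discretization of the whitening operator, and Phi the potential function induced by the Lévy exponent of the innovations, MAP estimators of this family take the form

    x_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{x}\;
    \frac{1}{2\sigma^{2}} \| y - H x \|_2^2 \;+\; \sum_{n} \Phi\!\big( [Lx]_n \big).

A quadratic Phi recovers Tikhonov regularization, while Phi(u) = lambda*|u| with L a first-order difference operator recovers TV; heavier-tailed infinitely divisible priors yield the nonconvex, more strongly sparsity-promoting potentials mentioned above.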
