Learning a Gaussian mixture for sparsity regularization in inverse problems
Alberti, Giovanni S.; Ratti, L.; Santacesaria, M.; Sciutto, S.
2025-01-01
Abstract
In inverse problems it is widely recognized that incorporating a sparsity prior has a regularizing effect on the solution. This approach is grounded in the a priori assumption that the unknown can be appropriately represented in a basis with a small number of significant components, while most coefficients are close to zero. Such sparsity frequently arises in real-world scenarios, for instance with piecewise smooth signals. In this study we propose a probabilistic sparsity prior formulated as a mixture of degenerate Gaussians, capable of modelling sparsity with respect to a generic basis. Under this premise, we design a neural network that can be interpreted as the Bayes estimator for linear inverse problems. Additionally, we put forth both a supervised and an unsupervised training strategy to estimate the parameters of this network. To evaluate the effectiveness of our approach, we conduct a numerical comparison with commonly employed sparsity-promoting regularization techniques, namely the Least Absolute Shrinkage and Selection Operator (LASSO), group LASSO, iterative hard thresholding, and sparse coding/dictionary learning. Notably, our reconstructions consistently exhibit lower mean square error values across all one-dimensional datasets used in the comparisons, even when the datasets deviate significantly from a Gaussian mixture model.
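
As a brief, informal sketch of the setting described above (the notation here is ours and may differ from the paper's): consider the linear inverse problem y = Ax + \varepsilon with Gaussian noise \varepsilon \sim \mathcal{N}(0, \sigma^2 I), and a prior on x given by a mixture of degenerate Gaussians,

    x \sim \sum_{S} w_S \, \mathcal{N}(\mu_S, \Sigma_S),

where each covariance \Sigma_S is rank-deficient, supported on the span of a sparse subset S of the chosen basis vectors. A standard computation for Gaussian mixture priors in linear-Gaussian models then yields the Bayes (minimum mean square error) estimator in closed form, as a posterior-weighted combination of affine maps:

    \hat{x}(y) = \sum_{S} \pi_S(y) \left[ \mu_S + \Sigma_S A^\top (A \Sigma_S A^\top + \sigma^2 I)^{-1} (y - A \mu_S) \right],
    \pi_S(y) \propto w_S \, \mathcal{N}(y; A \mu_S, A \Sigma_S A^\top + \sigma^2 I).

This gated combination of affine estimators is exactly the kind of structure that can be realized as a neural network with learnable parameters, consistent with the network interpretation stated in the abstract.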

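For context on the LASSO baseline named in the abstract, the following is a minimal, self-contained sketch (problem sizes, noise level, and regularization strength are illustrative placeholders, not the paper's experimental setup), using scikit-learn's Lasso to reconstruct a sparse signal from noisy linear measurements:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, m, k = 64, 128, 5                            # measurements, signal length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)    # random linear forward operator

# Ground-truth sparse signal: k nonzero coefficients at random positions.
x_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

y = A @ x_true + 0.01 * rng.standard_normal(n)  # noisy measurements

# LASSO reconstruction: minimize (1/(2n)) * ||y - A x||^2 + alpha * ||x||_1.
lasso = Lasso(alpha=0.001, fit_intercept=False, max_iter=10_000)
lasso.fit(A, y)
x_hat = lasso.coef_

print("reconstruction MSE:", np.mean((x_hat - x_true) ** 2))

Mean square error against the ground truth, as printed here, is the figure of merit used for the comparisons in the abstract.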


