About Split Proximal Algorithms for the Q-Lasso

Abdellatif Moudafi

Abstract

Numerous problems in signal processing and imaging, statistical learning and data mining, and computer vision can be formulated as optimization problems consisting in the minimization of a sum of convex functions, not necessarily differentiable and possibly composed with linear operators. Each function is typically either a data-fidelity term or a regularization term enforcing some properties on the solution; see for example [5, 6] and the references therein. In this note we are interested in the general form of the Q-Lasso introduced in [1], which generalizes the well-known Lasso of Tibshirani [9]. Here $Q$ is a closed convex subset of a Euclidean $m$-space, for some integer $m\geq1$, which can be interpreted as the set of errors within a given tolerance level when linear measurements are taken to recover a signal/image via the Lasso. Only the unconstrained case was discussed in [1]; here we discuss some split proximal algorithms for solving the general case. It is worth mentioning that the Lasso models a number of applied problems arising in machine learning and signal/image processing, owing to the fact that it promotes the sparsity of a signal.
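To illustrate the kind of split proximal iteration the abstract refers to, the following sketch applies a proximal-gradient (ISTA-type) step to the Q-Lasso objective $\min_x \tfrac{1}{2}\,\mathrm{dist}^2(Ax, Q) + \gamma\|x\|_1$, whose smooth part has gradient $A^\top(Ax - P_Q(Ax))$. This is a generic illustration, not the specific algorithms of the paper; the matrix `A`, the box-shaped set `Q`, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def q_lasso_ista(A, proj_Q, gamma, x0, step, n_iter=500):
    """Proximal-gradient sketch for
       min_x 0.5 * dist(Ax, Q)^2 + gamma * ||x||_1,
    where proj_Q is the metric projection onto the closed convex set Q.
    The gradient of the smooth part is A^T (Ax - P_Q(Ax))."""
    x = x0.copy()
    for _ in range(n_iter):
        r = A @ x
        grad = A.T @ (r - proj_Q(r))
        x = soft_threshold(x - step * grad, step * gamma)
    return x

# Toy example (illustrative data): Q is the box [b - eps, b + eps],
# i.e. measurements are trusted only up to a tolerance eps.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth
b = A @ x_true
eps = 0.1
proj_Q = lambda y: np.clip(y, b - eps, b + eps)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = ||A||_2^2 bounds the Lipschitz constant
x_hat = q_lasso_ista(A, proj_Q, gamma=0.1, x0=np.zeros(10), step=step)
```

With `eps = 0` the set $Q$ reduces to the singleton $\{b\}$ and the iteration is the classical ISTA for the unconstrained Lasso, which is the case treated in [1].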

Published

2017-04-01

How to Cite

Moudafi, A. (2017). About Split Proximal Algorithms for the Q-Lasso. Thai Journal of Mathematics, 15(1), 1–7. Retrieved from https://thaijmath2.in.cmu.ac.th/index.php/thaijmath/article/view/660

Section

Articles