
Any Differences in `cvxpy` Library Between L2 Norm and `sum_of_square()`?

I am trying to use the cvxpy library to solve a very simple least-squares problem, but I found that cvxpy gives me very different results when I use sum_squares versus norm(x, 2) as the loss function. The same happens when I compare the l1 norm with the sum of absolute values.

Do these differences come from the mathematical definition of the optimization problem, or from the library's implementation?

Here is my code example:

s = cvx.Variable(n)

constraints = [cvx.sum(s) == 1, s >= 0, s <= 1]

prob = cvx.Problem(cvx.Minimize(cvx.sum_squares(A @ s - y)), constraints)

prob = cvx.Problem(cvx.Minimize(cvx.norm(A @ s - y, 2)), constraints)

Both y and s are vectors representing histograms: y is the histogram of the randomized data, and s is the original histogram that I want to recover. A is an n×m "transition" matrix giving the probabilities with which s is randomized into y.

The following are the histograms of the recovered variable s:

[Figure: histogram of s using sum_squares]

[Figure: histogram of s using norm(·, 2)]

Let's examine both problems using the Lagrangian (penalized) form:

$$\begin{align*} {\left\| A x - b \right\|}_{2} + \lambda R \left( x \right) \tag{1} \\ {\left\| A x - b \right\|}_{2}^{2} + \lambda R \left( x \right) \tag{2} \end{align*}$$


Clearly, for the same value of $ \lambda $, problems (1) and (2) yield different results, because the fidelity term $ \left\| A x - b \right\|_{2} $ enters the objective with a different weight relative to the regularizer $ \lambda R(x) $.
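One way to make this precise (a standard smooth-case argument, assuming the residual at the optimum is nonzero so both objectives are differentiable):

```latex
\text{At a stationary point } x^{\star} \text{ of (2), with }
r = \left\| A x^{\star} - b \right\|_{2} > 0:
\qquad
2 A^{T} \left( A x^{\star} - b \right) + \lambda \nabla R \left( x^{\star} \right) = 0
\;\Longleftrightarrow\;
\frac{A^{T} \left( A x^{\star} - b \right)}{r}
+ \frac{\lambda}{2 r} \nabla R \left( x^{\star} \right) = 0 .
```

The right-hand condition is exactly the stationarity condition of (1) with the effective parameter $ \lambda' = \lambda / (2r) $. So the two penalized families trace out the same solution path, but a fixed $ \lambda $ lands at different points on it, which is why the recovered histograms differ.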

There is nothing wrong with the code or the solver, it is just a different model.
