
Which L1 norm does sklearn.preprocessing.normalize consider?

In this reference http://mathworld.wolfram.com/L1-Norm.html , the L1 norm is defined as the sum of the absolute values of the entries of a vector.

Now, on this website http://www.chioka.in/differences-between-the-l1-norm-and-the-l2-norm-least-absolute-deviations-and-least-squares/ the L1 norm is described as being calculated by summing up the differences between each value of a vector and the vector mean.

My question is: why are there such different interpretations of the same norm? Which one is correct? And most importantly, which one does sklearn.preprocessing.normalize use, and how?

These are two different scenarios. The first one refers to the norm of a vector, which is a measure of the length of the vector.
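This first interpretation is the one sklearn.preprocessing.normalize uses: with norm='l1', each row of the input is divided by its own L1 norm, so the absolute values of every normalized row sum to 1. A minimal sketch, replicating that scaling with NumPy alone:

```python
import numpy as np

v = np.array([3.0, -4.0, 2.0])

# L1 norm of a vector: the sum of the absolute values of its entries
l1 = np.abs(v).sum()                      # |3| + |-4| + |2| = 9.0
assert l1 == np.linalg.norm(v, ord=1)

# sklearn.preprocessing.normalize(X, norm='l1') divides each (nonzero)
# row by that row's L1 norm; the same result computed by hand:
X = np.array([[3.0, -4.0, 2.0],
              [1.0,  1.0, 0.0]])
X_l1 = X / np.abs(X).sum(axis=1, keepdims=True)

# Every normalized row now has unit L1 norm.
print(np.abs(X_l1).sum(axis=1))           # [1. 1.]
```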

The second use of L1 refers to a loss function, used to measure how well your model performs. Here the L1 loss is NOT calculated by summing up the differences between each value of the vector and the vector mean. Rather, it is calculated by taking the absolute difference between each true value and its corresponding prediction, then summing those absolute differences. In this case, the vector itself is the difference vector between the true values and the predictions.
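In other words, the loss is still the L1 norm from the first interpretation, just applied to the residual vector. A short illustration with made-up true and predicted values:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])   # hypothetical targets
y_pred = np.array([1.5, 1.5, 2.0])   # hypothetical predictions

# The "vector" here is the residual: true values minus predictions.
residual = y_true - y_pred           # [-0.5, 0.5, 1.0]

# L1 loss (least absolute deviations): the sum of absolute residuals,
# i.e. the L1 norm of the residual vector.
l1_loss = np.abs(residual).sum()     # 0.5 + 0.5 + 1.0 = 2.0
```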
