Which L1 norm does sklearn.preprocessing.normalize consider?
In this reference, http://mathworld.wolfram.com/L1-Norm.html , the L1 norm is calculated as the sum of the values in a vector.
Now, on this website, http://www.chioka.in/differences-between-the-l1-norm-and-the-l2-norm-least-absolute-deviations-and-least-squares/ , the L1 norm is calculated by summing the differences between each value of a vector and the vector mean.
My question is: why are there such different interpretations of the same norm? Which one is correct? And most importantly, which one does sklearn.preprocessing.normalize use, and how does it use it?
These are two different scenarios. The first refers to the norm of a vector, which is a measure of the vector's length.
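As a minimal sketch, the L1 norm in this first sense is simply the sum of the absolute values of the vector's entries (the example vector here is made up for illustration):

```python
import numpy as np

# A hypothetical example vector.
v = np.array([1.0, -2.0, 3.0])

# L1 norm of a vector: sum of the absolute values of its entries.
l1_norm = np.sum(np.abs(v))  # |1| + |-2| + |3| = 6.0

# np.linalg.norm with ord=1 computes the same thing.
assert l1_norm == np.linalg.norm(v, ord=1)
```

Note that the absolute values matter: without them, positive and negative entries would cancel, and the result would no longer measure length.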
The second use of L1 refers to the loss function, used to measure how well your model performs. Here the L1 loss is NOT calculated by summing the differences between each value of the vector and the vector mean. Rather, it is calculated by first taking the absolute value of the difference between each true value and its corresponding prediction, and then summing those absolute values. In this case, the vector itself is the difference vector between the true values and the predictions.
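To answer the sklearn part of the question: sklearn.preprocessing.normalize with norm='l1' uses the first interpretation, dividing each sample (row, by default) by the sum of the absolute values of its entries. A minimal sketch with a made-up matrix:

```python
import numpy as np
from sklearn.preprocessing import normalize

# A hypothetical 2-sample matrix; normalize works row-wise by default (axis=1).
X = np.array([[1.0, -2.0, 3.0],
              [4.0,  0.0, -4.0]])

# With norm='l1', each row is divided by its L1 norm (sum of absolute values),
# so the absolute values of each normalized row sum to 1.
X_l1 = normalize(X, norm='l1')
# Row 0: [1, -2, 3] / 6 -> [1/6, -1/3, 1/2]
# Row 1: [4,  0, -4] / 8 -> [1/2,  0,  -1/2]

print(np.abs(X_l1).sum(axis=1))  # each row now has unit L1 norm
```

So normalize is about rescaling feature vectors to unit length, not about computing a loss; the L1 loss interpretation from the second link does not apply here.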