
Which L1 norm does sklearn.preprocessing.normalize consider?

In this reference, http://mathworld.wolfram.com/L1-Norm.html , the L1 norm is calculated as the sum of the absolute values of the entries of a vector.

Now, on this website, http://www.chioka.in/differences-between-the-l1-norm-and-the-l2-norm-least-absolute-deviations-and-least-squares/ , the L1 norm is calculated by summing up the differences between each value of a vector and the vector mean.

My question is: why are there such different interpretations of the same norm? Which one is correct? And, most importantly, which one does sklearn.preprocessing.normalize use, and how?

These are two different scenarios. The first refers to the norm of a vector, which is a measure of the length of the vector.
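This first meaning is the one sklearn.preprocessing.normalize uses: with norm='l1', each sample (row) is divided by the sum of the absolute values of its entries, so the result's absolute values sum to 1. A minimal sketch with a toy 1×3 array (the values are illustrative only):

```python
import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[1.0, -2.0, 3.0]])

# With norm='l1', each row is divided by its L1 norm:
# |1| + |-2| + |3| = 6, so the row becomes [1/6, -2/6, 3/6].
result = normalize(X, norm='l1')

# Equivalent manual computation: divide by the row-wise sum
# of absolute values.
manual = X / np.abs(X).sum(axis=1, keepdims=True)

print(result)
print(np.allclose(result, manual))  # True
```

Note that the signs are preserved; only the magnitudes are rescaled so they sum to 1.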

The second use of L1 refers to the loss function, used to measure how well your model performs. Here L1 is NOT calculated by summing up the differences between each value of the vector and the vector mean. Rather, it is calculated by taking the absolute value of the difference between each true value and its corresponding prediction, and then summing those absolute differences. In this case, the vector itself is the difference vector between the true values and the predictions.
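In other words, the L1 loss is just the L1 norm applied to the residual vector. A short sketch with made-up true values and predictions:

```python
import numpy as np

# Hypothetical true values and model predictions.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# L1 loss: the L1 norm of the difference vector,
# sum_i |y_true_i - y_pred_i| = 0.5 + 0.5 + 0.0 + 1.0 = 2.0
l1_loss = np.abs(y_true - y_pred).sum()
print(l1_loss)  # 2.0
```

So the two "interpretations" agree once you see that the loss is the norm of the residuals, not of the raw data.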

