
Undo L2 Normalization in sklearn python

Once I have normalized my data with an sklearn L2 normalizer and used it as training data: how do I turn the predicted output back into its "raw" shape?

In my example I used normalized housing prices as y and normalized living space as x. Each was used to fit its own X_ and Y_Normalizer.

The y_predict is therefore also in normalized form; how do I turn it back into the original raw currency scale?

Thank you.

If you are talking about sklearn.preprocessing.Normalizer, which normalizes matrix rows, unfortunately there is no way to go back to the original values unless you store the norms by hand somewhere.
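A minimal sketch of storing the norms by hand, using plain NumPy (the array values here are made up for illustration):

```python
import numpy as np

X = np.array([[5.0, 8.0, 12.0, 15.0],
              [1.0, 2.0, 2.0, 4.0]])

# Store each row's L2 norm before normalizing, so the scaling can be undone.
norms = np.linalg.norm(X, axis=1, keepdims=True)
X_norm = X / norms

# Undo: multiply each row by its stored norm.
X_back = X_norm * norms
```

After this round trip, X_back equals the original X.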

If you are using sklearn.preprocessing.StandardScaler, which normalizes columns, then you can obtain the values you need to go back from the attributes of that scaler (mean_ if with_mean is set to True, and std_).
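A small sketch of reversing a StandardScaler, assuming a recent sklearn version (which exposes the standard deviation as scale_; older versions called it std_). The price values are made up for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

y = np.array([[200000.0], [350000.0], [500000.0]])  # hypothetical raw prices

scaler = StandardScaler()  # with_mean=True by default
y_scaled = scaler.fit_transform(y)

# Undo manually from the stored attributes ...
y_back = y_scaled * scaler.scale_ + scaler.mean_

# ... or simply use the built-in inverse_transform.
y_back2 = scaler.inverse_transform(y_scaled)
```

Both y_back and y_back2 recover the original prices.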

If you use the normalizer in a pipeline, you wouldn't need to worry about this, because you wouldn't modify your data in place:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

# classifier example
from sklearn.svm import SVC

pipeline = make_pipeline(Normalizer(), SVC())
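A hypothetical usage sketch (the data here is invented): the pipeline applies the Normalizer internally on both fit and predict, so your X array stays in its original units the whole time:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

X = np.array([[5.0, 8.0], [12.0, 15.0], [1.0, 1.0], [9.0, 2.0]])
y = np.array([0, 1, 0, 1])

pipeline = make_pipeline(Normalizer(), SVC())
pipeline.fit(X, y)          # Normalizer.transform happens inside the pipeline
pred = pipeline.predict(X)  # X itself is never modified in place
```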

Thank you very much for your answer, I didn't know about the pipeline feature before.

For the case of L2 normalization, it turns out you can do it manually. Here is one example for a small array:

import numpy as np
from sklearn import preprocessing

x = np.array([5, 8, 12, 15])

# Using sklearn (Normalizer expects a 2D array, hence the reshape)
normalizer_x = preprocessing.Normalizer(norm="l2").fit(x.reshape(1, -1))
x_norm = normalizer_x.transform(x.reshape(1, -1))[0]
print(x_norm)

>array([ 0.23363466,  0.37381545,  0.56072318,  0.70090397])

Or do it manually, dividing by the square root of the sum of squares:

# Manually
w = np.sqrt(np.sum(x**2))
x_norm2 = x / w
print(x_norm2)

>array([ 0.23363466,  0.37381545,  0.56072318,  0.70090397])

So turning them "back" to the raw format is as simple as multiplying by w.
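Applied to the original question, a sketch of the full round trip (the prediction here is a stand-in, since no real model is fitted):

```python
import numpy as np

y = np.array([5.0, 8.0, 12.0, 15.0])  # hypothetical raw housing prices

w = np.sqrt(np.sum(y**2))  # store this weight when you normalize
y_norm = y / w

# A model trained on y_norm predicts in normalized units;
# multiplying by the stored w brings it back to raw currency.
y_pred_norm = y_norm       # stand-in for a model's normalized prediction
y_pred_raw = y_pred_norm * w
```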
