I noticed that the math for SVR states that SVR uses an L1 penalty, i.e. the epsilon-insensitive loss function. But the sklearn SVR documentation mentions an L2 penalty. I don't have much experience with SVR, so I thought someone in the community who does could shed some light on this.
Here is the snippet from the documentation:
C: float, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty.
Also check out this link: https://scikit-learn.org/stable/modules/svm.html#svm-regression, which says: "Here, we are penalizing samples whose prediction is at least ε away from their true target."
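For context, both terms show up as separate parameters in sklearn's `SVR`: `C` scales the regularization on the weights (the squared L2 part), while `epsilon` sets the width of the epsilon-insensitive tube on the prediction errors. A minimal sketch (the dataset and parameter values here are just illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVR

# Toy regression data: noisy sine curve (illustrative only).
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, size=(40, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

# C controls the strength of the squared-L2 regularization on the weights
# (larger C = weaker regularization); epsilon is the half-width of the
# tube inside which errors incur no loss at all.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X, y)

# Only points on or outside the epsilon tube become support vectors.
print(len(model.support_))
```

So the "L1 / epsilon-insensitive" part refers to the loss on the residuals, and the "squared L2 penalty" the docs mention refers to the regularization of the weight vector; they are two different pieces of the same objective.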