How to adjust parameters when training an SVM model
If I use RBF as the kernel function, then two parameters (C and g) have to be adjusted. I can search every parameter pair (Ci, gi) and select the best pair. Is there a better approach to find the best parameters?
The highlight from this blog on the choice of kernel width:
Pick, say, 1000 pairs (x, x') at random from your dataset, compute the distance of all such pairs, and take the median, the 0.1 quantile, and the 0.9 quantile. Now pick λ to be the inverse of any of these three numbers. With a little bit of cross-validation you will figure out which of the three is best. In most cases you won't need to search any further.
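The quoted heuristic can be written in a few lines of plain Python (the dataset and variable names here are my own; `statistics.quantiles(..., n=10)` returns the nine decile cut points, so index 0 is the 0.1 quantile and index 8 is the 0.9 quantile):

```python
# Median heuristic: sample 1000 random pairs, compute their Euclidean
# distances, take the 0.1 quantile, the median, and the 0.9 quantile,
# and use the inverse of each as a candidate kernel width.
import math
import random
import statistics

random.seed(0)
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(500)]  # toy data

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# Sample 1000 pairs of distinct points and compute their distances.
dists = []
for _ in range(1000):
    i, j = random.sample(range(len(X)), 2)
    dists.append(dist(X[i], X[j]))

deciles = statistics.quantiles(dists, n=10)
candidates = [deciles[0], statistics.median(dists), deciles[8]]
widths = [1.0 / c for c in candidates]  # three candidate kernel widths
print(widths)
```

Each of the three candidate widths is then scored by a quick cross-validation, which replaces a full grid search over g with at most three evaluations.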
And this post from Cross Validated analyzes why this method works well: essentially, it avoids kernel widths so extreme that the decision function changes for all datapoints at once, or for only a single datapoint.
Besides, you may search for "heuristic methods" for parameter choice in SVMs. For example, in M. Boardman et al.'s A Heuristic for Free Parameter Optimization with Support Vector Machines, the authors applied simulated annealing to make the parameter search more efficient than an exhaustive grid search.
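To illustrate the general idea (this is a generic simulated-annealing sketch over (log C, log g), not Boardman et al.'s exact procedure): instead of scoring every grid cell, a randomized local search perturbs the current parameter pair and occasionally accepts worse moves, with the acceptance probability shrinking as the temperature cools.

```python
# Generic simulated annealing over (log10 C, log10 gamma). The `score`
# argument stands in for a cross-validated SVM accuracy (higher is better).
import math
import random

def anneal(score, steps=200, temp0=1.0, seed=0):
    rng = random.Random(seed)
    state = (0.0, 0.0)                      # start at C = 1, gamma = 1
    best, best_score = state, score(state)
    current_score = best_score
    for t in range(steps):
        temp = temp0 * (1 - t / steps) + 1e-9   # cooling schedule
        cand = (state[0] + rng.gauss(0, 0.5),
                state[1] + rng.gauss(0, 0.5))   # perturb both parameters
        s = score(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(delta / temp).
        if s > current_score or rng.random() < math.exp((s - current_score) / temp):
            state, current_score = cand, s
            if s > best_score:
                best, best_score = cand, s
    return best, best_score

# Toy score with a single peak at (log10 C, log10 gamma) = (1, -2),
# i.e. C = 10, gamma = 0.01.
print(anneal(lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)))
```

With an n-by-m grid this replaces n*m evaluations by a fixed budget of `steps` evaluations, at the cost of a stochastic (not exhaustive) answer.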