
Training SVC from scikit-learn shows that using -h 0 may be faster?

I am training an SVC model on a large dataset, and since I have set verbose=True, it shows a Warning: using -h 0 may be faster.

I have two questions here:

  • What is this warning, and how can we set the libsvm option it mentions?
  • Can the cache size in the sklearn.svm.SVC parameter settings affect training speed? I have set it to cache_size=2000.

Your expert view is appreciated.

The -h parameter controls shrinking: whether to use the shrinking heuristics, 0 or 1 (default 1).

You can set it in the SVC constructor using the shrinking argument. Shrinking is a heuristic to speed up the optimization problem.
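As a minimal sketch (using a synthetic dataset for illustration), disabling the heuristic via the shrinking argument corresponds to libsvm's -h 0 that the warning suggests:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic data just for demonstration
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# shrinking=False is scikit-learn's equivalent of libsvm's -h 0
clf = SVC(kernel="rbf", shrinking=False)
clf.fit(X, y)
print(clf.shrinking)  # False
```

Whether -h 0 is actually faster depends on the data; libsvm emits this warning when it estimates the heuristic is not paying off, so it is worth timing both settings on your own dataset.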

Check the original paper and a similar question on shrinking.

Caching is a technique for reducing the computational time of the decomposition method, which is part of training. The cache size is controlled via the cache_size parameter.
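A short sketch of setting the kernel cache size (the value is in megabytes; the 2000 here mirrors the cache_size=2000 from the question, not a recommended value):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic data just for demonstration
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)

# cache_size is the kernel cache size in MB (default 200);
# a larger cache can reduce recomputation of kernel values on large datasets
clf = SVC(cache_size=2000)
clf.fit(X, y)
print(clf.cache_size)  # 2000
```

The cache only helps when kernel rows are evicted and recomputed, so increasing it beyond what the dataset needs yields no further speedup.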

I highly recommend reading the original libsvm paper, especially section 5.

Statement: The technical posts on this site follow the CC BY-SA 4.0 license. If you need to reproduce them, please cite this site's URL or the original source. For any questions, contact yoyou2525@163.com.
