
How can I increase the number of iterations used to optimize my cost function at each step when using partial_fit with scikit-learn's SGDClassifier?

When using partial_fit with scikit-learn's SGDClassifier, the number of iterations used for convergence of the cost function equals 1, as stated in the documentation:

Perform one epoch of stochastic gradient descent on given samples.
Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user.

How can I increase max_iter so that my cost function is optimized properly and not with just one iteration? Or, relating to the scikit-learn description, how can I handle "objective convergence" and "early stopping" for my classifier when using partial_fit?

You can simply use the fit() method instead of the partial_fit() method and increase max_iter by passing an integer value for the number of iterations you would like the SGDClassifier to run. The default is 1000 iterations.

Have a look at the documentation with the max_iter parameter: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
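A minimal sketch of that approach is shown below. max_iter, tol, and n_iter_ are real SGDClassifier parameters/attributes; the synthetic data set is only a placeholder for your own data.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    # placeholder data standing in for your own training set
    X, y = make_classification(n_samples=1000, random_state=0)

    # fit() runs up to max_iter epochs over the data and stops early once the
    # loss improves by less than tol for n_iter_no_change consecutive epochs
    clf = SGDClassifier(max_iter=5000, tol=1e-4, random_state=0)
    clf.fit(X, y)
    print(clf.n_iter_)  # number of epochs actually performed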

An alternative is simply executing the partial_fit() command again and again on the same data, e.g. on the current batch. Here is my code fragment, where I just wrapped a loop around the partial_fit() call:

import numpy as np

for i_iter in np.arange(iterPerBatch):
    clf.partial_fit(X_batch, y_batch, classes=[0, 1])

The variable iterPerBatch defines the number of iterations.
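To also handle "objective convergence" and "early stopping" yourself, as the documentation suggests, you can track the training loss inside that loop and break once the improvement becomes negligible. The following is a self-contained sketch of that idea; the stopping rule (comparing the hinge loss on the batch against a tolerance) is an illustration, not part of the scikit-learn API, and the tolerance value and variable names are arbitrary.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.metrics import hinge_loss

    # placeholder batch standing in for your own data
    X_batch, y_batch = make_classification(n_samples=200, random_state=0)

    clf = SGDClassifier(random_state=0)
    iterPerBatch = 100   # maximum number of passes over the batch
    tol = 1e-4           # minimum loss improvement required to keep iterating
    prev_loss = np.inf

    for i_iter in range(iterPerBatch):
        clf.partial_fit(X_batch, y_batch, classes=[0, 1])
        # objective on the current batch (hinge loss is SGDClassifier's default loss)
        loss = hinge_loss(y_batch, clf.decision_function(X_batch))
        if prev_loss - loss < tol:
            break  # early stopping: the objective has stopped improving
        prev_loss = loss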
