Sklearn SGDClassifier partial fit
I'm trying to use SGD to classify a large dataset. As the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. I have selected a sample of the dataset (100,000 rows) that fits into memory to test fit vs. partial_fit:
from sklearn.linear_model import SGDClassifier
import numpy

def batches(l, n):
    # Yield successive slices of l of size n
    for i in range(0, len(l), n):  # xrange in the original Python 2 code
        yield l[i:i + n]

clf1 = SGDClassifier(shuffle=True, loss='log')  # 'log_loss' in scikit-learn >= 1.1
clf1.fit(X, Y)

clf2 = SGDClassifier(shuffle=True, loss='log')
n_iter = 60
for n in range(n_iter):
    for batch in batches(range(len(X)), 10000):
        clf2.partial_fit(X[batch[0]:batch[-1] + 1],
                         Y[batch[0]:batch[-1] + 1],
                         classes=numpy.unique(Y))
I then test both classifiers with an identical test set. In the first case I get an accuracy of 100%. As I understand it, SGD by default makes 5 passes over the training data (n_iter = 5).

In the second case, I have to make 60 passes over the data to reach the same accuracy.

Why this difference (5 vs. 60)? Or am I doing something wrong?
I have finally found the answer. You need to shuffle the training data between iterations, because setting shuffle=True when instantiating the model will NOT shuffle the data when using partial_fit (it only applies to fit). Note: it would have been helpful to find this information on the sklearn.linear_model.SGDClassifier page.
The amended code reads as follows:
from sklearn.linear_model import SGDClassifier
import numpy
import random

clf2 = SGDClassifier(loss='log')  # shuffle=True is useless here
shuffledRange = list(range(len(X)))  # list() needed in Python 3
n_iter = 5
for n in range(n_iter):
    random.shuffle(shuffledRange)
    shuffledX = [X[i] for i in shuffledRange]
    shuffledY = [Y[i] for i in shuffledRange]
    for batch in batches(range(len(shuffledX)), 10000):
        clf2.partial_fit(shuffledX[batch[0]:batch[-1] + 1],
                         shuffledY[batch[0]:batch[-1] + 1],
                         classes=numpy.unique(Y))