
scikit-learn NearestNeighbors .kneighbors() on tfidf gives ValueError: UPDATEIFCOPY base is read-only

I am using scikit-learn's NearestNeighbors to find nearest neighbors, with TF-IDF features computed on the People Wikipedia data.

In my .kneighbors() method call

res = neigh.kneighbors(obama_tfidf, return_distance=False)

the multiprocessing backend threw an exception:

ValueError: UPDATEIFCOPY base is read-only

I have uploaded my complete code and sample data (80 MB) to my GitHub repository for reference.
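For reference, the relevant setup looks roughly like this (a sketch reconstructed from the traceback below; wiki_texts and the n_jobs=-1 setting are assumptions, not the exact uploaded code):

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neighbors import NearestNeighbors

# Vectorize the wiki articles: raw term counts, then TF-IDF weighting.
count_vectorizer = CountVectorizer()
word_counts = count_vectorizer.fit_transform(wiki_texts)  # wiki_texts: list of article strings
tfidf_transformer = TfidfTransformer()
tfidf_matrix = tfidf_transformer.fit_transform(word_counts)

# Fit the neighbor index on the TF-IDF matrix; n_jobs=-1 uses all CPU cores.
neigh = NearestNeighbors(n_jobs=-1).fit(tfidf_matrix)

# Transform the query the same way and look up its nearest neighbors.
obama_word_counts = count_vectorizer.transform(['obama'])
obama_tfidf = tfidf_transformer.transform(obama_word_counts)
res = neigh.kneighbors(obama_tfidf, return_distance=False)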

Here is a part of the error listing:

---------------------------------------------------------------------------
JoblibValueError                          Traceback (most recent call last)
<ipython-input-12-dbcbed49b042> in <module>()
      1 obama_word_counts = count_vectorizer.transform(['obama'])
      2 obama_tfidf = tfidf_transformer.transform(obama_word_counts)
----> 3 res = neigh.kneighbors(obama_tfidf, return_distance=False)
      4 print res

/usr/local/lib/python2.7/dist-packages/sklearn/neighbors/base.pyc in kneighbors(self, X, n_neighbors, return_distance)
    355             if self.effective_metric_ == 'euclidean':
    356                 dist = pairwise_distances(X, self._fit_X, 'euclidean',
--> 357                                           n_jobs=n_jobs, squared=True)
    358             else:
    359                 dist = pairwise_distances(

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/pairwise.pyc in pairwise_distances(X, Y, metric, n_jobs, **kwds)
   1245         func = partial(distance.cdist, metric=metric, **kwds)
   1246 
-> 1247     return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
   1248 
   1249 

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/pairwise.pyc in _parallel_pairwise(X, Y, func, n_jobs, **kwds)
   1094     ret = Parallel(n_jobs=n_jobs, verbose=0)(
   1095         fd(X, Y[s], **kwds)
-> 1096         for s in gen_even_slices(Y.shape[0], n_jobs))
   1097 
   1098     return np.hstack(ret)

/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
    787                 # consumption.
    788                 self._iterating = False
--> 789             self.retrieve()
    790             # Make sure that we get a last message telling us we are done
    791             elapsed_time = time.time() - self._start_time

/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.pyc in retrieve(self)
    738                     exception = exception_type(report)
    739 
--> 740                     raise exception
    741 
    742     def __call__(self, iterable):

JoblibValueError: JoblibValueError

I can't paste the entire multiprocessing exception, as it exceeds the Stack Overflow posting limit.

What am I missing here?

When n_jobs is equal to -1, the number of jobs is set to the number of CPU cores, as mentioned in the reference documentation.
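For example, you can check how many jobs n_jobs=-1 would translate to on your machine with the standard library (a quick check, not part of the original code):

import multiprocessing

# n_jobs=-1 in scikit-learn means one job per CPU core.
print(multiprocessing.cpu_count())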

The error happens when the sklearn NearestNeighbors code calls _parallel_pairwise(), which then tries to split the computation into even slices for the parallel workers.

Try setting n_jobs to an even number, one that is of course less than the number of CPU cores.


As you mentioned already, you can also run this with n_jobs equal to 1, which does not parallelize the code and therefore does not expose the error; see the sketch below.
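A minimal sketch of that workaround, assuming a pre-computed TF-IDF matrix for the corpus (the names tfidf_matrix and obama_tfidf, and n_neighbors=10, are placeholders):

from sklearn.neighbors import NearestNeighbors

# Use an explicit, small n_jobs instead of -1; n_jobs=1 skips joblib's
# multiprocessing entirely, so the read-only-array error is never triggered.
neigh = NearestNeighbors(n_neighbors=10, n_jobs=1)
neigh.fit(tfidf_matrix)

res = neigh.kneighbors(obama_tfidf, return_distance=False)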
