
Difference between random draws from scipy.stats…rvs and numpy.random

It seems that, for the same distribution, drawing random samples from numpy.random is faster than drawing them from scipy.stats.<distribution>.rvs . I was wondering what causes the speed difference between the two?

scipy.stats.uniform actually uses numpy. Here is the corresponding function in scipy.stats (mtrand is an alias for numpy.random):

class uniform_gen(rv_continuous):
    def _rvs(self):
        return mtrand.uniform(0.0,1.0,self._size)
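Since rvs delegates to numpy.random here, both APIs draw from the same underlying generator. A quick sketch of how to check this (assuming a SciPy version whose rvs uses numpy's legacy global random state by default, so seeding numpy.random also affects scipy.stats):

```python
import numpy as np
from scipy import stats

# Seed numpy's global legacy generator, then draw via scipy.stats
np.random.seed(12345)
scipy_draws = stats.uniform.rvs(size=5)

# Re-seed and draw directly from numpy.random
np.random.seed(12345)
numpy_draws = np.random.uniform(0.0, 1.0, size=5)

# Both sequences should come from the same Mersenne Twister stream
print(np.allclose(scipy_draws, numpy_draws))
```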

scipy.stats has a bit of overhead for error checking and for making the interface more flexible. The speed difference should be minimal as long as you don't call uniform.rvs in a loop for each draw. Instead, you can get all random draws at once, for example 10 million:

>>> rvs = stats.uniform.rvs(size=(10000, 1000))
>>> rvs.shape
(10000, 1000)

Here is the long answer, that I wrote a while ago:

The basic random numbers in scipy/numpy are created by the Mersenne Twister PRNG in numpy.random. The random number generators for the distributions in numpy.random are written in Cython/Pyrex and are pretty fast.

scipy.stats doesn't have its own random number generator; random numbers are obtained in one of three ways:

  • directly from numpy.random, e.g. normal, t, ...: pretty fast

  • by transformation of other random numbers that are available in numpy.random; this is also pretty fast because it operates on entire arrays of numbers

  • generic: the only generic method is to transform uniform random numbers with the ppf (inverse cdf). This is relatively fast if there is an explicit expression for the ppf, but it can be very slow if the ppf has to be computed indirectly. For example, if only the pdf is defined, then the cdf is obtained through numerical integration and the ppf through an equation solver, so a few distributions are very slow.
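The generic ppf route in the last bullet is just inverse transform sampling, and can be sketched by hand (a sketch using scipy.stats.norm, whose ppf has an explicit fast implementation):

```python
import numpy as np
from scipy import stats

np.random.seed(0)

# Inverse transform sampling: push uniform draws through the inverse CDF (ppf)
u = np.random.uniform(size=100_000)
samples = stats.norm.ppf(u)  # standard normal samples via the ppf

# The resulting sample should look like N(0, 1)
print(samples.mean(), samples.std())
```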

I ran into this today and just wanted to add some timing details to this question. I saw what joon mentioned: in particular, random numbers from the normal distribution were generated much more quickly with numpy than with rvs from scipy.stats . As user333700 mentioned, there is some overhead with rvs, but if you are generating an array of random values then that gap closes. Here is a Jupyter timing example:

from scipy.stats import norm
import numpy as np

n = norm(0, 1)
%timeit -n 1000 n.rvs(1)[0]
%timeit -n 1000 np.random.normal(0,1)

%timeit -n 1000 a = n.rvs(1000)
%timeit -n 1000 a = [np.random.normal(0,1) for i in range(0, 1000)]
%timeit -n 1000 a = np.random.randn(1000)

On my run, with numpy version 1.11.1 and scipy 0.17.0, this outputs:

1000 loops, best of 3: 46.8 µs per loop
1000 loops, best of 3: 492 ns per loop
1000 loops, best of 3: 115 µs per loop
1000 loops, best of 3: 343 µs per loop
1000 loops, best of 3: 61.9 µs per loop

So just generating one random sample with rvs was almost 100x slower than using numpy directly. However, if you are generating an array of values, then the gap closes (115 µs vs. 61.9 µs).

If you can avoid it, don't call rvs in a loop to get one random value at a time.
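In other words, hoist the draw out of the loop and ask rvs for the whole array in one call. A minimal before/after sketch (the size of 1000 is arbitrary):

```python
from scipy.stats import norm

n = norm(0, 1)

# Slow: pays the scipy.stats call overhead once per draw
slow = [n.rvs(1)[0] for _ in range(1000)]

# Fast: pays the overhead once for the whole batch
fast = n.rvs(size=1000)

print(len(slow), fast.shape)  # both yield 1000 samples
```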
