
Efficiency with very large numpy arrays

I'm working with some very large arrays. One issue, of course, is running out of RAM, but even before that my code runs so slowly that, even if I had infinite RAM, it would still take far too long. I'll give a bit of my code to show what I'm trying to do:

import numpy as np

# samplez is a 3 million element 1-D array
# zfit is a 10,000 x 500 2-D array

b = np.arange(len(zfit))

for x in samplez:
    a = x-zfit
    mask = np.ma.masked_array(a)
    mask[a <= 0] = np.ma.masked
    index = mask.argmin(axis=1)
    # These past 4 lines give me an index array of the smallest positive number
    # in x - zfit

    d = zfit[b,index]
    e = zfit[b,index+1]
    f = (x-d)/(e-d)
    # f is the calculation I am after

    if x == samplez[0]:
       g = f
       index_stack = index
    else:
       g = np.vstack((g,f))
       index_stack = np.vstack((index_stack,index))

I need to use g and index_stack, each of which is a 3,000,000 x 10,000 2-D array, in a further calculation. Each iteration of this loop takes almost 1 second, so about 3 million seconds in total, which is far too long.

Is there anything I can do so that this calculation runs much faster? I've tried to think of a way to do without this for loop, but the only way I can imagine is making 3 million copies of zfit, which is infeasible.

And is there some way I can work with these arrays without keeping everything in RAM? I'm a beginner, and everything I've found by searching is either irrelevant or something I can't understand. Thanks in advance.

It is good to know that the smallest positive number will never show up at the end of a row, so index + 1 never runs past the last column.

In samplez there are 1 million unique values, but in zfit each row can have at most 500 unique values, and the entire zfit holds at most 5 million values. The algorithm can be greatly sped up if the number of 'find the smallest positive x - zfit' searches can be greatly reduced. Doing all ~1.5e13 element comparisons (3e6 samples x 10,000 rows x 500 columns) is probably overkill, and careful planning could get rid of a large proportion of them. That will depend heavily on your actual underlying mathematics.
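
For illustration only (this is not what the code below does): if each row of zfit is sorted in ascending order, as the test data further down is, then "the largest zfit value still below x" can be found by binary search, about log2(500) ≈ 9 comparisons per lookup instead of 500. A rough sketch with np.searchsorted, assuming sorted rows and that every row contains at least one value smaller than each x:

import numpy as np

def per_row_indices(samplez, zfit_row):
    # side='left' returns, for every x in samplez, the number of entries in
    # zfit_row that are strictly less than x; subtracting 1 gives the column
    # of the largest zfit value below x -- the same index the masked argmin finds
    j = np.searchsorted(zfit_row, samplez, side='left')
    return j - 1

# per row of zfit (10,000 binary-search calls in total instead of a
# 3-million-iteration Python loop):
# index_row = per_row_indices(samplez, zfit[0])
# d = zfit[0, index_row]
# e = zfit[0, index_row + 1]
# f_row = (samplez - d) / (e - d)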

Without knowing that math, there are still some small things that can be done. First, e - d depends only on zfit, not on x, so those differences can be computed once outside the loop. Second, the loop itself can be handed to map. These two small fixes, on my machine, give about a 22% speed-up.

import numpy as np

def function_map(samplez, zfit):
    b = np.arange(len(zfit))            # row indices, as in the original loop
    diff = zfit[:, 1:] - zfit[:, :-1]   # e - d, computed once outside the loop
    def _fuc1(x):
        a = x - zfit
        mask = np.ma.masked_array(a)
        mask[a <= 0] = np.ma.masked
        index = mask.argmin(axis=1)
        d = zfit[b, index]
        f = (x - d) / diff[b, index]  # constraint: smallest value never at the very end
        return (index, f)
    result = list(map(_fuc1, samplez))  # list() so the result can be iterated twice below
    return (np.array([item[1] for item in result]),
            np.array([item[0] for item in result]))
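
With this version the whole original loop becomes a single call, roughly g, index_stack = function_map(samplez, zfit): the f values come back first and the index array second, matching the return statement above.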

Next: masked_array can be avoided completely, which should bring a significant improvement. samplez also needs to be sorted in ascending order, because the in-place masking of _zfit below is cumulative across iterations.

>>> import numpy as np
>>> x1 = np.arange(50)
>>> x2 = np.random.random(size=(20, 10)) * 120
>>> x2 = np.sort(x2, axis=1)  # rows sorted ascending; the *120 scaling keeps the last value of each row above max(x1)
>>> x3 = x2 * 1
>>> f1 = lambda: function_map2(x1, x3)
>>> f0 = lambda: function_map(x1, x2)
>>> def function_map2(samplez, zfit):
    b = np.arange(len(zfit))
    _diff = np.diff(zfit, axis=1)   # e - d for every adjacent pair of columns
    _zfit = zfit * 1                # working copy that is masked in place
    def _fuc1(x):
        # samplez is sorted ascending, so values already below x can never be
        # the answer for this or any later x; overwrite them with +inf
        _zfit[_zfit < x] = np.inf
        # argmin now lands on the smallest zfit >= x; one column to the left
        # is the largest zfit < x, i.e. the index the original loop computes
        index = np.argmin(_zfit, axis=1) - 1
        d = zfit[b, index]
        f = (x - d) / _diff[b, index]  # constraint: smallest value never at the very end
        return (index, f)
    result = list(map(_fuc1, samplez))
    return (np.array([item[1] for item in result]),
            np.array([item[0] for item in result]))

>>> import timeit
>>> t1=timeit.Timer('f1()', 'from __main__ import f1')
>>> t0=timeit.Timer('f0()', 'from __main__ import f0')
>>> t0.timeit(5)
0.09083795547485352
>>> t1.timeit(5)
0.05301499366760254
>>> t0.timeit(50)
0.8838210105895996
>>> t1.timeit(50)
0.5063929557800293
>>> t0.timeit(500)
8.900799036026001
>>> t1.timeit(500)
4.614129018783569

So, that is another 50% speed-up.

masked_array is avoided, and that saves some RAM. I can't think of much else to reduce RAM usage, but it may be necessary to process samplez in parts, as sketched below. Also, depending on the data and the required precision, using float16 or float32 instead of the default float64 can save you a lot of RAM.
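
Below is a minimal sketch of what "processing samplez in parts" could look like, assuming the function_map defined above and writing the two outputs into np.memmap arrays so that g and index_stack live on disk rather than in RAM. The file names, chunk size and dtypes are arbitrary choices, not requirements; note that even at float32 the f output is still on the order of 3e6 x 10,000 x 4 bytes ≈ 120 GB.

import numpy as np

n_samples = len(samplez)
n_rows = len(zfit)
chunk = 10000                      # tune to the RAM you actually have

# hypothetical output files; float32/int16 chosen because the column index
# never exceeds 499 and full float64 precision may not be needed
g = np.memmap('g.dat', dtype=np.float32, mode='w+',
              shape=(n_samples, n_rows))
index_stack = np.memmap('index_stack.dat', dtype=np.int16, mode='w+',
                        shape=(n_samples, n_rows))

for start in range(0, n_samples, chunk):
    stop = min(start + chunk, n_samples)
    f_part, idx_part = function_map(samplez[start:stop], zfit)
    g[start:stop] = f_part          # cast down to float32 on assignment
    index_stack[start:stop] = idx_part
    g.flush()                       # push dirty pages to disk between chunks
    index_stack.flush()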
