
How to enforce scipy.optimize.fmin_l_bfgs_b to use 'dtype=float32'

I am trying to optimize functions with GPU computation in Python, so I prefer to store all my data as ndarrays with dtype=float32.

When I use scipy.optimize.fmin_l_bfgs_b, I notice that the optimizer always passes a float64 parameter (on my 64-bit machine) to my objective and gradient functions, even when I pass a float32 ndarray as the initial search point x0. This differs from the CG optimizer scipy.optimize.fmin_cg: when I pass a float32 array as x0 there, the optimizer uses float32 in all subsequent objective/gradient invocations.
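A minimal sketch of what I mean (the objective and gradient here are just a toy quadratic I made up to print the dtype each optimizer passes in):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b, fmin_cg

def f(x):
    # Report the dtype the optimizer hands to the objective.
    print("objective received", x.dtype)
    return float(np.sum(x ** 2))

def grad(x):
    # Report the dtype the optimizer hands to the gradient.
    print("gradient received", x.dtype)
    return 2.0 * x

x0 = np.zeros(3, dtype=np.float32)

# On my machine this prints float64, even though x0 is float32.
fmin_l_bfgs_b(f, x0, fprime=grad)

# This keeps the float32 dtype of x0.
fmin_cg(f, x0, fprime=grad)
```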

So my question is: can I force scipy.optimize.fmin_l_bfgs_b to optimize over float32 parameters, as scipy.optimize.fmin_cg does?

Thanks!

I am not sure you can ever do it. fmin_l_bfgs_b is provided not by pure Python code, but by an extension (a wrapper around FORTRAN code). On Win32/64 platforms it can be found at \scipy\optimize\_lbfgsb.pyd. What you want may only be possible if you compile the extension differently or modify the FORTRAN code. If you check that FORTRAN code, it uses double precision all over the place, which is basically float64. I am not sure that simply changing it all to single precision would do the job.
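If the goal is only to keep the GPU part of the work in single precision, one workaround (not a way to change the optimizer itself, just a sketch under that assumption) is to cast the float64 vector to float32 at the boundary of your callbacks and cast the results back before returning them. The `gpu_objective` / `gpu_gradient` names below are hypothetical stand-ins for your actual GPU code:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Hypothetical float32 GPU kernels -- replace with your real GPU computation.
def gpu_objective(x32):
    return np.float32(np.sum(x32 ** 2))

def gpu_gradient(x32):
    return 2.0 * x32  # stays float32

def f(x):
    x32 = np.asarray(x, dtype=np.float32)    # cast the float64 vector from L-BFGS-B
    return float(gpu_objective(x32))          # hand a plain float back to the optimizer

def grad(x):
    x32 = np.asarray(x, dtype=np.float32)
    return np.asarray(gpu_gradient(x32), dtype=np.float64)  # cast back for the FORTRAN side

x0 = np.zeros(100, dtype=np.float32)
x_opt, f_opt, info = fmin_l_bfgs_b(f, x0, fprime=grad)
```

The optimizer still works in float64 internally, so this only saves precision/memory inside the GPU kernels, not in the L-BFGS-B bookkeeping.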

Among the other optimization methods, cobyla is also implemented in FORTRAN, as are Powell's methods.
