
Optimizing a Nested loop that contains 8 for loops to minimize a function

I have Python code that maximizes a function of 8 parameters using nested for loops. It takes approximately 16 minutes to execute, which is far too long because I have to run the optimization numerous times for the problem I am trying to solve.

I tried:

1.) Replacing the for loops with list comprehensions, but there was no change in performance.

2.) Using Jug to parallelize, but the entire system freezes and restarts.

My questions:

1.) Is there any other way to parallelize a nested for loop using the multiprocessing module?

2.) Is there any way I can replace the nested loop with a completely different method to maximize the function?

Code Snippet: 
def SvetMaxmization(): # maximization function
    Max = 0
    res = 1.0 # step size; execution time grows exponentially if the value is reduced
    for a1 in np.arange(0, pi, res):
        for a2 in np.arange(0, pi, res):
            for b1 in np.arange(0, pi, res):
                for b2 in np.arange(0, pi, res):
                    for c1 in np.arange(0, pi, res):
                        for c2 in np.arange(0, pi, res):
                            for d1 in np.arange(0, pi, res):
                                for d2 in np.arange(0, pi, res):
                                    present = Svet(a1, a2, b1, b2, c1, c2, d1, d2) # function to be maximized
                                    if present > Max:
                                        Max = present
    return Max # the original snippet never returned the maximum

The Svet() function:

def Svet(a1,a2,b1,b2,c1,c2,d1,d2):
    Rho = Desnitystate(3,1) # Rho is a matrix of dimension 4x4
    CHSH1 = tensor(S(a1),S(b1)) + tensor(S(a1),S(b2)) + tensor(S(a2),S(b1)) - tensor(S(a2),S(b2)) # S returns a matrix of dimension 2x2
    CHSH2 = tensor(S(a1),S(b1)) + tensor(S(a1),S(b2)) + tensor(S(a2),S(b1)) - tensor(S(a2),S(b2))
    SVet3x1 = tensor(CHSH1, S(c2)) + tensor(CHSH2, S(c1))
    SVet3x2 = tensor(CHSH2, S(c1)) + tensor(CHSH1, S(c2))                   
    SVet4x1 = tensor(SVet3x1, S(d2)) + tensor(SVet3x2, S(d1))           
    Svd = abs((SVet4x1*Rho).tr())

    return Svd

System details: Intel Core i5 clocked at 3.2 GHz

Thanks for your time!!

It's hard to give a single "right" answer, as it will depend a lot on the behaviour of your cost function.

But, considering that you are now doing a grid search over the parameter space (basically brute-forcing the solution), I think there are some things worth trying.

  1. See if you can use a more sophisticated optimization algorithm. See the scipy.optimize module, e.g. check whether something like

        x0 = ... # initial guess for the 8 angles
        bounds = [(0, np.pi) for _ in range(len(x0))]
        # minimize() minimizes, so negate Svet to maximize it; Svet takes
        # eight separate arguments, so unpack the parameter vector
        result = minimize(lambda x: -Svet(*x), x0, bounds=bounds)

     can solve the problem.
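     Since a CHSH-type expression is periodic in each angle, the landscape may be multimodal and a local method can get stuck. A global optimizer from the same module, scipy.optimize.differential_evolution, is then worth a try. This is only a sketch: it uses a simple surrogate cost so it is self-contained; for the real problem you would swap in `lambda x: -Svet(*x)`.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Surrogate cost standing in for the real (negated) Svet; its true
# maximum is 8, attained when every angle equals pi/2.
def neg_cost(x):
    return -np.sum(np.sin(x))

bounds = [(0, np.pi)] * 8  # the same eight angle ranges as the grid search
result = differential_evolution(neg_cost, bounds, seed=0)
best_value = -result.fun   # the maximized value
best_angles = result.x     # the angles achieving it
```

     Unlike the grid search, the cost here grows with the number of function evaluations rather than exponentially with the step size.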

  2. If the cost function is so badly behaved that none of those methods work, your only hope is probably to speed up the execution of the cost function itself. In my own experience, I would try the following:

    1. numba is a good first alternative because it is very simple to try: it does not require you to change anything in your current code. It doesn't always speed up your code, though.

    2. Rewrite the cost function with Cython. This requires some work on your part, but will likely give a large boost in speed. Again, this depends on the nature of your cost function.

    3. Rewrite using e.g. C, C++, or any other "fast" language.
