
Vectorising / Parallelising GEKKO equations

Using Python's GEKKO library and following the example from their original paper that solves the wave equation, one needs to create a whole lot of equations.

m.Equations([u[i].dt() == v[i] for i in range(npx)])
m.Equations([v[i].dt() == c**2*(u[mod(i+1,npx)]-2.0*u[mod(i,npx)]+u[mod(i-1,npx)])/dx**2 for i in arange(0,npx,1)])

(I already reduced it to two lines via mod, incorporating the boundary conditions and letting i run from 0 to 99, but it is basically still the same as in the example.)

It still uses a for loop, i.e. v[i] for i in range(npx). Can one avoid this and thus make the code more readable, or make the simulation faster, by parallelising / vectorising it like one can with numpy arrays?

Of course for this problem the computation only takes <18s on my laptop, but bigger problems might be very slow.

There is something about arrays in the documentation, but I don't understand it and I don't know if this is even what I am looking for.
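
From what I can gather, it refers to m.Array, which builds a numpy array of Gekko objects, so at least the variable construction loses its explicit loops. A rough, possibly wrong sketch of what that would look like for the setup above (the equations still seem to need a list comprehension, since m.Equations takes a list; np.roll on the object array replaces the mod() indexing):

from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
npt = 200; npx = 100
m.time = np.linspace(0, 1, npt)
xpos = np.linspace(0, 2*np.pi, npx)
dx = xpos[1] - xpos[0]
c = 10.0

u = m.Array(m.Var, npx)   # numpy arrays of Gekko Var objects
v = m.Array(m.Var, npx)
for i in range(npx):      # initial conditions, set element-wise
    u[i].value = np.cos(xpos[i])
    v[i].value = np.sin(2*xpos[i])

# np.roll works on object arrays, so the periodic stencil needs no mod()
up = np.roll(u, -1); um = np.roll(u, 1)
m.Equations([u[i].dt() == v[i] for i in range(npx)])
m.Equations([v[i].dt() == c**2*(up[i] - 2.0*u[i] + um[i])/dx**2 for i in range(npx)])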

I attached an MWE using the above code.

If you want to improve the speed, try a sequential solution with m.options.IMODE=7. For simulation, it is generally much faster than IMODE=4 (simultaneous solution).

from gekko import GEKKO
import numpy as np
import time

start = time.time()
pi = np.pi

# Initialise model
m = GEKKO(remote=False)

# Discretisation (time and space)
npt = 200; npx = 100
m.time = np.linspace(0, 1, npt)
xpos = np.linspace(0, 2*pi, npx)
dx = xpos[1] - xpos[0]

# Define variables
c = m.Const(value=10)
u = [m.Var(value=np.cos(xpos[i])) for i in range(npx)]
v = [m.Var(value=np.sin(2*xpos[i])) for i in range(npx)]

# Automatic discretisation in time and manual discretisation in space
# (np.mod wraps the index for the periodic boundary conditions)
m.Equations([u[i].dt() == v[i] for i in range(npx)])
m.Equations([v[i].dt() == c**2*(u[np.mod(i+1,npx)] - 2.0*u[np.mod(i,npx)]
                                + u[np.mod(i-1,npx)])/dx**2
             for i in range(npx)])

# Set options
m.options.IMODE = 7    # sequential simulation
m.options.SOLVER = 1   # APOPT
m.options.NODES = 3
m.solve(disp=False, GUI=False)
print('Solve Time: ' + str(m.options.SOLVETIME))
print('Total Time: ' + str(time.time()-start))

With IMODE=4 it takes more time to set up the problem but less time to solve it.

Solve Time: 2.7666
Total Time: 25.547

With IMODE=7 it takes less time to set up the problem but more time to solve it.

Solve Time: 8.2271
Total Time: 11.819

As you get to larger problems, IMODE=7 will take much less time than IMODE=4 for simulation. As for parallel options, you may be able to construct your model in parallel, but the solution of that model does not have many options for a much faster solution besides IMODE=7. You can create a parallel Gekko application in the following ways:

  • Use parallel linear solvers in IPOPT such as ma77, ma97, and others (see the solver_options sketch after this list). This is typically only a 20-60% improvement in speed from some of the testing that I've done on large-scale problems. These options aren't available in the publicly distributed IPOPT version because the solvers require a license. The linear solver MUMPS is distributed with Gekko but does not include parallel support (although this is potentially coming later). The issue is that the solver is only part of the solution: even if the solver were infinitely fast, the automatic differentiation, objective evaluation, and equation evaluation still take about 50% of the CPU time.

  • Run independently. This is often called "massively parallel" because the work can be split into separate threads and the code combines the results when all of the sub-processes are complete. The instructional material on multi-threading and the question "How to do parallel Python Gekko?" show how to compute in parallel (see the threading sketch after this list). Your problem is not set up for multi-threading because the equations are solved together.
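
A minimal sketch of the first option, assuming an IPOPT build linked against the licensed HSL solvers (not the publicly distributed version); linear_solver is a standard IPOPT option and solver_options is Gekko's mechanism for passing solver-specific options:

from gekko import GEKKO

m = GEKKO(remote=False)
m.options.SOLVER = 3                        # IPOPT
# ma97 / ma77 require an HSL-enabled IPOPT build; MUMPS is the default otherwise
m.solver_options = ['linear_solver ma97']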
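
And a minimal sketch of the second option, assuming a problem that splits into independent cases (a hypothetical toy sweep here, not the wave equation above, whose coupled equations must be solved together); each thread builds and solves its own local model:

from gekko import GEKKO
import threading

def solve_case(k, results):
    # each worker owns its own independent model
    m = GEKKO(remote=False)
    x = m.Var(value=1.0)
    m.Equation(x**2 == k)   # toy equation parameterised by k
    m.solve(disp=False)
    results[k] = x.value[0]

results = {}
threads = [threading.Thread(target=solve_case, args=(k, results)) for k in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # one root of x**2 = k per case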
