
Using finite difference in Python

I am trying to use Python with NumPy to solve a basic equation using the finite difference method. The code gives me the correct first value for a, i.e. it gives me a[1]; however, every value after that is just zero. I don't know what I'm doing wrong, because it obviously works for the first value, so how do I fix this? Any ideas would be very helpful.

from numpy import *
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from scipy.integrate import odeint

def solver(omega_m, dt):
    #t_0, H_0, a_0, dt, n, T; always the same; omega's change
    t_0 = 0.0004
    a_0 = 0.001
    H_0 = 1./13.7

    T = 13.7
    dt = float(dt)
    n = int(round((T - t_0)/dt))
    x = zeros(n+1)
    t = linspace(t_0, T, n+1)

    x[0] = a_0

    for i in range (0, n):
        x[i+1] = x[i] + (H_0 * ((omega_m)**(1./2.)) * ((x[i])**(-1./2.)) * dt)
        return x, t

a, t = solver(omega_m =1, dt=0.001)
print a, t

Your function returns during the first loop iteration because the return statement is indented inside the for loop, so only x[1] is ever computed and the rest of the array keeps its initial zeros. Dedent the return statement so the loop runs to completion before the function returns:

for i in range (0, n):
    x[i+1] = x[i] + (H_0 * ((omega_m)**(1./2.)) * ((x[i])**(-1./2.)) * dt)
return x, t
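Putting that fix together, a corrected version of the whole function might look like the sketch below (keeping the question's constants and forward-Euler update, but with Python 3 `print` syntax and only the imports actually used):

```python
import numpy as np

def solver(omega_m, dt):
    # Constants from the question: initial time, initial value,
    # Hubble-like constant, and total integration time.
    t_0 = 0.0004
    a_0 = 0.001
    H_0 = 1.0 / 13.7
    T = 13.7

    dt = float(dt)
    n = int(round((T - t_0) / dt))
    x = np.zeros(n + 1)
    t = np.linspace(t_0, T, n + 1)
    x[0] = a_0

    # Forward-Euler step for dx/dt = H_0 * sqrt(omega_m) * x**(-1/2)
    for i in range(n):
        x[i + 1] = x[i] + H_0 * omega_m**0.5 * x[i]**-0.5 * dt

    # return AFTER the loop, not inside it
    return x, t

a, t = solver(omega_m=1, dt=0.001)
print(a, t)
```

Since every increment is positive, the fixed version produces a strictly increasing array instead of trailing zeros.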
