
Octave dfdp.m for JAVA

I want to implement Levenberg-Marquardt fitting in Java and found Apache Commons Math suitable. Since I want to fit a function for which I don't have the derivative to calculate the gradient or Jacobian, I need something like dfdp.m from GNU Octave to calculate numerical derivatives. Has someone done this already?

I did it myself. In case someone else needs it, here is the approach.

dfdp.m code

% dfdp.m (numerical partial derivatives, from the GNU Octave optim package);
% typically called as prt = dfdp (x, f, p, dp, func)
m=size(x,1); if (m==1), m=size(x,2); end  %# PAK: in case #cols > #rows
n=length(p);      %dimensions
ps=p; prt=zeros(m,n);del=zeros(n,1);       % initialise Jacobian to Zero
for j=1:n
      del(j)=dp(j) .*p(j);    %cal delx=fract(dp)*param value(p)
      if p(j)==0
           del(j)=dp(j);     %if param=0 delx=fraction
      end
      p(j)=ps(j) + del(j);
      if del(j)~=0, f1=feval(func,x,p);   %FJ ~= not equal (!=)  ...> p is now (p + dp*p)
           if dp(j) < 0, prt(:,j)=(f1-f)./del(j);
           else
                p(j)=ps(j) - del(j);  %FJ ...> p is now (p - dp*p)
                prt(:,j)=(f1-feval(func,x,p))./(2 .*del(j)); %FJ 2 steps from (ps + del) to (ps - del)
           end
      end
      p(j)=ps(j);     %restore p(j)
end
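
Written out, what this snippet computes: for each parameter j the code perturbs p(j) by del(j) = dp(j)*p(j) (or by dp(j) when p(j) is zero) and fills column j of the Jacobian with a finite difference:

prt(:,j) ≈ (f(p + del(j)*e_j) - f(p - del(j)*e_j)) ./ (2*del(j))   % central difference, dp(j) > 0
prt(:,j) ≈ (f(p + del(j)*e_j) - f) ./ del(j)                       % one-sided difference, dp(j) < 0

where e_j is the j-th unit vector and f holds the function values at the unperturbed parameters; dp(j) == 0 leaves the column at zero, i.e. holds p(j) fixed.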

Java code

private static class GhoosProblem {
 private double[][] data;
 private double[] dp;

 public GhoosProblem(double[][] datapoints, double[] delta_p) {
  data = datapoints;
  //dp= fractional increment of p for numerical derivatives
  //dp(j)>0 central differences calculated
  //dp(j)<0 one sided differences calculated
  //dp(j)=0 sets corresponding partials to zero; i.e. holds p(j) fixed
  dp = delta_p;
 }

 public MultivariateVectorFunction getModelFunction() {
  return new MultivariateVectorFunction() {
    public double[] value(double[] params) {
        double[] values = new double[data.length];
        for (int i = 0; i < values.length; ++i) {
            final double t = data[i][0];  // get the double value
            values[i] = params[0] *
                Math.pow(t, params[2]) *
                Math.exp(-params[1] * t);  // Ghoos function
        }
        return values;   // function values
    }
  };
 }
 public MultivariateMatrixFunction getModelFunctionJacobian2() {
    return new MultivariateMatrixFunction() {
        public double[][] value(double[] params) {
            double[][] jacobian = new double[data.length][params.length];

            final double a = params[0];
            final double b = params[2];
            final double c = params[1];

            for (int i = 0; i < jacobian.length; ++i) {
                final double t = data[i][0];  // get the double value

                jacobian[i][0] = Math.pow(t, b) * Math.exp(-c*t);
                jacobian[i][2] = a * Math.exp(-c*t) * Math.pow(t, b) * Math.log(t);
                jacobian[i][1] = a * Math.pow(t, b) * (-t*Math.exp(-c*t));
            }
        //System.out.println("Jacobian= "+ Arrays.deepToString(jacobian));
        return jacobian;
        }
    };
}

// compared to Ge2.m octave
public MultivariateMatrixFunction getModelFunctionJacobian() {
        return new MultivariateMatrixFunction() {
            public double[][] value(double[] params) {

        int m = data.length;    // number of data points = rows of the Jacobian
        int n = params.length;  // number of parameters = columns of the Jacobian
        double[] p = params.clone();   // working copy that gets perturbed
        double[] ps = params.clone();  // pristine copy used to restore p[j]
        double[] del = new double[n];
        double[] f1;
        BlockRealMatrix prt = new BlockRealMatrix(m, n);  // initializes to zeros

        double[] f = feval(p);  // model values at the unperturbed parameters
        for (int j=0; j<n; ++j) {
            del[j]=dp[j] * p[j];  //delta_x=fractional(dp) * param value(p)
            if (p[j]==0)
                del[j]=dp[j];     //if param=0 delta_x=fractional(dp)
            p[j]=ps[j] + del[j];
            if (del[j]!=0) {
                f1=feval(p);      //p is now (p + dp*p)
                if (dp[j]<0)
                    prt.setColumn(j,(new ArrayRealVector(f1)).subtract(new ArrayRealVector(f)).mapDivideToSelf(del[j]).toArray());  // one sided diff
                else {
                    p[j]=ps[j] - del[j];  // p is now (p - dp*p)
                    prt.setColumn(j,(new ArrayRealVector(f1)).subtract(new ArrayRealVector(feval(p))).mapDivideToSelf(2*del[j]).toArray());  // central diff
                }
            }
            p[j]=ps[j];     //restore p(j)
        }//for
        //System.out.println("Jacobian= "+ Arrays.deepToString(prt.getData()));
        return prt.getData();  //jacobian, dimension is (m x n)
        }
    };
}

    public double[] feval(double[] params) {
        double[] values = new double[data.length];
        for (int i = 0; i < values.length; ++i) {
            final double t = data[i][0];  // get the double value
            values[i] = params[0] *
            Math.pow(t, params[2]) *
            Math.exp(-params[1] * t);  // Ghoos function
        }
        return values;
    }

}//GhoosProblem

Sorry if the indentation of the code did not come out nicely!

The relevant part is the getModelFunctionJacobian() function.

I have renamed the analytical-derivatives part getModelFunctionJacobian2() and posted it here for comparison.
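
If you want to verify the numerical Jacobian against the analytical one, you can evaluate both at the same parameter vector and compare them element by element. A minimal sketch (the helper compareJacobians is my own illustration, assumed to live next to GhoosProblem in the same outer class; it is not part of the code above):

private static void compareJacobians(GhoosProblem problem, double[] params) {
    // evaluate the dfdp-style numerical Jacobian and the analytical one
    double[][] numeric  = problem.getModelFunctionJacobian().value(params);
    double[][] analytic = problem.getModelFunctionJacobian2().value(params);
    double maxDiff = 0.0;
    for (int i = 0; i < numeric.length; ++i) {
        for (int j = 0; j < numeric[i].length; ++j) {
            maxDiff = Math.max(maxDiff, Math.abs(numeric[i][j] - analytic[i][j]));
        }
    }
    System.out.println("max |numerical - analytical| = " + maxDiff);
}

Called with, for example, compareJacobians(problem, initialGuess) from the fit below, the two matrices should agree closely for small dp when central differences are used.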

To complete the picture, here is the Levenberg-Marquardt setup that uses the Ghoos function:

public void fit() {

    final double[][] dataPoints = {  //  x, y
        //{0.0/60,  0.0},   // never use {0, 0} => org.apache.commons.math3.exception.ConvergenceException: illegal state: unable to perform Q.R decomposition on the 17x3 jacobian matrix
        {15.0/60,   8.891104},
        {30.0/60,   13.21852},
        {45.0/60,   28.09051},
        {60.0/60,   43.0011},
        {75.0/60,   57.43561},
        {90.0/60,   67.06862},
        {105.0/60,  82.60239},
        {120.0/60,  72.4649},
        {135.0/60,  61.4},
        {150.0/60,  43.97924},
        {165.0/60,  30.6},
        {180.0/60,  20.77112},
        {195.0/60,  15.5},
        {210.0/60,  10.85442},
        {225.0/60,  9.33},
        {240.0/60,  7.260234},
    };

    final double[] initialGuess = { 1.0, 1.0, 1.0 };  // p
    final double[] fract_change = { 1E-4, 1E-4, 1E-4 };  // dp should be below 0.0001

    final GhoosProblem problem = new GhoosProblem(dataPoints, fract_change);

    final int len = dataPoints.length;
    final double[] weights = new double[len];
    final double[] target = new double[len];

    for (int i = 0; i < len; i++) {
        weights[i] = 1.0;// / dataPoints[i][1];
        target[i] = dataPoints[i][1];
    }

    final LevenbergMarquardtOptimizer optimizer = new LevenbergMarquardtOptimizer()
            .withCostRelativeTolerance(1E-4)  // stol in octave
            .withParameterRelativeTolerance(1E-4);  // dp should be below 0.0001

    final Optimum optimum = optimizer.optimize(
            builder(problem)
                    .weight(new DiagonalMatrix(weights))
                    .target(target)
                    .start(initialGuess)
                    .maxIterations(100)
                    .build()
    );

    final RealVector solution = optimum.getPoint();

    solution.setEntry(0, solution.getEntry(0) / 60.0);  // go back to minutes
    System.out.println("solution= " + solution);
    System.out.println("CostRelativeTolerance= " + optimizer.getCostRelativeTolerance());
    System.out.println("ParameterRelativeTolerance= " + optimizer.getParameterRelativeTolerance());
    System.out.println("evaluations= " + optimum.getEvaluations());
    System.out.println("iterations= " + optimum.getIterations());
    //System.out.println("residuals= " + optimum.getResiduals());
    System.out.println("RMS= " + optimum.getRMS());
    System.out.println("sigma= " + optimum.getSigma(1E-10));

}//fit

public LeastSquaresBuilder builder(GhoosProblem problem){
    return new LeastSquaresBuilder()
            .checkerPair(new SimpleVectorValueChecker(1e-6, 1e-6))   // SimpleVectorValueChecker is a simple ConvergenceChecker implementation that compares the objective function values of two successive iterations to decide whether convergence has occurred
            .maxEvaluations(Integer.MAX_VALUE)
            .maxIterations(Integer.MAX_VALUE)
            //.lazyEvaluation(true)
            .model(problem.getModelFunction(), problem.getModelFunctionJacobian());
}
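
For completeness, once the optimizer has returned, the fitted curve at the measured x values can be obtained by reusing feval() with the solution. A small sketch (illustrative only; printFittedValues is not part of the original post and should be called before the entry-0 rescaling in fit()):

private void printFittedValues(GhoosProblem problem, RealVector solution) {
    // evaluate the fitted Ghoos function at the data abscissae
    double[] fitted = problem.feval(solution.toArray());
    System.out.println("fitted values = " + java.util.Arrays.toString(fitted));
}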
