
Neural-network nonlinear time series NARX model in Python

I'm trying to create a neural-network nonlinear time series NARX model.

The inputs to the algorithm are:

1. a 2D matrix of (x, y) coordinates;

2. another 2D matrix of (x, y) coordinates;

and the target is the matrix of the real, exact (x, y) values.

First I searched around, modeled this network in MATLAB, and got good results, which I show below. However, I want to implement this NARX model in Python. I have searched for NARX algorithms and found nothing useful, so I would like:

1. references: websites, books, or video series;

2. guidance on how to search for this specific task; or

3. the steps needed to write Python code equivalent to MATLAB's NARX source code and functions.

Here is the MATLAB code:

    % Solve an Autoregression Problem with External Input with a NARX Neural Network
    % Script generated by NTSTOOL
    % Created Wed Nov 09 20:28:50 EET 2016
    %
    % This script assumes these variables are defined:
    %
    %   input- input time series.
    %   output- feedback time series.

    inputSeries = tonndata(input,true,false);
    targetSeries = tonndata(output,true,false);

    % Create a Nonlinear Autoregressive Network with External Input
    inputDelays = 1:2;
    feedbackDelays = 1:2;
    hiddenLayerSize = 10;
    net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

    % Choose Input and Feedback Pre/Post-Processing Functions
    % Settings for feedback input are automatically applied to feedback output
    % For a list of all processing functions type: help nnprocess
    % Customize input parameters at: net.inputs{i}.processParam
    % Customize output parameters at: net.outputs{i}.processParam
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
    net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};

    % Prepare the Data for Training and Simulation
    % The function PREPARETS prepares timeseries data for a particular network,
    % shifting time by the minimum amount to fill input states and layer states.
    % Using PREPARETS allows you to keep your original time series data unchanged, while
    % easily customizing it for networks with differing numbers of delays, with
    % open loop or closed loop feedback modes.
    [inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);

    % Setup Division of Data for Training, Validation, Testing
    % The function DIVIDERAND randomly assigns target values to training,
    % validation and test sets during training.
    % For a list of all data division functions type: help nndivide
    net.divideFcn = 'dividerand';  % Divide data randomly
    % The property DIVIDEMODE set to TIMESTEP means that targets are divided
    % into training, validation and test sets according to timesteps.
    % For a list of data division modes type: help nntype_data_division_mode
    net.divideMode = 'value';  % Divide up every value
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    % Choose a Training Function
    % For a list of all training functions type: help nntrain
    % Customize training parameters at: net.trainParam
    net.trainFcn = 'trainlm';  % Levenberg-Marquardt

    % Choose a Performance Function
    % For a list of all performance functions type: help nnperformance
    % Customize performance parameters at: net.performParam
    net.performFcn = 'mse';  % Mean squared error

    % Choose Plot Functions
    % For a list of all plot functions type: help nnplot
    % Customize plot parameters at: net.plotParam
    net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
        'ploterrcorr','plotinerrcorr'};

    % Train the Network
    [net,tr] = train(net,inputs,targets,inputStates,layerStates);

    % Test the Network
    outputs = net(inputs,inputStates,layerStates);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)

    % Recalculate Training, Validation and Test Performance
    trainTargets = gmultiply(targets,tr.trainMask);
    valTargets = gmultiply(targets,tr.valMask);
    testTargets = gmultiply(targets,tr.testMask);
    trainPerformance = perform(net,trainTargets,outputs)
    valPerformance = perform(net,valTargets,outputs)
    testPerformance = perform(net,testTargets,outputs)

    % View the Network
    view(net)

    % Plots
    % Uncomment these lines to enable various plots.
    %figure, plotperform(tr)
    %figure, plottrainstate(tr)
    %figure, plotregression(targets,outputs)
    %figure, plotresponse(targets,outputs)
    %figure, ploterrcorr(errors)
    %figure, plotinerrcorr(inputs,errors)

    % Closed Loop Network
    % Use this network to do multi-step prediction.
    % The function CLOSELOOP replaces the feedback input with a direct
    % connection from the output layer.
    netc = closeloop(net);
    netc.name = [net.name ' - Closed Loop'];
    view(netc)
    [xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
    yc = netc(xc,xic,aic);
    closedLoopPerformance = perform(netc,tc,yc)

    % Early Prediction Network
    % For some applications it helps to get the prediction a timestep early.
    % The original network returns predicted y(t+1) at the same time it is given y(t+1).
    % For some applications such as decision making, it would help to have predicted
    % y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
    % The network can be made to return its output a timestep early by removing one delay
    % so that its minimal tap delay is now 0 instead of 1.  The new network returns the
    % same outputs as the original network, but outputs are shifted left one timestep.
    nets = removedelay(net);
    nets.name = [net.name ' - Predict One Step Ahead'];
    view(nets)
    [xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
    ys = nets(xs,xis,ais);
    earlyPredictPerformance = perform(nets,ts,ys)
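As a first step toward a Python port, the data-shifting that `preparets` performs can be reproduced with a few lines of NumPy. This is only a sketch under the assumptions of the script above (`inputDelays = feedbackDelays = 1:2`, open-loop/series-parallel training); `make_narx_regressors` is a helper name I made up, not a library function:

```python
import numpy as np

def make_narx_regressors(u, y, input_delays=(1, 2), feedback_delays=(1, 2)):
    """Open-loop (series-parallel) NARX regressors, like MATLAB's preparets.

    Row t of X holds [u(t-1), u(t-2), y(t-1), y(t-2)] and the matching
    target row is y(t).  u and y are (n_samples, n_features) arrays.
    """
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    if u.ndim == 1:
        u = u.reshape(-1, 1)
    if y.ndim == 1:
        y = y.reshape(-1, 1)
    # Drop the first max(delays) timesteps: they have incomplete history,
    # which is exactly the shift preparets applies.
    start = max(max(input_delays), max(feedback_delays))
    rows = []
    for t in range(start, len(u)):
        past = [u[t - d] for d in input_delays] + [y[t - d] for d in feedback_delays]
        rows.append(np.concatenate(past))
    return np.array(rows), y[start:]

# Example with the first (x, y) input and the target from the question
u = np.column_stack(([1, 4, 7, 9, 11, 17, 14, 16, 18, 19],
                     [1, 2, 4, 6, 7, 8, 10, 10, 13, 14]))
t = np.column_stack(([1, 4, 5, 8, 9, 15, 17, 18, 20, 22],
                     [1, 1, 4, 7, 8, 10, 13, 14, 18, 20]))
X, T = make_narx_regressors(u, t)
print(X.shape, T.shape)  # (8, 8) (8, 2): 4 delayed vectors of 2 features each -> y(t)
```

Any static regression model can then be trained on `X -> T`; that is what `train(net, ...)` does in the MATLAB script, just with the delay bookkeeping hidden inside `narxnet`.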

Here is the input.

First input, x coordinates:

1 4 7 9 11 17 14 16 18 19

First input, y coordinates:

1 2 4 6 7 8 10 10 13 14

Second input, x coordinates:

1 7 10 13 16 18 19 23 24 25

Second input, y coordinates:

1 5 7 9 12 14 16 17 19 20

And here is the target.

Actual x coordinates:

1 4 5 8 9 15 17 18 20 22

Actual y coordinates:

1 1 4 7 8 10 13 14 18 20

The result was good enough, given the large errors in both inputs compared with the output, and by changing the number of neurons we can improve it further:

[5.00163468043085;3.99820942369434]

[8.00059395052246;6.99872447652641] 

[11.5625431537178;8.00040094120297] 

[14.9982223917152;9.24359668634943] 

[19.3511330333522;13.0001065644369] 

[18.4627579643821;13.9999624796494] 

[20.0004073095041;17.9997197490528] 

[22.0004822590849;19.9997852867243]

I hope this is clear enough.

Thanks in advance.

PyNeurGen is a possible solution for your problem. It's a Python library with support for several network architectures.

The library also contains a demo for a feed-forward network.

For using a NARX net you can use the definition from: NARX Net PyNeurGen
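If PyNeurGen is not an option, note that the open-loop (series-parallel) NARX used during training is just a static feed-forward network applied to lagged inputs and lagged targets, so any Python MLP can play the role of `narxnet`. A minimal sketch with scikit-learn's `MLPRegressor` (my suggestion, not PyNeurGen's API; the toy series and the `D = 2` lag construction, mirroring `inputDelays = feedbackDelays = 1:2`, are assumptions for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy 1-D series standing in for the question's data.
u = np.sin(np.linspace(0, 6, 60)).reshape(-1, 1)  # exogenous input u(t)
y = 0.5 * np.roll(u, 1) + 0.1                     # target series y(t)

# Lagged regressors [u(t-1), u(t-2), y(t-1), y(t-2)] -> y(t),
# the open-loop form of narxnet(1:2, 1:2, 10).
D = 2
X = np.hstack([u[D - 1:-1], u[D - 2:-2], y[D - 1:-1], y[D - 2:-2]])
T = y[D:].ravel()

# 10 hidden tanh units, roughly matching hiddenLayerSize = 10 above.
net = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                   solver='lbfgs', max_iter=2000, random_state=0)
net.fit(X, T)
print('train R^2:', net.score(X, T))
```

For multi-step prediction, the equivalent of `closeloop` is a small loop that feeds each prediction back into the regressor in place of the true `y(t-1)`, `y(t-2)` values.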
