
MATLAB Neural Network pattern recognition

I've made a simple neural network for mouse gesture recognition (the inputs are angles), created with nprtool (the patternnet function). I saved the weights and biases of the network:

W1=net.IW{1,1};
W2=net.LW{2,1};
b1=net.b{1,1};
b2=net.b{2,1};

and to compute the result I used tansig(W2*(tansig(W1*in+b1))+b2); where in is an input vector. But the result is awful (every output is approximately 0.99), while the output of the command net(in) is correct. What am I doing wrong? It's very important for me to understand why the first method fails, because I do the same thing in my C++ program. I'm asking for help :)

[edit] Below is the code generated by the nprtool GUI. Maybe it will be helpful to someone, but I don't see a solution to my problem in it. The tansig activation function is used for both the hidden-layer and output-layer neurons (is there a parameter for this in the MATLAB network object?).

% Solve a Pattern Recognition Problem with a Neural Network
% Script generated by NPRTOOL
% Created Tue May 22 22:05:57 CEST 2012
%
% This script assumes these variables are defined:
%
%   input - input data.
%   target - target data.    
inputs = input;
targets = target;

% Create a Pattern Recognition Network
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);

% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};


% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand';  % Divide data randomly
net.divideMode = 'sample';  % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% For help on training function 'trainlm' type: help trainlm
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm';  % Levenberg-Marquardt

% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
  'plotregression', 'plotfit'};


% Train the Network
[net,tr] = train(net,inputs,targets);

% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)

% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets  .* tr.valMask{1};
testTargets = targets  .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

% View the Network
view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotconfusion(targets,outputs)
%figure, ploterrhist(errors)

As can be seen in your code, the network applies automated preprocessing to the input and postprocessing to the targets — look for the lines that define processFcns. This means the trained weights and biases are valid only for preprocessed input, and the network's raw output is postprocessed (with the same parameters the targets were processed with). So in your line tansig(W2*(tansig(W1*in+b1))+b2); you can't use your original inputs directly. You have to preprocess the input, run it through the network equation, and then reverse-process the output using the same parameters that were used to postprocess the targets. Only then will you get the same result as calling net(in).
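Concretely, the missing steps could be sketched like this (an untested sketch, assuming the default nprtool pipeline shown above — processFcns set to {'removeconstantrows','mapminmax'} and tansig in both layers — and that the fitted settings are available in the network's processSettings properties, which may vary between toolbox versions):

```matlab
% Extract the trained parameters, as in the question
W1 = net.IW{1,1};  W2 = net.LW{2,1};
b1 = net.b{1};     b2 = net.b{2};

% 1) Apply the same input preprocessing that was fitted during training
x = in;
for k = 1:numel(net.inputs{1}.processFcns)
    x = feval(net.inputs{1}.processFcns{k}, 'apply', x, ...
              net.inputs{1}.processSettings{k});
end

% 2) Plain two-layer forward pass on the preprocessed input
y = tansig(W2 * tansig(W1 * x + b1) + b2);

% 3) Reverse the target postprocessing, in reverse order
for k = numel(net.outputs{2}.processFcns):-1:1
    y = feval(net.outputs{2}.processFcns{k}, 'reverse', y, ...
              net.outputs{2}.processSettings{k});
end
% y should now match net(in)
```

In a C++ port you would bake the fitted mapminmax parameters (the per-row xmin/xmax from processSettings) into the code alongside the weights, since they are part of the trained model.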

You can read more here: http://www.mathworks.com/help/toolbox/nnet/rn/f0-81221.html#f0-81692
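For reference, mapminmax (the default preprocessing function here) linearly rescales each row of the data to [-1, 1] and returns a settings struct that lets you apply the same mapping to new data or reverse it later — this illustrative snippet uses made-up values:

```matlab
x = [0 5 10];
[y, ps] = mapminmax(x);           % y is [-1 0 1]; ps stores xmin/xmax per row
y2 = mapminmax('apply', x, ps);   % same mapping applied to (new) data
x2 = mapminmax('reverse', y, ps); % recovers the original values
```

This is exactly why the raw tansig formula fails: without 'apply' on the input and 'reverse' on the output, the hand-computed network sees data on a completely different scale than it was trained on.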
