[pn,minp,maxp,tn,mint,maxt]=premnmx(p,t) % normalize the data so that the later prediction step is easier
net.trainParam.show = 100 % show is the display interval: training progress is printed every 100 epochs
net.trainParam.goal = 0.0001 % target error between the network output and the original targets
net.trainParam.lr = 0.01 % lr is the learning rate (not the momentum constant); smaller values train more slowly but more stably
y1=sim(net,pn) % sim is used to make the prediction
xlswrite('testdata6',tnew1) % testdata6 is the name of the Excel file
You can also look this up in a textbook; all of this is covered there.
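Putting the fragments above together, here is a minimal sketch of the whole workflow using the older toolbox API. It assumes p and t already hold the raw inputs and targets; the output variable y and the choice of 10 hidden neurons are placeholders of mine, and newer MATLAB releases replace premnmx/postmnmx with mapminmax.

[pn,minp,maxp,tn,mint,maxt] = premnmx(p,t)                       % scale inputs and targets to [-1,1]
net = newff(minmax(pn),[10,1],{'tansig','purelin'},'traingdm')   % 10 hidden neurons, 1 output
net.trainParam.show = 100                                        % print progress every 100 epochs
net.trainParam.goal = 0.0001                                     % stop when the MSE reaches this target
net.trainParam.lr = 0.01                                         % learning rate
net = train(net,pn,tn)                                           % train on the normalized data
y1 = sim(net,pn)                                                 % simulate (predict) on the training inputs
y = postmnmx(y1,mint,maxt)                                       % map the outputs back to the original scale
xlswrite('testdata6',y)                                          % write the results to testdata6.xls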
P = [ ... ] % input data
T = [ ... ] % target data
% Create a new feedforward network
net_1=newff(minmax(P),[10,1],{'tansig','purelin'},'traingdm')
% Current input-layer weights and biases
inputWeights=net_1.IW{1,1}
inputbias=net_1.b{1}
% Current layer weights and biases
layerWeights=net_1.LW{2,1}
layerbias=net_1.b{2}
% Set the training parameters
net_1.trainParam.show = 50
net_1.trainParam.lr = 0.05
net_1.trainParam.mc = 0.9
net_1.trainParam.epochs = 10000
net_1.trainParam.goal = 1e-3
% Train the BP network with the TRAINGDM algorithm
[net_1,tr]=train(net_1,P,T)
% Simulate the BP network
A = sim(net_1,P)
% Compute the simulation error
E = T - A
MSE=mse(E)
x = [ ... ]' % test input
sim(net_1,x)
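One point the snippet above glosses over: if the training data were normalized with premnmx (as in the first snippet), any new test vector must be scaled with the same minp/maxp before calling sim, and the output mapped back with postmnmx. A hedged sketch, where x_new is a hypothetical new input column:

xn = tramnmx(x_new,minp,maxp)     % apply the training-set scaling to the new input
yn = sim(net_1,xn)                % network output in normalized space
y_new = postmnmx(yn,mint,maxt)    % convert back to the original units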
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
That shouldn't happen; I'm on the 2009 version.
x = [0.45 420.32 420.47 510.52 500.88 60.92 30.01 210.06 40.58 480.78 44]
y = [1 0 1 1 0 0 0 0 1 1]
inputs = x'
targets = y'
hiddenLayerSize = 8
net = patternnet(hiddenLayerSize)
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'}
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'}
net.divideFcn = 'dividerand' % Divide data randomly
net.divideMode = 'sample' % Divide up every sample
net.divideParam.trainRatio = 70/100
net.divideParam.valRatio = 15/100
net.divideParam.testRatio = 15/100
net.trainFcn = 'trainlm' % Levenberg-Marquardt
net.performFcn = 'mse' % Mean squared error
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotregression', 'plotfit'}
[net,tr] = train(net,inputs,targets)
outputs = net(inputs)
errors = gsubtract(targets,outputs)
performance = perform(net,targets,outputs)
trainTargets = targets .* tr.trainMask{1}
valTargets = targets .* tr.valMask{1}
testTargets = targets .* tr.testMask{1}
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
view(net)
The trained model is stored in the net structure; to get outputs from new inputs, use the sim() function.
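For the patternnet above, getting a prediction for a new sample just means passing it through the trained net (calling the network object is equivalent to sim) and thresholding the 0..1 output. A small sketch; xNew is a hypothetical new input column with the same number of rows as inputs:

xNew = inputs(:,1)     % hypothetical new sample (here simply reusing the first training column)
score = net(xNew)      % same as sim(net,xNew): raw network output
label = score > 0.5    % threshold into a 0/1 class decision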