t = linspace(0, 2*pi, 20);   % (assumed) time base: 20 samples, matching t1/t2 below
p1 = sin(t);
p2 = sin(t)*2;
plot(t, p1, 'r')
hold on
plot(t, p2, 'b--')
t1 = ones(1,20);     % two target vectors: the amplitudes of the two waveforms,
t2 = ones(1,20)*2;   % used as the desired outputs
p = [p1 p2 p1 p2];
t = [t1 t2 t1 t2];
Pseq = con2seq(p);   % convert the training samples from matrix form to sequence form
Tseq = con2seq(t);
R = 1;    % number of input elements
S2 = 1;   % number of neurons in the output layer
S1 = 10;  % number of neurons in the hidden layer
net = newelm([-2,2], [S1,S2], {'tansig','purelin'});
net.trainParam.epochs = 100;   % set the number of training epochs
net = train(net, Pseq, Tseq);
y = sim(net, Pseq);
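For intuition, the recurrence that `newelm` builds can be sketched in a few lines of NumPy (a hypothetical minimal model with random weights, not the toolbox implementation): the `'tansig'` hidden layer receives the previous hidden state back through context units, and the `'purelin'` layer reads it out.

```python
import numpy as np

rng = np.random.default_rng(0)
R, S1, S2 = 1, 10, 1                 # input size, hidden neurons, output neurons
Wx = rng.standard_normal((S1, R))    # input -> hidden weights
Wh = rng.standard_normal((S1, S1))   # context (previous hidden) -> hidden weights
Wy = rng.standard_normal((S2, S1))   # hidden -> output weights
bh = np.zeros(S1)
by = np.zeros(S2)

def elman_step(x, h_prev):
    # Elman recurrence: the hidden state feeds back through context units
    h = np.tanh(Wx @ x + Wh @ h_prev + bh)   # 'tansig' hidden layer
    y = Wy @ h + by                          # 'purelin' output layer
    return y, h

h = np.zeros(S1)
for x in np.sin(np.linspace(0, 2*np.pi, 20)):  # feed one waveform sample per step
    y, h = elman_step(np.array([x]), h)
```

Because the output at each step depends on the accumulated hidden state, the network can learn to report a property of the whole recent sequence (here, the waveform's amplitude) rather than of a single sample.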
% prediction
P = randn(12,2);
T = randn(12,2);
threshold = repmat([0 1], 12, 1);   % input ranges: one [0 1] row per input element
a = [11 17 23];                     % candidate hidden-layer sizes to try
for i = 1:3
    net = newelm(thresho...         % (truncated in the original)
As you said, newgrnn builds a generalized RBF network, and a generalized RBF network does not need train, so how can there be an error curve?

P = [1 2 3]
T = [2.0 4.1 5.9]
net = newgrnn(P,T)

At this point the network is already fully determined.
network - Create custom neural network
newc - Create competitive layer
newcf - Create cascade-forward backpropagation network
newdtdnn - Create distributed time delay neural network
newelm - Create Elman backpropagation network
newff - Create feedforward backpropagation network
newfftd - Create feedforward input-delay backpropagation network
newfit - Create a fitting network
newgrnn - Design generalized regression neural network
newhop - Create Hopfield recurrent network
newlin - Create linear layer
newlind - Design linear layer
newlrn - Create layered-recurrent network
newlvq - Create learning vector quantization network
newnarx - Create feedforward backpropagation network with feedback from output to input
newnarxsp - Create NARX network in series-parallel arrangement
newp - Create perceptron
newpnn - Design probabilistic neural network
newpr - Create a pattern recognition network
newrb - Design radial basis network
newrbe - Design exact radial basis network
newsom - Create self-organizing map
sp2narx - Convert series-parallel NARX network to parallel (feedback) form
If a network is built with a "design" function, it generally does not need train; if it is built with a "create" function, training it generally produces an error curve.
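A minimal sketch (in Python/NumPy rather than MATLAB, purely for illustration) of why a GRNN needs no iterative training: its output is just a Gaussian-kernel-weighted average of the stored targets, fully determined by the training data and the spread parameter. The function name and the spread value below are assumptions for the sketch, not toolbox API.

```python
import numpy as np

def grnn_predict(x, P, T, spread=0.1):
    """GRNN output at query x: a kernel-weighted average of the stored
    targets T over the stored inputs P -- no weights are ever trained."""
    d2 = (np.asarray(P, dtype=float) - x) ** 2   # squared distances to samples
    w = np.exp(-d2 / (2 * spread ** 2))          # radial basis activations
    return float(np.dot(w, T) / np.sum(w))       # normalized weighted average

P = [1, 2, 3]
T = [2.0, 4.1, 5.9]
print(grnn_predict(2.0, P, T))   # close to 4.1: the network reproduces the data
```

With a small spread the network interpolates the training points exactly, which is why creating it with newgrnn "fixes" it immediately and no error curve appears.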