Steps to implement the BP algorithm (in software; a minimal sketch follows this list):
1) Initialize the weights
2) Feed in a training sample pair and compute the output of each layer
3) Compute the network output error
4) Compute the error signal for each layer
5) Adjust the weights of each layer
6) Check whether the total network error meets the accuracy requirement
If it does, training ends; if not, return to step 2)
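As a concrete illustration of steps 1) through 6), here is a minimal MATLAB sketch of batch BP training with one hidden layer of sigmoid units. The data matrix X (samples in rows), target matrix T, hidden size h, learning rate lr, and error goal are placeholder assumptions, and biases are omitted for brevity; this is not the code used later in the article.
% Minimal BP training loop (sketch): one hidden layer, sigmoid activations, mean-squared error.
% X is N-by-d (samples in rows), T is N-by-k targets; both assumed to exist.
sigm = @(z) 1 ./ (1 + exp(-z));
d = size(X, 2);  k = size(T, 2);  h = 10;     % assumed hidden-layer size
W1 = 0.1*randn(d, h);  W2 = 0.1*randn(h, k);  % 1) initialize the weights
lr = 0.1;  goal = 1e-3;                       % assumed learning rate and error goal
for epoch = 1:1000
    H = sigm(X*W1);                           % 2) forward pass: hidden layer
    Y = sigm(H*W2);                           %    forward pass: output layer
    E = T - Y;                                % 3) network output error
    dY = E .* Y .* (1 - Y);                   % 4) error signal, output layer
    dH = (dY*W2') .* H .* (1 - H);            %    error signal, hidden layer
    W2 = W2 + lr * H' * dY;                   % 5) adjust the weights of each layer
    W1 = W1 + lr * X' * dH;
    if mean(E(:).^2) < goal, break; end       % 6) stop once the error goal is met
end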
3. Main capabilities of a multi-layer perceptron (trained with the BP algorithm):
1) Nonlinear mapping: enough samples -> learning and training
It can learn and store a large number of input-output pattern mappings. As long as enough sample pattern pairs are provided for the BP network to learn from, it can realize a nonlinear mapping from an n-dimensional input space to an m-dimensional output space.
2) Generalization: given a new input sample (not seen during training) -> it still produces the correct input-output mapping
3) Fault tolerance: the error of an individual sample cannot dominate the adjustment of the weight matrix
4. Drawbacks of the standard BP algorithm:
1) It easily falls into local minima (it is a greedy algorithm that finds local optima) and may fail to reach the global optimum;
2) The large number of training iterations makes learning inefficient and convergence slow (a large amount of computation is required);
3) The choice of the number of hidden nodes lacks theoretical guidance;
4) When learning new samples during training, the network tends to forget old samples.
Note 3: Improved algorithms: adding a momentum term, adaptively adjusting the learning rate (this one seems to work well), and introducing a steepness factor (see the sketch below).
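As a hedged sketch of the first two improvements, the fragment below applies a momentum term and a simple adaptive learning rate to a generic weight matrix W. The variable names W, X, T, maxEpochs, the momentum coefficient mu, the rate-adjustment factors, and the helper computeGradient are illustrative assumptions, not code from this article.
% Sketch: gradient-descent update with a momentum term and an adaptive learning rate.
mu = 0.9;                                   % assumed momentum coefficient
lr = 0.1;  prevErr = inf;                   % assumed initial learning rate
step = zeros(size(W));                      % previous update (velocity term)
for epoch = 1:maxEpochs
    [err, dW] = computeGradient(W, X, T);   % hypothetical helper: current error and gradient
    step = lr * dW + mu * step;             % momentum: carry over part of the previous step
    W = W - step;                           % adjust the weights
    if err < prevErr                        % adaptive learning rate:
        lr = lr * 1.05;                     %   error dropped -> speed up
    else
        lr = lr * 0.7;                      %   error rose -> slow down
    end
    prevErr = err;
end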
function Solar_SAE
tic
n = 300;                      % number of training images
m = 20;                       % number of test images
train_x = [];
test_x = [];
% Load the training images and flatten each one into a row vector
for i = 1:n
    %filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3train\' num2str(i,'%03d') '.bmp']);
    %filename = strcat(['E:\matlab\work\c0\TrainImage' num2str(i,'%03d') '.bmp']);
    filename = strcat(['E:\image restoration\3-(' num2str(i) ')-4.jpg']);
    b = imread(filename);
    %c = rgb2gray(b);
    c = b;                    % keep the image as-is (grayscale conversion disabled)
    [ImageRow, ImageCol] = size(c);
    c = reshape(c, [1, ImageRow*ImageCol]);
    train_x = [train_x; c];   % each row is one flattened image
end
% Load the test images the same way
for i = 1:m
    %filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3test\' num2str(i,'%03d') '.bmp']);
    %filename = strcat(['E:\matlab\work\c0\TestImage' num2str(i+100,'%03d') '-1.bmp']);
    filename = strcat(['E:\image restoration\3-(' num2str(i+100) ').jpg']);
    b = imread(filename);
    %c = rgb2gray(b);
    c = b;
    [ImageRow, ImageCol] = size(c);
    c = reshape(c, [1, ImageRow*ImageCol]);
    test_x = [test_x; c];     % each row is one flattened image
end
% Scale pixel values to [0,1]
train_x = double(train_x)/255;
test_x = double(test_x)/255;
%train_y = double(train_y)
%test_y = double(test_y)
% Setup and train a stacked denoising autoencoder (SDAE)
rng(0);                       % fix the random seed for reproducibility
%sae = saesetup([4096 500 200 50])
%sae.ae{1}.activation_function = 'sigm'
%sae.ae{1}.learningRate = 0.5
%sae.ae{1}.inputZeroMaskedFraction = 0.0
%sae.ae{2}.activation_function = 'sigm'
%sae.ae{2}.learningRate = 0.5
%%sae.ae{2}.inputZeroMaskedFraction = 0.0
%sae.ae{3}.activation_function = 'sigm'
%sae.ae{3}.learningRate = 0.5
%sae.ae{3}.inputZeroMaskedFraction = 0.0
%sae.ae{4}.activation_function = 'sigm'
%sae.ae{4}.learningRate = 0.5
%sae.ae{4}.inputZeroMaskedFraction = 0.0
%opts.numepochs = 10
%opts.batchsize = 50
%sae = saetrain(sae, train_x, opts)
%visualize(sae.ae{1}.W{1}(:,2:end)')
% Use the SDAE to initialize a FFNN
nn = nnsetup([4096 1500 500 200 50 200 500 1500 4096]);
nn.activation_function = 'sigm';
nn.learningRate = 0.03;
nn.output = 'linear';         % output unit: 'sigm' (=logistic), 'softmax' or 'linear'
%add pretrained weights
%nn.W{1} = sae.ae{1}.W{1}
%nn.W{2} = sae.ae{2}.W{1}
%nn.W{3} = sae.ae{3}.W{1}
%nn.W{4} = sae.ae{3}.W{2}
%nn.W{5} = sae.ae{2}.W{2}
%nn.W{6} = sae.ae{1}.W{2}
%nn.W{7} = sae.ae{2}.W{2}
%nn.W{8} = sae.ae{1}.W{2}
% Train the FFNN as an autoencoder (the input is also the target)
opts.numepochs = 30;
opts.batchsize = 150;
tx = test_x(14,:);            % pick one test image
nn1 = nnff(nn, tx, tx);       % forward pass before training
ty1 = reshape(nn1.a{9}, 64, 64);
nn = nntrain(nn, train_x, train_x, opts);
toc
tic
nn2 = nnff(nn, tx, tx);       % forward pass after training
toc
tic
ty2 = reshape(nn2.a{9}, 64, 64);   % reconstructed image
tx = reshape(tx, 64, 64);
tz = tx - ty2;                % residual between input and reconstruction
tz = im2bw(tz, 0.1);
%imshow(tx)
%figure,imshow(ty2)
%figure,imshow(tz)
ty = cat(2, tx, ty2, tz);     % original | reconstruction | residual, side by side
montage(ty)
filename3 = strcat(['E:\image restoration\3.jpg']);
e = imread(filename3);
f = rgb2gray(e);
f = imresize(f, [64, 64]);
%imshow(ty2)
f = double(f)/255;
[PSNR, MSE] = psnr(ty2, f);   % compare the reconstruction against the reference image
imwrite(ty2, 'E:\image restoration\bptest.jpg', 'jpg');
toc
%visualize(ty)
%[er, bad] = nntest(nn, tx, tx)
%assert(er <0.1, 'Too big error')
% Read the training data
[f1, f2, f3, f4, class] = textread('trainData.txt', '%f%f%f%f%f', 150);
% Normalize the feature values
[input, minI, maxI] = premnmx([f1, f2, f3, f4]');
% Build the output matrix (one-hot encoding of the class labels)
s = length(class);
output = zeros(s, 3);
for i = 1 : s
    output(i, class(i)) = 1;
end
% Create the neural network
net = newff(minmax(input), [10 3], {'logsig' 'purelin'}, 'traingdx');
% Set the training parameters
net.trainParam.show = 50;
net.trainParam.epochs = 500;
net.trainParam.goal = 0.01;
net.trainParam.lr = 0.01;
% Start training
net = train(net, input, output');
% Read the test data
[t1, t2, t3, t4, c] = textread('testData.txt', '%f%f%f%f%f', 150);
% Normalize the test data with the training statistics
testInput = tramnmx([t1, t2, t3, t4]', minI, maxI);
% Simulate the network on the test data
Y = sim(net, testInput);
% Count the number of correct classifications
[s1, s2] = size(Y);
hitNum = 0;
for i = 1 : s2
    [m, Index] = max(Y(:, i));
    if (Index == c(i))
        hitNum = hitNum + 1;
    end
end
sprintf('Recognition rate is %3.3f%%', 100 * hitNum / s2)
Looking at your data, you at least need class labels; it is not clear which part is the input vector and which is the output vector.