[phi,psi,x] = wavefun('sym4',10);   % 10 or more iterations are recommended for an accurate result
subplot(211), plot(x,phi)           % scaling function
subplot(212), plot(x,psi)           % wavelet function
Morlet ('morl') and Mexican hat ('mexh') are wavelet bases with no scaling function defined, so only their wavelet function can be displayed:
[psi,x] = wavefun('morl',10);
subplot(212), plot(x,psi)           % wavelet function
See the MATLAB help documentation for wavefun; it describes the calling formats for each type of wavelet basis in detail.
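As a sketch of the biorthogonal case mentioned in the help text (assuming the five-output form documented for biorthogonal wavelets; 'bior3.5' is just an example name):

% Biorthogonal wavelets return separate decomposition and reconstruction pairs.
[phi1,psi1,phi2,psi2,x] = wavefun('bior3.5',10);
subplot(221), plot(x,phi1)          % decomposition scaling function
subplot(222), plot(x,psi1)          % decomposition wavelet function
subplot(223), plot(x,phi2)          % reconstruction scaling function
subplot(224), plot(x,psi2)          % reconstruction wavelet function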
The program is as follows:

function [nn, L] = nntrain(nn, train_x, train_y, opts, val_x, val_y)
%NNTRAIN trains a neural net
% [nn, L] = nntrain(nn, x, y, opts) trains the neural network nn with input x and
% output y for opts.numepochs epochs, with minibatches of size
% opts.batchsize. Returns a neural network nn with updated activations,
% errors, weights and biases (nn.a, nn.e, nn.W, nn.b) and L, the sum
% squared error for each training minibatch.
assert(isfloat(train_x), 'train_x must be a float');
assert(nargin == 4 || nargin == 6, 'number of input arguments must be 4 or 6');
loss.train.e = [];
loss.train.e_frac = [];
loss.val.e = [];
loss.val.e_frac = [];
opts.validation = 0;
if nargin == 6
    opts.validation = 1;
end
fhandle = [];
if isfield(opts, 'plot') && opts.plot == 1
    fhandle = figure();
end
m = size(train_x, 1);
batchsize = opts.batchsize;
numepochs = opts.numepochs;
numbatches = m / batchsize;
assert(rem(numbatches, 1) == 0, 'numbatches must be an integer');
L = zeros(numepochs * numbatches, 1);
n = 1;
for i = 1 : numepochs
    tic;
    kk = randperm(m);
    for l = 1 : numbatches
        batch_x = train_x(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        % Add noise to the input (for use in a denoising autoencoder)
        if nn.inputZeroMaskedFraction ~= 0
            batch_x = batch_x .* (rand(size(batch_x)) > nn.inputZeroMaskedFraction);
        end
        batch_y = train_y(kk((l - 1) * batchsize + 1 : l * batchsize), :);
        nn = nnff(nn, batch_x, batch_y);   % forward pass
        nn = nnbp(nn);                     % backpropagation
        nn = nnapplygrads(nn);             % apply gradient update
        L(n) = nn.L;
        n = n + 1;
    end
    t = toc;
    if ishandle(fhandle)
        if opts.validation == 1
            loss = nneval(nn, loss, train_x, train_y, val_x, val_y);
        else
            loss = nneval(nn, loss, train_x, train_y);
        end
        nnupdatefigures(nn, fhandle, loss, opts, i);
    end
    disp(['epoch ' num2str(i) '/' num2str(opts.numepochs) '. Took ' num2str(t) ' seconds' '. Mean squared error on training set is ' num2str(mean(L((n-numbatches):(n-1))))]);
    nn.learningRate = nn.learningRate * nn.scaling_learningRate;
end
end
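A minimal usage sketch, assuming the other DeepLearnToolbox helpers (nnsetup, nnff, nnbp, nnapplygrads, nneval) are on the path; the layer sizes and data dimensions below are illustrative, not from the original post:

% Hypothetical example: N-by-784 inputs in [0,1], N-by-10 one-hot targets.
nn = nnsetup([784 100 10]);      % input, hidden, output layer sizes
opts.numepochs = 10;             % passes over the training set
opts.batchsize = 100;            % N must be divisible by batchsize
[nn, L] = nntrain(nn, train_x, train_y, opts);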
MATLAB wavelet filter functions:
1 The wfilters function
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters('wname') computes four filters associated with the orthogonal or biorthogonal wavelet named in the string 'wname'. The four output filters are
Lo_D, the decomposition low-pass filter
Hi_D, the decomposition high-pass filter
Lo_R, the reconstruction low-pass filter
Hi_R, the reconstruction high-pass filter
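For example (a sketch; 'db4' is just one valid orthogonal wavelet name):

[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters('db4');   % Daubechies-4 filter bank
subplot(221), stem(Lo_D), title('Decomposition low-pass')
subplot(222), stem(Hi_D), title('Decomposition high-pass')
subplot(223), stem(Lo_R), title('Reconstruction low-pass')
subplot(224), stem(Hi_R), title('Reconstruction high-pass')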
2 The biorfilt function
The biorfilt command returns either four or eight filters associated with biorthogonal wavelets.
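A sketch of the four-filter form, assuming the decomposition and reconstruction scaling filters come from biorwavf (described below):

[RF,DF] = biorwavf('bior3.5');             % reconstruction / decomposition scaling filters
[Lo_D,Hi_D,Lo_R,Hi_R] = biorfilt(DF,RF);   % the four decomposition/reconstruction filters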
3 The orthfilt function
[Lo_D,Hi_D,Lo_R,Hi_R] = orthfilt(W) computes the four filters associated with the scaling filter W corresponding to a wavelet
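For example (a sketch; the scaling filter here is produced with dbaux, covered below):

W = dbaux(3);                              % order-3 Daubechies scaling filter
[Lo_D,Hi_D,Lo_R,Hi_R] = orthfilt(W);       % derive the four orthogonal filters from it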
4 The biorwavf function
[RF,DF] = biorwavf(W) returns two scaling filters associated with the biorthogonal wavelet specified by the string W.
5 The coifwavf function
F = coifwavf(W) returns the scaling filter associated with the Coiflet wavelet specified by the string W where W = 'coifN'. Possible values for N are 1, 2, 3, 4, or 5
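For example:

F = coifwavf('coif3');                     % scaling filter of the order-3 Coiflet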
6 The dbaux function
W = dbaux(N,SUMW) is the order N Daubechies scaling filter such that sum(W) = SUMW. Possible values for N are 1, 2, 3, ...
W = dbaux(N) is equivalent to W = dbaux(N,1).
W = dbaux(N,0) is equivalent to W = dbaux(N,1).
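For example (a sketch contrasting the two normalizations):

W1 = dbaux(4);                             % default: sum(W1) equals 1
W2 = dbaux(4,sqrt(2));                     % same filter rescaled so that sum(W2) equals sqrt(2)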
7 The dbwavf function
F = dbwavf(W) returns the scaling filter associated with Daubechies wavelet specified by the string W where W = 'dbN'. Possible values for N are 1, 2, 3, ..., 45.
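For example:

F = dbwavf('db4');                         % scaling filter of the db4 wavelet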
8 The mexihat function
[PSI,X] = mexihat(LB,UB,N) returns values of the Mexican hat wavelet on an N point regular grid, X, in the interval [LB,UB].
Output arguments are the wavelet function PSI computed on the grid X. This wavelet has [-5 5] as effective support.
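For example (a sketch plotting the wavelet over its effective support):

[psi,x] = mexihat(-5,5,1000);              % 1000-point grid on [-5,5]
plot(x,psi), title('Mexican hat wavelet')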