clear all
close all
%channel system order
sysorder = 5
% Number of system points
N=2000
inp = randn(N,1)
n = randn(N,1)
[b,a] = butter(2,0.25)
Gz = tf(b,a,-1)
% ldiv (MATLAB Central File Exchange) performs the long division that gives the
% inverse Z-transform, i.e. the first sysorder samples of the impulse response:
%h=ldiv(b,a,sysorder)'
% If ldiv is used, it returns the following filter weights h:
h= [0.0976
0.2873
0.3360
0.2210
0.0964]
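% If ldiv is not available, the same first sysorder impulse-response samples can
% also be obtained with impz from the Signal Processing Toolbox (an assumed
% alternative sketch, not part of the original script):
% h = impz(b,a,sysorder)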
y = lsim(Gz,inp)
%add some noise
n = n * std(y)/(10*std(n))
d = y + n
totallength=size(d,1)
%Take 60 points for training
N=60
%begin of algorithm
w = zeros(sysorder,1);
for n = sysorder : N
    u = inp(n:-1:n-sysorder+1);   % regressor: most recent sysorder input samples
    y(n) = w' * u;                % filter output with the current weights
    e(n) = d(n) - y(n);           % estimation error
    % Start with a large mu to speed up convergence, then reduce it so the
    % weights settle accurately
    if n < 20
        mu = 0.32;
    else
        mu = 0.15;
    end
    w = w + mu * u * e(n);        % LMS weight update
end
%check of results
for n = N+1 : totallength
    u = inp(n:-1:n-sysorder+1);
    y(n) = w' * u;
    e(n) = d(n) - y(n);
end
hold on
plot(d)
plot(y,'r')
title('System output')
xlabel('Samples')
ylabel('True and estimated output')
figure
semilogy((abs(e)))
title('Error curve')
xlabel('Samples')
ylabel('Error value')
figure
plot(h, 'k+')
hold on
plot(w, 'r*')
legend('Actual weights','Estimated weights')
title('Comparison of the actual weights and the estimated weights')
axis([0 6 0.05 0.35])
% RLS algorithm
randn('seed', 0)
rand('seed', 0)
NoOfData = 8000 % Set the number of data points used for training
Order = 32 % Set the adaptive filter order
Lambda = 0.98 % Set the forgetting factor
Delta = 0.001 % P initialized to Delta*I
x = randn(NoOfData, 1) % Input assumed to be white
h = rand(Order, 1) % System picked randomly
d = filter(h, 1, x) % Generate output (desired signal)
% Initialize RLS
P = Delta * eye ( Order, Order )
w = zeros ( Order, 1 )
% RLS Adaptation
for n = Order : NoOfData
    u = x(n:-1:n-Order+1);         % regressor: most recent Order input samples
    pi_ = u' * P;
    k = Lambda + pi_ * u;          % scalar denominator
    K = pi_'/k;                    % gain vector
    e(n) = d(n) - w' * u;          % a priori estimation error
    w = w + K * e(n);              % weight update
    PPrime = K * pi_;
    P = ( P - PPrime ) / Lambda;   % update the inverse correlation matrix
    w_err(n) = norm(h - w);        % distance from the true weights
end
% Plot results
figure
plot(20*log10(abs(e)))
title('Learning Curve')
xlabel('Iteration Number')
ylabel('Output Estimation Error in dB')
figure
semilogy(w_err)
title('Weight Estimation Error')
xlabel('Iteration Number')
ylabel('Weight Error (log scale)')
The adaptation is usually carried out with the standard LMS algorithm, but when the filter input is a colored random process, and especially when it is highly correlated, the convergence of this algorithm slows down considerably. The main reason is that the increased spread of the eigenvalues of the input autocorrelation matrix degrades the convergence behaviour and increases the steady-state error. In this situation a transform-domain algorithm can be used to speed up convergence. The basic idea of transform-domain algorithms is to first apply an orthogonal transform to the input signal to remove or attenuate its correlation, and then feed the transformed signal to the adaptive filter, thereby improving the condition number of the correlation matrix. Because the discrete Fourier transform (DFT) is approximately orthogonal and fast FFT algorithms exist, the frequency-domain block LMS (FBLMS) algorithm is widely used.

FBLMS is essentially a frequency-domain implementation of the time-domain block LMS algorithm: the time-domain data are grouped into blocks of N points, and the filter weights are held constant within each block. Its block diagram is shown in Figure 2. In the frequency domain, FBLMS can be implemented with the overlap-save method of digital signal processing, which requires far fewer computations than the time-domain method; it can also be computed with the overlap-add method, but that needs more computation than overlap-save. Any overlap ratio between blocks is feasible, but 50% overlap gives the highest computational efficiency. Previous comparisons of the computational load of FBLMS and standard LMS have mainly discussed, in theory, the number of multiplications in the two algorithms. Starting from practical engineering considerations, this article analyses in detail the total number of multiplications and additions of the two algorithms, with the following result:
Complexity ratio = (FBLMS real multiply-adds) / (LMS real multiply-adds) = (25N*log2(N) + 2N - 4) / [2N(2N - 1)]
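As a quick numerical check of this formula (an illustrative evaluation, using the same filter order as the speed test below): for N = 64, log2(N) = 6, so the ratio is (25*64*6 + 2*64 - 4) / [2*64*(2*64 - 1)] = 9724/16256 ≈ 0.6, i.e. FBLMS needs roughly 60% of the real multiply-add operations of time-domain LMS, and the saving grows as N increases.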
The program implementing the FBLMS algorithm in C for the ADSP processor is as follows:
for(i=0; i<=30; i++)
{
    for(j=0; j<=n-1; j++)
        in[j] = input[i*N+j];                /* load the current input block */
    rfft(in, tin, inf, wfft, wst, n);        /* FFT of the input block */
    rfft(w, tw, wf, wfft, wst, n);           /* FFT of the current weights */
    cvecvmlt(inf, wf, inw, n);               /* frequency-domain filtering */
    ifft(inw, t, O, wfft, wst, n);
    for(j=0; j<=N-1; j++)
    {
        y[i*N+j] = O[N+j].re;                /* keep the last N samples (overlap-save) */
        e[i*N+j] = refere[i*N+j] - y[i*N+j]; /* block error */
        temp[N+j] = e[i*N+j];
    }
    rfft(temp, t, E, wfft, wst, n);          /* FFT of the zero-padded error block */
    for(j=0; j<=n-1; j++)
        inf_conj[j] = conjf(inf[j]);
    cvecvmlt(E, inf_conj, Ein, n);           /* correlate error with input spectrum */
    ifft(Ein, t, Ein, wfft, wst, n);
    for(j=0; j<=N-1; j++)
    {
        OO[j] = Ein[j].re;
        w[j] = w[j] + 2*u*OO[j];             /* LMS weight update */
    }
}
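For reference, the same overlap-save FBLMS recursion can be sketched in MATLAB; this is only a rough cross-check of the DSP code, not part of the original program, and it assumes x (input), d (desired signal), N (block and filter length), mu (step size) and nBlocks (number of blocks) are already defined. The weights are kept in the frequency domain and the gradient is constrained to the first N time-domain samples:
W = zeros(2*N,1);                      % frequency-domain weights (2N-point FFT)
xOld = zeros(N,1);                     % previous input block (50% overlap)
for k = 1:nBlocks
    xNew = x((k-1)*N+1 : k*N);         % current input block
    X = fft([xOld; xNew]);             % FFT of the overlapped input
    yBlk = real(ifft(X .* W));
    yBlk = yBlk(N+1:2*N);              % keep the last N samples (overlap-save)
    eBlk = d((k-1)*N+1 : k*N) - yBlk;  % block error against the desired signal
    E = fft([zeros(N,1); eBlk]);       % zero-padded error spectrum
    g = real(ifft(conj(X) .* E));      % correlate error with the input
    g = [g(1:N); zeros(N,1)];          % gradient constraint: drop circular wrap-around
    W = W + 2*mu*fft(g);               % frequency-domain LMS weight update
    xOld = xNew;                       % slide the 50% overlap window
end
If the time-domain weights are needed (for example, to compare with a known channel), they are the first N samples of real(ifft(W)).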
On the EZ-KIT evaluation board, the author measured the running speed of the standard LMS algorithm implemented both in assembly and in C, and compared it with the running speed of the C implementation of the FBLMS algorithm; the results are listed in Table 2. Table 2 shows that, for a filter order of 64, even the FBLMS algorithm written in C is more than 20% faster than the LMS algorithm written in assembly, and the speed advantage increases further for higher filter orders.