hello
well, it does converge somewhat with your data, but not "as well" as with the truly stationary and ergodic signals we prefer to use for system identification (white noise, i.e. a random signal, which is the same thing)
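for context (this is standard adaptive-filter reasoning, nothing specific to your data): with a white random input the autocorrelation matrix is simply R = E[x(n)*x(n)'] = sigma_x^2 * I, so the eigenvalue spread is 1 and the LMS converges at the same rate along every direction of the coefficient space; a seismic trace is coloured and non-stationary, so some directions adapt much more slowly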
why use seismic data for that task ???
also note that in the original code the random noise input is regenerated at each iteration, while with your signal we can only run 1 iteration; that makes the convergence much harder to read because there is no averaging effect
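for reference, here is a minimal sketch of that averaging mechanism (a toy example with my own variable names and values, not your script):

% fresh white-noise excitation at every run, learning curves averaged over the runs
h = [0.8 0.4 -0.3 0.2 0.1];                 % toy unknown FIR system (illustrative)
N = 2000; n_runs = 50; mu = 0.01;           % illustrative values
err_sq = zeros(n_runs, N);
for itr = 1:n_runs
    x = randn(1, N);                        % new random input at each outer iteration
    d = filter(h, 1, x) + 1e-3*randn(1, N); % desired output plus a small noise floor
    w = zeros(size(h)); tap = zeros(size(h));
    for i = 1:N
        tap = [x(i) tap(1:end-1)];
        e = d(i) - tap*w';
        w = w + mu*e*tap;
        err_sq(itr, i) = e^2;
    end
end
figure; plot(10*log10(mean(err_sq,1)));     % averaging over the runs smooths the MSE curve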
your signal also has a quantization problem: it looks quite "staircase"-like, as the peak of the signal is coded on only 10 levels
this is another limiting factor for the LMS algorithm's performance
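you can check that directly on the CSV; a quick sketch (it assumes the samples sit on a uniform quantization grid):

x = readmatrix('BKHL.HHZ.new.csv');
lsb = min(diff(unique(x(:))));   % smallest step between distinct sample values
disp(round(max(abs(x(:)))/lsb)); % approximate number of levels coding the peak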
try to see if you can get a better signal with an analog preamp before the signal gets digitized
With a random signal, the LMS and VSS-LMS identified models are perfectly overlaid (because we have the "best" input signal)
with your signal and a single iteration, there is still convergence but it is not complete; you can try to use a longer excitation signal or increase the mu gain
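one simple way to get a longer excitation from the same data (just a quick workaround; it does not add new spectral content, it only gives the algorithms more samples to adapt on):

x = readmatrix('BKHL.HHZ.new.csv');
x = repmat(x(:), 4, 1);   % 4x longer excitation by repeating the same record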
this plot is obtained with the same convergence gains as in the original code:
now we can make the convergence much faster: I increased the gain for both algorithms by a factor of 100
%learning rate for LMS algorithm
mu_LMS = 0.0004*100;
and
%% Defining initial parameters for VSS-LMS algorithm
mu_VSS(1)=1; %initial value of mu for VSS
alpha = 0.97;
gamma = 4.8e-4*100;
and we get almost the "perfect" result even though the signal is not really 100% appropriate for the task
and the convergence curve:
these changes are in the updated code below:
% symmetric 40-tap FIR reference system to be identified (integer coefficients scaled by 2^-15)
sys_desired = [86 -294 -287 -262 -120 140 438 641 613 276 -325 -1009 -1487 -1451 -680 856 2954 5206 7106 8192 8192 7106 5206 2954 856 -680 -1451 -1487 -1009 -325 276 613 641 438 140 -120 -262 -287 -294 86] * 2^(-15);
% seismic record used as the excitation signal
x = readmatrix('BKHL.HHZ.new.csv');
model_coeff = zeros(1,length(sys_desired));     % LMS coefficient estimate
model_coeff_vss = zeros(1,length(sys_desired)); % VSS-LMS coefficient estimate
model_tap = zeros(1,length(sys_desired));       % tap delay line (input history)
d = randn(size(x))*10^(-noise_floor/20);        % additive measurement noise at the chosen noise floor (dB)
sys_opt = filter(sys_desired,1,x)+d;            % desired signal: unknown system output plus noise
mu_max = 1/(input_var*length(sys_desired));     % upper stability bound for the step size
for itr = 1:1                                       % a single pass: the seismic record is fixed, so there is nothing to average over
    for i = 1:length(x)                             % sample-by-sample adaptation
        model_tap = [x(i) model_tap(1:end-1)];      % shift the new input sample into the tap delay line

        % fixed step-size LMS
        model_out(i) = model_tap * model_coeff';
        e_LMS(i) = sys_opt(i) - model_out(i);
        model_coeff = model_coeff + mu_LMS * e_LMS(i) * model_tap;

        % variable step-size (VSS) LMS
        model_out_vss(i) = model_tap * model_coeff_vss';
        e_vss(i) = sys_opt(i) - model_out_vss(i);
        model_coeff_vss = model_coeff_vss + mu_VSS(i) * e_vss(i) * model_tap;
        mu_VSS(i+1) = alpha * mu_VSS(i) + gamma * e_vss(i) * e_vss(i); % step size driven by the squared error
        if(mu_VSS(i+1) > mu_max)
            mu_VSS(i+1) = mu_max;                   % clamp at the stability bound
        elseif(mu_VSS(i+1) < mu_min)
            mu_VSS(i+1) = mu_min;                   % keep a minimum adaptation speed
        else
            mu_VSS(i+1) = mu_VSS(i+1);
        end
    end
    err_LMS(itr,:) = e_LMS.^2;                      % squared-error learning curve, LMS
    err_VSS(itr,:) = e_vss.^2;                      % squared-error learning curve, VSS-LMS
    disp(char(strcat('iteration no : ',{' '}, num2str(itr) )))
end
figure; hold on;
plot(10*log10(mean(err_LMS,1)),'-b');
plot(10*log10(mean(err_VSS,1)), '-r');
title('Comparison of LMS and VSS LMS Algorithms'); xlabel('iterations');ylabel('MSE(dB)');legend('LMS Algorithm','VSS LMS Algorithm')
figure; hold on;
plot(sys_desired, '-b');                            % reference FIR model
plot(model_coeff, '-*r');                           % coefficients identified by LMS
plot(model_coeff_vss, '-*m');                       % coefficients identified by VSS-LMS
legend('FIR model','LMS FIR ID','VSSLMS FIR ID');