Why do I observe a shift in mean when using the DECIMATE function in MATLAB 6.5 (R13)?
When I use the DECIMATE function and plot the original and decimated signals on the same axes (with grid on; hold on;), I observe a shift in the mean of the decimated signal. I can also verify this from the command line by comparing the means of the two signals; the two values are different.
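Since the original commands are not shown, here is a minimal sketch of the effect. It uses Python/SciPy, whose scipy.signal.decimate mirrors MATLAB's DECIMATE (including the default 8th-order Chebyshev Type I anti-aliasing filter with 0.05 dB passband ripple); the signal and the decimation factor of 10 are illustrative assumptions:

```python
import numpy as np
from scipy import signal

# A constant signal whose mean is exactly 1.0.
x = np.ones(1000)

# Decimate by 10 with the default IIR (Chebyshev Type I) anti-aliasing filter.
y = signal.decimate(x, 10, ftype='iir', zero_phase=True)

print(np.mean(x))  # 1.0
print(np.mean(y))  # noticeably below 1.0 -- the "shift in the mean"
```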
The shift in the mean is caused by the order of the filter used in DECIMATE. By default, an 8th-order Chebyshev Type I lowpass filter is used as an anti-aliasing filter before samples are removed. However, a cost function evaluates the magnitude response of the filter at the cut-off frequency to determine the filter order. Depending on the machine (Intel Pentium, AMD Athlon, etc.) and the operating system (Windows XP, Linux, etc.), different numerical libraries produce different results in this cost function, so different filter orders can be selected for the same DECIMATE command.
Even-order Chebyshev Type I filters have a 0.05 dB attenuation (the passband ripple) at f = 0, which produces the shift in the mean. Odd-order filters have unity gain at f = 0 and therefore do not produce the shift.
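The even/odd distinction can be checked directly from the filter coefficients. In this Python/SciPy sketch (the 0.05 dB ripple matches the DECIMATE design; the cutoff of 0.5 and the orders 8 and 7 are assumed for illustration), evaluating H(z) at z = 1 gives the gain at f = 0:

```python
import numpy as np
from scipy import signal

# Chebyshev Type I designs with the 0.05 dB passband ripple DECIMATE uses.
b8, a8 = signal.cheby1(8, 0.05, 0.5)  # even order
b7, a7 = signal.cheby1(7, 0.05, 0.5)  # odd order

# The DC gain is H(z) evaluated at z = 1, i.e. sum(b) / sum(a).
dc8 = np.sum(b8) / np.sum(a8)
dc7 = np.sum(b7) / np.sum(a7)

print(20 * np.log10(dc8))  # about -0.05 dB: the ripple shows up at f = 0
print(20 * np.log10(dc7))  # about 0 dB: unity gain at f = 0
```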
The workaround is to call DECIMATE with the FIR filter option. The FIR filter has no attenuation at f = 0, so the shift in the mean does not appear in the downsampled signal.
The cost function evaluates the magnitude response of the filter at the cut-off frequency because, as the passband-edge frequency decreases, the default 8th-order design may degenerate. You can see the distortion by following these steps:
1. Start MATLAB
2. Open decimate.m
3. Put a breakpoint on line 126 of decimate.m
4. In the Command Window, enter the following command:
5. When MATLAB breaks at line 126, view the 8th order Chebyshev Type I IIR filter by entering the following command:
6. Zoom in on the cut-off frequency of the filter; you will notice significant distortion in the passband.
The distortion is caused by forming the transfer function from its poles and zeros: roundoff error accumulates in the convolutions (polynomial multiplications) involved. Lowering the filter order reduces this distortion significantly. You can verify this by resuming the MATLAB debugger and viewing the frequency response of the 7th-, 6th-, and lower-order filters at the cutoff frequency.
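The roundoff argument can be illustrated numerically. In this Python/SciPy sketch, the narrow normalized cutoff of 0.005 is an assumed stand-in for a large decimation factor; the same 8th-order design is computed both as a single polynomial transfer function ('ba' form, as used here) and as cascaded second-order sections ('sos'), with a lower-order 'ba' design for comparison:

```python
import numpy as np
from scipy import signal

rp, wn = 0.05, 0.005                   # 0.05 dB ripple; assumed narrow cutoff
w = np.linspace(1e-4, wn, 50) * np.pi  # frequencies inside the passband

def passband_dev_db(b, a):
    """Maximum deviation from 0 dB across the passband ('ba' form)."""
    _, h = signal.freqz(b, a, worN=w)
    return np.max(np.abs(20 * np.log10(np.abs(h))))

# 8th-order design as one polynomial transfer function: ill-conditioned.
dev8 = passband_dev_db(*signal.cheby1(8, rp, wn))

# Same design as second-order sections: well behaved.
sos = signal.cheby1(8, rp, wn, output='sos')
_, h_sos = signal.sosfreqz(sos, worN=w)
dev8_sos = np.max(np.abs(20 * np.log10(np.abs(h_sos))))

# Lower-order polynomial form: roundoff no longer dominates.
dev4 = passband_dev_db(*signal.cheby1(4, rp, wn))

print(dev8)      # large: the passband is badly distorted
print(dev8_sos)  # about 0.05 dB, i.e. just the designed ripple
print(dev4)      # about 0.05 dB as well
```

The comparison shows that the distortion lies in the polynomial representation, not in the design itself, which is why reducing the order (or, in later MATLAB versions, using second-order sections) tames it.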