LabVIEW


Question about ALIASING

Suppose you have an unknown signal f(t) and you have sampled it at a certain rate. Is there any criterion, method, or theorem that can be used if one suspects that the data suffer from aliasing?

In other words: if the rate cannot be increased, is there any check to determine whether the data are suffering from aliasing or not?

Message 1 of 9

No, there is no general check.

Unless you have an expectation about the kind of signal (regarding its frequency spectrum)…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 9

Ideally you should place an analog anti-aliasing filter, matched to your sampling rate, in front of the acquisition.

 


gnappo wrote:

In other words: if the rate cannot be increased, is there any check in order to understand if the data are suffering from aliasing or not?


Well, you are not mentioning the possibility of reducing the rate. Since the alias frequency is a strong function of both the real frequency and the sampling frequency, measuring at several unrelated slower rates (e.g. 99% and 95% of the original sampling rate) should not change a reasonably sampled real frequency, but would change the alias frequencies of much higher-frequency signals, often dramatically. (Just an idea, I have not tried it ;))
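That idea can be sketched numerically. This is an editorial illustration in Python/NumPy (not LabVIEW, and not from the thread; the tone frequencies are made-up values): a tone below Nyquist keeps its apparent frequency when the sampling rate drops slightly, while a tone above Nyquist moves.

```python
import numpy as np

def dominant_freq(f_tone, fs, n=4096):
    """Sample a pure tone of frequency f_tone at rate fs and return
    the frequency of the strongest bin in the one-sided spectrum."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_tone * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 1000.0          # original sampling rate, Nyquist = 500 Hz
f_real = 100.0       # properly sampled tone
f_high = 900.0       # above Nyquist: aliases to |1000 - 900| = 100 Hz

# At fs, both tones show up near 100 Hz and are indistinguishable.
p1 = dominant_freq(f_real, fs)         # ~100 Hz
p2 = dominant_freq(f_high, fs)         # ~100 Hz (alias!)

# At 95% of fs, the real tone stays put but the alias jumps.
p3 = dominant_freq(f_real, 0.95 * fs)  # ~100 Hz
p4 = dominant_freq(f_high, 0.95 * fs)  # ~|950 - 900| = 50 Hz
```

The suspect component is the one whose peak location changes with the sampling rate.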

Message 3 of 9

One trick in signal processing is to sample the same signal at two different rates.  This is an undersampling concept.  From the two aliased signals, you can then figure out what the actual input frequency was (usually based on other criteria, such as the filters used in the acquisition path).
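A rough Python sketch of that two-rate idea (editorial, not from the thread; the rates and frequencies are illustrative assumptions): each sampling rate yields a set of candidate true frequencies consistent with its measured alias, and intersecting the two sets narrows down the actual input, given an upper frequency bound from the analog front end.

```python
def apparent_freq(f, fs):
    """Frequency at which a tone f shows up when sampled at fs
    (folded into the first Nyquist zone [0, fs/2])."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

def candidates(f_app, fs, f_max):
    """All true frequencies up to f_max that would appear at f_app."""
    out = set()
    k = 0
    while k * fs - f_app <= f_max:
        for f in (k * fs - f_app, k * fs + f_app):
            if 0 <= f <= f_max:
                out.add(round(f, 6))
        k += 1
    return out

fs1, fs2 = 1000.0, 800.0
f_true = 2300.0                      # unknown to the observer
a1 = apparent_freq(f_true, fs1)      # measured alias: 300 Hz
a2 = apparent_freq(f_true, fs2)      # measured alias: 100 Hz
f_max = 2500.0                       # bound from the acquisition filters
common = candidates(a1, fs1, f_max) & candidates(a2, fs2, f_max)
```

Here the intersection still contains a few candidates (700, 1700, and 2300 Hz), one of which is the true frequency; the remaining ambiguity is exactly what the "other criteria like filters" in the post resolve.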


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 4 of 9

I may be misinterpreting the intent of the question, but the theory is that if the maximum frequency of the input signal x(t) is limited to F, such that there is no signal energy above F, then a sampling rate of 2F (i.e. the Nyquist rate) will allow the original signal to be reconstructed without aliasing in the resulting discrete-time signal.

 

As others have said, you may be able to employ techniques to discover or guess the appropriate sampling rate, but it is far better to understand the characteristics of the original signal, because energy at frequencies higher than F, even if "low", may still cause low-level aliasing in the result. The band-limited nature of the original signal is key to understanding whether aliasing will be present or not.

 

It sounds like perhaps you don't know the expected bandwidth of your original signal f(t).

Message 5 of 9

Actually, I have a sampled signal and my task is to show that it is not suffering from aliasing and that the sampling rate was correctly chosen.

However, I am not experienced in signal analysis, and I have never used LabVIEW to extract the spectral components of my signal in the frequency domain. I have just installed the Advanced Signal Processing Toolkit. Furthermore, I have found this

http://zone.ni.com/reference/en-XX/help/371361H-01/lvanlsconcepts/aliasing/

which says that frequency components above the Nyquist frequency are shifted according to the formula AF = |CIMSF − IF| (alias frequency = |closest integer multiple of the sampling frequency − input frequency|). I did not know about it.
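A minimal Python sketch of that formula from the NI help page, assuming AF stands for alias frequency, CIMSF for the closest integer multiple of the sampling frequency, and IF for the input frequency:

```python
def alias_frequency(f_input, fs):
    """AF = |CIMSF - IF|: the apparent (alias) frequency is the distance
    from the input frequency to the closest integer multiple of fs."""
    cimsf = round(f_input / fs) * fs
    return abs(cimsf - f_input)

# With fs = 100 Hz (Nyquist = 50 Hz):
a = alias_frequency(25.0, 100.0)    # 25 Hz: below Nyquist, no shift
b = alias_frequency(70.0, 100.0)    # 30 Hz: closest multiple is 100
c = alias_frequency(160.0, 100.0)   # 40 Hz: closest multiple is 200
```

Note that for inputs below Nyquist the closest multiple is 0 and the formula returns the input frequency unchanged, i.e. no aliasing.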

I don't know if this could be a good starting point.

If you are aware of any LabVIEW example which I could check in order to make my ideas clearer, please suggest it to me.

Message 6 of 9

If all you have is a sampled signal, you cannot tell real from alias frequencies.

 

Is there anything known about the signal? What are the instrument and DAQ characteristics?

 

This is not a LabVIEW question, but a general signal processing one.

Message 7 of 9

@gnappo wrote:

Suppose you have an unknown signal f(t) and you have sampled it at a certain rate. Is there any criterion, method, or theorem that can be used if one suspects that the data suffer from aliasing?

In other words: if the rate cannot be increased, is there any check to determine whether the data are suffering from aliasing or not?


Tim alluded to frequency unfolding.

 

This is commonly used in Doppler weather radars to anti-alias the velocity spectrum: by using different pulse repetition frequencies, any velocity spectral components that are Nyquist-aliased (spectrally folded) may be "unfolded" by the ratio of the two PRF bandwidths.  It involves a bit of math but not a lot of calculus.

 

Similarly, if you understand the concept of "Second Range Returns" in radar, they can be un-aliased by comparing the returns' "Delta Rho" with the repetition rate.  Some of the best radar designers missed that.  (Look up RVP-7.  I have it on the authority of the patent holder that "RVP is short for Radar Video Processor" - quote attributed to Dr. Richard V. Passerelli.)


"Should be" isn't "Is" -Jay
Message 8 of 9

@gnappo wrote:

Actually I have a sampled signal and my task is to show that it is not suffering from aliasing and that the sampling rate was correctly chosen.


With just your one sampled waveform, you can't.  You could use a spectrum analyzer that can go much higher in frequency and see what signals are out there.  Or you can try sampling at a different rate and see if the spectra come out the same.  In the real world, we would just choose a sample rate based on the filters that are before the sampling circuitry.


Message 9 of 9