LabVIEW

12 bit ADC, double FFT, cast to U16 for saving

Hi. So I am processing a lot of data at 20MHz and saving a not-insignificant portion of it. Our ADC collects with 12 bit resolution. In order to run the built-in FFT, this of course gets coerced to a 64 bit double. If the data is saved directly as a 64 bit double, we end up with files on the order of 8-10GB, much of which is fluff.

I reasoned that since the ADC resolution is 12 bits, we shouldn't lose any "significant figures" by casting the data (amplitude, and phase wrapped to [0, 2π)) into an unsigned 16 bit integer before saving. The best option would be to save only 12 bits of information, but since there is no built-in U12 type, that would be more complexity than it is worth.

This works and cuts the storage down by almost a factor of 4 (files are ~2GB). It is only "almost" because I now have to include a scaling factor with each array to recover the quantitative amplitude during processing. No big deal.
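Since LabVIEW block diagrams can't be shown inline, here is the packing scheme described above sketched in NumPy. The helper names are made up, and the sketch assumes the array being saved is non-negative with a nonzero maximum:

```python
import numpy as np

def pack_u16(x):
    """Quantize a non-negative double array to U16, returning the packed
    array plus the per-array scale factor needed to restore real units.
    (Hypothetical helper; assumes x.max() > 0.)"""
    scale = x.max() / 65535.0               # one scale factor per array
    packed = np.round(x / scale).astype(np.uint16)
    return packed, scale

def unpack_u16(packed, scale):
    """Restore the quantized array to double precision."""
    return packed.astype(np.float64) * scale
```

The worst-case round-trip error is half a quantization step, i.e. about 1/131070 of the array's full scale.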

I just want to check with people who have more experience: could I run into any issues with this? The fidelity for the amplitude is quite good, with the error upon converting back to double representation six orders of magnitude smaller than the feature of interest. With the phase there are some visible, albeit minor, changes when the signal is unwrapped, primarily away from resonance. The phase is not part of our analysis at the moment, but could be in the future.
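For the phase specifically, the quantization step is fixed at 2π/65536 ≈ 9.6e-5 rad, so the worst-case per-sample error is about half that. A NumPy sketch of the round-trip (test data is made up) shows the bound directly:

```python
import numpy as np

# U16 phase step when [0, 2*pi) is mapped onto 65536 codes
phase_step = 2 * np.pi / 65536          # ~9.6e-5 rad

rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, 10_000)

# quantize, wrapping any value that rounds up to code 65536 back to 0
codes = (np.round(phase / phase_step) % 65536).astype(np.uint16)
restored = codes * phase_step

# worst-case angular error is half a quantization step (modulo 2*pi)
err = np.abs(np.angle(np.exp(1j * (restored - phase)))).max()
```

Note that unwrapping can make these small per-sample errors more visible, since a quantization error near a π jump can move the unwrap decision.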

Thoughts?

Bonus question: is there any way to do an FFT on single precision or integer numbers in LabVIEW? So far as I can see, the answer is "no" without the headache of a .dll call.

Message 1 of 6

Offhand, I see no serious problems.  Your "input data" is 12 bits, so you can only expect at most 12 bits of precision in any sensible data manipulation (including an FFT).  I'm not sure what you mean by a "scaling factor" -- it should be a constant for all of your data, so hardly needs to be stored.

 

Of course, I'm nominally a scientist, not an engineer, so I'd want to "do an experiment" -- simulate (known) data similar to the kinds you are acquiring, add in a "variable noise factor", collect the simulated data, do the FFT, and compare the numbers from the Dbl version to what you get when you cast it to an I16 representation.  When you add noise, you'll want to do multiple runs so that you can look at means and standard deviations.  Don't forget that your "raw data" to the FFT, while generated as Dbls, need to be "sampled with a 12-bit A/D" and stripped of precision before entering the Transform.
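The experiment described above can be sketched in NumPy (the signal parameters, ADC full scale, and noise level here are all invented for illustration; only the logic follows the thread):

```python
import numpy as np

rng = np.random.default_rng(42)
fs, n, f0 = 20e6, 4096, 1e6            # assumed sample rate, length, tone

t = np.arange(n) / fs

def adc12(x):
    """Simulate a 12-bit ADC over an assumed +/-1 V full scale."""
    codes = np.clip(np.round((x + 1) / 2 * 4095), 0, 4095)
    return codes / 4095 * 2 - 1

peaks_dbl, peaks_u16 = [], []
for _ in range(100):                   # multiple runs, as Bob suggests
    sig = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.01, n)
    spec = np.abs(np.fft.rfft(adc12(sig)))   # FFT of the "digitized" data
    # round-trip the spectrum through U16 with a per-array scale factor
    scale = spec.max() / 65535.0
    spec_u16 = np.round(spec / scale).astype(np.uint16) * scale
    peaks_dbl.append(spec.max())
    peaks_u16.append(spec_u16.max())

# difference between the Dbl and U16 peak estimates across all runs
diff = np.abs(np.array(peaks_dbl) - np.array(peaks_u16)).max()
```

The U16 round-trip error on the peaks is bounded by half a quantization step, which is far below the run-to-run spread caused by the noise.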

 

My Prediction (which you are trying to refute!) is that the distribution of the I16 results will overlap the distribution of the Dbl results, meaning there's no statistical difference.  But don't take my word for it -- It's Your Data, and You Can Cry If It's Imprecise ... (you may need to be A Certain Age to get that).

 

Bob Schor

Message 2 of 6

Hey,

I agree with Bob that there should not be any problem. Just a small comment: if the value you are measuring with the ADC always lies in a narrow range, say 0.9 to 1.1 V, but the ADC spans 0 to 10 V, the effective resolution of your measurement could be smaller by a factor of 20 or so. Then you could even get away with saving the data as U8 after subtracting a constant offset, which reduces the storage by another factor of 2.
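The offset-plus-U8 idea can be sketched the same way as the U16 packing (hypothetical helpers; assumes the array is not constant, so the scale is nonzero):

```python
import numpy as np

def pack_u8(x):
    """If the signal occupies a narrow window (e.g. 0.9-1.1 V inside a
    0-10 V ADC range), subtract the offset and rescale so 8 bits cover
    the effective range. Sketch only; assumes x is non-constant."""
    offset = x.min()
    scale = (x.max() - offset) / 255.0
    packed = np.round((x - offset) / scale).astype(np.uint8)
    return packed, offset, scale

def unpack_u8(packed, offset, scale):
    return packed.astype(np.float64) * scale + offset
```

The offset and scale must be stored alongside each array, just like the U16 scale factor in the original post.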

 

Message 3 of 6

I (vehemently) (well, strongly) disagree.  Data Are Sacred -- if the data are very close to 1 V, you change the A/D Full Scale to 0-2 V (or add a simple amplifier into the circuit).  You can't always eliminate noise, but you certainly don't need to add "digitizing noise" to the system.  Been there, done that (well, my BME students, who should have known better than to use ±10 V scaling for a 0-3 V Triaxial Accelerometer, then wondered why their data were so noisy, were amazed that changing the A/D to 0-5 V made such a difference).
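The size of that "digitizing noise" is easy to quantify: the quantization step of an N-bit ADC is (full-scale range)/2^N, and the added noise has an RMS of roughly step/√12. A quick NumPy check of the accelerometer anecdote above (ranges taken from the post):

```python
import numpy as np

def q_noise_rms(v_range_volts, bits=12):
    """RMS quantization noise of an ideal ADC: step / sqrt(12)."""
    step = v_range_volts / 2**bits
    return step / np.sqrt(12)

rms_pm10 = q_noise_rms(20.0)    # +/-10 V setting spans 20 V
rms_0to5 = q_noise_rms(5.0)     # 0-5 V setting spans 5 V
# narrowing the range from 20 V to 5 V cuts quantization noise 4x
```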

 

Bob Schor

Message 4 of 6

What does the data represent and what kind of frequencies does it contain? It is possible that large ranges in the FFT transform are just noise or zero and could be omitted without loss in information. Maybe there are just a few frequencies and harmonics needed to fully restore the interesting signal. This would greatly reduce the storage requirements.
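One way to exploit that, sketched in NumPy with toy data: keep only the bins whose magnitude clears a noise-floor threshold, and store (index, value) pairs instead of the full spectrum. The threshold and spectrum here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
spec = rng.normal(0, 1e-6, 8192)            # toy noise-floor spectrum
spec[1000:1010] += np.hanning(10) * 1.0     # one narrow "resonance"

# keep only bins above a noise-floor threshold, as sparse (index, value)
thresh = 1e-3
idx = np.flatnonzero(np.abs(spec) > thresh)
sparse = spec[idx]
```

If only a few frequencies and harmonics matter, the sparse representation can shrink storage far more than any bit-width trick.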

 

So you basically use 32 bits to represent an amplitude+phase pair. It seems 16 bits each is sufficient, but if one part needs more bits you could split it unevenly (18 bits for amplitude, 14 bits for phase, etc.). Alternatively, you could save real and imaginary parts as 16 bits each.
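An uneven split can be sketched with plain bit-packing into a U32 (hypothetical helpers; assumes the amplitude has been pre-scaled to [0, 1) and the phase lies in [0, 2π)):

```python
import numpy as np

def pack_pair(amp, phase, amp_bits=18, phase_bits=14):
    """Pack one amplitude/phase pair into a single U32, splitting the
    32 bits unevenly. Sketch only; amp in [0, 1), phase in [0, 2*pi)."""
    a = np.uint32(amp * (2**amp_bits - 1))
    p = np.uint32(phase / (2 * np.pi) * (2**phase_bits - 1))
    return (a << np.uint32(phase_bits)) | p

def unpack_pair(word, amp_bits=18, phase_bits=14):
    a = (word >> np.uint32(phase_bits)) / (2**amp_bits - 1)
    p = (word & np.uint32(2**phase_bits - 1)) / (2**phase_bits - 1) * 2 * np.pi
    return a, p
```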

 

Is this just a single FFT or does the frequency composition change over time in an interesting way?

 

 

Message 5 of 6

Hi Bob, what I mean by scaling factor is the one needed to scale the data from U16 representation back to the true amplitude. As I said, I already checked the fidelity and found it satisfactory. I'm mostly asking just in the event that there may be some hook I'm missing. And although I'm quite a deal younger than the song you have in mind, I grew up with enough oldies tapes that I do get it. 😄

Nambu123, the ADC range is already set to the lowest setting, and we measure an AC-only signal with max amplitudes around half the max range. Also, as Bob said, I wouldn't feel okay with adding digitizing noise; it may not seem like a problem now, but who knows? Honestly, the 8GB files weren't a huge problem on their own; our institute has terabytes of available server space. But the lab PC only has a 500GB PCIe SSD, and I don't want that to fill up so fast that we have to constantly think about archiving.

Altenbach, we are doing resonance spectroscopy looking at 2 or more modes. The regions of interest lie between 10s of kHz and ~3MHz. We are already only saving slices ~400kHz wide for each resonance. It is with those slices (amplitude and phase, each getting changed from doubles to U16, for 2 frequency ranges) that we were ending up with the 8GB, now 2GB files.

We are interested in shifts of the target resonances over time. As it stands, one batch of data is collected over ~10 minutes with an excitation burst triggered every ~4 ms, to net ~100,000 spectra with a frequency resolution of ~250 Hz for each FFT. I ran the numbers before and I think we end up passing something like 100GB of raw data through the computer per measurement batch.
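As a sanity check, the storage arithmetic implied by the numbers quoted in this thread can be written out. All figures are approximate, and the resolution is assumed to be ~250 Hz (consistent with the ~4 ms burst window); the double-precision estimate lands a bit below the reported 8-10 GB, which plausibly reflects file overhead or additional saved channels:

```python
# Rough consistency check using the thread's approximate numbers:
# 20 MHz sampling, ~10 min batches, two 400 kHz slices at ~250 Hz
# resolution, amplitude + phase saved per slice.
n_spectra = 100_000
bins_per_slice = int(400e3 / 250)            # ~1600 bins per slice

values = n_spectra * 2 * 2 * bins_per_slice  # 2 slices x (amp + phase)

gb_dbl = values * 8 / 1e9                    # saved as 64-bit doubles
gb_u16 = values * 2 / 1e9                    # saved as U16
gb_raw = 20e6 * 8 * 600 / 1e9                # 10 min of doubles at 20 MHz
```

This gives roughly 5 GB per batch as doubles, 1.3 GB as U16, and ~96 GB of raw doubles streamed through the computer, in line with the "something like 100GB" estimate above.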

Message 6 of 6