06-04-2022 08:00 PM
I'm using a USB-6001 DAQ to acquire sensor samples. This particular sensor needs to be calibrated before use, and it also needs to be polarized and "settle" before accurate measurements can be made. If it hasn't been used for a few days, the polarization/settling can take 3-4 hrs; if it has been used recently and remained powered on, it's ready to go and just needs a few minutes for the calibration routine. I need to show the data for the whole time in either scenario.
The minimum sampling rate is 30 Hz; I'd probably prefer something like 120 Hz, but I'm concerned about the performance implications of 120 samples/sec for the times when polarization takes 3-4 hrs. While that seems like a huuuuge amount of data to "remember" to my own mushy CPU, I have no idea if it's a piece of cake for a modern PC. This is a single-channel analog acquisition.

Does it sound reasonable to just sample like that for 3-4 hrs, keeping all the data on a shift register? Or do I need a way to display only recent data at 30-120 Hz and either log the older data to disk or downsample it and keep it on the register? I would prefer not writing to disk, so my initial idea is to keep about 5 minutes of "high-frequency" data in memory and periodically downsample the older data, also keeping it in memory. At the end of this process I need to present the full 3-4 hrs of data in a graph, but 1 sample/sec is fine for that view.

After all this I started to wonder whether I just have no conception of how taxing this really is, and whether I could keep things simple: keep the full 30-120 Hz data on a shift register and use it for both the live view and the final graph view. That would certainly simplify things for me. Assume a basic Core i5 CPU with 8 GB RAM, running Windows 10.
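For what it's worth, the hybrid scheme described above (a short high-rate window plus a 1 sample/sec history, all in memory) can be sketched outside LabVIEW as well. Here is a minimal Python sketch under my own assumptions (120 Hz acquisition, 5-minute window, averaging as the downsample method); the class and names are mine, not anything from the DAQmx API:

```python
from collections import deque

class HybridBuffer:
    """Keep ~5 min of high-rate samples plus a 1 Hz downsampled history."""
    def __init__(self, hi_rate_hz=120, hi_window_s=300):
        self.hi_rate = hi_rate_hz
        # Fixed-size window: the oldest samples fall off automatically.
        self.recent = deque(maxlen=hi_rate_hz * hi_window_s)
        self.history = []   # 1 sample/sec, grows for the whole run
        self._accum = []    # samples collected for the current second

    def add(self, sample):
        self.recent.append(sample)
        self._accum.append(sample)
        if len(self._accum) == self.hi_rate:
            # Downsample by averaging one second's worth of samples.
            self.history.append(sum(self._accum) / len(self._accum))
            self._accum.clear()

buf = HybridBuffer()
for i in range(120 * 10):   # simulate 10 s of 120 Hz data
    buf.add(float(i))
print(len(buf.recent), len(buf.history))   # 1200 recent samples, 10 history points
```

In LabVIEW terms, `recent` would be a fixed-size array (or lossy queue) for the live chart and `history` the growing array for the final graph.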
06-04-2022 08:41 PM
Let's calculate: 120 samples/sec × 4 hrs × 3,600 sec/hr = 1,728,000 samples; stored as 8-byte doubles, that's about 13.8 MB ≈ 14 MB.
Based on this calculation, 14 MB is insignificant for modern computers. As long as you follow the best practices for array manipulation, you can minimize memory duplication and stay efficient.
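The arithmetic behind that 14 MB figure is a few lines in any language; a Python version, assuming single-channel 120 Hz for 4 hours with 8 bytes per sample (LabVIEW's DBL):

```python
# Worst case from the question: 120 Hz, 4 hours, single channel.
rate_hz = 120
duration_s = 4 * 3600          # 4 hours in seconds
bytes_per_sample = 8           # one DBL per sample

n_samples = rate_hz * duration_s
mem_bytes = n_samples * bytes_per_sample
print(n_samples, mem_bytes / 1e6)   # 1728000 samples, 13.824 MB
```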
Some articles to read:
https://forums.ni.com/t5/Example-Code/Managing-Large-Data-Sets-in-LabVIEW/ta-p/4100668
06-05-2022 11:00 AM
As Santosh points out, a simple calculation with pencil and paper (or, you could write a little LabVIEW program to do the multiplication for you) will answer "how many samples?" (Channels × Samples/sec × Duration in seconds), and if you know how the samples are stored (i.e. bytes/sample), you can estimate the amount of memory needed to hold all of them.
Suppose instead you needed to sample at 1 MHz for 4 days: that comes out to (on the order of) a million samples/sec × 86,400 sec/day × 4 days = 345.6 Giga-samples, which is far more than the RAM on your PC. But who says you have to keep it all in RAM? You can spool the data to disk (a small investment gets you a terabyte or more of storage). Once your 4 days are up, you read the data back in and process it "in batches" ...
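The spool-then-batch pattern sketched above looks roughly like this in Python (file path, chunk size, and the binary layout of little-endian doubles are all my own assumptions, not anything prescribed by LabVIEW or DAQmx):

```python
import os
import struct
import tempfile

# Spool: append each acquired batch to a binary file as it arrives.
path = os.path.join(tempfile.gettempdir(), "daq_spool.bin")
with open(path, "wb") as f:
    for batch_start in range(0, 100_000, 10_000):        # simulated acquisition
        batch = [float(i) for i in range(batch_start, batch_start + 10_000)]
        f.write(struct.pack(f"<{len(batch)}d", *batch))  # little-endian doubles

# Batch processing: read the file back in fixed-size chunks,
# so memory use stays bounded no matter how long the run was.
chunk_samples = 10_000
total, count = 0.0, 0
with open(path, "rb") as f:
    while chunk := f.read(chunk_samples * 8):
        values = struct.unpack(f"<{len(chunk) // 8}d", chunk)
        total += sum(values)
        count += len(values)
print(count, total / count)   # 100000 samples, mean 49999.5
```

The same idea in LabVIEW would be a producer loop writing to a binary file and a post-run loop reading it back in chunks.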
Bob Schor