09-22-2024 09:00 PM
Hello,
I am working on a data logger for an older MTS load frame system at a university, running a long cyclic (~0.1-1 Hz) test that may go millions of cycles. The logger records displacement, load, elapsed time, and cycle counts, the counts coming from a pulsed output of the hardware controller. The amount of data is currently too much even though we only record every Nth cycle (10, 50, or 100) with the current program, DataflowLoop.vi. I am trying to limit the recording to just the max/min values of each cycle, but those need to come from a fully sampled waveform to avoid aliasing, since the peaks are important for verification.
The current program does occasionally miss counts already, about 100-200 out of 300,000, and will likely run into more issues once we add the max/min calculation. I decided to try recreating the code as a producer/consumer loop so that one loop continuously watches the counter and receives data while the other performs the calculations in parallel. Additionally, I would like the system to trigger higher-rate recording, maybe the full sampled waveform instead of max/min, when the specimen goes past a displacement limit that suggests a rapid failure is starting, so that we do not miss that data.
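To be clear about the structure I mean, here is a rough sketch of the two-loop idea in Python, just to show the shape in text, since the real thing is two while loops and a LabVIEW queue; all of the names, data, and the displacement limit below are placeholders:

import queue
import random
import threading
import time

data_q = queue.Queue()            # stands in for the LabVIEW queue
DISP_LIMIT = 5.0                  # placeholder displacement limit

def acquire_block(cycle):
    # Stand-in for one cycle's counter read + analog read (random data here).
    return {"cycle": cycle,
            "load": [random.uniform(-1, 1) for _ in range(100)],
            "displacement": [random.uniform(-1, 1) for _ in range(100)]}

def producer(stop_event):
    # Producer: acquire and enqueue only, no processing.
    cycle = 0
    while not stop_event.is_set():
        data_q.put(acquire_block(cycle))
        cycle += 1
        time.sleep(0.1)

def consumer(stop_event):
    # Consumer: reduce to max/min normally, keep everything past the limit.
    while not stop_event.is_set() or not data_q.empty():
        try:
            block = data_q.get(timeout=0.5)
        except queue.Empty:
            continue
        if max(abs(d) for d in block["displacement"]) > DISP_LIMIT:
            print("limit exceeded: keep full waveform for cycle", block["cycle"])
        else:
            print("cycle", block["cycle"], "load max/min:",
                  max(block["load"]), min(block["load"]))

stop = threading.Event()
threads = [threading.Thread(target=producer, args=(stop,)),
           threading.Thread(target=consumer, args=(stop,))]
for t in threads:
    t.start()
time.sleep(2)
stop.set()
for t in threads:
    t.join()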
I am getting data and triggers, but I am not clear on what the best practice is here for the DAQmx task start/stop/clear in the producer loop. Is it better to do continuous samples and start once, like it is now, or should I use finite samples and start/stop inside the producer loop like the DataflowLoop test? I did get some -200088 errors on that.
This is my first foray into queues, and I am wondering if my current setup is at risk of buffer overrun over such a long test as well. 
I have not put in the max/min feature yet, but I plan to use Index Array to pull out the load and displacement arrays, run each through Array Max & Min, then bundle the results back with the cycle count to record. Any thoughts or help would be appreciated. This is LabVIEW 2020, but I am writing it for a DAQ system running LabVIEW 2016 on, I believe, a PCI-MIO-16XE-10.
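In text form, the reduction I am planning per recorded cycle is roughly this (NumPy used here just to sketch the equivalent of Index Array, Array Max & Min, and Bundle; the array layout and numbers are assumptions, not the actual VI):

import numpy as np

# One recorded cycle's samples: column 0 = load, column 1 = displacement
# (layout assumed for this sketch; the real block comes from DAQmx Read).
cycle_data = np.random.randn(1000, 2)
cycle_count = 12345

load = cycle_data[:, 0]          # Index Array equivalent
displacement = cycle_data[:, 1]

record = {                       # Bundle equivalent
    "cycle": cycle_count,
    "load_max": load.max(), "load_min": load.min(),
    "disp_max": displacement.max(), "disp_min": displacement.min(),
}
print(record)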
Thank you,
Chris
09-22-2024 11:10 PM
Have you evaluated FlexLogger for your application? That product has already figured out the logging for mixed IO using DAQmx devices.
If you decide that your custom VI is still the way to go, I recommend streamlining the DAQ producer loop even more. Push everything you can (such as Mean) into the consumer loop. Wire the number of samples into the DAQmx Read for analog input. Coding for some data or no data is more difficult than coding for the same data size every time.
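To illustrate the shape of a lean producer, here is a sketch in Python using the nidaqmx API, only because I cannot attach a snippet here; the device name, channels, and rate are placeholders, not taken from your VI:

import queue
import nidaqmx
from nidaqmx.constants import AcquisitionType

data_q = queue.Queue()
SAMPLES_PER_READ = 100                      # same block size every iteration

with nidaqmx.Task() as ai_task:
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")   # load + displacement (placeholders)
    ai_task.timing.cfg_samp_clk_timing(
        1000.0,                             # placeholder sample rate
        sample_mode=AcquisitionType.CONTINUOUS)
    ai_task.start()
    for _ in range(50):                     # the real loop runs until a stop condition
        # Read waits until exactly SAMPLES_PER_READ samples per channel are
        # available, so the hardware clock paces the loop and the block size
        # never changes; all averaging / max-min belongs in the consumer.
        block = ai_task.read(number_of_samples_per_channel=SAMPLES_PER_READ)
        data_q.put(block)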
You mention:
The current program does occasionally miss counts already, about 100-200 out of 300,000...
Is that the DAQ loop or the consumer/processing loop that misses counts?
Use the Tools>>Profile>>Performance and Memory tool to help locate performance bottlenecks in your current implementation. I think you will find that Write Characters to File is the clear bottleneck. If you write to TDMS, I think you will find that your write time is acceptable even when logging all samples for long durations to an already large file. No matter which File I/O API you use, open the file during initialization (before the loop) and close it during cleanup (after the loop).
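The file-handling pattern looks like this, sketched with the npTDMS Python package standing in for the LabVIEW TDMS VIs (the file path, group, and channel names are made up):

import numpy as np
from nptdms import TdmsWriter, ChannelObject

# Open the file once before the loop, append every iteration, close once after.
with TdmsWriter("fatigue_log.tdms") as writer:               # initialize
    for cycle in range(1000):                                 # acquisition/logging loop
        load = np.random.randn(100)                           # placeholder data block
        writer.write_segment(
            [ChannelObject("Acquired", "Load", load)])        # appends to the open file
# the file handle is closed here, during cleanup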
09-23-2024 08:56 AM
Hello Doug,
Thanks for the response. I have not checked out FlexLogger yet; it looks like it may be a good option. It will take me some time to get it through our IT procedures for evaluation and potential purchase, but it may work out well for several electromechanical machines we are working on, so I will start looking into it further.
That makes sense on the samples and the data variation. The producer should then only have the counter read, the analog read, and perhaps the elapsed time being bundled into the queue? Would it be bad form to move the displacement-limit stop and the stop button into the consumer loop and use a local variable to feed the producer loop's stop?
The DAQ loop is the current one that I noticed was missing counts, based on the hardware counter built into the controller.
I have never used the performance tool, so I will give it a try. TDMS or Write to Text File would record the data to the file every loop iteration, so if the system became unresponsive or there was a power outage, the data already written would still be retrievable, assuming the file was not corrupted, correct?
Thanks again.
09-23-2024 11:13 AM
I don't have LabVIEW on this computer, so I can only remark on the diagram you have shown.
09-24-2024 02:31 AM
Hi Strehl,
@Strehl wrote:
I am getting data and triggers, but I am not clear on what the best practice is here for the DAQmx task start/stop/clear in the producer loop. Is it better to do continuous samples and start once, like it is now, or should I use finite samples and start/stop inside the producer loop like the DataflowLoop test? I did get some -200088 errors on that.
There are (at least) two things I would immediately change in your producer loop:
Minor change:
09-24-2024 10:54 PM
Hello McDuff,
1&5: Those make sense, thank you.
2,3 & 4: I had it pretty high for resolution during machine tuning, though I think 10-30 Hz would be fine. There will be some variation, but it should be actuating at less than 1 Hz. The wait was in there to keep the resource load low on the older computers for a different test with continuous sampling and will be removed.
I believe I may be misunderstanding how the sampling works here. Are you saying I should set the sampling to Continuous on the DAQmx Timing VI and then set the number of samples on the Read, and this will automatically control the buffer size?
09-24-2024 11:06 PM
Hello GerdW
There are (at least) two things I would immediately change in your producer loop:
Agreed on the wait function, and I think this is what McDuff is saying as well. I am just a bit confused, I guess, on the difference between continuous and finite behaviors in this situation. I would sum up my understanding as: "we want continuous samples of a finite set in each loop".
Thanks, I will do that on the clock. Paralleling the DI and AI in this case would just be splitting the error wire, right?
09-25-2024 02:23 AM
Hi Strehl,
@Strehl wrote:
I am just a bit confused, I guess, on the difference between continuous and finite behaviors in this situation. I would sum up my understanding as: "we want continuous samples of a finite set in each loop".
You clearly defined the AI task as "Continuous samples", so I guess you want to read samples continuously.
(A "Finite samples" task is something different!)
Inside your loop you should define a fixed number of samples to read. The common suggestion is to read about 1/10 of the sample rate, like 100 samples for a sample rate of 1 kS/s…
@Strehl wrote:
Paralleling the DI and AI in this case would just be splitting the error wire, right?
Yes: THINK DATAFLOW!
09-25-2024 10:47 AM
@Strehl wrote:
I believe I may be misunderstanding how the sampling works here. Are you saying I should set the sampling to Continuous on the DAQmx Timing VI and then set the number of samples on the Read, and this will automatically control the buffer size?
The buffer size refers to the extra memory LabVIEW allocates on the host (that is, your PC) for data acquisition in case there are any hiccups. Because memory is usually plentiful, I set the buffer to 4-8 s worth of data manually. BUT you don't have to set the buffer size manually for your application; it is only worth doing when you have high-speed acquisition, 8 MSa/s or more.
Your DAQmx Read controls the loop rate: it will return every NumSamples/SamplingRate seconds. GerdW provided a good explanation in his post. So there is no need for a Wait in your loop.
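For example (Python nidaqmx syntax just to show the two settings together; the rate, buffer size, and channel are illustrative, not a recommendation for your hardware):

import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")          # placeholder channel
    # For a continuous task, samps_per_chan sets the host buffer size:
    # 8000 samples at 1 kS/s is about 8 s of headroom against hiccups.
    task.timing.cfg_samp_clk_timing(
        1000.0,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=8000)
    task.start()
    # Reading 100 samples at 1 kS/s returns every 100/1000 = 0.1 s,
    # so the read paces the loop and no Wait is needed.
    data = task.read(number_of_samples_per_channel=100)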
09-26-2024 10:36 PM
Thanks. I have been trying to use a circular buffer for a different application, so I was curious how that worked, but I do not have any DAQ devices faster than 1.25 MSa/s, so it should not be a problem.
Working on streamlining the producer loop with the mentioned changes and will perform a test.