11-10-2014 12:43 PM
Hello,
Currently I am working with a myRIO-1900 and I am trying to write a program to log data. I want to sample the analog input data at 256 × 60 = 15360 Hz, which corresponds to a sample period of about 65 µs.
To do so I tried to build a real-time application and run it on the myRIO. However, it seems I am not able to run a real-time application at that frequency. In the attached program (in which only the execution time is saved) I run my code for 77,000 iterations, which I would expect to take around 5 seconds (77,000 × 65 µs); instead it takes 15 seconds. I tried different types of timed loops with no success.
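For reference, here is a quick sanity check of the arithmetic (plain Python, just the numbers, not the actual LabVIEW code):

    f_target = 256 * 60              # 15360 Hz
    period = 1.0 / f_target          # ~65.1e-6 s, i.e. ~65 us per sample
    iterations = 77000
    expected = iterations * period   # ~5.01 s expected run time
    observed = 15.0                  # s, the run time actually measured
    actual = observed / iterations   # ~195 us -> roughly 3x the target period
    print(f"period {period*1e6:.1f} us, expected {expected:.2f} s, actual {actual*1e6:.0f} us/iter")

So each iteration is effectively taking about three times longer than the 65 µs I am aiming for.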
Can someone please explain to me what I am doing wrong, and whether it is possible to run a real-time application at that frequency in the first place?
Thank you very much!
Maurice
11-10-2014 01:08 PM
Just a 5-second glance at the VI:
11-12-2014 02:28 PM
Dear Altenbach,
Thank you for your reply. I tried to improve my code and extend it. In my extended code I save the last 64 data points of the analog inputs to an array, which I will need for further calculations. However, I also want to save all the data points to a file, so I write the array of 64 data points to the file every 64 iterations. I chose this approach because saving one data point per iteration would take too much time; a rough sketch of the idea is below.
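Since a LabVIEW diagram can't be pasted as text, here is the chunked-write idea as minimal Python pseudocode; read_analog_input and the file name are just illustrative stand-ins for what the VI actually does:

    import struct

    CHUNK = 64  # write to disk only once every 64 samples

    def read_analog_input():
        return 0.0  # hypothetical stand-in for the actual AI read

    buffer = []
    with open("samples.bin", "ab") as f:
        for i in range(77000):
            buffer.append(read_analog_input())
            if len(buffer) == CHUNK:
                f.write(struct.pack("64d", *buffer))  # one write per 64 samples
                buffer.clear()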
There are two things I would like to improve:
Currently one iteration takes about 130 µs (using a timed loop with dt = 260 µs at the moment). Ideally I would like to bring this down to under 65 µs, to achieve a sampling frequency of about 15 kHz on the analog input.
Currently an iteration in which the array is saved to a file takes about 6000 µs. Is there a better way to save the data points without affecting the timed loop? (Using parallel loops? I had a quick look at this, but I had difficulties passing data between the loops.)
This is my first LabVIEW programming experience, so please excuse me if I am heading in the wrong direction with my code.
Thank you,
Maurice
11-12-2014 02:36 PM
Saving the data to disk is really slow compared to everything else you want to do here. Look at the Producer/Consumer design pattern: you simply send your data from the timed loop (the time-critical loop) through a queue to your logging loop, which runs at whatever rate it can. A rough sketch of the idea is below.
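In text form the pattern looks roughly like this; Python threads stand in for the two LabVIEW loops, and read_analog_input and the file name are illustrative stand-ins:

    import queue
    import threading

    data_q = queue.Queue()   # the queue between the two loops
    SENTINEL = None          # tells the logging loop to stop

    def read_analog_input():
        return 0.0           # hypothetical stand-in for the AI read

    def producer():
        # Time-critical loop: acquire and enqueue, never touch the disk.
        for _ in range(77000):
            data_q.put(read_analog_input())  # enqueueing is fast
        data_q.put(SENTINEL)

    def consumer():
        # Logging loop: drains the queue to disk at whatever rate it can.
        with open("log.txt", "w") as f:
            while True:
                sample = data_q.get()
                if sample is SENTINEL:
                    break
                f.write(f"{sample}\n")

    acq = threading.Thread(target=producer)
    log = threading.Thread(target=consumer)
    acq.start(); log.start()
    acq.join(); log.join()

The point is that the put is cheap and never waits on the disk, so the time-critical loop keeps its rate while the logging loop absorbs the slow file I/O.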
11-13-2016 04:24 PM
Please excuse the response to a two-year-old post. I've actually inherited the code the original poster was working on. He has moved on to bigger and better things, and now this is MY first LabVIEW project. Per the suggestion, Maurizeee implemented the producer/consumer design pattern.
I am trying to use this code to capture data that can be fed into Simulink to validate a model I've put together. One question I haven't been able to answer through Google searching is how I can log loop time, specifically the time at which each sample is taken. I've got the iteration time being fed into an array along with the analog input readings, and my instinct is that I could just sum the iteration times cumulatively to get the time since the loop started, but I would like to be sure.
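If I understand it right, that instinct is the cumulative (running) sum of the iteration times, which does give the elapsed time at each sample. Illustrated in Python with made-up iteration times:

    from itertools import accumulate

    # Per-iteration durations as logged next to the AI readings (seconds).
    iter_times = [65e-6, 66e-6, 64e-6, 65e-6]

    # Running sum = elapsed time since the loop started, per sample.
    timestamps = list(accumulate(iter_times))
    print(timestamps)  # ~ [6.5e-05, 0.000131, 0.000195, 0.00026]

One caveat I can see: any jitter or rounding in each measured iteration time accumulates in the sum, so logging an occasional absolute timestamp as a cross-check seems wise.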
Another problem I have is that I would like the period between samples to be consistent, but if I'm interpreting correctly, each sample is taken immediately after the previous loop iteration completes, rather than after a specified interval.
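From what I've read, the usual cure for an inconsistent period is to schedule each sample against an absolute deadline instead of waiting a fixed amount after the work finishes (roughly what a LabVIEW timed loop with a fixed dt seems to do). A minimal sketch of the idea, assuming a 65 µs period and a hypothetical read_analog_input:

    import time

    PERIOD = 65e-6                         # desired sample period, seconds

    def read_analog_input():
        return 0.0                         # hypothetical stand-in for the AI read

    deadline = time.monotonic()
    for _ in range(1000):
        sample = read_analog_input()
        deadline += PERIOD                 # absolute deadline, so timing error
        while time.monotonic() < deadline: # does not accumulate iteration to
            pass                           # iteration; busy-wait, since sleep()
                                           # is far too coarse at 65 us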
I'm preparing to migrate this to a CompactRIO-9065, and I'm considering using the FPGA to try to speed things up. I've been looking through the FPGA tutorials, but haven't found anything like a summary of the benefits and limitations of using the FPGA. Please correct me if I am wrong, but I think my sampling could be made much more efficient if it were done on the FPGA rather than the CPU?
With that migration I'm also considering reworking this VI to toggle between queueing data and writing to file, but I don't know what memory limitation there is on queueing. I've run the VI on a myRIO-1900 with the downsampling set too frequent (and thus an ever-growing queue), to the point of having more than 50k array entries queued, without seeing any sort of memory-overflow error. If I could alternately capture data, toggle a switch (or automate switching between modes based on queue length), and then process the queue into a data file, I expect I could get far superior resolution on my sample time; a sketch of that toggle is below.
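As a sanity check on that plan: my understanding is that an unbounded queue just grows until the target runs out of RAM, with no error raised until it is too late, so bounding the queue (or draining it above a threshold, as I'm proposing) is what keeps memory in check. A rough Python sketch of the toggle, with made-up sizes and names:

    import queue

    data_q = queue.Queue(maxsize=100_000)  # a hard cap keeps memory bounded
    DRAIN_AT = 50_000                      # switch to writing above this depth

    def read_analog_input():
        return 0.0                         # hypothetical stand-in for the AI read

    with open("capture.txt", "w") as f:
        for _ in range(500_000):
            if data_q.qsize() < DRAIN_AT:
                data_q.put(read_analog_input())   # capture mode: enqueue only
            else:
                while not data_q.empty():         # drain mode: flush to disk;
                    f.write(f"{data_q.get()}\n")  # sampling pauses meanwhile

The obvious trade-off is that sampling stops while the queue drains, which is acceptable for my use since I only need clean bursts of high-resolution data.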
Thanks a lot for any insight you might provide.