06-29-2023 06:49 AM
Hello,
I have a problem with my VI. I'm using LabVIEW 21.0.1f2 (32-bit) with the NI cDAQ-9174 and the PT100 temperature input module NI-9216. PT100 temperature sensors are connected to the NI-9216.
For my VI I want two or more Producer loops and one Consumer loop. The Producer loops should acquire the data and store it in the queue. The Consumer loop should take the data out of the queue and write it to a file.
All of the above is working in my VI; the problem occurs when the two Producer loops are timed differently. In the final VI there will be different sensors (not only temperature), which is why I need differently timed loops. The VI only writes the data at the timing of the Producer loop with the higher wait time, which is 2000 ms in this example.
It would also be nice if the data from different sensors are sorted (see picture below).
I think I know why this problem occurs; I just can't find a way to solve it.
Thank you in advance!
Solved!
06-29-2023 07:45 AM
Hi emi,
@emi1010 wrote:
The VI only writes the data with the timing of the Producer loop with the higher wait time, which is 2000ms in this example.
Without being able to look at your VI right now, I guess you use queues and more than one Dequeue function inside your consumer loop!?
When you don't provide a timeout value for Dequeue, it will wait indefinitely for new elements in the queue - and so the slower producer dictates when your consumer loop is able to run its next iteration!
I guess all producers should write to the same queue, so the consumer only has to read one queue. The queue element should contain the information which producer sent the message, like cluster of [sender, timestamp, value, additional data]…
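The block diagram itself can't be shown as text, but the single-shared-queue idea maps to any language. Here is a minimal Python sketch of the pattern (the sensor names, rates, and element counts are invented for illustration, not taken from the VI):

```python
import queue
import threading
import time

shared = queue.Queue()  # one queue shared by all producers

def producer(sender, period_s, count):
    """Push (sender, timestamp, value) elements at this producer's own rate."""
    for i in range(count):
        shared.put((sender, time.time(), i))
        time.sleep(period_s)

# Two producers at different rates, like the fast and slow loops in the VI
threading.Thread(target=producer, args=("sensor1", 0.02, 5)).start()
threading.Thread(target=producer, args=("sensor2", 0.001, 20)).start()

# Consumer: a single dequeue point, so the fast producer is never
# held up waiting for the slow one to deliver.
rows = []
for _ in range(25):
    sender, ts, value = shared.get()
    rows.append((sender, value))
print(len(rows))
```

Because every element carries its sender, the consumer can decide per element what to do with the data instead of blocking on one dequeue per producer.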
06-29-2023 08:31 AM
OK, I'm not completely sure why your code behaves as it does, but it is oddly backwards. Here are my suggestions:
Bob (Can you tell I'm a Nut about Channel Wires?) Schor
06-29-2023 10:25 AM
I'm at LV 2020 and can't open your code. Here are some general thoughts about having multiple data producers delivering to a single consumer at different rates.
1. First make a decision about your file format. A format like TDMS is very good at handling different "channels" at different rates. But you can't open such files in simple apps like Notepad. There *is* however a free plugin for Excel that will painlessly import a TDMS file to Excel, with separate worksheets as needed for the different-rate channels.
2. If you're sure you want to stick with simple ASCII text files (such as CSV format), there's another approach I've been known to use. It has generally been built on top of a QMH-style framework where the queue datatype is a cluster of string message and variant data.
Each of the producers is contained in its own parallel loop, and the producers get into a free running state where they repeatedly "push" their data and timing info into a single shared queue. The string message identifies what kind of data it is, the variant holds the data and timing info. Only the parallel consumer loop ever performs dequeues.
Meanwhile the consumer maintains a (typedef'ed) cluster of state variables, including fields for the data from each of the producers. Each time the consumer dequeues, the string message identifies who the data is from (and by implication, what to convert the variant data into), and that data goes into the correct field of the big state variable cluster. (Sometimes I may accumulate data into an array, sometimes I simply replace prior stale values -- it depends on the needs of the app.)
I'll have decided ahead of time what's going to trigger me to write 1 CSV line to a file. It's usually every time I get an update from one of the producers. Which one? Well again, that's very app dependent. When I write based on a faster producer, I'll end up with many lines in the file where the slower producers' data are repeated because I'm always simply writing the most recent known value. When I write based on a slower producer, I'll typically accumulate data from the faster producers in an array so when it comes time to write I'll have options. I can do averaging, filtering, most recent value, etc. In some cases I might do 2 or more of those things in separate "columns" of the CSV.
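That "most recent known value" consumer can be sketched in a few lines. This is a hypothetical Python illustration (the sensor names, values, and the choice of the faster sensor as the write trigger are all mine), not the actual LabVIEW state cluster:

```python
import csv
import io

# The "state cluster": one field per producer, holding the latest value.
state = {"temp": None, "pressure": None}

# Stand-in for messages dequeued from the shared queue, fast "temp"
# updates interleaved with slower "pressure" updates.
messages = [
    ("temp", 21.5), ("pressure", 1.01), ("temp", 21.6),
    ("temp", 21.7), ("pressure", 1.02),
]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(state.keys())
for sensor, value in messages:
    state[sensor] = value                # replace the prior stale value
    if sensor == "temp":                 # write triggered by the fast producer
        writer.writerow(state.values())  # slow sensor's value simply repeats

print(out.getvalue())
```

Note how the slow sensor's last value is repeated on every row written by the fast trigger, exactly the behavior described above.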
There are pros and cons and a lot depends on the kind of workflow you need to support when analyzing the data and creating reports. I think TDMS is a better inherent fit for multi-rate data, but I've more often written to CSV simply to support internal customers' preferred workflow.
-Kevin P
06-29-2023 10:53 AM
I guess you don't want the empty rows on the slower signal?
It makes perfect sense currently, as the consumer only has 1 sample every second iteration.
The easy solution is to write to a TDMS file instead, then you can extract and defragment and stuff afterwards.
06-30-2023 02:10 AM - edited 06-30-2023 02:49 AM
Hi,
I'm not sure about that; I never used tunnels, but it seems your tunnel blocks the execution of the second (faster) producer loop. Use a different way to stop your loops.
If you want your data sorted like in the picture, you will have 40 'sensor 2' values per 'sensor 1' value, just FYI.
You need to tell your consumer which 'sensor' the data is coming from, i.e. add a constant to the Build Array with '1' for the first sensor (edit: I'd make this a cluster), '2' for the second, and add 2 columns to the data of the second sensor before writing to the file.
good luck
Timo
06-30-2023 06:54 AM
Thanks to you all for your suggestions.
I will try some of these solutions next week and will update you!
07-06-2023 06:27 AM
Hello everyone,
after some tries I managed to fix the error, thanks to your help. I used a Tag channel instead of the Stream channel wire and it worked.
But I still have some problems with sorting the data as shown in the picture I attached above. I tried some things, but in both solutions the data of the different sensors are not aligned in time. I want the data that is acquired at the same timestamp to be written in the same row of the array.
Both solutions I have come up with are attached below.
At this point I'm also fine with using a TDMS file instead of CSV, but I couldn't figure that out either.
Thank you!
Emi
07-06-2023 07:18 AM - edited 07-06-2023 07:19 AM
That's a hard one: since you generate the timestamps with fractions of a second (like '14:04:41.947'), they are most likely never exactly the same.
What comes to mind is to save the first result in a shift register. With the next result, compare whether the times match (without the fractions, like in your picture); if not, write the first one to the file and put the latest result in the shift register. If they match, concatenate the strings and write them to the file.
With that you have a delay in writing the file, but samples with the same time will end up in the same row. As you write the time of every sensor in its own column, you don't lose data with that comparison.
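The shift-register idea above can be sketched like this, in Python rather than LabVIEW, with invented timestamps and sensor names (assume one incoming record per dequeue, as in the VI):

```python
# Hold one record back (the "shift register"); merge it with the next
# record when both fall in the same whole second, otherwise flush it alone.
samples = [
    ("14:04:41.947", "sensor1", 21.5),
    ("14:04:41.951", "sensor2", 1.01),
    ("14:04:43.950", "sensor1", 21.6),
]

lines = []
held = None
for ts, sensor, value in samples:
    if held is None:
        held = (ts, sensor, value)
    elif held[0].split(".")[0] == ts.split(".")[0]:  # compare whole seconds
        lines.append(f"{held[0]},{held[2]},{ts},{value}")  # merged row
        held = None
    else:
        lines.append(f"{held[0]},{held[2]}")               # flush alone
        held = (ts, sensor, value)
if held is not None:
    lines.append(f"{held[0]},{held[2]}")  # flush the final held record

print(lines)
```

The first two samples share the second 14:04:41, so they land in one row; the third gets its own row, one iteration late, which is the write delay mentioned above.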
Timo
07-06-2023 02:51 PM
Sorry, I haven't been following this discussion so closely, but I wonder if it wouldn't be easier to save the "raw" data files that come from different sources at different timings in different files (particularly if at least some of the data are "regular in time" so that they can be saved in a compact form such as a Waveform). Once the data files are all written, they can be examined and "merged" if this is necessary.
There could well be interaction between the "regular" and the "irregular" data channel, but that might have nothing to do with the format of the data files, rather with how the two channels interact. It might be that you can describe a condition where "when Channel 2 shows this, then we need to be on the lookout for Channel 1 to do that". This almost sounds like a parallel task handling these data, maybe via a separate Producer/Consumer design. You, of course, are in the best position to determine "What" you want to do, or "What" you have to do. Try to get that clearly delineated before getting lost in the Weeds of "how" you do that -- therein lies Spaghetti Code
(and with its color-coded Wires, LabVIEW can make awesome Spaghetti).
Bob Schor