10-12-2023 07:39 AM
The problem:
Hello everyone! I am having an issue with a DAQ program that takes a logged event from a waveform chart and saves it to a .csv file for later review. (I rebuild it with a different LabVIEW program.) The issue is that the CSV file does not always contain the first three lines. These first three lines are written with a Write To Spreadsheet File VI, and the file is then appended to with an Export Waveforms To Spreadsheet File VI. Like I said, every now and then I get the waveform data but not the first three lines. Until this morning the file reference was being converted back to a path so that the Write To Spreadsheet File VI could be used; I just restructured it to avoid that conversion by using a reference from the drive/file verifier subVI.
-Running on a Windows 7 computer with low free space on the C: drive
-I checked whether the "header" lines were executing second, but they are not at the bottom of the file either. Maybe I need to force execution order better?
The nitty gritty:
In the snippet below I am waiting for a few things to line up, then the case goes true. I take the "save" name, remove bad characters in the "name check" subVI, then verify a user path in the "drive/file verifier" subVI. (If the user path is no good, it falls back to a default path and filing is handled later at program close.) After all of that happens without error, we enter the error conditional structure.
The string array "Plot names and configs" comes directly from the waveform chart so that I can recreate the chart when it is pulled up later. The "spikes per channel" value is inserted into this array and is basically extra information for the user.
The waveform chart history is parsed until a timestamp is reached that predates the "pre-quench record", and this value is used to determine how much data to send to the CSV file.
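Not LabVIEW, obviously, but here is roughly what the two writes are supposed to do, sketched in Python pseudocode (the function names, file name, and row contents are made up for illustration):

```python
import csv

def write_header(path, plot_names_and_configs):
    # stand-in for Write To Spreadsheet File with append = F:
    # creates/truncates the file and writes the three "header" lines
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(plot_names_and_configs)

def append_waveforms(path, waveform_rows):
    # stand-in for Export Waveforms To Spreadsheet File with append = T:
    # adds the chart history below whatever is already in the file
    with open(path, "a", newline="") as f:
        csv.writer(f).writerows(waveform_rows)

write_header("quench_2023-10-12.csv",
             [["plot names"], ["plot configs"], ["spikes per channel"]])
append_waveforms("quench_2023-10-12.csv",
                 [[0.000, 1.2, 3.4], [0.001, 1.3, 3.5]])
```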
10-12-2023 08:28 AM
Sigh. Even though it is a Snippet, I cannot get it to open in LabVIEW, so I can't "study" it as closely as I'd like.
But I think I see the problem. Your Case Structure has a Write Delimited Spreadsheet function and another function that I cannot identify (I don't see it in LabVIEW 2019, but I'm guessing it is a Waveform Write function) which, according to the Third Law of Data Flow, run "in parallel", meaning there is no way to know (or determine) which runs first. When the Write Delimited goes first, all is OK, but when the Write Waveform goes first, you have a bad file.
Judicious use of the Error Line (running it into the Write Delimited Spreadsheet, and from the output of this function into the Write Waveform) should completely cure this problem. (Almost) always wire and use the Error Line to serialize your LabVIEW code (and to warn you of errors, of course).
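If it helps to see the idea outside of G, here it is in rough Python pseudocode (the function names and file path are invented): two un-ordered writes race, while chaining the error value forces one to finish before the other starts.

```python
import threading

PATH = "quench_log.csv"   # made-up name

def write_header(path, err=None):
    # stand-in for Write Delimited Spreadsheet (append = F): truncates, writes 3 lines
    with open(path, "w") as f:
        f.write("plot names\nplot configs\nspikes per channel\n")
    return None   # "error out"

def append_waveforms(path, err=None):
    # stand-in for Export Waveforms To Spreadsheet File (append = T)
    with open(path, "a") as f:
        f.write("0.000,1.2,3.4\n")
    return None

# Un-ordered, like two unwired nodes in the Case Structure: either may run first.
t1 = threading.Thread(target=write_header, args=(PATH,))
t2 = threading.Thread(target=append_waveforms, args=(PATH,))
t1.start(); t2.start(); t1.join(); t2.join()
# Depending on which one wins, the file may end up header-only or otherwise
# mangled -- there is no way to guarantee the good layout.

# Serialized, like running the error wire through both functions:
err = write_header(PATH)        # "error out" of the first write...
append_waveforms(PATH, err)     # ...wired into "error in" of the second
```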
Bob Schor
10-12-2023 09:09 AM
Hi Bob,
@Bob_Schor wrote:
another function that I cannot identify (I don't see it in LabVIEW 2019, but I'm guessing it is a Waveform Write function)
Export Waveforms to Spreadsheet File.vi, found in the Waveform palette - also in LV2019…
10-12-2023 09:54 AM
Hello Bob!
This is a snippet from LabVIEW 2012. It was saved by LV as a .PNG, which may be the issue.
Great catch! I had suspected this might be the case, and it IS a problem, but I don't know if it is THE problem. I had considered the lack of data flow while posting. If the waveform export VI (append = T) ran first and the write to spreadsheet VI (append = F) ran second, I would expect the file to have JUST the three lines of text that make up the "header". I did a quick copy-and-paste test and that is what it did. Instead, I am only getting the export output without the write to spreadsheet output. The export is set to append = T. It's like it isn't running that part of the code.
I failed to mention that on the file without the plot name and color "header", the data was also wonky. I had manually added those three lines to the top of the CSV and got some weird data. The program uses an event structure in the main loop that handles data collection and production; a separate consumer loop charts and displays the data. I really need the data producer in its own loop for a more robust program in case an event gets hung.
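For what it's worth, the split I mean looks roughly like this in Python pseudocode (not my actual LabVIEW code; names and data are made up): the producer only enqueues samples, so a hung display event can't stall acquisition.

```python
import queue
import threading
import time

data_q = queue.Queue()

def producer(n_samples=5):
    # acquisition/event loop: collect data and hand it off immediately
    for i in range(n_samples):
        data_q.put((time.time(), i))   # would be DAQ reads in the real program
    data_q.put(None)                   # sentinel: tell the consumer to stop

def consumer():
    # charting/display loop: works through the queue at its own pace
    while True:
        item = data_q.get()
        if item is None:
            break
        print("charting sample", item)

threading.Thread(target=producer).start()
consumer()
```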
Thank you again for your help!
10-12-2023 10:02 AM
@MontgomeryScott wrote:
These first three lines are written with a Write To Spreadsheet File VI, and the file is then appended to with an Export Waveforms To Spreadsheet File VI.
As mentioned by Bob, there is no "then", because you don't define the execution order and create a glaring race condition instead. Both of your file IO operations occur in parallel, but the outcome depends on the order. You were lucky to discover the problem, because it might only have shown up later, e.g. after a recompile.
For some reason you are still using the legacy version; I recommend upgrading to Write Delimited Spreadsheet instead. Instead of branching the filename (arrow in the picture below) to the two operations, you can e.g. wire the filename output of the first to the next file IO function to guarantee execution order.
10-12-2023 10:18 AM
As I said, since you use the legacy VI, you don't even have an error wire to force execution order. If we assume that this one should execute first, you can use its path output to wire to the next function, e.g. as follows:
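In Python-like pseudocode (not the actual block diagram; names are invented), the idea is simply that the second write depends on a value the first one returns, so it cannot start early:

```python
def verify_drive_file(user_path):
    # stand-in for the "drive/file verifier" subVI: returns the path actually used
    return user_path or "default_path.csv"

def write_header(path):
    # stand-in for the legacy Write To Spreadsheet File; it has no error terminals,
    # but it does return a path, and that output can be chained onward
    with open(path, "w") as f:
        f.write("plot names\nplot configs\nspikes per channel\n")
    return path

def append_waveforms(path):
    # stand-in for Export Waveforms To Spreadsheet File (append = T)
    with open(path, "a") as f:
        f.write("0.000,1.2,3.4\n")
    return path

# Chaining instead of branching: append_waveforms() needs write_header()'s output,
# so it cannot start until the header lines are on disk.
p = verify_drive_file("quench_log.csv")
append_waveforms(write_header(p))
```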
Still, this entire thing has some "code smell". For example, what file are you closing later? (I don't have any of the subVIs here.)
10-12-2023 10:42 AM
"CODE SMELL" 😄
In the file/drive verifier I create the .csv file based on user input and an appended timestamp. The refnum from this open/create is what I am closing. I try to close any refnums to keep memory leaks down.
As far as using the legacy VI goes, I have LabVIEW 2012. I added a case structure around the first write and routed error wires through it to force execution order.
When you say "race condition" I think about flip-flop circuits going back and forth as fast as possible. By "in parallel" I suspect you mean that whichever one finishes first frees the file resources and then the second executes? Or would this race condition cause file output errors like I am seeing?
Thanks!
10-12-2023 10:58 AM - edited 10-12-2023 11:04 AM
A race condition in programming has a specific meaning.
The Write To Spreadsheet File VI opens and closes the file, so if that same file has been opened by another process, this function will fail (and unless you use the newer version, there is no error output!). Assuming that the delimited table can be written, the file will be closed and the next function will have access. If that same file is being held open by some earlier process, all bets are off. That's why I was curious about the placement of the file close function. I have no idea if that is the same or a different file. If it is the same, it should of course be closed before the operations in question.
If everything deals with the same file, I would probably keep the file open for the duration of the process and use low-level file IO for all operations. Much more efficient because constantly opening and closing files is expensive.
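In rough Python terms (not LabVIEW; the file name and rows are invented), the low-level approach amounts to one open, many writes, one close:

```python
# one Open/Create/Replace File ...
with open("quench_log.csv", "w") as f:
    # header lines (plot names, configs, spikes per channel)
    f.write("plot names\nplot configs\nspikes per channel\n")
    # ... many Write to Text File calls through the same open handle ...
    for t, v in [(0.000, 1.2), (0.001, 1.3)]:
        f.write(f"{t},{v}\n")
# ... and one Close File when the with-block ends
```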
@MontgomeryScott wrote:
I added a case structure around the first write and routed error wires through it to force execution order!
So you are abusing a case structure as a glorified sequence frame! Seems overkill! 😄 All you need is the path wire as I showed earlier.
10-12-2023 12:22 PM
Yes I"m abusing the case structure! I don't have the new VI with a throughput and the trick I know is to use a while loop with a directly wired condition (runs one time) or a case structure.I'll look into using the newer version of this (and other) vi's. Not sure if I can stuff newer VI's into LV2012 or not.
Thank you so much for the help. I do a lot of different things and LabVIEW is just one of them. I usually get by with my 5 days of training (Core I and II), and when that fails I ask the pros on the forums. From what you showed me about race conditions, the file may have been manipulated by two functions at once, thus giving crap data.
And finally, YES, it is the same file. I open/create it in a previous subVI, keep it open, and close it when all the writing is finished. From what you said, this is also known as doing it WRONG! I'll open/create and close immediately, then let the write functions do their thing. This output routine runs for a new file every time and is done a few times a day at most, so I won't swap to low-level file IO just yet, but I'll keep it in the back of my head for future datalogging projects. Thanks again!