04-06-2017 11:50 AM
I have some semi-complicated LabVIEW code that I use to read the data from 10 custom-built DAQs over a USB connection. To try to speed up the code a little, I am trying to read the data from each DAQ in a parallel for loop; however, I can't seem to get the Replace Array Subset function to work as I think it should.
LabVIEW is reporting a "dependence between loop iterations" error, but I am not sure how to fix it. Here is the code I tried to implement.
The data from the spreadsheet VI comes out as a 1D, 4-element array. If I change the shift registers to tunnels, the code executes just fine; however, I only get the values from the last loop iteration.
If it helps here is the implementation in a for loop.
Thanks in advance,
04-06-2017 11:56 AM
Did you try using the "Concatenating" mode of the output tunnel?
Ben
04-06-2017 12:23 PM
Ben,
I did. If I set the tunnel mode to concatenating, I can run the code without errors, but the data in the loop is not updated, so my array comes out as all zeros. Again, if I use a simple tunnel, I get results from only one iteration of the loop.
Best,
04-06-2017 12:29 PM
I am working TOOOO hard trying to imagine what your code looks like.
Share a scaled-down version, or at least an image.
Ben
04-06-2017 12:35 PM
Ben,
Here is the code without the parallel loops. Ignore the trigger and plotting portion, the USB input data is the task on the bottom. All I am trying to do is to get the data off the VISA task into a 1D array.
Best,
04-06-2017 12:50 PM - edited 04-06-2017 12:53 PM
@Austin-Downey wrote:
LabVIEW is reporting a "dependence between loop iterations" error, but I am not sure how to fix it. Here is the code I tried to implement.
The data from the spreadsheet VI comes out as a 1D, 4-element array. If I change the shift registers to tunnels, the code executes just fine; however, I only get the values from the last loop iteration.
There is no "tunnel mode" for shift registers.
You are getting the "dependency" error because any parallelization would potentially change the outcome. There are a few transforms that the compiler knows how to work around, but I guess this is not one of them. Once parallelized, there will be several instances of the loop code that can execute in any order (and in parallel, of course), so replacing the same element from two different parallel instances would result in a different outcome, depending on which one executes last.
The "fix" is not to parallelize; it would make no difference. My bet is that it might even run faster without parallelization.
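To make the hazard concrete, here is a minimal text-language analogy (a Python sketch; `instance` and the values are hypothetical stand-ins, not anything from the actual VI): two parallel loop instances replacing the same array element, which is what the compiler must assume whenever it cannot prove the replaced indices are independent.

```python
import threading

shared = [0.0]

def instance(value):
    shared[0] = value  # last writer wins; outcome depends on scheduling

threads = [threading.Thread(target=instance, args=(v,)) for v in (1.0, 2.0)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# shared[0] is now either 1.0 or 2.0, depending on which instance ran last;
# that scheduling-dependent result is the nondeterminism the compiler refuses.
```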
04-06-2017 12:56 PM
The reason I was having trouble imagining it is that you were not doing what I suggested.
Ben
04-06-2017 12:58 PM - edited 04-06-2017 01:01 PM
@Austin-Downey wrote:
Here is the code without the parallel loops.
As has been mentioned, use a concatenating output tunnel (see picture). Now you can even parallelize without breaking the VI. (... but as I said, it probably makes no difference and could even slow you down)
(Cleanup needed: you still have a greedy loop. Also, your sequence structure has no purpose once you implement correct data dependencies, e.g. wire the error across the greedy loop.)
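For what it's worth, the concatenating-tunnel pattern has a rough text-language analogy (a Python sketch; `read_daq` is a hypothetical stand-in for one VISA read, not the real driver call): each iteration emits its own small array, and the outputs are joined in iteration order, so there is no cross-iteration dependency to block parallelization.

```python
from concurrent.futures import ThreadPoolExecutor

def read_daq(i):
    # hypothetical stand-in for one VISA read returning 4 values
    return [float(i)] * 4

# Each iteration produces its own chunk; map() gathers the results in index
# order no matter which worker finished first, mirroring how a concatenating
# output tunnel assembles the iterations of a parallel loop.
with ThreadPoolExecutor() as pool:
    chunks = list(pool.map(read_daq, range(10)))

data = [x for chunk in chunks for x in chunk]  # 40 elements, i = 0..9 order
```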
04-06-2017 01:21 PM
altenbach,
Thanks, that makes more sense. You are correct; this did not speed up my code much at all. I think I may need to increase the baud rate coming from my DAQs to increase the speed.
The flat sequence is needed for the external triggers. Maybe I could merge the last two frames, but I need the trigger off as soon as possible to prevent it from allowing the next cycle to begin. Maybe I should start a new thread about optimizing the code, but I have a few other things to implement first.
04-06-2017 01:42 PM - edited 04-06-2017 01:43 PM
I don't know if it makes a difference to the compiler, but the concatenating output tunnel in the previous example does NOT know the array size coming from each iteration. It would be worth ensuring that each iteration outputs a 4-element array, even if fewer or more elements are returned or an error occurs.
Here is what I would try:
Since we are replacing a four-element array with new data, the output will always be a 4-element array. Similarly, if an error occurs in one of the reads (e.g. a timeout or other problem), you'll just get 4 NaNs. This seems much more deterministic, and the final array size can be determined before the loop runs.
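In text form, the idea is a pre-sized result that each iteration writes into (a Python sketch; `read_daq`, `N`, and `WIDTH` are assumed stand-ins for the real setup): the output length is fixed before the loop runs, and a failed or short read just leaves NaNs in its slot.

```python
import math

N, WIDTH = 10, 4  # assumed: 10 DAQs, 4 values per read

def read_daq(i):
    # hypothetical read; a real one could time out or return short data
    return [float(i)] * WIDTH

# Pre-size the result so the final array length never depends on what each
# individual read returns.
result = [math.nan] * (N * WIDTH)
for i in range(N):
    try:
        values = read_daq(i)[:WIDTH]            # never write past the slot
        result[i * WIDTH : i * WIDTH + len(values)] = values
    except Exception:
        pass  # slot i stays all-NaN; the array size is unchanged
```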