LabVIEW

Preferred way to handle configuration and runtime data within a program with parallel loops?

What is your preferred way to handle configuration data (settings read from a file at the start of a program) and runtime data (my term for a cluster containing, for example, queue references, file references to logfiles, and so on) that is read but not changed by almost all of my subVIs after initialization?

 

Currently I'm using a typedef'd cluster whose contents are written only at initialization (after parsing a configuration file, creating queues, and opening logfiles). This cluster is then piped into almost every subVI and sub-subVI; these are parallel loops for different tasks. While some of the loops may change the cluster locally (I know that other loops won't see the change), almost all operations on it are reads (like "get the queue reference for the queue to another loop" or "get the IP address that was set in a config file"). If there is something another loop has to know, there are queues to transfer messages (by the way, how many parallel loops and queues is too many? I'm currently at 7).
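To make the pattern concrete for readers who don't think in block diagrams, here is a rough Python analogy (LabVIEW is graphical, so this only mirrors the structure; all names and values are invented, and LabVIEW's dataflow copies don't map 1:1 onto Python's references):

import queue
import threading
from dataclasses import dataclass

@dataclass(frozen=True)            # read-only after initialization, like the cluster
class RuntimeData:
    ip_address: str                # parsed from a config file
    log_path: str                  # logfile opened at startup
    to_logger: queue.Queue         # "queue reference to another loop"
    to_worker: queue.Queue

def worker_loop(rt: RuntimeData):
    # every parallel loop gets the same structure wired in, but only reads it
    msg = rt.to_worker.get()
    rt.to_logger.put(f"{rt.ip_address}: {msg}")

rt = RuntimeData("192.168.0.10", "run.log", queue.Queue(), queue.Queue())
threading.Thread(target=worker_loop, args=(rt,), daemon=True).start()
rt.to_worker.put("measurement done")
print(rt.to_logger.get())          # -> 192.168.0.10: measurement done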

 

This works quite well. But since it's more a habit I started and grew into than something I read anywhere, I'd like to know whether there are better ways.

For example: on every call of a subVI that uses this cluster, as far as I understand, a copy of the cluster will be made. And since I create subVIs quite generously (as soon as the block diagram is larger than the screen, something is going to be turned into a subVI), I guess this implies quite some overhead in CPU and memory usage. Even loops (and the subVIs they contain) running every ten milliseconds will all have this cluster wired into them.

What I also don't like is that the wire often goes into a subVI just to be connected to one of its subVIs. I wouldn't mind being rid of that clutter.

 

I looked at functional globals. But since they require a non-reentrant VI, I fear the loops will sometimes delay each other when trying to read values at the same time (and some of the fast, timed loops in particular must not be late).

 

What is your preferred way of handling something like this?

Message 1 of 5

For a simple application I tend to use Functional Global Variables (FGVs). I tend not to worry about the reentrancy / execution time due to waits, because the time for executing the subVI will be very small (you're only returning a cluster of data, after all, not doing any processing or loops). I guess the only time it would be a problem is if you're calling the FGV inside fast-running loops. If that's the case, why not move the FGV outside of the loop and create a mechanism for updating the configuration inside the loop elsewhere, such as a state machine or a queued 'update configuration' event feeding a shift register that you then read inside your loop?
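As a rough textual sketch of what an FGV boils down to (Python as a stand-in for the graphical code; the lock plays the role of the non-reentrant VI boundary, and all names here are made up):

import threading

_lock = threading.Lock()           # stands in for the non-reentrant VI boundary
_config = {}

def config_fgv(action, value=None):
    """'set' stores a new configuration, 'get' returns the current one."""
    global _config
    with _lock:                    # concurrent callers wait here, briefly
        if action == "set":
            _config = value
        return _config

config_fgv("set", {"ip": "192.168.0.10", "timeout_ms": 500})
print(config_fgv("get")["timeout_ms"])   # -> 500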

 

I think the only situation where that wouldn't work is if you want individual configurations when you're calling a VI multiple times (I'm thinking of spawning clones of VIs for individual channels etc., each with its own configuration).

 

The other option might be to use something like a named notifier or queue to store your configuration data (if you're using a queue, make it a single-element queue and preview rather than dequeue when you want to access the data). This would allow you to access the data from anywhere by obtaining a reference to the queue/notifier by name, without having to wait for previous calls to complete (i.e. it addresses your concern about blocking reads/writes of the configuration data).
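The single-element-queue idea, again as a hedged Python sketch: one slot that many readers can preview without removing the value. Python has no built-in named queues, so a module-level registry stands in for "obtain a reference by name" (that part is my invention):

import threading

_registry = {}
_registry_lock = threading.Lock()

class SingleElementQueue:
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def enqueue(self, value):      # the writer replaces the single element
        with self._lock:
            self._value = value

    def preview(self):             # readers look at the element without dequeuing
        with self._lock:
            return self._value

def obtain_queue(name):            # any loop can look the queue up by name
    with _registry_lock:
        return _registry.setdefault(name, SingleElementQueue())

obtain_queue("config").enqueue({"ip": "192.168.0.10"})
print(obtain_queue("config").preview()["ip"])   # -> 192.168.0.10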

 

These are perhaps not the ideal solutions, but they are a couple that I use, and I'm sure others will have more to say!


LabVIEW Champion, CLA, CLED, CTD
Message 2 of 5

I would tend to manage the data structure by grouping only related elements together and using multiple FGVs for access. Sometimes this is not possible and you really do need a large data set accessible from multiple places. In that case, use a Data Value Reference (DVR). You will wind up with many copies of the DVR, but each one is just an I32, so it's pretty cheap. It does not introduce more risk of race conditions than your current practice of copying all the data, and it also cures your need to pass modified elements around via queues, since there is only one copy of the actual data the DVR points to.
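A loose Python analogy of the DVR idea (in Python every object is already passed by reference, so this only illustrates the access pattern; the comparison to LabVIEW's In Place Element structure is mine, and the names are illustrative):

import threading

class DataValueRef:
    """One copy of the payload; everything else passes this small reference."""
    def __init__(self, data):
        self._data = data
        self._lock = threading.Lock()

    def apply(self, func):
        # roughly what the In Place Element structure gives you: exclusive
        # access to the single copy, with no duplication of the payload
        with self._lock:
            return func(self._data)

dvr = DataValueRef({"samples": list(range(1_000_000))})
another_ref = dvr                  # "copying the DVR" copies only the reference
print(another_ref.apply(lambda d: len(d["samples"])))   # -> 1000000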

 

Any of these methods (FGV, queue, DVR) is an excellent way to reduce data copies. Queues are much faster than FGVs, and DVRs are slightly more efficient than queues. And when your data is a class, access scope can be used to restrict access to class members, optionally preventing the creation of a DVR to private class data.


"Should be" isn't "Is" -Jay
Message 3 of 5

You didn't specify whether you are running on an RT system or not. We have applications with over 60 background tasks running. That is on a PC, so we aren't that limited in memory or CPU, and we have not run into any issues with that many tasks (parallel loops).

 

As for your question on the configuration data, I agree with Jeff that a DVR is a good approach. If you need to control access a bit more, you can create a singleton LVOOP object to store your data and provide the accessor methods to get at it. The class would contain a single copy of the data. This is very similar to an FGV but a bit more flexible, since you are not constrained to a single connector pane.
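Sketched in Python rather than LVOOP (the accessor names are invented; the point is only that the class holds the single copy of the data and, unlike an FGV, you can define as many accessors as you like):

class ConfigSingleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:          # the first call creates the one copy
            cls._instance = super().__new__(cls)
            cls._instance._data = {}
        return cls._instance

    # accessor methods; add as many as you need
    def set_ip(self, ip):
        self._data["ip"] = ip

    def get_ip(self):
        return self._data.get("ip")

ConfigSingleton().set_ip("192.168.0.10")
print(ConfigSingleton().get_ip())          # same instance everywhere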



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 4 of 5

Thanks to all!

 

I guess I'll go with an FGV for the time being. I made some tests (repeatedly reading and then immediately writing the FGV again in a for loop), and it seems that a read plus a write takes on the order of a few microseconds (as far as you can trust QueryPerformanceCounter from kernel32.dll on a normal Windows PC). If that's really all it takes (and you're right, there aren't any calculations involved anyway), I don't think there'll be a problem with one call blocking another for too long.
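For what it's worth, the test looked roughly like this, transcribed into Python (the timer, loop count, and lock-based FGV stand-in are illustrative; the actual test was a LabVIEW for loop around the FGV subVI):

import threading
import time

_lock = threading.Lock()
_value = {}

def fgv(action, data=None):
    global _value
    with _lock:
        if action == "set":
            _value = data
        return _value

N = 100_000
fgv("set", {"ip": "192.168.0.10"})     # initialize before timing ("first call")
start = time.perf_counter()
for _ in range(N):
    fgv("set", fgv("get"))             # one read immediately followed by one write
per_pair = (time.perf_counter() - start) / N
print(f"{per_pair * 1e6:.2f} us per read+write pair")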

One funny thing: I implemented the FGV once with a shift register and a while loop (the "traditional" FGV) and once with a feedback node (no while loop required). For a large number of reads and writes (>100 in a row), the version with the feedback node is about 5 times faster! Even at two iterations the feedback node version is faster. The only exception: for a single call (the typical use case of an FGV), the version with the while loop is faster for some reason. I'll use the version with the feedback node. Not only does the "traditional" FGV feel like an abuse of a while loop 😉, but even better is the possibility of setting the feedback node's initializer terminal to useful values. With this, the FGV can be initialized without "first call" special cases.

 

EDIT:

I found the reason why the single-call feedback node was slower: while the while-loop FGV was already initialized, the feedback node still had to be initialized on the first call. Calling the FGV once before the real timing makes even single calls twice as fast as the while loop version.

Message 5 of 5