LabVIEW


Passing and retrieving data between queues (QMH)

Solved!
Go to solution

I have a QMH-based program with a queue dedicated to each hardware peripheral in my system (for the most part). In one instance I have a QMH loop (let's say LoopA) that needs a piece of data from some serial hardware that is controlled/read from a different QMH loop (LoopB). 

 

What I have been doing thus far is enqueuing a message from LoopA to LoopB to get the data. LoopB gets the data and posts it to a global variable (GV), while LoopA waits a fixed amount of time and then reads the data from the GV. Is there a better way that follows LabVIEW dataflow conventions? First thing that comes to mind is semaphores, but that feels clunky.

 

Edit: I suppose I could have LoopA finish handling its message, then LoopB could enqueue a message to LoopA once it retrieves the data from the hardware, and LoopA could handle the message with the data from LoopB, but that doesn't seem very efficient.

Thanks

JP

Message 1 of 7

You can, e.g., send it back with a Notifier; they work very similarly to a one-element queue. Queue up the request and wait on a Notifier; the consumer loop reads the value and sends the notification with the value. That way LoopA won't have to wait a fixed amount of time.
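LabVIEW is graphical, so here's only a rough Python analogue of the idea (the names `requests`, `loop_a`, `loop_b`, and the value 42 are all invented for illustration): a one-slot queue stands in for the Notifier, and the requester blocks on it with a timeout instead of waiting a fixed time.

```python
import threading
import queue

# "requests" plays the role of LoopB's message queue.
requests = queue.Queue()

def loop_b():
    # Consumer loop: handle one "read hardware" request, then
    # post the value on the supplied notifier (like Send Notification).
    msg, notifier = requests.get()
    if msg == "read hardware":
        notifier.put(42)

def loop_a():
    # Requester: enqueue the request together with the notifier,
    # then block like Wait on Notification, with a timeout.
    notifier = queue.Queue(maxsize=1)  # one-slot queue ~ Notifier
    requests.put(("read hardware", notifier))
    return notifier.get(timeout=5.0)

t = threading.Thread(target=loop_b)
t.start()
value = loop_a()
t.join()
```

The key point is the same as in LabVIEW: the requester resumes as soon as the value arrives, rather than sleeping a fixed interval and hoping the global has been updated.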

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 2 of 7

@JonathanPost wrote:

Edit: I suppose I could have LoopA finish handling its message, then LoopB could enqueue a message to LoopA once it retrieves the data from the hardware, and LoopA could handle the message with the data from LoopB, but that doesn't seem very efficient.

Something like that would probably be my approach. At first glance it can seem inefficient, but it's more in the spirit of the QMH design pattern.

 

Your other proposal involves waiting for some amount of time after requesting data before checking a global variable. Here are a couple of downsides and inefficiencies associated with that:

 

- LoopA gets stuck during that delay time. No other incoming messages can be processed until the waiting is over. It's also an inelegant solution because you are inserting "synchronous" behavior into a QMH design pattern that's meant to operate asynchronously.

 

- After waiting, how do you know whether you waited long enough?  Is the global value you read old or new?  How will you verify?  If you find that you retrieved an old value, then what will you do?   Wait some more?  When do you give up?

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 3 of 7
Solution
Accepted by topic author JonathanPost

@JonathanPost wrote:

I suppose I could have LoopA finish handling its message, then LoopB could enqueue a message to LoopA once it retrieves the data from the hardware, and LoopA could handle the message with the data from LoopB, but that doesn't seem very efficient.

 


We do this at my company and it's not at all inefficient.

 

We use a method that first generates a temporary one-shot queue name. To make this more useful, it's the name of the calling VI with a globally incrementing number attached to the end, letting us know both the order in which temporary queues were made and the origin VI of those queues.

 

This temporary queue name, along with the other needed data, is sent to the other queue as a cluster.

 

The sending VI then immediately attempts to dequeue from a queue with the temporary name, with a long but non-infinite timeout (30 s usually, occasionally extended if longer delays are possible). The dequeued data is always the expected return data payload plus an error cluster. If the reply doesn't arrive within the timeout period, the sender creates a detailed error message saying what it enqueued, to which queue, and at what time.

 

Any message handler that can receive these messages is set to always send a reply unless the temporary queue name is somehow blank. Any incoming error, other than an error generating the temporary queue reference itself, gets sent out on that temporary queue as data to be handled as desired by the recipient. It may also be handled locally, but it is always sent back to the original message sender.

 

After getting a single message on the temporary queue, the queue is destroyed so the reference isn't left open.
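Again, LabVIEW is graphical, so the steps above can only be sketched in pseudocode-ish Python. Everything here is invented for illustration: a dict stands in for LabVIEW's named-queue lookup (Obtain Queue / Release Queue), a global counter gives each temporary queue its unique ordered name, and the 30 s timeout mirrors the one described in the post.

```python
import itertools
import queue
import threading

_counter = itertools.count(1)
_named_queues = {}

def obtain_queue(name):
    # Like Obtain Queue: create-or-look-up a queue by name.
    return _named_queues.setdefault(name, queue.Queue())

def release_queue(name):
    # Like Release Queue: destroy so the reference isn't left open.
    _named_queues.pop(name, None)

def handler(request_queue):
    # Message handler: always reply on the temporary queue unless
    # its name is somehow blank.
    payload, reply_name = request_queue.get()
    if reply_name:
        # Reply is always (data payload, error cluster).
        obtain_queue(reply_name).put(("measured:" + payload, None))

def request(request_queue, payload, caller="LoopA"):
    # Temporary queue name = calling VI name + incrementing number.
    reply_name = f"{caller}_{next(_counter)}"
    reply_queue = obtain_queue(reply_name)
    request_queue.put((payload, reply_name))
    try:
        data, error = reply_queue.get(timeout=30.0)  # long but non-infinite
    except queue.Empty:
        raise TimeoutError(f"no reply to {payload!r} on {reply_name}")
    finally:
        release_queue(reply_name)  # one-shot: destroy after a single reply
    return data, error

loop_b = queue.Queue()
t = threading.Thread(target=handler, args=(loop_b,))
t.start()
data, error = request(loop_b, "voltage")
t.join()
```

The one-shot queue doubles as both the reply channel and a per-request correlation ID, which is why no extra bookkeeping is needed to match replies to requests.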

Message 4 of 7

I've never considered temporary queues like this - what a great idea! Thanks for the tip. 

 

Also thanks Kevin and Yamaeda for the responses. 

Message 5 of 7

Just a little food for thought...

 

Kyle97330's suggestion of using a temporary queue solves the second of the two issues I raised, but not the first. This is not to say that it's *wrong*, only that it can have side effects that you may not want.

 

It sets up a temporary synchronous relationship between the QMH loops because one loop is waiting for either a reply or a 30s timeout.  This *can* be fine in a number of circumstances, but less fine in a number of others.  It depends on the structure and duties of your requesting loop.  Do you *need* to get an answer back (or a timeout) before allowing the requester to respond to other QMH messages?  Is it ok to get stuck for up to 30s?

 

A variation is that instead of creating a new temp queue to send along with the request, just send your requester's normal message queue reference as the "return address".  (And perhaps also the message string you want the receiver to use in its response).  Then you don't get stuck waiting, you just receive the reply whenever it happens to arrive.

This can introduce its own set of problems. Your requester loop must now be written in a way that accommodates this kind of async request-response, in which 0 to N other messages might get processed between the request and the response. That can sometimes be difficult to wedge into the logic.
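To make the trade-off concrete, here's a hypothetical Python analogue of the "return address" variation (the names `loop_a`, `loop_b`, `"Data Reply"`, and the value 42 are all invented): the requester sends its own message queue plus the message string the replier should use, then keeps servicing its normal messages until the reply shows up as an ordinary message.

```python
import queue
import threading

loop_a = queue.Queue()  # the requester's own QMH message queue
loop_b = queue.Queue()  # the hardware loop's message queue

def loop_b_handler():
    # Reply arrives back as an ordinary message on the supplied
    # "return address" queue, tagged with the requested message string.
    msg, reply_to, reply_msg = loop_b.get()
    if msg == "read hardware":
        reply_to.put((reply_msg, 42))

t = threading.Thread(target=loop_b_handler)
t.start()

loop_a.put(("Other Work", None))                    # unrelated message already pending
loop_b.put(("read hardware", loop_a, "Data Reply"))  # request with return address

result = None
while result is None:
    # LoopA's normal message loop: 0..N other messages may be
    # processed before the reply arrives.
    msg, data = loop_a.get(timeout=5.0)
    if msg == "Data Reply":
        result = data
    elif msg == "Other Work":
        pass  # LoopA stays responsive in the meantime
t.join()
```

Note how the "Other Work" message is handled before the reply: the requester never blocks on a dedicated reply channel, which is exactly what makes the pattern asynchronous and also what forces the state logic to tolerate interleaved messages.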

 

Again, just food for thought.  Design patterns aren't sacrosanct.  It's usually best to "go with the flow" of their intended usage, but there *are* times it pays to make an exception.  Dunno for sure what'll be best for you, just wanted to give you a little more context to chew on.

 

 

-Kevin P

 

 

Message 6 of 7

Thanks for the follow up. Fortunately in my situation a little wait time is fine with a timeout condition also being acceptable.

 

I also like your simplified suggestion of sending the requester loop's queue ref as part of the request payload, but in this case I need to get the requested data back before my original loop completes its iteration. (I know there would be a way to break up the code a bit more so I could handle it with the existing queues, but that's what I'd like to avoid, hence the global variable.)

Message 7 of 7