
RT FIFO interlock inside Timed Sequence

Hi everyone,

 

I found this weird behavior while debugging my RT application, so I thought I would share it in case someone has already encountered it.

Here is an example where, inside a single Timed Sequence, I write an element to an RT FIFO and, in parallel, wait indefinitely for that same element on the same FIFO.

The two following snippets, while visually identical and configured exactly the same, behave completely differently.

 

This one executes instantly:

raphschru_3-1716808847627.png

 

While this one hangs forever (blocked at RT FIFO Read):

raphschru_4-1716808901835.png

 

The only difference lies in the firing order of the nodes, as reported by this method:

raphschru_2-1716808726792.png

In the first case, the "Write" function was added to the diagram first, then the "Read" function.

In the second case, the "Read" function was added first, then the "Write" function.

 

In the first case, we write an element to the FIFO and immediately read it. Everything is fine.

In the second case, we start by waiting for an element on the FIFO, which is not there yet. Strangely, nothing can execute in parallel: when debugging, the "Write" simply never executes, for no apparent reason.
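To make the failure mode concrete in text form, here is a rough Python sketch (the queue and the spin loop are only stand-ins for the RT FIFO and the "polling" read, not a claim about how LabVIEW works internally): once execution is serialized, a read node that fires first with an infinite timeout starves the write node forever.

```python
from collections import deque

fifo = deque(maxlen=1)          # stand-in for a 1-element RT FIFO

def write_node():
    fifo.append(42)
    print("Write executed")

def read_node():
    while not fifo:             # "polling" read with timeout = -1
        pass                    # spins; nothing else in this thread can run
    print("Read got", fifo.popleft())

# Firing order of the first snippet: Write, then Read -> finishes instantly.
for node in (write_node, read_node):
    node()

# Firing order of the second snippet: Read, then Write -> read_node never
# returns, so write_node never fires and the "diagram" hangs.
# for node in (read_node, write_node):
#     node()
```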

 

My question is: is this expected behavior of RT FIFOs inside timed structures? (It does not happen without a Timed Sequence.)

To me, it seems to break the fundamental principle of data flow...

 

As a side note, you can make this (simple) code run without hanging by configuring the RT FIFO read mode to "blocking" instead of "polling".

However, in my real RT application (with around 10 Timed Sequences running in parallel on a dual-core PXI target), the hang still happens even though all my RT FIFOs are configured as "blocking"...

 

Attached is my sample code to reproduce the behavior described above. There is also a script that gives you the firing order of the nodes inside the active Timed Sequence.

 

Regards,

Raphaël.

Message 1 of 8

EDIT:

 

After further tests, my simple example also hangs with a blocking RT FIFO when executed on an RT Target (works fine on "My Computer" though). I have tested with my PXIe-8821 (PharLap, x86) and LV2016-32bit.

 

Here is an updated version of my example project with an added PXI target and an improved script to get the firing order.

Message 2 of 8

I will preface this comment by admitting that I don't have experience with LabVIEW RT, but I do have quite a bit with LabVIEW FPGA.

 

That being said, why would you write to and read from the same FIFO in the same loop? It doesn't seem to make much sense, or to be an expected use of the RT FIFO.

And more to the point, your FIFO read has a timeout of -1, "to wait indefinitely". So that's exactly what the FIFO Read does, and why your VI hangs.

 

Edit: If you want your code to run the way I think you expect it to, you should wire a sensible value (e.g. 10 ms) to the timeout input.
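As a rough Python analogy (queue.Queue standing in for the RT FIFO), a finite timeout turns the infinite wait into a case you can handle:

```python
import queue

fifo = queue.Queue()            # stand-in for the RT FIFO

# With a finite timeout the read gives up instead of blocking forever,
# analogous to wiring 10 ms to the RT FIFO Read timeout input and
# checking its "timed out?" output.
try:
    element = fifo.get(timeout=0.010)   # 10 ms
except queue.Empty:
    element = None                      # "timed out?" = TRUE
print("Read result:", element)
```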

Message 3 of 8

The problem is likely that you do not have the same multithreading going on inside a timed sequence or loop as you are used to on your LabVIEW host system. That it is Pharlap ETS doesn't help either.

 

The code itself doesn't make much sense either. Using a FIFO to pass data around inside the same structure is at best an unnecessary roundabout and more likely a performance problem compared to a simple wire.

Rolf Kalbermatter
My Blog
Message 4 of 8

Thanks for your responses; this gives me a chance to explain my use case in more detail.

 

As you can imagine, this is an extremely simplified version of what happens in my real RT application. This is the minimal code to reproduce the problem.

 

This is actually part of the initialization code of one of our drivers that requires synchronization between 2 parallel subtasks, which are asynchronously launched in place of the Read and Write in my simple example. Both subtasks can talk to each other via a generic request/reply mechanism implemented with RT FIFOs.
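In textual pseudo-code, the request/reply idea looks roughly like this (a Python sketch with threads and queues standing in for the asynchronously launched subtasks and the RT FIFOs; all names are illustrative):

```python
import threading
import queue

request_fifo = queue.Queue()    # stand-ins for the two RT FIFOs
reply_fifo = queue.Queue()

def write_subtask():
    request_fifo.put({"cmd": "init", "data": 123})  # send the request...
    print("reply:", reply_fifo.get())               # ...and wait for the reply

def read_subtask():
    request = request_fifo.get()                    # wait for the request
    reply_fifo.put({"status": "ok", "echo": request["data"]})

tasks = [threading.Thread(target=t) for t in (write_subtask, read_subtask)]
for t in tasks: t.start()
for t in tasks: t.join()
```

With real parallelism, this completes regardless of which subtask starts first, which is what I expected to happen inside the Timed Sequence as well.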

 

RT FIFOs should be perfect for this job, as they are just like Queues but more efficient and deterministic.

 

The -1 in the "Read" subtask is there because I want to wait until the "Write" subtask sends me a message, and I don't know how long that will take.

 

When waiting for a message with a non-zero timeout, I expect the "RT FIFO Read" function to enter a sort of "sleep mode" and let other parts of the code execute in parallel. Here it seems to block the whole execution inside the Timed Sequence. Again, this does not happen without a Timed Sequence.

Message 5 of 8

@raphschru wrote:

As you can imagine, this is an extremely simplified version of what happens in my real RT application. [...] When waiting for a message with a non-zero timeout, I expect the "RT FIFO Read" function to enter a sort of "sleep mode" and let other parts of the code execute in parallel. Here it seems to block the whole execution inside the Timed Sequence.


Regardless of simplification or not, why do both in the same loop?

 

If you need the same loop to handle its own generated value, you probably don't need queues; you need a pipelining pattern.

 

The thing you're not accounting for is that timed loops serialize execution into a single execution thread, and execution order is absolutely dependent on the order things get compiled in. Either you don't need the Timed structure, or you don't need the queue and should just handle things in the correct order anyway.
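You can reproduce the effect with a single-worker thread pool as a rough analogy (a plain Python queue stands in for the RT FIFO): with only one worker, a blocking read submitted first wedges the pool so the write never gets scheduled.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

fifo = queue.Queue()

# One worker = serialized execution, like a diagram pinned to a single
# thread inside a timed structure.
with ThreadPoolExecutor(max_workers=1) as pool:
    pool.submit(fifo.put, "hello")            # "Write" fired first: fine
    print(pool.submit(fifo.get).result())     # "Read" fired second: gets it
    # Swap the two submissions and fifo.get() occupies the only worker
    # forever, so fifo.put() can never run: the pool deadlocks.
```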

 

 

~ The wizard formerly known as DerrickB ~
Gradatim Ferociter
Message 6 of 8

@IlluminatedG wrote:
Regardless of simplification or not, why do both in the same loop?

This is a timed sequence, not a timed loop. The initialization of our drivers belongs to the Main task, which must have maximal priority, which is why it is enclosed in a timed sequence. Then, each driver's task has a loop, also within its own Timed Sequence. This allows us to adjust priorities between the various drivers.

 

 


@IlluminatedG wrote:

If you need the same loop to handle its own generated value, you probably don't need queues; you need a pipelining pattern.


As explained earlier, both "Read" and "Write" in my simple example actually stand for the dynamic launching of 2 different asynchronous tasks, which must talk to each other via a request/reply mechanism when they start executing. The first task accesses some resources that the other does not, which is why it sends its data to the other at the beginning. Pipelining would be impossible since both tasks are asynchronous.

 

What I ended up doing is initializing the "Write" task first, then having it send its data without waiting for a reply, then initializing the "Read" task. This is a little less elegant, because my reply mechanism has an error handling system that traces back the "request chain" to better detect and debug errors.
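In textual form, the reordering boils down to something like this (an illustrative Python sketch, with a queue standing in for the request RT FIFO):

```python
import queue

fifo = queue.Queue()            # stand-in for the request RT FIFO

# 1. Initialize the "Write" task first; it sends its data without
#    waiting for a reply.
fifo.put("init-data")

# 2. Only then launch the "Read" task: its first read finds the
#    message already there, so nothing ever blocks the frame.
print(fifo.get_nowait())
```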

 

Another workaround could be to enclose either the "Write" or the "Read" task in a nested Timed Structure, which is not very elegant either...

 

 


@IlluminatedG wrote:

The thing you're not accounting for is that timed loops serialize execution into a single execution thread, and execution order is absolutely dependent on the order things get compiled in.


I have noticed, but to me, the "RT FIFO Read" that waits for an element should allow other code to execute in the same thread while it sleeps.

 

Well, I just hit a limitation of Timed Structures, nothing that I can't circumvent, but finding it in my whole application was a real nightmare...

 

 

Message 7 of 8

@raphschru wrote:

I have noticed, but to me, the "RT FIFO Read" that waits for an element should allow other code to execute in the same thread while it sleeps.


That is indeed a bit of a limitation of the RT FIFO node, it seems. Before LabVIEW had native OS multithreading support (which arrived in LabVIEW 5.0, I think), it already had its own cooperative multithreading. Nodes that could suspend code execution were programmed behind the scenes in a way that let the LabVIEW scheduler put the corresponding clump onto a wait queue and execute other clumps of the diagram. The wait queue element could be awakened either after a certain time or through an event signaled by a callback routine. Once signaled, the element was put back into the clump queue, ready for execution. Not as neat and seamless as native OS threads, but quite efficient.

Any function with timeout or wait functionality was marked as async on principle, to allow the LabVIEW clumper to arrange the code clumps on the diagram so that these functions formed a valid clump border. When LabVIEW later learned to use native OS threads, this functionality was retained and integrated pretty seamlessly with the new native OS multithreading.
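As a rough illustration of what an async-capable node buys you (Python's asyncio here is only a stand-in for LabVIEW's cooperative clumps, not a claim about the actual implementation): an awaited read parks the waiting clump so another clump can run, even on a single thread.

```python
import asyncio

async def read_clump(fifo: asyncio.Queue):
    # An awaited read is a valid "clump border": the scheduler parks this
    # clump and runs other clumps until an element arrives.
    print("read got", await fifo.get())

async def write_clump(fifo: asyncio.Queue):
    await fifo.put(1)               # signals and wakes the parked reader

async def main():
    fifo = asyncio.Queue()
    # The reader is scheduled first, yet there is no deadlock, even on a
    # single thread, because waiting yields instead of blocking.
    await asyncio.gather(read_clump(fifo), write_clump(fifo))

asyncio.run(main())
```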

 

For VISA functions there was (and still is) an option to disable that async behavior, since VISA can interfere in complicated ways with LabVIEW's OS multithreading, badly hurting performance when the node uses async functionality and LabVIEW also tries to use OS multithreading on that clump.

 

It would seem that the RT FIFO Read node does not implement async operation, or is for some reason not marked as async capable. Since the RT FIFO functionality was introduced long after LabVIEW gained native OS multithreading support, it may have been a deliberate choice at the time not to implement the more complex async interface underneath and to simply let the native multithreading do its magic. That works beautifully until multithreading suddenly is not available, because of the timed-frame limitation, or because someone disabled multithreading in the LabVIEW configuration file. It is still possible to force LabVIEW into single-threaded operation through the configuration file, although I wouldn't really know why anyone would want to do that.

 

That all said, using a Read and a Write function on the same object in the same diagram is a very strange idea. Even if I did that for some strange reason, I would always make sure that the Write comes sequentially before the Read, simply because it feels right. Trusting inherent multithreading is one thing, but programming defensively is another, and I prefer the latter anytime I can think of a way to do so.

Rolf Kalbermatter
My Blog
Message 8 of 8