Distributed Control & Automation Framework (DCAF)


DCAF Blog: Executing Module Plugins Asynchronously Using the Standard Engine

For the next installment, let's discuss executing module plugins asynchronously from the main engine. Like the last post, I will outline capabilities that exist today, known limitations, and ideas for future improvement. First let me share some more background.

DCAF was designed to execute code sequentially, without blocking execution or memory allocations, in order to maintain a consistent and reliable loop rate. By default, all plugins in an engine execute within a single loop. This provides a lot of value in terms of consistent execution behavior. The main downside is that any one plugin can stall or otherwise affect the entire loop if it doesn't execute reliably.

Most of the plugins designed for the framework keep this in mind. The TDMS Logging plugin for example spawns and manages its own asynchronous logging loop and carefully communicates with it using pointers and RT FIFOs. This can be a lot of work to do right, so a feature was added to the Standard Engine to do this automatically for other modules that may benefit.

Today's Capabilities

Turning this feature on today is pretty simple. The Standard Engine provides unique configuration options for each module plugin that it executes, and the 'Execute Asynchronously?' checkbox is one of those options.

[Image: Asynchronous Execution Checkbox.PNG]

When this box is checked, the engine spawns an additional background loop for each asynchronous plugin and creates the communication FIFOs needed to exchange data between those loops and the main engine. Within the main engine, execution then consists simply of moving data between those communication FIFOs and the engine's main tag bus. This isolates the engine from any delays, memory allocations, or other behavior in the modules running asynchronously.
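To make the data flow concrete, here is a minimal Python sketch of the pattern described above. It is not the LabVIEW implementation; the class and method names (`AsyncPluginHost`, `engine_side_execute`, `plugin_loop_iteration`) are hypothetical, and ordinary queues stand in for the RT FIFOs the Standard Engine creates.

```python
import queue

class AsyncPluginHost:
    """Hosts one plugin behind a pair of single-slot queues, standing in
    for the communication FIFOs between the engine and a background loop.
    All names here are illustrative, not from the DCAF source."""

    def __init__(self, plugin_fn):
        self.to_plugin = queue.Queue(maxsize=1)    # engine -> plugin FIFO
        self.from_plugin = queue.Queue(maxsize=1)  # plugin -> engine FIFO
        self.plugin_fn = plugin_fn

    def engine_side_execute(self, tag_bus):
        """Called once per engine iteration: push the current tag values,
        pull back any results that are ready. Never blocks the main loop."""
        try:
            self.to_plugin.put_nowait(dict(tag_bus))       # send inputs
        except queue.Full:
            pass  # plugin hasn't consumed last cycle's data yet; skip
        try:
            tag_bus.update(self.from_plugin.get_nowait())  # merge outputs
        except queue.Empty:
            pass  # no new results this cycle; engine carries on

    def plugin_loop_iteration(self):
        """Body of the spawned background loop: block on the input queue,
        run the plugin, and return results to the engine."""
        inputs = self.to_plugin.get()  # blocking dequeue paces this loop
        self.from_plugin.put(self.plugin_fn(inputs))
```

Note that the engine-side call only touches the queues with non-blocking operations, which is what keeps a stalled plugin from stalling the main loop.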

It is also worth noting that the asynchronous loops are timed by their communication queues. This means they automatically run at a rate up to, but no faster than, the primary engine loop. An asynchronous loop may also run slower than the primary loop if it can't keep up or if it contains code that blocks.
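The "falls behind if it can't keep up" case can be sketched with a deterministic toy model. This is an assumption-laden illustration, not DCAF code: `plugin_cost` models how many engine cycles of work the plugin needs per item, and the queue backlog shows the asynchronous loop lagging the main loop.

```python
import queue

def simulate_lag(engine_cycles=6, plugin_cost=2):
    """Toy model of a queue-timed background loop. The engine enqueues one
    item per cycle; the plugin needs `plugin_cost` cycles of work per item.
    Returns (items processed, backlog left in the queue)."""
    q = queue.Queue()
    processed = 0
    budget = 0  # plugin work time accumulated, in engine cycles
    for _ in range(engine_cycles):
        q.put("data")    # engine side: enqueue once per cycle
        budget += 1      # one engine cycle of plugin time elapses
        # plugin side: consume items whenever enough work time has accrued
        while budget >= plugin_cost and not q.empty():
            q.get()
            processed += 1
            budget -= plugin_cost
    return processed, q.qsize()
```

With `plugin_cost=1` the loop keeps pace with the engine and the queue stays empty; with `plugin_cost=2` half the items pile up as backlog.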

A drawback of using this feature is that it introduces at least one cycle of delay between when data makes it into the plugin and when the result is received back into the engine. This isn't a limitation of the implementation; it is inherent to how a feature like this must work. The engine doesn't wait for a plugin to update; instead it checks back for new data on the next iteration, which causes the cycle delay.
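That minimum one-cycle delay can be measured in a small deterministic simulation (again a Python sketch with hypothetical names, not DCAF code). The background loop processes each item between engine iterations, so a value sent on iteration N comes back on iteration N+1.

```python
import queue

def measure_cycle_delay(iterations=4):
    """Deterministic simulation of the engine/plugin handshake. Returns,
    per value, the number of engine cycles between send and receive."""
    to_plugin, from_plugin = queue.Queue(), queue.Queue()
    sent, delays = {}, {}
    for i in range(iterations):
        # Engine side: first poll for any result produced since last cycle.
        try:
            v = from_plugin.get_nowait()
            delays[v] = i - sent[v]   # cycles elapsed since it was sent
        except queue.Empty:
            pass
        to_plugin.put(i)              # then send this cycle's data
        sent[i] = i
        # Background loop runs between engine cycles: process one item.
        from_plugin.put(to_plugin.get())
    return delays
```

Every value comes back exactly one engine cycle after it was sent, matching the inherent delay described above.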


A key point of this design is that plugin modules don't require any modification to use it. Plugins can run either inline or asynchronously, and this decision can be made differently for each application.

For anyone who wants to study this in more detail, most of the source code for this feature can be found in the 'async executor' folder of the main engine.lvlib.

[Image: Async Source Library.PNG]

Limitations

One key limitation of this feature is that it is provided by the Standard Engine and is not yet required by the framework for all engines. If a new type of engine is developed, this functionality may be missing, and any module plugins that rely on it may not be usable in those engines. For this reason, developing modules that execute deterministically is still the recommended best practice.

A second limitation is that the feature must be enabled manually. Even if a plugin was designed with asynchronous execution in mind, users of that plugin currently must remember to check the box for each instance of it used in their projects.

Ideas for Improvement

The framework's Module Configuration parent class already has a 'Module Timing' property. Module developers can use this property to report a module's execution behavior as one of three categories: 'Deterministic', 'Non-Blocking', or 'Blocking'. Deterministic signifies that the module has been tested and validated for deterministic performance. Non-Blocking signifies that the module will not wait to return and will execute as quickly as possible, but may allocate memory or perform other non-deterministic tasks. Blocking means that the module will likely take longer to return than a typical execution period, possibly because it is waiting for some event to occur before returning.

This field currently isn't used by the Standard Engine (and therefore may not be accurately specified in all modules). However, in the future the Standard Engine could be improved such that a developer could specify the desired execution performance for the entire engine in the configuration editor. The engine could then query the 'Module Timing' property for each module and automatically run modules asynchronously based on the desired level of performance. For example, if a user told the engine to run deterministically, all modules labeled 'Non-Blocking' or 'Blocking' could automatically be made to execute asynchronously. This would greatly simplify the configuration experience for using this feature and limit unintentional errors.
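The proposed selection rule could be as simple as the following Python sketch. Everything here is hypothetical (the enum values and the `should_run_async` helper are illustrative, not part of DCAF); the idea is just that a module runs asynchronously whenever its timing class is weaker than the guarantee the whole engine must provide.

```python
from enum import Enum

class ModuleTiming(Enum):
    """Hypothetical rendering of the 'Module Timing' categories, ordered
    from strongest to weakest timing guarantee."""
    DETERMINISTIC = 1  # tested and validated for deterministic performance
    NON_BLOCKING = 2   # returns quickly, but may allocate memory
    BLOCKING = 3       # may wait on events longer than a cycle

def should_run_async(module_timing, engine_requirement):
    """Run a module asynchronously whenever its timing guarantee is
    weaker than what the engine as a whole must provide."""
    return module_timing.value > engine_requirement.value
```

For a deterministic engine, this rule pushes every Non-Blocking or Blocking module out of the main loop automatically, with no checkbox to remember.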

Let me know in the comments below if you have questions or more to add on the capabilities, limitations, or ideas for improvement. I'd also love to get feedback on how to improve posts like this in the future as well as any new topics that may be of interest.
