
DLL execution time

Hello,

 

Does anyone know how to reduce the time of calling a DLL in LabVIEW RT? When I call the DLL in a timed loop, the iteration duration shows that it needs about half a millisecond (around 0.5 ms) to call the DLL (inside the DLL it does nothing but add x to y and return the sum). So I want to ask: does LabVIEW RT need to reload the DLL every time in the timed loop? And how could I reduce the time of calling the DLL?

PS: the DLL is built with LabWindows/CVI with the Real-Time module.
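
For illustration, the exported function has this trivial form (the function name here is just made up; the behavior is exactly as described above):

    /* add.c - built into the DLL with LabWindows/CVI (Real-Time module).
       "AddInts" is only an illustrative name. */
    __declspec(dllexport) int AddInts(int x, int y)
    {
        return x + y;   /* nothing else happens inside the DLL */
    }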

 

Thank you,

Yan

Message 1 of 9

@XiaolinAtUB wrote:

When I call the DLL file in a timed loops, the ...


Is it necessary to call into the DLL in the Timed Loop?  The timed loop is a SINGLE thread per CPU, and when you go down into the DLL call you're preempting everything else everywhere - nothing else on that CPU will run until your thread comes back out of the DLL.  This is just a preemptive warning for anyone ELSE reading this and thinking it's "generally" okay to call into DLL code from a Timed Loop; there's nothing WRONG with calling into a DLL from within a timed loop as long as you understand the consequences and have designed your DLL code appropriately (LabVIEW primitives, like DAQmx calls, do this regularly).  Just be careful.

 


@XiaolinAtUB wrote:

Does anyone know how to reduce the time of calling the DLL file in the LabVIEW RT ? 


Okay, now that we got that out of the way, there are things you can do to optimize calls to the Call Library Function Node.

  1. Unless your DLL entry point is deemed "non-thread-safe", and therefore must run synchronously with other calls into the same DLL entry point, NEVER use the default thread - that will use the UI thread, which controls more than just the DLL call.  Always change the thread context to "Run in any thread" so that the "current" thread (or any other ready LabVIEW execution thread) can be used.  This will significantly reduce jitter in your application and prevent certain classes of runtime deadlocks.  You can tell whether the Call Library Function Node is set to the UI thread or "any thread" by its color - it is orange when using the "UI" thread and yellow when using "any" thread.
  2. You can prevent a DLL from being loaded / unloaded each execution call by following a few simple programming techniques.  Basically by exposing the DLL path input and reusing the exact same node used to originally load the DLL, if you then call back into that very same node - but this time provide a blank path - the DLL previously loaded in memory will be used instead of reloading a/any DLL again.  This way you're dynamically loading the DLL at runtime and are allowing the node to keep the DLL in memory.
  3. Keep the DLL call short and sweet.  

-Danny

Message 2 of 9


You can prevent a DLL from being loaded / unloaded each execution call by following a few simple programming techniques.  Basically by exposing the DLL path input and reusing the exact same node used to originally load the DLL, if you then call back into that very same node - but this time provide a blank path - the DLL previously loaded in memory will be used instead of reloading a/any DLL again.  This way you're dynamically loading the DLL at runtime and are allowing the node to keep the DLL in memory.

-Danny


This is very backwards!

 

If you do not enable the DLL path input, the DLL is loaded once, when the VI containing the Call Library Node is loaded, and it stays in memory until the last VI containing a Call Library Node referencing that DLL is unloaded. Enabling the DLL path on the diagram and wiring an empty path (or invalid path) to that terminal WILL unload the DLL IF it is the last Call Library Node referencing that DLL. Otherwise the DLL stays in memory until the last Call Library Node referencing that DLL unloads it.

 

As to performance: The already mentioned recommendation to select "Run in any thread" is a good one, but the programmer needs to be sure that the function can be called reentrantly and in parallel with any other function from that same DLL.
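
Roughly, in C terms (function names here are invented purely for illustration), the difference looks like this: a function that keeps all its state in locals or caller-supplied buffers can safely run reentrantly, while one that uses unprotected static data cannot:

    #include <stdio.h>

    /* Safe for "Run in any thread": all state lives on the stack or in
       memory owned by the caller, so parallel calls cannot interfere. */
    __declspec(dllexport) int SumArray(const int *data, int count)
    {
        int i, sum = 0;
        for (i = 0; i < count; i++)
            sum += data[i];
        return sum;
    }

    /* NOT safe for "Run in any thread": the static buffer is shared by
       every caller, so two parallel calls can corrupt each other's result. */
    static char g_msg[256];

    __declspec(dllexport) const char *FormatResult(int value)
    {
        sprintf(g_msg, "result = %d", value);
        return g_msg;
    }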

 

Another performance issue is the "Error Checking" setting on the last tab of the Call Library Node configuration dialog. Maximum does a lot of extra checks around each call to the DLL function, which really costs performance. Default does some minimal checking and is usually a good compromise between safety and performance. If you really need to squeeze out the last nanosecond from the DLL call then you should select Disabled. But be aware that this setting removes all suspenders and safety nets. Any bug in the DLL or an error in the Call Library Node configuration will generally simply corrupt memory. If you are lucky it results in an immediate crash; otherwise it can silently cause all kinds of hard-to-debug errors.

Rolf Kalbermatter
My Blog
Message 3 of 9

Thank you Danny, your reply is very helpful for us to understand the mechanism of how LabVIEW RT runs.

 


Texas_Diaz wrote:

Is it necessary to call into the DLL in the Timed Loop?  The timed loop is a...


 

 

Using a DLL saves us a lot of time in our project, because we are using a third-party PCI card (Beckhoff FC1100 PCI EtherCAT slave card) and it is very convenient for us to manipulate the PCI card using the C language. Therefore we want to turn this C-language-based program into a DLL and call it in LabVIEW RT.


 

Texas_Diaz wrote:

Okay, now that we got that out of the way, there are things you can do to optimize calls to the Call Library Function Node.

  1. Unless your DLL entry point is deemed "non-thread-safe" ... Always change the thread context to "Run in any thread" ...
  2. You can prevent a DLL from being loaded / unloaded each execution call ...
  3. Keep the DLL call short and sweet.

 

1. Yes, I enabled "Run in any thread"; it saves a little execution time, about 5-9 µs.

2. I followed the provided example and loaded the DLL dynamically, but the execution time stayed unchanged.

3. I use a simple DLL for the test: it just executes sum = x + y, where x and y are the input arguments and sum is the return value, so I think the DLL is simple enough.

 

But I still have some questions:

Do you think the performance of the target machine has a profound impact on the DLL execution time? I use a 32-bit desktop PC with a Core 2 Duo CPU as the target machine.

 

Another question:

 

I use LabWindows/CVI to generate the DLL. Do you think the execution time would be the same if I used Visual Studio to build the DLL? Actually, I have tried using Visual Studio 2012 to generate a DLL, but that DLL cannot run on the LabVIEW Real-Time target. I think this is because a DLL generated by Visual Studio may need some Microsoft dependency. Do you know how to generate a LabVIEW RT-compatible DLL using Visual Studio?

Message 4 of 9

Hello Rolf,

 

Thank you for your reply.

 

I have tried the following ways to reduce the DLL execution time:

 

1. Dynamically load the DLL file.

2. Enable "Run in any thread"

3. Disable the error checking

 

but there is no significant reduction in the execution time. Do you think I need a more powerful computer, or should I use other tools rather than LabWindows/CVI to generate the DLL?

 

 

Message 5 of 9
The problem is a bit that you talk about the performance being unsatisfactory, but you never really name any specific numbers, nor have you provided any samples so others could try to reproduce the issue on their own hardware. Generally speaking, dynamic loading will NEVER be faster! An explicit DLL name in the configuration dialog is the fastest solution; anything else involves extra checks at runtime.

If error checking doesn't make any noticeable difference, call overhead is unlikely to be the prime candidate for performance improvement.

LabWindows/CVI is not an optimizing C compiler, but a simple a+b calculation leaves little room for optimization, so even the Intel C compiler is unlikely to create much faster code.

Each Visual C version has a new C runtime library. NI provides a ported version of the Visual C runtime for their own libraries on the LabVIEW RT system, but they use a certain version, which is not always the latest. Changing toolchain version is an involved process, as everything needs to be revalidated, and more complex C source code like LabVIEW's always needs some modifications to adapt to new C toolchain versions or even to work around bugs in them.

Most likely, depending on the version of LabVIEW RT you have, you should try to compile your DLL with Visual C 2008 or 2010.


Rolf Kalbermatter
My Blog
Message 6 of 9

@XiaolinAtUB wrote:

Do you think the performance of the target machine has a profound impact on the DLL execution time? I use a 32-bit desktop PC with a Core 2 Duo CPU as the target machine.


First, Rolf is CORRECT - in my haste I got that backwards.  Passing in a null path dynamically unloads the DLL; otherwise the DLL stays in memory as long as LabVIEW is in memory.

 

Okay, now we get down to the nitty-gritty.  There are a couple of "performance" elements we have to consider:

 

(1) Execution time to open and load a DLL (it likely has a DllMain() function with ThreadAttach / ProcessAttach entry points).

(2) Execution time to marshal a call into the DLL (look up the function offset, pass/translate inputs into the call, call into that function).

(3) Execution time to run the code within the entry point.

(4) Execution time to clean up and return values to LabVIEW.

 

Hopefully if you do the OPPOSITE of what I recommended (as Rolf pointed out) you will only incur the cost of (1) a single time.  You can incur that cost at any time, even in your initialization code, if done correctly.  Items (2) and (4) are incredibly fast - we've optimized that as much as possible because WE use this mechanism A LOT.  But, the 98% execution time element is (3) - there's nothing LabVIEW can do about the code within the DLL itself.  If that code isn't optimized, and takes a long time, then there's really nothing we can do about it.
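
As a side note on (1): if you build the DLL with a toolchain that lets you supply your own DllMain(), keeping it trivial helps keep that one-time load cost small. A generic Win32 sketch (not specific to your code; verify that DisableThreadLibraryCalls() is available in the Win32 subset of your RT target before relying on it):

    #include <windows.h>

    /* Keep DllMain() trivial so the one-time cost of loading the DLL stays small. */
    BOOL APIENTRY DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
    {
        (void)lpvReserved;
        if (fdwReason == DLL_PROCESS_ATTACH)
        {
            /* Skip the per-thread attach/detach notifications that would
               otherwise fire every time a new thread touches the DLL. */
            DisableThreadLibraryCalls(hinstDLL);
        }
        return TRUE;
    }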

 


@XiaolinAtUB wrote:

 

I use LabWindows/CVI to generate the DLL. Do you think the execution time would be the same if I used Visual Studio to build the DLL? Actually, I have tried using Visual Studio 2012 to generate a DLL, but that DLL cannot run on the LabVIEW Real-Time target. I think this is because a DLL generated by Visual Studio may need some Microsoft dependency. Do you know how to generate a LabVIEW RT-compatible DLL using Visual Studio?


The LabWindows CVI compiler, linker, and associated runtime contain their own ANSI C Library implementation that's been optimized for use with LabWindows/CVI.  In my personal opinion, the LabWindows CVI compiler is the optimal method for building DLLs meant for LabVIEW Real-Time (though, admittedly, if your code is not taking advantage of the CVI runtime the difference is probably negligible).  However, if you also use the DLL in Windows, you should probably compile with Visual Studio.

 

It's possible to use Visual Studio 2012 to generate a DLL file, but you have to force MSVS2012 to build with MSVC7.1 (Visual Studio "Dot Net" 2003) or MSVC9.0 (Visual Studio 2008) compatibility modes - those are the two MSVC runtimes we support in LVRT (and you need to make sure the MSVC9 support module is installed to the RT system via MAX).  To do this, you need Visual Studio 2008 also installed on your computer (or at least the necessary components from Visual Studio 2008).  
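
For example, with Visual Studio 2008 (or at least its build components) installed next to VS2012, you can select the older toolset under Configuration Properties >> General >> Platform Toolset, or set it directly in the project file; roughly like this fragment (a sketch of a standard MSBuild C++ project, adjust to your own .vcxproj):

    <!-- .vcxproj fragment: build the DLL with the Visual Studio 2008 (MSVC 9.0) toolset -->
    <PropertyGroup Label="Configuration">
      <ConfigurationType>DynamicLibrary</ConfigurationType>
      <PlatformToolset>v90</PlatformToolset>
    </PropertyGroup>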

-Danny

Message 7 of 9

Thank you Rolf, I think I will contact an NI engineer to resolve this issue.

 

Thank you again.

 

Yan

Message 8 of 9

Thank you Danny, your analysis is very precise, but I have not yet found out what is going on in my case, so I think I need to ask an NI engineer for help.

 

Thank you,

 

Yan

Message 9 of 9