Special LabVIEW Considerations when creating a DLL?


@rolfk wrote:

Let's get to more basic things first.

 

What is the entire header file created from LabVIEW when you build the DLL?

What is the Python ctypes definition for the import of the relevant functions?


I officially figured it out.

I was limiting the size of my queue.  That's not an issue in the LabVIEW environment or in a built executable, but in the DLL world it apparently caused a serious limitation: once the maximum queue size was reached, the remaining data was parsed very slowly.  What makes this confusing is that, again, because the code is parallel, the queue should have been dequeuing asynchronously, so the cap should never have mattered.  That's probably why it works in the LabVIEW world but not in the DLL world.  I'm still not sure what exactly changes when the same code is turned into a DLL, but it's something to be aware of.
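For anyone hitting the same wall, the failure mode of a size-limited queue whose consumer never gets to run concurrently can be sketched in a few lines of plain Python (a toy analogy, not the actual LabVIEW code; the queue size and timeout are made up):

```python
import queue

# Toy illustration: a size-capped queue where the consumer is never
# scheduled concurrently.  Once the queue is full, the producer's put()
# blocks -- the same stall you see when the dequeue loop ends up
# serialized behind the enqueue loop instead of running in parallel.
q = queue.Queue(maxsize=2)

q.put("chunk 1")
q.put("chunk 2")                     # queue is now at its cap
try:
    q.put("chunk 3", timeout=0.2)    # no consumer running -> blocks, then gives up
except queue.Full:
    print("producer stalled: queue full and consumer never ran")
```

With a consumer thread running in parallel, the same `put()` simply waits for a slot and the cap is harmless, which matches why the code behaves fine inside LabVIEW.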

Message 11 of 24
(6,262 Views)

All I know is that a long time ago, DLLs created in LV ran in a single thread, which caused them to run in a serial fashion.  Then they introduced a flag that would let DLLs run in a multithreaded way.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 12 of 24
(6,249 Views)

@billko wrote:

All I know is that a long time ago, DLLs created in LV ran in a single thread, which caused them to run in a serial fashion.  Then they introduced a flag that would let DLLs run in a multithreaded way.


This still technically tracks with what I'm seeing.  I just need to figure out what that flag is or how to set it.

Message 13 of 24
(6,242 Views)

@LuminaryKnight wrote:

@billko wrote:

All I know is that a long time ago, DLLs created in LV ran in a single thread, which caused them to run in a serial fashion.  Then they introduced a flag that would let DLLs run in a multithreaded way.


This still technically tracks with what I'm seeing.  I just need to figure out what that flag is or how to set it.


Yes, I can confirm that loops which execute in parallel in the development environment (and thus require multiple threads) are executed sequentially when called from Python. It is quite simple to demonstrate.

This is the LabVIEW code for the DLL:

DLLCode.png

There are two For Loops calling Sleep(100) from kernel32.dll; the expected execution time is 1 second, and two threads are required for parallel execution.

 

This is how it is called from Python:

 

import ctypes
import os

script_dir = os.path.dirname(os.path.abspath(__file__))
dll_path = os.path.join(script_dir, "SharedLib.dll")
dll = ctypes.CDLL(dll_path)
par_test = dll.ParTest
par_test.restype = ctypes.c_double
result = par_test()
print(f"ParTest execution time: {result:.6f} seconds")

 

and the result is:

 

>python Test.py
ParTest execution time: 2.011455 seconds

 

When called from LabVIEW, it takes 1 s as expected:

DLLCall.png

That's it.

By the way, it is the same when called from C:

#include <stdio.h>
#include "SharedLib.h"  /* header generated by the LabVIEW DLL build */

int main (int argc, char *argv[])
{
	double res = ParTest();
	printf("Time is %f s\n", res);
	return 0;
}
>DLL_Test.exe
Time is 2.011993 s

So it seems to be not a Python issue, but rather a matter of how a multithreaded DLL is called from a non-LabVIEW caller by default.

Maybe it is called in the UI thread, maybe something else.

The test project is in the attachment; you have to recompile the DLL with your LabVIEW version, of course.

 

Message 14 of 24
(6,232 Views)

@Andrey_Dmitriev wrote:

So it seems to be not a Python issue, but rather a matter of how a multithreaded DLL is called from a non-LabVIEW caller by default.

Maybe it is called in the UI thread, maybe something else.


Ah, that was easy. The default execution system is "same as caller", but the Python environment used as the caller is not LabVIEW. It needs to be set to Standard; then it should also work with multithreading:

Screenshot 2024-11-06 09.46.11.png

Now the execution time is 1 s in both the Python and C environments.

D:\DLL_Test\builds\DLL_test\My DLL>python Test.py
ParTest execution time: 1.005697 seconds

D:\DLL_Test\CVI Test>DLL_Test.exe
Time is 1.005611 s

Modified project in attachment. Try the same with your VI.

 

 

Message 15 of 24
(6,216 Views)

It doesn't surprise me that a DLL function called from another environment does not multithread. When called from Python, C, or whatever, there is only the calling thread in your Python application that executes the function, so it will simply execute in that thread. That doesn't mean the whole DLL is single-threaded, but a single function call is. You can call multiple functions, and even the same function, in parallel from different threads in your Python application. That's how multithreading works in Python and other classical programming environments.

 

LabVIEW's parallel loop execution is something else and a feature of the automatic semi-dynamic thread scheduling in the LabVIEW code clump scheduler. When you run a VI in LabVIEW it executes in a specific execution system and each execution system has a number of threads assigned. Whenever LabVIEW encounters an asynchronous function or a parallel execution loop, it grabs additional threads from the thread pool for that execution system.

 

When a function is called from an external application like Python, there isn't an execution system with its own thread pool that the function is invoked from; there is only the thread of the calling application, and that's what LabVIEW has to deal with. An option to allow multithreading of exported functions would basically need to introduce an intermediate scheduler that runs the VI in the DLL in its own execution system, like what happens inside LabVIEW. But that also requires marshalling of parameters between the calling thread and the thread executing in that execution system, plus the corresponding synchronization between both. There is no easy way of sharing execution between the calling thread and additional threads from an internal thread pool.

 

And indeed, assigning a specific execution system to the exported VI allows the LabVIEW thread scheduling to work, as you have found. But that has additional overhead: the VI is then not called in the caller's thread; instead, that thread is suspended, the parameters are marshalled into the LabVIEW execution context, the VI executes there, any results are marshalled back, and the external thread is resumed.

Rolf Kalbermatter
My Blog
Message 16 of 24
(6,213 Views)

@rolfk wrote:

...

And indeed, assigning a specific execution system to the exported VI allows the LabVIEW thread scheduling to work, as you have found. But that has additional overhead: the VI is then not called in the caller's thread; instead, that thread is suspended, the parameters are marshalled into the LabVIEW execution context, the VI executes there, any results are marshalled back, and the external thread is resumed.


Yes, I fully agree with everything above. In general, from an architectural point of view, in a "mixed" environment we always have two choices: use the multithreading offered by LabVIEW, or use the "native" multithreading offered by the third-party environment (Python, Rust, C, etc.). Both approaches work and have their own pros and cons in terms of the penalties caused by thread start/stop/synchronization, how convenient they are from a programming point of view, and how much low-level control over the threads we need. In this particular case, we can split our LabVIEW code into two SubVIs, wrap each in its own DLL function, and call both in parallel.

 

The LabVIEW "do something" code is as simple as the following:

1snippet.png 

 

Then both can be called from C (using CVI) like this:

#include <ansi_c.h>
#include <utility.h>
#include "SharedLib.h"

 int CVICALLBACK ThreadFunction1 (void *functionData);
 int CVICALLBACK ThreadFunction2 (void *functionData);
 
int main (int argc, char *argv[])
{
	int functionId1, functionId2;
	
	if (InitCVIRTE (0, argv, 0) == 0) return -1; /* out of memory */
	
	ParTest1(); ParTest2(); //"Dummy" call to get everything loaded	
	 
    clock_t start = clock();
	 
    CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, ThreadFunction1, NULL, &functionId1);	
    CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, ThreadFunction2, NULL, &functionId2);	
	
	CmtWaitForThreadPoolFunctionCompletion (DEFAULT_THREAD_POOL_HANDLE, functionId1, 0);
	CmtWaitForThreadPoolFunctionCompletion (DEFAULT_THREAD_POOL_HANDLE, functionId2, 0);
	
	clock_t end = clock();
    double cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
	 
	printf("Two Threads took %f seconds to execute \n", cpu_time_used); 

	CmtReleaseThreadPoolFunctionID (DEFAULT_THREAD_POOL_HANDLE, functionId1);
	CmtReleaseThreadPoolFunctionID (DEFAULT_THREAD_POOL_HANDLE, functionId2);	
	
	return 0;
}

int CVICALLBACK ThreadFunction1 (void *functionData)
{
	printf("Thread1 execution time: %f s\n", ParTest1());
    return 0;
}


int CVICALLBACK ThreadFunction2 (void *functionData)
{
	printf("Thread2 execution time: %f s\n", ParTest2());
    return 0;
}

 

Or, in the case of Python, something like this:

 

import ctypes
import os
import threading
import time

def ThreadFunction1():
	result = par_test1()
	print(f"Thread1 execution time: {result:.6f} seconds")

def ThreadFunction2():
	result = par_test2()
	print(f"Thread2 execution time: {result:.6f} seconds")
	
script_dir = os.path.dirname(os.path.abspath(__file__))
dll_path = os.path.join(script_dir, "SharedLib.dll")
dll = ctypes.CDLL(dll_path)
par_test1 = dll.ParTest1
par_test2 = dll.ParTest2
par_test1.restype = ctypes.c_double
par_test2.restype = ctypes.c_double
par_test1()
par_test2()
   
t1 = threading.Thread(target=ThreadFunction1, args=())
t2 = threading.Thread(target=ThreadFunction2, args=())

start = time.time()

t1.start()
t2.start()

t1.join()
t2.join()

end = time.time()
result = end - start
print(f"Two threads took {result:.6f} seconds")

Now both "single-threaded" LabVIEW functions are called in parallel, and the overall execution time is still around 1 second in both environments:

 

>DLL_Test.exe
Thread1 execution time: 1.000765 s
Thread2 execution time: 1.000640 s
Two Threads took 1.015000 seconds to execute

>python TestMT.py
Thread1 execution time: 1.000129 seconds
Thread2 execution time: 1.000881 seconds
Two threads took 1.001891 seconds

 

Modified project in the attachment; maybe it will be useful for someone...

Message 17 of 24
(6,194 Views)

This has all been very good reading.  Thank you, everyone!  Looks like we've got some solid, knowledgeable developers available.

 

There is another issue I'm trying to sort out with this DLL:

First, I've set the "Preferred Execution System" to Standard and build the DLL this way.  (Does the Priority setting make any difference?)  The issue I'm running into now, depending on the size of the file to be parsed, is memory and proper shutdown.

 

I have thoroughly scoured my LabVIEW code to make sure there are no memory leaks.  All queues are destroyed, all DVRs are closed, all shift registers are replaced with empty arrays once they're no longer needed (just a precaution at this point), and all file references are closed.  But when Python runs the DLL, even though Python is able to move on to the next task after the DLL has completed (printing to the terminal the file path produced by the DLL), Python essentially hangs.  I cannot run it again because it appears to be waiting for the DLL to release (speculation).

 

The attached image is from AFTER the DLL has "completed."  The memory then slowly (but eventually) drops to 0, and Python can run again.  The reason I believe this is related to the DLL is that I cannot rebuild the DLL while Python is in this state; LabVIEW states that something is currently using the DLL.

 

Is there something missing in Python to unload DLLs after they're done?  Note that this memory "leak" (for lack of a better word) is definitely very dependent on the size of the file I'm parsing.  I've also tried adding "Quit LabVIEW" and "Abort this VI" to see if that makes any difference; alas, it does not.  Some (more) thoughts would be great.

LuminaryKnight_0-1730906920727.png
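For what it's worth, ctypes never unloads a library on its own; the module stays loaded for the life of the process unless you explicitly hand the handle back to the OS. A minimal sketch of doing that from the Python side (the `unload_dll` helper is my own invention, and whether the LabVIEW Run-Time Engine actually releases its memory at that point is a separate question):

```python
import ctypes
import sys

def unload_dll(dll):
    """Drop this reference to a ctypes library and, on Windows, ask the
    OS loader to unmap it.  Caveat: a LabVIEW-built DLL pins the LabVIEW
    Run-Time Engine, which may keep cleaning up asynchronously even
    after the module itself is freed.  The caller should hold no other
    references to the library or its functions."""
    handle = dll._handle              # private attribute, but the usual ctypes idiom
    del dll                           # release this Python-side reference
    if sys.platform == "win32":
        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        kernel32.FreeLibrary.argtypes = [ctypes.c_void_p]
        kernel32.FreeLibrary.restype = ctypes.c_int
        if not kernel32.FreeLibrary(handle):
            raise ctypes.WinError(ctypes.get_last_error())
```

After the last call into SharedLib.dll, calling `unload_dll(dll)` at least removes the Python process's hold on the module, so a rebuild in LabVIEW may no longer be blocked while the script is still running.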

 

Message 18 of 24
(6,147 Views)

Okay, so I was able to solve this memory issue as well.  The answer: manage memory even more aggressively.  For any data anywhere in the code that seems like it will be sizeable, absolutely make sure you use a DVR, so that you can destroy the DVR once the task is done.  And do a lot of your data manipulation within the In Place Element Structure (generally a necessity when using DVRs).  I also placed a few Request Deallocation VIs in some locations; I'm not entirely sure they're doing anything helpful.  But I'm seeing good results: the DLL is now being released properly because the memory is handled a lot better.

 

So, to any future person who reads this thread ("Special LabVIEW Considerations when Creating a DLL?"): manage your memory like a pro!

Message 19 of 24
(6,083 Views)

LV is generally good at managing its memory, but it might still be considered active/in use as long as Python runs. One Request Deallocation as you exit should be enough, but if the DVRs work well, then go for it. 🙂

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 20 of 24
(6,063 Views)