
Creating a DLL to work on 2D Arrays

Solved!
Go to solution

I've tried to implement the pointers, but when I test with LabVIEW I'm not getting the expected output.

 

"Inc Image (ptr).vi" has the two MoveBlock calls as you've shown.

"Test Inc Image (ptr).vi" calls the built DLL. But, the destination / output array shows whatever was input to it, in this case an array of zeroes.

 

Can you check if I have the MoveBlock or the call to the built DLL defined incorrectly?

Message 21 of 47

@Gregory wrote:

 

 

Can you check if I have the MoveBlock or the call to the built DLL defined incorrectly?


Everything seems OK with your code. I just opened it, recompiled it in LV 2024Q1 without any changes, and it seems to be functional:

Screenshot 2024-05-20 18.30.30.png

The LabVIEW version should not be a problem, of course. A typical issue: during development you copy the DLL and source to different locations and end up calling the "wrong" DLL version. Or sometimes LabVIEW is open but the DLL cannot be overwritten at that moment, etc. Check again. Sometimes I use this trick during debugging: LabVIEW Call Library Function Node Not Unloading a DLL After VI Execution; this allows you to recompile the DLL while the VI is open. And one more trick: you can launch two copies of LabVIEW (I don't remember the INI key; usually I simply copy LabVIEW.exe to LabVIEWTest.exe), develop the DLL in one and launch and test it in the other. This will "isolate" the memory spaces, so if something goes wrong with the pointers and LabVIEW crashes (which may happen), your development environment with the sources will still be OK.

Message 22 of 47

Interesting, I'm finding some inconsistencies with the "MoveBlock" function calls. 

 

PC1 / LV 2016 32-bit : Running "Test Inc Image (ptr).vi" returns all zeroes, Error_Code = 0 (no error).

 

PC2 / LV 2016 32-bit : Running "Test Inc Image (ptr).vi" returns all zeroes, Error_Code = 1 (error!).

 

PC2 / LV 2016 64-bit : Running "Test Inc Image (ptr).vi" returns the incremented array as expected, Error_Code = 0.

 

I went back to the "MoveBlock" function calls. The pointer inputs that were defined as "pointer-sized integer" show an "sz" on the function node, and when I create a control it creates an I64. I made a new function "Inc Image (ptr32).vi" and changed the pointers to be U32. This works on all 3 PC / LV combinations above! But, I feel like I'm asking for trouble defining the pointer as a U32 instead of a pointer-sized integer...

 

2024-05-20 screenshot.png

 

Message 23 of 47

Just wanted to add, I was using "signed pointer-sized integer", which shows up as "sz" on the node and creates an I64 when I right-click it. I thought later that uPtr should be "unsigned pointer-sized integer", which shows up as "usz" on the node and creates a U64. However, this shows the same behavior as before, where it only works in 64-bit LV but not in 32-bit.

Message 24 of 47

I think the allocatable memory has a 64-bit address range on a 64-bit machine. I think you were just lucky that your allocated memory pointer fit into a U32.


It could be that the CLN doesn't adapt its pointer size definition to the bitness after creation.

Actor Framework
Message 25 of 47

@Gregory wrote:

Interesting, I'm finding some inconsistencies with the "MoveBlock" function calls. 

 

I went back to the "MoveBlock" function calls. The pointer inputs that were defined as "pointer-sized integer" show an "sz" on the function node, and when I create a control it creates an I64. I made a new function "Inc Image (ptr32).vi" and changed the pointers to be U32. This works on all 3 PC / LV combinations above! But, I feel like I'm asking for trouble defining the pointer as a U32 instead of a pointer-sized integer...

I just didn't catch that you are working with both 32- and 64-bit. I use 32-bit very rarely nowadays.

Unfortunately such a "mixed" build is not very convenient in LabVIEW. If we work in a C/C++ dev environment like Visual Studio, then usually we have both build specs shared in one project and can perform a batch build, using preprocessor directives to switch between 32- and 64-bit in the source if needed. But in LabVIEW you must build the project in the same bitness.

In general, a pointer-sized integer should be OK for both 32-bit and 64-bit. In 32-bit LabVIEW it works (strangely, a coercion dot is shown, but it works):

 

Screenshot 2024-05-20 22.27.35.png

It is designed this way exactly for the case where MoveBlock is called from 32-bit or from 64-bit: you don't need to change parameters when switching from one dev environment to the other, nor use conditional compilation. It looks like you will need two build specs, one for 32-bit and another for 64-bit, and with code duplication in separate wrappers it will work. The only question is how to avoid the code duplication in an elegant way; maybe conditional compilation will help, something like this:

Screenshot 2024-05-20 22.37.19.png

And in two different build specs use different controls as parameters.

Message 26 of 47

I was actually only working in 32-bit (which I didn't mention). But when the code worked on your PC and not on mine, it prompted me to try a 64-bit version as well. OK, I think having 2 wrappers is not an issue. It's just a little frustrating that in 32-bit, the "unsigned pointer-sized integer" creates a U64, but it only works properly if I override that and make it a U32 instead.

 

Anyway, thanks to everyone for all the help! I think I have a couple good options to pass the image now. We can use pointers, filepaths, 2D arrays (with the extra memory management functions that LV exports), and maybe I can add support for image files later.

Message 27 of 47

@Quiztus2 wrote:

I think your allocatable memory has a 64 bit range on a 64 bit machine. I think you were just lucky that your allocated memory pointer fits into U32.


Could be that the CLN doesn't adapt to bitness after creation regarding pointer size definition.


Yes, sure it can, but your "problem" is that you are using this in the unusual "opposite" direction, where the DLL is created in LabVIEW and a front panel control is involved, which can't adapt so easily.

Let me explain how it works in the "usual" way.

For example, you have DLL with two functions — allocate() and deallocate():

 

 

#include <stdlib.h>

unsigned short* allocate (int size, unsigned short fill)
{
	unsigned short* image;
	image = (unsigned short*)malloc(size * sizeof(unsigned short));
	if (image) for (int i = 0; i < size; i++) image[i] = fill;
	return image;
}

void deallocate(unsigned short* image)
{
	free(image);
}

 

 

It is not the best design pattern, but I would like to use MoveBlock to demonstrate.

Now I will compile these two into 32-bit and 64-bit DLLs.

One side note: in the build spec I will append '32' to the 32-bit DLL and '64' to the 64-bit one, in CVI (you will see below why):

Screenshot 2024-05-21 06.25.36.png

and

Screenshot 2024-05-21 06.25.19.png

Now I will call both from LabVIEW:

I will put both in a loop to demonstrate that we have no memory leakage at all:

Snippet.png

There are two important points here.

First, the DLL is called as follows:

Screenshot 2024-05-21 06.30.15.png

When opened in 32-bit LabVIEW, the '*' will be replaced with '32', and in 64-bit, with '64'. This is documented behaviour, described in Configuring the Call Library Function Node.

Second, all pointers are declared as follows:

Andrey_Dmitriev_0-1716266032649.png

Andrey_Dmitriev_1-1716266097482.png

Andrey_Dmitriev_2-1716266139768.png

Now when opened in 32-bit LabVIEW, MyDLL32.dll will be loaded and all pointers will be 32-bit, and when opened in 64-bit, MyDLL64.dll will be loaded and all pointers will still be correct as 64-bit.

So, I have "universal" sources on both sides and don't need to add any extra code to handle the different bitnesses.

Behind the scenes, not only does the type of the pointer parameter differ, but also the way parameters are passed to the DLL in 32-bit and 64-bit. On 32-bit they are passed via the stack, and on 64-bit via registers (the first four). That is the reason why the "wrong" bitness on the LabVIEW control worked for the 64-bit DLL (when the wrong 32-bit type was used) but did not work for the 32-bit DLL when a 64-bit type was used. When a 32-bit pointer is passed to a 64-bit DLL and the allocation sits below the 4 GB range, the upper part of the register simply remains unused and everything works (by the way, a typical source of trouble when migrating 32-bit projects to 64-bit). But in the opposite direction, when a 64-bit parameter is passed to a 32-bit DLL, the stack gets misaligned: the caller pushes 8 bytes while the callee expects 4 bytes per parameter, so the callee reads from the wrong stack area and everything works not as expected.

And just one more thing: as you can see, when memory is allocated with malloc(), the allocated space needs to be deallocated with free(), otherwise, in the loop, we will have a memory leak. But this is not the case for LabVIEW's Build Array; just think about this. In general, this is also a kind of allocation, but we don't need to deallocate LabVIEW's arrays; they will be deallocated or reused automatically. This is how memory management works in LabVIEW. However, when you try to 'spread' allocation and usage across different DLL functions created in LabVIEW and manipulate the data externally, LabVIEW will have no idea how they are passed between the calls, and you may have a situation where LabVIEW's array gets deallocated unexpectedly. One of many possible ways to avoid this is to start a resident 'daemon' VI with a while loop at the first DLL call, which will hold the allocated array in a kind of functional global, but this will add some additional complexity. On the other hand, I haven't thoroughly tested this scenario.

Message 28 of 47

There's been some speculating and guessing here over the long weekend, and I'll try to collect all the points that were somewhat misleading or inaccurate.

 

About Python open_cv: it internally uses numpy arrays for its image data. These are indeed allocated as a single block of memory in a 2D (monochrome) or 3D (color channels) organization.

 

The LabVIEW DLL builder allows you to configure 1D arrays to be exported as data pointers, but insists on 2D and higher-dimensional arrays being exported as LabVIEW array handles. The main reason is that 2D data can be represented in several different ways that are memory-wise completely incompatible. In C and LabVIEW (and Python if you use numpy or, by extension, open_cv), it is usually a single block of memory where the different dimensions are interleaved in the memory buffer. C++ and pure Python applications often tend to use vectors of vectors instead, since that makes them easier to handle for the programmer. It is, however, a relatively complex representation for the processor to handle, and it is also not as performant memory-wise, since the memory manager has to create many blocks of memory for a single 2D array.

 

The export of function variants with actual pointers in LabVIEW is not really very handy. As you have found, it has the problem that LabVIEW wants to treat pointers as 64-bit entities on the diagram (and in the corresponding front panel controls) to be compatible with both 32-bit and 64-bit. But that creates the problem that the function exports this parameter as a 64-bit value, since the DLL export configuration does not support defining the actual type as anything other than the LabVIEW type that is used on the front panel. It wouldn't make much sense to support that for arbitrary datatypes, but in the case of a 64-bit control it may be useful to allow specifying that this is really a pointer-sized parameter.

On the other hand, there is no real difference between the Inc Image (1D).vi function and Inc Image (ptr32).vi or Inc Image (ptr64).vi in terms of what is exported, except that the ptr32 and ptr64 variants are bitsize-specific. The exported 32-bit function is exactly the same for Inc Image (1D).vi and Inc Image (ptr32).vi, and the same applies in 64-bit for Inc Image (1D).vi and Inc Image (ptr64).vi. So those ptr32 and ptr64 variants are simply superfluous.

 

In terms of what a C compiler creates as an assembly construct for passing a parameter, uint16_t ArrayIn[], uint16_t *ArrayIn, and void *ArrayIn are absolutely the same. For 32-bit, uint32_t ArrayIn is also the same, and for 64-bit, uint64_t ArrayIn is the same. So just forget about the two ptr32 and ptr64 variants; they only complicate your DLL interface. You can simply use the Inc Image (1D).vi version. You can change the header file prototype to void IncImage1D(uint16_t *ArrayIn, int32_t NumRowsIn, int32_t NumColsIn, uint16_t *ArrayOut, int32_t *NumRowsOut, int32_t *NumColsOut, int32_t lenIn, int32_t lenOut).

 

The only kind-of-ugly thing is that LabVIEW absolutely wants to get the two len values for the passed-in arrays, to make sure that the limits of the memory buffers are not overrun in any way. Your user could call this function with an array of length 100 and specify that it contains 10 by 50 elements. If he correctly specifies that the array was allocated with 100 elements, LabVIEW will basically create a new buffer of 10 by 50 elements at the Resize function, copy the 100 elements from the input array into it, process it, and only copy as many elements into the output array as the second len indicates. It may feel superfluous to the user, and that is likely the reason you tried to create those ptr32 and ptr64 variants, but that only creates the additional problem that the exported functions now require two different VIs for 32-bit and 64-bit DLL compilation.

 

Another remark that struck me as misleading was the comment that LabVIEW arrays do not need to be deallocated because LabVIEW will do that automatically. If that refers to the use of functions like the exported AllocateUint16Array(), then that statement is definitely wrong. The general consensus is that whoever allocates an array should also deallocate it. That does not hold true for managed environments when using their native managed datatypes, but if a LabVIEW function returns an array or string handle, the caller of that function automatically becomes the owner of that handle and either has to pass it on to someone else for use or properly deallocate it with one of the LabVIEW memory manager functions (or the exported convenience function DeAllocateUint16Array() in the case of this specific DLL). Otherwise you create a memory leak!

Rolf Kalbermatter
My Blog
Message 29 of 47

@rolfk wrote:

...

Another mentioning that struck me as misleading, was the comment that LabVIEW arrays will not need to be deallocated as LabVIEW will do that automatically. If that refers to the use of functions like the exported AllocateUint16Array(), then that statement is definitely wrong.

...


No, Rolf, I didn't mean that the deallocation should be omitted entirely; it was just a note about a particular piece of the code above, where Build Array was used but no "Destroy". This is how we work with pure LabVIEW code and its luxury Memory Manager. I only pointed out that we should pay some attention when mixing "managed" and "unmanaged" code (slightly similar to C#). By the way, may I see the source of AllocateUint16Array()? I've scrolled this topic from top to bottom, but I'm unable to find it (maybe I'm blind).

Anyway, we can try one more round with a native LabVIEW array. In general there are two possibilities: create a LabVIEW VI where the 2D array is allocated (with the Build Array primitive!), turn this VI into a DLL that is called from third-party code prior to the array manipulation, and then check whether disposing is necessary or not; or, as another approach, link the third-party code with labview.lib and call DSNewHandle()/DSNewPtr(), in which case the memory obviously needs to be released with DSDisposeHandle()/DSDisposePtr(). I'm only not sure whether the LabVIEW Run-Time will be happy with such a direct call from a third-party application (maybe some additional initialization will be necessary; I have never done such an experiment before).

Message 30 of 47