Why/How did the Call Library Function Node change in LabVIEW 2017?

Hey everyone,

 

I have not been able to find an answer to this anywhere on the forums, or at least not a complete answer. This problem is related to the well-known 32-bit/64-bit DLL issues that exist with the Call Library Function Node, specifically the fact that some DLLs require the use of a "Pointer-sized Integer" data type instead of a simple 32-bit or 64-bit integer. This requirement has been pointed out by rolfk on the forums a number of times, which has been extremely helpful, but he references a change that was made in LabVIEW 2009. In my testing, this did not become a problem until LabVIEW 2017, and I would like to understand why.

 

To test this, you can download one of the example FTDI LabVIEW packages from their website and use it with any FTDI device (USB-to-serial converter, Arduino, RS485 interface, etc.). I chose the Write-Read String Demo example.

 

https://ftdichip.com/software-examples/code-examples/labview-examples/

 

When running this example in LabVIEW 2014, 2015, or 2016 (all 64-bit versions), everything works perfectly fine: the example requires no changes at all and can be run continuously without failure or error. Running the same example in LabVIEW 2017 and onwards (tested in 2017, 2020, and 2024, again all 64-bit versions), the program runs once, and then on the second run the device is no longer detected.

 

FTDI LabView 2016 Working.PNG

 Working in LabView 2016

 

FTDI LabView 2017 Broken.PNG

 Broken in LabView 2017

 

The cause, of course, lies in the Call Library Function Node configuration, which requires changing the 32-bit Unsigned Integer on the "Handle" parameter to an Unsigned Pointer-sized Integer. But the question is: why? These are all 64-bit versions of LabVIEW, and they are all in fact loading the exact same DLL. What change was made in 2017 to break this functionality and require this change? I cannot find anything in the changelog or documentation, and apart from what has been pointed out by rolfk, I cannot find any references to this issue.
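For context, the relevant declarations from FTDI's ftd2xx.h look roughly like this (paraphrased from the header, so the exact decorations may differ between driver versions):

/* Paraphrased from FTDI's ftd2xx.h -- exact decorations may differ between
   driver versions. The key point: FT_HANDLE is a typedef for PVOID, so in a
   64-bit process it is a pointer-sized, 64-bit value, which is presumably
   why the "Handle" parameter has to be an Unsigned Pointer-sized Integer. */
#include <windows.h>

typedef PVOID FT_HANDLE;   /* opaque, pointer-sized device handle */
typedef ULONG FT_STATUS;

FT_STATUS WINAPI FT_Open (int deviceNumber, FT_HANDLE *pHandle);
FT_STATUS WINAPI FT_Close(FT_HANDLE ftHandle);
FT_STATUS WINAPI FT_Write(FT_HANDLE ftHandle, LPVOID lpBuffer,
                          DWORD dwBytesToWrite, LPDWORD lpBytesWritten);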

 

The FTDI drivers are just one example of this; I have seen this issue in many SDKs that support LabVIEW. Sometimes they throw a 1097 error, and sometimes they simply do not work, which is the case with these FTDI drivers. Some SDKs or DLLs unfortunately do not have documentation that spells out which data type they expect, but regardless: since I have done all of my testing with 64-bit versions of LabVIEW, why did they work just fine with 32-bit Unsigned Integers in 2016 and prior, and then require the change to Pointer-sized Integers from 2017 onwards?

 

I can provide more examples if needed, but any help with this would be greatly, greatly appreciated!

Message 1 of 4

This most likely has to do with memory layout differences, caused by different base address settings for particular DLLs as well as the increased memory consumption of newer LabVIEW versions. As long as the internal data structure behind the FT_HANDLE in the FTDI DLL is allocated below the 4GB virtual address boundary, the resulting pointer value is not affected: the upper 32 bits are all 0 anyhow, so the truncation to 32 bits that happens when you use an explicitly uint32-configured parameter and/or a 32-bit control on the LabVIEW front panel has no effect.
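As a rough standard-C illustration of that truncation argument (nothing FTDI-specific about it):

/* Rough illustration of the truncation argument, not FTDI-specific: whether
   chopping a pointer down to 32 bits is harmless depends entirely on where
   the allocation happens to land in the 64-bit address space. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void     *p       = malloc(256);       /* stand-in for the handle's data  */
    uintptr_t full    = (uintptr_t)p;      /* the full 64-bit pointer value   */
    uint32_t  chopped = (uint32_t)full;    /* what a uint32 parameter keeps   */

    printf("pointer   : 0x%016llx\n", (unsigned long long)full);
    printf("as uint32 : 0x%08x\n", (unsigned)chopped);
    printf("survives truncation: %s\n",
           ((uintptr_t)chopped == full) ? "yes (below 4GB)" : "NO (upper bits lost)");

    free(p);
    return 0;
}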

 

There are other potential differences. If you downloaded the FTDI drivers recently, they have been compiled by one of the most recent Visual Studio versions. Since Visual Studio 2015, Microsoft abandoned the separate C runtime version for each new Visual Studio version. All binaries generated with Visual Studio 2015 and later link to the C runtime 14.x, with x being an incremental number, but the base version 14 and the corresponding DLL name stay the same. This means that a Visual Studio 2015 compiled binary can also work with the C runtime for Visual Studio 2022 (MSVC version 14.30 to 14.39).
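If you want to see that version family from the code side, the _MSC_VER macro gives a quick indication (the mapping in the comment is from memory, so double-check it against your toolchain):

/* Everything built with VS2015 or later reports _MSC_VER >= 1900 and links
   the same 14.x runtime DLLs (vcruntime140.dll / ucrtbase.dll), which is
   what allows a VS2015-built DLL to share a C runtime with a newer host. */
#include <stdio.h>

int main(void)
{
#ifdef _MSC_VER
    printf("_MSC_VER = %d\n", _MSC_VER);   /* 1900 = VS2015, 1910+ = VS2017,
                                              1920+ = VS2019, 1930+ = VS2022 */
#else
    printf("Not built with MSVC.\n");
#endif
    return 0;
}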

NI usually uses the latest Visual Studio version for compiling LabVIEW, but with about a one-year delay, since development of a new LabVIEW release usually starts a year earlier and they do not normally upgrade the underlying tools during such a development cycle. This means that LabVIEW 2017 was probably the first version to be compiled with Visual Studio 2015 and to link to the MSVC Runtime 14.0.

So your FTDI DLL and LabVIEW 2017 are then using the same C runtime, and since the low-level C runtime heap manager is part of the C runtime library, the memory allocations are suddenly pulled from the same heap. That substantially changes the runtime memory layout, so the FTDI library can easily pull memory from a very different location than in earlier versions.

 

So there are many, many possible reasons, and they are most likely not even related to specific changes in the Call Library Node in LabVIEW, but rather to other changes that NI doesn't control themselves.

 

One other possibility might be the error handling around Call Library Node calls. If you look in the Call Library Node configuration, there is an Error Checking tab in there. It lets you define the level of error checking that LabVIEW performs when it calls the specific DLL function. The Maximum level does a lot of additional checks, including using so-called trampolines around buffers that can detect whether the DLL function has written beyond the actually valid buffer. This adds a lot of extra overhead, as LabVIEW generally has to copy each parameter into a new buffer for this, but it lets you detect buffer-overwrite errors (see the sketch below). LabVIEW then generates the well-known 1097 error.

In Default mode, LabVIEW does, among other things, stack validation: it checks that the stack after return from the function call is correct with respect to the calling convention and the configured parameters, and adjusts it to the value it expects according to the Call Library Node configuration. This check also generates a 1097 error when it fails.

In Disabled mode, none of this is done. You won't get 1097 errors, but if something is not correctly configured, expect LabVIEW to die sooner or later!
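To illustrate what the Maximum-level buffer check conceptually does (this is a stripped-down sketch of the idea, not LabVIEW's actual trampoline implementation): copy the caller's buffer into a slightly larger one with known guard bytes behind it, call the function on the copy, and verify the guard bytes afterwards.

/* Stripped-down sketch of a buffer-overwrite check, NOT LabVIEW's actual
   trampoline implementation: wrap the real buffer with guard bytes, hand
   the wrapped copy to the "DLL" function, and check the guards afterwards. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GUARD_SIZE 16
#define GUARD_BYTE 0xA5

typedef void (*dll_fill_fn)(uint8_t *buf, size_t len);

static int call_with_guard(dll_fill_fn fn, uint8_t *user_buf, size_t len)
{
    uint8_t *wrapped = malloc(len + GUARD_SIZE);
    if (!wrapped) return -1;

    memcpy(wrapped, user_buf, len);                 /* copy caller data in   */
    memset(wrapped + len, GUARD_BYTE, GUARD_SIZE);  /* guard area behind it  */

    fn(wrapped, len);                               /* call the "DLL"        */

    int overwrote = 0;
    for (size_t i = 0; i < GUARD_SIZE; i++)
        if (wrapped[len + i] != GUARD_BYTE) { overwrote = 1; break; }

    memcpy(user_buf, wrapped, len);                 /* copy results back out */
    free(wrapped);
    return overwrote;                               /* 1 -> report a 1097-style error */
}

/* A deliberately buggy "DLL function" that writes one byte too many. */
static void buggy_fill(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i <= len; i++)               /* note: <= instead of < */
        buf[i] = (uint8_t)i;
}

int main(void)
{
    uint8_t data[32] = {0};
    printf("overwrite detected: %d\n", call_with_guard(buggy_fill, data, sizeof data));
    return 0;
}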

Some of these protections have been extended in later LabVIEW versions, so there might be something that changed and caused the wrong configuration to suddenly produce a clearer error or crash.

 

But my money is on the changed memory layout, due to higher memory consumption and the runtime library being shared with the FTDI DLL in LabVIEW 2017 and later.

 

Generally, when the Call Library Node is not configured correctly, this can and often does generate an error or crash, but it does not have to! It might corrupt memory that is not used for anything specific, because it hits, for instance, filler bytes in memory buffers, or it corrupts memory that is seldom or never accessed (until you try to exit LabVIEW, and LabVIEW, as a nice OS citizen, tries to clean up all the memory it allocated and then stumbles over these corrupted memory pointers). But it can also immediately crash or generate a 1097 error if LabVIEW can detect that something went wrong.
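To make that concrete with deliberately wrong C code (the function name is made up, it just mimics the uint32-versus-pointer misconfiguration): if the DLL writes a 64-bit handle but the caller only reserved 32 bits for it, the extra 4 bytes land in whatever happens to live next to that slot.

/* Deliberately WRONG code that mimics the uint32-vs-pointer misconfiguration.
   The "DLL" writes a full 64-bit handle, but the caller only reserved 32 bits,
   so the upper 4 bytes spill into whatever sits next to that slot in memory. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for something like FT_Open writing back a handle. */
static void fake_dll_open(uint64_t *out_handle)
{
    *out_handle = 0x00000200DEADBEEFull;    /* pointer-like value above 4GB */
}

int main(void)
{
    struct {
        uint32_t handle_slot;   /* what a uint32-configured parameter provides */
        uint32_t neighbour;     /* innocent data living right next to it       */
    } s = { 0, 0x11111111 };

    /* This cast is exactly the lie a misconfigured Call Library Node tells
       the DLL: "here are 8 writable bytes" when there are really only 4.   */
    fake_dll_open((uint64_t *)&s.handle_slot);

    printf("handle_slot = 0x%08x\n", (unsigned)s.handle_slot);  /* truncated handle   */
    printf("neighbour   = 0x%08x\n", (unsigned)s.neighbour);    /* silently clobbered */
    return 0;
}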

 

The fact that it all SEEMS to run without a crash is NEVER a guarantee that you got the Call Library Node configuration correct. The only guarantee comes from having each and every Call Library Node reviewed and checked by someone who knows all the details very well! And yes, that won't be an Algorithmic Intelligence for many, many years to come! 😃

 

So it was wrong all along; you simply got lucky before!

Rolf Kalbermatter
My Blog
Message 2 of 4

Hey Rolf,

 

Thanks so much for the reply. You really are a wizard with this stuff, and this has been super interesting. A few things I am still not sure about, though:

 


@rolfk wrote:

This most likely has to do with memory layout differences, caused by different base address settings for particular DLLs as well as the increased memory consumption of newer LabVIEW versions. As long as the internal data structure behind the FT_HANDLE in the FTDI DLL is allocated below the 4GB virtual address boundary, the resulting pointer value is not affected: the upper 32 bits are all 0 anyhow, so the truncation to 32 bits that happens when you use an explicitly uint32-configured parameter and/or a 32-bit control on the LabVIEW front panel has no effect.

This totally makes sense, but for something like the handle of an FTDI device, I would think that a 32-bit number would suffice, especially because it works just fine with the 32-bit version of the DLL, and because it works with the 64-bit version of the DLL in older versions of LabVIEW. Are you saying that the newer LabVIEW versions have increased memory consumption? Why do you think that change would affect this parameter?

 


@rolfk wrote:

Since Visual Studio 2015, Microsoft abandoned the separate C runtime version for each new Visual Studio version. All binaries generated with Visual Studio 2015 and later link to the C runtime 14.x, with x being an incremental number, but the base version 14 and the corresponding DLL name stay the same. This means that a Visual Studio 2015 compiled binary can also work with the C runtime for Visual Studio 2022 (MSVC version 14.30 to 14.39).

 

So your FTDI DLL and LabVIEW 2017 are then using the same C runtime, and since the low-level C runtime heap manager is part of the C runtime library, the memory allocations are suddenly pulled from the same heap. That substantially changes the runtime memory layout, so the FTDI library can easily pull memory from a very different location than in earlier versions.


I must admit, I don't understand the low-level workings of C well enough to make sense of what the runtime heap manager changes with regard to stuff like this, but I do believe you that the memory locations could indeed change. Still, do you believe that a memory management change affects something like this?

 

For example, C defines an "int" based on the system architecture. If a function expects a 64-bit int, but the value it holds fits in 32 bits, then a 32-bit int should suffice. Why then, if the memory management changes, does this 32-bit int no longer work? I might be missing something here, but that is how it works in my mind.
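In C terms, the way I picture it is something like this toy example:

/* Toy example of how I picture it: if the 64-bit value the function hands
   back is small, squeezing it through a 32-bit integer loses nothing. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t from_dll  = 5;                     /* small "handle" value      */
    uint32_t stored    = (uint32_t)from_dll;    /* kept in a 32-bit control  */
    uint64_t passed_on = (uint64_t)stored;      /* handed back to the DLL    */

    printf("round trip intact: %s\n", (passed_on == from_dll) ? "yes" : "no");
    return 0;
}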

 


@rolfk wrote:

One other possibility might be the error handling around Call Library Node calls.


I did make sure that everything related to error handling was the same, including setting both nodes to Maximum error checking. Unfortunately, this does not seem to help much in debugging, at least for this specific issue.

 

I have another example that might be a bit more useful. It is one where I know the data types the DLL expects, and it exhibits the same behavior across different versions of LabVIEW. I will post it in a bit.

 

Thanks again for the help though Rolf, it is always much appreciated! 

Message 3 of 4

I saw your message about how a handle might need to be 64-bit to work properly, but you seem to have deleted that message later.

 

The reason is that a handle is simply an opaque value as far as the caller is concerned. What it means for the underlying library is private to that library, and the caller should never rely on it being anything specific other than a numeric value of the specified size. Some Windows handles are, for instance, indices into system tables, so their value indeed usually stays way below the 2^31 limit that an int32 could reliably represent. But some are simply pointers to a memory structure that the respective subsystem allocates to store all the information necessary to manage that resource.

 

If the heap manager in a 64-bit application happens to allocate that memory structure above the 2GB (or 4GB) mark, the upper 32 bits of the handle are no longer all zero, and it does matter whether they are carried around or not. The FTDI FT_HANDLE is such a memory structure hidden behind a handle value. The library allocates a fairly complex structure for each opened device handle, where it stores information about the USB access, the device status, and many more things.
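A made-up miniature of that pattern (the names are invented and have nothing to do with FTDI's actual internals):

/* Made-up miniature of the opaque-handle pattern. The names are invented;
   the point is only that the "handle" the caller gets back is really a
   pointer to a private structure whose layout the library can change. */
#include <stdio.h>
#include <stdlib.h>

/* --- what the public header exposes --------------------------------- */
typedef struct DeviceContext *DEVICE_HANDLE;   /* opaque, pointer-sized */

DEVICE_HANDLE device_open(int index);
void          device_close(DEVICE_HANDLE h);

/* --- what only the library implementation knows --------------------- */
struct DeviceContext {
    int      usb_endpoint;      /* USB access bookkeeping     */
    unsigned status_flags;      /* device status              */
    char     rx_buffer[4096];   /* and whatever else it needs */
};

DEVICE_HANDLE device_open(int index)
{
    (void)index;
    /* The handle value IS the address of this allocation, wherever the
       heap manager happens to place it -- below or above the 4GB mark. */
    return calloc(1, sizeof(struct DeviceContext));
}

void device_close(DEVICE_HANDLE h)
{
    free(h);
}

int main(void)
{
    DEVICE_HANDLE h = device_open(0);
    printf("handle value: %p\n", (void *)h);   /* may or may not fit in 32 bits */
    device_close(h);
    return 0;
}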

 

Do not try to peek into this pointer, however. Its layout can change drastically between library versions, precisely because the developer chose to use an opaque handle to hide the implementation details. That way they are able to change the actual implementation at will, anytime they want. This is not to pester you, but to be able to adapt to new requirements that might come up with new OS versions, security measures, and whatever else, without having to break compatibility. What you don't promise, or even just document, to your users (such as a specific memory layout of the structure behind a handle), you don't have to stay compatible with when a change is needed.

 

A handle is simply an identifier for a specific resource. How the subsystem in question interprets this handle to locate the necessary information for that resource is private to that subsystem. And since it is defined as a pointer-sized opaque value, you have to treat it as such and not take shortcuts by forcing it into a 32-bit entity. That might work for some time, in some versions, on some systems, but it always has the potential to blow up eventually.

Rolf Kalbermatter
My Blog
Message 4 of 4