LabVIEW


Drivers and multiple versions of LabVIEW

What happens in a situation like this?

 

Say I have VIs written in LV 2018 which use a driver in C:\Program Files\National Instruments\LabVIEW 2018\instr.lib.

 

Now say I install LV 2019 and mass compile (or Ctrl-Shift-click the run arrow to force a recompile of) those VIs. Will this recompile the drivers in the 2018 installation as 2019 VIs? Would this cause problems with a different project done in 2018 using those same drivers?

Message 1 of 9

@stephenb2 wrote:

 

Would this cause problems with a different project done in 2018 using those same drivers?

Yes of course! You should not recompile VIs inside a different LabVIEW version's folder; instead, install that VI library into the new LabVIEW folder too.

 

Only LabVIEW 2024 has a new feature that lets you configure a LabVIEW library, class, or project and all its contained VIs to be kept in an older version while still working on it in 2024 (but you still shouldn't do that on VIs located in an older LabVIEW installation folder).

Rolf Kalbermatter
My Blog
Message 2 of 9

@rolfk wrote:

You should ... install that VI library into the new LabVIEW folder too.

To be clear, instr.lib is a symbolic (logical) folder, meaning it's not a single folder with a fixed location. When a caller refers to a VI by the path <instrlib>\..., LabVIEW looks for it under the instr.lib folder of the version doing the loading. For example, <instrlib>\MyDriver\Init.vi resolves to <LabVIEW 2019>\instr.lib\MyDriver\Init.vi when loaded in LV 2019. So when you open a VI which calls those driver VIs in LV 2019, it will try to load them from <LabVIEW 2019>\instr.lib, which is why Rolf says you should put them there.

 

If you open LV 2019 and then directly open the files from the LV 2018 folder, they will be recompiled in memory for 2019, and saving them will overwrite the 2018 versions.

 

Incidentally, other than the automatic save version feature Rolf mentioned, at least some recent (~2022 and later) NI drivers are now installed to a single location instead of having a copy for each version. I don't know if this is something that users can access as well, but even if it is, it won't work in older LV versions.


___________________
Try to take over the world!
Message 3 of 9

@tst wrote:

recent (~2022 and later) NI drivers are now installed to a single location instead of having a copy for each version. I don't know if this is something that users can access as well, but even if it is, it won't work in older LV versions.

Yes, this common location is "C:\Program Files\NI\LVAddons". In theory, for recent versions the "common" files can be placed here; as far as I can see, the files there are saved in the 21.0 version. This location is read-only (no write permission), which can cause some trouble for a mass compile (I have seen lots of entries in the logs where LabVIEW tried to recompile this folder but got "access denied"; very recent LabVIEW versions take care of this, I guess). Another important point is that this location is shared between bitnesses (so 32-bit and 64-bit LabVIEW will use the same files), and this can cause trouble if you need different VIs with different connector panes for the 32-bit and 64-bit environments.

Message 4 of 9

@Andrey_Dmitriev wrote:
Another important point is that this location is shared between bitnesses (so 32-bit and 64-bit LabVIEW will use the same files), and this can cause trouble if you need different VIs with different connector panes for the 32-bit and 64-bit environments.

There are hooks in there to set whether an addon folder will work with 32-bit or 64-bit LabVIEW. It is set by the lvaddoninfo.json file for a release. Look at the nidaqmx folders for more details. This is one of those things that R&D has stated needs publicly released documentation.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 5 of 9

@crossrulz wrote:

@Andrey_Dmitriev wrote:
Another important point is that this location is shared between bitnesses (so 32-bit and 64-bit LabVIEW will use the same files), and this can cause trouble if you need different VIs with different connector panes for the 32-bit and 64-bit environments.

There are hooks in there to set whether an addon folder will work with 32-bit or 64-bit LabVIEW. It is set by the lvaddoninfo.json file for a release. Look at the nidaqmx folders for more details. This is one of those things that R&D has stated needs publicly released documentation.


Well, then NI's R&D department should share this documentation with NI's Vision department first, because what I see, for example, for "IMAQ GetImagePixelPtr", which is loaded from "C:\Program Files\NI\LVAddons\nivisioncommon\1\vi.lib\vision\Basics.llb\IMAQ GetImagePixelPtr" for both 32-bit and 64-bit, is exactly the same file with a 64-bit output in both cases. As a result I have a 64-bit pointer in a 32-bit environment. Not very elegant (but it works).

[Screenshot: IMAQ GetImagePixelPtr showing a 64-bit pointer output in both bitnesses — Screenshot 2024-10-15 16.29.32.png]

The situation is more critical with "IMAQ Image Datatype to Image Cluster.vi", which is loaded from "C:\Program Files\NI\LVAddons\nivisioncommon\1\vi.lib\vision\DatatypeConversion.llb\IMAQ Image Datatype to Image Cluster.vi". Here it is the opposite case: the output (which is, in general, an address) is 32-bit for both, and in a 64-bit environment this VI will simply crash whatever uses its output next, because the address will be truncated:

[Screenshot: IMAQ Image Datatype to Image Cluster.vi showing a 32-bit output in both bitnesses — Screenshot 2024-10-15 16.29.56.png]

The good news is that "IMAQ Image Datatype to Image Cluster.vi" is almost unused (except in some very special and very obsolete legacy code), but I would anyway like to have different VIs for the different bitnesses as long as we don't have a "pointer-sized integer" for connector pane terminals.
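
To illustrate the truncation hazard in C (a sketch only, not what the VI actually does internally): once an address needs more than 32 bits, a 32-bit terminal silently drops the upper half, and whatever dereferences the rebuilt pointer later crashes.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical 64-bit heap address that needs more than 32 bits. */
    uint64_t address   = 0x00000002DEADBEEFULL;
    uint32_t truncated = (uint32_t)address;   /* upper 32 bits lost */

    printf("full address : 0x%016llX\n", (unsigned long long)address);
    printf("truncated    : 0x%08X\n", truncated);

    /* A pointer rebuilt from 'truncated' would reference the wrong
       memory, which is exactly the crash described above. */
    return 0;
}
```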

Message 6 of 9

Nope! LabVIEW does not know pointer-sized integers outside of the Call Library Node! On the LabVIEW diagram it is ALWAYS a 64-bit integer! The upper 32 bits are unused in 32-bit LabVIEW, and the Call Library Node does the correct extraction and (sign) extension depending on the current LabVIEW bitness, if the parameter is configured as pointer-sized.
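
In C terms, the principle is nothing more than a widening or narrowing cast at the DLL boundary (a sketch of the idea, not NI's actual implementation; the function names are illustrative):

```c
#include <stdint.h>

/* What a "pointer-sized integer" parameter amounts to at the boundary.
   On 32-bit LabVIEW uintptr_t is 32 bits wide, so the upper half of
   the 64-bit diagram value is dropped on the way in and re-extended
   on the way out. */
uintptr_t diagram_to_native(uint64_t diagramValue)
{
    return (uintptr_t)diagramValue;
}

uint64_t native_to_diagram(uintptr_t nativeValue)
{
    return (uint64_t)nativeValue;
}
```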

 

Those legacy VIs are legacy for a reason and won't work correctly in 64-bit LabVIEW. Replacing them with VIs with 64-bit integers for 64-bit LabVIEW is no real solution, as it would break VIs that depended on the old datatype. The only correct solution would be to remove them entirely, as they will otherwise cause trouble one way or the other anyway.

 

YOU DO NOT WANT VIs WITH BITNESS-DEPENDENT CONNECTOR PANE PARAMETERS, EVER, FOR ANY LIBRARY! NEVER EVER!

Rolf Kalbermatter
My Blog
Message 7 of 9

No, I was only talking about the fact that before the migration to the "common" place, the 32-bit Vision and 64-bit Vision VIs were deployed separately, and they were different in the different versions. Long ago, when NI introduced 64-bit Vision, there was a bug here (the image cluster contained a 32-bit integer instead of a 64-bit one); I contacted support and it got fixed (I don't remember the CAR number). Now this old issue is here again, because the 32-bit variant was used for both bitnesses. Technically, yes, the VIs are fully obsolete, but still functional. Behind "IMAQ Image Datatype to Image Cluster.vi" there is just a call to LV_LVDTToImage from NiVisSvc.dll, and inside, the function LV_LVDTToGRImage() is called after LV_SetThreadCore(); nowadays I just call both inside my own DLL and it works like a charm (but they are undocumented; I took this idea from NI's OpenCV wrapper). Inside the 32-bit and 64-bit DLLs everything works fine, of course.

 

Solution "Always 64-bit integer with unused upper part" for pointer-sized parameter for me looks not very elegant (but in some cases simplifies universal 32/64 bit development). On 32-bit environment I have the only 32-bit pointers. Period. In classical programming language I have sizeof(size_t)  as well as sizeof(void *) both 4 bytes in 32-bit environment. For which weird reason I have pointer-sized 64 bits wide in 32 bit LabVIEW? It is rhetorical question, the answer is "by design", of course.

Message 8 of 9

@Andrey_Dmitriev wrote:

No, I was only talking about the fact that before the migration to the "common" place, the 32-bit Vision and 64-bit Vision VIs were deployed separately, and they were different in the different versions. Long ago, when NI introduced 64-bit Vision, there was a bug here (the image cluster contained a 32-bit integer instead of a 64-bit one); I contacted support and it got fixed (I don't remember the CAR number).

That was not a fix but an atrocity committed by whoever did it! For the sake of a quick and dirty fix for your complaint, at the price of creating a legacy bomb. LabVIEW, unlike C, is a very strictly typed language, with every datatype except one being exactly defined in its size. In C you have int, char, long, short, and they can be all kinds of sizes, depending on what the compiler builder fancied most at that moment, to suit the underlying hardware architecture, the mood of his dog, or whatever. LabVIEW never did this kind of datatype chimerism, with one exception, namely the extended float. And that was a legacy sin from back in the early '90s. The logical size for it would have been 128 bits, and there were some architectures, like the 68000, that actually had such a type, but Intel in its infinite wisdom had decided to use 80 bits instead. The LabVIEW developers, wanting to squeeze out those additional 16 bits, went to a lot of effort, including actual assembly programming, to support that. But they went to even greater lengths to guarantee that the flattened format for LabVIEW datatypes was the same on all platforms regardless, by always extending those 80 bits to 128 bits.

So the principle of LabVIEW is: when data is flattened, it must always have the same format and size, irrespective of the underlying hardware. This means the only way to support 64-bit pointers is that the integers carrying such a pointer value must be 64-bit; there is NO datatype in LabVIEW that can be sometimes 32-bit and sometimes 64-bit. And that is no problem at all: 32 bits easily fit into 64 bits.
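
A sketch of that fixed-layout principle in C (illustrative names and byte layout, not LabVIEW's actual flatten routines): whatever the platform's pointer size, the flattened record always reserves a full 64 bits for the pointer slot, so streams produced on 32-bit and 64-bit systems stay byte-compatible.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative record carrying a pointer plus two fixed-size fields. */
typedef struct {
    void   *ptr;      /* 4 or 8 bytes, depending on the platform */
    int32_t width;
    int32_t height;
} ImageRef;

/* Flatten to a platform-independent 16-byte layout: the pointer is
   always widened to 64 bits before being written out. */
size_t flatten_image_ref(const ImageRef *r, uint8_t out[16])
{
    uint64_t p = (uint64_t)(uintptr_t)r->ptr;
    memcpy(out,      &p,         8);
    memcpy(out + 8,  &r->width,  4);
    memcpy(out + 12, &r->height, 4);
    return 16;   /* identical size on every platform */
}
```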

 

The extended float is in fact an even bigger joke nowadays, since on all platforms except 32-bit Intel it is simply the same as a double-precision float. But of course, to satisfy the original requirement that the flattened format be consistent under all circumstances, such an extended float is in fact always expanded to 128 bits when being streamed in any way that demands a flattened format (Write to Binary File, Typecast, Flatten to String).

 

"But it is extremely wasteful!!" How? It could be, if you had huge arrays of pointers, but LabVIEW doesn't really use pointers on the diagram, and if you are going to pass huge arrays of pointers from one DLL function to the next, you certainly have a bigger problem to care about than 4 bytes of wasted memory per pointer! Trying to optimize that is simply micro-optimization, at the cost of many fundamental problems that would then need to be solved to keep LabVIEW consistent.

 

Solution "Always 64-bit integer with unused upper part" for pointer-sized parameter for me looks not very elegant (but in some cases simplifies universal 32/64 bit development).

Anything else would be much less elegant in many ways, and would create a lot of corner cases requiring even more involved work to solve, producing even more compatibility nightmares.

 

In a 32-bit environment I have only 32-bit pointers. Period. In a classical programming language, sizeof(size_t) as well as sizeof(void *) are both 4 bytes in a 32-bit environment. For which weird reason is "pointer-sized" 64 bits wide in 32-bit LabVIEW? It is a rhetorical question; the answer is "by design", of course.


On 32-bit you have a 32-bit pointer, but as far as the LabVIEW diagram is concerned it has to be transported in a 64-bit integer if you want your VIs to be compatible in both cases without involved conditional compile structures or bitness-specific VI installations, which could create broken arrows if you try to use typedefs for your clusters. In LabVIEW, any VI library that wants to be 64-bit compatible has to always transport a pointer as a 64-bit value. You can even do pointer arithmetic on such a value and it still works fine, aside from a potential value overflow; but if that is a possibility, you would have to explicitly protect against such overflow even if you used a 32-bit integer for that pointer value.
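
In C terms, a bitness-agnostic library boundary comes down to a pair of casts, and the arithmetic happens directly on the 64-bit carrier (a sketch with illustrative names, not any particular NI library):

```c
#include <stdint.h>

/* Widening the native pointer into the 64-bit diagram carrier is
   lossless on both 32-bit and 64-bit builds. */
uint64_t buffer_to_diagram(void *buf)
{
    return (uint64_t)(uintptr_t)buf;
}

/* Narrowing back happens only at the DLL boundary, where the value
   is known to be a valid pointer for the current bitness. */
void *diagram_to_buffer(uint64_t value)
{
    return (void *)(uintptr_t)value;
}

/* Pointer arithmetic works on the carrier as-is, e.g. the address
   of the i-th element of an array of 4-byte pixels: */
uint64_t element_address(uint64_t base, uint64_t index)
{
    return base + index * 4;
}
```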

 

The only moment the exact size of such a pointer matters is when you pass it to a DLL, and here you have the option to configure the parameter as pointer-sized; the LabVIEW Call Library Node will then do the right thing depending on which platform it runs on.

 

The sooner you forget the idea that pointers should be explicit 32-bit or 64-bit values when passed around on the LabVIEW diagram, the sooner you will discover that things get much, much easier.

 

Of course we are arguing about very little here. Soon enough LabVIEW will be purely 64-bit anyhow, and 32-bit will be entirely history. Until then, try to always treat pointers in LabVIEW as 64-bit values, and don't forget to configure those parameters in the Call Library Node as pointer-sized integers. With this, most of your 32/64-bit worries will simply disappear like snow in the spring sun.

 

And yes, even better is of course to avoid pointers on the LabVIEW diagram completely by writing DLL interfaces that use explicit native LabVIEW datatypes. That way you have no problems at all! 😁 Aside from some more involved code on the C side, of course, but that usually pays off manyfold, since the LabVIEW diagram calling those functions gets trivially simple and there are very few opportunities to get something wrong by trying to play C compiler on the LabVIEW diagram.
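
For example (a minimal sketch assuming LabVIEW's cintools headers are on the include path; FillGreeting is a hypothetical function, not part of any NI library), a DLL function can fill a native LabVIEW string handle directly, so the diagram never sees a raw pointer:

```c
/* Compile against LabVIEW's cintools (extcode.h) and link labviewv.lib. */
#include "extcode.h"
#include <string.h>

MgErr FillGreeting(LStrHandle str)
{
    static const char msg[] = "hello from the DLL";
    int32 len = (int32)strlen(msg);

    /* Resize the string handle through LabVIEW's memory manager. */
    MgErr err = NumericArrayResize(uB, 1, (UHandle *)&str, len);
    if (err != noErr)
        return err;

    MoveBlock(msg, LStrBuf(*str), len);
    LStrLen(*str) = len;
    return noErr;
}
```

In the Call Library Node the parameter is then configured as a string passed as a LabVIEW string handle, so no pointer, bitness logic, or manual memory management ever shows up in the calling VI.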

Rolf Kalbermatter
My Blog
Message 9 of 9