02-07-2014 04:53 PM
@Bublina wrote:
So you want me to have such a prototype in the Call Library node, and have a different prototype in the called function!
Man, I will never do that, that is even more mayhem than those pointer to number to pointer casts.
It is the correct way to avoid ugly and sloppy C casts. It doesn't seem like mayhem to me (nor to Rolf, apparently, since he suggested the same).
Ignore the function prototype and think about how you would call the function in C. If the function prototype is
void square (int *p);
and you already have a pointer variable int *i, you would call it as square(i), essentially passing i by value since it is already a pointer. That's what you should do in LabVIEW, too, because the value on the wire is the pointer.
02-07-2014 05:21 PM - edited 02-07-2014 05:22 PM
@Bublina wrote:
Man, I will never do that, that is even more mayhem than those pointer to number to pointer casts.
Once you have programmed a bit in C you will quickly come to recognize that typecasts are mostly a good way to go insane. Unnecessary typecasts are even worse.
And you can be almost 100% sure you have unnecessary typecasts if you find yourself casting the same variable in more than one place!
Since LabVIEW has no real pointer datatype at the diagram level, you have to do something there; don't litter your C source code with typecasts to wrongly adapt it to what you think LabVIEW does.
Basically, on the diagram a pointer always has to be represented as a 64-bit integer. You just have to configure the Call Library Node parameter as pointer-sized (passed by value or by reference) and LabVIEW will take care of translating the 64-bit integer into the correct pointer size for the platform it is running on.
You might ask why LabVIEW always represents pointer-sized integers as 64-bit integers on the diagram. The reason is mostly that LabVIEW has always guaranteed a consistent memory layout for its flattened datatypes: a cluster containing two int32 values is always 64 bits long, on every platform. If LabVIEW implemented a true pointer-sized integer at the diagram level, that would no longer hold: a cluster containing two such values would take up 64 bits on 32-bit platforms but 128 bits on 64-bit platforms, violating the guarantee that a flattened cluster has the same size on every LabVIEW platform. So LabVIEW uses the biggest necessary size for pointer-sized Call Library Node parameters (in the hope that 128-bit CPUs are still far in the future) and lets the Call Library Node take care of translating that 64-bit integer to and from whatever the pointer size is on the current platform.
02-08-2014 03:24 AM
Ok.
Thank you for answers.