02-09-2015 12:00 PM
I take performance benchmarks from NI with a good-sized grain of salt. I won't tell you why in this forum, but let's just say BTDT.
02-10-2015 03:41 AM - edited 02-10-2015 03:47 AM
From NI only?
Benchmarks are like statistics. Don't trust them if you haven't forged them yourself!
And performance-increase numbers in marketing text are usually a single figure from an engineer, taken out of context and declared a general fact, much like many of the "technical" articles in non-technical publications such as newspapers and popular media.
11-28-2023 04:56 PM
I thought the 64-bit version would be more CPU efficient than the 32-bit version.
NI itself seems to acknowledge that the 64-bit implementation was not done for the sake of efficiency:
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000kIctSAE
LabVIEW 32-Bit and 64-Bit Compatibility, Updated Jul 30, 2023, "LabVIEW 64-bit does not provide any speed improvements over LabVIEW 32-bit"
11-28-2023 05:09 PM - edited 11-28-2023 05:33 PM
The CPU-specific compilation is handled entirely by LLVM. NI's own optimizations happen in the dataflow-graph-to-DFIR transformations, at a level where the target CPU is not yet relevant.
Also, 64-bit code and data are typically larger, since pointers, and often offsets as well, double in size. Memory bandwidth is a real concern even with high-speed DDR5 memory, and the larger footprint also fills caches faster, which can mean more cache misses. That can easily cancel out the simple, and not so simple, gains of 64-bit code execution.
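To make the pointer-size point concrete, here is a minimal C sketch (not NI/LabVIEW code, just an illustration) of how a pointer-heavy structure grows when the same source is built as 64-bit instead of 32-bit:

/* Hypothetical example: the same linked-list node compiled 32-bit vs 64-bit.
 * Every pointer field doubles from 4 to 8 bytes, so pointer-heavy data
 * occupies more cache lines and consumes more memory bandwidth for the
 * same logical content. */
#include <stdio.h>

struct node {
    struct node *next;   /* 4 bytes on a 32-bit build, 8 bytes on 64-bit */
    struct node *prev;   /* same again                                   */
    double       value;  /* 8 bytes either way                           */
};

int main(void)
{
    /* Typical results: 16 bytes per node on a 32-bit build, 24 bytes on
     * 64-bit, i.e. roughly 50% more data before any computation happens. */
    printf("sizeof(void*)       = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}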
But NI is not really involved in LLVM development at all, other than using it. So it is what it is!
Those 64-bit gains of 20 to 25% are based on carefully crafted code that makes maximum use of the newer, optimized CPU instructions and is tuned to fit into the local CPU cache in order to reduce memory-bandwidth effects.
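As a hedged illustration (this is not NI's actual benchmark, just the general shape such micro-benchmarks tend to have), here is the kind of C loop that produces numbers like that: a tight, dependency-free inner loop over a buffer small enough to stay in L1 cache, repeated many times, so the compiler can vectorize it and memory bandwidth never limits the result.

/* Hypothetical micro-benchmark sketch: cache-resident data, vectorizable loop.
 * Code that streams large data sets from RAM behaves very differently. */
#include <stdio.h>

#define N      4096           /* 4096 floats = 16 KB, fits in L1 cache */
#define REPEAT 100000

static float a[N], b[N];

int main(void)
{
    for (int i = 0; i < N; i++) {          /* fill with arbitrary data */
        a[i] = (float)i;
        b[i] = (float)(N - i);
    }

    float acc = 0.0f;
    for (int r = 0; r < REPEAT; r++)       /* reuse the same cached data */
        for (int i = 0; i < N; i++)        /* vectorizable dot product   */
            acc += a[i] * b[i];

    printf("result = %f\n", acc);          /* prevent dead-code removal  */
    return 0;
}

Typical application code, with its branches, allocations, and large working sets, rarely looks like this, which is why such headline percentages seldom transfer to real programs.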