LabVIEW

How to control the cluster padding in LabVIEW 64-bit?

Hello,

I have a problem with data transfer through a C DLL. It works fine when compiled in 32-bit with LabVIEW 32-bit, but not with a 64-bit compilation and LabVIEW 64-bit.

 

If I simplify, my C code is:

```

#include <cstdio>
#include <cstdint>

#include "lv_prolog.h"
#pragma pack(push, 1) // note that this is required here even when compiling in 64-bit, because I read the structure directly with fread

typedef struct {
    int16_t num;
    int32_t a;
    double val;
} test_struct;

#include "lv_epilog.h"
#pragma pack(pop)

extern "C" __declspec(dllexport) int headertest(char filename[], test_struct* w)
{
    int b_read;
    FILE* fileid;
    fileid = fopen(filename, "rb");
    b_read = fread(w, sizeof(test_struct), 1, fileid);
    fclose(fileid);
    return b_read;
}

```

I use a test file whose content, in hex, is '0102 0304 0506 0708 090A 0B0C 0D0E 0F10'.

When I call the DLL with the corresponding cluster (I16, I32, DBL) in LabVIEW and unflatten, I get:

- '0201 0807 0605 0000 0E0D 0C0B 0A09' in LabVIEW 64-bit

- '0201 0605 0403 0E0D 0C0B 0A09 0807' in LabVIEW 32-bit

The 32-bit result is the expected answer.

Note that when I change the LabVIEW cluster to I32, I32, DBL in LabVIEW 64-bit, I get:

- '0403 0201 0807 0605 0000 0E0D 0C0B 0A09'

So the DLL is sending the data correctly (the 0403 is present), but it looks like LabVIEW 64-bit expects padding.

 

In my application the struct is much more complicated. Is there a way to control the padding conversion with a LabVIEW cluster?

 

thanks in advance,

 

Message 1 of 8

I don't know of any difference between 32-bit and 64-bit LabVIEW that would account for what you're seeing. This is a good reference on how LabVIEW stores data in memory:

 

https://www.ni.com/docs/en-US/bundle/labview/page/how-labview-stores-data-in-memory.html#d79105e437

 

If you are using the Flatten To String or Unflatten From String functions in LabVIEW, there is an input that controls whether the flattened data includes (or is expected to include) the array size prepended to each array.

Message 2 of 8

@lippoi wrote:

I have a problem with data transfer through a C DLL. It works fine when compiled in 32-bit with LabVIEW 32-bit, but not with a 64-bit compilation and LabVIEW 64-bit. [...] In my application the struct is much more complicated. Is there a way to control the padding conversion with a LabVIEW cluster?

Your comment behind pack(push, 1) is ignoring an important fact:

 

The lv_prolog.h and lv_epilog.h headers are necessary around data structures that define data types you want to pass between the LabVIEW diagram and your shared library. This IS how LabVIEW puts data in its memory, and you can jump high or low, but that simply won't change.

 

If you want a structure that maps to your file layout, you have to declare that separately and convert between the two in your C code by assigning each element explicitly!

 

Or you read the file in LabVIEW as a binary file. The LabVIEW flattened format that Binary File Read and Write use is, for compatibility reasons, always the packed format!

 

Just watch out when a struct contains variable-sized elements such as arrays or strings. These are stored quite differently from what a C program would typically do, so there is no trivial "read in one go" for them, no matter what!

 

And your use of pack(push) and pack(pop) is also completely messed up. These pragmas are stack based, so it is important to use them in the correct order: stack-based constructs must be deconstructed in the opposite order from the one in which they were constructed. Intermixing them with the other pack pragmas used in the prolog and epilog headers for 32-bit Windows will thoroughly mess up your alignment setting. In this case you can't use pack(push) anyhow: you mess up what lv_prolog.h does (respectively doesn't do) for non-32-bit compilation such as your 64-bit Windows build. LabVIEW data structs absolutely REQUIRE 8-byte alignment in 64-bit compilation!

Rolf Kalbermatter
My Blog
Message 3 of 8

Hello,

 

thanks for your helpful replies.

I fully understand the point concerning the alignment and the use of pragma pack, but I have three constraints:

- 1-byte alignment to read the data from the file

- 1-byte alignment (LV32) vs. 8-byte alignment (LV64) for the LabVIEW cluster

- I would like the same code to compile for both 32-bit and 64-bit.

However, I simplified by removing lv_prolog.h, as I am only compiling with VC2022 on Windows.

 

I followed your advice and it is now working.

It would be very helpful if you know a way to automate the to_labview function, as my real structure is much more complicated.

 

thanks again,

 

Lionel

 

```

#pragma pack(push, 1) // not using lv_prolog.h anymore, as I am working only under Windows

typedef struct {
    int16_t num;
    int32_t a;
    double val;
} test_struct;

static test_struct test_struct_default = { 0, 0, 0.0 };

#include "lv_epilog.h"
#if MSWin && (ProcessorType == kX64)
#pragma pack(push, 8) // #pragma pack(show) gives 16 by default with VC2022; LV64 requires 8
#endif

typedef struct {
    int16_t num;
    int32_t a;
    double val;
} test_struct_lv;

int to_labview(test_struct *wtemp, test_struct_lv *w)
{
    w->a = wtemp->a;
    w->num = wtemp->num;
    w->val = wtemp->val;
    return 0;
}

extern "C" __declspec(dllexport) int headertest(char filename[], test_struct_lv *w)
{
    int b_read;
    FILE *fileid;
    test_struct *wtemp = &test_struct_default;
    fileid = fopen(filename, "rb");
    b_read = fread(wtemp, sizeof(test_struct), 1, fileid);
    to_labview(wtemp, w);
    fclose(fileid);
    return b_read;
}

```

Message 4 of 8

Something is wrong. The Windows 64-bit ABI default alignment is 8 bytes. Did you change the default alignment in your project settings?

But since none of your data elements is larger than 8 bytes, it shouldn't change anything. I'm not even sure what 16-byte alignment would change. Maybe for special FPU/MMX data types, but those should not come into play in a normal C program and are not an option in the LabVIEW diagram.

 

There is no trivial way to convert between different alignments. Writing a function that could do that, the way the LabVIEW Flatten and Unflatten functions do, would take quite a bit of C code and would require the user to supply a recursive type descriptor. Nothing that would end up easier than writing out your specific function. Or you do the unflattening in the LabVIEW diagram: unless you also have strings or arrays in there, the LabVIEW flattened format matches byte packing exactly.

Rolf Kalbermatter
My Blog
Message 5 of 8

There is no /Zp option in my compilation.

 

I found on the webpage: https://learn.microsoft.com/en-us/cpp/preprocessor/pack?view=msvc-170

 

" If the compiler option isn't set, the default value is 8 for x86, ARM, and ARM64. The default is 16 for x64 native and ARM64EC."

Message 6 of 8

I see. However, since none of your data types is larger than 8 bytes, it should not make any difference. Also, you should not need special packing for Windows 64-bit. Supposedly the packing configuration as used in lv_prolog.h is what LabVIEW uses for all its data declarations. I have done quite a bit of shared library code for LabVIEW 32-bit and 64-bit and always left the lv_prolog.h setting without any extra pragmas for LabVIEW diagram data types. Admittedly not in Visual Studio 2022 or thereabouts, but still!

Rolf Kalbermatter
My Blog
Message 7 of 8

Following your advice, here is a solution that works nicely in my application on both 32-bit and 64-bit.

Thanks again for your help.

 

```

// ---------------- Structs internal to C ----------------

#pragma pack(push, 1)

typedef struct {
    int16_t num;
    int32_t a;
    double val;
} test_struct;

static test_struct test_struct_default = { 0, 0, 0.0 };

// ---------------- Struct exchanged with LabVIEW ----------------

#include "lv_prolog.h"

#if MSWin && (ProcessorType == kX64)
#pragma pack(push, 8) // LV64 needs 8, whereas the VS2022 default is 16
#endif

typedef struct {
    int16_t num;
    int32_t a;
    double val;
} test_struct_lv;

#include "lv_epilog.h"

int tolabview(test_struct *wtemp, test_struct_lv *w)
{
    w->a = wtemp->a;
    w->num = wtemp->num;
    w->val = wtemp->val;
    return 0;
}

extern "C" __declspec(dllexport) int headertest(char filename[], test_struct_lv *w)
{
    int b_read;
    FILE *fileid;
    test_struct *wtemp = &test_struct_default;
    fopen_s(&fileid, filename, "rb");
    b_read = fread(wtemp, sizeof(test_struct), 1, fileid);
    tolabview(wtemp, w);
    fclose(fileid);
    return b_read;
}

```

Message 8 of 8