LabVIEW


Write to Binary File Cluster Size

Solved!

I have a cluster of data I am writing to a file (all different types of numerics, and some U8 arrays). I write the cluster to the binary file with prepend array size set to false. It seems, however, that there is some additional data included (probably so LabVIEW can unflatten it back to a cluster). I have verified this by unbundling every element, type casting each one to a string, then getting the string lengths of the individual elements and summing them all. The result is the correct number of bytes. But if I flatten the whole cluster to a string and get that length, it's 48 bytes larger and matches the file size. Am I correct in assuming that LabVIEW is adding metadata for unflattening the binary file back into a cluster, and is there any way to get LabVIEW to not do this?

 

I'd really prefer to not have to write all 30 cluster elements individually. A different application is reading this and it is expecting the data without any extra bytes.
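For anyone reading along, here is a rough Python sketch (not LabVIEW) of where those extra bytes come from. The field layout below is made up for illustration; the point is that LabVIEW's flattened form adds a big-endian I32 length prefix to every array inside a cluster, so 12 arrays/strings would account for exactly 48 extra bytes:

```python
import struct

# Hypothetical cluster: some DBL scalars plus U8 arrays, flattened the
# way LabVIEW flattens a cluster (big-endian, with an I32 length
# prefix in front of every array inside the cluster).
def flatten_cluster(scalars, arrays):
    out = b""
    for s in scalars:                       # DBL numerics, 8 bytes each
        out += struct.pack(">d", s)
    for a in arrays:                        # U8 arrays
        out += struct.pack(">i", len(a))    # the extra 4 bytes per array
        out += bytes(a)
    return out

def raw_size(scalars, arrays):
    # Size of the payload alone, with no length prefixes.
    return 8 * len(scalars) + sum(len(a) for a in arrays)

scalars = [1.0, 2.0]
arrays = [[1, 2, 3], [4, 5]]
flat = flatten_cluster(scalars, arrays)
extra = len(flat) - raw_size(scalars, arrays)
print(extra)  # 4 bytes per array -> 8 here; 12 arrays would give 48
```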

Message 1 of 7
Solution
Accepted by topic author GregFreeman

I had overlooked this in the context help:

 

Arrays and strings in hierarchical data types such as clusters always include size information.

 

Well, that's a pain.

Message 2 of 7

Yup, I remember reading a CLAD question on this. It had a 2x2 2D array of U8 and asked how much space it takes up when written to a binary file. The obvious answer is 4 bytes, but the correct answer is 4 bytes plus an I32 for each dimension. No idea what the workaround is. Maybe cast to a string and write that? Or cast to a 1D array of bytes, write that to the file, then delete the first 4 bytes of the file (which would be the I32 size). Why do you want to write to the file without this information?
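That CLAD example is easy to check by hand. A sketch of the flattened form, assuming the documented layout (one big-endian I32 per dimension, then the row-major data):

```python
import struct

# A 2x2 U8 array as LabVIEW flattens it: one big-endian I32 per
# dimension, followed by the row-major element data.
rows, cols = 2, 2
data = bytes([1, 2, 3, 4])
flat = struct.pack(">ii", rows, cols) + data
print(len(flat))  # 4 + 4 + 4 = 12 bytes, not the "obvious" 4
```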

Message 3 of 7

@Hooovahh wrote:

 Why do you want to write to file without this information?


Because the existing parser does not expect that information there, so it chokes. The workaround was just to write each element individually in its own case, as follows. When I write arrays that aren't inside a cluster and set prepend array size to false, it works. Of course, if an element is added to the cluster I will get a runtime error when I hit the default case, but this is a pretty fixed format, so I'll cross that bridge when I come to it.
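In Python terms, the workaround amounts to something like this sketch (field types and order are assumptions; the real VI writes each cluster element with its own Write to Binary File call and prepend array size set to FALSE):

```python
import io
import struct

# Sketch of the workaround: write each cluster element individually so
# no length prefixes end up in the file (mirrors wiring each element to
# Write to Binary File with "prepend array size?" = FALSE).
def write_fields(f, fields):
    for value in fields:
        if isinstance(value, float):
            f.write(struct.pack(">d", value))   # DBL scalar, 8 bytes
        elif isinstance(value, int):
            f.write(struct.pack(">i", value))   # I32 scalar, 4 bytes
        else:
            f.write(bytes(value))               # U8 array, raw bytes only

buf = io.BytesIO()
write_fields(buf, [3.14, 7, [1, 2, 3]])
print(len(buf.getvalue()))  # 8 + 4 + 3 = 15 bytes, no size info
```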

 

 

Message 4 of 7

You will still need to separate things, but you can put just the numbers in one cluster and write the arrays separately. That might give you fewer than 30 individual writes.

 

Snippet.png

 

mcduff

Message 5 of 7

@mcduff wrote:

You will still need to separate things, but you can put just the numbers in one cluster and write the arrays separately. That might give you fewer than 30 individual writes.

 

Snippet.png

 

mcduff


That doesn't work if the file format defines the array to sit between two scalars from the first cluster. The easiest way is just to write each element individually so I have control over it, even though it's a bit painful.

Message 6 of 7

If you know your data type you can calculate the offset where the length is prepended.

 

Cheers,

mcduff

 

Snippet.png
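mcduff's idea can be sketched like this (a post-processing pass, assuming you know the exact field layout of the flattened cluster): walk the bytes, and at each known array offset read and discard the I32 prefix.

```python
import struct

# Sketch, assuming a known fixed layout: strip the I32 length prefixes
# LabVIEW inserts before each array in a flattened cluster, leaving
# only the raw payload an external parser expects.
# Field spec: ("scalar", byte_size) or ("array", element_size).
def strip_prefixes(flat, fields):
    out, pos = b"", 0
    for kind, size in fields:
        if kind == "scalar":
            out += flat[pos:pos + size]
            pos += size
        else:  # "array": read the I32 prefix, keep only the data
            (n,) = struct.unpack_from(">i", flat, pos)
            pos += 4
            out += flat[pos:pos + n * size]
            pos += n * size
    return out

# Example layout: one U16 scalar, then a 3-element U8 array.
flat = struct.pack(">H", 0xABCD) + struct.pack(">i", 3) + bytes([9, 8, 7])
raw = strip_prefixes(flat, [("scalar", 2), ("array", 1)])
print(raw.hex())  # abcd090807
```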

Message 7 of 7