LabVIEW


Waveform vs Array

Hey Y'all,  I'm currently writing some code (sorry, I can't really share it, as it's a pretty huge project that spans FPGA, RT, and PC), and in this project I'm moving quite a bit of data.  I currently have it as a 2D array and am passing it up to the PC that way, but I realized that I'm essentially passing just the Y part of a waveform, which got me thinking.  Are there any advantages to using a Waveform over an Array when transporting data?  Does LabVIEW compress it differently or allocate memory differently in a way that makes it faster or more efficient?  Thank you!

 

EDIT: I forgot to mention that at some point I'll probably start collecting and passing the start timestamp as well, so really it's either bundle it all into a waveform or pass all the separate components.

Message 1 of 4

It's really just a LabVIEW cluster.  I suspect that results in a tiny bit of extra overhead, but that wouldn't matter unless you are moving millions of them around.
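In text form, a waveform is roughly the following cluster. This is a rough Python analogue for illustration only, not NI's actual implementation; the field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Waveform:
    """Rough analogue of a LabVIEW waveform: a cluster of t0, dt, and Y.

    The Y array dominates the memory footprint; t0, dt, and the
    attribute variant add only a small fixed overhead per waveform.
    """
    t0: float = 0.0   # start timestamp (LabVIEW actually uses a 128-bit timestamp)
    dt: float = 1.0   # sample interval in seconds
    Y: list = field(default_factory=list)          # the sample data
    attributes: dict = field(default_factory=dict)  # optional named attributes

wf = Waveform(t0=0.0, dt=0.001, Y=[0.1, 0.2, 0.3])
```

So passing a bare 2D array versus a waveform mostly changes whether that small per-block header travels with the data.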

 

If you need more efficiency, I would look into going the other way and converting the Y array floats into a binary package for transfers.
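As a sketch of what "binary package" means here (stdlib Python standing in for a LabVIEW Flatten To String step), the float64 samples serialize to exactly 8 bytes each with no per-element overhead:

```python
import struct

def pack_samples(samples):
    """Pack float64 samples into a compact little-endian byte string."""
    return struct.pack(f"<{len(samples)}d", *samples)

def unpack_samples(blob):
    """Recover the float64 samples from the packed bytes."""
    n = len(blob) // 8
    return list(struct.unpack(f"<{n}d", blob))

blob = pack_samples([1.0, 2.5, -3.0])
```

The round trip is exact, and the blob can be sent over any byte-oriented transport.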

LabVIEW Pro Dev & Measurement Studio Pro (VS Pro) 2019
Message 2 of 4

The ADC produces the lowest data volume, at 1/2/4 bytes per sample depending on the ADC resolution; the rest is (usually linear) scaling. dt is also usually fixed, so start time and dt, plus offset and gain, together with an integer array make an easily obtainable, compact* data cluster without data loss.
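That cluster can be sketched like this (stdlib Python; the gain/offset values are made-up examples): a small fixed header carries the scaling, and each 16-bit sample takes 2 bytes instead of the 8 a DBL would need.

```python
import struct

# Example scaling constants (assumed values, for illustration only)
T0, DT = 0.0, 1e-4          # start time and fixed sample interval
GAIN, OFFSET = 0.000305, -10.0   # linear scaling: volts = counts * GAIN + OFFSET

def to_volts(counts):
    """Apply the linear scaling to raw ADC codes."""
    return [c * GAIN + OFFSET for c in counts]

counts = [0, 32768, 65535]                         # raw 16-bit ADC codes
header = struct.pack("<dddd", T0, DT, GAIN, OFFSET)  # 32-byte fixed header
payload = struct.pack(f"<{len(counts)}H", *counts)   # 2 bytes per sample
```

Reconstruction on the PC side is lossless because the scaling is exactly reproducible from the header.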

 

*) I don't know TDMS compression at the higher levels, but with HDF5 at high (lossless) compression you usually get even smaller data than raw ADC codes cut to the ADC resolution (if you don't capture pure white noise 😉 ), if that is a concern. Depending on the hardware, writing and reading to/from mass storage can be faster even if the de/compression takes some time.
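The "smaller than raw ADC codes" effect is easy to demonstrate with any Deflate-style lossless codec (stdlib zlib here as a stand-in for HDF5's gzip filter): structured signals compress well, white noise would not.

```python
import struct
import zlib

# A repetitive ramp signal: highly compressible, unlike white noise
counts = [i % 256 for i in range(10000)]
raw = struct.pack(f"<{len(counts)}H", *counts)   # 2 bytes per 16-bit code

packed = zlib.compress(raw, level=9)             # lossless compression
restored = zlib.decompress(packed)
```

The compressed blob is a fraction of the raw size, and decompression restores the codes bit-for-bit.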

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 3 of 4

@Henrik_Volkers wrote:

 

*) I don't know TDMS compression at higher levels, but with HDF5 with high (lossless) compression you usually get even smaller data than raw ADC cut to ADC resolution (if you don't capture pure white noise 😉 ), if that is of concern. Depending on the hardware, writing and reading to/from mass memory can be faster even if the de/compression  takes some time.


The conversation has shifted a bit, but I did want to talk about TDMS for a moment.  TDMS itself has no built-in compression feature.  My Tremendous TDMS package has a compress feature built into the Circular TDMS class, but all it does is take the in-memory TDMS references, get the file contents, and add them to a Zip containing a single TDMS file.  That Zip with a single TDMS file inside can be opened directly in Scout by SignalX, an underrated TDMS viewing utility that is super useful if you deal with TDMS files.  I didn't add compression to the other classes, partially because there you are dealing with a practically limitless amount of data, whereas the circular buffer is intended to be time based and on the order of minutes of data to compress.

 

I have looked into a few other ideas with TDMS, like compressing individual channels, or samples within a channel.  The problem is that compression typically works best when there are patterns in the data.  These patterns usually aren't present in single samples, but rather across an array of samples.  But at that point you'd need tools like Scout to be updated to detect these types of compression and decompress on the fly, likely storing some metadata like the number of samples and maybe the data type.  I could see a scheme where, say, 1000 samples in a channel are run through a ZLib Deflate function, and the result is written as a single sample within a TDMS file as a string (or array of bytes).  Then if you want to read data from that TDMS file, as in Scout, the reader would have to know the compression format, inflate the subsamples you want to read, and return them.  A fun experiment for sure.
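The chunk-compression idea above can be sketched in a few lines (stdlib Python; the actual TDMS write/read of the opaque byte-string sample is out of scope here):

```python
import struct
import zlib

def compress_chunk(samples):
    """Deflate a chunk of float64 samples into one opaque byte-string 'sample'."""
    raw = struct.pack(f"<{len(samples)}d", *samples)
    return zlib.compress(raw)

def decompress_chunk(blob):
    """Inflate one stored byte-string back into the original float64 chunk."""
    raw = zlib.decompress(blob)
    return list(struct.unpack(f"<{len(raw) // 8}d", raw))

chunk = [float(i % 10) for i in range(1000)]   # 1000-sample chunk with patterns
blob = compress_chunk(chunk)
```

A reader would still need the metadata mentioned above (chunk length, data type) to know how to inflate each stored sample.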

 

But for now the best solution really is just to write your TDMS file and then zip it afterward.  All the data will be there, it can be opened in Scout, and it is relatively easy to work with, while also drastically reducing the file size.
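Conceptually, the write-then-zip step looks like this (stdlib Python sketch with a dummy stand-in file, not the Tremendous TDMS implementation):

```python
import os
import tempfile
import zipfile

# Write a dummy file standing in for real TDMS contents
path = os.path.join(tempfile.mkdtemp(), "run.tdms")
with open(path, "wb") as f:
    f.write(b"\x00" * 100_000)

# Store the finished file inside a zip using Deflate compression
zpath = path + ".zip"
with zipfile.ZipFile(zpath, "w", zipfile.ZIP_DEFLATED) as z:
    z.write(path, arcname="run.tdms")
```

Nothing about the TDMS structure changes, so a viewer that understands zipped TDMS can open it directly.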

Message 4 of 4