LabVIEW


Improving code efficiency when dealing with large array sizes, plotting large 1D waveform graphs

I would be glad to have recommendations on improving code efficiency when dealing with large array sizes, and also on plotting large 1D waveform graphs.

In particular, when building a large 1D waveform array (using Build Waveform, with a string array of channel curve names as an attribute and a double numeric array for the data), what is the best way to do this without experiencing freezes/hangs?


Message 1 of 28

In general, decimate the array you present on the graph. A full screen graph is ~2k pixels wide, so how many samples do you really need to show?
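
Since LabVIEW is graphical, here is the idea as a rough Python sketch rather than a VI snippet (the function name is mine, purely for illustration): keep roughly one sample per pixel.

import numpy as np

def decimate_for_display(y, max_points=2000):
    # Every Nth point stands in for its neighbors: ~one sample per pixel.
    step = max(1, len(y) // max_points)
    return y[::step]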

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 2 of 28
Sorry Yamaeda, I am very much a beginner. Can you explain further what you mean by decimate? All of the samples must be shown at once, irrespective of resolution; the user can scroll and zoom as necessary.


Message 3 of 28

It is much easier to answer a "specific" question, and much easier to answer a question when relevant code (VIs, not pictures) is included.

 

So what does "Large" mean when talking about arrays?  What will you be doing with these arrays?  What sort of accuracy and precision do you need?  If you are displaying data on a graph, take note that unless you have a really (really) big screen, most displays can show fewer than 2000 points along the X axis.  For graphing purposes, you could therefore consider either decimating or averaging your "large array" to bring the number of display points down to around 1000.

 

Bob Schor

 

Decimate -- display every Nth point (one point "stands for" the other N-1).

Average -- display the average of N points (and only the averages).

 

Both schemes take M points (where M = "millions", or some large number) and reduce them to N (on the order of 1000) for display purposes.  You see a single plot, no scrolling, but (obviously) you don't see every single point.  Decimation "preserves the variability" in the data -- decimated data looks "as noisy" as a sample of size N.  Averaging reduces the variability of the data by a factor of the square root of the number of points averaged, M/N (a consequence of what is sometimes called "the Law of Large Numbers").
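
In Python terms (a sketch of both schemes, not LabVIEW code; I trim the ragged tail so the reshape works):

import numpy as np

def decimate(y, n_out=1000):
    # Every Nth point: preserves the apparent variability/noise.
    step = max(1, len(y) // n_out)
    return y[::step]

def block_average(y, n_out=1000):
    # Mean of each block of M/N points: smooths variability by ~sqrt(M/N).
    n = max(1, len(y) // n_out)
    trimmed = y[:(len(y) // n) * n]   # drop the ragged tail
    return trimmed.reshape(-1, n).mean(axis=1)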

Message 4 of 28
Bob, by large I mean arrays in the neighborhood of 300 columns by 1 million rows (or potentially even more).

The data should have a minimum of 6 digits of precision. In this particular question I am concerned about plotting. The reason I don't want to average is that when the user places the cursor at any point on the graph, it should show the real value at that point, not some average value.


Message 5 of 28

Attached is what I am trying to achieve. Yamaeda, if I understood what you meant by decimate correctly, does it mean taking the transpose of the array so that the rows are much shorter than the columns, then indexing the resulting array column by column through the FOR loop, as opposed to interleaved?



Message 6 of 28
OK Bob, I understand what you mean by decimate now. However, I would like to be able to plot all points, and will decimate only as a last resort if I can't.

Currently I am able to plot 80 columns x 110,000 rows without problems. Is there a way to plot this stage by stage to achieve multiples of the aforementioned array size?


Message 7 of 28

@blessedk wrote:
OK Bob, I understand what you mean by decimate now. However, I would like to be able to plot all points, and will decimate only as a last resort if I can't.

Currently I am able to plot 80 columns x 110,000 rows without problems. Is there a way to plot this stage by stage to achieve multiples of the aforementioned array size?

You should of course keep all the data in memory, but plotting at 1000x the display resolution is just a memory and performance hog, which is exactly what you asked about. 🙂

(Though I personally like to draw too much, I try to keep it to at most 10x the display resolution, i.e. 10k-20k samples.)

To keep the program fast and smooth, you should capture scroll and zoom events on the graph, grab the relevant data subset, and graph that, as in the sketch below.
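
As a rough Python sketch of that pattern (in LabVIEW you would do this in an Event Structure on the graph's scale-change event; the function and parameter names here are mine):

import numpy as np

def on_scale_range_change(x_raw, y_raw, x_min, x_max, max_points=2000):
    # Slice the raw data down to the visible x-range...
    lo = np.searchsorted(x_raw, x_min)
    hi = np.searchsorted(x_raw, x_max)
    # ...then decimate the slice so the graph never gets more than ~max_points.
    step = max(1, (hi - lo) // max_points)
    return x_raw[lo:hi:step], y_raw[lo:hi:step]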

 

Generally, the graph is there to give you a feel for how the data behaves; it is not the result, which should be calculated from the raw data.

 

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 8 of 28

@blessedk wrote:

Attached is what I am trying to achieve. 


That VI does not tell us anything at all. Please show us something more realistic.


@blessedk wrote:

The reason I don't want to average is that When the user places the cursor at any point on the graph it should show the real value at that point not some average value.


You are missing the point. There is no way to place the cursor more precisely than one pixel, and if you are plotting thousands of points per pixel, whatever value the user gets by clicking is essentially random.

How fast does the data change? Typically, nearby points are correlated, so getting six-digit resolution at a random point of a noisy trace is not meaningful. A local average is a much more useful value.
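
For example, the cursor readout can map the clicked x back into the raw array and report a local mean (a minimal sketch; the window size and names are my own assumptions):

import numpy as np

def cursor_value(x_raw, y_raw, x_clicked, half_window=50):
    # Report the local mean around the clicked x instead of one arbitrary
    # sample out of the thousands sharing that pixel.
    i = np.searchsorted(x_raw, x_clicked)
    lo = max(0, i - half_window)
    hi = min(len(y_raw), i + half_window)
    return y_raw[lo:hi].mean()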

Message 9 of 28
I have written about this exact point.

http://www.notatamelion.com/2015/07/31/plotting-large-datasets-without-waiting-forever/

A key point is that for the decimation process to work well you have to understand your data.

Mike...

Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

Message 10 of 28