LabVIEW


When collecting data, how can I start a new column in my array every 100 data points?

Solved!

@ErnieH wrote:

Everything should be challenged, including unnecessarily rude and condescending remarks?


My quote was: "It is considered bad form to use "delete from array" inside an inner loop, you are constantly allocating new memory. For large arrays it can be orders of magnitude slower." 

 

Can you point out the rude and condescending parts for me? It seems to me very neutral and factual. Would it have made a difference if I had added a few smileys at the end? 😄
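Since LabVIEW code is graphical and can't be pasted here, below is a rough Python/NumPy analogue of the technical point in that quote (the function names are mine, for illustration): deleting from an array inside a loop reallocates and copies the remainder on every call, while a single slice or copy does the same job with one allocation.

```python
import numpy as np

def drain_with_delete(a):
    # Anti-pattern: the analogue of "delete from array" inside a loop.
    # Each np.delete call allocates a new array and copies the remainder,
    # so the whole loop is O(n^2) in copies.
    out = []
    a = np.asarray(a)
    while a.size:
        out.append(a[0])
        a = np.delete(a, 0)   # full copy of the remaining elements each time
    return np.array(out)

def drain_with_slice(a):
    # Equivalent result with a single allocation.
    return np.asarray(a).copy()

data = np.arange(1000)
assert np.array_equal(drain_with_delete(data), drain_with_slice(data))
```

Both functions return the same result; the difference only shows up in timing and memory churn as the array grows.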


@ErnieH wrote:

I read a lot but don't post often. This is an example of why that is the case. My mistake was posting to begin with.


The tone of the forum is honest and professional, and you should have noticed that after reading a lot. Everybody is encouraged to post! For beginners, it is one of the best ways to learn. Everything I know about LabVIEW was learned by making mistakes.

 

(However, I agree that your second post was not really necessary.)

 


Message 11 of 21

@ErnieH wrote:

Everything should be challenged, including unnecessarily rude and condescending remarks? I read a lot but don't post often. This is an example of why that is the case. My mistake was posting to begin with.


 

I'm not one to get in the middle of discourse on these forums, but I would like to say I feel altenbach's comment was read out of context. He was just stating a fact; the terseness of it may have come off as cross, but I am sure there was no hostility behind it. He was being frank about a programming practice that I also learned here on the forums and now always use in my code, because I have been bitten by the "delete from array" plague before. I respectfully encourage you to do the same!

The reason I feel it is important to code with in-placeness even for small arrays, where the timing doesn't matter, has little to do with efficiency if the array grows in the future (most of the time I know it won't). It is about building the habit of this particular good programming practice, so that it becomes my first and most familiar option in situations where "delete from array" could be used.

Again, take or leave the advice, but I encourage you not to take things too personally. Everyone here is just trying to help each other become better programmers! Please excuse my run-on sentences; I'm writing from my cell phone and just couldn't let this thread go without putting in my two cents.
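The "in-placeness" habit described above can be sketched in Python/NumPy terms (a minimal illustration, not LabVIEW code): preallocate the output once and write into it, instead of growing or shrinking an array inside the loop.

```python
import numpy as np

def chunk_preallocated(data, width):
    """Split a 1-D array into rows of `width` samples (length must divide evenly)."""
    rows = len(data) // width
    out = np.empty((rows, width), dtype=data.dtype)   # one allocation up front
    for r in range(rows):
        # In-place writes into the preallocated buffer; no reallocation per pass.
        out[r, :] = data[r * width:(r + 1) * width]
    return out

data = np.arange(12)
result = chunk_preallocated(data, 4)
assert result.shape == (3, 4)
assert result[1, 0] == 4
```

In LabVIEW the same idea is initializing the output array once and using Replace Array Subset inside the loop, rather than building or deleting as you go.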

Message 12 of 21

I was more focused on showing how to create a VI with a dataset, to make it easier to develop the algorithm. Here is a VI with some comparisons between methods, plus an updated VI. You can play around with the array size to see how it affects overall timing. Of course, with larger arrays the differences become more pronounced.

Message 13 of 21

@ErnieH wrote:

Here is a VI with some comparisons between methods and an updated VI. You can play around with the array size to see how it affects overall timing.


I really don't want to criticise you, but your benchmark is completely meaningless because it does not measure what you want to measure. The results will be quite random.

 

The two FOR loops, as well as the "start tick", are all independent code segments that run in parallel. There is no guarantee that the tick frame completes before the two loops start, and since both loops run in parallel (until one completes), they steal each other's clock cycles; all you are measuring is some random compiler scheduling. Results will strongly depend on the number of available CPU cores. An inner loop should also not have any indicators, because you don't want to measure front panel updates and transfer buffer writes. Taking the tick count also takes a finite effort, so it should not be done on each iteration of the loop. I would also turn off debugging, because fast loops have a measurable overhead if they need to drag along debugging code.

 

A typical benchmark consists of a three-frame flat sequence with a tick in the first and last frame and the code of interest in the middle. Debugging should be turned off. There should be no controls or indicators in the middle frame, and there should not be any other code that can execute in parallel with it. Turn on folding display for wires and structures to ensure that no constant folding is going on (which could give artificially fast results). Instead of the tick count, I would also recommend the "high resolution relative seconds.vi" (sp?) found in the .../vi.lib/utilities folder. It is much more accurate for fast time deltas.
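For readers more comfortable in text-based languages, the same pattern can be sketched in Python (an illustrative analogue, not LabVIEW): take a timestamp, run only the code of interest with nothing else able to execute in between, take another timestamp. `time.perf_counter()` plays the role of the high-resolution relative-seconds VI; the three statements inside the loop are the analogue of the three-frame flat sequence.

```python
import time
import numpy as np

def benchmark(fn, *args, repeats=5):
    """Return the best-of-N wall time for fn(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()   # frame 1: start tick
        fn(*args)                  # frame 2: code under test, no display, no I/O
        t1 = time.perf_counter()   # frame 3: stop tick
        best = min(best, t1 - t0)  # take the best run to reduce scheduling noise
    return best

data = np.arange(100_000)
elapsed = benchmark(np.sort, data)
assert elapsed >= 0.0
```

Taking the minimum over several repeats is one common way to suppress the random scheduling effects described above.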

 

Accurate benchmarking is an art and most here have been bitten by incorrect benchmarking code!

Message 14 of 21

For some values of "Number of points in each column", Delete is faster than Subset (unless I've messed up). For ALL values of "Nopiec", altenbach's method is three orders of magnitude faster. Speed does not appear to be a problem for the OP. For me, it is maddening when one of the heavy-hitters comes in and offers what feels like an over-educated, idealist solution to a simple problem. But then I run into the need to process very large amounts of data, and I am thankful for the knowledge.

Message 15 of 21

Well, I messed that up! After fixing the Delete case, its performance decreases by a factor of about 10.

And I neglected to transpose them all. (Details, details!)

 

Message 16 of 21

OK, here's a quick benchmark comparing a few algorithms:

 

  1. delete from array 
  2. Array subset
  3. reshape
  4. parallel array subset (same as #2, but using parallel FOR loop)

All four produce identical results: a new array with X points per row (I omitted the transpose in my code to match the others' results). I only have a dual core here, so #4 makes no difference; maybe there is some gain on four-core machines.

 

For an input of 100000 and a row size of 100, I get:

 

  1. 31ms
  2. 2ms
  3. 0.5ms
  4. 2ms

The results are much more dramatic for larger sizes or shorter rows. For example if I keep the input at 100000, but change the row size to 10, I get:

 

  1. 280ms (several hundred times slower!!)
  2. 10ms
  3. 0.7ms
  4. 10ms

Your mileage may vary. 😄

 

Please verify correct operation. A few things can still be improved. This just looks at speed. There are also other issues to consider, such as memory fragmentation.
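The three serial algorithms above can be sketched in Python/NumPy (a hedged reconstruction for illustration; the parallel variant, #4, is omitted since it only differs by loop parallelism). All three must produce the same rows, and the relative costs mirror the LabVIEW timings: repeated deletion reallocates on every pass, subset indexing copies once per row, and reshape is essentially free for contiguous data.

```python
import numpy as np

def rows_by_delete(data, width):
    # 1. Repeatedly chop `width` samples off the front -- slowest.
    rows = []
    a = data
    while a.size:
        rows.append(a[:width])
        a = np.delete(a, np.s_[:width])   # reallocates the remainder each pass
    return np.array(rows)

def rows_by_subset(data, width):
    # 2. Index each subset directly; the source array is never shrunk.
    return np.array([data[i:i + width] for i in range(0, len(data), width)])

def rows_by_reshape(data, width):
    # 3. Pure reshape -- for contiguous data this is a metadata change only.
    return data.reshape(-1, width)

data = np.arange(10_000)
a, b, c = (f(data, 100) for f in (rows_by_delete, rows_by_subset, rows_by_reshape))
assert np.array_equal(a, b) and np.array_equal(b, c)
```

As in the measurements above, verify correct operation before trusting any timing numbers.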

 

Message 17 of 21

Hey, thank you, this worked perfectly! I had to modify it a bit for my needs, but it helped me understand the array reshaping better. Is it possible to add a first column of constant time intervals? I'm taking data at regular intervals and just need to insert a column of constant steps (i.e., if I take 50 data points each 0.01 ms apart, I need a column with 0.01, 0.02, 0.03, etc.). I've looked at other forum posts about it, and they all seem to do it earlier in the data collection process. I wasn't sure if there's a way to append it to the file I've been working with.

 

Thanks!
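A hedged Python/NumPy sketch of the request above (the function name and the choice of one timestamp per row are my assumptions; adjust `dt` and the placement to match your data): build the time values with a ramp and stack them on as the first column of the already-reshaped array.

```python
import numpy as np

def add_time_column(rows, dt):
    """Prepend a time column: row k gets timestamp (k + 1) * dt."""
    times = (np.arange(1, rows.shape[0] + 1) * dt).reshape(-1, 1)
    return np.hstack([times, rows])   # time column first, then the data columns

data = np.arange(12.0).reshape(4, 3)       # 4 rows of 3 samples each
stamped = add_time_column(data, 0.01)
assert stamped.shape == (4, 4)
assert abs(stamped[2, 0] - 0.03) < 1e-12   # third row stamped at 3 * dt
```

The LabVIEW equivalent of the ramp is the Ramp Pattern VI (or a FOR loop multiplying the iteration count by the interval), appended with Insert Into Array or Build Array before writing to file.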

Message 18 of 21

Message 19 of 21

Worked perfectly, thanks! My next question (thank you for continuing to help me, I really appreciate it!) is: what do I modify to do the same thing for a column? Can I just transpose it somewhere?
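Yes, in the spirit of the question above: transposing the result swaps rows and columns, so the same reshaping code serves both layouts. A minimal Python/NumPy illustration (in LabVIEW terms, this is dropping a Transpose 2D Array node on the output):

```python
import numpy as np

data = np.arange(12).reshape(3, 4)   # 3 rows of 4 samples

# .T swaps rows and columns; in NumPy it is a view, not a copy.
assert data.T.shape == (4, 3)
assert data.T[0, 1] == data[1, 0]    # element (row, col) becomes (col, row)
```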

 

Message 20 of 21