LabVIEW


Request: Make these VIs faster for large arrays

Solved!

Hey everyone,

 

Would anyone mind taking a look at these two VIs? They run fine for small data sets, but they take up to 200 ms each for large arrays (~5000 rows x 2 columns). The first feeds its data to the second, so the total delay when running these VIs can reach 400-500 ms, far too much for real-time use.

 

Background of how they're used:

The user can click or click-and-drag on an Intensity Graph to create a map of die to test. These VIs are run to update a list of which die are chosen and a list of how the machine will move. In click-and-drag mode, these VIs create a very large delay in the front panel response.

 

First VI:

The input is a 2D array of 1s and 0s, denoting test or don't test. The output is a 2D array (Nx2) of (row, column) pairs that are to be probed.
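The block diagram itself is in the attached PNG; purely to restate that logic in text (the function name below is made up, not anything from the VI), a plain-loop Python equivalent would be:

def die_to_test(probe_array):
    """probe_array: 2D list of 1s and 0s. Returns an Nx2 list of
    (row, column) pairs for every die marked 1."""
    result = []
    for r, row in enumerate(probe_array):
        for c, value in enumerate(row):
            if value == 1:
                result.append((r, c))
    return result

# Example: two die selected in a 3x3 map
print(die_to_test([[0, 1, 0],
                   [0, 0, 0],
                   [1, 0, 0]]))   # [(0, 1), (2, 0)]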

 

Second VI:

The input is a 2D array (Nx2) of (row, column) pairs (from the first VI). This VI calculates the relative movement from the current die to the next die. The output is a 2D array (Nx2) of relative movements (Y, X). Example: ((2,4),(5,1)) returns ((0,0),(3,-3)).
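Again just to pin the behaviour down in text (this is not the attached diagram, and the function name is made up), a minimal Python sketch that reproduces the example above:

def relative_moves(die_list):
    """die_list: non-empty Nx2 list of (row, column) positions from the
    first VI. Returns an Nx2 list of relative (Y, X) moves; the first
    entry is (0, 0)."""
    moves = []
    prev = die_list[0]
    for r, c in die_list:
        moves.append((r - prev[0], c - prev[1]))
        prev = (r, c)
    return moves

print(relative_moves([(2, 4), (5, 1)]))   # [(0, 0), (3, -3)]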

 

I've already cut the execution time down to about half of what it was, but it's still not fast enough. And yes, the error terminals are not connected, I know 🙂

 

VIs (also attached):

Probe array to test array.png

 

relative movement list.png

 

So what do you think? Is there some subVI that I don't know about that completely replaces the first VI? Is one of the subVIs I'm using inherently slow?

Message 1 of 59

In the first VI you should initialize an array before entering the loop. Inside the loop, use Replace Array Subset instead of Build Array. Build Array forces LabVIEW to find another memory block each time. SLOW.
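The advice above concerns LabVIEW's Build Array, Initialize Array, and Replace Array Subset primitives; as a rough NumPy analogue of the same memory pattern (assuming a 2D 0/1 probe map, names made up), compare growing inside the loop with preallocating once:

import numpy as np

def grow(probe):
    """Build Array analogue: append inside the loop.
    Every np.append allocates a new buffer and copies the old data."""
    out = np.empty((0, 2), dtype=np.int32)
    for r in range(probe.shape[0]):
        for c in range(probe.shape[1]):
            if probe[r, c] == 1:
                out = np.append(out, [[r, c]], axis=0)   # reallocates each time
    return out

def prealloc(probe):
    """Initialize Array + Replace Array Subset analogue: allocate the
    worst-case size once, overwrite rows in place, then trim."""
    out = np.empty((probe.size, 2), dtype=np.int32)      # worst case: every die selected
    n = 0
    for r in range(probe.shape[0]):
        for c in range(probe.shape[1]):
            if probe[r, c] == 1:
                out[n] = (r, c)                          # in-place replace, no reallocation
                n += 1
    return out[:n]                                       # keep only the filled rows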

=====================
LabVIEW 2012


Message 2 of 59

I'm not sure I can do that, as I don't know how big the final array will be.

 

I did try it with a static array size, and it was much, much quicker.

 

EDIT: Yes, for now I can sum "Probe Array" and that will tell me the length of "Die to Test Array", but I plan on changing Probe Array to hold more than just 1s and 0s.
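For reference, the exact-size idea from the EDIT above, sketched in NumPy (array names are hypothetical): as long as the probe map only holds 0s and 1s, its sum equals the number of output rows, so the result can be preallocated at its final size instead of the worst case.

import numpy as np

probe = np.random.randint(0, 2, size=(200, 100))           # stand-in 0/1 probe map
exact_rows = int(probe.sum())                              # valid only while entries are 0 or 1
die_to_test = np.empty((exact_rows, 2), dtype=np.int32)    # preallocated at the final size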

Message 3 of 59

Hi dthor,

 

You can use 'Initialize Array'. You need to calculate the absolute maximum array size that could be created from your input array. Then use 'Replace Array Subset' in your loop. At the end, take an Array Subset of just the wanted data:

 

Faster Array.png

 

In the example above I first calculate the maximum size of the array, then initialise it at that size. At the end I take the subset of the data we actually need. It's actually astounding how much faster it is.

 

Rgs,

 

Lucither

------------------------------------------------------------------------------------------------------
"Everything should be made as simple as possible but no simpler"
Message 4 of 59

@dthor wrote:

I'm not sure I can do that, as I don't know how big the final array will be.


Neither does LabVIEW, which is why it is allocating new memory every time through the loop 🙂 But yeah, what Lucither said: size it down after the loop. I forgot to mention it.

=====================
LabVIEW 2012


Message 5 of 59

Dthor,

 

For your second VI it seems to me that you always need an array that will be the same size as the input array. The actual relative array will always be one element shorter than the input, but you add an array element at the beginning. Since that's the case, it's simple to also feed the array into a shift register and write over the copy of the original array:

 

2nd Faster Array.png

 

Unless I have misunderstood your image (I don't have 2010, so I couldn't run it). I wrote an example of your original, just adding a dummy initial value at the beginning of the array. On your version it appears that you always get one more array element than needed; on mine it was always 0,0. Anyway, the version with Replace Array Subset came out how I expected and was a lot, lot quicker.
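A hedged sketch of the same in-place idea in NumPy terms (not the posted diagram): since the output has exactly as many rows as the input, the moves can be written back into the input buffer itself. Walking backwards is a Python-side adaptation; in the LabVIEW version the auto-indexed input tunnel still supplies the original values while the shift-register copy is overwritten.

import numpy as np

def relative_moves_in_place(die_list):
    """Overwrite the input buffer with the relative moves instead of
    allocating a new output array. Iterating from the end keeps each
    row's predecessor intact until it has been used."""
    for i in range(len(die_list) - 1, 0, -1):
        die_list[i] = die_list[i] - die_list[i - 1]
    die_list[0] = (0, 0)                       # dummy first move, as in the post
    return die_list

print(relative_moves_in_place(np.array([[2, 4], [5, 1]])))
# [[ 0  0]
#  [ 3 -3]]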

 

Rgs,

 

Lucither

------------------------------------------------------------------------------------------------------
"Everything should be made as simple as possible but no simpler"
Message 6 of 59

Hey guys, take a few steps back and think about the problem. There are better ways. 

 

It took me a few minutes to rewrite the problem so it is about 40-50x faster (!!) 🙂

 

(Your version takes 130ms and my version takes 3ms).

 

Judge for yourself, it produces the same result. Makes sense? Who can make it even faster? 😉
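Altenbach's diagram isn't described in text here, so the following is not his method, only a guess at the kind of rethinking involved: doing both steps as whole-array operations instead of element-by-element loops. A NumPy sketch of that idea (names made up):

import numpy as np

def die_and_moves(probe):
    """Whole-array version of both VIs: no per-element loop at all.
    Not necessarily altenbach's approach, just a vectorized equivalent
    for comparison."""
    rows, cols = np.nonzero(probe)                     # all selected (row, col) at once, in row-major order
    die = np.column_stack((rows, cols)).astype(np.int32)
    moves = np.diff(die, axis=0, prepend=die[:1])      # row-to-row differences; first entry is (0, 0)
    return die, moves

probe = np.zeros((300, 300), dtype=np.int8)
probe[::3, ::5] = 1                                    # arbitrary non-empty test pattern
die, moves = die_and_moves(probe)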

 

Message 7 of 59

He stated that in the future he will be looking for more than 1s and 0s; having a case structure makes it more scalable for what he may want to achieve.

 

How big is your probe array? I can't open yours as I don't have 2010. Is the array just full of 1s? I just want to try to reproduce your results against what I got.

 

Rgs,

 

Lucither

------------------------------------------------------------------------------------------------------
"Everything should be made as simple as possible but no simpler"
Message 8 of 59

Oh man, those work wonderfully! I'm sitting here and just saying "Wow!" over and over again. I guess I marked the solution a bit too soon though, eh altenbach? 

 

I'll be taking a more in-depth look at these methods tomorrow at work. I've got to spend a bit of time interpreting altenbach's... but it is LIGHTNING fast, even with 16 million points: 4.26 seconds on my machine. Luckily I'll never be testing that many die at once... Once I understand it I can see if it can be expanded to take more than just a 0,1 input.

 

And Lucither, you were pretty much dead on with understanding my problem.

Message 9 of 59

In the example I posted, if I place a 1000x2 2D array into our loops, the original OP's version takes 4102 ms whereas mine takes only 1 ms! That's about 4000x faster (unless I'm doing something wrong 😛, which is highly likely). We both end up with exactly the same arrays. I am creating the dummy array with all 1s as well, so on each iteration the output arrays are getting built.

 

Is my code really this much faster or, as I suspect, have I made a mistake in calculating the speeds?

 

Rgs,

 

Lucither.

------------------------------------------------------------------------------------------------------
"Everything should be made as simple as possible but no simpler"
Message 10 of 59