11-05-2009 05:55 AM
Maybe, but I see some serious errors in your code:
- Deal with I32 instead of singles
- Delete 4 elements instead of 1
To speed up my example, we could just output the data to an auto-indexing tunnel, let it build a 2D array, and then use Reshape Array.
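For anyone not sitting in front of LabVIEW, here's a rough NumPy sketch of that idea. The record length of 104 elements and the 4 length-info elements per record are placeholders, not _thomas's actual format:

```python
import numpy as np

RECORD_LEN = 104   # assumed elements per record (placeholder, not from this thread)
HEADER_LEN = 4     # assumed length-info elements to drop from each record

def strip_headers(flat: np.ndarray) -> np.ndarray:
    # The auto-indexing tunnel builds one row per record (a 2D array);
    # dropping the first HEADER_LEN columns and reshaping back to 1D
    # replaces many Delete From Array calls with a single copy.
    records = flat.reshape(-1, RECORD_LEN)
    return records[:, HEADER_LEN:].reshape(-1)
```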
Ton
11-05-2009 06:54 AM
TCPlomp wrote: I see some serious errors in your code:
It's not my code.
Nevertheless, I think the solution is in your first post.
Everything else will cost a few seconds.
11-05-2009 07:11 AM
I agree, the proposed solutions are too slow. I'll need to make a patch to fix the previous version of the software and edit all of the file records.
Thanks to all of you who helped.
11-05-2009 12:10 PM
You can give Kudos to this idea to help improve LabVIEW.
11-05-2009 12:20 PM
I don't have time to code this up and benchmark it, but...
I suspect most of the time would be associated with copying buffers, reallocating, etc., so...
1) Put the original array in a shift register (SR) in a For Loop.
2) Inside the For Loop, use Delete From Array on the SR contents and put the result back in the SR.
3) Delete the segments/blocks starting at the max index and work toward the min index (this way you don't have to recalculate your indices every time you delete something); see the sketch below.
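Something like this, with Python as a stand-in for the block diagram (the list of (start, length) blocks is made up for the example):

```python
def delete_blocks(data, blocks):
    """data: a Python list; blocks: (start_index, length) pairs to remove."""
    for start, length in sorted(blocks, reverse=True):   # max index first
        del data[start:start + length]                    # "Delete From Array"
    return data

# Remove two 4-element blocks from a 0..19 list:
print(delete_blocks(list(range(20)), [(2, 4), (12, 4)]))
# -> [0, 1, 6, 7, 8, 9, 10, 11, 16, 17, 18, 19]
```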
If that does not do it, I'd try the in-place operations to slide down all the stuff I want to keep and, once the in-place work is done, resize the array.
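Roughly like this (again just a Python sketch; the keep() predicate is a stand-in for whatever marks an element as wanted):

```python
def compact_in_place(data, keep):
    """keep(index, value) -> True if the element should survive."""
    write = 0
    for read in range(len(data)):
        if keep(read, data[read]):
            data[write] = data[read]   # slide the kept element toward the front
            write += 1
    del data[write:]                   # one resize at the very end
    return data

# Example: drop the first 4 elements of every 10-element record.
print(compact_in_place(list(range(20)), lambda i, v: i % 10 >= 4))
# -> [4, 5, 6, 7, 8, 9, 14, 15, 16, 17, 18, 19]
```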
Just tossing out other ideas,
Ben
11-05-2009 01:28 PM
I figure most of the time is spent reading the actual data from disk.
If I were _thomas, I would write a batch converter (based on my last code post) that removes all the length-info bytes and stores the files back to disk, and run it overnight.
Then adjust the code that currently writes the data to disk, so there's no need to delete that info anymore.
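Something along these lines (Python stand-in; the record layout of a big-endian I32 count followed by that many singles is my guess, so check it against the real files before running it on anything you can't restore):

```python
import struct
from pathlib import Path

def strip_length_info(src: Path, dst: Path) -> None:
    raw = src.read_bytes()
    out = bytearray()
    pos = 0
    while pos + 4 <= len(raw):
        (count,) = struct.unpack_from('>i', raw, pos)  # 4-byte length info (assumed big-endian I32)
        pos += 4
        out += raw[pos:pos + 4 * count]                # keep the singles themselves
        pos += 4 * count
    dst.write_bytes(bytes(out))

# Batch run over a folder of old data files (paths made up for the example):
# for f in Path('old_data').glob('*.bin'):
#     strip_length_info(f, Path('converted') / f.name)
```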
Ton