11-25-2009 08:25 AM
Hi! I am acquiring data from a cDAQ at a given rate. Before writing the data I would like to decimate it, each channel with a different decimation factor. Since I acquire data over a long time, continuous decimation is required. In addition, I read all the data in the buffer, not just a predefined amount. I tried to create a memory-efficient algorithm for that, so I wanted to ask any of you to have a look at it. What could be improved?
It's based on the Action Engine concept. When the VI is initialized, it creates an array for each channel that stores how many samples can be skipped at the beginning of that channel's data.
When decimating, it skips those elements and decimates the rest, evaluating how many elements are redundant and should be skipped in the next iteration on this channel.
Main_TestDecim.vi – test the algorithm on given data set
Decim_Core.vi – the algorithm of the decimation plus removal of redundant elements
Decim_Cont.vi – Action engine
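Since the VIs themselves are attachments, here is a rough Python sketch of the idea for readers without LabVIEW. The class and names are illustrative, assuming simple keep-every-Nth-sample decimation with the per-channel skip offset carried over between reads:

```python
class ContinuousDecimator:
    """Illustrative sketch of the Action-Engine-style continuous decimation
    described above: each channel keeps its own decimation factor and a
    'samples to skip at the start of the next chunk' counter."""

    def __init__(self, factors):
        self.factors = factors          # one decimation factor per channel
        self.skip = [0] * len(factors)  # leftover samples to skip per channel

    def process(self, chunk):
        # chunk: one list of samples per channel; lengths may vary per call.
        out = []
        for ch, samples in enumerate(chunk):
            f = self.factors[ch]
            s = self.skip[ch]
            kept = samples[s::f]  # skip the leftover, then take every f-th sample
            # Index of the next sample to keep, relative to the next chunk:
            self.skip[ch] = s + f * len(kept) - len(samples)
            out.append(kept)
        return out
```

For example, with factors [2, 3] and two consecutive 5-sample reads per channel, the second call correctly continues where the first left off instead of restarting at index 0.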
Could I ask some professional to have a look at it? Can I improve it in any way?
Thanks
11-27-2009 08:17 AM
Hey Ceties,
I have taken a look at your code. It's really well developed, congratulations on your style! 🙂
There are a few things in which I see room for improvement:
Have you tried VI Analyzer? It can help you a lot with performance.
Have a nice day!
Matyas
11-27-2009 04:26 PM
Hey Matyas, and thanks for your time. I cannot simply replace the Build Array with an auto-indexed tunnel on the For Loop, since the data has a different length for each decimation factor, and thus it would append zeros for channels with a bigger decimation factor. Or did I miss something?
Isn't there some other method to perform decimation than the one I developed?
Thanks!
11-30-2009 05:55 AM
Hi Ceties!
I'll give you two options:
I suggest you run VI Analyzer in both cases to get further information about memory usage.
I only modified the main VI...
Have a nice day,
Matyas
11-30-2009 03:11 PM - edited 11-30-2009 03:12 PM
Hey!
1] Are you sure about Insert Into Array instead of Build Array? I know that Build Array is quite an optimized operation as long as you are adding to the end of the array.
2] I am aware of the optimization, but it won't work for me since I have continuous decimation over multiple channels, and the fact that it adds zeros is also a problem. But thanks for trying anyway.
12-01-2009 03:19 AM
Hey!
1.) What's the problem with Insert Into Array? You're continuously filling an array with new values. That's just what you need, isn't it?
Building an array constantly reorganizes your array in memory. When you start building not 3*100 but 3*1000000 elements, your PC will be stuck under the load. Imagine that you need to move a 1D array of 1 MSamples from one memory location to another in a single loop iteration! It wouldn't be a honeymoon, even if it's a well-optimized function 🙂 Ideally, you build the array once and then fill it with elements: you can create it with Initialize Array (or an indexed For Loop, or Build Array), then insert your data with Insert Into Array, Replace Array Subset, etc.
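A language-agnostic way to see the point (a Python sketch, since the LabVIEW diagrams aren't reproducible in text): growing an array inside a loop copies the whole array over and over, while preallocating once and replacing in place does not:

```python
N = 1000

# Growing element by element (analogous to Build Array inside a loop):
# each iteration constructs a brand-new list, copying everything so far.
grown = []
for i in range(N):
    grown = grown + [i]

# Preallocate once (analogous to Initialize Array), then overwrite in place
# (analogous to Replace Array Subset): no reallocation inside the loop.
prealloc = [0] * N
for i in range(N):
    prealloc[i] = i

assert grown == prealloc  # same result, very different allocation behavior
```

The growing version does O(N²) copying overall; the preallocated version touches each element once.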
2.) Agree. I just wanted to give you an alternative solution. 🙂
Didn't you run the code? It gives you the same result with less CPU usage. Isn't that what you were looking for?
Best,
Matyas
12-01-2009 04:23 AM - edited 12-01-2009 04:27 AM
Hi and again thanks.
Now I see what you meant. I can use the For Loop to auto-index for each channel; I made it way too complicated. Thanks for pointing it out.
Nevertheless, there is one thing wrong with your code: YOU HAVE TO ensure that the RESET of the AE occurs before the For Loop runs; that's why its output has to be wired into the loop. Otherwise the execution order is not deterministic, and you can observe this if you run the code multiple times.
If I replace your Insert Into Array with Build Array, I get slightly better or equal results (in this case they both behave the same, allocating a chunk of memory and then copying if the chunk is not big enough).
If I run the code 1M times:
Build Array: 117.004 ms
Insert Into Array: 119.942 ms
Build Array is, in addition, much more readable in the code.
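For anyone wanting to reproduce this style of micro-benchmark outside LabVIEW, here is a Python sketch using `timeit`; the function names are illustrative and the absolute numbers will of course differ from the ones above, which came from the original VIs:

```python
import timeit

def build_by_appending(n=1000):
    """Grow at the end of the array, analogous to Build Array."""
    out = []
    for i in range(n):
        out.append(i)
    return out

def build_by_inserting(n=1000):
    """Insert at the end explicitly, analogous to Insert Into Array."""
    out = []
    for i in range(n):
        out.insert(len(out), i)
    return out

# Time many repetitions of each approach and compare.
append_t = timeit.timeit(build_by_appending, number=1000)
insert_t = timeit.timeit(build_by_inserting, number=1000)
print(f"append: {append_t:.3f}s  insert-at-end: {insert_t:.3f}s")
```

As in the LabVIEW measurement, both produce identical arrays, so the comparison is purely about speed and readability.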
12-01-2009 07:25 AM - edited 12-01-2009 07:31 AM
Hi Ceties,
Yes, it's true. There's no real difference in speed.
I approached the explanation from the wrong end. Our computers will not be limited in speed by using the Build Array function inside a loop; it's about memory allocation...
You will definitely be more memory efficient by not using those Build Array functions inside the loop.
Check this KB for more details:
http://digital.ni.com/public.nsf/websearch/771AC793114A5CB986256CAB00079F57?OpenDocument
From LabVIEW Help:
For example, if you see places where you are frequently increasing the size of an array or string using the Build Array or Concatenate Strings functions, you are generating copies of data.
http://zone.ni.com/reference/en-XX/help/371361B-01/lvconcepts/vi_memory_usage/
You have to make the decision:
I look forward to hearing your opinion.
Matyas
12-01-2009 08:20 AM
12-02-2009 02:03 AM
Hi,
Yes, you convinced me. If you later use this as a subVI for decimating a continuous measurement, it will make no difference in speed or memory. In the For Loops used, you could preallocate the array, because you know its exact size at the beginning.
If you take a deeper look at the LabVIEW Help link I attached, you'll see that we broke at least two rules of thumb: making copies of arrays and using a cluster of arrays. So what you can do now is get rid of that cluster >> you won't need either the Initialize Array or the Build Array function.
Try to follow my second post and use a different array for each channel to store your decimated data!
Decimate in parallel and don't try to build one table for all the data >> then you won't have a 2D array padded with zeros to match sizes, either.
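As a Python sketch of this last suggestion (factors and data are illustrative only): keeping one independently sized array per channel avoids padding shorter channels with zeros to fit one rectangular table:

```python
factors = [2, 5]
samples = list(range(10))  # same raw data on both channels, for illustration

# Ragged layout: one independently sized array per channel, no padding.
per_channel = [samples[::f] for f in factors]
# e.g. [[0, 2, 4, 6, 8], [0, 5]]

# Rectangular 2D alternative: shorter rows must be padded with zeros.
width = max(len(ch) for ch in per_channel)
table = [ch + [0] * (width - len(ch)) for ch in per_channel]
# e.g. [[0, 2, 4, 6, 8], [0, 5, 0, 0, 0]]
```

The padded zeros in `table` are exactly the artifact the auto-indexed-tunnel approach produced earlier in the thread; the ragged `per_channel` layout sidesteps it.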
Is it possible in your application?
Bye,
Matyas