03-11-2010 11:08 AM
Hello all,
I am trying to identify a known pattern of mass spectral peaks in a LabVIEW data collection code in real time. My data collects occur on 50 ms intervals (20 Hz). Unfortunately, my analysis routines take ~160 ms to complete, and therefore cause my hardware to miss data collects.
Using the profile tool, it appears that one of the key offenders is a subroutine that predicts what the full spectrum should be, assuming that a few common peaks of the desired spectral pattern are present in the spectrum. It creates a spectrum of Gaussians for the prediction, which it then compares to the real peaks. If the two don't correlate well, the guess is wrong; if they do correlate, the code returns the masses of the peaks. I can then find the bounds of the peaks (I need the area of each peak, not just its location) and use them for science.
In any event, it has to run the subroutine to make the predicted spectrum numerous times. Typically I am predicting the locations of 4 peaks in the spectrum, and the spectrum is commonly 3000 points long. So my routine to make a predicted spectrum has the following inputs:
Where size is the # of points, width is the 1/2 width of all of the peaks, amplitude is the height of an individual peak, and center is the location of the peak in the 3000 point spectrum. Gaussian array is the output.
The code looks like this:
(vi attached below)
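For readers without the attachment, here is a rough textual equivalent of what such a subroutine might compute, sketched in Python. The function name, the treatment of `width` as a standard deviation, and the example values are all assumptions for illustration; the actual VI is in the attachment.

```python
import math

def predicted_spectrum(size, width, amplitudes, centers):
    """Sum of Gaussian peaks over a size-point spectrum.

    size       -- number of points in the spectrum (e.g. 3000)
    width      -- 1/2 width shared by all peaks (assumed here to act
                  like a standard deviation; the VI's convention may differ)
    amplitudes -- height of each predicted peak
    centers    -- index of each peak within the spectrum
    """
    spectrum = [0.0] * size
    for amp, ctr in zip(amplitudes, centers):
        for i in range(size):
            spectrum[i] += amp * math.exp(-((i - ctr) ** 2) / (2.0 * width ** 2))
    return spectrum

# Example: 4 predicted peaks in a 3000-point spectrum (illustrative values)
spec = predicted_spectrum(3000, 5.0, [1.0, 0.8, 0.5, 0.3], [500, 1200, 1800, 2500])
```

Note the cost: with 4 peaks and 3000 points, each call evaluates 12,000 exponentials, and the search re-runs this many times per 50 ms window.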
My question is: can I do something to improve the speed of this calculation (in other words, am I doing something dumb)? Or is this pretty much as efficient a subroutine as is possible?
Many thanks for your insight!
RipRock
03-11-2010 12:28 PM
One solution to your problem, independent of how you do your calculation, is to run your data collection separately from your analysis.
Collect your data, and add it to the end of a queue.
In a separate loop, dequeue data in pieces, run your calculations, then do whatever it is that you do with those values.
This is commonly referred to as a 'producer-consumer' architecture.
This way, your data collection will not get hung up waiting for your analysis to run.
The one thing you have to keep an eye on is the size of your queue: as long as your analysis (~160 ms) stays slower than your collection rate (50 ms), the queue will, by design, keep growing until data collection stops.
I hope that made some sense.
03-11-2010 12:28 PM
I would suggest that you code the calculation using native LabVIEW functions rather than using the formula node.
I've tried to refactor the VI using primitives; see attached (LabVIEW 8.6)
03-11-2010 12:48 PM
Phillip Brooks wrote: "I've tried to refactor the VI using primitives; see attached (LabVIEW 8.6)"
since you are autoindexing on the inner loop, the "# of peaks" input is probably not needed.
03-11-2010 01:30 PM
Thanks to all of you for your thoughts!
Phillip - for reasons that are less than stellar, I am stuck for another couple of months with LV 8.5. Any chance of a down-conversion to 8.5?
Cory - I have been thinking about trying a producer-consumer loop as well, especially since pretty much all machines produced today have more than one core; in theory I could benefit from splitting the code across them.
Darin - I will try shutting off debugging.
03-11-2010 07:55 PM - edited 03-11-2010 07:58 PM
There is no question. You must split the code; separate data collection and analysis.
BTW I use formula nodes all the time. I think for complex mathematical functions the text is a lot better in terms of readability and you can add comments (which don't float all over the screen!).
03-12-2010 11:21 AM
So I had a routine a little like this early on, and tried to rewrite it the way I did to avoid having as many loops, fearing they might be the source of the speed hit. I suppose I should try running both and see if I can determine which is faster.
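One way to settle which version is faster is a simple timing harness. In LabVIEW you would wrap each VI in a sequence with tick counts or use the profiler again; the Python sketch below shows the same idea with two illustrative stand-ins for the competing routines (neither is the actual VI):

```python
import math
import timeit

def version_a(size, width, centers):
    # Nested explicit loops: one pass over the spectrum per peak.
    out = [0.0] * size
    for c in centers:
        for i in range(size):
            out[i] += math.exp(-((i - c) ** 2) / (2.0 * width ** 2))
    return out

def version_b(size, width, centers):
    # Single pass over the spectrum, summing all peaks at each point.
    return [sum(math.exp(-((i - c) ** 2) / (2.0 * width ** 2)) for c in centers)
            for i in range(size)]

args = (3000, 5.0, [500, 1200, 1800, 2500])
t_a = timeit.timeit(lambda: version_a(*args), number=20)
t_b = timeit.timeit(lambda: version_b(*args), number=20)
print(f"version_a: {t_a:.3f} s   version_b: {t_b:.3f} s")
```

Whichever wins, checking that both produce the same spectrum before trusting the timing numbers is cheap insurance.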
Thanks,
RipRock