High-Speed Digitizers


what's the overhead on initializing a scope for each acquisition in a loop?

Hi there, let me try to clarify the question...

    During the course of the experiment, I want to build a 2D dataset (wavelength vs. time).  At each wavelength, I want to collect X # of records from the scope and then average them.  I have a mainVI which will call subScopeVI for each wavelength and begin the acquisition.  The data will then be fetched and returned to the mainVI for some signal processing.  (I'm envisioning this as a producer/consumer type of application.)

    Here's the question.  Would it be better to initialize the scope one time at the beginning of the experiment and then close it at the end, or would it be OK to have it initialize/close for each acquisition?  I'm concerned that the latter option will introduce a significant amount of overhead to the total runtime of the experiment, and that it's not necessary.  However, I'm very new to using LabVIEW, so I figured I should ask.

I'm using LV8 and a PXI-5114 scope.

Thanks for any advice
-z
Message 1 of 10
You are correct.  Initializing the scope each time you use it is a significant overhead.  Once at the beginning with a close at the end is your best option.  Pass the reference through loops with a shift register, not a tunnel.  This prevents a copy of the reference being made and ensures that the reference will get through if your loop does not execute for some reason (e.g. FOR loop with zero iterations).
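In text form, the pattern looks roughly like the sketch below (Python pseudocode standing in for the block diagram; init_scope, fetch_record, close_scope, and the "PXI1Slot2" resource name are made-up stand-ins for the niScope Initialize, Fetch, and Close calls, not real driver functions):

    def init_scope(resource_name):
        # Stand-in for niScope Initialize: returns an opaque session reference.
        return {"resource": resource_name}

    def fetch_record(session):
        # Stand-in for niScope Fetch: returns one acquired record.
        return [0.0] * 1000

    def close_scope(session):
        # Stand-in for niScope Close: releases the session.
        pass

    wavelengths = [500e-9, 510e-9, 520e-9]        # placeholder scan points
    session = init_scope("PXI1Slot2")             # initialize ONCE, before the loops
    try:
        for wavelength in wavelengths:            # one pass per wavelength
            records = [fetch_record(session) for _ in range(50)]
            # ... average / process the 50 records here ...
    finally:
        close_scope(session)                      # close ONCE, at the very end

The same session reference flows through every iteration (in LabVIEW, through a shift register), and initialization happens exactly once.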

FYI, the NI-SCOPE measurement set includes a waveform average.  But waveform averaging brings up a subtle point.  The 5114 (and most NI-SCOPE devices) has a high quality time-to-digital converter which reports the time a trigger occurs, rather than the data point on which it occurs (see waveform data or t0 if using waveform output).  This gives you subsample timing resolution.  A trigger can occur at any time between two points.  This is random unless you have synchronized your data source and the scope (you can synchronize in many cases).  Averaging waveforms by simply adding them and dividing by the number of waveforms will smear by about half a sample period.  In most cases, this is not an issue.  However, if you really want "better" data, you can use the timing information and the resample VIs to shift your data in time, so the trigger points all line up, before averaging.  You will probably not need to do this, but you should know the option exists.
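As a toy illustration of that smear (numpy; all numbers are invented: a 250 MS/s rate, a fast Gaussian pulse, and 50 records whose triggers land at random sub-sample times), the naive add-and-divide average differs measurably from one that accounts for the trigger offsets:

    import numpy as np

    dt = 4e-9                                     # 250 MS/s sample period
    t = np.arange(256) * dt
    pulse = lambda t0: np.exp(-((t - t0) / 10e-9) ** 2)   # fast Gaussian pulse

    rng = np.random.default_rng(0)
    offsets = rng.uniform(0.0, dt, size=50)       # random sub-sample trigger times
    naive_avg = np.mean([pulse(500e-9 + d) for d in offsets], axis=0)
    aligned_avg = pulse(500e-9 + offsets.mean())  # ideal result if records were time-aligned first

    # The non-zero difference is the smear from ignoring sub-sample trigger timing.
    print("peak error from naive averaging:",
          np.max(np.abs(naive_avg - aligned_avg)))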

Good luck.  It sounds like you are on the right track.  Let us know if you run into problems.
Message 2 of 10
First off, thanks for your response!

@DFGray wrote:
You are correct.  Initializing the scope each time you use it is a significant overhead.  Once at the beginning with a close at the end is your best option.  Pass the reference through loops with a shift register, not a tunnel.  This prevents a copy of the reference being made and ensures that the reference will get through if your loop does not execute for some reason (e.g. FOR loop with zero iterations).

I'll have to learn more about this point to respond very well, but here goes...  I understand how shift registers work, but I have done nothing with tunnels yet.  I was probably going to use a stacked sequence structure with shift registers to separate initialization, collection, and closing, but in all of the reading I've done on these forums (which are FANTASTIC, btw!) I have read that sequence structures are not considered particularly elegant.  I want to make the code very friendly so people who work with this stuff after I'm gone (which won't be long) don't have to start from scratch.


FYI, the NI-SCOPE measurement set includes a waveform average.  But waveform averaging brings up a subtle point.  The 5114 (and most NI-SCOPE devices) has a high quality time-to-digital converter which reports the time a trigger occurs, rather than the data point on which it occurs (see waveform data or t0 if using waveform output).  This gives you subsample timing resolution.  A trigger can occur at any time between two points.  This is random unless you have synchronized your data source and the scope (you can synchronize in many cases).  Averaging waveforms by simply adding them and dividing by the number of waveforms will smear by about a half a sample period.  In most cases, this is not an issue.  However, if you really want "better" data, you can use the timing information and the resample VIs to shift your data in time, so the trigger points all line up, before averaging.  You will probably not need to do this, but you should know the option exists.

I have multiple points to respond to:
0) I'm using an external trigger to trigger the scope and the data source.  I assume that this solves my problem, but I want to make sure.  The 4ns time resolution is important to these experiments.  (I'm not sampling faster than 250 MS/s because I don't want RIS stuff)
1) Are you referring to 'multi acq average', or is there a VI that does the waveform averaging?
2) I may have been thinking about how these scopes work incorrectly.  I was under the impression that when a trigger event occurs, the scope treats that point as t = 0 and then assigns values from the data source to the following data points to fill the record.
3) I don't understand what either of these means in the context in which you wrote them:
       -(see waveform data or t0 if using waveform output)  >  I looked for these phrases in the scope help, but with no luck
       -This gives you subsample timing resolution. > can you explain this better for me please?


Finally, I wanted to talk about the algorithm for calculating the average (if I don't use the scope's capabilities you mentioned).  I don't actually care about the individual scans, I just care about the average of 50 of them.  So it seems to me that the best way to do this is to create an array, fetch the record, and just add subsequent scans.  Then at the very end, I can divide by the total # of scans.  However, I was wondering if there is another way to do this that you know of...  Is there a way to calculate a running average and pass that value between iterations?  If this isn't clear, I can explain it more in a new post...

Thanks,
-Z


Message 3 of 10
Also, what happens if I don't 'close' the scope?

--- and while I'm at it ....

All of this discussion has led to another question which I think you may be able to help with, since it's so closely related.  The thread can be found here...
http://forums.ni.com/ni/board/message?board.id=170&message.id=209151

thanks
-Z

Message 4 of 10

My apologies for not making myself clear. Taking your points one at a time.

  1. Your trigger time resolution (40ps with TDC on) is far higher than your sample time resolution (4ns). If you fetch your data using one of the cluster fetches, there will be an output cluster called wfm info. The relativeInitialX value in that cluster gives you the time from the trigger point to the first data point. This is almost always negative, since the default acquisition settings place the trigger point at the center of the data set (reference position = 50%). If you fetch using the waveform data type for output, you can get the same information in the t0 of the output waveform by setting timestamp type to relative. Using an external trigger cannot guarantee that the trigger will occur on one of the sample clock edges. You will need to synchronize your source clock with the scope sample clock, if possible, to do this (I have never done this with a 5114, so I will not be much help here). If 4ns resolution is OK for you, then do not worry about it.
  2. I am referring to the Multi Acq. Average. Averaged waveforms are just another one of the processing types, confusing as that may be.
  3. Internally, the scope devices have a ring buffer which they are continually filling, even when not in use (the 5102 is an exception). When a trigger occurs, data collection continues until all post-trigger points are acquired. Pre-trigger points are already in the buffer. The trigger position is calculated based on the time to digital converter (TDC) value at the time of the trigger (assuming the TDC is on).
  4. Items 0 and 2 partially answered this. In short, the TDC in the 5114 has 40ps timing resolution. Its value is stored when a trigger occurs. The sample clock has 4ns timing resolution in your case (250MHz acquisition rate). This allows you to locate the trigger position between the samples to much higher resolution than the sample clock.

You can calculate a running average several ways, but the easiest is exponential averaging. In this method, the influence of previous data exponentially decays away. You first need to decide what weight to give previous data and what weight to give the current data. For example's sake, let's assume a 75% weight on previous data and a 25% weight on current data. This is sort of like a four sample average. To start, take a set of data and load it into a shift register, which will be your averaged data. On subsequent runs, multiply the shift register data by 0.75 and the new data by 0.25, add together, then display and reload the shift register with the new averaged data. Here is a picture of the idea.

[Image: ExponentialAveraging]
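In text form (Python standing in for the block diagram; acquire_record is a made-up stand-in for the scope fetch, and the 0.75/0.25 weights are just the example numbers above), the idea looks something like this:

    import numpy as np

    OLD_WEIGHT, NEW_WEIGHT = 0.75, 0.25           # example weights from above

    def acquire_record():
        # Hypothetical stand-in for a scope fetch: one noisy waveform.
        return np.sin(np.linspace(0, 2 * np.pi, 1000)) + np.random.normal(0, 0.1, 1000)

    average = acquire_record()                     # first record seeds the "shift register"
    for _ in range(49):
        new_data = acquire_record()
        # Feed the result back, exactly like reloading the shift register:
        average = OLD_WEIGHT * average + NEW_WEIGHT * new_data
    # 'average' now holds the exponentially weighted running average.

If you would rather have the plain 50-scan mean (every scan weighted equally), keep a running sum in the shift register and divide by the count at the end, or update with average = average + (new_data - average) / n on the n-th scan.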

Let me know if you need more clarification


Message 5 of 10
If you don't close the scope, LV may clean up the reference for you.  But that depends on the local settings of LabVIEW.  If you have "Automatically close VISA sessions" set in the Environment tab of the Tools>>Options dialog, then the reference will be cleaned up.  If not, it will stick around until you close LV.  I would recommend closing it.
Message 6 of 10


@DFGray wrote:
However, if you really want "better" data, you can use the timing information and the resample VIs to shift your data in time, so the trigger points all line up, before averaging.  You will probably not need to do this, but you should know the option exists.


Hi DFGray,
    I've been working on this program quite a bit since my last post, but I've come back to this to ask a few more questions.  I'm interested in getting the best data possible, so I don't mind doing the extra work to 'time shift' the data.  I read this post http://forums.ni.com/ni/board/message?board.id=150&message.id=1203&query.id=99702#M1203 in which someone else had a similar problem, but I wasn't able to totally figure it out on my own yet.  Basically, I want to average X number of scans as accurately as possible, but I can't figure out how to do this yet.  I understand that I have the relativeX point and the array, but I'm not sure how to proceed.  Also, I've looked at different types of fetching, and some of them include a t0.  I'm not sure what the difference is between this and relativeX.

thanks,
-z
Message 7 of 10

This may help. Let me know where I need to fill in the details.

  1. Take your first data set and record the relativeInitialX value (a shift register works well). This will be your baseline for future calculations.
  2. Put the first waveform in your average buffer.
  3. On each succeeding waveform, find the difference in relativeInitialX between that waveform and the first waveform you took. This value should be less than a sample period, but could be positive or negative.
  4. Now use some sort of interpolation scheme to find the data at each point relative to the data you already have. Off the top of my head, I can think of three ways to do this, listed from easiest to hardest and (I think) slowest to fastest:
    1. Use the Align and Resample Express VI. This will not be available if you have the base version of LabVIEW.
    2. Use FFTs to shift your data. To do this, take the FFT of your data. Shift the phase of each element by the amount needed to move the waveform; this is a different amount for each frequency bin, since a fixed time shift corresponds to a phase shift that grows linearly with the bin frequency. Take the inverse FFT. Since you are moving less than one sample width, this works very well. However, I would pad the ends by at least 10 extra points each to take care of any ringing artifacts. (A rough code sketch of this approach appears below, after the list.)
    3. Use a Savitzky-Golay filter to interpolate your data. You can find details on the Savitzky-Golay filter in Numerical Recipes in C by Press et al. It is available in most libraries.
  5. Average the result into your average buffer as before.
Good luck.
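Here is a minimal sketch of steps 1-5 using the FFT phase-shift idea (option 2). It is Python/numpy rather than LabVIEW, just to show the math in text form; the example records, the 0.75/0.25 averaging weights, and the sign convention on the shift are assumptions, and in the real program the waveforms and relativeInitialX values would come from your fetch:

    import numpy as np

    def fft_shift(data, shift_seconds, dt):
        # Shift a waveform in time by a sub-sample amount via FFT phase rotation.
        freqs = np.fft.rfftfreq(len(data), d=dt)              # frequency of each bin
        phase = np.exp(-2j * np.pi * freqs * shift_seconds)   # linear-in-frequency phase ramp
        return np.fft.irfft(np.fft.rfft(data) * phase, n=len(data))

    dt = 4e-9                                                  # 250 MS/s sample period
    # Pretend each fetch returned (waveform, relativeInitialX); fake two records here:
    records = [(np.random.rand(1000), -2.0000e-6),
               (np.random.rand(1000), -2.0013e-6)]

    baseline_rix = records[0][1]                               # step 1: baseline relativeInitialX
    average = np.array(records[0][0])                          # step 2: seed the average buffer
    for wfm, rix in records[1:]:
        delta = rix - baseline_rix                             # step 3: sub-sample difference
        aligned = fft_shift(np.asarray(wfm), delta, dt)        # step 4: shift (sign per your convention)
        average = 0.75 * average + 0.25 * aligned              # step 5: fold into the running average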
Message 8 of 10
sweet!  thanks.

@DFGray wrote:
This may help. Let me know where I need to fill in the details.
  1. Take your first data set and record the relativeInitialX value (a shift register works well). This will be your baseline for future calculations.
  2. Put the first waveform in your average buffer.
  3. On each succeeding waveform, find the difference in relativeInitialX between that waveform and the first waveform you took. This value should be less than a sample period, but could be positive or negative.
  4. Now use some sort of interpolation scheme to find the data at each point relative to the data you already have. Off the top of my head, I can think of three ways to do this, listed in easiest to hardest, slowest to fastest (I think)
    1. Use the Align and Resample Express VI. This will not be available if you have the base version of LabVIEW.
    2. Use FFTs to shift your data. To do this, take the FFT of your data. Shift the phase of each element the correct amount to move the waveform. This will be a different amount for each frequency bin, since the frequency is linearly changing. Take the inverse FFT. Since you are moving less than one sample width, this works very well. However, I would pad the ends by at least 10 extra points each to take care of any ringing artifacts.
    3. Use a Savitzky-Golay filter to interpolate your data. You can find details on the Savitzky-Golay filter in Numerical Recipes in C by Press et. al. It is available in most libraries.
  5. Average the result into your average buffer as before.
Good luck.


So if I'm understanding this correctly, it almost seems as if this adds waaaay more overhead/time to each collection... and I'm not even sure it's that much more accurate?  I mean, I'm going to be interpolating every single scan after the first...  it seems as if I will also introduce error in the 'y' direction this way, almost as if I'm trading it for error in the 'time' direction...  your thoughts?

Also, I've included a picture of what I think you're saying I need to do.  Does it make sense to you?  Basically, I'd interpolate all of the datasets after #1, then average them: Y, yi1, yi2, ... yin (according to the diagram)

-Z

Oh, also: it seems like this could be a way to measure jitter in my trigger signal...?


Message 9 of 10
You are correct, this will add an enormous amount of extra computation at every iteration.  Depending on your data speeds and computing power, you may not be able to do it.  If you have dual processors, you can take advantage of them by putting your data acquisition in one loop and your analysis/average in another.  Pass the data between the loops with a queue.
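In text form, the two-loop idea looks roughly like this (Python threads and a queue standing in for two parallel LabVIEW loops and a LabVIEW queue; the record count and random data are placeholders for real fetches):

    import queue, threading
    import numpy as np

    data_q = queue.Queue()

    def acquisition_loop():
        for _ in range(50):                       # producer: fetch records as fast as possible
            data_q.put(np.random.rand(1000))      # stand-in for a scope fetch
        data_q.put(None)                          # sentinel: acquisition finished

    def analysis_loop():
        average, n = None, 0
        while True:                               # consumer: align/average at its own pace
            wfm = data_q.get()
            if wfm is None:
                break
            n += 1
            average = wfm if average is None else average + (wfm - average) / n
        print("averaged", n, "records")

    t1 = threading.Thread(target=acquisition_loop)
    t2 = threading.Thread(target=analysis_loop)
    t1.start(); t2.start(); t1.join(); t2.join()

The acquisition loop never waits on the analysis, so a slow resample/average step does not cause missed records as long as the queue can buffer the backlog.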

As for whether or not it gives you more accuracy, that depends on your data and the method you use for interpolation.  You typically will take an extra data point or two on the ends after the first set of data, so you have data to interpolate with.  This solves the endpoint issue (if you tried it, you may have noticed the Express VI does not always return as many points as you give it).  Fitting type interpolation methods (e.g. cubic spline, Savitzky-Golay, linear interpolation) will do better or worse, depending on how fast your data changes and what coefficients/order you pick.  I really like the FFT method, since it side-steps most of these issues, but you do need to add extra endpoint data, and it helps if you smoothly join the end to the start.

Usually, you will get better, not perfect, data if you do the data shift/resample before doing the average.  However, whether the pain is worth the gain is up to you.  For most people, it either is not worth the effort or takes too long.
Message 10 of 10