Multifunction DAQ


PXI-6115: missing samples with retriggerable analog input

Solved!

Hi, I am writing a script that repeatedly acquires analog samples from 3 channels. In order to minimize dead time I based my script on the example "Multi-Function-Ctr Retrigg Pulse Train Generation for AI Sample Clock.vi". With a PXI-6251 card I am getting the expected behavior, but I am experiencing issues with a PXI-6115 card (which I need for its higher sampling rate): the number of available samples never reaches the number of counter pulses.

 

I have attached a minimal working example (sketched at the end of this post) that should be understandable by anyone familiar with DAQmx. The program never exits the while loop, because avail_samp_per_chan never reaches N_samps but saturates at a number I will refer to as N_avail. N_avail always seems to be a multiple of 16. Here is a list of what I tried:

- monitor Ctr0 on an oscilloscope and with a counter input task on the PXI-6251 card -> Ctr0 exhibits N_samps rising edges with correct timing.

- change sampling rate -> no effect

- change N_samps to a multiple of 16 = n*16 -> N_avail saturates at (n-1)*16

- change N_samps to n*16 + m -> starting from m = 2, N_avail saturates at n*16

 

Ideally I would like to get the same number of AI samples as the number of CO pulses. But my minimum requirement is that I should be able to define with confidence a list of timestamps corresponding to each of the N_avail samples. Thanks!
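For reference, the attached example boils down to roughly the following (minimal sketch; the device names, the PFI terminal and the rates are placeholders for my actual configuration):

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

N_samps = 1000   # samples to acquire per trigger (placeholder value)
rate = 1e6       # AI sample clock frequency generated by Ctr0 (placeholder value)

# Retriggerable finite pulse train on Ctr0, used as the AI sample clock
co_task = nidaqmx.Task()
co = co_task.co_channels.add_co_pulse_chan_freq("PXI6115/ctr0", freq=rate, duty_cycle=0.5)  # counter channel object
co_task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.FINITE, samps_per_chan=N_samps)
co_task.triggers.start_trigger.cfg_dig_edge_start_trig("/PXI6115/PFI0", trigger_edge=Edge.RISING)
co_task.triggers.start_trigger.retriggerable = True

# AI task clocked by the counter output; continuous so it keeps running across triggers
ai_task = nidaqmx.Task()
ai_task.ai_channels.add_ai_voltage_chan("PXI6115/ai0:2")
ai_task.timing.cfg_samp_clk_timing(rate, source="/PXI6115/Ctr0InternalOutput",
                                   sample_mode=AcquisitionType.CONTINUOUS)

ai_task.start()
co_task.start()

# On the 6115 this loop never exits: avail_samp_per_chan stalls at N_avail < N_samps
while ai_task.in_stream.avail_samp_per_chan < N_samps:
    pass

data = ai_task.read(number_of_samples_per_channel=N_samps)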

 

Message 1 of 8

Nothing definitive here, and I don't know or use the Python API.

 

I don't know the reason behind the multiple-of-16 N_avail observation, but here are a few thoughts on other things you might try.

 

1. Instead of a while loop that waits for N_samps, just call task.read() and request that same number, N_samps.  (You may need to loop over this while actively handling timeout errors if triggers come at unpredictable times -- rough sketch after this list.)

 

2. Note that querying co.count will not give you a # of generated pulses.  It's a live look at the instantaneous value of the board's count register.  If you were to query it often enough, you'd see that it keeps counting down to 0, loading a new high or low time value, then counting down to 0 again, etc.  The countdown will probably happen at 20 MHz, the board's max timebase freq.

 

3. Have you monitored the 6115 counter pulse output on a scope?  Does it also behave properly, including retriggering?  (The 6115 uses a pretty old timing engine, the DAQ-STC.  I'd have guessed it didn't support retriggered finite pulse trains, but also wouldn't be shocked if it does.  I just don't recall clearly.)
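For idea #1, the general shape would be something like this.  (Again, I don't use the Python API myself, so treat this as a rough sketch with method names taken from the nidaqmx docs; handle_record is just a placeholder for whatever you do with each record.)

import nidaqmx
import nidaqmx.errors

def read_records(ai_task: nidaqmx.Task, n_samps: int, handle_record) -> None:
    # Request exactly n_samps per read; treat a read timeout as "no trigger yet".
    while True:
        try:
            data = ai_task.read(number_of_samples_per_channel=n_samps, timeout=1.0)
            handle_record(data)
        except nidaqmx.errors.DaqError as e:
            # Error -200284 means the requested samples aren't available yet (the read
            # timed out), which here just means no trigger arrived in time -- keep waiting.
            if e.error_code == -200284:
                continue
            raise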

 

 

-Kevin P

Message 3 of 8

Thank you for your answer. Almost every function in the Python API behaves like the LabVIEW VI or property with the same name, so the translation between the two languages is in most cases pretty straightforward. As for the multiple-of-16 behavior, the issue was mentioned in this question but no answer was provided.

 

1. I tried using task.read(N_samps). The N_samps samples never become available unless a new trigger is sent to the CO task, and part of the next acquisition then ends up in the data. Ignoring the timeout errors is in any case not a valid solution for the final application, which needs to retrigger very fast (faster than stopping/starting a simple AI task).

2. I did notice that co.count wasn't behaving like I expected, thanks for the explanation.

3. I did monitor the counter pulse and it did seem to behave and retrigger as expected.

 

Here is how I am currently proceeding as a temporary fix (rough sketch below):

1. configure the counter to output N_samps + M pulses, with M chosen so that N_avail reaches N_samps

2. wait for co.pulse_done to become True and N_avail >= N_samps

3. read all available samples and keep N_samps of them
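
In code the fix looks roughly like this (sketch; co and ai_task are the counter channel and AI task from my first post, and the exact property names may differ slightly from how I wrote them above):

import time

def acquire_once(ai_task, co, n_samps):
    # 1. the counter task has already been configured to output n_samps + M pulses per trigger
    # 2. wait for the pulse train to finish and for enough samples to be available
    while not (co.co_pulse_done and
               ai_task.in_stream.avail_samp_per_chan >= n_samps):
        time.sleep(0.001)
    # 3. read everything available and keep only the first n_samps of each channel
    avail = ai_task.in_stream.avail_samp_per_chan
    data = ai_task.read(number_of_samples_per_channel=avail)
    return [chan[:n_samps] for chan in data]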

 

However, some samples get stuck somewhere (FIFOs, buffer, ...?) and pop up in the next acquisition's results. How can I ensure they are discarded before the next acquisition starts?

Message 4 of 8

Note there is a related unanswered question here.

Message 5 of 8

The multiple-of-16 mystery seems *mostly* explained by the linked threads (see msg #17 in the first one).  Caveat: the threads are very old and it's possible that newer versions of DAQmx don't behave exactly the same.  What's *not* explained is why things still aren't working for you when you're careful to set N_samps to be a multiple of 16. 

 

I'd be inclined to focus my investigation on that hardware/driver issue before falling back on a workaround like the one you outlined (and found to be not really reliable).

 

1. First, thoroughly confirm the retriggering counter behavior.  Set it up for small #'s of finite pulses at about 1 kHz that you can easily capture on a scope.  Give yourself manual control over the triggering signal (example: toggle it with a DO line).  Configure your 6251 card to measure the 6115 pulse train (rough sketch after step 3 below).

    Once you get through the basics, start stressing it more.  Have the 6115 get retriggered off a pulse train you generate with the 6251.  Start with something slow like 1 Hz retriggering rate.  When you confirm the correct # of edge counts per trigger, start speeding up both the output pulse freq and the retriggering rate.  Make sure it keeps behaving well as you get up to your normal operating speed.

   And so on.  The main idea is to divide and conquer.  Confirm the counter first, then move on to the AI.

 

2. Add your AI task.  Keep the edge-counting task going on the 6251.  Revert to manual trigger control.  Use a very slow output pulse freq like 4 or 8 Hz.  Keep reporting both edge count from the 6251 and the # available samples from the AI task on the 6115.  Look for patterns, especially keep an eye on this multiple-of-16 stuff.

 

3. Feed your AI with a manually controllable voltage.  Set it for a constant value. Trigger your setup manually and wait for all your output pulses.  Change the voltage, then trigger & wait again.  Etc.   After 5 or so triggers, read all your AI data at once.  See whether you got the same (& correct) # of samples at each voltage level.
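
In case it helps with step 1, the 6251 edge-counting part would look something like the following.  (Rough sketch only -- I don't use the Python API, so names come from the nidaqmx docs and the PFI terminal is a placeholder for however you route the 6115 counter output.)

import nidaqmx

# Count edges of the 6115's Ctr0 output with a counter-input task on the 6251
ci_task = nidaqmx.Task()
ci = ci_task.ci_channels.add_ci_count_edges_chan("PXI6251/ctr0")
ci.ci_count_edges_term = "/PXI6251/PFI0"   # wherever the 6115 pulse train is wired in
ci_task.start()

# ... send one manual trigger to the 6115 and wait for its pulse train to finish ...

edges_seen = ci_task.read()   # running edge count on the 6251
print(edges_seen)
# Compare this against the 6115 AI task's avail_samp_per_chan and look for
# the multiple-of-16 pattern.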

 

Just a few thoughts to try to get to the bottom of the hardware/driver part of the issue.  At the trigger and acquisition rates you're trying to achieve, you're going to *need* to get to the bottom of this in hardware; software timing won't be an option.

 

 

-Kevin P

Message 6 of 8
Solution
Accepted by topic author thibaudruelle

Thanks for nudging me in the right direction. While looking into the number of available samples when repeatedly triggering the counter, I found that I was missing 2 samples on the first trigger only. This led me to this question and this KB, which fully explain my observations, from the multiple-of-16 behavior to the 2 missing samples.

 

Each channel of the PXI-6115 card has a dedicated ADC that stores 2 samples in a dedicated pipeline. This pipeline needs to be filled before samples are transferred to the acquisition buffer. This explains the 2 missing samples on the first trigger.
The buffer holds 16 samples on my PXI-6115 card with the extended memory option, and it makes those 16 samples available for reading each time it fills up. This explains the multiple-of-16 mystery.

 

In order to repeatedly acquire Nsamps analog samples using a retriggerable counter as the clock, one needs to configure the counter to output Nco = Nsamps + (16 - Nsamps % 16) + 2 + 14 edges.
After the first trigger, one should read Nsamps + (16 - Nsamps % 16) samples, then discard the last (16 - Nsamps % 16). 14 samples are left in the buffer, and 2 in the pipeline.
After the next trigger (and all subsequent ones), one should read Nco samples, then discard the first 2 + 14, which are old data, and the last (16 - Nsamps % 16).

 

Note: reading a finite number of samples instead of all available samples ensures we actually wait for the expected number of samples to become available.

Note 2: the number of extra counts can be optimized a little in the case of a multi-channel acquisition.
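
Putting that bookkeeping into code, it looks roughly like this (sketch; ai_task is the AI task from my first post and Nsamps is written n_samps):

def n_extra(n_samps):
    # padding up to the next 16-sample transfer boundary
    return 16 - n_samps % 16

def n_co_pulses(n_samps):
    # counter pulses per trigger: padding + 2 (ADC pipeline) + 14 (left in the buffer)
    return n_samps + n_extra(n_samps) + 2 + 14

def read_first_trigger(ai_task, n_samps):
    # first trigger: 2 samples stay in the pipeline and 14 in the buffer, so only
    # n_samps + n_extra(n_samps) samples become available; keep the first n_samps
    data = ai_task.read(number_of_samples_per_channel=n_samps + n_extra(n_samps))
    return [chan[:n_samps] for chan in data]

def read_next_trigger(ai_task, n_samps):
    # subsequent triggers: the first 2 + 14 samples are old data from the previous
    # record and the last n_extra(n_samps) are padding; keep the n_samps in between
    data = ai_task.read(number_of_samples_per_channel=n_co_pulses(n_samps))
    return [chan[2 + 14 : 2 + 14 + n_samps] for chan in data]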

 

Message 7 of 8

Glad those explanations helped make some sense of things.  You should go ahead and mark your last post as the solution -- it may help someone in the future who's stuck on a similar out-of-the-ordinary problem.

 

 

-Kevin P

Message 8 of 8