Multifunction DAQ


error with rapid data acquisition

Solved!

Hi all, here’s my situation.

 

I have a rotary encoder putting out 4304 pulses per revolution for 500 revolutions on a device rotating at approximately 500 rpm (all of these numbers are arbitrary).  I want to take data on 4 channels every time the rotary encoder puts out a pulse.  What is the best way to do this?

 

When I have much less data (I was using 800 pulses per revolution yesterday), my current method works fine: a DAQmx Read inside a For Loop that executes once for every pulse I'm expecting, i.e. I wire 800 pulses/rev * 500 rev into the N terminal.

 

[Attached image: test.PNG]

 

PFI 8 is the rotary encoder pulse.  PFI 3 is the rotary encoder's index pulse, which fires once per revolution; it's just used to tell my acquisition when to start.
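
For reference outside LabVIEW, the setup described above might look roughly like this with the nidaqmx Python API (the device name "Dev1", the ai0:3 channels, and the ±10 V range are placeholders, not details from the thread):

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

PULSES_PER_REV = 800   # yesterday's encoder resolution, from the post above
REVOLUTIONS = 500

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3", min_val=-10.0, max_val=10.0)
    # External sample clock: one conversion per encoder pulse on PFI 8.
    task.timing.cfg_samp_clk_timing(
        rate=36_000,                            # nominal only; the real clock is PFI 8
        source="/Dev1/PFI8",
        active_edge=Edge.RISING,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=1_000,                   # in continuous mode this sizes the host buffer
    )
    # Start the acquisition on the encoder's index pulse (once per revolution).
    task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI3",
                                                        trigger_edge=Edge.RISING)
    task.start()

    data = []
    for _ in range(PULSES_PER_REV * REVOLUTIONS):
        # "N Chan 1 Samp" pattern: one sample per channel per loop iteration.
        data.append(task.read())

Reading one sample per loop iteration is what makes the loop struggle once the pulse rate goes up, which is where the errors discussed below come from.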

 

My DAQ knowledge is pretty limited, so I don't know whether I'm trying to do something that's fundamentally flawed or whether some of my settings are just wrong.  Any insights are welcome.

 

I'm using an NI PCI-6229 DAQ card.

--
Tim Elsey
Certified LabVIEW Architect
0 Kudos
Message 1 of 32
(5,466 Views)

Tim,

 

You didn't specify which error you're receiving, but I'll hazard a guess as to what your problem may be.  Using the DAQmx Timing VI, you've set your task up as a continuous task.  In this mode, the 'samples per channel' input configures the size of the buffer allocated on the host into which DAQmx reads its samples.  You've set this to 1000, which I would guess is a fairly small buffer compared to the total number of samples you're going to acquire.  Combined with the fact that you're using the N Chan 1 Samp version of Read (reading only one sample at a time), this probably leads to a situation where your read loop isn't keeping up with the acquisition.  When the read loop lags, you end up overwriting unread data in the buffer and your task errors out.

 

To rectify this, you can specify a larger buffer via the 'samples per channel' input to the DAQmx Timing VI, or change your read method to an N Channels N Samples version so that the read loop doesn't need to execute as quickly to keep up with the hardware.  Furthermore, if you know in advance how many samples you'll be acquiring, you could use the DAQmx Timing VI to configure a finite acquisition and then make a single call to DAQmx Read (N Channels N Samples) to retrieve all the data you expect.
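
As a rough illustration of the finite-acquisition option (again sketched with the nidaqmx Python API rather than LabVIEW; hardware names are placeholders):

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

TOTAL_SAMPS = 4304 * 500        # pulses per revolution * revolutions = 2,152,000

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3", min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(
        rate=143_000,                         # nominal; PFI 8 actually clocks the samples
        source="/Dev1/PFI8",
        active_edge=Edge.RISING,
        sample_mode=AcquisitionType.FINITE,   # DAQmx buffers the whole record
        samps_per_chan=TOTAL_SAMPS,
    )
    task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI3",
                                                        trigger_edge=Edge.RISING)

    # Single N-channel, N-sample read: returns a 4 x TOTAL_SAMPS list of lists.
    # 500 revolutions at ~500 rpm take about a minute, so allow a generous timeout.
    data = task.read(number_of_samples_per_channel=TOTAL_SAMPS, timeout=120.0)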

 

Hope that helps,

Dan

Message 2 of 32
(5,459 Views)

 


@Mcdan wrote:

[Dan's reply above, quoted in full]


 

I think you're right.  I've tried playing around with the buffer size and sample rate: I had them both set at 100000 earlier today and was getting a different error, which I diagnosed as the loop not executing fast enough, as you said.  Then I dabbled in N Chan N Samp with Continuous Acquisition and got yet another error, which led me here.

 

What I tried was just reading one revolution at a time, but this errored almost instantly.  I will always know the number of samples beforehand.  Will a finite acquisition be able to handle my 2M+ points?  Is this dependent on the DAQ card?  Should I stick with trying to read smaller chunks, say one revolution, instead of the whole thing?

--
Tim Elsey
Certified LabVIEW Architect
0 Kudos
Message 3 of 32
(5,457 Views)

Tim,

 

First, let's figure out which error you're seeing.  I went back to the numbers in your first post, and it occurs to me that you'd be acquiring data at a sample rate of approximately (500 * 4304) = 2.15 MS/s.  The 6229's AI is capable of 250 kS/s.  Are you seeing -200019 (ADC overrun) or -200279 (overwrite of unread samples)?  If you're seeing -200279, then we can work out which acquisition settings are best for your application.  If you're seeing -200019, then we'd need to determine the appropriate course of action (a change of timing requirements or a change of hardware).

 

Dan

0 Kudos
Message 4 of 32
(5,453 Views)

I'm actually looking at 500 rev/min * 4304 pulses/rev * 1/60 min/s ≈ 35.9 kS/s per channel, which works out to 143.46 kS/s across 4 channels, if I'm not mistaken.  That puts me under the limit of the 6229's capabilities.

 

I can't run the code right now (the operator is out), but unless I'm mistaken it should be possible, at least on the hardware side of things.

--
Tim Elsey
Certified LabVIEW Architect
0 Kudos
Message 5 of 32
(5,449 Views)
Solution
Accepted by topic author elset191

Tim,

 

You're absolutely correct... I guess I should get my afternoon coffee before I attempt basic math 🙂

 

OK, we'll run with the assumption that what you're seeing is a buffering error.  M Series devices like your 6229 have 32-bit AI sample counters, so the hardware can handle finite records of up to 4 gigasamples.  As your record size increases, you'd likely see issues handling the data in LabVIEW, or allocating buffers with DAQmx, long before your hardware would have trouble.  If you use a finite acquisition, DAQmx will by default attempt to allocate a buffer big enough to hold your entire record.  For your VI, that's 2.152 M samples per channel * 4 channels (about 17 MB, since each sample is two bytes wide).  I would not expect to see an issue with this.  DAQmx Read (N Chan N Samp) would then return a very large 2D array of data (similar to what you're doing now with the auto-indexed output of your For Loop).  If you're familiar with dealing with large data sets in LabVIEW (carefully avoiding memory copies), this shouldn't be too much of an issue for you.
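
A quick back-of-envelope check of that buffer size (plain Python arithmetic, no DAQmx needed):

total_samps_per_chan = 4304 * 500     # 2,152,000 samples per channel
channels = 4
bytes_per_sample = 2                  # the 6229's ADC produces 16-bit samples
buffer_bytes = total_samps_per_chan * channels * bytes_per_sample
print(buffer_bytes / 1e6)             # ~17.2 MB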

 

Breaking this up into multiple reads with a smaller buffer should also be a reasonable approach.  If you were to read one revolution's worth of data at a time, I would likely do the following (see the sketch below):

DAQmx Timing VI:

  Rate: 143 kHz - This parameter isn't really used for much since you're clocking your data externally.  If you weren't specifying the buffer size, DAQmx would pick a default based on it, and DAQmx would use it for dt information if you were using the waveform data type.

  Sample Mode: Continuous

  Samples Per Channel: 100,000 - Just taking a guess here... this would give you buffer space for roughly 23 revolutions, which seems like plenty to me.

 

I would then use the DAQmx Read (N Chan N Samp) VI and specify that you read 4304 samples (one revolution) at a time.
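
Sketched the same way as above (nidaqmx Python API, placeholder hardware names), the chunked version might look like:

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

PULSES_PER_REV = 4304
REVOLUTIONS = 500

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3", min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(
        rate=143_000,                           # only used for buffer defaults / dt info
        source="/Dev1/PFI8",
        active_edge=Edge.RISING,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=100_000,                 # host buffer: room for roughly 23 revolutions
    )
    task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI3",
                                                        trigger_edge=Edge.RISING)
    task.start()

    chunks = []
    for _ in range(REVOLUTIONS):
        # N Chan N Samp read: pull one revolution's worth of data per call.
        chunks.append(task.read(number_of_samples_per_channel=PULSES_PER_REV,
                                timeout=10.0))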

 

My first recommendation would be to attempt to set up your acquisition as a finite acquisition.  Perform a single read to retrieve all of your data.  If you run into issues with this, then I would look into breaking this down into multiple reads.

 

Dan

Message 6 of 32
(5,437 Views)

After much trial and error, I've finally got it working.

 

At first I switched to Finite Samples straight away, and it was throwing an error because the sample rate I specified was too low: I had it set at 10000, and each channel, as discussed earlier, is actually clocked at about 36000.  So I changed it to 100000 and got yet another error.  This one, I think, is because 100000 * 4 channels > the 250k my board is capable of, so I knocked it down to 60000 and everything worked like a charm.  Out of curiosity I set it to 62600 to see if it would error, and it didn't, so I'm not sure my logic was correct.  The (theoretical?) limit on the sample rate should be 250k / 4 = 62500, correct?  Or is my logic wrong in thinking this?
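
One way to sanity-check that 62,500 figure instead of probing it by trial and error (assuming the nidaqmx Python binding exposes the device's maximum-rate properties and that the card shows up as "Dev1"):

import nidaqmx.system

dev = nidaqmx.system.Device("Dev1")
aggregate_rate = dev.ai_max_multi_chan_rate   # 250,000 S/s aggregate for a PCI-6229
channels = 4
print(aggregate_rate / channels)              # 62,500 S/s per channel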

--
Tim Elsey
Certified LabVIEW Architect
0 Kudos
Message 7 of 32
(5,414 Views)

Tim,

 

Glad to hear you got things working.

 

I can explain why you are seeing errors when you set the sample clock rate too high.  M Series devices like the 6229 have a single ADC that is shared among all AI channels.  The aggregate throughput for the device is 250 kS/s.  Each time a sample clock is received, the device generates a convert clock which multiplexes each of your AI channels onto the ADC and performs a conversion.  If you tell DAQmx that you are sampling at 100 kHz, it will attempt to set a convert clock rate that fits 4 channels of data (in your case) into that sampling window.  As you suspected, the required convert rate exceeds 250 kS/s, which is why you saw the error.  When you set the sample rate to 10,000, DAQmx likely did not program the convert clock fast enough to finish all 4 conversions before the next sample clock arrived (since your sample clock is actually arriving at ~35.9 kS/s).

 

If you want to be explicit about how fast the convert clock runs, DAQmx does allow you to set this rate.  This setting is available in the DAQmx Timing Property Node->More->AI Convert->Rate.  You can also query this property for a task if you are curious about what default values DAQmx is programming.  Once you do this, the rate you specify in the DAQmx Timing VI should have the minimal effect that I described in my previous post.  I apologize for glossing over these details previously.
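
The LabVIEW property node described above is a task property, so it should also be reachable from text APIs; assuming the nidaqmx Python binding exposes it as task.timing.ai_conv_rate (and with the same placeholder hardware names as before), a sketch would be:

import nidaqmx

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
    task.timing.cfg_samp_clk_timing(rate=36_000, source="/Dev1/PFI8",
                                    samps_per_chan=100_000)
    # Query the convert-clock rate DAQmx chose by default...
    print(task.timing.ai_conv_rate)
    # ...or pin it explicitly, e.g. to the board's full aggregate rate.
    task.timing.ai_conv_rate = 250_000.0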

 

The ADCs on most of these devices will operate a little above their specified rate, though performance may degrade.  As such, it is generally possible to run them a little faster than their apparent 'theoretical' limit.  Typically, if you try to operate above the published specification for the device, DAQmx will return a warning.  If you operate fast enough to cause the ADC to error, DAQmx will stop your task and return the appropriate error.

 

Hope that helps,

Dan

Message 8 of 32
(5,405 Views)

Thanks for all the help, much appreciated.

--
Tim Elsey
Certified LabVIEW Architect
0 Kudos
Message 9 of 32
(5,378 Views)

You still out there Dan?

 

Is there a way to make this new method I've been using time out if it doesn't receive pulses?

 

Right now I have the timeout set to -1, because any other reasonable timeout (a few seconds) appears to time out if it doesn't receive all of its data within the allotted time.  Of course, getting hundreds of thousands of points in a few seconds isn't going to happen.

 

The problem with -1 is that if our encoder stops pulsing before all of the points are acquired, the read just sits there and we have no way to stop it other than restarting LabVIEW.  I don't see any way around this.
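
Not an answer from the thread, but one possible shape for a workaround, sketched under the same assumptions as the earlier snippets: read in one-revolution chunks with a short per-chunk timeout, so a stalled encoder surfaces as a catchable DAQmx error rather than an infinite wait.

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge
from nidaqmx.errors import DaqError

PULSES_PER_REV = 4304
REVOLUTIONS = 500
CHUNK_TIMEOUT_S = 5.0     # one revolution at ~500 rpm takes ~0.12 s, so 5 s is generous

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3", min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(rate=143_000, source="/Dev1/PFI8",
                                    active_edge=Edge.RISING,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=100_000)
    task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI3",
                                                        trigger_edge=Edge.RISING)
    task.start()

    chunks = []
    try:
        for _ in range(REVOLUTIONS):
            chunks.append(task.read(number_of_samples_per_channel=PULSES_PER_REV,
                                    timeout=CHUNK_TIMEOUT_S))
    except DaqError as err:
        # A per-chunk timeout here most likely means the encoder stopped pulsing;
        # the program can log it and stop cleanly instead of hanging on a -1 timeout.
        print(f"Acquisition stalled after {len(chunks)} complete revolutions: {err}")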

--
Tim Elsey
Certified LabVIEW Architect
0 Kudos
Message 10 of 32
(5,166 Views)