Counter/Timer


USB 6210 Frequency Measurement with DAQ Express

Solved!

Hello,

 

I'm sorry for the dumb questions, but here goes anyway.  I am trying to measure wire-feed speed by measuring the frequency of an encoder connected to a roll in contact with the wire.  Ultimately I want to compare different wire-feed mechanisms based on these speed measurements.  After a few initial tests I was shocked to see how much the measured speed varies.  There are variations due to mechanical inaccuracies, and the measurements are probably realistic; I just wasn't expecting to see such a jittery graph under no-load conditions.  The test mechanism has a PMDC motor driven in closed-loop speed control.  The encoder on the roller offers 1024 cpr, and the roller varies in rotational speed up to 300 rpm.

 

After reading these two threads https://forums.ni.com/t5/LabVIEW/Reading-encoder-data/m-p/4094609/highlight/true#M1179300  and https://forums.ni.com/t5/Counter-Timer/Reading-encoder-data-and-determining-the-RPM/td-p/4096116  I thought I should check my sample rate and timebase.  Can I do this with DAQ Express?  I only see the loop cycle time as an input for sample rate.  The lowest possible input here is 1 ms, which I believe corresponds to 1000 samples per second, right?  Can I adjust the timebase?

 

I guess I'm just looking for a reality check.  Performing the test from the other thread, I drove the roll at 5 rpm and measured a maximum deviation from average of +/- 30%.  This is pretty slow for the motor, so I also tried 50 rpm, which resulted in a +/- 32% maximum deviation from average.  Both used a 10 ms loop cycle, or 100 samples per second.

 

Thanks

Message 1 of 5

I have no familiarity with DAQ Express, so I can't speak to its features or limitations.  As for a sanity check, I agree that 30% speed variation from a speed-controlled servo sounds, well, *suspicious*.

 

The way you describe DAQ Express, it sounds like you're relying on software timing ("loop cycle time").  Have you configured it to measure position and then calculate RPM, or to measure encoder frequency directly in hardware?

 

Under a full programming environment (like LabVIEW), I'd strongly advocate measuring frequency directly in hardware.  If I were instead using an approach that used nominal 10 msec loops to measure position and then calculate RPM, it'd be easier to explain the apparent 30%+ speed variation.

    Software-timed loops aiming for 10 msec timing under Windows can easily show timing variation of 30% (i.e., 3 msec) or more.  If the *nominal* 10 msec interval is used to calculate RPM while the actual interval varies, it's easy to explain what you see.
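Here's a quick numeric sketch of that effect (pure arithmetic, not anything DAQ Express actually does; the 50 RPM target and the +/- 3 msec of jitter are just assumed for illustration):

```python
# If RPM is computed as counts / NOMINAL interval while the ACTUAL
# interval jitters, the timing jitter shows up directly as apparent
# speed variation.
NOMINAL_DT = 0.010   # intended loop period, seconds
JITTER = 0.003       # assumed +/- 3 ms of Windows scheduling jitter
TRUE_RPM = 50.0      # assumed true speed
CPR = 1024           # encoder counts per revolution

true_count_rate = TRUE_RPM / 60.0 * CPR       # counts per second
worst_errors = []
for actual_dt in (NOMINAL_DT - JITTER, NOMINAL_DT + JITTER):
    counts = true_count_rate * actual_dt      # counts seen in the real interval
    apparent_rpm = counts / CPR / NOMINAL_DT * 60.0
    worst_errors.append((apparent_rpm - TRUE_RPM) / TRUE_RPM * 100.0)

print(worst_errors)   # roughly -30% and +30%
```

That 30% of apparent variation is just the 3 msec of jitter expressed as a fraction of the 10 msec nominal interval.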

 

If you calculated the RPM from a time interval that was *measured* to the nearest msec, you'd be subject to quantization error of about 10%.  That's significant, but not really enough to make the observed 30%+ seem unsuspicious.
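The arithmetic behind that 10% figure (just ratios, nothing DAQ Express-specific):

```python
# A timestamp with 1 ms resolution can be off by up to ~1 ms, which is
# ~10% of a nominal 10 ms encoder interval.
RESOLUTION = 0.001   # timestamp resolution, seconds
NOMINAL_DT = 0.010   # nominal interval between encoder cycles
worst_pct = RESOLUTION / NOMINAL_DT * 100.0
print(worst_pct)     # ~10 percent worst-case
```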

 

What happens if you change your "loop cycle" time to 100 msec?  Does the % variation drop significantly?  If my reasoning above is on target, it definitely should.

 

It also isn't clear what *exactly* you mean by variation.  A brief transient of 30% variation could actually be mostly real, depending on your drive system and any sudden load disturbances.

 

Ranking the options from best to worst:

1. Hardware-based frequency measurement.

2. Hardware-timed sampling of position, followed by a speed calculation.

3. Software-timed sampling of position where the intervals are measured in software.

4. Software-timed sampling where the intervals are assumed to be nominal.

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.
Message 2 of 5

Hi Kevin,

 

Thanks for your input and your time.

 

So the mechanical system I'm using here for initial setup has a PMDC motor in closed-loop speed control.  The motor has a worm-gear reduction to an output shaft.  On the shaft is a feed roll used to feed the wire.  Affixed to the mechanism on the inlet side is a capsule with another roller in contact with the wire.  I am running roughly 50 cm stretches of wire through under as near to no-load as possible.  Obviously there are multiple sources of speed fluctuation in the entire setup.

 

DAQ Express comes with a few pre-written VIs for each of the measurement types.  In this case I used the frequency measurement with the counter input task.  From the frequency measurement I added a factor to calculate wire-feed speed in m/min based on cpr and roll diameter.  Within the frequency measurement task I have the option to deselect "auto managed timing".  When deselected, the only other option is "software".  I did not find any real difference in the results between the two, so I left the setting on "auto managed timing".  What that means, I do not know.
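For reference, the conversion factor I applied is essentially this (a sketch; the 30 mm roll diameter is just an example value here, not my actual roll):

```python
import math

def feed_speed_m_per_min(freq_hz, cpr, roll_diameter_m):
    # freq / cpr = roll revolutions per second; each revolution feeds
    # pi * diameter metres of wire (assuming no slip at the roller)
    rev_per_sec = freq_hz / cpr
    return rev_per_sec * math.pi * roll_diameter_m * 60.0

# e.g. 15 Hz from a 150 cpr encoder on a 30 mm diameter roll:
print(round(feed_speed_m_per_min(15.0, 150, 0.030), 3))   # -> 0.565 m/min
```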

 

As you can see in the attached screenshots, the measurement is very jittery, with similar ranges using both 1 ms and 100 ms loop cycles.

 

I did swap the encoder from 1024 cpr to 150 cpr and repeat the measurement.  With the lower-resolution encoder, the range of measurements is much smaller.  I do not know if this is less realistic, but for visualizing it is much nicer.  For example, I have one roll system with poor runout, and if I run at slow speeds I can clearly see the effects of the runout.

 

Whether or not I am doing it "right", I don't know.  But it seems to be a double-edged sword: the higher resolution only appears to cloud my visibility.

 

Cheers,

Message 3 of 5
Solution
Accepted by Surfmase

I think there are some pretty good clues there; let's see what we can do with them.

 

First, here's my interpretation of the screenshots you embedded in the attached doc.  For each you mention two different sample rates but show only a single plot on the graph.  I'm supposing that the seeming "drop-out" is where you changed the sample rate in the midst of the run?    So one plot where the left side comes from the first sample rate and the right side comes from the second?

 

Because neither the degree of quantization (not dramatic) nor the density of discrete points seems to change visibly, I'd have to question what exactly is meant by "sample rate" here, and also what the X-axis represents (msec?  sample #?).   Not knowing DAQ Express well enough to say for sure, I'm inclined to think that your "sample rate" setting is being *ignored*, perhaps because of your choice of "auto managed timing"?

 

That said, the next clue is that the apparent fluctuation was reduced *dramatically* when you changed from a 1024 cpr encoder to a 150 cpr one.  In fact, it seems to have been reduced pretty nearly in proportion to the encoder resolution change.   Normally, such a relationship would scream out a problem with fairly severe quantization, but here your graphs don't appear to show such quantization.

 

Earlier in the thread, you also said that a 10x speed change with the same encoder and same "loop cycle time" both resulted in about the same ~30% apparent speed fluctuation.  Of course that means the *absolute* fluctuation must have also varied by a factor of ~10.


So what can explain all this?   Well, here's part of it at least: DAQ Express must *not* be doing a fully hardware-based frequency measurement.  I think it's using software methods to query (and probably also to try to control) the time intervals.  Then it calculates frequency as an integer # of counts (hence the possibility of quantization) divided by a precisely-measured but pretty variable time interval.

 

For example, let me suppose your attached screenshots were taken at a nominal speed of ~6 RPM or 1 rev every 10 seconds.  Now let's look at the nominal time interval between encoder cycles.  With the 1024 cpr encoder you get about 100 cycles/sec so the interval is about 0.01 sec.   With the 150 cpr encoder you get about 15 cycles/sec so the interval is more like 0.07 sec.

    Now let's further suppose that the software method being used is trying to notice and react to each pulse from the encoder, at which point it queries a high-res timestamp.  Let's further suppose that the software's reaction speed to such things can vary by 1 msec (in reality, most any software timing under Windows is subject to *much* more variability than this).

   With the 1024 cpr encoder, the nominal time interval should be about 10 msec.  But the actual interval may be *anywhere* in the 9-11 msec range.  1 cycle divided by any number between 9 and 11 would lead to a +/- 10% error in the frequency calculation.  

   With the 150 cpr encoder, the nominal time interval should be about 70 msec.  Now the actual interval may be *anywhere* in the 69-71 msec range.  1 cycle divided by any number between 69 and 71 would lead to a much smaller +/- 1.5% error in the frequency calculation. 

   These values are not meant to match your graphs exactly; they're just intended to illustrate the basic idea that there's very likely a quantization effect happening, but the extreme timestamp *precision* (perhaps down to the microsecond?) helps to mask it.
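Putting those numbers into a quick sketch (the +/- 1 msec of software reaction jitter is the assumption from above, not a measured figure):

```python
# Per-cycle frequency = 1 / measured interval, where the measured
# interval carries an assumed +/- 1 ms of software reaction jitter.
def worst_pct_error(nominal_interval_s, jitter_s=0.001):
    nominal_freq = 1.0 / nominal_interval_s
    fast = 1.0 / (nominal_interval_s - jitter_s)   # interval read short
    slow = 1.0 / (nominal_interval_s + jitter_s)   # interval read long
    return max(abs(fast - nominal_freq),
               abs(slow - nominal_freq)) / nominal_freq * 100.0

print(round(worst_pct_error(0.010), 1))   # 1024 cpr, ~10 ms interval: ~11%
print(round(worst_pct_error(0.070), 1))   # 150 cpr, ~70 ms interval: ~1.4%
```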

 

Unfortunately, none of this attempt at understanding will necessarily solve your problem.  The bottom line is that DAQ Express appears to use software-timing methods to measure frequency, and that will inescapably add quantization-rooted noise to your measurement.  The only way to reduce the apparent fluctuation is to measure time over longer intervals -- but then that will average out and hide any *real* fluctuations happening at higher frequencies.

 

Under LabVIEW and full access to the DAQmx driver, I'd have (at least) 2 better options:

1. Direct hardware-based frequency measurement for every encoder interval.  What makes this better than the software approach above is that the time interval is determined by the counter hardware, so the timing variation caused by the *measurement* method is no longer 1 msec or more; it's one cycle of the 80 MHz timebase, or 12.5 nanosec.
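For a sense of scale (simple arithmetic, assuming the same nominal 10 msec encoder interval as in the example above):

```python
# One tick of the 80 MHz timebase vs a nominal 10 ms encoder interval
TIMEBASE_HZ = 80e6
tick = 1.0 / TIMEBASE_HZ        # 12.5 nanoseconds per timebase cycle
interval = 0.010                # ~10 ms between encoder cycles
print(tick)                     # 1.25e-08 s
print(tick / interval * 100.0)  # worst-case error: ~0.000125 %
```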

 

2. Direct hardware-based sampling of encoder edge counts.  This can't be done quite so directly on an M-series device like yours; it requires another task or other clock source to provide a sample clock signal.  It isn't too difficult to make this work out in LabVIEW, but I don't know whether DAQ Express has the wherewithal to handle this part, and given the observations so far I'm kinda doubtful.
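The processing side of option 2 would look something like this (the counter readings are made up for illustration; the key point is that the hardware sample clock makes the interval exact, with no software jitter in it):

```python
CPR = 1024
SAMPLE_RATE = 100.0          # Hz, hardware-timed sample clock
SAMPLE_DT = 1.0 / SAMPLE_RATE   # exact interval -- no software jitter

# hypothetical edge counts latched on successive sample-clock edges
counts = [0, 9, 17, 26, 34, 43]

rpms = []
for prev, cur in zip(counts, counts[1:]):
    revs = (cur - prev) / CPR           # revolutions in one sample period
    rpms.append(revs / SAMPLE_DT * 60.0)   # rev/s -> RPM

print([round(r, 2) for r in rpms])
```

Any remaining stair-stepping in a speed trace computed this way is pure count quantization (a whole number of edges per interval), not timing uncertainty.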

 

 

-Kevin P

Message 4 of 5

Hi Kevin,

 

Thanks again for the very insightful feedback.

 

You are right, the dropout is where I stopped the motor and switched the sample rate.  Sorry I didn't do them both in the same order.  The x-axis represents sample number, and I believe the program is responding to my "sample rate" because I do get more samples per unit time with a 1 ms loop time than with 100 ms.  But that doesn't appear to change much other than making the graph longer.  I think I do follow your possible explanation of how DAQ Express is probably measuring the frequency.  For the moment I'll continue my work with the 150 cpr setup, but at some point I'll need to move up to proper LabVIEW.

 

Cheers

Message 5 of 5