Rotary Encoder + ni usb 6009


Ok guys, 

 

I think I've got it. Doing some research online, I found a picture that explains this 4x-decoding concept. If I understood correctly, the 500 PPR refers to full cycles of A (or B), but each cycle has 4 sub-states inside, is that right?

 

Another question: why does the velocity reading drop so sharply when I add a Write To Measurement File function to my VI? In the DAQ Assistant properties I'm using Samples to Read = 1 and Rate = 250 Hz. Are those the right values? Do you have any suggestions for fixing this so I can still save the data? I need it for processing later.

 

Thanks for your help, it's very much appreciated.

 

 

Message 31 of 41

If you add something to your loop it'll take time from the loop. Use a producer/consumer pattern to send your data to another loop.
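
If it helps to see the shape of that pattern in text form, here is a minimal sketch in Python (not LabVIEW; in LabVIEW you'd build the same thing with two parallel while loops and the Queue functions, and the fake data below is just a placeholder for your DAQ reads):

import queue
import random
import threading
import time

data_q = queue.Queue()

def producer(n_samples=20):
    # Stands in for the acquisition loop: read fast, enqueue, never write files here.
    for _ in range(n_samples):
        sample = random.random()           # placeholder for a DAQ reading
        data_q.put((time.time(), sample))
        time.sleep(0.004)                  # ~250 Hz loop, like the rate in the question
    data_q.put(None)                       # sentinel tells the consumer to stop

def consumer(path="log.txt"):
    # Stands in for the logging loop: it can be slow without stalling acquisition.
    with open(path, "w") as f:
        while True:
            item = data_q.get()
            if item is None:
                break
            t, value = item
            f.write(f"{t:.6f}\t{value:.6f}\n")

acq = threading.Thread(target=producer)
log = threading.Thread(target=consumer)
acq.start(); log.start()
acq.join(); log.join()

The point is that the acquisition loop only enqueues and never touches the disk, so slow file writes can't stall it.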

 

With the Assistant you are not guaranteed to get good timing at all. Look at some examples and use standard DAQmx functions to record a stream of points, not just one at a time.
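
For reference, "a stream of points, not just one at a time" looks roughly like this in the Python nidaqmx package (the device name, channels, and rate are placeholders; in LabVIEW you'd use the corresponding DAQmx Create Channel / Timing / Read VIs from those examples):

# Continuous (streamed) analog input with the nidaqmx Python package.
# "Dev1" and the channel range are placeholders for however your device enumerates.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")   # encoder A and B channels
    task.timing.cfg_samp_clk_timing(rate=10000,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    for _ in range(10):
        # Read a whole block of samples per channel each iteration, not one point at a time.
        block = task.read(number_of_samples_per_channel=1000)
        print(len(block[0]), "samples read per channel")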

Message 32 of 41

Might I have some examples, please?

Message 33 of 41

And do you have any suggestions for measuring angular velocity with this setup?

Message 34 of 41

Help -> Find Examples -> Hardware -> DAQmx has a bunch of good examples.

 

Measuring velocity can be hairy, but basically it's just change in position divided by change in time. If you switch off of the DAQ Assistant and get to some more low-level functions you will be able to know the time between samples exactly.
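
To put that in numbers for the encoder discussed earlier (500 PPR with x4 decoding, so 2000 counts per revolution):

counts per rev = 4 × 500 = 2000
Δθ [rad] = Δcounts × 2π / 2000
ω [rad/s] = Δθ / Δt

(The 500 PPR and x4 figures are just the ones from this thread; substitute your own.)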

 

I'd recommend low-pass filtering the data though as it'll be VERY noisy.

Message 35 of 41

@BertMcMahan wrote:

Measuring velocity can be hairy, but basically it's just change in position divided by change in time. If you switch off of the DAQ Assistant and get to some more low-level functions you will be able to know the time between samples exactly.

 

I'd recommend low-pass filtering the data though as it'll be VERY noisy.


So your idea is to use the DAQmx functions directly to get the raw data from the two channels, then low-pass filter it just to remove noise, and after that read the position value at sample x(t), then the value x(t+dt), and do the simple math: vel = [x(t+dt) - x(t)] / dt.

 

right?

Message 36 of 41

Mostly. You want to decode your data from quadrature into counts first, then low-pass the resulting count data. You don't need to (nor SHOULD you) low-pass the analog data coming in; if you do, it'll likely mess up your quadrature decoding.

 

See my added text (bolded) below:

 


@valiora wrote:


So your idea is to use the DAQmx functions directly to get the raw data from the two channels, *convert from quadrature to counts*, then low-pass filter it just to remove noise, and after that read the position value at sample x(t), then the value x(t+dt), and do the simple math: vel = [x(t+dt) - x(t)] / dt.

 

right?
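
In case the "convert from quadrature to counts" step is the unclear part, here is a rough illustration in Python (not the LabVIEW code from earlier in the thread; it assumes A and B have already been thresholded from the analog samples into 0/1 values):

# Illustrative x4 quadrature decoding (Python, not the LabVIEW code from this thread).
# Assumes A and B have already been thresholded from analog samples into 0/1 values.

# (previous A, previous B, new A, new B) -> count change
QUAD_STEP = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,  # one direction
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,  # other direction
}

def decode_quadrature(a_samples, b_samples):
    """Return the running count at each sample (x4 decoding)."""
    count = 0
    counts = []
    prev = (a_samples[0], b_samples[0])
    for a, b in zip(a_samples, b_samples):
        count += QUAD_STEP.get((prev[0], prev[1], a, b), 0)  # 0 = no change (or invalid jump)
        prev = (a, b)
        counts.append(count)
    return counts

# One full quadrature cycle in one direction = 4 counts
a = [0, 0, 1, 1, 0]
b = [0, 1, 1, 0, 0]
print(decode_quadrature(a, b))   # [0, 1, 2, 3, 4]

Each valid A/B transition adds or subtracts one count, which is where the x4 counts per cycle comes from.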


 

Message 37 of 41

Chiming in to add a little more info.

 

The code I recall from this thread was *only* focused on decoding the quadrature signals to track an incrementing or decrementing count representing a position.   You'll also need to keep track of time.

 

Specifically, you'll want to identify the times when the quadrature state changes (causing a count change).  Any such time you identify will have an *uncertainty* of up to 1 sample period (=1/rate).  The quadrature state change you discovered at a given sample may have happened any time between just barely after the previous sample and just barely before this one. 

    Any velocity calc you do depends on both delta position and delta time.  The uncertainty over any time interval will be in the range of -1 to +1 sample period.  This will be fixed no matter how big a time interval you use.  The fixed amount of uncertainty in the denominator has a bigger effect when the denominator itself is small.  This leads to the "noisiness" that BertMcMahan's been alerting you about.
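
    To put rough numbers on that, using the 250 Hz rate mentioned earlier purely as an example: one sample period is 1/250 = 4 ms. A velocity computed over a 20 ms interval could therefore be off by up to roughly 4/20 = 20%, while the same +/- 4 ms over a 200 ms interval is only about a 2% error.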

   One way to reduce the noisiness is to make the nominal time interval (the denominator) larger by only looking at, say, every 10th or 20th position change.  The total delta position over this longer interval equals the sum of all the individual delta positions.  That kind of summing accomplishes the same thing as an average, and an average is one simple kind of lowpass filter.

   So in effect, by choosing to only calculate velocity every 10th or 20th position change, you'll have *already* accomplished a kind of lowpass filtering.

 

This stuff will all start to come together as you spend some time and think it through carefully.  Meanwhile, I would highly recommend you make a subvi out of all your quadrature code.  And then also create yourself a set of realistic AI data that you can use to test your code out.  (One simple way: put your encoder in motion, run a brief finite DAQmx task to capture your encoder signal, send the waveform output to an indicator, right-click the indicator and change it to a control, right-click again and "make current value default").

   This kind of strategy is a really important habit to get into when working on algorithms.  Always plan to make some test code that contains or generates realistic raw data that you can feed into the algorithm to troubleshoot your code.
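
   If a text sketch helps, here is roughly what "only calculate velocity every Nth position change" looks like in Python (illustrative only, not the LabVIEW code; 'counts' would be the output of your quadrature subvi and 'rate' your sample rate):

# Illustrative only: velocity computed once per N count changes (Python, not LabVIEW).
def velocity_every_nth_change(counts, rate, n=10):
    """Return (time, velocity in counts/s) pairs, one per N count changes."""
    dt_sample = 1.0 / rate
    results = []
    last_idx, last_count, changes = 0, counts[0], 0
    for i in range(1, len(counts)):
        if counts[i] != counts[i - 1]:
            changes += 1
            if changes == n:
                delta_pos = counts[i] - last_count      # the sum of the N individual changes
                delta_t = (i - last_idx) * dt_sample    # uncertain by up to +/- 1 sample period
                results.append((i * dt_sample, delta_pos / delta_t))
                last_idx, last_count, changes = i, counts[i], 0
    return results

   delta_pos here is in counts; divide by counts per revolution and multiply by 2π if you want rad/s.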

 

 

-Kevin P

Message 38 of 41

Kevin has some great points.

 

Also, one thing to keep in mind- encoders are notorious for being tricky to use for determining velocity. People have written many research papers describing ways to get better velocity data from encoders by using a bunch of different averaging methods, estimators, filters, and so on.

 

I say that to say... keep your expectations fairly low for your initial results, especially if successive reads are only a "few" counts. Like Kevin said, your uncertainty is always +/- 1 count, so if you read position changes of 10 counts, then 10 counts, then 11 counts, then 9 counts, etc you will see LOTS of fluctuations, even if in reality it's not that different. If successive reads are 1000 counts, then 1000, then 1001, etc then your noise will be much lower, since that single count isn't messing with you as much.

 

If your changes per measurement are REALLY small, like one count every n cycles, it's REALLY going to need averaging. Say you get data that looks like this:

 

0,0,0,0,1,1,1,2,2,2,3,3,3,3

 

Your delta value (and velocity, if we pretend dt = 1 second) will be:

0,0,0,0,1,0,0,1,0,0,1,0,0,0

 

Your velocity algorithm will say it's not moving for a while, then is suddenly moving at 1 count per second, then stops again, then suddenly moves again, etc.

 

In real life it's likely the wheel is just rotating slowly, but without averaging you can't resolve speeds below 1 count per second. In more general terms, you can't resolve velocity any finer than 1 count per sample period. Kevin's "only calculate every 10th or 20th position change" method will help with this.
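
As a tiny illustration of that, running a sliding average over the delta series above smooths the 0/1 jumps into a steadier value near the true slow rate (Python, purely illustrative):

deltas = [0,0,0,0,1,0,0,1,0,0,1,0,0,0]   # the per-sample "velocity" series from above

window = 6
averaged = [sum(deltas[max(0, i - window + 1):i + 1]) / min(window, i + 1)
            for i in range(len(deltas))]
print([round(v, 2) for v in averaged])
# after the first few samples the values sit around 0.17-0.33 counts/sample
# instead of jumping between 0 and 1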

Message 39 of 41

Hi guys,

 

Thank you for all your valuable tips. They are, as always, very much appreciated.

 

So, based on Kevin's and BertMcMahan's posts, what I should do is increment a counter every time the quadrature state changes. Then I take the change in that counter (call it vel_count) and divide it by the dT that has elapsed between the initial position and the current position, right?

 

And to reduce the problem of that uncertainty (which isn't completely clear to me yet; it would be great if you could give some more examples), I should use a kind of averaging filter (a low-pass filtering technique) so that I get one velocity value per window of 10-20 samples, and, let's say, the larger I make the window, the smaller the effect of the uncertainty should be.

OK, so those are the milestones.

Based on the earlier code in this thread, does anyone have suggestions on how to do that?

 

Also, I've read about miscount errors in encoder measurements. Do you think the code in this thread might have a problem with that?

 

Thank you for your time and your helpful replies. 

Message 40 of 41