LabVIEW


Synchronizing two sets of data sampled at different frequencies

Hi all,

 

I am using a myRIO (LabVIEW 2014 SP1) to sample 6 analog channels, 4 at 100 kHz/ch and 2 at 1 kHz/ch. Attached are screenshots of my FPGA and host VIs. To see if all works well, I input a 1 Hz sinusoidal signal to all 6 channels and record for 150 s. In the last plot, you see two recorded waveforms:

 

red: one of the channels sampled at 1 kHz,

blue: one of the channels sampled at 100 kHz.

 

As you will notice, there is a varying phase shift between the two (at ~100 s the peaks occur at the same time, but at ~135 s red leads blue).

 

What might be causing the varying phase shift even though both channels receive the exact same signal? Any suggestions on how to resolve it?

 

Thanks.

 

 

Message 1 of 7

Hi Rubinstein,

 

On the FPGA side, you could use separate While Loops for the two sampling rates and then use the Loop Timer to set the sampling times. Remove the Flat Sequence and For Loops. Try this and let us know.

 

 

Regards.


Message 2 of 7

On the FPGA, you're coding in this style:

 

delay -> read -> for loop

 

You're trying to set the sample rate with a delay.  Why?  This doesn't make sense.

 

Let's say your read and for loop elements take the same amount of time in both loops.

 

Your delays are 10 µs for the 100 kHz loop and 1000 µs for the 1 kHz loop.

 

Let's say the build array and AI readings each take a single cycle to operate. That means your loops require ~6 cycles of logic to complete. These cycles are based on your clock, which I'm guessing you left at 40 MHz. In that case, each cycle takes 25 ns, so 6 cycles is 150 ns, or 0.15 µs.

 

That means your loops are running with periods of 10.15 µs and 1000.15 µs. In one second, they will run roughly 98,522 and 999.85 times respectively. Those aren't the sample rates you're expecting. Do you see why it's a bad idea to use delays to control loop timing? Get rid of that sequence; it's not useful. You're better off with a wait function in your logic instead, running in parallel. That way, if the logic in the rest of your loop is shorter (as we've shown it is), the wait controls the timing of the loop. This should bring you closer to your desired rate.
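
To make the arithmetic concrete, here is a back-of-the-envelope check in Python (not LabVIEW), assuming the 40 MHz default clock and the ~6 cycles of loop logic guessed above:

```python
# Back-of-the-envelope check, not LabVIEW code: effective loop rate when the
# loop period is delay + logic (in series), under the assumptions above.

CLOCK_HZ = 40e6          # assumed default FPGA clock
LOGIC_CYCLES = 6         # assumed cycles for the AI reads, build array, FIFO write

logic_s = LOGIC_CYCLES / CLOCK_HZ                  # ~150 ns of loop logic

for delay_us, target_hz in [(10, 100_000), (1000, 1_000)]:
    period_s = delay_us * 1e-6 + logic_s           # delay and logic run in series
    actual_hz = 1.0 / period_s
    print(f"target {target_hz:>7} Hz -> actual {actual_hz:10.2f} Hz "
          f"(period {period_s * 1e6:.2f} us)")

# With a parallel wait (or the Loop Timer), the period becomes max(delay, logic)
# rather than their sum, so the loops hit their nominal rates.
```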

Message 3 of 7

@NapDynamite wrote:

Hi Rubinstein,

 

On the FPGA side, you could use separate While Loops for the two sampling rates and then use the Loop Timer to set the sampling times. Remove the Flat Sequence and For Loops. Try this and let us know.

 

 

Regards.


Be careful with this. He's feeding values from 2 and 4 channels into the for loops. You can't just get rid of them. You'd either need a loop time that is long enough to complete the for loop, or you'd want to look at the data manipulation palette to combine those outputs into a single bit-packed integer and split them on the other end. You can't just "rub off" the for loops.
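
For the bit-packing option, a minimal sketch (a Python stand-in rather than the actual LabVIEW data manipulation palette, assuming 16-bit readings) of combining two samples into one word and splitting them on the other end:

```python
# Minimal sketch, not LabVIEW: pack two assumed 16-bit AI readings into one
# 32-bit integer before the FIFO write, and split them back apart on the host.

def pack(ch_a: int, ch_b: int) -> int:
    """Pack two unsigned 16-bit samples into one 32-bit word."""
    return ((ch_a & 0xFFFF) << 16) | (ch_b & 0xFFFF)

def unpack(word: int) -> tuple:
    """Split a 32-bit word back into the two 16-bit samples."""
    return (word >> 16) & 0xFFFF, word & 0xFFFF

assert unpack(pack(0x1234, 0xABCD)) == (0x1234, 0xABCD)
```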

Message 4 of 7

You have two separate clocks. I see that you are using the Loop Timer Express VI, which will make your code execute periodically as instructed, but, simply put, there is nothing relating your two clocks, so it is not surprising that there is no sync.

 

I'm not sure about your available resources, but a simple solution would be to put both processes in the same loop, operate it at the faster rate, and then use a loop counter inside a case structure to execute the for loop on every 100th iteration. This is essentially a clock divider, with two outputs working off the same clock with different divisors. Note that you probably won't use any extra resources by sampling at the higher frequency, so you can just put the values into the FIFO periodically.
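
A rough sketch of that clock-divider structure, written as Python pseudo-code since the real implementation would be a LabVIEW FPGA diagram; the read/FIFO names are placeholders, not actual node names:

```python
# Rough sketch, not LabVIEW: one loop at the fast rate, slow channels read
# only on every 100th iteration. read_fast/read_slow/fifo_* are placeholders.

DIVISOR = 100  # 100 kHz fast rate / 1 kHz slow rate

def acquire(iterations, read_fast, read_slow, fifo_fast, fifo_slow):
    counter = 0
    for _ in range(iterations):                # one iteration per 100 kHz tick
        fifo_fast(read_fast())                 # 4 fast channels, every iteration
        if counter == DIVISOR - 1:             # "case structure": every 100th iteration
            fifo_slow(read_slow())             # 2 slow channels, same tick as a fast sample
            counter = 0
        else:
            counter += 1
```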

 

One final word of advice would be to test with non-periodic signals, so that you can tell whether you are off by an entire slow-sample period or not. You could still be perfectly synchronized at the slow rate but have an integer number of samples of delay. Perhaps turning your signal generator on/off to capture the turn-on glitch would give you such a signal.

Message 5 of 7

Thank you NapDynamite, natasftw, and rik_aspinall for the great suggestions. 

 

What I understand is that with my FPGA VI in the first post, the signals were actually being sampled at f1 < 100 kHz and f2 < 1 kHz (where f1/f2 was not necessarily equal to 100), but they were written as if they were exactly f1 = 100 kHz and f2 = 1 kHz, respectively. That was causing a varying amount of delay between the signals.
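
A quick numeric illustration of that effect, using the approximate actual periods estimated earlier in the thread (assumed numbers, not measurements):

```python
# Quick illustration, not LabVIEW: samples taken at the actual period but
# plotted at the nominal period land earlier and earlier in apparent time.

duration_s = 150.0
for name, nominal_us, actual_us in [("100 kHz loop", 10.0, 10.15),
                                    ("1 kHz loop", 1000.0, 1000.15)]:
    n = duration_s / (actual_us * 1e-6)        # samples actually taken in 150 s
    apparent_s = n * nominal_us * 1e-6         # where the host plots the last sample
    print(f"{name}: last sample plotted at ~{apparent_s:.2f} s instead of {duration_s:.0f} s")

# The two traces shrink by different amounts (~2.2 s vs ~0.02 s here), which is
# what shows up as a phase shift that wanders over the recording.
```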

 

My FPGA schematic in the first post was based on the white paper "Advanced Data Acquisition Techniques With NI R Series" (http://www.ni.com/white-paper/2993/en/), where a Flat Sequence is also used in Fig. 5. But as pointed out by NapDynamite and natasftw, it was affecting my actual sampling rates - thank you natasftw for explaining how.

 

In one of the earlier posts (http://forums.ni.com/t5/LabVIEW/Why-use-FPGA-Loop-Timer-instead-of-FPGA-Wait-to-time-loops/td-p/1188...), a user says:

 

"A loop timer will ensure that the loop executes in the specified time (Count input).  It will execute the code within that loop and then wait the required amount of time so that the "Count" input is reached."

 

Therefore, I rely on the Loop Timers to set the sampling periods, each of which covers the execution times of the A/D conversions, the Build Array, and the For Loop that writes to the FIFO. A screenshot of the FPGA schematic is attached. Essentially, compared with the VI in my first post, I only got rid of the Flat Sequence. Right now I don't have that varying-delay issue. (@rik_aspinall, great point - I confirmed with a varying-frequency signal.)

 

The problem I am having right now is triggering the A/D conversions in the two loops at the same time. Although I have a Case Structure containing the two While Loops and use a Boolean switch to control it, in only maybe half of the recordings the signals are synchronized; in the rest they are not.

Do you have any idea why?

 

Thanks.

 

Message 6 of 7

If you want them synchronized exactly, you'll want them in the same I/O Node.  That requires bringing them into the same loop and using decimation, as rik suggested.

 

Try this: right-click on your 40 MHz clock in your project and choose to derive a new clock. Derive that clock at 100 kHz. Replace the while loop with a Timed Loop and use that 100 kHz clock as your timing source. (I'd have to put it together to see if this will complain or not. It might not be happy about the AI and throw an error. If that's the case, we can look at the while loop structure as well; they should be relatively similar.) Put all of your AI tasks into the same loop, into the same node. Add a shift register initialized to 0 to the loop. Use a greater/equal check to see if it's 99. If it is, use the data from your 1 kHz sample. In all other cases, disregard that data. Following the check, use a select: if true, output 0; if false, output a value incrementing the prior shift register. This creates a mechanism to check for every 100th sample without using divides (computational nightmares).
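
A Python stand-in (not LabVIEW) for that shift-register decimation logic, with placeholder names for the I/O:

```python
# Python stand-in for the logic above: a shift register holding a counter, a
# greater/equal check against 99, and a select that resets or increments the
# count. read_all/use_fast/use_slow are placeholder names, not actual nodes.

def timed_loop(ticks, read_all, use_fast, use_slow):
    count = 0                                   # shift register initialized to 0
    for _ in range(ticks):                      # one iteration per 100 kHz tick
        fast, slow = read_all()                 # single I/O node reads all 6 channels
        use_fast(fast)                          # 100 kHz data is used every tick
        take_slow = count >= 99                 # greater/equal check
        if take_slow:
            use_slow(slow)                      # keep this tick's 1 kHz sample
        count = 0 if take_slow else count + 1   # select: reset on the 100th tick, else increment
```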

 

Granted, this binds your timing to rates that are 100x one another.  But, that's something you're going to have to look at if you're concerned about precise sync.

 

It was a tad misleading to suggest you have two clocks. You don't; you've only got the one. But you're not guaranteeing the loops start on the same cycle. Putting everything into the same I/O node ensures they're always read at the same time rather than potentially being a cycle apart. If you put things into the Timed Loop in FPGA code, you're using what's called a Single-Cycle Timed Loop: everything inside the loop happens within a single cycle. That's why you can run the entire code inside a loop with a 100 kHz clock and not run into issues with multiple cycles causing the same problem we were seeing before.

Message 7 of 7