05-09-2016 04:57 PM - edited 05-09-2016 04:59 PM
Hi all,
I am using NI-MyRIO 1900. I have 2 questions:
1- How does changing the device's clock frequency affect its sampling rate? [The max aggregate sampling rate of the myRIO is 500 kS/s @ 40 MHz; say I increased it to 800 Hz, how is the ADC rate affected?]
2- I am using the Desktop Execution Node to simulate my FPGA code. However, I can only know the "simulated time" of the code, not the "real time". Is there a way to estimate the real time without trying the code out on the hardware? This white paper has some info on real and simulated time: http://www.ni.com/white-paper/51859/en/
Thanks in advance,
Hazem
05-09-2016 06:21 PM
If the max sampling rate is 500 kS/s, it's running at 500 kHz. If the clock is running at 40 MHz, what are you trying to increase to reach 800 Hz? That's orders of magnitude smaller than the other numbers you mentioned.
It sounds like you're working with analog data, since you're asking about the ADC. The ADC is a hardware component. Its maximum rate is fixed; you can't change that. The device clock changes how long each cycle takes. At 40 MHz, you're seeing 25 ns periods: each cycle takes 25 ns. If your ADC is maxed out at 500 kHz, you require 2us, or 200ns, to get a sample. That just means it'll take 8 cycles to get the sample. The rest of your code will take however many cycles it needs, and some of that can happen in parallel. In fact, it's almost guaranteed you'll be doing some of it in parallel. Changing the clock rate isn't likely to do what you're looking for.
05-10-2016 04:56 AM
Thank you for responding,
So what would happen if I decreased the clock frequency? Shouldn't that affect the ADC as a hardware component reliant on that clock?
Also, to relate all this to question 2: I found in the link I posted that a while loop alone (with no code inside) would take 183 cycles!
Say the code needs 400 cycles to process one sample. If a buffer stores incoming samples, it would accumulate 50 samples (one sample every 8 cycles) during the processing phase. As it processes the following sample, another 50 will be stored (assuming parallel acquisition), and so on.
So, if you want to implement a real-time system (in which the acquisition buffer must NOT grow without bound), this means that the processing time MUST be equal to (or less than) the acquisition period (the time between received samples). In some applications this would restrict the sampling rate to low frequencies (which might cause aliasing). Is there anything I am missing?
05-10-2016 05:32 AM
By the way, it is 80 cycles between samples at the max sampling frequency. You must have dropped a zero.
This grants more time: instead of 50 samples, it would be 5. However, the issue still exists.
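The backlog argument above can be sketched in a few lines, using the corrected figure of 80 cycles between samples and the hypothetical 400-cycle processing time from this thread (illustrative Python, not NI code):

```python
# Hypothetical figures from this thread:
proc_cycles = 400   # assumed cycles to process one sample
acq_cycles = 80     # cycles between samples at 500 kS/s on a 40 MHz clock

# While one sample is being processed, new samples keep arriving in parallel.
arrivals_per_sample = proc_cycles // acq_cycles   # 5 samples arrive per processing phase
net_growth = arrivals_per_sample - 1              # buffer grows by 4 for each sample processed

print(arrivals_per_sample)   # 5
print(net_growth)            # 4
```

Note the backlog grows linearly (by 4 samples per sample processed), but it still grows without bound unless processing keeps up with acquisition.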
05-10-2016 05:42 AM
Hi Hazem,
exactly what clock do you want to modify? And why?
The link you mentioned before doesn't say an empty while loop will take 183 cycles to execute. The value can be as low as 1 tick if you are using a SCTL (Single-Cycle Timed Loop).
This is what takes 183 ticks:
There are a lot of methods to optimize FPGA operations for speed; you can read about the basics here:
http://www.ni.com/white-paper/3749/en/
If you are interested in high-throughput FPGA work, I recommend this document:
http://www.ni.com/tutorial/14600/en/
05-10-2016 05:43 AM
@natasftw wrote:
If your ADC is maxed out at 500 kHz, you require 2us, or 200ns, to get a sample. That just means it'll take 8 cycles to get the sample.
You lost a power of 10 in there. 2us = 2000ns = 80 clock cycles.
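The corrected arithmetic can be checked in a few lines (values taken from the thread: 40 MHz clock, 500 kS/s max ADC rate):

```python
# Figures quoted in this thread for the myRIO:
clock_hz = 40e6       # 40 MHz FPGA clock
sample_rate = 500e3   # 500 kS/s maximum ADC rate

clock_period_ns = 1e9 / clock_hz       # 25 ns per clock cycle
sample_period_ns = 1e9 / sample_rate   # 2000 ns (2 us) per sample
cycles_per_sample = sample_period_ns / clock_period_ns

print(clock_period_ns)    # 25.0
print(sample_period_ns)   # 2000.0
print(cycles_per_sample)  # 80.0
```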
05-10-2016 05:47 AM
@hazem93 wrote:
So, if you want to implement a real-time system (in which the acquisition buffer must NOT grow without bound), this means that the processing time MUST be equal to (or less than) the acquisition period (the time between received samples). In some applications this would restrict the sampling rate to low frequencies (which might cause aliasing). Is there anything I am missing?
The thing on the host side is that you can operate on arrays instead of individual samples. This often makes things a lot faster. So if your FPGA is sampling at 500 kS/s, you just need to be able to process 500 samples in 1 ms. That seems more than reasonable to me based on my applications, even on Windows.
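A rough sketch of that block-processing idea (illustrative Python, not NI API code; the block size and the averaging step are just placeholders for whatever the host actually does with each FIFO read):

```python
# At 500 kS/s the host only needs one 500-sample block per millisecond,
# rather than 500 individual reads per millisecond.
def process_block(block):
    # Placeholder for real per-block work (e.g. scaling, filtering, averaging)
    return sum(block) / len(block)

samples = list(range(5000))   # 10 ms of fake data at 500 kS/s
block_size = 500              # one block per millisecond

blocks = [samples[i:i + block_size] for i in range(0, len(samples), block_size)]
means = [process_block(b) for b in blocks]

print(len(means))   # 10 blocks -> 10 results
print(means[0])     # 249.5
```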
05-10-2016 06:09 AM - edited 05-10-2016 06:12 AM
Hi, Stockson.
I am checking the links you posted, they look very useful.
So is there a way to know the actual timing of each component (without the hardware) ?
05-10-2016 06:21 AM
Hi crossrulz,
Well, the thing is that I have to implement the entire processing on an FPGA [to compare the results later on with other platforms, such as a PC] and then send the results back to the host via FIFOs. So, are there other solutions?
05-10-2016 06:52 AM
Hey hazem,
it's best to use a SCTL, common sense, and the guidelines from the links I shared before 🙂 It is hard to accurately estimate the execution time without having more details. In general it is much better to just run a benchmark and check on your own. And even if you don't have the hardware, you can try to compile the code and see if you get any timing violations.
And when it comes to processing the results on the FPGA, I recommend either using the tips mentioned there (parallel execution, pipelining) or playing with SCTLs and high-throughput math functions. Remember that an FPGA is hardware, so you can run truly parallel threads. Use that.
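A toy model of why pipelining helps (just cycle counting, not LabVIEW code; the 3-stage operation and sample count are assumed values): with pipelining, a multi-cycle operation still accepts one new sample per cycle, so throughput approaches 1 sample/cycle even though latency stays at 3 cycles.

```python
# Assumed: an operation split into 3 pipeline stages, 100 samples to push through.
stages = 3
samples = 100

# Serial: each sample must finish all stages before the next one starts.
serial_cycles = samples * stages            # 300 cycles

# Pipelined: fill the pipeline once, then one result emerges per cycle.
pipelined_cycles = stages + (samples - 1)   # 102 cycles

print(serial_cycles)      # 300
print(pipelined_cycles)   # 102
```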