10-02-2023 01:39 AM
Hi
I am trying to use FIFO Write in an SCTL with a 200 MHz clock, but I get a timing violation error.
What is the maximum rate at which I can write data to a FIFO?
Can I use another method to transfer data to the CPU with a 200 MHz clock?
I am using an sbRIO-9608 and LabVIEW 2020.
Thanks.
10-02-2023 05:55 AM
Depending on the speed grade, achievable frequencies vary, but all of them should manage 200 MHz.
The problem is more likely the code providing the data/reading the data or a routing issue.
If you can show the code, we might be able to offer more help.
10-02-2023 10:22 AM
Hi, and thanks for your reply.
I will post the code later because I won't have my laptop for a few days. All I have is a state machine: a case structure inside a 200 MHz SCTL, where one of the cases contains only the FIFO Write function, with the data coming from the previous case.
Thanks.
10-02-2023 10:37 AM
When running at speeds like that you'll likely need to employ pipelining, which reduces the timing requirements of each loop iteration by spreading the work over multiple iterations. I got some pretty decent results googling "labview fpga pipelining" that could get you started.
10-04-2023 09:27 AM
Hi,
How can I use pipelining when writing to a FIFO?
I need to read a serial communication that runs at 50 MHz and identify the start bit by sampling the signal at 200 MHz. If I use pipelining, won't I lose data?
I attached a very simple code example that tries to write to a FIFO at 200 MHz and fails with a timing violation.
How can I solve this?
Thanks,
Moran.
10-04-2023 09:45 AM - edited 10-04-2023 09:47 AM
@Moran78 wrote:
Hi,
How can I use pipelining when writing to a FIFO?
I need to read a serial communication that runs at 50 MHz and identify the start bit by sampling the signal at 200 MHz. If I use pipelining, won't I lose data?
No! Pipelining does not lose data (unless you program a bug). It does increase the latency, since the data is processed in little chunks over multiple iterations. So from the time your data arrives until it is actually pushed into the FIFO there is a certain, normally constant, delay of n loop iterations, where n is the number of pipeline stages your algorithm uses.
Basically, rather than doing one long calculation that you try to cram between your input and output stage in a single loop iteration, you divide the calculation into several smaller ones, each applied in parallel to the data from the previous iteration.
Unfortunately I only have LabVIEW 2018 on this computer so I can't look at your VI.
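As a rough illustration of the idea (plain Python rather than LabVIEW, since a diagram can't be pasted as text; the two stages and the "work" they do are made up for the example):

```python
# Rough illustration only: a 2-stage pipeline modelled in plain Python.
# In LabVIEW FPGA the stage register would be a feedback node / shift
# register inside the SCTL; the stage contents here are invented.

def run_pipeline(samples):
    stage1_reg = None   # result of stage 1 from the previous iteration
    output = []

    for sample in samples:              # one pass of this loop = one SCTL iteration
        # Stage 2: finish processing the value that entered the pipe last iteration
        if stage1_reg is not None:
            output.append(stage1_reg * 2)   # second half of the work

        # Stage 1: do only the first half of the work on the new sample
        stage1_reg = sample + 1

    # Flush the last value still sitting in the pipeline
    if stage1_reg is not None:
        output.append(stage1_reg * 2)

    return output

print(run_pipeline([1, 2, 3, 4]))   # [4, 6, 8, 10]
```

Every sample still comes out; it just appears one iteration later than it would without the pipeline.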
10-04-2023 09:45 AM
The whole point of pipelining is that you touch every bit of data; nothing is lost. Processing happens at the same time as acquisition, while the FIFO is being pushed to. Literally all it does is break up a long chain of functionality into multiple shorter steps. I'd highly recommend reading the linked resource and doing some experimentation on a simple setup. It sounds like there are some misunderstandings of how the code and FPGAs work (evident from thinking pipelining means lost data) that will ultimately block success in overcoming these types of timing violations.
10-04-2023 09:47 AM
Rolf with the snipe 😁
10-05-2023 12:01 AM
Hi, and thank you for your answer.
I know pipelining doesn't lose data; maybe I used the wrong words. I meant that the data is not valid in the first few cycles.
I need to sample an I/O line (which is high when idle) and wait for it to go low.
Then I have to check that the line stays low for 40 ns, and then sample 16 bits of data, where each bit is 20 ns.
Then I save the word into a FIFO to transfer it to the host.
All the states work fine except the one that writes to the FIFO (by the way, if I use a front panel indicator instead, it works).
Can you please show me how I can use pipelining to solve this issue?
Can you send me an example or a link with an explanation?
Thank you very much.
Regards,
Moran.
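To put the timing in code form: at a 200 MHz SCTL rate each tick is 5 ns, so the 40 ns start check is 8 ticks and each 20 ns bit is 4 ticks. Below is a rough Python sketch of that state machine, tick by tick. It is not the actual LabVIEW code; the state names, the mid-bit sampling point, the MSB-first bit order, and the "must stay low for all 8 ticks" check are assumptions for illustration.

```python
# Rough sketch of the described state machine, one step() call per 5 ns tick.
# The FIFO is just a Python list standing in for the target-to-host FIFO.

IDLE, CHECK_START, SAMPLE_BITS, WRITE = range(4)

class Decoder:
    def __init__(self):
        self.state = IDLE
        self.tick_count = 0
        self.bit_count = 0
        self.word = 0
        self.fifo = []

    def step(self, line):   # line: current level of the digital input (True = high)
        if self.state == IDLE:
            if not line:                          # falling edge: possible start bit
                self.state, self.tick_count = CHECK_START, 1
        elif self.state == CHECK_START:
            if line:                              # went high too early: glitch, not a start bit
                self.state = IDLE
            else:
                self.tick_count += 1
                if self.tick_count == 8:          # low for 40 ns -> real start bit
                    self.state, self.tick_count = SAMPLE_BITS, 0
                    self.bit_count, self.word = 0, 0
        elif self.state == SAMPLE_BITS:
            self.tick_count += 1
            if self.tick_count == 2:              # sample near the middle of the 20 ns bit
                self.word = (self.word << 1) | int(line)   # MSB first (assumed)
            elif self.tick_count == 4:            # end of this bit window
                self.tick_count = 0
                self.bit_count += 1
                if self.bit_count == 16:
                    self.state = WRITE
        elif self.state == WRITE:
            self.fifo.append(self.word)           # the step that currently fails timing
            self.state = IDLE
```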
10-05-2023 11:15 AM
One of the things you can send alongside the pipelined data is a flag indicating whether that data is valid or not.
I'd propose we go the other way around: you try some things out, then post what you've tried and where you got stuck, and people can provide more assistance with your use case.
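For instance, the valid-flag pattern looks roughly like this (plain Python rather than LabVIEW; the FIFO is just a list, and the "every 4th sample is valid" rule is a stand-in for "a full 16-bit word has been assembled"):

```python
# Rough sketch of passing a "valid" flag along with pipelined data:
# the FIFO write only happens when the flag is True, so the pipeline
# can run every iteration without pushing garbage.

def pipeline_with_valid(samples):
    fifo = []
    reg_data, reg_valid = 0, False      # pipeline register between the two stages

    for i, sample in enumerate(samples):    # one pass = one SCTL iteration
        # Stage 2: write last iteration's result, but only if it was marked valid
        if reg_valid:
            fifo.append(reg_data)

        # Stage 1: produce a new result and decide whether it is valid
        reg_data = sample
        reg_valid = (i % 4 == 3)

    if reg_valid:                        # flush the last pipelined value
        fifo.append(reg_data)
    return fifo

print(pipeline_with_valid(list(range(12))))   # [3, 7, 11]
```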