06-10-2019 11:00 AM
Good afternoon,
I’m currently setting up the generation and acquisition of SPI signals using an HSDIO NI-6541 module.
My main VI, “Acquisition and Generation fetch”, is divided into 3 sections:
The acquisition and decoding sections run in a Producer/Consumer loop. Unfortunately, decoding is slower than acquisition, so the queue backlog grows until my memory eventually overflows.
I would like to extend my acquisition time as much as possible by increasing the decoding speed. I attempted to do that through the “express mode” of “SPI_decode.vi”. However, the increase in decoding speed hasn’t resulted in a sufficient increase in acquisition time… I’m starting to think the problem might be elsewhere!
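To illustrate the symptom described above (not the actual LabVIEW code, and the rates are hypothetical): in a Producer/Consumer pattern, whenever the consumer (decode) is slower than the producer (acquisition), the queue backlog grows without bound, which is exactly what eventually overflows memory.

```python
# Hypothetical rates, for illustration only.
acquisition_rate = 4  # records produced per second
decode_rate = 3       # records consumed per second

backlog = 0
for second in range(10):
    # Each second, the producer enqueues more records than the
    # consumer can dequeue, so the difference accumulates.
    backlog += acquisition_rate - decode_rate
    print(f"after {second + 1}s: {backlog} records queued")
```

The backlog grows by one record per second here; no matter how large the queue, a consumer that is even slightly slower than the producer will eventually exhaust memory, which is why speeding up the decode (or bounding the queue) matters.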
Here is some data:
F_HTR (Hz) | Express decode | T_loop (s) | SPI frames per record | Records done before overflow | Time before overflow (s) | Number of decode_SPI before overflow
7200 | False | 1   | 2 | 248400 | 69 | 9
7200 | True  | 1   | 2 | 298800 | 83 | 30
7200 | False | 2   | 2 | 230400 | 64 | 2
7200 | True  | 2   | 2 | 288000 | 80 | 14
7200 | False | 2.5 | 2 | 234000 | 65 | 1
7200 | True  | 2.5 | 2 | 306000 | 85 | 12
Let’s consider the last 2 rows:
Now, I have 2 questions:
Thanks a lot in advance for your help,
Hugues
06-12-2019 03:06 AM
Could you zip the code or post decode_spi.vi separately?
06-12-2019 03:16 AM
Hi wiebe,
Please find attached "decode_SPI.vi".
Please note that I have to decode my SPI signal because it is sampled by the On Board Clock (20 MHz) instead of SPI_CLK (10 MHz). I'm currently working with NI technical support on sampling my MISO signal with SPI_CLK. (It’s not trivial because SPI_CLK is not free-running, and I don’t know how I could generate a free-running 10 MHz clock while generating triggered SPI signals.)
SPI_CLK is generated by the HSDIO module and is therefore synchronized with the On Board Clock. One way to decrease decode_SPI's execution time would be to not look for SPI_CLK's rising edges but to simply take every second sample as a rising edge.
I was hoping you could point out bad coding practices, such as dynamically growing arrays. I started using LabVIEW a month ago and am still struggling with the basics.
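A rough sketch of that idea (in Python with made-up waveforms, since the real code is a LabVIEW VI): with MISO oversampled at 20 MHz and SPI_CLK at 10 MHz, there are exactly 2 samples per clock period, so a fixed stride can replace the edge search.

```python
import numpy as np

# Hypothetical captured waveforms: 2 samples per SPI_CLK period.
miso = np.array([0, 0, 1, 1, 0, 0, 1, 1], dtype=np.uint8)
clk  = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=np.uint8)

# Approach 1: search for rising edges of SPI_CLK, then index MISO there.
rising = np.flatnonzero(np.diff(clk.astype(np.int8)) == 1) + 1
bits_by_edge = miso[rising]

# Approach 2: since SPI_CLK is derived from the same 20 MHz clock,
# just take every second sample, starting at the first rising edge.
bits_by_stride = miso[1::2]

print(bits_by_edge)    # [0 1 0 1]
print(bits_by_stride)  # [0 1 0 1]
```

The stride approach skips the per-sample edge comparison entirely; it only works because generation and acquisition share the On Board Clock, so the 2-samples-per-bit relationship is guaranteed.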
Thanks a lot for your help,
Hugues
06-12-2019 03:46 AM
There's probably some speed to gain here. Some style improvements as well, but nothing too shocking.
For starters, use auto-indexing. There's no point in using shift registers and a Build Array. LabVIEW will do the same thing, and almost always more efficiently. It will clean up the code as well.
The two subVIs are missing. They are significant. As their result is always used to get all the indices, the VIs can probably be modified to return the indices directly. That will at worst be a cleanup/simplification, but it might increase performance a little as well.
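A loose Python analogy for this advice (the names and sizes are hypothetical): Build Array inside a loop grows the output one element at a time, whereas auto-indexing lets LabVIEW determine the output size from the loop and allocate it once, much like declaring the per-element operation up front.

```python
samples = list(range(1000))  # hypothetical decoded samples

# "Shift register + Build Array" style: grow the result one element
# at a time inside the loop.
decoded_grow = []
for s in samples:
    decoded_grow.append(s * 2)

# "Auto-indexing" style: state the per-element operation and let the
# runtime size the output from the input (here, a comprehension).
decoded_auto = [s * 2 for s in samples]

assert decoded_grow == decoded_auto
```

The analogy is imperfect (Python lists grow cheaply, while repeated Build Array in LabVIEW can force repeated reallocation and copying), but the structural point is the same: express the per-iteration work and let the loop boundary collect the output.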
Can you post the other two VIs? And also some realistic inputs, preferably with desired outputs? That will help us a lot.
06-12-2019 03:49 AM
Here are the 2 other VI.
I am not too sure of how auto indexing works, I will do some research and get back to you if I don't understand.
Thanks 🙂
Hugues
06-12-2019 04:15 AM - edited 06-12-2019 04:20 AM
I think this does the same thing. It's easy to make a mistake though without being able to run the VI.
Spot the differences.
It's starting to fit the screen. I don't expect major speed improvements, but it shouldn't run slower.
I'm sure there are some optimizations possible, but usually algorithmic improvements beat semantic improvements by an order of magnitude (~10X).
(EDIT: That crossed your post of the subVIs)
06-12-2019 08:52 AM
Well, it's a lot faster! I've been acquiring data for the past 30 minutes without overflow, amazing 🙂
Thanks a lot!
06-13-2019 03:15 AM
Great! Probably just the autoindexing that did the trick.