08-20-2020 11:27 AM
I am using LabVIEW 2015 SP1, Version 15.0.1f1 (32-bit).
I am receiving data from a digital scope via USB3. I am getting a data array with data points separated by 8 ns, which means the PC has to process ~256 MB/s.
I search for a trigger in the incoming array, then extract and sum a portion of the data based on the trigger position. This worked fine, but when I added a search for a second trigger the PC could no longer process all the data in time, so I lost a lot of data.
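Roughly, in Python/NumPy terms (a sketch of the idea only, not my actual VI; the trigger condition, THRESHOLD, and WINDOW values are made up), the per-block processing looks like this:

    # Sketch of the processing described above: find trigger positions in
    # one acquisition block, then sum a fixed window of samples after each
    # trigger. Names and parameters are hypothetical.
    import numpy as np

    WINDOW = 512        # samples to sum after each trigger (assumed)
    THRESHOLD = 1000    # trigger level in raw counts (assumed)

    def process_block(samples: np.ndarray) -> np.ndarray:
        """samples: 1-D uint16 array from one scope transfer."""
        s = samples.astype(np.int64)            # avoid overflow when summing
        # Rising-edge trigger: sample below threshold followed by one at/above it.
        edges = np.flatnonzero((s[:-1] < THRESHOLD) & (s[1:] >= THRESHOLD)) + 1
        # Keep only triggers whose full window fits inside this block.
        edges = edges[edges + WINDOW <= s.size]
        if edges.size == 0:
            return np.empty(0, dtype=np.int64)
        # Gather WINDOW samples after each trigger and sum them, fully vectorized.
        windows = s[edges[:, None] + np.arange(WINDOW)]
        return windows.sum(axis=1)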
Hence my question: how can I speed up the processing of this massive data stream coming from the scope? Can I solve the problem with a more powerful PC? Currently I am running an i7-4790 CPU @ 3.6 GHz, 8 GB RAM, Win7 64-bit.
Does LabVIEW processing speed correlate directly with the CPU benchmarks found on the internet? Could a different version of LabVIEW, for example a 64-bit one, perform faster?
08-20-2020 11:42 AM
I wouldn't go upgrading your PC just yet. I don't think 64-bit LabVIEW would make a difference in processing speed: a 64-bit application can use more RAM if it is available, but if you're not running into memory problems it won't help. Later versions of LabVIEW often have performance improvements, but the first place to look is your code.
It looks like your code started from the manufacturer's example, but that doesn't always mean it is good code. For instance, that DLL call at the far right side of the block diagram will actually execute immediately after the VI starts running; its placement to the right of the while loop does not enforce execution order.
I would start over with something simpler and see what the speed is like. Also, is it confirmed to be a USB 3 instrument? If it uses a USB 2.0 chip, plugging it into a USB 3 port on your computer will not give any speed improvement.
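For example (a Python sketch, since I can't post a diagram here; in LabVIEW you would wrap the code under test between two Tick Count (ms) nodes), feed synthetic blocks through just the processing step and measure the sustained rate:

    # Benchmark idea: time the processing step alone on synthetic data,
    # with no instrument attached, and report sustained MB/s.
    import time
    import numpy as np

    BLOCK = 4 * 1024 * 1024                        # samples per test block
    data = np.random.randint(0, 65536, BLOCK, dtype=np.uint16)

    reps = 100
    t0 = time.perf_counter()
    for _ in range(reps):
        _ = int((data & 0x01).sum())               # stand-in for the real work
    elapsed = time.perf_counter() - t0
    print(f"{reps * data.nbytes / elapsed / 1e6:.0f} MB/s sustained")

If the stand-in already falls short of what the scope delivers, no amount of tweaking the rest of the loop will save you.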
08-20-2020 12:25 PM
Upgrading hardware will give you much less than a factor of two, and you'll run into the same wall again the next time you want to add a feature. It is a myth that computer upgrades are a magic bullet for performance problems.
A better path would be to analyze and streamline your code. I see quite a few places that could be tightened for potentially orders-of-magnitude speed improvements. There seems to be a lot of slack left!
08-20-2020 12:31 PM
Thank you for the information.
I have already spent some time optimizing and debugging the code. I think the speed I am getting is close to the maximum.
I am sure I am getting the data via USB3. These are 8 digital channels sampled every 8 ns, so about 125 MB/s arrives over USB3. The driver then converts the samples to U16, which already means ~256 MB/s. Most of the time is lost inside the while loop, where I have to extract the required digital channel and perform a search in that data.
With only one digital channel the PC could still handle the task, so I know that everything works correctly.
When I need to extract 3 digital channels (2 of them for the trigger) and search in them, I run into problems.
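To show it in textual form: assuming the driver leaves each digital channel in one bit of every U16 sample (my guess at the layout), extracting a channel is a mask-and-compare over the whole block, e.g. in Python/NumPy:

    # Sketch under an assumed data layout: each U16 sample carries the 8
    # digital channels in its low bits, one channel per bit.
    import numpy as np

    def extract_channel(samples: np.ndarray, channel: int) -> np.ndarray:
        """Boolean array: the state of one digital channel per sample."""
        return (samples & np.uint16(1 << channel)) != 0

    def find_rising_edges(ch: np.ndarray) -> np.ndarray:
        """Indices where a digital channel goes 0 -> 1."""
        return np.flatnonzero(~ch[:-1] & ch[1:]) + 1

    # Extracting three channels separately costs three passes over the
    # ~256 MB/s stream; combining the masks (samples & 0b0111, say) into
    # one pass touches the data only once.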
I know there are much more powerful processors available than the one I am using right now, but these are multi-core processors. How well is parallelization implemented in LabVIEW? If I got a 32-core processor instead of an 8-core one, would I get a 4x boost in speed?
08-20-2020 12:36 PM
Thank you. It is a pity that the boost is so small. I will go through the code based on your hints.
08-20-2020 12:51 PM
@Serge1 wrote:
How well is parallelization implemented in LabVIEW? If I got a 32-core processor instead of an 8-core one, would I get a 4x boost in speed?
You can if the problem can be split into independent tasks and the critical section is minimal. This requires careful design and benchmarking.
In your case, splitting the data and reassembling the results might add too much overhead: each calculation is so fast that parallelization might actually slow you down.
(It can be done for certain problems; have a look at my example and details.)
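As a rough textual illustration (a Python sketch, not the LabVIEW example I refer to; the function names are invented), splitting one block across workers looks like this, and the boundary handling is exactly the "critical section" overhead I mean:

    # Sketch of splitting one acquisition block across worker processes.
    # Caveat: for fast per-chunk work, moving the data to the workers can
    # cost more than the parallel speedup gains.
    import numpy as np
    from multiprocessing import Pool

    def count_edges(chunk: np.ndarray) -> int:
        """Stand-in for the real per-chunk trigger search."""
        ch = (chunk & 0x01) != 0
        return int(np.count_nonzero(~ch[:-1] & ch[1:]))

    if __name__ == "__main__":
        data = np.random.randint(0, 65536, 8_000_000, dtype=np.uint16)
        chunks = np.array_split(data, 8)        # one chunk per core
        with Pool(processes=8) as pool:
            partials = pool.map(count_edges, chunks)
        # Edges straddling chunk boundaries need a separate check; keeping
        # that reassembly step small is what makes the split pay off.
        print(sum(partials))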
08-20-2020 01:31 PM - edited 08-20-2020 01:33 PM
Have you read through and implemented the recommendations here? Have you talked to PicoScope about the issues? They are usually happy to help.
https://www.picotech.com/library/oscilloscopes/streaming-mode
Craig
08-20-2020 05:34 PM
Yes, I read this and I also contacted the people at PicoScope. However, the bottleneck here really is the speed of the data processing in LabVIEW, so they cannot help much.
08-20-2020 05:40 PM
Thank you, it was interesting to read. It sounds like the newest version of LabVIEW could actually help. With the current version, all 8 cores are heavily loaded. Does that mean the parallelization is efficient for my calculation?
08-20-2020 07:23 PM
No, the LabVIEW version makes no difference. If all your cores are near 100%, you need to find out whether they are actually doing useful work. (A poorly written program can saturate all cores doing nothing useful; place a dozen empty greedy while loops and you'll see what I mean.)
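If you want to see that effect in textual form, here is a Python sketch of such greedy loops (illustration only; the textual analogue of an empty LabVIEW while loop with no wait inside):

    # Demonstration only: busy-wait loops that saturate every CPU core
    # while producing nothing.
    import multiprocessing as mp
    import os
    import time

    def greedy_loop():
        while True:        # no wait, no work: 100% of one core, zero output
            pass

    if __name__ == "__main__":
        workers = [mp.Process(target=greedy_loop, daemon=True)
                   for _ in range(os.cpu_count())]
        for w in workers:
            w.start()
        time.sleep(5)      # watch Task Manager: all cores pegged at ~100%
        for w in workers:
            w.terminate()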