04-21-2020 09:47 AM
Hello,
I'd like to point out a performance issue with the NXG style XY Graph. I've attached a VI that compares the Silver style XY Graph (first tab) with the NXG style XY Graph (second tab). Notice that the Silver graph performs reasonably (other styles do as well): you can zoom, autoscale, etc.
The NXG style XY Graph's performance is abysmal! My computer grinds to a halt with the exact same dataset. Why?
I'm using LabVIEW 2019 Professional, Windows 10.
(I'd really like NI to officially respond to this)
Thanks.
04-21-2020 02:36 PM - edited 04-21-2020 02:37 PM
Hi immercd,
switch off anti-aliasing in the NXG-style graph to get the same speed as with your Silver-style graph.
(Or the other way around: enable anti-aliasing in the Silver-style graph to make it as slow as the NXG graph…)
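(Side note, not LabVIEW code: since LabVIEW diagrams are graphical, here is a rough Python/matplotlib analogy of why anti-aliasing can dominate redraw time when a plot has tens of thousands of line segments. The 50K point count and the off-screen timing approach are my own illustrative assumptions, not anything measured in this thread.)

```python
import time
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so we time only the drawing
import matplotlib.pyplot as plt

x = np.arange(50_000)
y = np.random.randn(50_000).cumsum()

for aa in (True, False):
    fig, ax = plt.subplots()
    ax.plot(x, y, antialiased=aa, linewidth=1)
    t0 = time.perf_counter()
    fig.canvas.draw()  # force a full render of the 50K-segment polyline
    print(f"antialiased={aa}: {(time.perf_counter() - t0) * 1000:.1f} ms")
    plt.close(fig)
```

The exact numbers depend on the backend and machine, but the anti-aliased pass is the one doing the extra per-pixel blending work, which is the same kind of cost the NXG-style graph pays by default.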
04-21-2020 03:07 PM
Where is the anti-alias setting?
04-21-2020 03:42 PM
Aha, I found it! It's in the plot legend, when you right-click on a plot.
This is definitely the issue.
04-22-2020 12:35 AM
04-22-2020 01:08 PM
Thanks for the reminder about decimation to help with plot performance on large datasets. I don't think 50K points is enough to warrant it.
I just wish NI would actually incorporate decimation into their graphs (or at least offer an option to enable it)... this is a staple of any scientific plotting package! It's been on the Idea Exchange for 10 years!
04-22-2020 01:55 PM
@immercd1 wrote:
Thanks for the reminder about decimation to help with plot performance on large datasets. I don't think 50K points is enough to warrant it.
I just wish NI would actually incorporate decimation into their graphs (or at least offer an option to enable it)... this is a staple of any scientific plotting package! It's been on the Idea Exchange for 10 years!
According to this, it is incorporated.
However, that is not the only reason a plot can be slow. Besides anti-aliasing, there is the fact that LabVIEW is a dataflow language: the problem with trying to plot millions of points is not that LabVIEW isn't decimating the displayed data, it's that LabVIEW makes multiple copies of the data, one for display, one in the transfer buffer, and so on. Making my own decimation didn't speed up the drawing itself, but it greatly reduced the memory footprint, which in turn sped things up.
mcduff
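For anyone wondering what "making my own decimation" can look like in principle: below is a minimal min/max decimation sketch. LabVIEW code is graphical, so this is written in Python purely as an illustration; the function name, bucket count, and example signal are my own assumptions, not mcduff's actual implementation. Keeping the minimum and maximum of each bucket (rather than simply taking every Nth sample) preserves spikes that naive subsampling would drop, so the decimated trace looks the same on screen.

```python
import numpy as np

def minmax_decimate(x, y, max_points=2000):
    """Reduce (x, y) to roughly max_points for display by keeping the
    min and max sample of each bucket. Generic sketch, not NI code."""
    n = len(x)
    if n <= max_points:
        return np.asarray(x), np.asarray(y)
    n_buckets = max_points // 2          # two points kept per bucket
    edges = np.linspace(0, n, n_buckets + 1, dtype=int)
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi <= lo:
            continue
        seg = y[lo:hi]
        i_min = lo + int(np.argmin(seg))
        i_max = lo + int(np.argmax(seg))
        for i in sorted({i_min, i_max}):  # keep extremes in time order
            xs.append(x[i])
            ys.append(y[i])
    return np.asarray(xs), np.asarray(ys)

# Example: 5 million samples reduced to ~2000 points before wiring
# the result to a graph indicator.
x = np.linspace(0, 100, 5_000_000)
y = np.sin(x) + 0.1 * np.random.randn(x.size)
xd, yd = minmax_decimate(x, y)
print(len(xd))
```

As mcduff notes, the bigger win in LabVIEW is often on the memory side: passing ~2000 points to the graph instead of millions means every extra copy the graph makes is of a tiny array.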