11-03-2015 01:17 PM
I am experiencing a very odd and frustrating behavior in my application. When I first load and start it fresh, everything (about a dozen independent while loops, one for each instrument) executes at a reasonable pace and CPU load (~6%). However, after leaving the application running for several hours, the CPU load balloons to ~26% and the entire UI and instrument comms are as slow as molasses. Is anyone familiar with this "CPU leak" type behavior and how to prevent it? I need the application to operate consistently for weeks at a time.
The UI is large, with several tabbed panels containing graphics, plots, and other controls. Core instrument data is held in a hidden cluster control for each instrument, accessed via property nodes. Implemented in LabVIEW 2012.
11-03-2015 02:07 PM
How's the memory use for your application? Does it grow over time as well (check Windows Task Manager)? I'd guess it does; if not, that would be unusual. Assuming memory use is growing, the most common cause is an array that grows without bound. Do you append elements to an array inside a loop anywhere, letting it grow out of control? A subtler problem is repeatedly acquiring new references to named queues or notifiers; are you doing that? And how do you handle opening and saving files? Each time you log new data, do you open the file, append to it, then close it (usually the wrong approach), or do you open the file once when the code starts and keep the reference open for the remainder of the run (the correct approach)?
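For readers outside LabVIEW, here is a minimal sketch of the two logging patterns in Python (purely illustrative; the names are made up, and in LabVIEW this corresponds to the Open/Write/Close File primitives):

```python
# Wrong way: reopen the file for every sample. Every call pays the cost of
# open + seek-to-end + close, and that overhead adds up at high log rates.
def log_sample_slow(path, sample):
    with open(path, "a") as f:
        f.write(f"{sample}\n")

# Correct way: open once at startup, keep the reference, close at shutdown.
class Logger:
    def __init__(self, path):
        self.f = open(path, "a")     # one "Open File" at program start

    def log(self, sample):
        self.f.write(f"{sample}\n")  # write using the kept reference
        self.f.flush()               # optional: protect data if the app dies

    def close(self):
        self.f.close()               # one "Close File" at program end
```

The point is not the language but the shape: the file reference lives for the whole run instead of being created and destroyed on every logged point.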
JamesLM wrote:
The UI is large with several tabbed panels containing graphics, plots, and other controls. Core instrument data is held in hidden cluster controls for each, accessed via property nodes.
Well, that's definitely not a good idea. This is quite likely the worst way to hold onto data that needs to be accessed from multiple locations.
11-03-2015 02:36 PM
Hmm... With the CPU load at 26%, I wouldn't look at the CPU as the bottleneck. Rather, the increase in CPU load is indicative of something else going on. If the CPU were maxed out at 100% all the time, I'd be much more concerned with something directly eating up the CPU.
I like Nathan's theory of something related to memory being the bottleneck. If the system is paging to virtual memory, for example, things could really slow to a crawl.
In the absolute worst case, you could disable suspect areas of the code until the problem is reduced or eliminated, but often that is time consuming.
For what it's worth...
11-03-2015 03:40 PM
We definitely need to see some code.
11-19-2015 02:03 PM
The odd part is that it is a quad-core system, yet it is not maxing out any of the cores or the memory capacity.
I have made some progress on this. The culprit appears to be the chart history length of the 4 charts in the GUI. I had these set high, at about 10,000 points, because I want to be able to visualize what's happened over the past few days while still plotting new data every 10 seconds. However, it seems that anything over about 5,000 is not practicable for LV2012's UI execution system. At that level, load hovers at about 20% and things work OK. At the default of 1024, load levels off at about 10% and instrument comms remain very fast.
Any suggestions on creating very long history charts while still having fairly frequent new data points?
11-19-2015 02:59 PM
Are you saving (streaming) the data? I think, in principle (but possibly tricky in practice), you could envision a "Browse through saved data" utility that runs in a pop-up window. How much data are you taking (# channels × sample rate × sampling duration)? Another possibility could be to keep a fairly short display (relative to the data) and use a Graph (instead of a Chart) to plot sections of it. I recall we had a situation something like this a few years ago -- I don't remember how we dealt with it, but my colleague will probably know (it was his data) ...
Bob Schor
11-19-2015 03:54 PM
What's the size of your monitor? 10,000 points is probably at least 5x the number of horizontal pixels on your monitor, so you're never displaying all that data at once anyway. Decimate your data. If you don't need to zoom in on a particular section in more detail, you can discard the data after decimation; otherwise, hold onto it in an array so you can recalculate when the X axis scale changes. There are lots of decimation approaches, and you'll have to pick one appropriate for your data. One approach that still lets you see the noise in the data is to keep the highest and lowest value from within each group of points.
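That min/max decimation idea can be sketched like this (Python rather than LabVIEW, and `minmax_decimate` is a made-up name for illustration, not a built-in):

```python
def minmax_decimate(data, max_points):
    """Reduce data to at most max_points values while preserving visible
    noise: keep the min and max of each bucket of consecutive samples."""
    if len(data) <= max_points:
        return list(data)
    # Each bucket contributes two output points (its min and max),
    # so use max_points // 2 buckets.
    n_buckets = max(1, max_points // 2)
    bucket_size = -(-len(data) // n_buckets)  # ceiling division
    out = []
    for i in range(0, len(data), bucket_size):
        bucket = data[i:i + bucket_size]
        out.extend((min(bucket), max(bucket)))
    return out
```

Because every bucket's extremes survive, a one-sample spike anywhere in the raw data still shows up in the plotted subset, which a plain "keep every Nth point" decimation would miss.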
11-20-2015 09:02 AM
@JamesLM wrote:
The odd part is that it is a quad-core system, yet it is not maxing out any of the cores or the memory capacity.
I have made some progress on this. The culprit appears to be the chart history length of the 4 charts in the GUI. I had these set high, at about 10,000 points, because I want to be able to visualize what's happened over the past few days while still plotting new data every 10 seconds. However, it seems that anything over about 5,000 is not practicable for LV2012's UI execution system. At that level, load hovers at about 20% and things work OK. At the default of 1024, load levels off at about 10% and instrument comms remain very fast.
Any suggestions on creating very long history charts while still having fairly frequent new data points?
This is why you shouldn't keep data in front panel objects. Keep it in a shift register and only display e.g. the last 1000 points. The change should be easy enough and the result a big sigh of relief. 🙂
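The shape of that fix, sketched in Python as a stand-in for a LabVIEW shift register (the names here are illustrative, not an API): keep the full history yourself and hand the chart only a short, fixed-size tail to draw.

```python
from collections import deque

DISPLAY_POINTS = 1000

history = []                            # full record; never drawn directly
display = deque(maxlen=DISPLAY_POINTS)  # the short tail that gets plotted

def add_point(value):
    history.append(value)   # grows over days; cheap because it isn't drawn
    display.append(value)   # deque silently drops the oldest point
    return list(display)    # this short array is what you'd wire to a graph
```

The UI only ever redraws 1,000 points regardless of how long the run lasts, so the redraw cost stays constant instead of growing with the history.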
/Y