LabVIEW


CPU load drops 50% when the application is minimized?

Hi,

 

I noticed that when my application is minimized, the CPU load drops by about 50%. When the front panel is visible (not minimized) the machine draws about 45 W, and when it is minimized it draws about 20 W (measured with the Core Temp software).

 

I tried setting a 2 s wait delay on all display loops, but it made no difference.

 

Any idea? I'm trying to reduce the heat dissipation of the computer (it sits inside a closed box), and the front panel seems to be the main factor!

 

Thank you.

Patrick

Message 1 of 12

Indeed, if you have a lot of front panel elements, there is additional work in drawing the graphics for the UI elements and updating them with the latest values.

Santhosh
Soliton Technologies

Message 2 of 12

I made a copy of the main VI with no indicators at all, and I see the same behavior.

Message 3 of 12

If your computer has high CPU use, blaming the front panel seems very simplistic.

 

The front panel is an essential part of the UI. What problem do you actually want to solve? How many loops are running (besides the "display loops"), and how do they communicate? Do you have lots of Value property nodes and local variables? What does your program do?

Message 4 of 12

There are two background loops that run without any wait delay: one does the image acquisition and the other inspects the images. When the application is minimized, those two loops seem to keep running normally (no slower than when the executable is not minimized).

 

All "display loops" have a 250 ms wait delay. I even tried a 2 s delay, and also tried removing all of them and keeping only the simple event structure loop; there was no change in CPU load. There are some Value property nodes, but not that many, and not that many local variables either. The loops communicate through FGVs and global variables.
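For readers unfamiliar with the FGV pattern mentioned above: a LabVIEW functional global variable serializes access to shared state because a non-reentrant VI admits only one caller at a time. A rough text-language analogy (Python, with hypothetical names not from the original application) is a lock-guarded accessor:

```python
import threading

class FunctionalGlobal:
    """Rough analogy to a LabVIEW FGV: one piece of shared state,
    with all access serialized, the way a non-reentrant VI
    runs only one call at a time."""

    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = initial

    def set(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value
```

The lock plays the role of the VI's non-reentrancy; the instance attribute plays the role of the uninitialized shift register holding the data between calls.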

 

I'll do more investigating tomorrow. I'll try running only the background loops and see whether the phenomenon persists.

 

thanks for helping

Patrick

Message 5 of 12
  • What is the program architecture? How many independent loops?
  • How does the program acquire images?
  • "Seem to run normally" is very vague. Do you have actual measurements?
  • What is the loop rate of the loops without waits? Have you tried adding a 0ms wait so they play nicer with other running parts?
  • What kind of value properties and locals (just scalars? Huge data structures?)
  • What do the "display loops" actually display? (Just a few indicators? Gigantic images? Graphs with millions of points?) How busy is the UI thread?
  • What is the function of the "event structure loop" (I am not familiar with that term). What are the events handled?

 

Words are typically insufficient to describe a program. Feel free to share some code.

Message 6 of 12

I remember an old thread where transparent borders with overlapping objects caused CPU spikes, probably due to constant redraws.

Qestit Systems
Certified LabVIEW Developer
Message 7 of 12

Hi,

 

Sorry for the delay; I've been focused on the startup of this application.

I tested again and there is no correlation with the number of indicators on the front panel: I removed all the indicators and the behavior is the same (20 W minimized, 45 W not minimized).

 

I added counters to my loops, and they run just as fast whether the window is minimized or not...

 

Hard to understand.

I attached the FP (blurred).

Message 8 of 12

Hi,

 

I have a similar observation/problem. The effect is the same (the VI's CPU load drops when it is minimized), but in my case the result is unwanted.

 

I made a VI to demonstrate this. On the front panel you can see that the time between while-loop iterations is 2 ms while the VI is the frontmost application on Windows. But when I switch to, e.g., a web browser, after a short while the iteration time jumps to 15 ms. When I switch back to LabVIEW, it drops back to 2 ms.

 

How do I fix this? I have a feeling it has something to do with Wait (ms).vi; how can I replace it? I think the operating system is altering the 1 ms tick resolution.
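One way to confirm what the OS is doing is to measure the actual interval around a short sleep. This is a minimal, platform-neutral sketch in Python (the original code is LabVIEW, so this is only an analogy for the measurement technique): if the timer quantum has fallen back to ~15.625 ms, a requested 2 ms sleep stretches toward that quantum.

```python
import time

def measure_sleep(requested_s=0.002, samples=50):
    """Measure how long a short sleep actually takes, on average.
    If the OS timer quantum is coarse (~15.625 ms), a 2 ms sleep
    will be stretched toward that quantum."""
    intervals = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(requested_s)
        intervals.append(time.perf_counter() - t0)
    return sum(intervals) / len(intervals)

avg = measure_sleep()
print(f"average sleep: {avg * 1000:.2f} ms")
```

Running this in the foreground vs. the background of the window manager would reproduce the 2 ms / 15 ms jump described above, if the timer resolution really is the cause.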

 

I have windows 11 and LabVIEW 2017.

 

(Screenshots attached: PasiSalminen_0-1734773015874.png, PasiSalminen_1-1734773342527.png)

Message 9 of 12

The root cause is:

 

Global timer resolution requests (Windows 11)

The behaviour of the timer resolution on Windows changed with the release of Windows 10 v2004: timer resolution requests became per-process instead of system-wide. As a result, a process that does not request a specific timer resolution itself falls back to the default resolution of 15.625 ms (64 Hz).
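A process can still opt back in to a finer quantum by requesting it explicitly through the winmm `timeBeginPeriod`/`timeEndPeriod` API. A hedged Python sketch via `ctypes` (Windows-only; the platform guard makes it a no-op elsewhere, and since Windows 10 v2004 the request affects only the calling process):

```python
import ctypes
import sys

def request_timer_resolution(ms=1):
    """Ask Windows for a finer global timer quantum (per-process since
    Windows 10 v2004). Returns True if the request succeeded, False on
    non-Windows platforms or on error."""
    if sys.platform != "win32":
        return False  # no-op outside Windows
    # 0 is TIMERR_NOERROR
    return ctypes.windll.winmm.timeBeginPeriod(ms) == 0

def release_timer_resolution(ms=1):
    """Every successful timeBeginPeriod call must be paired with a
    timeEndPeriod call using the same value."""
    if sys.platform != "win32":
        return False
    return ctypes.windll.winmm.timeEndPeriod(ms) == 0
```

Note that holding a 1 ms resolution raises power consumption system-wide, which is exactly the heat/power trade-off discussed earlier in this thread, so it is a workaround rather than a fix.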

 

I tried several ways to fix the root cause, but the best solution is simply not to use Wait (ms).vi. I changed my application so that, e.g., loops block on a queue until data is available, and so on. Problem solved.
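In a text language the same idea is a blocking dequeue: the consumer sleeps inside the queue primitive until data arrives, instead of polling on a fixed timed wait, so the OS timer quantum no longer sets the loop rate. A minimal Python sketch (the names are illustrative, not from the original application):

```python
import queue
import threading

def producer(q, n):
    # Stand-in for the acquisition loop: push work as it appears.
    for i in range(n):
        q.put(i)
    q.put(None)  # sentinel: no more data

def consumer(q, results):
    # Blocks inside q.get() until an item is available -- no timed
    # polling, so loop timing is driven by data, not by Wait (ms).
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

q = queue.Queue()
results = []
t = threading.Thread(target=consumer, args=(q, results))
t.start()
producer(q, 5)
t.join()
print(results)
```

In LabVIEW terms this corresponds to dequeuing with an infinite (or long) timeout rather than spinning a loop with Wait (ms).vi.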

 

Message 10 of 12