
Difference between XY graphs and charts in terms of memory (for acquisition over several days)

Hello,

I am running into a problem with my graphs; here is the situation:

I need to be able to display data over several days on 8 graphs (data on Y, current time on X). For this, I let the user change the interval between two points from 1 second to 10 seconds, and the maximum number of points up to 100,000.

Before the update I used XY graphs with shift registers, and I deleted points from the array according to the selected maximum number of points.
This made for a very cluttered block diagram.

I did an update and changed the XY graphs into charts. I now access these graphs through references, and therefore property nodes, and I manage the number of points via the "historic data" property node.
I thought this would be better, but unfortunately my loop slows down more and more over time, until it exceeds one second after 20,000 to 30,000 points.

My explanation may not be clear without an example (I will try to make a test VI), but my first question is simple:

For displaying graphs with many points, is it better to use an XY graph or a chart (in terms of memory access)?

I am surprised to have this problem, because I am sure I have already used charts with this many points in the past...

Thanks in advance for your ideas.

Message 1 of 9

@fulopiton wrote:

[full message quoted above]


 

Words are insufficient to give advice. Please show us a simplified version of your VI.

 

You only need an xy graph if the time between points is irregular.

Charts manage their own history and there should be no reason to use references and property nodes. You cannot change the history size at runtime, so please explain your use of the term "manage".

Message 2 of 9

Hi,

 

I thought I had posted in the French section... no matter.

Attached is a simple project with the code.

 

Usually I set "Time interval" to 1 s and "total nb of point" to 130,000.

For the test I set it to 0.05 s and the loop period to the minimum of 1 ms.

 

I added an indicator to check the maximum loop period, and we can quickly see that this time increases...

 

In real conditions, everything works fine until the loop time increases above 1 second; this happens after approximately 7-8 hours (20,000-30,000 points).

 

Message 3 of 9

Yes, you also posted in the French section (where you attached the project). Let's keep the discussion here because there is a bigger audience.

 

Describing your code in simple terms, 

 

  • you acquire two points per iteration, and the loop time (default 1 ms) can be changed by the user.
  • You take a running average over a certain time period (default 50ms) and whenever that time has been reached you add a point to the graph.
  • You want to shorten the history by reading the chart history, trimming it, and writing the shorter history back.

This is a lot of juggling and unnecessary operations.

 

  • You don't need to build and reset arrays in an action engine just to take the mean. All you need to keep track of is the sum and the number of entries: much less memory overhead (see the sketch after this list).
  • Yes, keep all data on the diagram in a 2D array (time, y1, y2) and process for an xy graph.
  • It is silly trying to display 30000 points on a graph that is only maybe 1000 pixels wide. Do some decimation to keep the UI thread lean.
  • Do you really want the user to be able to change the history size and timing during the run? It would be significantly simpler to keep them fixed once acquisition starts.
  • Wouldn't it be more reasonable to read multiple points and average the buffered values at the longer time interval?
  • You seem to record the time after the last point for the average. Wouldn't the average time be more desirable?
  • Using charts for unevenly spaced data has a huge internal overhead.
  • ...
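LabVIEW diagrams can't be pasted as text, so here is a rough Python sketch of the sum-and-count mean from the first bullet (in LabVIEW the two accumulators would simply live in shift registers); the function name and parameters are made up for illustration:

```python
# Running mean with constant memory: track only a sum and a count,
# never a growing array of raw samples.
def averaged_points(samples, dt_s, window_s):
    """Yield one averaged value per averaging window."""
    total, count, elapsed = 0.0, 0, 0.0
    for s in samples:
        total += s
        count += 1
        elapsed += dt_s
        if elapsed >= window_s:            # window complete:
            yield total / count            # emit one point for the graph
            total, count, elapsed = 0.0, 0, 0.0
```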
Message 4 of 9

Hello,

 

You did post on the French-language forum, but altenbach also replies there in English.

 

I had a quick look at your code example. Given the description of the problem and the slowdown, it may come from repeated manipulation of large arrays, especially if the size is changed during execution.

 

To help you, I would advise maintaining your own data buffer and choosing the data to display from that buffer, rather than using the graph's buffer directly. I recently built an application with XY graphs and thousands of points without any performance problems.

 

I can also advise you to limit how much data you display. My colleague Loïc developed a decimation library that reduces the number of points to display without losing information.

 

See the GitHub repository: Loysse/Decimation-Min-Max

 

A presentation on the subject was given at LUGE: LUGE - LUGE 2023.2
YouTube video: Boostez vos graphes avec une décimation intelligente !
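For readers who can't open the repository, here is a rough Python sketch of the general min-max decimation idea (not the actual API of Loysse/Decimation-Min-Max): each bucket of consecutive samples contributes only its minimum and maximum, so peaks survive while the plotted point count drops to roughly two per pixel column.

```python
import numpy as np

# Min-max decimation sketch: keep each bucket's extremes so no peak is
# lost, while plotting only ~2 points per on-screen pixel column.
def minmax_decimate(x, y, n_buckets):
    x, y = np.asarray(x), np.asarray(y)
    if y.size <= 2 * n_buckets:
        return x, y                                # already small enough
    keep = []
    for bucket in np.array_split(np.arange(y.size), n_buckets):
        keep.append(bucket[np.argmin(y[bucket])])  # index of bucket min
        keep.append(bucket[np.argmax(y[bucket])])  # index of bucket max
    keep = np.unique(keep)                         # sort, drop duplicates
    return x[keep], y[keep]
```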

 

Hoping this can help you.

Maxime R.  

  CLA - Certified LabVIEW Architect / Architecte LabVIEW Certifié
  CTA - Certified TestStand Architect / Architecte TestStand Certifié

Message 5 of 9

My 2 cents: the slowdown may be related to memory use.

One workaround would be to initialize fixed-size arrays for the plots before the loop, then at each iteration replace the oldest element with the new point.
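In text form (Python standing in for the block diagram, with made-up names), that pattern could look like the sketch below; the LabVIEW equivalent is Initialize Array before the loop plus Replace Array Subset inside it, so no reallocation ever happens:

```python
import numpy as np

# Pre-allocated circular buffer: allocate once, overwrite in place.
class FixedBuffer:
    def __init__(self, size):
        self.t = np.full(size, np.nan)   # NaN marks still-unused slots
        self.y = np.full(size, np.nan)
        self.i = 0                       # next slot to overwrite

    def add(self, t, y):
        self.t[self.i] = t
        self.y[self.i] = y
        self.i = (self.i + 1) % self.y.size   # wrap around when full
```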

Message 6 of 9

Hi,

 

I was not in my office last week. I will work on my project this week and take all your advice into consideration. Thank you very much.

I will keep you informed.

 

😉

Message 7 of 9

I haven't had a look at the diagram yet, but to answer quickly:

 

  • It is silly trying to display 30000 points on a graph that is only maybe 1000 pixels wide. Do some decimation to keep the UI thread lean. ---> Indeed, I quickly read some papers about "decimation" and I should take a closer look... but I would like to keep the "Export to Excel" option on the graph's right-click menu (so keep all the data). Is that possible with decimation?
  • Do you really want the user to be able to change the history size and timing during the run? It would be significantly simpler to keep them fixed once acquisition starts. ---> Yes, I do, because many people may use the GUI, the measurement can be very long (up to 1 week, with no possibility of stopping), and the user must be able to adjust the timing.
  • Wouldn't it be more reasonable to read multiple points and average the buffered values at the longer time interval? ---> Yes, that is what I did before my update. With an XY graph (and a lot of shift registers) it worked with 130,000 points in the graph without any memory issue. I made some improvements by using references, subVIs, and charts (and made the diagram more "understandable"), but now I have memory issues with 20,000-30,000 points.
  • You seem to record the time after the last point for the average. Wouldn't the average time be more desirable? ---> This time doesn't need to be very precise, but you're definitely right, I should change it.
  • Using charts for unevenly spaced data has a huge internal overhead. ---> I also think this seems to be the main issue; it is the big difference between before and after the update (XY graph to chart). I find the chart easier to use, but it does not seem to be a good idea for this application... Do you have any links or articles about this? I could not find precise information on the web about the memory usage of the different graph types in LabVIEW.
Message 8 of 9

Hello,

 

If you use decimation, you will lose the direct use of the "Export Data to Excel" option, because the graph only holds the plotted data: since points were removed to plot faster, the export would be incomplete.

 

But you can override this menu entry in an event structure, extract the correct data from your own data buffer, and save it as an Excel file.
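As a hypothetical sketch of that display/export split (Python rather than LabVIEW, with placeholder names and a CSV file standing in for Excel): the graph shows a decimated view, but the custom menu handler writes the untouched full buffer.

```python
import csv

# Custom export handler: reads the complete acquisition buffer, not the
# decimated data shown on the graph, so nothing is lost on export.
def export_full_buffer(path, times, values):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "value"])    # header row
        writer.writerows(zip(times, values))  # every acquired point
```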

 

If you take a look at the decimation algorithm I linked, you will also see that it displays only real points. If you compute mean values or other derived values to reduce the data to display, you are showing data that does not exist.

 

It seems that you are working on an application that will be used by many people, with large datasets. If you do not have a lot of expertise in that area, which is not something most of us do every day, it could be a good idea to ask a certified partner for help in defining a good architecture and good practices to reach your goal.

 

Best regards.

Maxime R.  

  CLA - Certified LabVIEW Architect / Architecte LabVIEW Certifié
  CTA - Certified TestStand Architect / Architecte TestStand Certifié

Message 9 of 9