
LV 2024 - Casting Bundle with waveform data to variant and then back to bundle causes the Y array of the waveform data to be removed

Solved!

Hey fancy folk,

 

Question:

Is there a reason why casting a waveform array inside a type def cluster to a variant, and then back to the cluster and waveform array, causes the waveform array to lose its Y array data?

 

Background:

I have some code running in a producer/consumer setup where multiple producers (various parts of my code) send data to the same consumer.  When I got to testing the code, I noticed that the waveform data being sent by my AI producer loop (PS V, PS I, MTR Tq) to my consumer loop loses its Y array data.  I then created a small-scale version of the process as best I could, and in that small-scale version there isn't an issue.  I can't release the code, but I did take screenshots of the relevant code along with the probes and the text inside the probes for specific items.  You can see these images below.
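Since I can't paste the block diagrams as text, here is only a rough Python analogue of the pattern I'm describing (the waveform fields, the "New Data" command, the "Adtl Data" element name, and the to/from-variant helpers are stand-ins, not the actual VIs).  The point is that packing the waveform array into a generic container, enqueuing it, and unpacking it on the other side should preserve the Y array:

import queue

# Rough analogue of a LabVIEW waveform: t0, dt, and the Y array.
def make_waveform(y, dt=0.001, t0=0.0):
    return {"t0": t0, "dt": dt, "Y": list(y)}

# "To Variant" analogue: wrap the typed data in a generic container.
def to_variant(data):
    return {"payload": data}

# "Variant To Data" analogue: unwrap back to the original type.
def from_variant(variant):
    return variant["payload"]

q = queue.Queue()

# Producer: the three AI waveforms (PS V, PS I, MTR Tq), flattened to a
# "variant" and bundled into the queue element's "Adtl Data" field.
ai_read = [make_waveform(range(100)) for _ in range(3)]
q.put({"cmd": "New Data", "Adtl Data": to_variant(ai_read)})

# Consumer: dequeue and cast the "variant" back to the waveform array.
element = q.get()
waveforms = from_variant(element["Adtl Data"])
assert all(len(wf["Y"]) == 100 for wf in waveforms)  # the Y arrays survive the round trip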

 

Producer Loop

Matt_AM_0-1725476255200.png

 

Consumer Loop

Matt_AM_1-1725476275703.png

 

Probe 29 (AI data read from DAQmx as a 1D waveform array, being stored as a variant)

Matt_AM_2-1725476316655.png

 

 

Probe 31 (the element I am enqueuing; the corresponding probe text shows the contents of "Adtl Data")

Matt_AM_3-1725476355773.png

 

Probe 35 (the element dequeued by the consumer; the corresponding probe text shows the contents of "Adtl Data")

Matt_AM_4-1725476404668.png

 

 

Probe 32 (data after being cast back to a variant)

Matt_AM_5-1725476417147.png

 

Proof-of-concept VI showing that the same cast works in a separate VI

Matt_AM_6-1725476469197.png

 

Attention new LV users: NI has transitioned from being able to purchase LV outright to a subscription-based model. Just a warning, because LV now has a yearly subscription associated with it.
Message 1 of 4
Solution accepted by topic author Matt_AM

Ignore me, I'm just bad at coding and the blank data is legit.  I didn't look at the timestamps on my probes, and going back I realized that the blank data was two seconds behind the probe data from my producer.

 

I just have to figure out why my code's not working like I expect, but at least that answers why my Y array was "being removed".

 

-Matt

Message 2 of 4

This does not solve your problem, but it looks like you are doing a DAQmx Read every 10 ms. That is too often. Make your reads every 100 ms. There is overhead with each read call, and going too short will lead to problems in the long term.
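The original code is LabVIEW, so the following is only a sketch of the same idea using the Python nidaqmx API (the "Dev1/ai0" channel name, 1 kHz rate, and loop count are assumptions, not from this thread): read 100 ms worth of samples per call instead of calling DAQmx Read every 10 ms.

import nidaqmx
from nidaqmx.constants import AcquisitionType

SAMPLE_RATE = 1000        # Hz; assumed rate, not from the thread
SAMPLES_PER_READ = 100    # 100 samples at 1 kHz = one read every ~100 ms

with nidaqmx.Task() as task:
    # "Dev1/ai0" is a placeholder channel name.
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS)

    for _ in range(50):
        # One driver call per 100 ms chunk: the per-call overhead is paid
        # 10x less often than with a 10 ms read loop.
        chunk = task.read(number_of_samples_per_channel=SAMPLES_PER_READ)
        # ... hand `chunk` off to the consumer here ...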

Message 3 of 4

@mcduff wrote:

This does not solve your problem, but it looks like you are doing a DAQmx Read every 10 ms. That is too often. Make your reads every 100 ms. There is overhead with each read call, and going too short will lead to problems in the long term.


I ended up going back to my original, less memory-efficient method of building my circular buffer by concatenating an array and then trimming it to the proper length.  The main reason: I had to get things up and running quickly, and "concatenating an empty array onto the circular buffer array" won't throw an error.  It was a "time vs. actually solving my true problem" trade-off, where getting it up and running was more important.
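For reference, a minimal sketch of that concatenate-then-trim approach (the buffer length and data are made up, and this is Python rather than the actual G code):

BUFFER_LEN = 10_000   # assumed circular-buffer length

def append_and_trim(buffer, new_data):
    # Concatenate the new samples onto the end, then keep only the newest
    # BUFFER_LEN samples. Concatenating an empty read is a harmless no-op,
    # which is why this version never errors out on empty data.
    buffer = buffer + list(new_data)
    return buffer[-BUFFER_LEN:]

buf = []
buf = append_and_trim(buf, [0.1] * 100)   # normal read appends 100 samples
buf = append_and_trim(buf, [])            # empty read leaves the buffer unchanged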

 

As far as how often I am reading: I am sending my "most recently read data" to a notifier, which is read by my test's state machine (set to a 10 ms wait between states), so every iteration through my loop should (theoretically) be reading "the most recent data" to make its decisions.  I understand that using a wait isn't going to "lock" my state machine to 10 ms, and that sending my data to a notifier won't happen at exactly 10 ms, but my state machine being an "iteration of data late" (e.g., the notifier updates at iteration [i] but the state machine sees it at iteration [i+1]) is acceptable for making decisions, versus having to wait 100 ms to make a new decision.   Again, I understand there are better solutions, but sometimes good enough is... well... good enough, as long as the test does what I want it to do.
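To illustrate the "an iteration late is fine" point, here is a loose text-language sketch of a latest-value notifier shared between a ~10 ms read loop and a ~10 ms state machine (the threading details and names are made up; the real code uses LabVIEW notifiers):

import threading
import time

class LatestValueNotifier:
    # Single-slot, latest-value-wins channel, loosely modeled on a LabVIEW
    # notifier: the reader always sees the most recent value that was sent.
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def send(self, value):
        with self._lock:
            self._value = value

    def get_latest(self):
        with self._lock:
            return self._value

notifier = LatestValueNotifier()

def ai_read_loop():
    for i in range(100):
        notifier.send(i)          # newest reading overwrites the previous one
        time.sleep(0.01)          # ~10 ms read loop

def state_machine_loop():
    for _ in range(100):
        latest = notifier.get_latest()   # may lag the producer by one iteration
        # ... make test decisions from `latest` here ...
        time.sleep(0.01)                 # ~10 ms wait between states

producer = threading.Thread(target=ai_read_loop)
consumer = threading.Thread(target=state_machine_loop)
producer.start(); consumer.start()
producer.join(); consumer.join()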

 

Side note: my AI read loop also sends the data to my log loop if it is told to log, so I am not worried about my log loop missing data.  I tend to design my producer-consumer loops (AI read producer, with display and log as consumers) as more of a broadcast that I have to wire up manually, versus something like the DQMH, which handles that "automatically" via user events.  The main reason for not using a more robust architecture is that if the test goes down while I am not there, other people will be able to trace and understand my code so they can troubleshoot what went wrong.  Yes, training them on a more robust architecture is important, but it's more important (to me) that others who work with me can read, understand, and modify my code to get the test fixed and back up and running ASAP.

 

Matt

Message 4 of 4