10-13-2023 05:02 PM
Yes, that's the thread.
These two threads I've posted are associated. However, reading this LabVIEW forum I've learned it's not good to introduce new topics in a thread, as it derails the discussion. But these two threads are in fact about the same idea.
I've shown I can get data into memory, large amounts of it -- that the memory really is there and contains the data I want. Now how do I get the data out and work with it without LabVIEW running out of memory in the process? One answer I know for sure and without any doubt: do not use indicators except to troubleshoot.
To perform math operations on large data sets I must use arrays, as I see it. Arrays eat memory. But is there a way to reclaim that memory for LabVIEW's reuse? I don't know.
10-13-2023 05:30 PM
"so if the driver is doing it right, it should not be DBL as you claim."
That blew my mind; it's a new paradigm. You're saying that even though the waveform clearly claims to be DBL, it isn't. It doesn't blow my mind if what you're really saying is that to produce that waveform, which is double, math was done earlier, and that if I dig deeper I will see this. That I get -- no new paradigm there. Correct: it's actually 16-bit integers that were converted to the doubles in the waveform... if that's indeed what you're saying.
moving on
So, I have loads of 16-bit samples in memory. Now how do I get them out and not have LabVIEW run out of memory in the process? That's what I am trying to learn with these threads.
Whatever I do, do not use indicators. This is one answer I know for sure is correct. But now what?
(I used indicators. That's how I know the returned 16-bit data is in memory; it really is there. I used MemoryMove and saw it. (Another detail: I'm not talking about LeCroy now; I'm talking about PicoScope.) To do this I created arrays to be used as the needed destination pointers. But whatever I do, I know I should not look at that data with indicators, because they eat memory. Those indicators can only be used for troubleshooting. OK, I've got it. Now what?)
You folks here are much more knowledgeable than I. Heck, I'm still trying to use the forum efficiently enough to post a link without a lengthy URL. I'm lucky I can plant screen-capture pictures, despite how much you all deplore them. But still, I know there are others out there just like me who get value from being able to look at a screen-captured BD (block diagram). Hence, the other meat-and-potatoes thread uses both: I uploaded the VI (which requires readers to have LabVIEW 2018 or later) and I dropped in screen captures too, so others can participate as well. I seek the professionals, who frankly often speak a foreign language, but I am more than willing to get my answers from a novice. I just want answers. And one thing is known, for sure and without any doubt whatsoever: do not use indicators. 🙂
10-13-2023 05:36 PM
Briefly
"Companies produce these IDNet "drivers" as examples, not full blown solutions. They aren't APIs that expose every feature of their hardware. They usually provide the bare bones features of a primitive LabVIEW instrument control program; initialize communication, basic measurement setup, basic data transfer and close. "
I already reinvented the wheel once; I'd prefer not to do it again. You're saying dig deeper into their VIs and get to the CLFNs behind it all -- go to their API itself.
I've done that, and I have loads of returned 16-bit data in memory. Now how do I get it back out and work with it mathematically (which requires creating arrays, plus the cursed memory-hogging indicators) without LabVIEW running out of memory? That's what these threads are about.
10-13-2023 05:47 PM
I have no solution, yet.
I have one clearly understandable step I should take: do not use indicators. (Only look at whatever for troubleshooting purposes, in a temporary manner.)
10-13-2023 06:31 PM
Here are my 2 cents.
In summary: given that you're very restricted in resources, you need to penny-pinch a lot, and you may even end up at a state where you need to implement everything in C so you get full control of the memory and are closer to the machine code.
There are some basic security concepts here: just because the data is in memory somewhere doesn't mean anyone can access it. It's like the bank having loads of money -- why can't I just go take it? Because it is not mine; the person who owes me must transfer it to me, and if it is in a different currency, it must be converted as well. The OS manages the memory space of each process that runs, and it ensures that one process's memory cannot be accessed by another, so the data is secure. The rules bend slightly in that a top-level process can access a memory space owned by a sub-process. Next, even though there is data in memory, it could be in a different representation than what the other application wants. Whenever data has to be exchanged between two apps written in different languages, marshalling happens, and that results in duplicated memory unless the source app is smart enough to free its copy after transferring it to the target application.
I would be glad if experts could correct any incorrect facts in my summary above.
There may not be a straight answer for how to optimize your application, but a series of iterations, optimizing your code to follow the best practices, can get you to a better place -- though it may not be the best place you hoped for.
10-13-2023 06:34 PM
I found this:
"You can minimize data copies by using a shift register and iterating over the data by using the index array function along with the iteration count terminal in the loop to select the data to manipulate, then use replace array element with the same iteration count. You aren't guaranteed that a copy won't happen."
I found it in this very old and very short thread.
I'm thinking about how I could do that. I think what's being said is that I should use MemoryMove, get the data into an array, and then send that array immediately to a shift register. (But don't look at it!) With four data sets I should use four shift registers. I should then feed the shift registers into loops and iterate, using element replacement to change the array if needed, then iterate back out into another array of doubles that gets sent to another shift register for further processing. In the end I will have eight shift registers going: four for the 16-bit data and four for the double data.
Hmm... we're getting somewhere, or so it seems to my novice eyes.
10-13-2023 06:44 PM
Santo_13: C and LabVIEW join via the Call Library Function Node (CLFN). That is in the realm of possibility, but LabVIEW needs that DLL in its memory space. It'd be best if I could remain entirely in LabVIEW, I'm thinking.
Shift registers seem hopeful. They take up a lot of real estate on the BD and must run through everything in the state machine, but they seem hopeful nonetheless. That, and CLFNs using LabVIEW's built-in functions... as I discover them and then figure out how to use them.
10-13-2023 08:54 PM
I just drilled a hole right through the center of my state machine.
"clean up wire" is your friend.
now I will adapt to shift registers, now that I've confirmed I still have my traces displayed in my driving vi. (I can't just dump every indicator!)
(it's all about confirm what's what (does it still work?) before you break it and try something new)
i so believe in K.I.S.S.
10-14-2023 12:35 AM
Ok, here's a puzzler.
What can cause an array full of data on one side of a case structure to become filled with zeroes on the other side?
The path straight through the case structure was tested, and all was fine. That's why anything got changed on that path in the first place: it was proven to be just fine first.
Channel 2 gets through, but channel 1 does not.
Channel 1 data looks great until the other side of the case structure. It's like an open circuit was introduced somehow, and there is infinite resistance on just the channel 1 path when exiting the case structure. The left side of the wall shows all the data; on the right side of the wall the data is now zeroes. Hence, I lost channel 1 on the waveform display way down the line simply because the data won't go through the wall now, when it did before.
I just can't get the data through the wall.
It is the most bizarre LabVIEW experience I have ever had, I'm pretty sure. This bug will definitely be a learning experience for me.
Ideas?
10-14-2023 12:50 AM - edited 10-14-2023 01:05 AM
Large data takes a bit of working knowledge. That link will help.
Oscilloscopes are used to display voltage vs. time to a USER. By definition, then, they have a resolution of ±0.5 minor division. At 10 major divisions times 5 minor divisions per major division, you get ±1% of full scale, easily fitting in an 8-bit ADC range. OK, LeCroy sometimes brags about 12-bit ADCs, but those 4 extra bits never improved the human eyeball! That's exactly why I will never pay the premium price LeCroy likes to overcharge for the useless bits.
If you need more than 8 bits, you SHOULD NOT BE USING AN OSCILLOSCOPE! Your eyeball just isn't good enough. You need a data acquisition device.
OK, you used an oscilloscope anyway; what now? Ignore the raw data and set the scope to take the measurement you actually NEED! What are the measurement requirements? Strangely, you have not mentioned what information you want, only how you think you should use LabVIEW to get it. There is assuredly a better way than copying all of the scope's ADC acquisition records to your computer and making multiple copies of that data in floating-point representation.