11-30-2018 10:23 PM
Thank you for the suggestions! I tried both Kay's and Paul's suggestions on my VI, but they don't seem to work: the total run time is still the sum of the run time and the loop time. I have attached the VI with Paul's suggestion. If I use my own idea to fix this, the elapsed time count does not match the run time value, which I find a little odd. Is this a good fix for the problem?
I also have another question. The elapsed time count seems to be a little off; see the attached Front Panel picture. Is there a way to improve the count?
To answer mcduff's question: the reason I am not using a finite set of data points is that I'd like the user to have the option to run the code continuously or to specify a run time.
12-02-2018 10:28 AM
Thanks for giving me "another chance to teach". Please forgive the somewhat-rambling nature of this Post -- I'm putting in the effort because I sense you are interested in becoming better (and maybe even proficient) in LabVIEW development, but you need some guidance.
I'm attaching a "partly-cleaned-up" version of your code, with lots of comments, in no particular order. I also know why your "timing" is off, and (more important) "how to fix it". That will come later. First:
Now, let's talk about Time, a key concept in LabVIEW (dealing, as it does, with timed events like DAQ, and with Time as a primitive Data Type, the Timestamp). Here is a Principle -- DAQ Hardware Is the Best Clock! If you really want to time something, and you have DAQ functions, use the DAQ, don't depend on the CPU clock! You appear to be doing all of your Sampling at once, taking 10,000 samples a second for 10 seconds from 4 channels -- I certainly hope your cDAQ device has a 400,000-sample buffer! Assuming that it does, the DAQ Read is guaranteed to take 10 seconds, at which point it will transfer all of the data to the Array of Waveforms coming out of the DAQ Read function. Until this happens, almost nothing else in the Loop can run (as the Principle of Data Flow says it can't run until there is "Data on the Wire"). Now, everything else in the loop needs to run before the loop can finish, which might take a little extra time (and explains why your Elapsed Time is > 10 seconds).
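If it helps to see the problem in text form, here is a rough Python sketch of that single-loop timing (purely illustrative -- the sleep calls stand in for the blocking, hardware-timed DAQmx Read and for the processing done in the same loop; the 0.7 s overhead figure is made up, not measured from your VI):

import time

def daq_read_10s():
    # Stand-in for a blocking DAQmx Read: the DAQ hardware clock guarantees this
    # takes 10 s to deliver its 400,000 samples (10 kS/s x 10 s x 4 channels).
    time.sleep(10)
    return [0.0] * 400_000                 # placeholder data

def process_and_display(data):
    # Stand-in for the FFT, graphing, and file writing done in the same loop.
    time.sleep(0.7)                        # made-up overhead per iteration

start = time.time()
for _ in range(3):                         # three consecutive "10-second" acquisitions
    data = daq_read_10s()                  # exactly 10 s, paced by the DAQ hardware
    process_and_display(data)              # extra time added after every read
print(time.time() - start)                 # roughly 32 s, not 30 s: Elapsed Time > Run Time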
How can you fix this? Simple, don't do all that processing before ending the While Loop! "But I have to process the data!", you say. Yes, but not in that loop. The Principle of Data Flow also means that LabVIEW can run two loops at the same time in something called the Producer-Consumer Design Pattern (there are LabVIEW Examples and Templates illustrating this). Here's how it works:
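The Producer loop does almost nothing except acquire the data and put it on a Queue; the Consumer loop takes each chunk off the Queue and does all of the processing and display, in parallel with the next acquisition. LabVIEW is graphical, so I can't paste a Block Diagram here, but a rough Python sketch of the same idea (illustrative only; nothing in it is taken from your VI) looks like this:

import queue
import threading
import time

data_q = queue.Queue()                     # plays the role of the LabVIEW Queue

def producer(n_reads):
    # Producer: acquire and enqueue, nothing else.
    for _ in range(n_reads):
        time.sleep(10)                     # stand-in for the 10 s hardware-timed DAQmx Read
        data_q.put([0.0] * 400_000)        # hand the data off immediately
    data_q.put(None)                       # sentinel: tells the Consumer to stop cleanly

def consumer():
    # Consumer: all FFTs, graphing, and file writes happen here,
    # overlapping with the Producer's next acquisition.
    while True:
        data = data_q.get()
        if data is None:                   # stop without killing the Queue or forcing an error
            break
        time.sleep(0.7)                    # stand-in for the processing

c = threading.Thread(target=consumer)
c.start()
producer(3)                                # total time ~30 s: the loop is paced only by the DAQ
c.join()                                   # let the Consumer finish draining the Queue

Notice that the Producer's loop time is now set only by the acquisition; the processing overlaps with the next read instead of adding to it.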
I'm attaching my revised version of your code. When you rewrite your DAQ loop and transform it into a Producer/Consumer design, try to copy the "style" of compactness, less white space, straighter (and shorter) wires, and a level Error Line ("The Error Line Runs Through It").
Post your New and Improved Code. Does it work (much) better?
Bob Schor
12-03-2018 09:53 PM
Thank you for the thorough elaboration, Bob. I will take some time to study the Producer/Consumer design and post the improved code.
Best,
Drake
12-12-2018 10:47 PM
Hi Bob,
I have modified my VI using your suggestion. A few questions have come up:
1) When the VI is running, the FFT Graph and the corresponding data value are working fine. However, when I press the stop button / when the VI stops, the graph is cleared and the data value is not shown properly; see the attached 123 picture.
2) I set the run time to be 10 sec, yet the elapsed time indicator only shows 9 sec. I believe the loop starts counting at 0, which is why it gives me 9. Is it necessary to have this fixed?
3) I got rid of the Get Time function because, even with the producer/consumer design pattern, the timestamp count is still not an integer; I replaced it with the loop count as the timestamp. Is this what you referred to as the "DAQ Hardware clock"?
4) In my previous version, by changing the "timeout" of the DAQmx Read to, say, 5 sec, the FFT graph would produce an FFT for that 5 sec. However, when I used the producer/consumer design pattern, I noticed that the 5 sec "timeout" of the DAQmx Read multiplies the total run time. For example, with "Run Time" set to 10 sec and "timeout" set to 5 sec, the total run time becomes 50 sec instead of the 10 sec specified (the elapsed time indicator still shows 9 sec). How can I fix this issue?
Thank you,
Drake
12-27-2018 08:42 PM
Hi All,
Does anyone have a suggestion on how to proceed with the 4 problems above? I'm still having difficulty figuring them out.
Thank you,
Drake
12-28-2018 10:29 PM
@Drakelau wrote:
Hi Bob,
I have modified my VI using your suggestion. A few questions have come up:
1) When the VI is running, the FFT Graph and the corresponding data value are working fine. However, when I press the stop button / when the VI stops, the graph is cleared and the data value is not shown properly; see the attached 123 picture. Remember the Principle of Data Flow. How do you stop your Consumer? You kill the Queue, so the Consumer "errors out", delivering an empty Array to the Index Array function and hence an empty chunk of data to the Graph and all that follows in the Consumer. Stopping a loop by causing a deliberate Error is (in my opinion) clumsy, but effective. If you are thinking "Data Flow" (the First Principle of LabVIEW, and the Second, and Third, as well), you'll say "... and if there is an error, then I don't want to do any of the subsequent processing as it will overwrite my efforts with zeros or other garbage". [How do you do "If ... then" in LabVIEW?]
2) I set the run time to be 10 sec, yet the elapsed time indicator only shows 9 sec. I believe the loop starts counting at 0, which is why it gives me 9. Is it necessary to have this fixed? Did someone say "Principle of Data Flow"? Look at the order in which you increment and display the time.
3) I got rid of the Get Time function because, even with the producer/consumer design pattern, the timestamp count is still not an integer; I replaced it with the loop count as the timestamp. Is this what you referred to as the "DAQ Hardware clock"? No. I haven't looked closely at your DAQ device, but if it is, say, an A/D converter, and you tell it "Take 1000 samples at 1 kHz", then you can figure out the Sample Time and know that if you design your Producer Loop properly, it will repeat exactly at the Sample Time (there is a small numeric sketch of this below). "Design your Producer Loop properly" means that you do almost nothing in the Producer except Sample and Export via a Queue (or Channel Wire).
4) In my previous version, by changing the "timeout" of the DAQmx Read to, say, 5 sec, the FFT graph would produce an FFT for that 5 sec. However, when I used the producer/consumer design pattern, I noticed that the 5 sec "timeout" of the DAQmx Read multiplies the total run time. For example, with "Run Time" set to 10 sec and "timeout" set to 5 sec, the total run time becomes 50 sec instead of the 10 sec specified (the elapsed time indicator still shows 9 sec). How can I fix this issue? Uh, I don't get it! The TimeOut represents an error, namely you failed to get the Samples in time. Usually this happens because you failed to start the Acquisition, i.e. your Acquisition has a Trigger and you didn't send the Trigger. Another way is to start the Acquisition Task, wait a minute, then do the DAQmx Read, too late ....
DAQ devices have accurate clocks, and they aren't interrupted by, say, doing a Virus Scan, or a Disk Defragmentation. Knowing the Sample Rate and Sample Size, you can precisely time the Loop, and if you have a total Sampling Time you want, you can compute how many Loops you need to run (it should be an integer). Use a Structure that lets you "count" the number of Loops that you "know" you have to do. K.I.S.S.
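To put some (made-up) numbers on points 2) and 3) and on the loop-count idea above -- the rates below are hypothetical, so substitute your own task's values:

sample_rate = 10_000          # S/s (hypothetical)
samples_per_read = 10_000     # samples returned by each DAQmx Read (hypothetical)
run_time = 10.0               # seconds the user asked for

read_period = samples_per_read / sample_rate      # 1.0 s: the DAQ hardware paces each iteration
n_loops = round(run_time / read_period)           # 10 iterations -- this should come out an integer

for i in range(n_loops):
    chunk_start = i * read_period                  # hardware-derived timestamp of this chunk
    elapsed = (i + 1) * read_period                # display i + 1, not i, and "9 sec" becomes "10 sec"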
Bob's Rule #1 (shamelessly "borrowed" from Peter Blume, who taught me LabVIEW Style, sadly overlooked by too many LabVIEW programmers) -- Every Block Diagram must fit on a single Laptop Screen. If you can't see all of your code and follow all the wires, debugging and refining code will be almost impossible. Think "Sub-VIs"!
12-29-2018 12:25 AM
@Bob_Schor wrote:
Bob's Rule #1 (shamelessly "borrowed" from Peter Blume, who taught me LabVIEW Style, sadly overlooked by too many LabVIEW programmers) -- Every Block Diagram must fit on a single Laptop Screen. If you can't see all of your code and follow all the wires, debugging and refining code will be almost impossible. Think "Sub-VIs"!
Believe it or not, I read his book cover-to-cover.
12-29-2018 11:35 AM
@billko wrote:
Believe it or not, I read his book cover-to-cover.
For those who are curious, "The LabVIEW Style Book", by Peter Blume. Billko has read it cover-to-cover, and I've done the same three or four times (I'm a "slower learner" than Bill ...).
01-03-2019 01:19 AM
Hi Bob,
Thank you again for your helpful explanation. I am able to solve most of the problems I asked about. However, I am still struggling with one problem. For my FFT Power Spectrum and PSD VI, I would like to have a user interface to control the number of samples used to compute the power spectrum graph. For example, the sampling rate of the DAQmx VI is 10,000 S/sec, and a user might want to see the power spectrum graph for a 5 sec acquisition period, which means the VI needs to perform a power spectrum plot of 50,000 samples. I have tried different ways but still couldn't get this to work. (I tried altering the "number of samples per channel" of the DAQmx Read VI, but that affects the acceleration graph and creates other issues, which I would prefer to avoid.) Do you have any suggestion for this problem?
Thank you,
Drake
01-03-2019 06:40 AM
@Drakelau wrote:
Hi Bob,
Thank you again for your helpful explanation. I am able to solve most of the problems I asked about. However, I am still struggling with one problem. For my FFT Power Spectrum and PSD VI, I would like to have a user interface to control the number of samples used to compute the power spectrum graph. For example, the sampling rate of the DAQmx VI is 10,000 S/sec, and a user might want to see the power spectrum graph for a 5 sec acquisition period, which means the VI needs to perform a power spectrum plot of 50,000 samples.
Use a lossy queue to act as a circular buffer for the data samples. Once you have enough samples, dequeue that data and perform the FFT. Limit the queue's size (and therefore your FFT size) when you initialize the queue, and use the Lossy Enqueue function to add samples to the queue.
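In text form, the idea looks roughly like this (a Python sketch, illustrative only -- in LabVIEW you would obtain the queue with a maximum size and use Lossy Enqueue; the 10 kS/s rate and 5 s window are taken from the example above):

from collections import deque

sample_rate = 10_000                       # S/s, from the example above
fft_seconds = 5                            # window the user asked to see
fft_size = sample_rate * fft_seconds       # 50,000 samples

buffer = deque(maxlen=fft_size)            # fixed size: the oldest samples fall off the front

def compute_power_spectrum(samples):
    pass                                   # stand-in for the FFT Power Spectrum and PSD VI

def on_new_chunk(chunk):
    # Called each time the Producer delivers a chunk of samples.
    buffer.extend(chunk)                   # "lossy enqueue": old data is discarded automatically
    if len(buffer) == fft_size:
        compute_power_spectrum(list(buffer))   # always the most recent 5 s of data

for _ in range(12):                        # e.g. twelve 1-second chunks arriving from the DAQ
    on_new_chunk([0.0] * sample_rate)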