04-03-2023 08:26 AM
Hi, guys!
I'm using a USB-6251 board and I want to acquire retriggered AI data and simultaneously generate a delayed single pulse to trigger other devices. I've attached a VI (compiled from two examples found on the NI community forums) in which the top part does the retriggered AI task and the bottom part produces the counter-output single pulse I need. The question is: what should be added to make these tasks work together? I guess I could use the two counters available on the 6251, and that this should be the correct task sequence, but I just could not understand how to wire these tasks together (if it is possible at all).
Solved! Go to Solution.
04-03-2023 02:47 PM
Your code references both Dev1 and Dev2 -- do you have 2 different devices or was that an oversight?
I ask because a single USB-6251 *cannot* do all the things you want. It doesn't support retriggered AI directly, which is why you need the workaround of retriggering a counter pulse train for the AI task to use as a sample clock. But in order to retrigger, the counter task needs to be *finite* rather than continuous. And on M-series (62xx) devices like yours, there are only 2 counters and *both* get used up when generating a finite pulse train. You wouldn't have any available to generate a different delayed pulse in response to the trigger signal.
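In case it helps to see that workaround spelled out, here's a rough sketch of the idea in text form using the Python nidaqmx API (the same DAQmx calls exist as VIs in LabVIEW). Device, channel, and terminal names like Dev1 and PFI0 are only placeholders:

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

SAMPLES_PER_TRIGGER = 200        # samples to grab per trigger (placeholder)
RATE = 250_000.0

# Retriggerable *finite* pulse train on ctr0, restarted by each trigger edge on PFI0.
# On a 62xx board this finite train internally consumes the second counter as well.
clk = nidaqmx.Task()
clk.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=RATE, duty_cycle=0.5)
clk.timing.cfg_implicit_timing(sample_mode=AcquisitionType.FINITE,
                               samps_per_chan=SAMPLES_PER_TRIGGER)
clk.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0", trigger_edge=Edge.RISING)
clk.triggers.start_trigger.retriggerable = True

# AI task clocked by the counter output, so it effectively re-arms with every trigger.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
ai.timing.cfg_samp_clk_timing(RATE, source="/Dev1/Ctr0InternalOutput",
                              sample_mode=AcquisitionType.CONTINUOUS)

ai.start()     # start AI first so it is already waiting on the clock
clk.start()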
You might be able to set up an AO task to generate an equivalent delayed pulse, but it wouldn't be retriggerable in hardware. You'd need to stop and restart the task using software calls to rearm it for a future trigger. And if you were sure you could do *that*, you wouldn't have needed to muck about with the indirect retriggering scheme for AI.
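If you did want to experiment with that AO idea anyway, a hedged sketch might look like the following (again Python/nidaqmx with placeholder names and rates). Note that the software restart loop at the end is exactly the part that won't keep up with fast triggers:

import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

AO_RATE = 200_000.0              # AO update rate (placeholder)
DELAY_S, WIDTH_S = 100e-6, 5e-6  # desired delay and pulse width (placeholders)

delay_n = int(DELAY_S * AO_RATE)
width_n = max(1, int(WIDTH_S * AO_RATE))
wave = np.zeros(delay_n + width_n + 1)
wave[delay_n:delay_n + width_n] = 5.0     # stay low for the delay, 5 V pulse, back low

ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
ao.timing.cfg_samp_clk_timing(AO_RATE, sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=len(wave))
ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0", trigger_edge=Edge.RISING)
ao.write(wave)

while True:                      # software re-arm: too slow for ~1 kHz triggers
    ao.start()                   # arm and wait for the next hardware trigger
    ao.wait_until_done(timeout=1.0)
    ao.stop()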
-Kevin P
04-04-2023 01:59 AM
Actually, that was not code I was trying to run (it will not work as posted), which is why different devices are listed. I have only one 6251, and before posting the question here, all the info I found on the internet told me it is impossible to solve this problem with my device. "there are only 2 counters and *both* get used up when generating a finite pulse train" - that ruined all the plans. This forum was my last hope that I had missed something.
Software triggers could be a solution, but I suppose it's not a good idea if the AI task needs to be triggered at 1 kHz or even faster.
Anyway, thanks a lot!
04-04-2023 10:15 AM
What are you trying to do functionally? For each trigger, how long until both AI acquisition and delayed pulse finish? And what is the time interval between expected triggers?
Though your planned method can't work on a single 6251, it's possible there's a different approach that might.
-Kevin P
04-04-2023 11:49 AM
Ok, I have a square pulse train (5 V, 5 us, typically at 1 kHz) from a pulse generator, and these pulses are used as triggers to start acquisition (I use 4 AI channels of my 6251). Sure, the sample rate and number of samples to acquire on all the AI channels are consistent with the period between trigger pulses (the sample rate is HW-limited to 1 MS/s divided by the number of channels, so 200 samples per AI channel at a rate of 250 kHz are acquired fine). All I want to add is an additional pulse (5 V, 5 us) which is synced to the above-mentioned trigger pulse and can be generated with a user-defined variable delay (from 0 to 1 ms if the 1 kHz trigger pulse train is used).
PS. I do understand that this additional pulse could be generated with another pulse generator or different HW, but my 6251 looked quite promising for handling this task (also because of the easily arranged LabVIEW-based UI).
04-05-2023 08:55 AM
Here's the best workaround I managed to conjure up. It'll depend on doing some post-processing:
1. Configure your AI task to be continuous sampling at your desired sample rate (250 kHz?).
2. Make a counter pulse output task that lengthens the incoming 5 usec trigger pulse. Make a minimal low time and initial delay (such as 50 or 100 nanosec) and a high time more like, say, 50 usec. This only requires 1 of 2 counters.
3. Also make a DI task for continuous sampling, configured to use "/Dev#/ai/SampleClock" as its sample clock source. Set it up to capture the pulse you generate in step #2. Be sure to start this task *before* starting the AI task.
4. Make the retriggerable delayed pulse task you want on the 2nd of your 2 counters (see the sketch right after this list).
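To make steps 1-4 concrete, here's a rough sketch of the four tasks using the Python nidaqmx API; the calls map one-to-one onto the DAQmx VIs you'd use in LabVIEW. Device, counter, PFI, and DI line names are assumptions -- match them to your own wiring, and physically route the ctr0 output terminal to the DI line you sample:

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

RATE = 250_000.0

# 1. Continuous AI on the 4 channels.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)

# 2. Counter 0: retriggerable single pulse that stretches the 5 us trigger to ~50 us
#    so the 250 kHz sample clock can't step over it. (A CO task with no timing call
#    produces one pulse per start/trigger.)
stretch = nidaqmx.Task()
stretch.co_channels.add_co_pulse_chan_time("Dev1/ctr0", low_time=100e-9,
                                           high_time=50e-6, initial_delay=100e-9)
stretch.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                       trigger_edge=Edge.RISING)
stretch.triggers.start_trigger.retriggerable = True

# 3. Continuous DI slaved to the AI sample clock; wire the ctr0 output terminal
#    to this line so the DI record contains the stretched trigger pulses.
di = nidaqmx.Task()
di.di_channels.add_di_chan("Dev1/port0/line0")
di.timing.cfg_samp_clk_timing(RATE, source="/Dev1/ai/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)

# 4. Counter 1: the retriggerable delayed output pulse you actually want.
#    initial_delay stands in for the user-defined delay; check how your DAQmx
#    version applies it on retriggered pulses.
delayed = nidaqmx.Task()
delayed.co_channels.add_co_pulse_chan_time("Dev1/ctr1", low_time=100e-9,
                                           high_time=5e-6, initial_delay=500e-6)
delayed.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                       trigger_edge=Edge.RISING)
delayed.triggers.start_trigger.retriggerable = True

# Start order: counters and DI first, AI last, so DI is waiting on /Dev1/ai/SampleClock.
for t in (stretch, delayed, di, ai):
    t.start()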
With AI and DI sampling in sync, you can post-process to find the sample #'s where the trigger pulses occur. Starting from that point, you can retain N samples from AI, then search the DI data for the next low-to-high transition representing the next trigger pulse.
You should be able to keep up with 250 kHz sampling and near-real-time processing with a producer/consumer architecture. Your producer loop will read the same # samples from AI and DI (roughly 1/10 sec worth is usually a good starting point), bundle and enqueue the data, and pretty much nothing else.
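The producer loop could be as minimal as this sketch (it assumes the `ai` and `di` tasks from the setup sketch above, plus a simple thread-safe queue):

import queue

CHUNK = 25_000                   # ~1/10 s at 250 kHz, roughly 100 triggers' worth
data_q = queue.Queue()

def producer(stop_event):
    # `ai` and `di` are the running tasks from the setup sketch (assumption).
    while not stop_event.is_set():
        ai_chunk = ai.read(number_of_samples_per_channel=CHUNK)   # 4 lists of CHUNK floats
        di_chunk = di.read(number_of_samples_per_channel=CHUNK)   # CHUNK booleans
        data_q.put((ai_chunk, di_chunk))                          # do nothing else here

# e.g. threading.Thread(target=producer, args=(threading.Event(),), daemon=True).start()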
The consumer will require more careful code work because the chunks being delivered won't exactly correspond to what you care about. If you follow my advice and retrieve 25k samples per read loop iteration, that'll be ~100 triggers worth, so you'll need to break things down from there.
You'll also need to manage the possible "overlap" situation where the 25k chunk ends with only *part* of the final trigger's worth of data. You'll need to retain that for use on the next consumer loop iteration when the rest of that data will arrive (along with an additional ~100 trigger's worth).
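Here's one hedged way the consumer's segmentation and overlap handling could look (numpy, using the `data_q` from the producer sketch; `process()` is only a placeholder for whatever you do per trigger):

import numpy as np

SAMPLES_PER_TRIGGER = 200                    # 200 samples/channel per trigger at 250 kHz

def process(segment):
    pass                                     # placeholder: your per-trigger analysis

def consumer():
    carry_ai = np.empty((4, 0))              # AI samples held over from the last chunk
    carry_di = np.empty(0, dtype=bool)       # matching DI samples
    while True:
        ai_chunk, di_chunk = data_q.get()    # data_q from the producer sketch (assumption)
        ai_data = np.hstack((carry_ai, np.asarray(ai_chunk, dtype=float)))
        di_data = np.concatenate((carry_di, np.asarray(di_chunk, dtype=bool)))

        # Low-to-high transitions in the stretched-trigger record mark trigger sample #s.
        edges = np.flatnonzero(~di_data[:-1] & di_data[1:]) + 1

        consumed = 0
        for idx in edges:
            if idx + SAMPLES_PER_TRIGGER > ai_data.shape[1]:
                break            # only part of this trigger's data has arrived; defer it
            process(ai_data[:, idx:idx + SAMPLES_PER_TRIGGER])
            consumed = idx + SAMPLES_PER_TRIGGER

        # Keep the tail (including any partial trigger) for the next iteration.
        # Assumes triggers are spaced at least SAMPLES_PER_TRIGGER samples apart.
        carry_ai = ai_data[:, consumed:]
        carry_di = di_data[consumed:]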
All in all, a little extra work to be done, but should be feasible.
-Kevin P
04-11-2023 11:12 AM
There's a comment on a YouTube video that sticks in my mind: "so many people overestimate their skills" (the video is a compilation of people's fails).
"You'll also need to manage the possible "overlap" situation where the 25k chunk ends with only *part* of the final trigger's worth of data." - looks a litle bit confusing, but finally I'll try. Thanks a lot, Kevin.
04-11-2023 11:22 AM
Let me make a suggestion for trying out the part you find confusing -- put your focus on the processing to be done by the consumer loop. Create a "simulated" producer loop that feeds it "cooked" data that you've pre-defined. By doing this, you can know what results you should *expect* from the consumer's processing and troubleshoot the tricky part of managing the "overlap" I mentioned.
Do tweaks, kick the tires a bit, make sure it's robust against a variety of things you throw at it. And *then*, knowing that your processing is good, redo your producer loop to perform actual data acquisition.
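For example, a "cooked" data generator along these lines would let you feed the consumer chunks where you already know exactly where every trigger lands (the spacing is deliberately chosen so some triggers straddle chunk boundaries; it reuses the `data_q` from the producer sketch):

import numpy as np

CHUNK = 25_000
SAMPLES_PER_TRIGGER = 200
TRIGGER_SPACING = 251            # deliberately not a divisor of CHUNK, so some
                                 # triggers straddle a chunk boundary

def make_cooked_chunks(n_chunks=4, first_trigger=37):
    total = n_chunks * CHUNK
    di = np.zeros(total, dtype=bool)
    ai = np.zeros((4, total))
    for start in range(first_trigger, total - SAMPLES_PER_TRIGGER, TRIGGER_SPACING):
        di[start:start + 12] = True                              # ~50 us stretched pulse
        # Each correctly extracted segment should come back as a clean 0..199 ramp.
        ai[:, start:start + SAMPLES_PER_TRIGGER] = np.arange(SAMPLES_PER_TRIGGER)
    return [(ai[:, i:i + CHUNK], di[i:i + CHUNK]) for i in range(0, total, CHUNK)]

for fake_chunk in make_cooked_chunks():
    data_q.put(fake_chunk)       # feed the consumer exactly like the real producer would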
-Kevin P