High-Speed Digitizers

Long records with PCI 5122

I have a PCI-5122 card with 8 MB (per channel) of memory.  I'm programming it with the Linux NI-SCOPE drivers, using C as my programming language.  The machine is an AMD Athlon X2 dual core processor, with 2 GB of memory, running SUSE 10.0.

I'm interested in acquiring long runs (full memory, so 4 megasamples) of data in a free-running mode: start an acquisition, let it fill the card's memory, fetch it, write it to disk, and do the next acquisition.  So far, things are almost working, but not quite: if I ask the card for a small number of records (less than about 32,725), I get nice-looking waveforms.  However, there is a break point -- somewhere below 32,768, though I haven't narrowed it down exactly -- where the card begins to return the value 0.000711496 V (exactly) for every sample.  No warnings, no errors, nothing -- just clearly incorrect values.  This isn't just my code, either: if I modify the SaveToFile.c example to read more records, I get exactly the same problem.
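For reference, the loop I have in mind looks roughly like the sketch below (untested here; the device name "Dev1", the channel, and the vertical settings are just placeholders, not my actual values):

#include <stdio.h>
#include <stdlib.h>
#include "niScope.h"

int main(void)
{
    ViSession vi = VI_NULL;
    ViStatus  status = VI_SUCCESS;
    const ViInt32 numPts = 4000000;                 /* one long record, 4 MS */
    ViReal64 *wfm = malloc(sizeof(ViReal64) * numPts);
    struct niScope_wfmInfo info;
    int run;

    status = niScope_init("Dev1", VI_TRUE, VI_FALSE, &vi);   /* "Dev1" is a placeholder */
    if (status < VI_SUCCESS) { fprintf(stderr, "init failed: %ld\n", (long)status); return 1; }

    /* channel 0, 10 V range, DC coupling -- placeholders */
    niScope_ConfigureVertical(vi, "0", 10.0, 0.0, NISCOPE_VAL_DC, 1.0, VI_TRUE);
    /* 100 MS/s, a single record of numPts samples */
    niScope_ConfigureHorizontalTiming(vi, 100e6, numPts, 0.0, 1, VI_TRUE);

    for (run = 0; run < 10; run++)                  /* free-running: acquire, fetch, save, repeat */
    {
        status = niScope_InitiateAcquisition(vi);
        if (status < VI_SUCCESS) break;
        status = niScope_Fetch(vi, "0", 30.0, numPts, wfm, &info);
        if (status < VI_SUCCESS) break;
        /* write wfm[0..info.actualSamples-1] to disk here */
    }

    niScope_close(vi);
    free(wfm);
    return (status < VI_SUCCESS) ? 1 : 0;
}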

The problem seems to be independent of sample rate: I get the same problems at 10 Msps as at 100 Msps.  I'm interested in the latter rate, by the way.

I've been working around this by setting up a multi-record acquisition, but doing that leaves a gap of about 2 microseconds between records, which is really annoying and which I would like to eliminate.

Any help would be appreciated.

Thanks.

Message 1 of 5
My first thought is that you are not handling errors properly and that you are in fact running out of onboard memory. Looking at the 5122 Hardware Specifications, the maximum number of records in onboard memory for the 8 MB/channel option is 21,845. However, you are setting up your acquisition for over 32,000. I am wondering if, rather than throwing an error, the fetch just returns values of zero when it runs out of memory. How exactly are you determining what values you are seeing? Are you examining the data directly from the fetch or looking at it on a graph?

I also wanted to double-check that you do in fact need multiple records, and why you aren't using a single record. When you set up a multi-record acquisition, the digitizer acquires the number of samples you defined as samples per record when the trigger occurs. It then waits for another trigger and repeats the process.

You can see from the formula below that there is a fixed cost for each record. You get better efficiency with fewer (or just one) records containing more samples each.

Memory per Record = (Record Length × 2 bytes/S) + 200 bytes, rounded up to next multiple of 128 bytes or 384 bytes, whichever is greater

NI PCI-5122 Specifications
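As a quick sanity check of that formula (assuming the 8 MB option means 8 × 1024 × 1024 bytes per channel, at 2 bytes per sample), the snippet below reproduces the 21,845-record limit and shows that a single 4 MS record fits comfortably:

#include <stdio.h>

static unsigned long memoryPerRecord(unsigned long recordLength)
{
    unsigned long bytes   = recordLength * 2UL + 200UL;        /* 2 bytes/S + 200 bytes overhead */
    unsigned long rounded = ((bytes + 127UL) / 128UL) * 128UL; /* round up to multiple of 128 */
    return rounded > 384UL ? rounded : 384UL;                  /* never less than 384 bytes */
}

int main(void)
{
    unsigned long onboard = 8UL * 1024UL * 1024UL;             /* 8 MB/channel option */
    /* smallest possible records cost 384 bytes each: 8 MB / 384 B = 21,845 records */
    printf("max records : %lu\n", onboard / memoryPerRecord(0));
    /* one 4 MS record: (4,000,000 x 2) + 200 -> 8,000,256 bytes, which fits in 8 MB */
    printf("4 MS record : %lu bytes\n", memoryPerRecord(4000000UL));
    return 0;
}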

Regards,

Chris Delvizis
National Instruments
Message 2 of 5
Hi, Chris - thanks for the prompt response.

I realize now that I misspoke in my original post: I'm trying to use a single record with a high point count (i.e., in the call to niScope_ConfigureHorizontalTiming, I set minNumPts to my desired number of points (ideally 4 million, to fill the card's memory) and numRecords to 1).  For the current workaround described above, I twiddle both minNumPts and numRecords so that minNumPts < 32000 and minNumPts * numRecords is roughly the number of points I want, but, as I said, that leaves a 2 microsecond gap between records, which is not really acceptable.
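For what it's worth, the configuration call I'm making looks essentially like this fragment (it assumes an open session vi and a ViStatus status declared earlier; the reference position is arbitrary):

ViReal64 minSampleRate = 100e6;     /* 100 MS/s */
ViInt32  minNumPts     = 4000000;   /* one long record, ~4 MS, to fill onboard memory */
ViReal64 refPosition   = 0.0;       /* reference point at the start of the record */
ViInt32  numRecords    = 1;         /* a single record -- no inter-record gap */

status = niScope_ConfigureHorizontalTiming(vi, minSampleRate, minNumPts,
                                           refPosition, numRecords, VI_TRUE);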


Message 3 of 5
Hello Luke,

I think the easiest way to accomplish this task would be to acquire one record with 4 million samples and then fetch the samples in chunks, in order to ensure that you do not overflow the device memory. This requires manipulating a read pointer into the device's onboard memory in order to select the chunks, but it should be pretty easy. Basically, we select the size of a chunk to read (say 10 kS) and the total number of samples in the record (say 4 MS). Then we read chunks of the specified size and move the pointer accordingly. If the pointer to the beginning of our read starts at 0 and we read our first chunk, we then increment the pointer by the chunk size (10 kS) so that the next read begins at 10 k. Then we increment the pointer again to 20 k, 30 k, 40 k, and so on. This lets us fetch chunks of a record without having to fetch the entire record at once. It also eliminates any gap in the data between records, since we are manually incrementing our read pointer to the correct location within a single record.

For an example of this application, I would suggest you check out the shipping example called FetchInChunks. As described in the NI-SCOPE Readme version 3.4:
C examples are located in /usr/local/natinst/niscope/examples.
I have to admit that I am running Windows XP, so the example name I referenced above is from my OS. If you cannot find an example called FetchInChunks (or something similar) let me know and I can post the source code from the C example in Windows.
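In case you can't find it, here is a rough sketch of the idea (the attribute and value names below are from my Windows copy of niScope.h, so please verify them against the Linux header; it assumes an open session vi with the single 4 MS record already acquired, and the chunk size is just an example):

#define TOTAL_SAMPLES 4000000
#define CHUNK_SIZE    10000

ViReal64 chunk[CHUNK_SIZE];
struct niScope_wfmInfo info;
ViStatus status = VI_SUCCESS;
ViInt32  offset;

/* Interpret fetch positions relative to the start of the record... */
niScope_SetAttributeViInt32(vi, "", NISCOPE_ATTR_FETCH_RELATIVE_TO,
                            NISCOPE_VAL_START);

for (offset = 0; offset < TOTAL_SAMPLES; offset += CHUNK_SIZE)
{
    /* ...and move the read pointer forward by one chunk each pass. */
    niScope_SetAttributeViInt32(vi, "", NISCOPE_ATTR_FETCH_OFFSET, offset);

    status = niScope_Fetch(vi, "0", 30.0, CHUNK_SIZE, chunk, &info);
    if (status < VI_SUCCESS)
        break;

    /* process or write chunk[0..info.actualSamples-1] here */
}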



Matt Anderson

Hardware Services Marketing Manager
National Instruments
Message 4 of 5
Luke,

Just so we're clear on the terminology:

In your first post, when you talk about ~32k records, you're saying that was actually the number of points (samples) you were trying to configure for one record, right?  You should certainly be able to configure a single record to have many more points than this.

Fetching in chunks will not necessarily solve your problem, since the driver should have reported an error if you tried to fetch data that had been overwritten.

We need to be sure that you are not missing errors reported by the driver.  Does your code trap the return value of all the niScope_ function calls?  If so, does it exit the normal program flow after an error occurs?
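For reference, one common pattern (similar in spirit to the error-checking macro in the shipping examples; I'm recalling niScope_GetErrorMessage from memory, so check the exact name against your header) looks like this:

#include <stdio.h>
#include "niScope.h"

/* Wrap every niScope_ call so a non-success status stops normal program flow
 * and is reported, rather than silently producing garbage data. */
#define CHECK(call)                                                          \
    do {                                                                     \
        ViStatus _st = (call);                                               \
        if (_st < VI_SUCCESS) {                                              \
            ViChar _msg[1024];                                               \
            niScope_GetErrorMessage(vi, _st, sizeof(_msg), _msg);            \
            fprintf(stderr, "%s failed (%ld): %s\n", #call, (long)_st, _msg);\
            goto Error;                                                      \
        }                                                                    \
    } while (0)

/* usage:
 *   CHECK(niScope_ConfigureHorizontalTiming(vi, 100e6, 4000000, 0.0, 1, VI_TRUE));
 *   CHECK(niScope_InitiateAcquisition(vi));
 *   ...
 * Error:
 *   niScope_close(vi);
 */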

Patrick
Message 5 of 5