11-22-2018 12:00 AM
Hi there,
I am trying to acquire data from an FPGA (PXIe-7975R with NI-5783 at 100 MHz) at a rate of 100k samples/ms (I16). While acquiring, I get the samples as per my theoretical calculations, so there is no problem there. But when I try to write the data directly into a binary file (the file size should be 12*10^9 bytes), my host code runs slowly. To overcome that I used queues, but when I acquire the data through a queue and write it into the binary file I get the error "not enough memory to complete this operation", meaning the buffer memory allocated by my code is running out. Please tell me if anybody knows how to handle this much data without running out of memory.
Thank you
11-22-2018 01:59 AM
Hi kiranteja,
@kiranteja93 wrote: "my host code is running slowly."
To write 12 GB per minute, your filesystem must sustain a 200 MB/s write speed.
Does your hard disk allow such rates?
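To spell out where that number comes from (a quick back-of-the-envelope check, written in Python purely for illustration):

    samples_per_ms = 100_000          # 100k samples per millisecond
    bytes_per_sample = 2              # I16 = 2 bytes
    rate = samples_per_ms * bytes_per_sample * 1000   # bytes per second
    print(rate / 1e6)                 # -> 200.0 MB/s sustained
    print(12e9 / rate)                # -> 60.0 s to fill the 12*10^9 byte file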
(When you still suspect YOUR code is the problem then you should ATTACH YOUR code!)
11-22-2018 02:39 AM
To add to Gerd's comment: I assume you have a loop that samples and a loop that writes to disk (through a queue)? How much data do you grab and write at once? You say 100k/ms; I hope that's not how you're writing to disk (100k samples per ms as individual writes)? Assuming your disk can keep up with 200 MB/s (an SSD should), it's probably more efficient to sample and write on the 10-100 ms scale.
Try grabbing 1E6/10ms or 10E6/100ms and see how that works.
With a 1 ms write loop I'm not surprised you get a backlog and a growing queue.
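The pattern you want is the classic producer/consumer with a bounded queue between the loops. A rough text sketch of the idea (Python stands in for the LabVIEW block diagram here; read_fifo and the chunk counts are made up for illustration):

    import queue, threading

    SAMPLE_BYTES = 2                  # I16
    CHUNK = 1_000_000                 # 1E6 samples = 10 ms of data at 100k samples/ms
    q = queue.Queue(maxsize=64)       # bounded: producer blocks instead of exhausting RAM

    def producer(read_fifo, n_chunks):
        # read_fifo is a hypothetical stand-in for the FPGA DMA FIFO read
        for _ in range(n_chunks):
            q.put(read_fifo(CHUNK))   # blocks if the consumer falls behind
        q.put(None)                   # sentinel: acquisition finished

    def consumer(path):
        with open(path, "wb") as f:
            while (chunk := q.get()) is not None:
                f.write(chunk)        # one large contiguous write per 10 ms of data

    fake_fifo = lambda n: bytes(n * SAMPLE_BYTES)    # dummy data source for this sketch
    t = threading.Thread(target=consumer, args=("record.bin",))
    t.start()
    producer(fake_fifo, 6000)         # 6000 chunks x 2 MB = the full 12 GB file
    t.join()

The bounded queue is the part that maps to your "not enough memory" error: with an unbounded queue, a slow consumer just lets the backlog grow until the allocator gives up.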
/Y
11-22-2018 03:14 AM
Hi Yamaeda,
Thank you for your suggestion. I already tried 500k samples per 5 ms, but the loop iteration time fluctuated between 6 ms and 8 ms, so it took longer than expected to record the data. Because of that I do not dare to go to higher data rates.
11-22-2018 03:33 AM
Hi,
What is the software running on? As Gerd says, an SSD should eat this up (most now do 500 MB/s). However, if you are running on a PXI controller with a spinning disk, it is likely to struggle and may well be too slow for 200 MB/s. In that case there may be nothing to be done in software; you would need to upgrade the drive.
Cheers,
James
11-22-2018 04:59 AM
hi James_McN,
I am using a Conduant DM-8M (8 TB SATA SSD), which supports PXIe-based systems; an overview of the SSD is in the attached PNG. The transfer rate is over 3 GB/s. The SSD is installed in an NI PXIe-1082 8-slot chassis with an NI PXI-8840 embedded controller.
11-22-2018 07:59 AM
Hi Kiranteja,
Cool, that looks like a nice product. So how does it appear in the 8840's OS: does it show as one drive, or is it configured as RAID? Are you able to run CrystalDiskMark or a similar benchmark tool against the drive you are writing to from LabVIEW, to confirm the setup is all happy?
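If you can't install a benchmark tool on the controller, even a crude sequential-write test would tell us something. A minimal sketch (Python; the path and sizes are just examples, point it at the Conduant drive):

    import os, time

    path = r"D:\bench.tmp"              # example path on the drive under test
    chunk = os.urandom(8 * 1024 * 1024) # 8 MB of incompressible data
    n = 256                             # 256 x 8 MB = 2 GB total

    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually hit the disk
    dt = time.perf_counter() - t0
    print(f"{n * len(chunk) / dt / 1e6:.0f} MB/s sequential write")
    os.remove(path)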
Failing that, if you are able to share the code that does the writing, that would be interesting. I wouldn't expect you to need much special code to achieve this, but obviously something isn't working.
Cheers,
James
11-22-2018 09:03 AM - last edited on 12-18-2024 10:54 AM by Content Cleaner
What is the budget? The HDD-8261 claims to do 2 GB/s, the PXIe-8267 5 GB/s (see PXI Storage Modules). With "Contact for pricing" it doesn't sound cheap, though.
Depending on the amount of data, a RAM disk can be convenient. Of course, if the data needs to be persistent, at some point it still has to go to a physical disk...
11-22-2018 10:12 AM - edited 11-22-2018 10:13 AM
@kiranteja93 wrote:
Hi Yamaeda,
Thank you for your suggestion. I already tried 500k samples per 5 ms, but the loop iteration time fluctuated between 6 ms and 8 ms, so it took longer than expected to record the data. Because of that I do not dare to go to higher data rates.
I'd wager it takes well over 1 ms per iteration for the 100k loop. 😄
When it comes to SSDs, it is good to know that they internally use large blocks (often 256 KB or more), even though the file system uses much smaller clusters (4 KB by default on NTFS). A small write can therefore cause the drive to read a whole 256 KB block, modify it in memory, and write it back. If those four SSDs are in a RAID, the effective block size may be much larger.
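If you want to experiment with that, one option is to buffer samples and only write full multiples of the block size. A toy sketch (Python; the 256 KB figure is an assumption, and the OS write cache coalesces writes anyway, so treat this as illustration rather than a guaranteed win):

    ALIGN = 256 * 1024                    # assumed SSD block size; RAID may want more

    class AlignedWriter:
        """Accumulate bytes and hand only full 256 KB multiples to the file."""
        def __init__(self, f):
            self.f, self.buf = f, bytearray()
        def write(self, data):
            self.buf += data
            full = len(self.buf) // ALIGN * ALIGN
            if full:
                self.f.write(self.buf[:full])   # aligned chunk goes to disk
                del self.buf[:full]
        def close(self):
            self.f.write(self.buf)              # flush the unaligned tail once, at the end
            self.f.close()

Usage would be to wrap the file once and push every chunk from the queue through write(), so the disk only ever sees block-sized writes plus one small tail.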
Try 1000k/10 ms or even 10M/100 ms just to compare. You can't crash harder than you already are, right? 😉
/Y
11-22-2018 11:04 AM
You'll find that writing a file in many small chunks (appending as you go) is significantly slower than writing one contiguous chunk in one go. This is down to the file system: the OS needs to locate free sectors, mark them as used, link them to the file, and then write. Add to this the actual write speed of the medium (which shouldn't really affect you, given your cool hardware) and things can get gnarly.
A trick I have used in the past for known-size files, if you are using 64-bit Windows: write the file beforehand and fill it with zeroes. That way Windows caches the disk sectors in RAM (which on 64-bit Windows can be a LOT). When you then overwrite the file, Windows copies the contents to memory and only flushes to disk at a later stage (and there is no searching for sectors, as the file is already mapped out). You can get amazing access times and throughput this way, indistinguishable from really fast hardware. The caveats are the time it takes to generate the file in the first place and the need for lots of free RAM.
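In code the trick looks roughly like this (a Python sketch of the same idea; chunk sizes are arbitrary):

    import os

    PATH = "record.bin"
    SIZE = 12_000_000_000               # the known final file size
    ZERO = bytes(64 * 1024 * 1024)      # 64 MB of zeroes per write

    # 1) Pre-allocate: write the whole file once so its sectors are mapped out.
    with open(PATH, "wb") as f:
        for _ in range(SIZE // len(ZERO)):
            f.write(ZERO)
        f.write(ZERO[:SIZE % len(ZERO)])

    # 2) During acquisition: reopen and overwrite in place ("r+b" keeps the
    #    existing allocation), letting the Windows file cache absorb the bursts.
    with open(PATH, "r+b") as f:
        f.seek(0)
        # f.write(chunk) for each chunk as it arrives...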