05-04-2011 05:09 PM
I'm trying to read a large .csv file (329 MB) and display a user-selected column of data on a graph. I have a standalone executable that does this wonderfully; however, I cannot contact the developer of that application to ask questions. I just have the standalone executable, no code. So I'm trying to write my own application with a slight modification. The problem is that LabVIEW runs out of memory.
In the standalone execution the memory usage is as follows:
515 MB - baseline memory usage
523 MB - application starts
1.25 GB - open 329 MB .csv file
1.5 GB - open additional 63.6 MB .csv file
Data from the first column is displayed on a graph. Memory usage does not seem to change at all when I'm selecting columns from either file for plotting on the graph. CPU usage is also minimal when selecting columns; at most an occasional spike of 20%.
In my application the memory usage is as follows:
515 MB - baseline memory usage
587 MB - LabVIEW starts
598 MB - application starts
657 MB - open 63.6 MB .csv file
1.10 GB - convert spreadsheet string to 2D array
2.04 GB - replace 1st row
and then LabVIEW runs out of memory. I've enclosed the subVI that actually reads the .csv file and converts it to a 2D array. Why is LabVIEW running out of memory for my application and not for the standalone application? Theories?
05-04-2011 06:22 PM
Is the standalone application written in LabVIEW? What version?
Can you attach a small data file showing what it looks like?
Is the bulk of the table numeric? Why are you carrying it around as strings?
Is the front panel of the subVI visible when you run it? Make sure it is closed to save the memory of the indicator data.
Such large data structures should never be in formatted files. Use a flat binary format instead!
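Since LabVIEW is graphical, here is only a rough Python/numpy analogy of why (file names and sizes are made up for illustration): a formatted text file inflates both the file and the parse, while a flat binary file maps one-to-one onto the array in memory.

import numpy as np

# Illustrative data: one million doubles, about 8 MB in memory.
data = np.random.rand(1_000_000)

# Formatted text: each double becomes a ~25-character decimal string,
# so the file is roughly 3x the in-memory size, and reading it back
# means re-parsing every value through temporary string copies.
np.savetxt("data.csv", data, delimiter=",")

# Flat binary: exactly 8 bytes per double on disk, and reading it
# back is a single block read with no string intermediates.
data.tofile("data.bin")
restored = np.fromfile("data.bin", dtype=np.float64)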
05-05-2011 04:06 AM
As altenbach said, you should use a binary data format and never use strings if your data is in numeric form.
However, for your present application, as you mentioned in your last two points, the string is converted to a 2D array (memory is dynamically allocated for the array here), and then the replace operation copies this huge array to some other free location in memory, so for a moment there are two copies of the array in memory, and only after that is the row replaced.
Try doing this:
Use the Read From Spreadsheet File function for numeric data (if you have numbers in your spreadsheet), with the proper delimiter, then use an In Place Element structure where you are replacing the 1st row of the array; that way you avoid making multiple array copies in memory. A rough sketch of the idea follows.
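(LabVIEW is graphical, so this is only a text-language analogy in Python with numpy; the array names and sizes are made up. The point is the same as the In Place Element structure: overwrite the buffer you already have instead of building a modified copy.)

import numpy as np

table = np.zeros((1_000_000, 10))  # the large 2D array already in memory
new_row = np.ones(10)

# Copying approach: np.vstack builds a brand-new array, so for a
# moment two full copies of the table exist side by side.
copied = np.vstack([new_row, table[1:]])

# In-place approach (what the In Place Element structure buys you):
# overwrite row 0 inside the existing buffer; no second copy is made.
table[0, :] = new_row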
Let me know if that works.
05-05-2011 11:29 AM
Hello faustina
Have you been able to experiment with what our esteemed and respected colleagues have suggested? Please let me know the status of your issue. I am committed to seeing it come to a resolution. Please keep me updated on your progress! Are you receiving an error code when LabVIEW runs out of memory? If so, could you please report it to me?
Thank you very much!
05-05-2011 05:15 PM
I have no idea what version of LabVIEW the standalone application was written in. I'm using 2010. I cannot supply a sample of the data file because of the nature of the data. I have no choice about the input file format: it is a .csv file. The majority of the data is numeric, but there are some characters. I never display the front panel of the subVI; I just pass the 2D array to another subVI that determines which columns are numeric and which are characters and builds numeric and string arrays accordingly.
However, the LabVIEW memory error occurs in the subVI I submitted, during the last function call. LabVIEW goes into never-never land and I have to forcibly shut it down from Windows.
05-05-2011 05:19 PM
@faustina wrote:
LabVIEW goes into never-never land and I have to forceably shut it down from Windows.
Just to eliminate the obvious... the VI you attached has two breakpoints set (two red frames).
I assume you're not running the program like that, or are you?
05-06-2011 05:14 PM
When I run the subVI there are no breakpoints.
I tried using the Read From Spreadsheet File function and I still had lots of memory usage. When I opened the 63.6 MB .csv file and converted directly to DBL, memory usage went from 564 MB to 1.2 GB. When I opened it as string, memory usage went up to 2.22 GB and LabVIEW ran out of memory. So it would be nice if I could convert everything to DBL, but I would lose the columns containing string data.
I have not been able to figure out how to use the In Place Element structure to replace rows or delete columns.
The error I always get is "Not enough memory to complete this operation." Sometimes I also get this error:
"The top-level VI "FSPRd.vi" was stopped at unknown on the block diagram of "ExtractPortion.vi""
05-08-2011 11:57 PM
Try using the In Place Element structure, and if nothing works, read the file in chunks of data, as sketched below.
The In Place Element structure should normally do the trick for memory overflow issues.
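(A hedged Python sketch of the chunking idea; the file name and chunk size are placeholders, and in LabVIEW the equivalent is reading a limited number of lines or bytes per loop iteration. The point is that the whole file never sits in memory as one giant string.)

# Process a large CSV one block of lines at a time.
CHUNK_LINES = 10_000  # tune to your memory budget

def process(rows):
    # Placeholder: whatever you do with one block of parsed rows.
    pass

with open("big_file.csv") as f:
    header = f.readline()  # handle the header row once, up front
    block = []
    for line in f:
        block.append(line.rstrip("\n").split(","))
        if len(block) >= CHUNK_LINES:
            process(block)
            block = []  # release this chunk before reading the next
    if block:  # leftover rows after the last full chunk
        process(block)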
05-09-2011 05:29 PM
Hey all,
The In Place Element structure should be relatively straightforward once you understand the concept behind it, which is that it keeps everything in the same memory location and does not resize it. Thus, deleting items inside an In Place Element structure would not work. Replacing items should, however, as discussed in this help document: http://zone.ni.com/reference/en-XX/help/371361G-01/glang/in_place_element_structure/ and as shown here:
I tried to reproduce your issue, but I managed to make both Notepad and Word crash while producing a large enough CSV file. My recommendation is to split the file up if at all possible. Since you said the code fails on the delete node, you could also simply use the Array Subset function to pull out everything except the line you want to delete, rather than asking LabVIEW to give you a second memory location. On my computer, though, the problem occurs at the VI that converts to a spreadsheet string; if I set .csv to false (meaning that VI does not have to convert to a float), it does not have a problem. My suggestion would be to convert to a string first, then use the In Place Element structure to convert items one by one with the %f format rather than %s. This is especially true since you are converting string to %f and then back to string (because you selected a 2D string array as your output). A rough text-language analogy of the Array Subset idea is sketched below.
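(Again, LabVIEW itself is graphical, so this Python/numpy fragment is only an analogy, with made-up names: slice out the pieces you keep rather than asking for a rebuilt copy of everything.)

import numpy as np

table = np.arange(50).reshape(10, 5)  # stand-in for the big 2D array
kill = 3                              # index of the row to drop

# Delete-style: builds a complete new array minus the row, i.e. the
# "second memory location" mentioned above.
smaller = np.delete(table, kill, axis=0)

# Subset-style: table[:kill] and table[kill + 1:] are views into the
# existing buffer; only the final concatenate allocates, and you can
# often skip even that by processing the two pieces separately.
smaller2 = np.concatenate([table[:kill], table[kill + 1:]])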
Thanks,
06-05-2013 12:26 PM - edited 06-05-2013 12:34 PM
I am running into a similar problem even simply reading "large" spreadsheet string files.
Just reading a 90 MB spreadsheet string file uses ~1 GB of memory with the LabVIEW "Read from Spreadsheet" VI.
LabVIEW memory usage at start: 132 MB.
After reading an 89.625 MB spreadsheet string file:
LabVIEW memory usage: 1,105 MB.
This seems a little excessive: in just reading, not displaying or modifying the data in any way after the read, LabVIEW is making approximately 10 copies of the data. I'd even be OK with that, but I cannot seem to deallocate the memory afterwards. I've tried adding the deallocate-memory function to the subVI, both in a larger VI and in an executable, but the memory used by both LabVIEW and the executable stays at ~1 GB. Even when I close all open VIs entirely, this memory usage persists. Only by exiting LabVIEW entirely am I able to reclaim that memory.
Does anyone have any ideas on how to either lower memory usage on a file read or reclaim the memory after the data has been read and converted into single, double, etc. format?
Edit: On a separate note, after a little more research, the deallocation works properly when reading the file as a string and then converting the data to doubles via Spreadsheet String to Array; but if the string data type is used, the deallocation does not work.
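(As a text-language analogy of that finding, with a placeholder file name: the pattern that frees memory amounts to dropping the big intermediate string as soon as the numeric conversion is done, so only the compact numeric array stays resident.)

import gc
import numpy as np

# Read the whole file as one big string (the expensive intermediate).
with open("big_file.csv") as f:
    text = f.read()

# Convert to a compact numeric array, the equivalent of
# Spreadsheet String to Array for all-numeric data.
table = np.array([[float(v) for v in line.split(",")]
                  for line in text.splitlines() if line])

# Drop the last reference to the string so its memory can actually be
# reclaimed; holding the data as strings keeps that memory pinned.
del text
gc.collect()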