04-29-2013 07:00 AM
Hi Alexander...
Using a tutorial, I managed to save up to 4 seconds at full resolution and 350 fps, specifying the number of images I want to save before starting the VI... Next step, I will try to set up a continuous acquisition, but first, of course, I have a memory problem.
I have 8 GB of RAM on this workstation, and I presume all of that memory is used for buffering?
Based on your experience, is it wise to upgrade to 64 GB (the maximum) instead of creating virtual drives (I'm using LabVIEW 64-bit)?
Thanks,
Antonio
04-29-2013 07:25 AM
Good to hear you're making progress!
Do you store the frame number for each grabbed picture? In my case these numbers are the only indication of the losses. The number comes with each frame when you fetch it from the camera. Missed frames show up as missing numbers in the sequence, like ...4, 5, 8, 9, 10, 11, 13, ...
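Something like this rough sketch (pseudo-Python just to show the idea, not my actual LV code; the names are made up) is what I mean by checking the sequence:

def find_missing_frames(frame_numbers):
    # return the frame numbers that never arrived, assuming the camera
    # numbers its frames consecutively
    missing = []
    for prev, curr in zip(frame_numbers, frame_numbers[1:]):
        if curr != prev + 1:                       # a gap in the sequence
            missing.extend(range(prev + 1, curr))  # e.g. 5 -> 8 means 6 and 7 were lost
    return missing

# the sequence ...4, 5, 8, 9, 10, 11, 13... gives [6, 7, 12]
print(find_missing_frames([4, 5, 8, 9, 10, 11, 13]))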
Yes, memory is important. A virtual drive (VD) resides in RAM and you see it as a drive, X:\ for example. It is a bit easier with a VD because after your experiment you just copy the files onto the real HDD. If you store all frames in RAM, then you have to implement a mechanism for post-experimental storage of the data (RAM -> HDD). There is a risk that, for example, an accidental press of the STOP button in LV or a crash of your VI will lead to the loss of all frames. Also, I am not sure how well LV is optimized for large amounts of data such as arrays or queues of frames... Take into account that Windows might attempt to swap a large LV application (with its data) to the HDD exactly during your experiment. That may lead to unwanted delays and I/O bus occupation, which is crucial for your lossless frame grabbing.
By the way, do you have a CameraLink or an Ethernet connection to your cam?
Please keep posting; I'd like to see an alternative way of grabbing!
Alexander
04-29-2013 07:55 AM
No, for now I save only the frames: it's a basic VI in which you can set the number of frames you want to save; it then buffers the images, transforms them into arrays and, at the end, converts them back into images and saves them.
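Roughly, the flow is like the sketch below (the real thing is a LabVIEW block diagram, so this is only pseudo-Python; grab_frame, to_array, to_image and save_image are invented placeholders standing in for the IMAQ steps, not real calls):

def acquire_and_save(n_frames, grab_frame, to_array, to_image, save_image, out_dir):
    # 1) acquisition loop: fetch each frame and keep it in RAM as an array
    buffered = [to_array(grab_frame()) for _ in range(n_frames)]
    # 2) only after grabbing has finished: convert back to images and write them to disk
    for i, arr in enumerate(buffered):
        save_image(to_image(arr), f"{out_dir}/frame_{i:06d}.tif")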
I "manually" checked the frame rate by filming a stopwatch and counting... ^_^'
Now I want to upgrade the RAM and see how many seconds I'm able to buffer, and then I'll try to work out a better VI. Unfortunately I'll use LabVIEW just for the acquisitions, and in the meantime I have to work on my PhD doing totally different stuff and prepare my experiments, so I can't devote all my time to learning LabVIEW.
I'm using a Basler acA2000-340kc CameraLink camera with an NI PCIe-1433 grabber.
04-29-2013 08:31 AM
Antonio,
Actually, I am using the same camera and 1433 grabber. With "full mode" (two CameraLink cables) this camera is capable of capturing up to 735 fps.
I am thinking of sharing my LV grabbing code. I work at a university, but the research contract we have is with a commercial company, so I'd have to ask my bosses if I can post my code here. I am very interested in sharing my code for the sake of getting feedback from the community and from people who can suggest improvements. Despite the 1-2% frame losses, I think this combination of the Basler acA2000-340kc CameraLink camera and the NI PCIe-1433 grabber is a good and cost-effective choice. If it proves possible to get lossless grabbing, my next step is to employ a second such cam+grabber on the same machine. It is challenging though...
My application is to study fast processes in steam explosions combined with liquid melt spreading under water, i.e. lava spreading onto the sea floor. What subject do you study?
regards,
Alexander
04-29-2013 08:51 AM
Ah... lava... Actually my project is in physical volcanology: I'm studying the dynamics of multiphase conduit flows and the associated seismic and acoustic signals created by gas transport in magmatic systems. So in my experiments I need to film the ascent, expansion and burst of Taylor bubbles. And yes, this camera and the grabber are a good combination for this kind of work. I hope to achieve some results soon, also because I need to start my experiments. But this being my first approach to LabVIEW, it's not so straightforward!
Let me know if you're able to share the code; it would be great to have a sample like yours to use as a starting point and to follow in the right direction!
Best,
Antonio
07-22-2013 10:42 AM
Hi Antonio,
I'm also looking for ways to increase the frame rate of my camera. I'm using a Basler acA2040-180kc camera and an NI PXIe-1435 board. I think in my case the maximum frame rate is lower than yours, so it might be easier.
Could you tell me where you found the tutorial?
Thank you very much!
Tong
08-26-2013 03:38 AM
I don't know if this thread, which I found only today, is still active; however, it has some similarities to problems I've tackled over the last few years. Our original configuration also had a PCIe-1429 and an A504k. The former was the first component to go, in favor of a BitFlow Karbon PCIe x8 for performance reasons (thus giving up IMAQ hardware support); shortly afterwards the camera was replaced with an Optronis CL600x2, which is about 4x more sensitive.
In my experience, designing a high-speed video stream-to-disk system based on LabVIEW was a non-trivial task, which required care in the choice of hardware, in the way of writing to disk, and in load balancing.
I can't share my whole code (which would probably be bloated and much too tailored to our specific application); however, a sort of manual for it is freely available on the net; there are some initial paragraphs elaborating on our design choices, which perhaps can be of help.
Enrico
08-26-2013 05:33 AM - edited 08-26-2013 05:34 AM
Enrico,
Of course this thread is still active! Thanks a lot for the well-made guidelines. They really show the right approach for the design and implementation of a video grabbing system.
I see that you can measure the frame losses. Have you made any kind of statistical measurement of frame losses as a function of frame rate and/or image size? You mentioned <1% lost frames in "normal operation"; at what frame rate do you mean normal operation? In my system I have a maximum of 735 fps (720x512 px, 1 byte/pixel) and typical losses were 2.5-4%. When I reduced the frequency of GUI updating in LV, the losses dropped to 1-1.5%. In my case all frames are stored in a 24 GB virtual drive sitting in memory. But still, I would like to have losses well below 1%.
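Just to put rough numbers on my case (back-of-the-envelope only, assuming 1 byte/pixel as above):

# back-of-the-envelope numbers for my setup (720 x 512 px, 1 byte/pixel, 735 fps)
width, height, bytes_per_px, fps = 720, 512, 1, 735
rate = width * height * bytes_per_px * fps   # sustained data rate in bytes/s
print(rate / 1e6)                            # ~271 MB/s coming off the grabber
ram_drive = 24 * 1024**3                     # the 24 GB virtual drive
print(ram_drive / rate)                      # ~95 s of recording before it is full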
Another question: Gonzales stores a software timestamp in milliseconds; have you tried to measure the time (upon image arrival from the camera) in microseconds? Having the timestamp measured more accurately would help to debug the code and find the sources of delays.
/Alexander
08-26-2013 06:30 AM - edited 08-26-2013 06:39 AM
No, we have not done a complete statistical analysis of lost frames vs. speeds and sizes. We did, however, do some analysis of latency. "Normal operation" in any event was anything like 1280x1024 @ 500 fps or 880x640 @ 1000 fps, probably mentioned somewhere in the manual too.
On a general level, I'd tend to say that we have 0 frames lost in normal conditions, but a slight risk of ring-buffer overrun when system activity is unexpectedly high, usually due to heavy GUI activity or window redraws. Ring-buffer overrun is macroscopic and can easily be monitored and detected in the stream. What the system may suffer from, rather, is a variable latency, meaning that an image uploaded to memory by the framegrabber is not analyzed for content or stored to disk within a deterministic time. Latency is disturbing in our application, which tries to deliver external triggers based on the values of ROIs, or decides whether or not to write depending on command lines; and that too is sensitive to GUI & graphics activity. The problem is that while in LabVIEW it is very easy to achieve concurrency, in a non-RT OS there is little one can do to avoid interruptions due to graphics, or to prioritize different asynchronous threads (only timed loops can be prioritized, IIUC).
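The general structure we rely on is a queued producer/consumer: the acquisition loop does nothing but push frames into a queue, and a separate consumer drains the queue and writes to disk, completely decoupled from the GUI. The sketch below is only an illustration of that structure in Python (grab_frame and write_frame are placeholder callables); in LabVIEW it is simply parallel loops connected by a queue.

import queue
import threading

frame_queue = queue.Queue(maxsize=1024)    # bounded, behaves like a ring buffer

def producer(grab_frame, n_frames):
    for _ in range(n_frames):
        frame = grab_frame()               # only fast, deterministic work here
        try:
            frame_queue.put_nowait(frame)
        except queue.Full:
            pass                           # overrun: the frame is dropped, but at least we know it
    frame_queue.put(None)                  # sentinel: tell the consumer to stop

def consumer(write_frame):
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        write_frame(frame)                 # slow disk I/O lives only in this loop

# e.g. run the two loops in parallel, away from any GUI code:
# threading.Thread(target=producer, args=(grab_frame, 10000)).start()
# threading.Thread(target=consumer, args=(write_frame,)).start()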
Frame loss can be checked very easily if the camera has an option of writing its own framestamp somewhere in the image. Both the Basler A504k and the Optronis CL600x2 do. Then it is just a matter of counting framestamps. Failing that, you have to devise some other way of stamping images, e.g. with some fast-response LEDs cycled by different outputs of a clock divider, kept in sync with the camera. We did try to film an LCD stopwatch, only to discover that the decay time of the LCD display was ~0.2 s, providing a useless measurement (the total number of frames matching the elapsed time, to the precision of both clocks, perhaps has some value).
Our ways of measuring latency involved turning on an LED visible in the image and measuring the time delay between the command signal and the brightening of the relevant pixels as detected by the program; counting the number of dark frames before the LED turned on when the same command line triggered the start of a recording with pretrigger (another feature of my big system); and the like. And looking at the jitter of our software timestamping of each frame.
I have considered using the High Resolution Relative Seconds.vi in order to timestamp to the µs rather than to the ms, but that would not be the point; I have no way of measuring the instant at which the image enters the framegrabber and is DMA'ed to memory. Programmatically, I can only register the time at which an LV call says "there is a new image to enqueue" (some framegrabber polling VI exits) and at which various other LV threads process it. Our time jitters were up to several ms anyway, and varied irregularly with GUI activity, too much for the µs to be repeatably significant.
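For what it is worth, once you do log one software timestamp per frame, the jitter estimate itself is trivial; the snippet below is only to illustrate the calculation (the names are made up):

def interval_jitter(timestamps, nominal_period):
    # spread of the inter-frame intervals around the nominal frame period
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    deviations = [abs(dt - nominal_period) for dt in intervals]
    return max(deviations), sum(deviations) / len(deviations)

# e.g. at 500 fps the nominal period is 2 ms:
# worst, mean = interval_jitter(logged_times, 1 / 500)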
04-21-2015 01:40 AM
Hi,
Could you please tell me which tutorial you used or referred to for your problem? We are also facing the same problem: we have a Basler ace 200 camera with 340 fps (max) and a PXIe-1435 card, and we are not able to save more than 30 fps in an AVI. If possible, kindly send the VI you made for your acquisition or guide us further.