11-08-2022 05:09 AM
Hello,
With the subVI I prepared in LabVIEW, I scan line data from a camera and transform it into a 3D image. But I have a problem: when I continuously receive data from the system, I experience pixel losses in the line data. The setup is a fixed camera over a moving belt that carries the parts being imaged. Is there any way I can fix this problem in LabVIEW?
I am using a .dll for the camera data. Which structure should I use in the software architecture to get stable data?
11-08-2022 07:17 AM
Hi idil,
@iErd wrote:
With the subVI I prepared in LabVIEW, I scan line data from a camera and transform it into a 3D image. But I have a problem: when I continuously receive data from the system, I experience pixel losses in the line data. The setup is a fixed camera over a moving belt that carries the parts being imaged. Is there any way I can fix this problem in LabVIEW?
Maybe!
It might help to attach your VI(s)…
@iErd wrote:
I am using a .dll for the camera data. Which structure should I use in the software architecture to get stable data?
Which hardware do you use right now?
Which DLL do you use?
Which "structure" do you use right now in your software?
11-08-2022 08:19 AM - edited 11-08-2022 08:22 AM
This seems like a communication problem, not an image data processing problem.
As a first step, you should try to figure out where the loss occurs. (faulty camera? during transmission? Inside the dll? inside your LabVIEW code? Buffer overflow? Checksum errors? Faulty cable? Inefficient code? etc.)
Do you lose single pixels or entire lines? How do you know that pixels are lost? (e.g. lines too short?) How many bits/pixel? What do the failures look like?
So. Many. Questions. (... and almost no useful information to troubleshoot)
I am not sure about your question about "which structure". A structure does not lose pixels! Correct program architecture would be important, but first we need to see what you are doing. Where does the 3D come in? What is the third dimension?
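If the loss turns out to be on the LabVIEW side, the usual cure is a producer/consumer architecture: one fast loop that does nothing but read from the camera and enqueue raw lines, and a second loop that dequeues and does the math at its own pace. LabVIEW is graphical, so this is only a rough Python sketch of the pattern, and `acquire_line` is a made-up placeholder for the camera DLL call:

```python
import queue
import threading

def acquire_line(i):
    # Made-up placeholder for the camera DLL call: returns one scan line.
    return [i % 3] * 8

def producer(q, n_lines):
    # Fast loop: only acquire and enqueue -- no math here, so nothing is dropped.
    for i in range(n_lines):
        q.put(acquire_line(i))
    q.put(None)                      # sentinel: acquisition finished

def consumer(q, image):
    # Second loop: dequeue and process; the queue buffers any processing bursts.
    while True:
        line = q.get()
        if line is None:
            break
        image.append([2 * p for p in line])   # placeholder per-line math

image = []
q = queue.Queue()                    # unbounded, like a LabVIEW queue left to grow
p = threading.Thread(target=producer, args=(q, 16000))
c = threading.Thread(target=consumer, args=(q, image))
p.start(); c.start()
p.join(); c.join()
print(len(image))                    # all 16000 lines arrive, even if processing lags
```

In LabVIEW terms this is the classic queued producer/consumer template: the acquisition loop never waits on the processing loop, so a slow math step cannot cause lines to be skipped.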
11-09-2022 12:36 AM
Hi,
I'm getting the image from the intensity graph indicator below.
The .dll was written in C++ based on the camera's SDK.
The subVI above runs in a continuous loop inside the main VI.
Every acquisition, the amount of data I get on the intensity graph decreases: the image shrinks from 6190x16000 pixels to 6190x8000 pixels. Rows of data go missing while the system is receiving data.
Is this caused by the subVI, or by memory bloat in the main VI?
11-09-2022 01:38 PM - edited 11-09-2022 02:26 PM
Sorry, we typically don't do well with code images and prefer attached VIs. (A picture can only sometimes tell the whole story)
Whatever we can see is pure Rube Goldberg!
11-09-2022 01:54 PM - edited 11-09-2022 01:55 PM
Hi idil,
@iErd wrote:
I am experiencing a decrease from 6190x16000 pixels to 6190x8000 pixels. The image size is getting smaller.
What is shown in "number of data" indicator?
What is the output of the loop iterator when the loop stops?
What determines when the loop stops? Where is that shared variable coming from?
What happens when you implement all suggestions from Christian?
11-09-2022 02:23 PM
11-09-2022 02:40 PM
11-10-2022 12:33 AM
Hi,
Maybe the reasons below will better explain why I implemented the solution this way.
1. For the camera's calibration, the SDK does not provide enough data. Because of that, I have to convert each line of data I read from the .xws file using a mathematical operation.
2. Regarding the disabled loop: I tried doing the operation inside the single For Loop above, but in a system that has to capture a moving image, the data acquisition rate was too slow that way; half of the image's pixels were lost.
3. The global stop variable in the While Loop is set to true/false by the camera sensor communication used in main.vi.
4. I previously used a For Loop with 16K iterations. But the image stream is fast and I have to control acquisition with the sensors. A fixed-count loop cannot exit before it has acquired a full 16K image, and that makes the system fail.
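A rough text sketch of what I mean in points 3 and 4 (Python pseudocode of my loop; `acquire_line` stands in for the DLL call, and the early stop simulates the belt sensor flipping the global stop variable):

```python
def acquire_line():
    # Made-up placeholder for the camera DLL call; returns one scan line.
    return [0] * 8

def acquire_image(expected_lines, sensor_stop_at):
    # Sensor-controlled loop: runs until the stop flag flips (simulated here by
    # sensor_stop_at), instead of a fixed 16K For Loop that cannot exit early.
    lines = []
    for i in range(expected_lines):
        if i >= sensor_stop_at:          # global stop variable set by the sensor
            break
        lines.append(acquire_line())
    if len(lines) < expected_lines:
        print(f"short image: {len(lines)} of {expected_lines} lines")
    return lines

img = acquire_image(16000, 8000)         # reproduces the 16000 -> 8000 shrink
```

Counting the acquired lines like this at least makes a short image visible immediately instead of only showing up as a smaller intensity graph.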
Since the disabled code sections mentioned above have no effect on the program, I would like to hear your opinions on my main question.
Best Regards
11-10-2022 05:01 AM - edited 11-10-2022 05:06 AM
I understand that each pixel has an individual linear calibration (slope, offset), and my code would do that much more elegantly.
If you think the math is slowing you down (I doubt it!), you could just record the raw (blue) data and apply the scaling in post-processing.
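As a sketch of that raw-then-scale idea (NumPy here as a text stand-in; the array shapes and the slope/offset values are made-up assumptions, not your actual .xws calibration):

```python
import numpy as np

# Assumed layout: one row per scan line, one column per pixel.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(8000, 6190)).astype(np.float64)

# Made-up per-pixel calibration; in reality these come from the .xws file.
slope = np.full(6190, 0.5)
offset = np.full(6190, 100.0)

# Record only `raw` during acquisition; one vectorized step afterwards
# applies the per-pixel linear calibration to every line at once.
calibrated = raw * slope + offset
print(calibrated.shape)              # (8000, 6190)
```

The point is that the acquisition loop only stores raw values, and the per-pixel slope/offset is broadcast over the whole image in one step after acquisition ends.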
The rest of your comments make very little sense.
How many bits/pixel do you really have? Can you attach your calibration file?