01-30-2018 05:04 AM
Hi everyone, I am currently working on the idea of creating an automated marble maze using image processing, and I have a few questions.
1) Is it possible to do real-time rectification with LabVIEW and the myRIO, as is done in this video? https://www.youtube.com/watch?v=r2zdY9wcdUg If so, how can it be achieved?
2) Is it true that using the FPGA module would speed up the image processing for ball detection and PID implementation?
Thanks in advance
01-30-2018 07:45 AM
Robertocima wrote: 1) Is it possible to do real-time rectification with LabVIEW and the myRIO, as is done in this video? https://www.youtube.com/watch?v=r2zdY9wcdUg If so, how can it be achieved?
That depends on what you call real time. And your camera resolution. And the rectification algorithm. And everything else happening on the myRIO.
With a "normal" camera, running at a "decent" frame rate and an "acceptable" resolution, I'd say it's likely you can process all frames within an "acceptable" time. At 30 fps, for example, that gives you a budget of roughly 33 ms per frame.
@Robertocima wrote:
2) Is it true that using the FPGA module would speed up the image processing for ball detection and PID implementation?
Some image processing will be faster on FPGA. Other processing will be near impossible. Raster image operations will probably be fast on FPGA, but operations that are not deterministic in how they work (e.g. not easy to parallelise) are hard on FPGA. I have no experience with vision on FPGA, but I'd expect it to be harder to develop than on RT or PC. I base this on the fact that everything on FPGA is harder to develop than on RT or PC. Any code change will set you back 10-600 minutes, depending on the complexity.
The PIDs can be done on FPGA, and those are relatively easy, although each change still takes time to recompile.
For me, moving things to FPGA is a form of optimization, and that should be done only when needed. Obviously, some problems can only be solved with an FPGA. In your project, I'd try RT first.
01-30-2018 08:17 AM
Thanks for the information.
I am using a webcam at 30 frames per second. I am using Vision Acquisition and Vision Assistant inside a while loop to identify the ball and obtain its coordinates. The thing is that it takes around 100 ms to complete one loop cycle (so the ball is not detected that often), which I guess is the reason why my PID controller is not working properly (am I right?). That is why I am wondering whether using the FPGA module would speed up this process.
In addition, which rectification algorithms are available in LabVIEW that could be implemented in the program? (similar to the video https://www.youtube.com/watch?v=r2zdY9wcdUg )
Thanks
01-31-2018 03:25 AM
@Robertocima wrote:
Thanks for the information.
I am using a webcam at 30 frames per second. I am using Vision Acquisition and Vision Assistant inside a while loop to identify the ball and obtain its coordinates. The thing is that it takes around 100 ms to complete one loop cycle (so the ball is not detected that often), which I guess is the reason why my PID controller is not working properly (am I right?). That is why I am wondering whether using the FPGA module would speed up this process.
100 ms seems reasonable, but it might very well be too slow. You should be able to calculate how far the ball travels in 100 ms to see how bad 100 ms is; a rough back-of-the-envelope estimate is sketched below. Of course, since you are controlling the ball, you could (theoretically/potentially) make it go very slowly...
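Just to give a feel for the numbers (everything below is a made-up example, not your actual maze: I'm assuming a 5-degree tilt and a ball starting from rest), a solid ball rolling on a tilted plate accelerates at roughly (5/7)·g·sin(tilt). I can't post G code here, so the sketch is in Python:

```python
# Rough feel for how far a rolling ball moves per 100 ms control period.
# Assumptions (made up, not measured): 5 degree plate tilt, ball starting at rest.
import math

g = 9.81                                   # gravity, m/s^2
theta = math.radians(5.0)                  # assumed plate tilt
a = (5.0 / 7.0) * g * math.sin(theta)      # rolling solid sphere on an incline

dt = 0.1                                   # one 100 ms control period
v = 0.0
for k in range(1, 11):                     # simulate 1 second in 100 ms steps
    step = v * dt + 0.5 * a * dt * dt      # distance covered during this period
    v += a * dt
    print(f"period {k:2d}: moved {step * 1000:5.1f} mm, speed now {v:.2f} m/s")
```

After about a second the ball is already covering tens of millimetres per 100 ms period, so a 100 ms position update gets marginal quickly unless you keep the ball slow.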
I'd dig a bit into the normal Vision API (LabVIEW's Vision VIs). Vision Assistant produces layers upon layers around these VIs. Going "low level" (removing those layers) might speed things up, and will definitely improve your understanding of Vision.
How do you do the ball detection? The algorithm you use will influence the speed a lot. If you use a circle detection, this will probably be a convolution (template matching), which is (relatively) slow. If the ball has a very distinct color, a simple threshold and a blob detection might be faster; a sketch of that approach is below. This is really a matter of experimenting and benchmarking a lot. Remember that even a simple operation on an image costs on the order of pixel width X pixel height elementary operations, so when time is important, every operation counts (and some more than others). Once the algorithm is lean and mean, you could consider moving some operations to the FPGA.
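To make the threshold-and-blob idea concrete, here's what it looks like in Python/OpenCV (not LabVIEW, obviously; in LabVIEW you'd use roughly the IMAQ threshold and particle analysis VIs). The HSV range below is a placeholder you'd have to tune for your ball and lighting:

```python
# Sketch of threshold + blob (particle) detection for a distinctly colored ball.
# OpenCV 4 style; the HSV range is an assumed placeholder, not a tuned value.
import cv2
import numpy as np

def find_ball(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([5, 100, 100])                 # assumed orange-ish ball, tune these
    upper = np.array([25, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)           # cheap per-pixel threshold
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                 # ball not found in this frame
    blob = max(contours, key=cv2.contourArea)       # assume the biggest blob is the ball
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid (x, y) in pixels
```

Each step touches every pixel only about once, which is why this tends to be much cheaper than sliding a circle template over the whole frame.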
You might also consider using a lower resolution. Note that halving both the width and the height of the image leaves only 25% of the pixels. Enough is enough. You could even determine the rough position of the ball at low resolution, and then determine the exact position on the high-resolution image using a ROI (sketched below).
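Here's that coarse-to-fine idea as a sketch (again Python, again with invented numbers; it reuses the hypothetical find_ball from the previous sketch):

```python
# Coarse-to-fine: find the ball roughly on a downscaled frame,
# then refine only inside a small ROI of the full-resolution frame.
import cv2

SCALE = 0.25          # quarter resolution in each direction -> 1/16 of the pixels
ROI_HALF = 40         # half-size of the full-resolution refinement window, in pixels

def find_ball_coarse_to_fine(frame_bgr):
    small = cv2.resize(frame_bgr, None, fx=SCALE, fy=SCALE, interpolation=cv2.INTER_AREA)
    rough = find_ball(small)                     # detector from the previous sketch
    if rough is None:
        return None
    cx, cy = int(rough[0] / SCALE), int(rough[1] / SCALE)   # map back to full resolution
    h, w = frame_bgr.shape[:2]
    x0, y0 = max(cx - ROI_HALF, 0), max(cy - ROI_HALF, 0)
    x1, y1 = min(cx + ROI_HALF, w), min(cy + ROI_HALF, h)
    fine = find_ball(frame_bgr[y0:y1, x0:x1])                # refine inside the ROI only
    if fine is None:
        return (cx, cy)                          # fall back to the coarse position
    return (x0 + fine[0], y0 + fine[1])
```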
It'd be hard to say whether the PID is working correctly or not. There's probably a lot going on between the ball detection and the PID. I guess that's where computational models come in, although I usually experiment until it works. There's however no guarantee that it can work, so modelling might be unavoidable. Or you could consider a simulation. That could run in its own (not real) time. That would take the timing factor out of the picture, so you can tweak everything until it works outside real time (a minimal example is sketched below)...
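A simulation doesn't have to be fancy. Here's a minimal 1D sketch (all gains, limits and the 100 ms sample time are assumptions to tune): the PID sets the plate tilt, the ball rolls according to the tilt, and everything runs on simulated time, so the camera and the myRIO are out of the loop while you tune:

```python
# Minimal non-real-time simulation of one axis of the maze:
# a PID controls the plate tilt, the ball rolls according to the tilt.
# All gains, limits and the 100 ms sample time are assumptions, not tuned values.
import math

def simulate(kp, ki, kd, setpoint=0.10, dt=0.1, steps=200):
    g = 9.81
    pos, vel = 0.0, 0.0                  # ball position (m) and velocity (m/s)
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - pos
        integral += err * dt
        derivative = (err - prev_err) / dt
        prev_err = err
        tilt = kp * err + ki * integral + kd * derivative   # PID output = tilt in rad
        tilt = max(-0.15, min(0.15, tilt))                  # assumed servo limit (~8.5 deg)
        acc = (5.0 / 7.0) * g * math.sin(tilt)              # rolling ball on a tilted plate
        vel += acc * dt
        pos += vel * dt
    return pos

# Crude gain search: tweak until the ball ends up near the 0.10 m setpoint.
print(simulate(kp=2.0, ki=0.1, kd=1.5))
```

Once the gains behave in the simulation, you at least know the 100 ms sample time itself isn't what's breaking the controller.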
This appears to be a fun and easy project, but it's definitely not very easy. Hopefully it's still fun though.
01-31-2018 03:34 AM
@Robertocima wrote:
In addition, which rectification algorithms are available in LabVIEW that could be implemented in the program? (similar to the video https://www.youtube.com/watch?v=r2zdY9wcdUg )
There are perspective correction algorithms, but AFAIK they require predefined calibration images (images with a grid of dots, for example).
IIRC there's a VI that will correct the image given some points. That's what you need. Detecting the corners is something you'll have to do yourself. With those points you can correct the perspective/angle. This correction will take time, so you might want to avoid it. If you detect those corners and the position of the ball, you might be able to correct just the position of the ball with the corner information (sketched below).
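To illustrate that point-only correction (a Python/OpenCV sketch, not the NI Vision VI; the corner coordinates are made-up examples of what your corner detection would return): compute the perspective mapping from the four corners once, then transform only the ball's coordinate each frame instead of warping the whole image:

```python
# Correct only the ball's position instead of warping the whole image.
import cv2
import numpy as np

# Four maze corners as seen by the camera (pixels), in a known order (example values).
src = np.float32([[102, 80], [538, 95], [560, 430], [85, 415]])
# Where those corners should land in the rectified "top view" (pixels).
dst = np.float32([[0, 0], [480, 0], [480, 360], [0, 360]])

H = cv2.getPerspectiveTransform(src, dst)    # compute once, reuse every frame

def rectify_point(x, y):
    """Map a single detected ball position into the rectified coordinate frame."""
    p = np.float32([[[x, y]]])               # shape (1, 1, 2), as perspectiveTransform expects
    return cv2.perspectiveTransform(p, H)[0, 0]

# Warping the full image every frame is the expensive alternative:
# rectified = cv2.warpPerspective(frame, H, (480, 360))
print(rectify_point(300.0, 250.0))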
Instead of detecting the corners, you might be able to use the positions of the controllers. Another shortcut...
"Shortcuts" like that are what it's all about if speed is an issue.
01-31-2018 09:11 AM
Thank you very much, wiebe@CARYA, for the information. I have been working on the ball detection for a few weeks now, and I concluded that thresholding and circle detection achieve better results. I tried color matching, but it does not work that well, so I am using a lamp to create a constant light intensity, as natural light affects the detection (it varies too much).
Could you please explain how the modelling and computational modelling can be done? Is it done with MATLAB or something like that? I have never done it before, so I am not quite familiar with this.
Thanks