The case for automated PnP

karlgg
 
Posts: 212
Joined: Sat Dec 27, 2008 2:41 pm

Re: The case for automated PnP

Post by karlgg »

I think you answered scsi's question, since I Googled the company he mentioned and found this definition:
Region of Interest (ROI) readout defines an output image area that is smaller than the full resolution of the camera. For example, a 640x480 camera can define an output area of 320x240 or 640x64. The output is a sub-window with a reduced field of view.
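
In other words, ROI readout is an on-sensor crop: fewer pixels per frame, which is what allows the higher frame rates. A quick software illustration (numpy standing in for what the camera does in hardware; the sizes are the ones from the definition above):

Code:

import numpy as np

# Simulate ROI readout by cropping: a real camera does this on-sensor,
# which is what makes the smaller readout faster.
full = np.zeros((480, 640), dtype=np.uint8)   # the full 640x480 sensor
x, y, w, h = 160, 120, 320, 240               # ROI origin and size
roi = full[y:y + h, x:x + w]                  # the 320x240 sub-window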

scsi
 
Posts: 30
Joined: Sun Jan 10, 2010 9:09 pm

Re: The case for automated PnP

Post by scsi »

bootstrap wrote:
scsi wrote:Hey bootstrap, can your camera do ROI capture at a higher frame rate? In one of the applications I've been working on, I needed over 50FPS in a fairly small region and ended up using an off-the-shelf camera from The Imaging Source. They can't do ROI, but they deliver 744x480@60FPS in Bayer. Another vendor I'm looking at is Point Grey. Their cameras do ROI but are quite expensive.

-scsi
What does ROI mean? If you are asking whether my camera system can be configured to read out "subwindows" of the full 2592x1944 sensor, the answer is yes. And the smaller the subwindow, the faster the frame rate can be. The subwindow can start at any x,y and be any width,height. The configuration can be changed at any time - the software on the PC simply sends commands to the camera controller over the same gigabit ethernet channel that carries image data to the PC. The command set will be fully defined, and every configuration register in the image sensor can be read and written.

I'm not sure whether I answered your questions, since I don't know what ROI is.
I missed this response somehow. I think you answered the ROI question. I just want to confirm that the sensor you are using does indeed support subwindow readout at higher frame rates. Some sensors don't allow it.

What would be the effective frame rate if I want to capture a window of 100x500 pixels in the middle of the frame? Will it get me my 100FPS or anything close to it?

-scsi

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: The case for automated PnP

Post by bootstrap »

scsi wrote:What would be the effective frame rate if I want to capture a window of 100x500 pixels in the middle of the frame? Will it get me my 100FPS or anything close to it?
The frame rate for 100x500 pixels should be much better than 100FPS. Offhand, I'd guess something more like 500FPS, maybe even better.

The 100x500 subwindow you mention is only about 1% of the pixels on my image sensor (50K pixels versus 5M pixels). But you can't get 100 times the speed, because each frame has a certain amount of fixed overhead required by the image sensor. Without that fixed overhead, you'd get 1500FPS for a 100x500 subwindow. With the fixed overhead, 500FPS is my guess.
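
Here's a rough back-of-the-envelope model of that effect. The timing constants are illustrative guesses, not datasheet numbers for the MT9P401: each frame costs a fixed overhead plus one row-time per row, and the row-time shrinks with the number of columns read.

Code:

# Hypothetical ROI frame-rate model; every constant is an assumption.
FULL_W, FULL_H = 2592, 1944
PIX_CLK_MHZ = 96.0           # assumed pixel clock
ROW_OVERHEAD_PIX = 700       # assumed horizontal blanking per row
FRAME_OVERHEAD_US = 200.0    # assumed fixed per-frame overhead

def fps(width, height):
    row_time_us = (width + ROW_OVERHEAD_PIX) / PIX_CLK_MHZ
    frame_time_us = FRAME_OVERHEAD_US + height * row_time_us
    return 1e6 / frame_time_us

print("full frame:  %.0f FPS" % fps(FULL_W, FULL_H))  # roughly 15 FPS
print("100x500 ROI: %.0f FPS" % fps(100, 500))        # a few hundred FPS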

But the bottom line is, the maximum frame rate increases as the subwindows get smaller.

The image sensor is the Aptina MT9P401 ("Bayer RGBG color" or "monochrome").

scsi
 
Posts: 30
Joined: Sun Jan 10, 2010 9:09 pm

Re: The case for automated PnP

Post by scsi »

bootstrap wrote:But the bottom line is, the maximum frame rate increases as the subwindows get smaller.

The image sensor is the Aptina MT9P401 ("Bayer RGBG color" or "monochrome").
Hey, that sensor can also do binning, which is even more interesting. Too bad I can't process the stream at 500FPS in real time; even at 100FPS in a tiny 100x500 window, my primitive blob-tracking "math" was consuming 80% of an 8-core machine.
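
For reference, 2x2 binning just sums or averages each 2x2 block of pixels on-sensor, quartering the data rate while keeping the full field of view. A toy software equivalent (made-up frame size):

Code:

import numpy as np

h, w = 480, 640                                   # made-up frame size
frame = np.random.randint(0, 256, (h, w), dtype=np.uint16)
# average each 2x2 block -> a 240x320 frame at 1/4 the pixel count
binned = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))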

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: The case for automated PnP

Post by bootstrap »

scsi wrote:Too bad I can't process the stream at 500FPS in real time; even at 100FPS in a tiny 100x500 window, my primitive blob-tracking "math" was consuming 80% of an 8-core machine.
I'm not sure whether this will help you with overhead, but maybe it will. I plan to have the FPGA compute a few very simple kinds of image information in real time and include that information in the packets. Mostly the information is there to help quickly figure out how much the camera has panned and/or tilted since the last frame, and also to help track objects moving in the frame.

For example, for each row and column of pixels, I plan to accumulate the "minimum pixel intensity", "maximum pixel intensity", "total pixel intensity" (the sum over the entire row/column), and "average pixel intensity". I'll do this separately for the even and odd pixels in each row/column, so these results are available for each color (add the two values together to get the monochrome intensity for the row/column).
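
In software terms, the accumulation looks roughly like this numpy sketch (just an illustration of the math, not the actual FPGA design):

Code:

import numpy as np

def line_stats(frame):
    """Per-row and per-column min/max/total/average intensity,
    computed separately for even and odd pixel phases so each
    Bayer color channel gets its own result."""
    stats = {}
    for axis, name in ((1, "row"), (0, "col")):
        for phase in (0, 1):
            # keep every other pixel along the axis being reduced
            sl = [slice(None), slice(None)]
            sl[axis] = slice(phase, None, 2)
            sub = frame[tuple(sl)].astype(np.uint32)
            stats[(name, phase)] = (sub.min(axis=axis),
                                    sub.max(axis=axis),
                                    sub.sum(axis=axis),
                                    sub.mean(axis=axis))
    return stats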

Objects (or parts of the field) that are especially dark, or especially bright, or especially red, green, blue, yellow, cyan, or magenta can be located very quickly this way. Compared against the same information from the previous frame, it is fairly trivial to figure out how much pan and/or tilt has happened, and then to determine whether certain objects are moving through the field. All this information adds less than 1% to the frame, but ***massively*** speeds up certain kinds of object recognition and image processing, because otherwise the CPU that receives the image would need to process every freaking pixel.
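
Comparing those profiles between consecutive frames is what makes the pan estimate cheap. A brute-force software sketch (hypothetical function; the sign convention and search range are arbitrary):

Code:

import numpy as np

def estimate_pan(col_sums_prev, col_sums_curr, max_shift=32):
    """Shift (in pixels) that best aligns two column-sum profiles,
    approximating horizontal pan between frames."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = col_sums_prev[max(0, s):len(col_sums_prev) + min(0, s)]
        b = col_sums_curr[max(0, -s):len(col_sums_curr) + min(0, -s)]
        err = np.mean((a.astype(np.int64) - b.astype(np.int64)) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best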

The information about each pixel row is appended to the ethernet packet that contains that row of pixel intensities, so the CPU can start drawing conclusions about pan/tilt/object-motion as the data arrives, even before the whole frame arrives. Obviously the controller cannot supply the information about pixel columns until the entire image has been read from the image sensor, so that information is provided in an extra 4 ethernet packets immediately after the last pixel row is sent.

If you're trying to track blobs, you might find this ***massively*** speeds up that process. Or you might find it does nothing for you, depending on your exact requirements. As usual, the devil is in the details (of your requirements). If the information I mention above is useless to an application, it can be omitted from the ethernet packets to save bandwidth and CPU processing. If this information isn't what you need, but some other relatively simple "on the fly" processing would help, perhaps I can make the FPGA compute that instead, assuming it is general purpose enough to be useful to more folks than just you.

For my primary application (real-time robotics vision systems), these kinds of hardware assisted speed-ups are extremely helpful. For many applications, this information is useless.

scsi
 
Posts: 30
Joined: Sun Jan 10, 2010 9:09 pm

Re: The case for automated PnP

Post by scsi »

bootstrap wrote:If you're trying to track blobs, you might find this ***massively*** speeds up that process. Or you might find it does nothing for you, depending on your exact requirements. As usual, the devil is in the details (of your requirements).
In my application I was tracking a bright red laser dot on a flat surface, and this FPGA implementation would indeed massively speed up the process. In fact, this is a very common laser range finder setup in robotics. In my case I was trying to implement a servo loop for a linear motor that would use the laser range finder as encoder feedback. It turns out that with my homemade motor (a voice coil), I'd need at least 100FPS to keep the system stable. The project is on hold now due to some other priorities, but I will definitely come back to it in half a year and would love to try your camera system then.
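
For anyone curious, the underlying geometry is plain triangulation: the laser sits a fixed baseline away from the camera, and the dot's pixel offset from the optical axis maps to range. A sketch with made-up constants (my real calibration values were different):

Code:

BASELINE_M = 0.05      # laser-to-camera separation (made-up)
FOCAL_PIX = 1200.0     # lens focal length in pixel units (made-up)

def range_from_dot(dot_x_pix, center_x_pix):
    """Distance to the surface from the dot's horizontal offset,
    using the classic triangulation relation Z = f * B / d."""
    disparity = dot_x_pix - center_x_pix      # pixels off-axis
    if disparity <= 0:
        raise ValueError("dot at or past infinity for this geometry")
    return BASELINE_M * FOCAL_PIX / disparity

# e.g. a dot 60 pixels off-axis reads as 1.00 m
print("%.2f m" % range_from_dot(1356.0, 1296.0))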

ai_dude
 
Posts: 21
Joined: Fri Dec 11, 2009 1:49 am

Re: The case for automated PnP

Post by ai_dude »

Okay, so we've made some good progress on my PnP machine lately, and it's time to fill you in on the details. Here's a thread I created to discuss it... http://www.cnczone.com/forums/showthread.php?t=109767
