The case for automated PnP

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: The case for automated PnP

Post by bootstrap »

ai_dude wrote:bootstrap,

My plan (which I think should work well) is to get the camera looking down on the PCB, and have it detect the fiducials first, do the math for each component location/rotation, and after that it should only have to look at each component when placing.

I've been assembling boards with stencil/oven lately, but hand-placing them, and have been learning a lot about how this works. With the proper amount of solder paste (0.006" thick for me), the components will move slightly and center themselves on the pads nicely, thereby reducing the ultra-stringent accuracy requirement.
Are any of those components BGAs or QFNs? QFPs are fairly easy in comparison... largely because you can see what you've done, and keep working until it looks perfect. My main worry is 0.50mm BGAs and QFNs. As far as I can tell, with these components, there is no substitute for "precise placement".

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

RE: The case for automated PnP

Post by bootstrap »

To provide context, the following links are images of the PCBs I need to assemble:

http://www.iceapps.com/img_6706.jpg == top of larger PCB : small PCB
http://www.iceapps.com/img_6721.jpg == bottom of larger PCB : small PCB

T: u01 is a BGA - EP3C5F256C8N - 256 balls
T: u02 is a QFP - C8051F120 - 100 pins
T: u03 is a QFP - 88E1111 - 128 pins
T: u05 is a BGA - CY7C1041DV33 - 56 balls
T: u06 is a BGA - CY7C1041DV33 - not always stuffed
T: u07 is a BGA - CY7C1041DV33 - not always stuffed
B: u10 is a QFN - max8717 - 28 pads - 0.50mm
B: u11 is a QFN - max8717 - 28 pads - 0.50mm
T: u20 is a QFN - 74LVC163BQ - 16 pads (u20 ~ u29)
T: u30 is a BGA - 74AUC16244 - 48 balls
T: u31 is a BGA - 74AUC16244 - 48 balls
T: u32 is a BGA - 74AUC32374 - 96 balls

T: top side
B: bottom side

The solo IC in the center of the top side of the small PCB is a 48-pad iLCC "image sensor". The iLCC package is pretty much the same as a QFN, except of course the top surface of the package is a glass window to let the image fall upon the image sensor. The other ICs on the small PCB (on the other side) will not usually be stuffed (though all the caps near the center of the PCB will be stuffed to bypass the 3 voltages the image sensor requires).

For me, "pick-and-place" isn't just for fun, it's for:
----- placing 0201 caps (see bottom side of BGA components on the larger PCB)
----- placing all BGAs precisely
----- placing all QFNs precisely

I can handle 0402s manually just fine. And maybe with a stereo microscope, vacuum pick-up pencil, and enough practice, I can handle the 0201s too. But so far at least, I cannot make myself believe that I can precisely place BGAs and QFNs by hand... EVERY TIME (which is what I need).

PS: I'm posting this message in the other two active PaP threads too, just in case.

mikeselectricstuff
 
Posts: 164
Joined: Fri Jun 11, 2010 9:21 pm

Re: The case for automated PnP

Post by mikeselectricstuff »

However, you may find that many generic image processing algorithms are MASSIVELY, INCREDIBLY, UNBELIEVABLY slow. Most often that's because they perform the process in the most generic manner, so the results are appropriate for any application --- except speed-sensitive ones. But you might find something.
But processors are massively fast these days. And you are working in a very controllable optical environment where almost all of the 'hard stuff' of eliminating the junk from the background isn't an issue.

Take a look at OpenCV
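
For a taste of what that looks like, here is a minimal Python/OpenCV sketch of the fiducial step discussed above: find two round fiducials in a top-down image of the board, then derive the offset and rotation to apply to every placement. Every constant in it (thresholds, the nominal CAD coordinates, the assumption that exactly two fiducials are found in CAD order) is a placeholder for illustration, not a tested recipe.

```python
# Illustrative only: find round fiducials in a top-down PCB image, then fit a
# similarity transform (rotation + translation + scale) from CAD millimetres
# to camera pixels.  Thresholds, nominal coordinates and the image file name
# are placeholders; matching found fiducials to the CAD list is assumed to be
# trivial here (two fiducials, same order), which real code must verify.
import cv2
import numpy as np

def find_fiducials(gray, min_area=50, max_area=5000):
    """Return centroids (x, y) of bright, roughly circular blobs."""
    _, bw = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        if area / (np.pi * r * r) > 0.7:          # reasonably round
            centers.append((x, y))
    return centers

# Nominal fiducial positions from the CAD data, in millimetres (hypothetical).
nominal_mm = np.array([[5.0, 5.0], [95.0, 55.0]], dtype=np.float32)

img = cv2.imread("board_top.png", cv2.IMREAD_GRAYSCALE)
found_px = np.array(find_fiducials(img), dtype=np.float32)

# 2x3 matrix mapping CAD mm -> camera pixels; apply it to each part's nominal
# X,Y (and add the part's nominal rotation to the recovered board rotation).
M, _ = cv2.estimateAffinePartial2D(nominal_mm, found_px)
part_mm = np.array([[[42.0, 17.5]]], dtype=np.float32)   # one part, hypothetical
part_px = cv2.transform(part_mm, M)
print("place at pixel:", part_px.ravel())
```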

alex_dubinsky
 
Posts: 188
Joined: Wed Jul 30, 2008 5:17 pm

Re: The case for automated PnP

Post by alex_dubinsky »

Yeah, OpenCV is a good library. The routines are optimized.

That's not to say they couldn't be made faster, or that you won't ever need them to be. But before you get lost in the thick woods of FPGA land, check out GPU programming with NVIDIA CUDA. The brutal truth is that GPUs have so many ALUs, run at such high clock rates, and have so much memory bandwidth relative to FPGAs that it's hard to make a case for FPGAs except in certain specific situations. And GPUs are an order of magnitude easier to program. (Doing something basic in an FPGA may be novel, but it isn't difficult. Writing an optimized algorithm, however, requires mastery of the most challenging facet: data flow and memory. You have to track the path of each byte across every clock cycle, every register, and every wire -- along tortuous, ideal paths.)

alex_dubinsky
 
Posts: 188
Joined: Wed Jul 30, 2008 5:17 pm

Re: The case for automated PnP

Post by alex_dubinsky »

bootstrap wrote:By the way, I've been thinking about various machine configurations, and one idea I rather like involves component feeders. The idea is to feed only ONE feeder/component at a time until the PCB/panel doesn't need any more of that component. Then switch to the next component and repeat.

One potential advantage of this approach is... the component can be super close to the PCB. Imagine the PaP machine always grabs components from a specific fixed X,Y location. Immediately ABOVE that location is a fixed-position downward looking camera that sees each component before the component is picked-up.

Immediately BELOW that location - except offset by 2 inches or so towards the center of the PCB - is a fixed-position upward looking camera. After the pick-up head grabs a component, it need move only 1 inch or so before the upward looking camera would "snap a pic" of the component on the pick-up head to determine the X,Y,R of the component relative to the center of the pick-up head.

Then the pick-up head would continue along the same general trajectory to the appropriate X,Y,R to place the component. For modest size PCBs, the motion of the pick-up head is only about 3 inches plus half the diameter of the PCB... in other words, a very short distance.

To make this work, the feeders need to move. In my mental picture, the feeders are on a rotating lazy-susan type of table. When all of component #1 is finished being placed, the table rotates to put the next component at the fixed position where components are always extracted. This position doesn't need to be exact, of course, since the fixed downward-looking camera can see exactly where the center of each component is before the head moves to pick it up.
I think you should just change the reels by hand for now. The rest are good ideas--having a single feeder lets the cameras be fixed in place and reduces feeder costs. I think if switching reels is made to be a convenient operation, it would be very effective. Remember, reels have to be loaded into feeders anyway, and you never have enough feeders to leave reels in them. And don't use gravity feed. Just use a commercial feeder.

ai_dude
 
Posts: 21
Joined: Fri Dec 11, 2009 1:49 am

Re: The case for automated PnP

Post by ai_dude »

My needs are simple -- 0603s (though I have been thinking of moving to 0402s lately), SSOPs, and TQFPs. PnP for me is about not breaking my back assembling boards, and freeing me up to do other things.

Other than OpenCV, I had found ImageJ and some others. I have no problem throwing more hardware at the system to speed it up if image processing libraries become slow, but the simplicity of using a ready-to-run library is very very appealing.

Cheers,
-Neil.

blogger
 
Posts: 43
Joined: Tue Nov 24, 2009 5:59 am

Re: RE: The case for automated PnP

Post by blogger »

bootstrap wrote: http://www.iceapps.com/img_6706.jpg == top of larger PCB : small PCB
http://www.iceapps.com/img_6721.jpg == bottom of larger PCB : small PCB
Hitting the main URL was a mistake on my part.
I didn't know websites that resize your browser window without asking permission still existed in 2010.

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: RE: The case for automated PnP

Post by bootstrap »

blogger wrote:
bootstrap wrote: http://www.iceapps.com/img_6706.jpg == top of larger PCB : small PCB
http://www.iceapps.com/img_6721.jpg == bottom of larger PCB : small PCB
Hitting the main URL was a mistake on my part. I didn't know websites that resize your browser window without asking permission still existed in 2010.
It's an old webpage (as the date at the bottom indicates), and I wasn't trying to attract anyone to the website. I commented out the resize code. Sorry for any inconvenience.

alphatronique
 
Posts: 231
Joined: Fri Jun 25, 2010 8:30 am

Re: The case for automated PnP

Post by alphatronique »

Hi bootstrap

I am interested in learning more about the vision system you are working on.

I am in the process of making a pick & place upgrade kit, and for now my only problem is vision.

My idea is to take an old pick and place machine (a Zevatech PM360/460 for now) and rebuild it.
That machine is extremely well made, simple, and low cost; my last one cost me $1700 for a never-used machine ;-)
The only problems are the lack of vision, the DOS operating system, and the 386 PC...

As for the operating software, that part is now nearly complete; it has been totally remade from scratch in Delphi.
I am also able to make tube and cut-tape feeders.

So with a good vision system this could make a really good setup.

Best regards
Marc L.
Alphatronique inc.

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: The case for automated PnP

Post by bootstrap »

Alphatronique wrote:Hi bootstrap

I am interested in learning more about the vision system you are working on.

I am in the process of making a pick & place upgrade kit, and for now my only problem is vision.

My idea is to take an old pick and place machine (a Zevatech PM360/460 for now) and rebuild it.
That machine is extremely well made, simple, and low cost; my last one cost me $1700 for a never-used machine ;-)
The only problems are the lack of vision, the DOS operating system, and the 386 PC...

As for the operating software, that part is now nearly complete; it has been totally remade from scratch in Delphi.
I am also able to make tube and cut-tape feeders.

So with a good vision system this could make a really good setup.

Best regards
Marc L.
Alphatronique inc.
I designed the "vision system" to be:
----- high-resolution (2592h x 1944v x 12-bits per pixel)
----- high-speed (up to 15 frames per second)
----- monochrome or "bayer" (RGBG) color
----- good for image enhancement
----- good for image processing
----- inherently multi-camera
----- extremely flexible

And instead of making the design and interface obscure (as most modern companies do), I am purposely making the system as flexible as possible. In fact, my "prime directive" is to make the system easy for others to adopt and make part of their systems - whatever those systems happen to be.

If you look at the PCBs (see links in previous message), that might help you understand the following description.

Each "ice-vision system" is composed of 1 "ice-quad controller" and 1, 2, 3 or 4 "ice-eye cameras". The "ice-quad controller" is the larger 5.80" square PCB, inside a 6" x 6" x 1" aluminum case. Each "ice-eye camera" is the smaller 2.80" square PCB, inside a 3" x 3" x 1" aluminum case.

You can connect 1, 2, 3 or 4 "ice-eye cameras" to each "ice-quad controller", which can control and capture images from 1, 2, 3 or 4 "ice-eye cameras" simultaneously. The "ice-quad controller" has one standard gigabit ethernet RJ45 jack that connects directly to any gigabit ethernet RJ45 jack on any PC. Thus you only need this one cable and one connection to operate 4 cameras. Since this is ethernet, not USB or firewire, this system interfaces to Windoze, Linux, MAC or any computer with ethernet --- thus NO drivers are required (plus, we provide simple function libraries to make camera control, image capture and image processing easier). AND we provide the low-level protocol, so anyone can work at the lowest levels if they wish.

The "ice-quad controller" performs LOSSLESS image compression, computes basic image processing statistics, performs a few simple forms of image processing (if requested), and sends the image data to the PC via its gigabit ethernet interface.

In past years I had to do some fairly sophisticated image processing, and I found that most of my image processing algorithms seemed to "enhance" lossy compression artifacts even more effectively than real detail --- very annoying!!! That's why I implement lossless compression - so the exact value of every original pixel is recovered by the PC, and appears in the final images. This way image processing routines can "dig as much out of the image as they can" and not be thwarted by compression artifacts.
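
To illustrate the idea (this is a simplified stand-in, not the actual scheme in the controller), a left-neighbour delta predictor already shows why per-row lossless compression is cheap and exactly reversible: flat regions of the image turn into runs of small values that any entropy or run-length coder shrinks well, and decoding recovers every pixel bit-for-bit.

```python
# Illustration only -- not the controller's actual algorithm.  A left-neighbour
# delta predictor over one row of 12-bit pixels: flat regions become runs of
# zeros and small values, and decoding is exact, so no detail is ever lost.
import numpy as np

def encode_row(row):
    """row: 1-D uint16 array of 12-bit pixel values -> signed deltas."""
    return np.diff(row.astype(np.int32), prepend=np.int32(0))

def decode_row(deltas):
    """Exact inverse of encode_row: every original pixel value is recovered."""
    return np.cumsum(deltas).astype(np.uint16)

row = np.array([512, 512, 513, 513, 900, 901, 901], dtype=np.uint16)
assert np.array_equal(decode_row(encode_row(row)), row)
```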

Also, I'm trying very hard to make this system rugged and reliable, but as cheap as possible. My intention is to make a complete 4-camera system cost under $1000... and hopefully only $750. Of course, this doesn't include lenses, since customers have such wildly different lens requirements. But good C-mount and CS-mount lenses are not very expensive in most focal lengths and speeds. The "ice-eye cameras" accept CS-mount and C-mount lenses natively, but can also take simple adaptors to accept the many smaller sizes (16mm, 12mm, 8mm, 6mm, etc).

If you have any specific questions, just ask. I'd be happy to see you build one of these systems into a pick-and-place machine --- that would be very cool, and exactly the kind of application that I envisioned from the start. I bought quite a few prototype PCBs, hoping to be able to get units to early adopters. It will take me a few months to get enough of the microprocessor software and FPGA firmware working well enough to make the system viable for you, but if you can wait that long, it might be just perfect for your application. Just fire questions at me, and I'll fire back answers.

alphatronique
 
Posts: 231
Joined: Fri Jun 25, 2010 8:30 am

Re: The case for automated PnP

Post by alphatronique »

Hi Bootstrap

Good; there is no rush here.

Did you plan to have some kind of frame grabber input for a normal NTSC camera?
The camera on the machine head needs to be really small; for now I use an Elmo 17mm micro camera.

For bottom vision it can be a bigger camera, since that one sits on the machine base.

In any case you can contact me at [email protected]

I may also be able to help with your PCB needs; I am a certified IPC PCB designer.

Best regards

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: The case for automated PnP

Post by bootstrap »

Alphatronique wrote: Did you plan to have some kind of frame grabber input for a normal NTSC camera?
The camera on the machine head needs to be really small; for now I use an Elmo 17mm micro camera.

For bottom vision it can be a bigger camera, since that one sits on the machine base.

In any case you can contact me at [email protected]

I may also be able to help with your PCB needs; I am a certified IPC PCB designer.

Best regards
To the extent possible, my design is completely "configurable" so it can be a component in any kind of device or application. Therefore it doesn't conform to any video standards, though it can be configured to generate frames at any rate (up to its maximum throughput) to match various standards.

Most cameras capture a whole image/frame, then lossy-compress that image/frame to some standard format (MJPG, MPEG, etc), then transmit it. My system does NOT capture or store images/frames in the camera. As each horizontal row of pixels is received from the image sensor, the pixels are losslessly compressed, placed into a low-level ethernet packet, and the CRC32 for the ethernet packet is computed --- all "on the fly". When the entire row of pixels has been received (typically 2592 pixels), the ethernet packet is transmitted to the PC over the gigabit ethernet connection.

The camera never has more than two rows of pixels before they are transmitted to the PC. Therefore, the device has no "frame buffer" at all. Of course, the software on the PC that reads the incoming ethernet packets typically assembles the incoming packets into an image - which it then examines, processes and/or saves to disk or elsewhere.

Though this device is extremely flexible, remember the fundamental purpose of this device is to be a "robotics vision system". Which means, some "robot" (of some kind) needs to examine incoming images, draw inferences about the environment being viewed, and take actions to manipulate that environment. For any robotics application that needs to respond to the environment quickly, waiting for an entire image to accumulate in the camera, then waiting for a compression routine to compress the image, then waiting for the image to be transmitted to the PC, then waiting for the image to be decompressed in the PC --- before the vision software can "look" at the image --- is too large a delay. Since my device transmits each horizontal (or vertical) row of pixels immediately, the vision software can start looking for "activity" in each row of pixels as it is received - and the delay from the CCD chip to the vision software is only 1 pixel row (20 microseconds or less, depending on settings).
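
To sketch what the PC side can look like (the datagram layout below is an invented stand-in for illustration; the real low-level ethernet protocol is not reproduced here), the receiver simply drops each row into place and can react row by row, long before the full frame has arrived:

```python
# Sketch of the receiving side of a row-streamed camera.  The datagram layout
# here is ASSUMED for illustration only: <uint32 frame no><uint32 row no>
# followed by up to 2592 little-endian uint16 pixels per packet.  The real
# device uses its own low-level ethernet protocol, which is not shown here.
import socket
import struct

import numpy as np

WIDTH, HEIGHT = 2592, 1944
HDR = struct.Struct("<II")                    # frame number, row number
PORT = 50000                                  # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint16)

while True:
    pkt, _ = sock.recvfrom(HDR.size + WIDTH * 2)
    frame_no, row_no = HDR.unpack_from(pkt)
    row = np.frombuffer(pkt, dtype=np.uint16, offset=HDR.size)
    frame[row_no, :len(row)] = row

    # The point of row streaming: react per row, not per frame.  Here we just
    # flag any row containing a bright feature (12-bit data, arbitrary cutoff).
    if row.size and row.max() > 3000:
        print(f"frame {frame_no}: activity on row {row_no}")

    if row_no == HEIGHT - 1:
        pass  # whole frame assembled in `frame`; hand it to the vision code
```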

Each camera is small. If you were to mount the entire camera (in its aluminum case) onto your machine, that is 3" x 3" x 1" (76mm x 76mm x 25mm). But you can also just take the PCB that holds the CCD out of my case and mount it into your own housing - in which case it would be quite lightweight (~25 grams) and thin (~10mm).

Keep me informed of your progress. What you're doing is very interesting.

scsi
 
Posts: 30
Joined: Sun Jan 10, 2010 9:09 pm

Re: The case for automated PnP

Post by scsi »

Hey bootstrap, can your camera do ROI capture at a higher frame rate? In one of the applications I've been working on I needed over 50 FPS in a fairly small region and ended up using an off-the-shelf camera from The Imaging Source. They can't do ROI, but they deliver 744x480 @ 60 FPS in Bayer. Another vendor I'm looking at is Point Grey. Their cameras do ROI but are quite expensive.

-scsi

bootstrap
 
Posts: 73
Joined: Wed Jun 02, 2010 8:47 pm

Re: The case for automated PnP

Post by bootstrap »

scsi wrote:Hey bootstrap, can your camera do ROI capture at a higher frame rate? In one of the applications I've been working on I needed over 50 FPS in a fairly small region and ended up using an off-the-shelf camera from The Imaging Source. They can't do ROI, but they deliver 744x480 @ 60 FPS in Bayer. Another vendor I'm looking at is Point Grey. Their cameras do ROI but are quite expensive.

-scsi
What does ROI mean? If you are asking whether my camera system can be configured to read out "subwindows" of the full 2592x1944 sensor, the answer is yes. And the smaller the subwindow, the faster the frame rate can be. The subwindow can start at any x,y and be any width,height. The configuration can be changed at any time - the software on the PC simply sends commands to the camera controller over the same gigabit ethernet channel that carries image data to the PC. The command set will be fully defined, and every configuration register in the image sensor can be read and written.
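
As a rough back-of-envelope illustration (the pixel clock and blanking figures below are invented placeholders, not the real sensor numbers), the achievable frame rate scales roughly with the inverse of the subwindow area plus blanking:

```python
# Back-of-envelope only: approximate frame rate for a subwindow readout.
# The pixel clock and blanking figures are invented for illustration; the
# real numbers come from the sensor's datasheet and register settings.
def approx_fps(roi_w, roi_h, pixel_clock_hz=96e6, h_blank=200, v_blank=50):
    row_time = (roi_w + h_blank) / pixel_clock_hz     # seconds per row
    return 1.0 / ((roi_h + v_blank) * row_time)

print(f"full frame 2592x1944: ~{approx_fps(2592, 1944):.1f} fps")
print(f"subwindow 744x480:    ~{approx_fps(744, 480):.1f} fps")
```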

I'm not sure whether I answered your questions, since I don't know what ROI stands for.

alphatronique
 
Posts: 231
Joined: Fri Jun 25, 2010 8:30 am

Re: The case for automated PnP

Post by alphatronique »

Hi bootstrap

This is a picture of my fiducial/teaching camera: http://www.alphatronique.com/Zeva_vision.jpg

I am not planning on needing very fast frame grabbing, since the machine moves the part/camera close to its target point, then takes a snapshot, processes the image, and generates the X, Y, theta correction -- so it is a "still" image being processed. The same applies to the part-centering camera; in that case it is used to correct centering and rotation.

I want to keep it a really simple system to keep the cost low. As I may have already mentioned, my goal is to offer a DIY/upgrade system for pick and place. scsi already mentioned that for now only Madell does something similar, but I find the Madell software too complicated for what it does. My pick and place machine can actually run with no PC monitor, since all setup is done on an offline PC (that offline-setup PC is already done and available): I put a floppy in the machine, hit F4, and it starts... All the feeder types, pick points, etc. are captured once and put in a database.

Coming next, it will run from a motion controller on the machine itself, which will read the binary file and use it to move the machine to the right positions. The best would be to have the vision system entirely in hardware, so the PC is removed from the machine altogether. I think that is doable if it is kept to a minimum: one algorithm for fiducials and one for part rotation -- an edge detection, then counting pixels from the edge to the camera's center pixel.
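
As one possible sketch of that part-rotation step (the Otsu threshold and the backlit, dark-part-on-bright-background assumption are only an example, not the actual implementation), Python/OpenCV can read the offset and angle straight off the part silhouette's minimum-area rectangle:

```python
# One way to get a part's centering/rotation correction from an up-looking
# camera image: threshold the backlit silhouette, take the largest contour,
# and read centre + angle from its minimum-area rectangle.  Thresholding
# choices and the image/nozzle-centre values below are illustrative only.
import cv2

def part_offset_and_angle(gray, nozzle_center):
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    part = max(contours, key=cv2.contourArea)          # assume largest blob is the part
    (cx, cy), (w, h), angle = cv2.minAreaRect(part)    # centre, size, angle in degrees
    dx = cx - nozzle_center[0]
    dy = cy - nozzle_center[1]
    return dx, dy, angle                               # X, Y, theta correction to apply

img = cv2.imread("bottom_view.png", cv2.IMREAD_GRAYSCALE)
print(part_offset_and_angle(img, nozzle_center=(640, 480)))
```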

P.S. In the picture you can also see prototypes of the cut-tape feeder and stick feeder I am working on ;-)

Best regards
Marc Lalonde
Alphatronique inc.
