8/31/2018 | 1 MINUTE READ

Robotic Bin Picking Made Simple(r)

Robotically bin-picking randomly oriented components has long been a challenge, one ordinarily solved by using a 3D vision system.

When Systematix (systematix-inc.com), a systems integrator, was tasked with developing an automated system to pick car seat lumbar actuator assemblies out of a bin and place them into a wire nest for assembly, its first idea was to pair a robot with a 3D sensor.

But then its engineers realized that each actuator in the bin didn’t need to be mapped in all three dimensions; two would suffice. They could mount a 2D camera on a vertical slide so that each component is measured only in X and Y.

Because sheets of cardboard separate the layers of randomly oriented parts, and each divider is removed once the parts on top of it have been picked, the Z axis (i.e., depth) needs to be measured just once per layer.
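The pick-pose logic this implies is simple: the camera supplies X and Y per part, while the distance sensor supplies one Z value per layer. A minimal sketch of that combination is below; the function name, scale factor, and offsets are illustrative assumptions, not values from the actual system.

```python
def pixel_to_pick_pose(u, v, layer_z_mm,
                       mm_per_px=0.5,
                       cam_origin_mm=(100.0, 200.0)):
    """Convert a part's pixel location (u, v) into a robot pick pose.

    Assumes the camera looks straight down and the image plane is
    parallel to the parts layer, so a single scale factor maps pixels
    to millimetres. layer_z_mm comes from the distance sensor, read
    once per cardboard layer. All names and values are illustrative.
    """
    x_mm = cam_origin_mm[0] + u * mm_per_px
    y_mm = cam_origin_mm[1] + v * mm_per_px
    return (x_mm, y_mm, layer_z_mm)
```

In practice a real cell would use a full hand-eye calibration rather than a fixed scale factor, but the principle is the same: only Z changes between layers.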

The result of this “thinking inside the bin” is a design featuring a six-axis Yaskawa robot (motoman.com), an IFM Effector 200 photoelectric distance-measuring sensor (ifm.com), and a Cognex (cognex.com) In-Sight 8000 camera mounted to an IAI servo-driven slide (intelligentactuator.com).

The camera uses RedLine, the latest iteration of PatMax, the geometric pattern-matching technology that Cognex first patented in 1996. Up until then, pattern matching technology relied upon a pixel-grid analysis process called normalized correlation. That method looks for statistical similarity between a gray-level model or reference image of an object and portions of the image to determine the object’s X-Y position. PatMax instead learns an object’s geometry from a reference image using a set of boundary curves tied to a pixel grid and then looks for similar shapes in the image without relying on specific gray levels. This approach, now widely used by numerous machine vision companies, greatly improves how accurately an object can be recognized despite differences in angle, size and shading.
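For readers unfamiliar with the older technique, normalized correlation can be sketched in a few lines: slide a gray-level template over the image and score each position by statistical similarity. The toy implementation below illustrates the pixel-grid idea described above; it is not Cognex's implementation, and a production system would use an optimized routine rather than this brute-force loop.

```python
import numpy as np

def normalized_correlation(image, template):
    """Slide a gray-level template over an image and return the (x, y)
    offset with the highest normalized correlation score (in [-1, 1]).

    Toy illustration of pre-PatMax pixel-grid matching: it compares raw
    gray levels, so it is sensitive to rotation and scale changes.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_xy = -1.0, (0, 0)
    h, w = image.shape
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: correlation undefined, skip
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

A perfect match scores 1.0 regardless of uniform brightness offsets (the mean subtraction handles those), but unlike geometric pattern matching, a rotated or rescaled part will score poorly, which is exactly the limitation PatMax was designed to overcome.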

The system not only gets the job done within the required cycle time; the long-proven 2D technology was presumably also more cost-effective than a full 3D vision approach would have been.