Process development for robot-assisted object removal using multisensory 3D measurement technology (TakeIT)

Finished
Sensor system in combination with robot arm (image: i3mainz, CC BY-SA 4.0)

The development project “TakeIT” is intended to contribute to solving a central problem of automation technology and to create the conceptual, structural and technological prerequisites that enable robots to recognise and grasp objects lying unordered in a bin. To do this, robots need a spatial description of the objects to be grasped that is as unambiguous, precise and complete as possible. Despite advances in measurement technology, this problem has not yet been solved satisfactorily. A significant step towards a solution, however, can be achieved with recently available 3D measuring cameras, particularly when the 3D point clouds they generate are combined with high-resolution images, thereby compensating for their deficits in detail resolution and in geometric and optical characterisation.

Motivation

In automation technology, robots that can recognise and grasp objects lying unordered in a bin play a central role (“bin picking”). To do this, they need a spatial description of the objects to be gripped that is as unambiguous, precise and complete as possible.

Despite advances in measurement technology, this problem has not yet been solved satisfactorily, because the measurement methods used so far do not offer the necessary speed, accuracy and completeness of 3D surface capture at the same time. Recently available 3D measuring cameras, with their high measuring frequency, can contribute to solving this problem.

To compensate for the weaknesses of current 3D cameras, namely their higher measurement noise, their lower resolution and their lack of morphological precision at edges, the 3D data sets are supplemented with high-resolution images from an industrial camera operated simultaneously.

Activities

The measurement system, consisting of a higher-resolution monochrome 2D camera and a mobile 3D camera, requires a common calibration and knowledge of the mutual orientation of the two sensors in order to exploit their potential and allow their combined use. In a so-called “simultaneous calibration”, the camera parameters of both systems and their relative orientation are determined in a joint adjustment procedure. This fixed relative relationship between the two sensors makes it possible to re-establish the correspondence between image and 3D measurements.
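The project description does not name the calibration toolchain that was used; purely as an illustration, the following Python sketch shows how such a simultaneous calibration could be set up with OpenCV, assuming a common checkerboard target that is visible to the 2D camera and in the amplitude (intensity) image of the 3D camera. Pattern dimensions, square size and all identifiers are assumptions for the sketch, not project specifics.

    # Illustrative sketch (not the project's actual toolchain): joint calibration of
    # the high-resolution 2D camera and the 3D camera with OpenCV, using a common
    # checkerboard target. It is assumed that the 3D camera provides an 8-bit
    # amplitude/intensity image in which the checkerboard corners can be detected.
    import numpy as np
    import cv2

    PATTERN = (9, 6)       # inner checkerboard corners (assumed)
    SQUARE_SIZE = 0.02     # square edge length in metres (assumed)

    # Checkerboard corner coordinates in the target's own coordinate system
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

    def calibrate_pair(imgs_2d, imgs_3d):
        """Estimate both sets of camera parameters and the relative orientation R, t."""
        obj, pts_2d, pts_3d = [], [], []
        for a, b in zip(imgs_2d, imgs_3d):
            ok_a, ca = cv2.findChessboardCorners(a, PATTERN)
            ok_b, cb = cv2.findChessboardCorners(b, PATTERN)
            if ok_a and ok_b:              # keep only views seen by both sensors
                obj.append(objp)
                pts_2d.append(ca)
                pts_3d.append(cb)

        size_2d = imgs_2d[0].shape[::-1]
        size_3d = imgs_3d[0].shape[::-1]
        # Intrinsic calibration of each camera individually
        _, K1, d1, _, _ = cv2.calibrateCamera(obj, pts_2d, size_2d, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj, pts_3d, size_3d, None, None)

        # Joint adjustment: intrinsics are held fixed, the relative orientation
        # R, t between the two sensors is estimated
        _, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
            obj, pts_2d, pts_3d, K1, d1, K2, d2, size_2d,
            flags=cv2.CALIB_FIX_INTRINSIC)
        return K1, d1, K2, d2, R, t

With a fixed R, t of this kind, every 3D measurement can later be projected into the high-resolution image (for example with cv2.projectPoints), which is one way to restore the correspondence between image and 3D measurements described above.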

This enables the two sensors to interact in the robust generation of an overall 3D model through the registration of the individual views. In a further processing step, the position and orientation of the individual objects to be removed are detected from the overall model and transmitted to the robot. The object positions are determined using segmentation algorithms applied to the 3D data set, combined with methods from image analysis; a sketch of one such segmentation step follows below.
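The concrete segmentation and pose-estimation algorithms are not specified in the project description. As one common approach, the following Python sketch uses Open3D to cluster the overall point cloud with DBSCAN and derives a rough position and orientation per cluster from a PCA-based oriented bounding box; the parameter values (eps, min_points) are arbitrary example values.

    import numpy as np
    import open3d as o3d

    def detect_object_poses(scene_points, eps=0.01, min_points=50):
        """Cluster the overall scene cloud and return a rough position and
        orientation per candidate object (illustrative, not the project's method)."""
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_points))
        labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))
        poses = []
        for label in range(labels.max() + 1):
            cluster = pcd.select_by_index(np.where(labels == label)[0])
            # PCA-based oriented bounding box: centre = position, R = orientation
            obb = cluster.get_oriented_bounding_box()
            poses.append((obb.center, obb.R))
        return poses

In practice, the high-resolution 2D image can additionally be used to verify or refine such candidates, exploiting the 2D/3D correspondence established by the joint calibration.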

Results

Various methods based on different concepts were tested for the fusion of the 3D data sets captured from several viewing directions. Based on these findings, software modules were developed and implemented. With these modules, the overall system can merge multi-sensory, multi-perspective 2D/3D data sets into a single overall data set in real time; one possible fusion approach is sketched below. This overall data set forms the basis for the recognition and robot-assisted removal of the objects.
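Which fusion concept was ultimately chosen is not stated. Purely as an illustration, the Python sketch below shows a straightforward pairwise point-to-plane ICP pipeline with Open3D that incrementally registers clouds from several viewing directions into one model, assuming the views are already roughly pre-aligned (for example from known sensor poses). Voxel size and distance threshold are arbitrary example values.

    import open3d as o3d

    def fuse_views(clouds, voxel=0.005):
        """Incrementally register point clouds captured from several viewing
        directions and merge them into a single overall model (illustrative)."""
        model = clouds[0].voxel_down_sample(voxel)
        model.estimate_normals()
        estimator = o3d.pipelines.registration.TransformationEstimationPointToPlane()
        for cloud in clouds[1:]:
            src = cloud.voxel_down_sample(voxel)
            src.estimate_normals()
            # Fine registration of the new view against the growing model
            reg = o3d.pipelines.registration.registration_icp(
                src, model, 5 * voxel, estimation_method=estimator)
            model += src.transform(reg.transformation)
            model = model.voxel_down_sample(voxel)  # keep the merged model compact
        return model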

The project results primarily benefit the project partners Metronom Automation GmbH and Hirata Engineering Europe GmbH. Both companies are active in factory automation and robot-assisted manufacturing and quality control, and TakeIT can be deployed in their central fields of application in the future.