2D/3D Image Acquisition with Data Fusion

A combined 2D/3D sensor (Intel RealSense) mounted on the robot arm captures 2D color images and a fused depth image of the entire box. Because the 3D images are acquired from the moving arm, images from several different angles and camera positions can be fused into one comprehensive 3D image of the objects.
This type of mounting also eliminates the need for an external fixed sensor that could restrict the arm’s movements.
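
How DRS fuses the individual views internally is not published; the sketch below shows the general technique under stated assumptions: a pinhole camera model and camera-to-base poses obtained from the robot's forward kinematics plus the hand-eye calibration. All function and parameter names are illustrative.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                                   # drop invalid pixels
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def fuse_views(depth_images, cam_poses, intrinsics, voxel=0.002):
    """Merge several depth views into one cloud in the robot base frame.

    cam_poses: one 4x4 camera-to-base transform per view, known from
    forward kinematics and the hand-eye calibration of the arm-mounted
    sensor.
    """
    fx, fy, cx, cy = intrinsics
    clouds = []
    for depth, T in zip(depth_images, cam_poses):
        pts = depth_to_points(depth, fx, fy, cx, cy)
        pts_h = np.c_[pts, np.ones(len(pts))]       # homogeneous coords
        clouds.append((T @ pts_h.T).T[:, :3])       # into the base frame
    merged = np.vstack(clouds)
    # Average all points falling into the same voxel: overlapping views
    # then cancel out sensor noise instead of just piling up points.
    keys = np.floor(merged / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, merged)
    counts = np.bincount(inv).astype(float)
    return sums / counts[:, None]
```

The per-voxel averaging is also where the noise reduction listed below comes from: each surface point is measured several times from different positions, and the measurements are averaged.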

  • 2D cameras (monochrome or color) and 3D depth cameras from different manufacturers can be used

  • Due to a uniform interface, sensors from different manufacturers can be used in the same way

  • Image acquisition is integrated directly into the DRS program flow; no additional external image-processing program is required

  • Direct support of camera calibration by DRS

  • Different views of the same object are assembled into a coherent, seamless 3D image

  • The images are taken from the moving robot arm with a sensor mounted to the arm itself

  • One moving sensor can replace several static sensors with their fixed recording areas

  • This allows images to be captured from positions where a fixed-mounted sensor would obstruct the robot arm's movements

  • Fusion improves data quality through noise reduction

  • Combined calibration of the robot arm and an arm-mounted sensor (hand-eye calibration) is supported, as sketched below
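
DRS's own calibration routine is not public, but the combined arm-sensor calibration mentioned above is classically solved as a hand-eye calibration. The sketch below shows the standard approach with OpenCV's cv2.calibrateHandEye; the input poses are assumed to come from the robot controller and from a checkerboard detection, and all variable names are illustrative.

```python
import cv2
import numpy as np

# One entry per measurement pose: flange poses reported by the robot
# controller, and target poses observed by the camera (e.g. from
# cv2.solvePnP on a checkerboard). The lists must be filled from
# roughly 10-20 distinct robot poses before calling the solver.
R_gripper2base, t_gripper2base = [], []   # 3x3 rotations, 3x1 translations
R_target2cam, t_target2cam = [], []

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI)

# 4x4 camera-to-flange transform, used afterwards to place every scan
# taken from the moving arm into the robot base frame.
T_cam2gripper = np.eye(4)
T_cam2gripper[:3, :3] = R_cam2gripper
T_cam2gripper[:3, 3] = np.asarray(t_cam2gripper).ravel()
```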

AI-Supported Object Detection and Localization

Trained artificial-intelligence methods recognize the objects in the 2D image and pre-segment them. The positions determined in this way are then used to initialize the detection of the objects in the 3D depth images. The exact localization of each object is determined by comparing the 3D image at these positions with a model of the object. This comparison is repeated iteratively, so the detected object position is continuously improved. The initial start positions supplied by the AI methods accelerate this recognition step and can lead to better-quality results.
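
The iterative model-to-scan comparison described here behaves like ICP (iterative closest point); whether DRS uses ICP specifically is not stated. A minimal sketch with Open3D, assuming the model and scene are available as point clouds and the AI detection provides a rough initial pose:

```python
import open3d as o3d

def refine_pose(model_pcd, scene_pcd, T_init, max_dist=0.01):
    """Refine a rough object pose by matching the model to the scan.

    T_init is the 4x4 start pose, e.g. the 2D detection back-projected
    into the depth data with a coarse orientation guess; ICP then
    iteratively pulls the model onto the measured 3D points.
    """
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scene_pcd, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
    # fitness (overlap ratio) indicates how trustworthy the match is
    return result.transformation, result.fitness
```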

  • Prepared models allow object detection and localization through precise matching with sensor data

  • The models are matched against the 3D scan data in an iterative process that continuously refines the detected object position

  • Heuristic recognition: even without an existing model, grip or suction positions can be generated for unknown objects purely by recognizing free surfaces (see the sketch after this list)

  • Object recognition can be supported by trained artificial intelligence methods

  • Objects can already be detected and pre-segmented in 2D camera images

  • These rough positions can serve as starting points for classical object detection, speeding it up and improving result quality
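
In its simplest form, the free-surface heuristic for unknown objects mentioned in this list could look like the following sketch: scan the depth image for locally flat patches and propose the highest ones as suction points. This illustrates the idea only, not DRS's actual heuristic; all names and thresholds are assumptions.

```python
import numpy as np

def suction_candidates(depth, patch=15, flat_tol=0.002):
    """Model-free pick points: locally flat patches in the depth image.

    Scans the image in patch-sized windows; a window whose depth range
    is below flat_tol is treated as a free, flat surface, and its
    center becomes a suction candidate. Candidates are sorted so the
    topmost (least occluded) surfaces come first.
    """
    h, w = depth.shape
    candidates = []
    for v in range(0, h - patch, patch):
        for u in range(0, w - patch, patch):
            window = depth[v:v + patch, u:u + patch]
            if np.any(window <= 0):                # invalid depth pixels
                continue
            if window.max() - window.min() < flat_tol:
                candidates.append((u + patch // 2, v + patch // 2,
                                   float(window.mean())))
    return sorted(candidates, key=lambda c: c[2])  # nearest surface first
```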

Motion and Grasp Planning

With the object positions and gripping points now known, a gripping movement can be planned. The tool in use and the objects' surroundings suggest arm positions from which an object can be picked up safely and without collisions. Objects that are easy to reach are grasped first; objects partially hidden by others are thus exposed over time and become more and more accessible.

The robot arm movements are checked in advance for possible collisions using the environmental information generated by the sensor system. By using full volume models of the robot and obstacles, safe and collision-free robot motion can be planned.

This trajectory planning takes place entirely in DRS, using DRS's own kinematic models of the robots; a motion can therefore be planned for robot arms of all manufacturers, regardless of robot type and of the controller's native planning capabilities.
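
The planner itself is not public; a conservative volume-based collision check of the kind described might look like this sketch, where the arm's volume is bounded by spheres derived from the kinematic model and the environment is the fused scan. All names are illustrative.

```python
import numpy as np

def path_is_collision_free(waypoints, arm_spheres, obstacle_points,
                           margin=0.02):
    """Conservative check of a planned joint-space path.

    arm_spheres(q) returns (centers, radii) of spheres bounding the
    arm's volume at joint configuration q, derived from the kinematic
    model; obstacle_points is the fused environment scan (N x 3) in
    the base frame.
    """
    for q in waypoints:
        centers, radii = arm_spheres(q)
        for c, r in zip(centers, radii):
            dist = np.linalg.norm(obstacle_points - c, axis=1)
            if np.any(dist < r + margin):
                return False        # arm volume would touch the scene
    return True
```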

  • Support for various tools: grippers, suction cups, or magnets, for example

  • The robot trajectories are completely planned in DRS and transferred to the robot arm for execution in small sections

  • This makes all planning and movement capabilities of DRS available on robot arm types regardless of their manufacturer

  • Dependence on manufacturer-specific properties of the robot arms is reduced

  • Transferring applications between different robot arms, even from different manufacturers, is thus greatly simplified

  • Selecting the most favorable grasp from a variety of options allows objects to be handled safely in many different positions and situations (a selection sketch follows this list)

  • Motion paths can be executed on recognized surfaces, so machining them no longer requires precise object positioning

  • For collision detection, the extent of the robot arm itself and of detected objects in its vicinity is taken into account, and a safe, collision-free path is calculated

  • This verification is based on volume models of the arm and the environment, which can be continuously refreshed from live data

  • Due to the comprehensive sensor data of the objects and the environment, collisions between the arm and the environment can be detected in advance and avoided in planning
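
The grasp selection mentioned in this list reduces, in sketch form, to filtering candidates for collision-free reachability and ranking the rest by a heuristic score. The types and names below are assumptions for illustration:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Grasp:
    pose: np.ndarray      # 4x4 tool pose at the object
    approach_path: list   # joint-space waypoints leading to the pose
    score: float          # heuristic quality, e.g. vertical approach,
                          # clearance to bin walls, object exposure

def select_grasp(grasps, path_ok):
    """Best collision-free grasp; path_ok is a predicate such as the
    volume check sketched earlier, with arm and scene models bound in."""
    feasible = [g for g in grasps if path_ok(g.approach_path)]
    return max(feasible, key=lambda g: g.score, default=None)
```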

Execution on the Robot

The individually calculated arm movement is transmitted from the system directly to the robot arm for execution. The planned motion is sent piece by piece to the real robot arm and executed in a monitored manner. Differences in the capabilities of arms from different manufacturers are compensated for; the end user does not have to create manufacturer-specific programs.

Once an object has been picked up from the chaotic crate, it can be fed into assembly or machining processes, for example, in its now precisely known position, or it can be set down again in an orderly manner.
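
The vendor-neutral, piecewise transfer could be organized as below; `robot` stands for a hypothetical adapter that hides the manufacturer-specific streaming interface, and all method names are invented for illustration.

```python
def stream_trajectory(robot, waypoints, chunk=20):
    """Execute a pre-planned joint-space trajectory piece by piece.

    `robot` stands for a vendor adapter exposing execute(segment) and
    wait_done(); behind it sits the manufacturer-specific streaming
    interface, so the application code stays the same for every arm.
    """
    for i in range(0, len(waypoints), chunk):
        segment = waypoints[i:i + chunk]
        robot.execute(segment)   # transfer one small section
        robot.wait_done()        # monitored execution before the next
```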

  • The individually calculated arm movement is transmitted from the system directly to the robot arm for execution

  • Fine position commanding allows the robot arm to precisely execute the pre-planned paths and trajectories

  • Differences in the capabilities of arms from different manufacturers are compensated for without the end user having to create vendor-specific programs
