Call 2 Monitoring: ActReMa


In the ActReMa experiment, a robot delivers parts to a processing station. The robot is equipped with a 3D scanning sensor; it must recognize objects in a box and grasp them. The experiment partners started their work according to plan. For two scenarios, a mobile robot and a stationary robot, the sensor placement has been...
[Last edited Jul 28, 2011]
The experiment partners continued work on object recognition, grasp selection, and motion planning for picking objects out of a transport box. The primitive-based object recognition has been accelerated and made more robust. Now, 2D contours are also considered for recognition. The best visible object is selected. Grasps are...
[Last edited Sep 27, 2011]
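The entry above mentions selecting the best visible object before grasping. The report does not give the selection criterion, but the idea can be sketched as a simple score over detections; the names, fields, and the confidence-times-visibility score below are illustrative assumptions, not the experiment's actual method.

```python
# Hypothetical sketch: pick the detection that is both confidently
# recognized and well visible, assuming each detection carries a
# recognition confidence and an estimated unoccluded surface fraction.
from dataclasses import dataclass

@dataclass
class Detection:
    object_id: str
    confidence: float        # recognition confidence in [0, 1]
    visible_fraction: float  # fraction of the model surface seen in the scan

def best_visible(detections):
    """Return the detection with the highest combined score."""
    return max(detections, key=lambda d: d.confidence * d.visible_fraction)

dets = [
    Detection("part_a", 0.9, 0.4),
    Detection("part_b", 0.8, 0.9),
    Detection("part_c", 0.7, 0.6),
]
print(best_visible(dets).object_id)  # part_b
```

A multiplicative score like this prefers objects that are both reliably recognized and largely unoccluded, which matters when the next step is grasp selection on the visible surface.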
The Metronom 3D sensor has been mounted on the mobile robot Dynamaid. Its measurements are more precise and less noisy than Kinect measurements. The experiment partners started work on the learning of object models from examples and active object recognition. For the learning of object models, scans from different views are...
[Last edited Nov 27, 2011]
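Learning object models from scans taken from different views, as described above, starts by bringing all scans into a common model frame. A minimal sketch, under the assumption that each scan comes with an estimated sensor pose (shown here in 2D for brevity; the 3D case is analogous):

```python
# Minimal sketch: transform each scan's points by its estimated pose and
# merge them into one point set, the raw input for model learning.
# Poses are planar (x, y, yaw) for brevity; 3D works the same way.
import math

def transform(points, pose):
    """Apply a planar rigid-body pose (x, y, yaw) to 2D points."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * px - s * py + x, s * px + c * py + y) for px, py in points]

def merge_scans(scans):
    """scans: list of (points, pose) pairs. Returns points in the model frame."""
    model = []
    for points, pose in scans:
        model.extend(transform(points, pose))
    return model
```

In practice the initial poses are only approximate, which is why the subsequent entry discusses making the initial scan alignment more robust.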
For learning object models, we made initial scan alignment more robust by adapting the point-pair feature object detection and pose estimation method of Papazov et al. (ACCV 2010): in the RANSAC step, we only allow transformations that are close to the expected ones. For active object perception, we extended the simulation...
[Last edited Mar 28, 2012]
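The constraint described above, discarding RANSAC pose hypotheses that stray too far from the expected transformation, can be sketched as a plausibility filter applied before scoring. This is not the experiment's code: the pose representation (x, y, z, yaw), thresholds, and function names are all illustrative assumptions.

```python
# Hypothetical sketch: reject pose hypotheses far from the expected one
# before they are scored in the RANSAC loop.
import math

def is_plausible(candidate, expected, max_trans=0.05, max_rot=math.radians(20)):
    """candidate/expected: (x, y, z, yaw) poses. True if the candidate is
    within the allowed translation and rotation distance of the expected pose."""
    dx, dy, dz = (candidate[i] - expected[i] for i in range(3))
    trans_err = math.sqrt(dx * dx + dy * dy + dz * dz)
    # wrap the yaw difference into (-pi, pi] before taking its magnitude
    rot_err = abs((candidate[3] - expected[3] + math.pi) % (2 * math.pi) - math.pi)
    return trans_err <= max_trans and rot_err <= max_rot

def ransac_filtered(hypotheses, expected, score):
    """Score only plausible hypotheses; return the best one, or None."""
    plausible = [h for h in hypotheses if is_plausible(h, expected)]
    return max(plausible, key=score, default=None)
```

Filtering before scoring both speeds up the loop and prevents a high-scoring but physically implausible alignment from winning, which is what makes the initial scan alignment more robust.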
We conducted experiments in which the robot has to actively perceive partially occluded objects; these showed that considering object detection results when planning the next view can help find more objects. We integrated the components developed in ActReMa into a bin-picking application, performed by the cognitive service...
[Last edited Jun 15, 2012]
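Planning the next view while considering previous detection results, as reported above, amounts to preferring views expected to reveal objects not yet confirmed. A minimal sketch of that idea; the view names, hypothesis sets, and the novelty-count score are illustrative assumptions rather than the experiment's actual planner.

```python
# Hypothetical sketch: score each candidate view by how many
# not-yet-confirmed object hypotheses it is expected to reveal.
def next_best_view(candidate_views, confirmed):
    """candidate_views: dict mapping view id -> set of object hypotheses
    expected to be visible from that view. confirmed: objects already
    detected. Returns the view with the largest expected novelty."""
    def gain(view_id):
        return len(candidate_views[view_id] - confirmed)
    return max(candidate_views, key=gain)

views = {
    "left":  {"obj1", "obj2"},
    "top":   {"obj2", "obj3", "obj4"},
    "right": {"obj1"},
}
print(next_best_view(views, confirmed={"obj1", "obj2"}))  # top
```

Subtracting the confirmed set is what ties the planner to the detection results: a view that only re-observes known objects scores zero, so the sensor is steered toward the occluded, still-unexplained parts of the box.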