Public Summary Month 5/2012

We conducted experiments in which the robot actively perceives partially occluded objects. The results showed that taking object detection results into account when planning the next view helps to find more objects.
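The idea of considering detection results during view planning can be sketched as follows. This is an illustrative toy heuristic, not the project's actual planner: candidate views are scored by the uncertainty of the regions they cover, with a penalty for re-observing objects that were already detected. The data layout (`visible_rois`, `visible_ids`, `uncertainty`) is assumed for the sketch.

```python
# Hypothetical sketch of next-best-view selection that takes previous
# object detections into account. Names and the scoring heuristic are
# illustrative assumptions, not the ActReMa implementation.

def view_score(view, detections, redundancy_weight=0.5):
    """Reward views covering uncertain regions of interest; penalize
    views that mostly re-observe already detected objects."""
    gain = sum(roi["uncertainty"] for roi in view["visible_rois"])
    redundancy = sum(1.0 for d in detections if d["id"] in view["visible_ids"])
    return gain - redundancy_weight * redundancy

def plan_next_best_view(candidate_views, detections):
    """Pick the candidate view with the highest score."""
    return max(candidate_views, key=lambda v: view_score(v, detections))
```

With this scoring, a view that looks at an unexplored, uncertain region is preferred over one that would only re-observe a known object.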

We integrated the components developed in ActReMa into a bin-picking application, performed by the cognitive service robot Cosero at UBO. The robot navigates to the transport box, aligns with it, acquires 3D scans, recognizes objects, plans grasps, executes a grasp, navigates to a processing station, and places the object there.
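The sequence of behaviors above can be viewed as a linear pipeline with failure handling at each stage. The sketch below is a minimal illustration of that structure; the stage names mirror the description, but the code is an assumption, not the robot's actual control architecture.

```python
# Hypothetical sketch of the bin-picking behavior sequence as a linear
# pipeline. Stage names follow the summary text; the executor interface
# is an illustrative assumption.

PIPELINE = [
    "navigate_to_box",
    "align_to_box",
    "acquire_3d_scans",
    "recognize_objects",
    "plan_grasps",
    "execute_grasp",
    "navigate_to_station",
    "place_object",
]

def run_pipeline(stages, execute):
    """Run each stage in order with the given executor callable.
    Return the name of the first failing stage, or None on success."""
    for stage in stages:
        if not execute(stage):
            return stage  # caller can retry or recover from here
    return None
```

A real system would attach recovery behaviors (e.g. re-scanning after a failed grasp) instead of simply aborting, but the linear order is the same.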


Public Summary Month 3/2012

For learning object models, we made the initial scan alignment more robust by adapting the point-pair-feature object detection and pose estimation method of Papazov et al. (ACCV 2010): in the RANSAC step, we only admit transformations that are close to the expected ones.
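The admissibility check can be sketched as follows. This is a hedged illustration of the general idea, not the actual code: a sampled rigid transform is rejected if its translation or rotation angle deviates too far from the expected alignment. The thresholds and the assumption that the expected rotation is the identity are illustrative.

```python
import math

# Hypothetical sketch of constraining RANSAC pose hypotheses to those
# close to an expected transform. Thresholds are illustrative.

def rotation_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix, via its trace."""
    trace = R[0][0] + R[1][1] + R[2][2]
    return math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))

def is_admissible(t, R, expected_t,
                  max_trans=0.05, max_rot=math.radians(15.0)):
    """Accept a hypothesis (t, R) only if it is close to the expected
    transform. Here the expected rotation is assumed to be the identity,
    so R itself is the rotational deviation."""
    return (math.dist(t, expected_t) <= max_trans
            and rotation_angle(R) <= max_rot)
```

Inside the RANSAC loop, hypotheses failing `is_admissible` would be discarded before any costly verification against the scan data.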


For active object perception, we extended the simulation to cover the complete experimental setup, adapted the object recognition method to identify regions of interest, and integrated next-best-view (NBV) planning.
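One simple way to derive regions of interest from recognition results is to keep the detections whose confidence is low, since those are likely occluded or ambiguous and worth another look. The sketch below assumes a detection record with `center` and `confidence` fields; it is an illustration, not the adapted method itself.

```python
# Hypothetical sketch: extract regions of interest for view planning
# from object recognition results. The confidence threshold and record
# layout are illustrative assumptions.

def regions_of_interest(detections, confidence_threshold=0.7):
    """Return centers of uncertain detections, most uncertain first."""
    uncertain = [d for d in detections
                 if d["confidence"] < confidence_threshold]
    uncertain.sort(key=lambda d: d["confidence"])
    return [d["center"] for d in uncertain]
```

The resulting list can then be fed to the NBV planner, which selects the view covering the most (or most uncertain) regions.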