For future robots to accomplish increasingly complex manipulation tasks, they must be action-aware and capable of naive physics reasoning: they need a model of how the effects of their actions depend on the way those actions are executed. For example, a robot making pancakes should understand that the outcome of pouring pancake mix onto the oven depends on the position of the container and the way it is held, or that sliding a spatula under a pancake may or may not damage it depending on the angle and dynamics of the push.
The setup for data collection is depicted in the figure above. The system is equipped with a sensor infrastructure that allows interaction with the virtual world by tracking the player's hand motion and mapping it onto the robotic hand in the game. We tested two different tracking setups. The first uses a magnetic-sensor-based controller, which returns the position and orientation of the hand, together with a data glove for the finger-joint positions. The second uses two 3D camera sensors mounted on a frame, which yield the pose and skeleton of the tracked hand.
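Both setups ultimately produce the same kind of per-frame sample: a 6-DoF wrist pose plus finger-joint information. The sketch below illustrates how such a sample could be represented and mapped onto the in-game hand; the class, field layout, joint count, and function names are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HandPoseSample:
    """One tracked-hand sample (hypothetical layout, not the actual schema)."""
    timestamp: float                   # seconds since recording start
    position: Tuple[float, float, float]       # wrist position in world coordinates (m)
    orientation: Tuple[float, float, float, float]  # unit quaternion (w, x, y, z)
    finger_joints: List[float] = field(default_factory=list)  # joint angles (rad)

def map_to_game_hand(sample: HandPoseSample) -> dict:
    """Map a tracked sample onto the simulated robotic hand.

    A minimal placeholder: a real mapping would retarget the joint
    angles to the game hand's kinematic model.
    """
    return {
        "wrist_pose": (sample.position, sample.orientation),
        "joint_targets": list(sample.finger_joints),
    }

# Example usage with made-up values:
sample = HandPoseSample(
    timestamp=0.033,
    position=(0.42, 0.10, 0.95),
    orientation=(1.0, 0.0, 0.0, 0.0),
    finger_joints=[0.1] * 20,
)
print(map_to_game_hand(sample))
```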
This dataset contains the following data:
Partly supported by the EU FP7 RoboHow project.