Recently, we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from the human ability to autonomously and reliably perform a wide range of everyday tasks in a wide range of contexts. In other words, they are far from mastering everyday activities. Making the transition from performing everyday activities to mastering them requires us to equip robots with comprehensive knowledge bases and reasoning mechanisms. Robots that master everyday activities have to carry out natural language instructions such as "flip the pancake" or "push the spatula under the pancake". To perform such tasks adequately, robots must, for instance, be able to infer the appropriate tool to use, how to grasp it, and how to operate it. In particular, they must not push the whole spatula under the pancake; that is, they must not interpret instructions literally but rather recover their intended meaning.

In this course, we will present recent research that investigates how such knowledge can be collected and provided using a cloud-based knowledge service. We propose openEASE, a remote knowledge representation and processing service that provides its users with unprecedented access to the knowledge of leading-edge autonomous robotic agents. It also provides the representational infrastructure to make heterogeneous experience data from robot and human manipulation episodes semantically accessible, as well as a suite of software tools that enable researchers and robots to interpret, analyze, visualize, and learn from this experience data. Using openEASE, users can retrieve the memorized experiences of manipulation episodes and ask queries about what the robot saw, reasoned, and did, as well as how the robot did it, why, and what effects it caused.
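To make the idea of querying memorized manipulation episodes concrete, the following is a minimal sketch in Python. It does not use the actual openEASE interface; the `Event` schema, the `EpisodeLog` class, and all field names are hypothetical stand-ins for a robot's episodic memory, chosen only to illustrate the kind of "what did the robot do, and why" queries the service supports.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One logged event in a manipulation episode (hypothetical schema)."""
    kind: str        # e.g. "perceive", "reason", "act"
    task: str        # e.g. "push-spatula-under-pancake"
    outcome: str     # e.g. "succeeded"
    cause: str = ""  # why the action was chosen / what effect it had

class EpisodeLog:
    """Toy in-memory stand-in for a robot's episodic memory."""

    def __init__(self):
        self.events = []

    def record(self, event: Event):
        self.events.append(event)

    def query(self, **filters):
        """Return all events whose attributes match every given filter."""
        return [e for e in self.events
                if all(getattr(e, k) == v for k, v in filters.items())]

# Record a tiny episode of the pancake-flipping scenario.
log = EpisodeLog()
log.record(Event("perceive", "detect-pancake", "succeeded"))
log.record(Event("act", "push-spatula-under-pancake", "succeeded",
                 cause="lift pancake without tearing it"))

# "What did the robot do?" -> filter the log for action events.
actions = log.query(kind="act")
```

In a real system the log would of course be far richer (timestamps, poses, sensor data, belief states) and the query language far more expressive; the point here is only that semantically annotated experience data can be filtered by what was perceived, reasoned, and done.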