Ideas Lab

The Microsoft Kinect RGB-depth camera maps the floor relative to the robot's position and orientation. The user moves the mouse cursor by moving their head and, once the cursor is over the desired location on the floor, issues a cognitive command. The robot then rotates towards that location and drives to it.
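As a rough illustration of the navigation step, here is a minimal Python sketch (not the project's actual code) that converts a selected floor point and the robot's estimated pose into a turn angle and drive distance. The common floor-plane coordinate frame, the units, and the function name are assumptions.

```python
import numpy as np

def motion_command(robot_xy, robot_heading, target_xy):
    """Turn angle (radians) and drive distance (metres) from the robot's
    current pose to a floor point selected by the user.

    All coordinates are assumed to lie in a common floor-plane frame
    derived from the Kinect's depth data."""
    delta = np.asarray(target_xy, dtype=float) - np.asarray(robot_xy, dtype=float)
    distance = float(np.hypot(delta[0], delta[1]))
    bearing = np.arctan2(delta[1], delta[0])
    # Normalise the turn into (-pi, pi] so the robot takes the shorter rotation.
    turn = (bearing - robot_heading + np.pi) % (2 * np.pi) - np.pi
    return turn, distance

# Robot at the origin facing along +x, target one metre to its left:
# roughly a 90-degree turn followed by a one-metre drive.
print(motion_command((0.0, 0.0), 0.0, (0.0, 1.0)))
```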
The user interface: on the left is the depth camera output, with the relative height of the robot's marker displayed in pink. On the right is the RGB output of the Kinect, used to determine the robot's 3D orientation relative to the Kinect camera.
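The sketch below shows one way the marker seen in the RGB image could yield the robot's orientation, using OpenCV's solvePnP on the marker's four corners. The corner pixel values, marker size, and camera intrinsics here are placeholders, not values from the project.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker edge length in metres (assumed)

# The marker's four corners in its own frame (z = 0 plane), clockwise from top-left.
object_corners = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

# Placeholder pixel positions of the detected corners, in the same order.
image_corners = np.array([[312, 204], [398, 206], [396, 291], [310, 288]], dtype=np.float32)

# Placeholder intrinsics; a real setup would use the Kinect RGB camera's calibration.
camera_matrix = np.array([[525.0,   0.0, 319.5],
                          [  0.0, 525.0, 239.5],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)              # marker orientation as a rotation matrix
heading = np.arctan2(R[2, 0], R[0, 0])  # marker x-axis projected onto the camera's x-z plane
print("translation (m):", tvec.ravel(), "heading (rad):", heading)
```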
The setup: the camera is pointed at the robot so that it can see both the marker mounted on top of the robot and the floor beneath it.
The robot: a simple Lego NXT with a 2D black-and-white marker mounted on top.
Cognitive input: cognitive input is read from the Emotiv EPOC control panel. Once a trained cognitive command exceeds its input threshold, movement commands are sent to the robot.
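A minimal sketch of that thresholding step is shown below. The threshold value, the polling rate, and the read_cognitive_power and send_to_robot helpers are hypothetical stand-ins for the Emotiv SDK and the robot link, not actual project code.

```python
import time

PUSH_THRESHOLD = 0.6  # assumed activation level (0.0 - 1.0) for the trained command
COOLDOWN_S = 2.0      # ignore further triggers briefly after a command fires

def control_loop(read_cognitive_power, send_to_robot):
    """Poll the strength of a trained cognitive command and trigger the robot
    whenever it crosses the threshold."""
    last_fire = 0.0
    while True:
        power = read_cognitive_power("push")        # hypothetical: current command strength
        now = time.monotonic()
        if power >= PUSH_THRESHOLD and now - last_fire > COOLDOWN_S:
            send_to_robot("go_to_selected_target")  # hypothetical: issue movement command
            last_fire = now
        time.sleep(0.05)                            # poll at roughly 20 Hz
```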
Uses: the uses for this technology are numerous. Cameras could be installed in the home of a patient with limited mobility, who could then look at a computer screen and send the robot to retrieve items around the house, open the door, bring them food, and so on.
