Researchers from the Ishikawa Oku Laboratory at the University of Tokyo, Japan, have developed a concept called “Invoked Computing”, which replaces hardware such as keyboards and monitors with projections onto everyday objects.
A ubiquitous computing environment augments these objects with artificial affordances that people suggest through miming. For example, volume can be adjusted on a pizza box by touching a projected bar and sliding a finger up or down, and a banana can serve as a working handset simply by bringing it to one's ear. Directional microphones and parametric speakers hidden in the room would provide the audio input and output for these interactions. The aim of the “Invoked Computing” project is:
To develop a multi-modal AR system able to turn everyday objects into computer interfaces / communication devices on the spot. To “invoke” an application, the user just needs to mimic a specific scenario. The system will try to recognize the suggested affordance and instantiate the represented function through AR techniques. We are interested here in developing a multi-modal AR system able to augment objects with video as well as sound using this interaction paradigm.
This video demonstrates the technology in action: