Garnering significant attention at this year’s Las Vegas Consumer Electronics Show (CES), the Intel Perceptual Computing initiative is showcasing how the PCs and laptops of the future will be directed by the way we move, what we say, our facial expressions, our hand gestures and more.
Where devices such as Leap Motion, GestIC, Ubi and Tobii Rex demonstrate how motion, voice and eye tracking can individually control computing functions, an integrated approach (which Intel is calling “perceptual computing”) draws on a wide range of human senses to make human-computer interaction more like instinctive human-to-human communication. The Samsung Galaxy Camera, for example, combines touchscreen, gesture control and voice commands in a way that is, arguably, natural and intuitive computing.
As part of an open innovation strategy, the Intel Perceptual Computing Challenge invites developers to create applications for Intel Core Ultrabook devices and PCs that control games and productivity tools with unique combinations of speech recognition, gesture control, facial analysis and 2D/3D object tracking. The point is not to push the mouse and keyboard aside entirely, but to use perceptual computing to make tasks easier to accomplish and more enjoyable.
Achin Bhowmik, Intel’s Director of Perceptual Computing, explains more about the Intel Perceptual Computing vision here:
Two of Intel’s Minority Report-esque videos of the demonstrations being showcased at CES can be viewed below:
As Intel iQ Editor-in-Chief Bryan Rhoads points out in our CES video interview, the technologies of perceptual computing are already available; it just takes the best of our creative capabilities to put them all together.