As I mentioned in my last post, natural user interfaces (NUI) are set to be a revolutionary trend in technology over the coming years. Computers are gaining more of the senses we have as humans: the ability to see, hear, and sense things like movement and direction. Discussion of NUI often focuses on touch and multi-touch, and while we believe there is much more to this trend than these sensory inputs, we're also big proponents of touch. The first prototype of Microsoft Surface was fairly rudimentary, but even back then we could see the potential, not only for touch and multi-touch, but also for multi-user multi-touch and interaction with physical objects. This is what led Surface to be adopted in a variety of venues, including banks, hotels, casinos, and mobile phone stores: multiple users can interact at the same time and use objects like phones, bank cards, and drink glasses as part of the experience. Few, if any, other touch devices today offer that versatility.

Fast-forward a few years and Surface 2.0 brings even greater potential. The unit itself has shrunk, in a good way. Announced at CES 2011, the Samsung SUR40 for Microsoft Surface is a 40” high-definition LCD panel that is only 4” thin. Despite this, it retains the ability not only to support multi-user multi-touch, but also to “see” without the need for cameras: with PixelSense, the individual pixels within the display detect what is touching the screen. In fact, you have to see it to believe it, so check out the keynote demo from CES and this announcement video.
 
During CES we’ll have a session dedicated to Surface 2.0, talking about where it came from and where it’s going, and, most importantly, how you can get involved in building applications for the SUR40. The good news is that you can get started right away and don’t need to carry a 40” panel home from Las Vegas. We hope to see you there as we talk about and show why we’re just scratching the surface of NUI today.