Welcome to part two of my three-part series on how WPC gave us a peek into the future. If you missed it, part one is posted here:

And, when it’s up, you’ll be able to get to part three here:
  • The True Benefits of Computing
In the meantime, this section is about how computer interaction is evolving!
 
Remember Star Trek? I think one of the best scenes in the entire history of the franchise is this one.
 
I love it because it showed exactly how far behind the "modern" computers of that era were compared to what we watched in the Star Trek shows. Now, if you know a little bit about computers, you know that a few simple key presses couldn’t get one to do what that computer seemed to do (invent an entirely new kind of material), but that oversight aside, it really highlighted how, in the future, people will simply talk to computers. While I don’t think we are quite there yet (although the Windows 8 preview I received had "Narrator" built right in), the Microsoft Labs demos showed a number of things I found very interesting.
 
Perhaps the most interesting one was when they took an otherwise normal Kinect and mounted it above a flat surface (it could be a large TV or even just a wall being projected onto). From that vantage point, the Kinect was able to watch the movements of your arm and turn that surface into a virtual touch screen. That opened up a number of possibilities for me, because it looked like you didn’t actually have to touch the screen to make it work. An obvious use would be far more "reasonably priced" interactive displays in places like the mall or the airport. Tie this together with some decent voice recognition ("Show me the Food Court" or "Highlight all the shoe stores") and you get a truly interactive experience. But there are other, less obvious, examples: in an operating room, a doctor who needs to manipulate a medical scan (perhaps rotate it or zoom in) could do it all by gesturing near the screen, without actually touching it, keeping his hands sterile for the rest of the surgery. Or a teacher could manipulate the lesson on the classroom screen without ever touching it, making the material more fun and engaging for the kids.
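I don’t know exactly how the Labs team built it, but the basic idea is simple enough to sketch: calibrate the depth camera against the empty surface, then treat anything hovering within a centimeter or so of that surface as a "touch." Here’s a rough illustration in Python; all of the names and thresholds here are my own assumptions, not anything from the actual demo.

```python
import numpy as np

def calibrate_surface(depth_frames):
    """Average several empty-scene depth frames to model the flat surface."""
    return np.mean(np.stack(depth_frames), axis=0)

def detect_touch(depth_frame, surface, touch_mm=15, screen=(1920, 1080)):
    """Return (x, y) screen coordinates of a 'touch', or None if nothing is near the surface."""
    diff = surface - depth_frame             # positive where something sits above the surface
    mask = (diff > 5) & (diff < touch_mm)    # close to the surface, but above sensor noise
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    h, w = depth_frame.shape
    # Map the centroid of the touching blob into screen space.
    x = int(xs.mean() / w * screen[0])
    y = int(ys.mean() / h * screen[1])
    return x, y
```

In other words, the surface itself stays completely dumb; all the "touch" logic lives in comparing each new depth frame against the calibrated background.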
 
Another cool thing they showed with the Kinect was the ability to use it as an incredibly low-cost laser scanner. They actually showed it working live on stage, where they built a 3D model of Jon Roskill sitting on a stool. That model could then be sent to a 3D printer (like those from RepRap or Makerbot) to make a "near-instantaneous" (maybe five or 10 minutes later) miniature of that person. Aside from miniature representations of people, this approach could also be used to make small physical objects, like, say, a small plastic part from your child’s favorite toy. As a parent, I think this unfortunately happens to all of us at some point: your kid "over-plays" with the best toy in the world and something breaks. In the past, that meant several minutes working with Krazy Glue (trying not to glue your fingers together) and then waiting several hours for it to dry (just to be safe), only to have it break again because it had been weakened. How much cooler would it be to put the pieces on a table, scan them from a few different angles, let some clever software figure out where the break is and virtually repair the pieces, and then send that file to a small 3D printer to make a whole new part right there? The entire thing might take just 20 or 30 minutes, and then it would be as good as new.
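To give a feel for the scan-to-print part of that pipeline, here’s a minimal sketch using the open-source Open3D library, assuming you’ve already captured a point cloud from the Kinect and saved it to a file. The filenames and parameters are placeholders I made up; the keynote didn’t show any of the actual code.

```python
import open3d as o3d

# Hypothetical pipeline: Kinect point cloud in, printable mesh out.
# Assumes "scan.ply" was captured from the depth camera by other code.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.005)   # thin out noisy points onto a 5 mm grid
pcd.estimate_normals()                          # Poisson reconstruction needs normals

# Turn the point cloud into a watertight surface suitable for slicing.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
mesh = mesh.filter_smooth_simple(number_of_iterations=2)

# STL is what most RepRap/MakerBot-style slicers expect.
mesh.compute_triangle_normals()
o3d.io.write_triangle_mesh("bust.stl", mesh)
```

From there, the STL goes through whatever slicer your printer uses, and a few minutes later you have your miniature.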
 
And the last nifty thing they showed around the future of interactivity was a remote collaboration tool. It was an overhead lamp with a built-in projector and camera. They did the demo from across the arena, but it could just as easily have been from across the planet. As person A put a drawing on his table, person B was able to see it on hers (projected onto a blank piece of paper). Person B could then look at it and start drawing on it, showing how the design could be modified. As person B drew on her table, person A could see what she was drawing in real time on his. At the end of the collaboration, everything could be captured digitally and combined into one file. But beyond just playing a friendly game of tic-tac-toe (which was part of the demo), they could do the same thing with solid objects. Person A took that same bust of Jon Roskill and put it into the lamp’s field of view. Almost like magic, it not only showed up on person B’s table, but she could draw on the bust (coloring in the eyes, etc.) and the projector would then make those same modifications appear on person A’s bust. If both sets of collaborators had a 3D printer, they could start making revisions of prototypes in near real time (allowing for coffee breaks while the 3D printers print). If you’ve ever had to work remotely on a project with a team, I’m sure you can see how useful this could be; see the demonstration for yourself in this video of the full keynote. (Jump forward to about 1 hour and 56 minutes to see what I mentioned.)
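Strip away the projector and camera hardware and the mirroring loop itself is pretty mundane: each station sends the strokes it captures to the other side, and whatever arrives gets projected locally. Here’s a toy sketch of that idea; the peer address, the wire format, and the stand-in capture/project functions are all my own assumptions, not how the demo actually worked.

```python
import json
import socket
import threading

PEER = ("collab-peer.example.com", 9100)   # hypothetical address of the other station

def send_stroke(sock, points):
    """Ship one captured pen stroke (a list of (x, y) points) to the peer."""
    sock.sendall((json.dumps({"stroke": points}) + "\n").encode())

def receive_and_project(sock, project):
    """Render every stroke the peer sends onto the local surface."""
    buffer = ""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buffer += data.decode()
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            project(json.loads(line)["stroke"])   # the projector draws the incoming stroke

# Example wiring: connect, mirror incoming strokes in the background, keep capturing.
sock = socket.create_connection(PEER)
threading.Thread(target=receive_and_project,
                 args=(sock, print),               # 'print' stands in for the projector
                 daemon=True).start()
send_stroke(sock, [(10, 10), (40, 40), (70, 10)])  # a captured "V" stroke
```

The hard part of the real product is obviously the camera tracking and projection mapping, not the networking, but the sketch shows why the round trip can feel instantaneous.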
 
Also, if you want more info, my friend Eric Ligman posted a full blog of the day two keynote.
 
So that’s it. These were some of my observations on how Microsoft is enabling new ways to interact with computers in the near future. What do you think? Did any of these seem particularly interesting, or are there other things you’re looking forward to? Let me know in the comments below.
 
– Eric