What they did was process information from a human perspective. Imagine you wake up on a couch in a room that is not familiar to you. What do you do? You scan the environment and your surroundings. Even if you are not aware of it, your mind is mapping the space (these processes are cognitive, stored in your subconscious, and used on a daily basis: you are in the corner of the room, there is a wardrobe to your left, a chair to your right, and the ceiling is low; if you stand up, you need to avoid these things to keep from getting hit).

The same is true of “Project Tango”: it gathers information about its surroundings. That may not sound very “sci-fi,” but it is. Imagine wanting to see someone in front of you. Imagine transferring the surroundings of one place (or of a person) so they appear in front of you; that is “augmented reality.” It could give you the ability to turn your home into a virtual paintball field or an amusement park. You could even create games and shoot zombies.

Since it’s mixed reality, there’s no fear of running full-bore into the nearest wall after donning your VR headset and seeing a virtual ghost. With mixed reality, you’d see the wall, in whatever form the game rendered it. (You’ll still look like a goofball, but such stigmas are bound to change as VR becomes more common.) You’d be able to play a first-person shooter game in your own home or in any mapped room; you’d dodge incoming fire behind your couch and duck into a closet to reload.

Tango devices can do this because they are equipped with motion tracking, area learning, and depth perception. The combination of all three allows the device to create an incredibly accurate 3D map of an area. The infrared sensors do have limitations; for instance, the device struggles to map bright windows, because sunlight carries too much infrared and confuses the sensors. Despite such early-phase limitations, the results are unlike anything you’ve ever seen.
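To make the idea of combining motion tracking and depth perception concrete, here is a minimal sketch (not the actual Tango API; all function names, poses, and depth values below are made up for illustration) of the core operation: taking depth points measured in the camera's frame and using the device's tracked pose to place them in a shared world-frame map.

```python
import math

def rotate_yaw(point, yaw):
    """Rotate a point (x, y, z) about the vertical axis by yaw radians."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y, s * x + c * y, z)

def to_world(depth_points, pose):
    """Transform camera-frame depth points into the world frame.

    pose: (tx, ty, tz, yaw) -- the device's position and heading,
    as motion tracking would report it. A real system uses a full
    3D rotation (quaternion); a single yaw angle keeps the sketch short.
    """
    tx, ty, tz, yaw = pose
    world = []
    for p in depth_points:
        rx, ry, rz = rotate_yaw(p, yaw)           # orient into world frame
        world.append((rx + tx, ry + ty, rz + tz))  # then translate
    return world

# A device standing at (1, 0, 0), turned 90 degrees left, sees a
# point 2 m straight ahead; in the world map that point lands at (1, 2, 0).
pose = (1.0, 0.0, 0.0, math.pi / 2)
cloud = to_world([(2.0, 0.0, 0.0)], pose)
```

As the device moves, each new depth frame is transformed with the current pose and accumulated into the same cloud, which is how a consistent 3D map of the room gradually builds up.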

But take the speculation to the big picture and your head may start spinning. Google could maintain a database of 3D maps of public spaces: streets, bike lanes, libraries, and so on. (Think Street View, but way more badass.) This information could then be used to extend the navigational capabilities of self-driving cars. The technology could assist visually impaired people with a form of sonar. It could also enable major breakthroughs in robotics: as better robots are developed, they could come prepackaged with knowledge of their surroundings, plus the ability to discover and react to any changes. NASA is among the many parties intrigued for exactly these reasons. One can imagine a grocery store that receives your shopping list and builds a map for you, in real time, of the most efficient route to each item. (If nothing else, you’ll be able to keep your kids entertained and distracted while you finish the shopping.)

We currently have what it takes to build augmented reality, so why not team up with a good programmer and see what you can do?