Can the new camera technology be used to map spaces as big as rooms?
If so, this could be game changing for allowing the creation of VR spaces quickly and inexpensively. E.g., play a VR game in your real house after mapping it with your iPhone X. Or better, do a detailed remodel in VR before doing it in real life.
A lot of commercial uses of VR technology (e.g., construction, industrial design, etc.) could benefit from inexpensive and accurate mapping. Today, the alternatives are to get an architect to build a model of your house, to build a crude version yourself, or to use a Hololens, Tango phone, or other nascent and expensive technology.
If not, what truly is the game changing aspect of these cameras + specialized compute for machine learning/neural nets? They have to have thought through dozens of use cases beyond photos, animated emojis, and other trivial entertainment... right?
Someone has already built a measuring tape app out of this thing: https://www.youtube.com/watch?v=nQpEWv9_6Cg. Not exactly mapping as you're describing, but it's an interesting use case.
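The measuring tape app comes down to simple geometry: an AR hit test returns 3-D world coordinates for each tapped point, and the measured length is just the Euclidean distance between two of them. A minimal sketch of that core calculation (the point values here are hypothetical stand-ins for hit-test results, not real ARKit output):

```python
import math

def measured_distance(a, b):
    """Euclidean distance between two 3-D world-space points (in meters)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical hit-test results: two taps 0.5 m apart along the x axis.
start = (0.0, 0.0, 0.0)
end = (0.5, 0.0, 0.0)
print(measured_distance(start, end))  # 0.5
```

The hard part (tracking the camera pose and anchoring those points to real surfaces) is what the AR framework provides; the app itself is little more than this.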
They are going to expose some of this functionality as part of their AR libraries. We know this from their Snapchat demo, as well as the gaming demo (though that used the rear camera, not front one).
So... yes, developers will be able to get their hands on some of this tech, but how much can we do with it?
If room mapping were truly possible, I would imagine they would have done more with it in their demo — e.g., see the Hololens game where robots come flying out of walls and hide behind your couch. That demo was more substantial than seeing a flat tabletop with a projected 3D game, or projected 3D robots standing on a flat basketball court.
I think you guys are talking about two different sensors. The 3-D sensor (the one that works like Kinect) is only on the front of the phone and, I assume, only for short range. That's the one the Snapchat demo used, and the one that Face ID uses.
The AR demos of games use the normal cameras on the back of the device. That stuff will be available on any device from the 6S up.