Friday 2 October 2015

Photogrammetry lift

There is a way to get a Vive dev-kit.

And here they're talking about CG made from photogrammetry data, so that you can walk around a room-scale rendering built from the photos.

http://steamcommunity.com/steamvr

Here's a link to a Russian photogrammetry app:

http://www.agisoft.com/

-----------------------

I've pasted the article here, just in case it goes....

It's not magic, but...

There's a good chance that you already own a high-end 3D scanner, capable of capturing beautifully lit and textured real-world scenes for virtual reality.

This process, known as photogrammetry, certainly isn't magic, but it's getting close. It involves using images from a conventional digital camera to create three-dimensional scenes, exploiting the differences between photos taken from different positions and angles. There's plenty that will confuse it - and getting decent source material can be a bit of a dark art - but when everything works properly, the results in room-scale VR can be both compelling and unsettling. Being virtually transported to another, real location - and then walking around in it - still feels like something from the far future. Some of the most immersive VR experiences I've had so far have involved scenes captured in this manner.



This article is intended as a general introduction to photogrammetry for VR purposes, and as a compilation of hints and tips I've accumulated. It isn't the definitive guide, but it should help get you started and will warn of possible pitfalls along the way. Go forth, and capture the world!

Software

The software I've gained the most experience with is Agisoft PhotoScan, from a small Russian company in Saint Petersburg. While it has more of a GIS background (with plenty of features geared towards surveyors and aerial photography) it has gained quite a following in the more artistic side of computer graphics.

PhotoScan has proved highly configurable and versatile, but it certainly isn't the only software of its type available - alternatives include Autodesk's cloud-based 123D Catch and Memento. Performed locally, the process requires some pretty chunky hardware for particularly advanced scenes, so expect to throw as much memory, CPU and GPU at it as you can afford. If you're careful, you can limit scene complexity to what your hardware is capable of processing in a reasonable amount of time. Although the eight-core monster at work is very much appreciated...

You may have heard of photogrammetry being used on the recent indie game The Vanishing of Ethan Carter, where it was used to capture surfaces, props and structures for a relatively traditional 3D art workflow. The developers have posted some excellent general advice online, mainly geared towards capturing individual props - whole-scene capture has some differences.

How it works

The general principle behind photogrammetry involves having at least two photographs from different angles of every point you want three-dimensional information for - it will identify visual similarities and, using maths, figure out where these similar points are located in space. This does mean it is limited to static scenes containing opaque, non-specular surfaces - as mentioned, it's not magic.
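
To make that "figure out where points are in space" step concrete, here's a minimal triangulation sketch of my own (plain NumPy, with made-up camera matrices and pixel coordinates - not something from the original article): given two known camera poses and the same point spotted in both photos, the standard linear method recovers its 3D position.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen by two cameras.
        P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) pixel coordinates."""
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The 3D point is the null-space direction of A (smallest singular value).
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # back from homogeneous coordinates

    # Two hypothetical cameras: one at the origin, one shifted 1 unit along X.
    K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # intrinsics
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

    # Project a known point through both cameras, then recover it again.
    point = np.array([0.2, -0.1, 4.0, 1.0])
    x1 = (P1 @ point)[:2] / (P1 @ point)[2]
    x2 = (P2 @ point)[:2] / (P2 @ point)[2]
    print(triangulate(P1, P2, x1, x2))  # ~ [0.2, -0.1, 4.0]

The real software does this for huge numbers of matched features at once, and has to estimate the camera poses as well - but the geometric idea is the same.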

The software will take your photos and, if there are enough common features between different photos, will automatically calculate all the camera positions. The process is a little like automatically stitching together photos for a panorama, only here you absolutely don't want the camera to stay in one place. Think lots of stereoscopic pairs, although it can work with most potential photo alignments.

Once it's aligned all the cameras (or not) and generated a sparse point cloud, you then get it to produce a dense point cloud - potentially hundreds of millions of points if you've overdone the quality settings. From that you can generate a mesh - with potentially millions of triangles - then get it to create textures for that mesh. (We've been able to render some surprisingly heavy meshes in VR!)
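
For reference, that whole chain can be driven from PhotoScan Pro's built-in Python console. The outline below is a rough sketch only - the method names and arguments are my reading of the 1.x scripting reference and vary between versions, so treat it as illustrative rather than definitive:

    import PhotoScan  # available inside PhotoScan Pro's Python console

    doc = PhotoScan.app.document
    chunk = doc.addChunk()
    chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # your full photo list here

    # Align cameras: feature matching plus the sparse point cloud.
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
    chunk.alignCameras()

    # Dense cloud - Medium is often plenty for a whole-scene scan.
    chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

    # Mesh, UVs and textures.
    chunk.buildModel(surface=PhotoScan.Arbitrary, face_count=PhotoScan.HighFaceCount)
    chunk.buildUV(mapping=PhotoScan.GenericMapping)
    chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

    doc.save("scene.psz")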



Hardware

I've been using some relatively humble equipment - a Canon EOS 7D digital SLR for most things, usually with the 17-55mm f/2.8 EF-S lens. For low-light stuff a tripod is really helpful, but I've got some surprisingly good results from just hand-held shots. You want as deep a depth of field as possible (f/11 seems a decent compromise between depth of field and sharpness) with as low an ISO as you can achieve (to stop sensor noise from swamping details).

Any decent digital SLR with a sharp lens should work well - for full VR scenes, a wide-angle lens can be incredibly helpful, making it far easier to capture the scene with a reasonable number of photos in a limited amount of time, while reducing the chance you've forgotten to get coverage of some vital area. The wider the lens, the trickier it can be to get a decent calibration, however - a Sigma 10-20mm lens of mine suffers from severe inaccuracy in the corners relative to a typical lens model, while a Canon 10-22mm lens is a fair bit better. The 17-55mm lens mentioned earlier, while a fair bit narrower at the wide end, has all its distortion corrected away - giving great all-round results at the expense of more photos being required. (It's also stabilised, making hand-held shots much easier in lower light or with a lower ISO.) Images from GoPros can be used too, but expect to fiddle around with lens calibration a lot.

I've had some fantastic results from a borrowed 14mm cinema lens on an EOS 5D Mark III - quite implausible hardware for general use, but stupendously wide angle (14mm on full-frame!) and with its minimal distortion being fully corrected away.

The process should work with photos taken with different lenses and/or cameras - for a room-scale scan I can do an overall capture with a super-wide-angle lens to make sure I've got full coverage, before going close-up on individual objects. Be sure to minimise the lens differences to those you actually need - if you have a zoom lens, make sure you're at fixed focal lengths (taping down the zoom ring is extremely useful!) or you'll end up with many different 'lenses' with different, more error-prone calibrations. Switching to manual focus and staying at a particular focus distance can also eliminate focus 'breathing'. (Make sure you switch back again afterwards!)

(A note on the fixed focal length issue: over around an hour and a half I captured a fantastically detailed scene in Iceland with my super-wide-angle lens. Since I was shooting in HDR, the camera was on a tripod, and I was moving the tripod and camera for each set of shots. For whatever reason, the zoom ring on the lens slowly drifted from 10mm to 14mm - the tripod meant I wasn't doing my usual hold-zoom-at-widest-with-my-left-hand trick. I managed to rescue the captured scene by creating an unwholesomely large number of lens calibrations in PhotoScan, evenly distributed across the shots - luckily the scene had enough photogrammetric detail in it to make this acceptable - but it still isn't as high quality as it could have been. So yes, even if you know about the potential problem - tape the zoom ring in place anyway. It could save you from a lot of frustration later.)

Continued in part two...

Photogrammetry in VR - part 2 of 3

21 September - Cargo Cult
Process

Since photogrammetry works on identifying small details across photos, you want shots to be as sharp and as free from sensor noise as possible. While less important for really noisy organic environments (rocky cliffs scan fantastically well, for example), it can be possible to get decent scans of seemingly featureless surfaces like painted walls if the photos are sharp enough. If there's enough detail in the slight roughness of that painted surface, the system will use it. Otherwise, it'll break up and potentially not scan at all - do expect to have to perform varying amounts of cleanup on scans. Specular highlights and reflections will horribly confuse it. I've read of people scanning super-shiny objects like cars by sprinkling chalk dust over surfaces - you won't get good texture data that way, but you will get really good geometry.

Think of topologically simple, chunky scenes to scan. Old buildings, ruins, tree trunks, rocky landscapes, ancient caves and broken concrete all scan brilliantly - while super-fine details, reflective, shiny or featureless surfaces tend to break down. Do experiment to see what works and what doesn't. Scenes that are interesting in VR can be quite small - lots of layers of parallax and close-up detail can look fantastic, while giant scenes (stadiums, huge canyons and the like) can be oddly underwhelming. Think room-scale!

Scanning the scene is best done in a robotic, capture-everything manner. For a wall, it can involve positioning yourself so the camera direction is perpendicular to the wall, ensuring you have its whole height in the viewfinder. Take a picture, take a step to the side, then repeat until you run out of wall. You can then do the same from a different height, or from different angles - remember those stereoscopic pairs - making sure to get plenty of overlap between images when moving to 'new' areas, with plenty of photos taken when going around corners. 'Orbiting' objects of high detail can be useful - make sure to get photos from many angles, with adjacent photos being quite similar in appearance. If anything, aim to capture too much stuff - better that than being clever and missing something. Expect a couple of hundred photos for a detailed scene. (It's fine to use 'Medium' or even 'Low' when generating your dense point cloud - super-fine geometry isn't necessarily what you need for a full scene scan. Dense point clouds with too many points also require phenomenal amounts of memory to build meshes from...)



Most of the work I've done so far has involved low dynamic range imagery (which makes hand-held photography possible) - it's still possible to capture full lighting from a scene with careful manual exposure settings. Shooting RAW and then dragging the shadows and highlights into view in Lightroom can help a lot here - it's not quite HDR, but it helps. Being able to fix chromatic aberration in software is also extremely useful, removing those separated colours which look so awful in texture maps. Export full-resolution JPEGs for processing in the software, making sure they're all of the same orientation (portrait or landscape) - otherwise it may decide they're from different cameras. Don't crop, distort or do anything else too drastic. One trick is to pull the highlights and shadows in for a set of images to be used for photogrammetric purposes, then re-export with more natural lighting for the texture generation stage.
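
A quick sanity check before importing: scan the export folder and flag any images whose orientation differs from the rest. This is just a small helper of my own (Pillow assumed, folder name made up), not part of the original workflow:

    import os
    from PIL import Image  # pip install Pillow

    folder = "exported_jpegs"
    orientations = {}
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(".jpg"):
            continue
        with Image.open(os.path.join(folder, name)) as im:
            w, h = im.size
        orientations[name] = "landscape" if w >= h else "portrait"

    # Warn if the set is mixed - the software may treat the two orientations
    # as photos from two different cameras, hurting calibration.
    if len(set(orientations.values())) > 1:
        for name, kind in orientations.items():
            print(f"{kind:9s}  {name}")
        print("Warning: mixed portrait/landscape images in this set.")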



A full high dynamic range workflow is possible in Agisoft PhotoScan. Capture bracketed shots on your camera, then merge them to OpenEXR - you can later have the software export textures to OpenEXR for tonemapping at runtime to give full, real-world HDR lighting. You do need a tripod, and there's a lot more data involved - but you'll potentially get useful-for-photogrammetry detail from both shadows and highlights. For one scene I captured, geometry was generated from HDR images and then textures from carefully manicured TIFFs from Lightroom. It's also worth looking into Nurulize's work on HDR photogrammetry capture.
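
If you want to experiment with the merge step outside dedicated HDR tools, here's one possible approach using OpenCV - my own sketch with made-up filenames and exposure times, not the article's workflow. Note that writing .exr files requires an OpenCV build with OpenEXR support:

    import cv2
    import numpy as np

    # Bracketed exposures of the same tripod-mounted framing, darkest to
    # brightest, along with their exposure times in seconds.
    files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
    times = np.array([1/250, 1/60, 1/15], dtype=np.float32)

    images = [cv2.imread(f) for f in files]

    # Recover the camera response curve, then merge to a linear HDR radiance map.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    cv2.imwrite("merged.exr", hdr)  # needs OpenEXR support in your OpenCV build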

If you have something which appears in photos when it shouldn't (for example cars, walking pedestrians, bits of Mars rover, astronauts) it is possible to use image masks to eliminate them from consideration. Unwanted features such as lens flares and raindrops on the lens can also be removed in this manner.
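
As an illustration of what such a mask actually is, this tiny Pillow sketch (my own example, with a made-up filename and region) writes a white image with a black rectangle over the area to ignore - check your software's mask-import settings for the exact filename convention and black/white meaning:

    from PIL import Image, ImageDraw  # pip install Pillow

    src = Image.open("IMG_0042.JPG")

    # White = keep, black = ignore (confirm the convention in your software).
    mask = Image.new("L", src.size, 255)
    draw = ImageDraw.Draw(mask)
    draw.rectangle([1200, 800, 2400, 1900], fill=0)  # box over the unwanted object

    mask.save("IMG_0042_mask.png")  # name it to match your mask-import template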


Source imagery: NASA / JPL-Caltech / MSSS

Cleanup

One almost-unknown feature in Agisoft PhotoScan that's proved incredibly useful is its ability to project photos on to another mesh. I've cleaned up a number of rough architectural scans by essentially tracing around the noisy geometry in a modelling program, then reimporting the super-clean mesh into PhotoScan as an OBJ. Since I kept the scale and alignment the same, I could get it to reproject the photos on to my new UVs - resulting in low-poly, super-clean models preserving all the light and texture present in the photos. (Things which can scan very badly, such as featureless flat surfaces, can be the easiest to model!)
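
Here's roughly what that looks like in PhotoScan's Python console - again a hedged sketch against my reading of the 1.x scripting API, with the OBJ path made up:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk  # the chunk with the aligned photos

    # Replace the noisy scanned mesh with the hand-cleaned version.
    # Scale and alignment must match the original scan for this to work.
    chunk.importModel("cleaned_scene.obj")

    # Keep the UVs authored in the modelling package, then reproject the photos.
    chunk.buildUV(mapping=PhotoScan.KeepUV)
    chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

    PhotoScan.app.document.save()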

This also gives the opportunity to obtain geometry in different ways. Kinect-based scans, Lidar scans, geometry from CAD models - so long as the scale and shape is appropriate, you'll be able to overlay high-resolution photo-sourced texturing this way. You won't get specular maps without a lot of extra work, but the sensible UVs will make them much easier to produce...



A frustrating thing that can happen when scanning a scene is to have a seemingly perfect dense point cloud dissolve away into thin, stringy blobs - or even nothing at all. Office furniture suffers badly from this, as do other thin or detailed surfaces. A workaround is to generate sections of mesh separately, clean each one up, then composite them all together into one mesh for a final pass.

Natural scenes can really benefit from a simple, quick trick I found - putting a sphere or cylinder around the scene, then using PhotoScan's project-to-existing-UVs feature to project imagery on to it. While not necessarily 'correct', the human brain seems to appreciate being fully immersed in colour and detail - an incredibly quick scan I did of a redwood forest became quite compelling this way. Scanned geometry is limited to the ground and tree trunks in the foreground - everything else is flat and 'fake'. For large-scale terrains, a simple-geometry version in the background can look great, with effectively a photosphere for the sky and more distant landscapes.



Continued in part three...

Photogrammetry in VR - part 3 of 3

21 September - Cargo Cult
Rendering

You've got your enormous meshes and textures - so now what? I've done some experimentation with Unity, and it's super-capable of rendering this stuff in a brute-force manner. Make sure your textures are limited to 8192x8192, your shaders are set to unlit textured, and your meshes render without casting shadows - then you should be good. You can generate mesh colliders for each segment of mesh produced, should you want virtual objects to collide with your scanned geometry - it's also possible to set up lighting which roughly matches the 'real' lighting from the scene, with baked cubemaps from the scanned geometry. I have a custom shader for scanned geometry which can receive shadows and render with distance fog and a detail texture - it can look quite effective with virtual objects 'embedded' in the world like this.



Adding ambient audio and a few simple particle effects, moving clouds and similar can make the scene appear dramatically more alive – much more than a static, dead model.

Performance-wise, the brute-force-and-ignorance technique of getting Unity's default systems to render giant meshes with lots of huge textures works surprisingly well on a decent GPU. Something ridiculous like a 1.5 million triangle mesh from PhotoScan with twelve 8k DXT1-compressed textures should run fine on a typical VR-oriented system.
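
To put that texture figure in perspective, DXT1 stores half a byte per texel, so the VRAM cost is easy to estimate - a quick back-of-envelope calculation of my own, not a number from the article:

    # DXT1 (BC1) uses 8 bytes per 4x4 block, i.e. 0.5 bytes per texel.
    def dxt1_bytes(width, height, mipmaps=True):
        base = width * height * 0.5
        return base * 4 / 3 if mipmaps else base  # a full mip chain adds ~1/3

    per_texture = dxt1_bytes(8192, 8192)   # ~42.7 MiB with mips
    total = 12 * per_texture               # ~512 MiB for twelve 8k maps
    print(per_texture / 2**20, total / 2**20)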

There are much more advanced ways of rendering this stuff should you choose - Amplify Creations are doing some interesting work with virtual texturing in Unity - the performance of which is nicely VR-friendly, we've discovered.

Conclusions

As mentioned earlier, this article is not meant as an exhaustive, step-by-step guide to photogrammetric scene scanning - rather as an introduction and collected set of hints and tips. An important thing to realise is that it's a technology that's easily within reach - needing just a consumer digital camera, some software and a half-decent computer for processing. It's hard to state quite how surreal these scanned scenes can be, where even rough test scans can come perilously close to transporting your brain to another location entirely.

I'm fascinated to see what people will end up scanning. Inaccessible places, disappearing locations, the mundane, the fantastic - all can be visited from the comfort of your own VR headset!

If you have access to the VR demos, you can find this content under SteamVR Demos > Iceland Cinder Cone and SteamVR Demos > Valve Demo Room.

1 comment:

  1. Well. My first try was a blobby mess. Will have another go. >.<
