If you know me at all, you’re probably aware that I write about and research the humanitarian uses of drones for a living. One aspect of today’s drone technology I find particularly interesting is how aerial imagery can be used to create 3D models, even with inexpensive consumer technology. I’ve been wanting to try it for a long time.
Well, I don’t currently have a UAV that I can program for autonomous flight, which would let me fly the pattern of transects that allows drone-shot images to overlap in an optimal way, so they can be stitched together into maps and 3D models. I also don’t have a point-and-shoot camera, just a GoPro Hero 3+ with a fish-eye lens, which is rather less than optimal for mapping applications.
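For a rough sense of what those transects buy you: photogrammetry tools generally want something like 60–70% overlap between adjacent flight lines. Here’s a minimal back-of-the-envelope sketch, assuming a nadir-pointing camera, flat ground, and a guessed ~120° horizontal field of view for a GoPro-style wide lens (none of these figures come from the actual flight):

```python
import math

def line_spacing(altitude_m, fov_deg, sidelap=0.7):
    """Distance between parallel transects for a given side overlap.

    Assumes a nadir-pointing camera and flat ground; the numbers
    below are illustrative guesses, not measurements from the flight.
    """
    # Width of ground covered by one frame at this altitude
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    # Adjacent lines should only advance by the non-overlapping fraction
    return footprint * (1 - sidelap)

# A GoPro-style wide lens (~120 deg horizontal FOV) at 40 m up:
print(round(line_spacing(40, 120, 0.7), 1))  # → 41.6
```

At 40 m altitude that puts the flight lines roughly 40 m apart; a narrower lens or a lower altitude tightens the spacing quickly, which is part of why eyeballing the pattern by hand is so hard to get right.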
But as it turns out, with the help of the open source VisualSFM software, you can *still* get pretty good results. I was visiting my boyfriend Dan’s family in southwestern Vermont last weekend, which is an ideal place to mess around with drone mapping, since there are very few people there to notice. My friend Matthew Schroyer of the Professional Society of Drone Journalists has been getting good 3D modeling results just by pulling still frames from drone videos shot by amateur pilots.
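Pulling stills from drone video is easy to script. One common route (an assumption on my part — the post doesn’t say which tool Matthew uses) is ffmpeg’s `fps` filter, which extracts a fixed number of frames per second. A small sketch that just builds the command line:

```python
import shlex

def ffmpeg_still_cmd(video, out_pattern="frame_%04d.jpg", fps=1):
    """Build an ffmpeg command that pulls `fps` stills per second of
    video. ffmpeg itself is an assumed tool choice, not one named
    in the post.
    """
    return f"ffmpeg -i {shlex.quote(video)} -vf fps={fps} {out_pattern}"

print(ffmpeg_still_cmd("flight.mp4"))
# → ffmpeg -i flight.mp4 -vf fps=1 frame_%04d.jpg
```

Run against a few minutes of flight video, that yields a folder of numbered JPEGs you can feed straight into a photogrammetry tool.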
So, I figured I’d give it a go and see what we got. I flew my Phantom 2 over Dan’s parents’ house in some approximation of a zig-zag pattern, with the GoPro set to shoot an image every second – probably overkill, all things considered. I eyeballed the pattern, and since it was a bit of a windy day, it wasn’t as tight as I’d have liked it to be.
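To see why one frame per second is probably overkill: forward overlap depends on how far the aircraft moves between shots relative to each image’s ground footprint. A rough sketch, again assuming flat ground, a nadir camera, and made-up numbers (~5 m/s cruise, 40 m altitude, ~120° field of view — none measured from the actual flight):

```python
import math

def forward_overlap(speed_ms, interval_s, altitude_m, fov_deg):
    """Fraction of each frame that overlaps the next along the
    flight line. Flat-ground, nadir-camera approximation with
    illustrative numbers, not figures from the flight in the post.
    """
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    advance = speed_ms * interval_s  # ground distance between shots
    return max(0.0, 1 - advance / footprint)

# ~5 m/s cruise, one shot per second, 40 m up, wide lens:
print(round(forward_overlap(5, 1, 40, 120), 2))  # → 0.96
```

Well over 90% forward overlap – considerably more than the roughly 80% that photogrammetry guides usually recommend, which squares with “probably overkill.”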
With the initial fly-over done, we had a few hundred images that could be fed into VisualSFM, which Dan handled. Dan says the VisualSFM model used 378 photographs and took about 20 hours to render on his late-2013 MacBook Pro Retina laptop. That includes the time required to process the model in MeshLab, which creates the mesh required for three-dimensional modeling and overlays the photographic texture on top of it. You can read about how to use VisualSFM to crunch images over at the excellent Flight Riot.
Agisoft Photoscan performs all of these functions inside the same program and is a more powerful and effective piece of software, although unlike VisualSFM, it isn’t free. Dan ran the images through Agisoft Photoscan and added some still shots from a video we’d taken the day before, but that didn’t seem to do much to improve the capture of the back side of the house, which was quite fragmented. He ran it again with 75 photos, leaving out the video stills, and got a better result with fewer artifacts.
Here are the results with VisualSFM. You can manipulate the model we made with VisualSFM in Sketchfab at this link.
Here’s the first Agisoft Photoscan model.
And here’s the second Agisoft Photoscan model, with the Sketchfab link here.
The results obviously aren’t perfect, but considering how little effort or specialized equipment we used, I’m still impressed. I’m planning to have a good-quality mapping UAV, with a point-and-shoot camera and the ability to program transects, up and running by July. I think there’s some very interesting potential for storytelling and journalism with 3D modeling, and I want to figure out ways to experiment. Beyond that, it’s rather fantastic that I can use consumer-grade technology to make video-game-like maps of the world around me.