
Photogrammetry with your phone

By Glenn Horan

Decent 3D scanners typically cost anywhere from hundreds of pounds up to tens of thousands of pounds for professional units. They can produce accurate digital representations of real-world objects, but they’re not the only way of doing this. In this tutorial, we’ll walk you through how to get 3D models from physical objects using something most of us carry around with us every day – our phones. Theoretically, the scale of the models can be anything from a small object, such as a battery or USB pen, up to the size of large buildings, assuming you can get far enough away to get a decent selection of photographs (and have a powerful enough PC). We’re going to run you through the entire workflow from photos to 3D print using a statue as an example. We’ll be using Meshroom, which is fully open-source, and is set up out of the box for use by beginners to photogrammetry. 

Firstly, we need to install Meshroom from the official website: alicevision.org/#meshroom. You’ll find links for Windows and Linux installations, along with some extra instructions for certain distributions of Linux. The official Meshroom website states that you will need a PC with an Nvidia GPU with a 'compute capability' of more than 2.0 (for reference, even GPUs from around ten years ago meet this requirement, but you can check your score here: hsmag.cc/UhRHXY) and a chunky 32GB of RAM. We got by just fine with 16GB of RAM, and some steps can be skipped if you don’t have an Nvidia GPU. However, the quality of your 3D model will not be as good.
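If you’d rather check your hardware from a script than in a browser, here’s a minimal sketch that reports your GPU’s compute capability and installed RAM. It assumes you have Python 3 with the optional PyTorch and psutil packages installed – neither is needed by Meshroom itself; they’re just a convenient way to query the hardware.

# Quick hardware check before installing Meshroom.
# Assumes Python 3 with the optional 'torch' and 'psutil' packages installed;
# Meshroom itself does not need either of them.
import psutil
import torch

ram_gb = psutil.virtual_memory().total / 2**30
print(f"Installed RAM: {ram_gb:.1f} GB (32 GB recommended, 16 GB worked for us)")

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    print(f"GPU: {name}, compute capability {major}.{minor} (needs more than 2.0)")
else:
    print("No CUDA-capable Nvidia GPU found - the depth mapping steps won't be available")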

You will also need to install the Microsoft Visual C++ Redistributable package from this link: hsmag.cc/wDCYak (see Figure 1). This file contains dependencies required for Meshroom (and other programs built with Visual Studio) to run. To open Meshroom, unzip the file you downloaded from the official site and double-click on the Meshroom.exe file. Once it has booted up, you should be greeted with a pleasantly minimal UI. If you have any issues, check out the troubleshooting page on the project’s GitHub site.

Before you get out there and start taking hundreds of photos of inanimate objects, we recommend taking our chosen photogrammetry software, Meshroom, for a spin with its sample dataset to ensure that your PC can cope with the demands of this resource-intensive process. You’ll also get a feel for how long the process takes, and see the quality of photos you’ll need to use for your project. The dataset can be found here: hsmag.cc/PL0TgK. If you’re unfamiliar with GitHub, all you need to do is click on the green 'Clone or download' button and then 'Download ZIP'. Unzip the file, open the full folder, then select and drag all of the photos into the left sidebar in Meshroom. Next, click on File, then Save As to save your project. We recommend creating a new folder to save your project in, as Meshroom will create some folders at this location at each step of the way – this is where you'll find your finished 3D model file once everything is completed.

Figure 1 – It’s not immediately obvious where you need to click to download the Microsoft Visual C++ Redistributable
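If you’d rather grab the sample dataset from a script than through the browser, the sketch below downloads and unzips a GitHub repository’s ZIP archive using only the Python standard library. The repository address shown is our assumption about where the short link above points (the AliceVision 'Monstree' dataset); swap in whatever address hsmag.cc/PL0TgK resolves to for you.

# Download and unpack a GitHub dataset ZIP with the standard library only.
# The URL is an assumption about where the hsmag.cc short link resolves;
# replace it with the repository you actually land on.
import io
import urllib.request
import zipfile

REPO_ZIP = "https://github.com/alicevision/dataset_monstree/archive/refs/heads/master.zip"  # assumed

with urllib.request.urlopen(REPO_ZIP) as response:
    archive = zipfile.ZipFile(io.BytesIO(response.read()))
archive.extractall("monstree_dataset")
print("Photos extracted to the 'monstree_dataset' folder")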

No Nvidia? No problem! 

At this stage, we could just click Start, let everything run for a couple of hours, then come back to our 3D model of an anthropomorphised tree. However, we’re going to use a feature from the documentation that will give us a rough model in around 20–30 minutes, rather than several hours. This is our recommended workflow when working with your own photos, as it saves several hours of waiting to find out whether or not your photos were successful. To do this, we need to dig a little into the beating heart of Meshroom: the graph editor. This is a visualisation of each of the steps taken during the photogrammetry process, and it is updated in real time to show which steps have completed, which are in progress, and, if applicable, which have failed, by displaying a green, orange, or red bar respectively. A detailed description of each of these steps isn’t necessary for this tutorial, but if you’re interested, you can find a great overview here: alicevision.org/#photogrammetry.

Right-click on the DepthMap node, and click on the fast-forward symbol beside Duplicate Node. This will copy the entire pipeline from this point onwards and give us a draft-quality pipeline to work in. Depth mapping is the step that requires an Nvidia GPU, and it is easily the most resource- and time-intensive part of the process, so we’re going to remove it from our draft workflow. Right-click on DepthMap2, then select Remove Node, and do the same with DepthMapFilter2. Next, click and drag from the input circle (not output) on the PrepareDenseScene node to the input circle on the Meshing2 node, and finally connect the output circle of PrepareDenseScene to the imagesFolder circle on the Texturing2 node. Your pipeline should be the same as shown in Figure 2. Now we’re ready to get started: right-click on the Texturing2 node and click on Compute to get the ball rolling.

Figure 2 – Clicking on a node will bring up its associated parameters, and give you the option to view the log of the work being completed
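If you’d like to double-check the draft pipeline without clicking through every node, Meshroom project files (.mg) are plain JSON, so a few lines of Python can list the nodes you’ve ended up with. In the versions we’ve used, the node list sits under a 'graph' key – treat that key name and the example file path as assumptions and adjust for your own project.

# List the nodes in a saved Meshroom project (.mg files are plain JSON).
# The "graph" key and the project path are assumptions - adjust for your setup.
import json
from pathlib import Path

project = Path("my_scan/my_scan.mg")   # hypothetical path to your saved project
data = json.loads(project.read_text())

for node_name, node in data.get("graph", {}).items():
    print(f"{node_name:25s} -> {node.get('nodeType', '?')}")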

When everything has finished, you can double-click on certain nodes to see their results. Double-click on StructureFromMotion, Meshing2, MeshFiltering2, and finally Texturing2 to see them in the 3D viewer (Figure 3). If you have an Nvidia GPU, you can now test the depth mapping stages: right-click on the Texturing node, select Compute, and leave things to run for an hour or two. If everything has worked up until this point, then you’re ready to get started with your own photos – we used a piece of ominous modern art that was within range of this author’s daily exercise during lockdown (Figure 4).

Figure 3 – Our sample tree model with all layers showing. You can click on the individual cameras surrounding the model to have that image show up in the image viewer pane

Figure 4 – So far in our experiments, statues seem to work best – photogrammetry works well with large, textured subjects

GIGO – garbage in, garbage out 

Once you have all your photos uploaded to your computer, just follow the same process we used for our tree example. After the feature extraction step, you can quickly review the images on the left-hand side of your screen and check whether there are any photos the software wasn’t able to use (they will have a red symbol in the top right of the thumbnail). You can also click on the three dots in the bottom right of the image viewer to check how many 'features' were picked out from any of your photos, to help narrow down the types of images that work best. The Meshroom documentation suggests that if you have issues, you can increase the number of features and feature types that are extracted. However, we found that this massively increases the amount of time the workflow takes, and if you haven’t already got a decent low-resolution mesh, it’s unlikely to rescue the scan. If you have a complete scan at the end of the draft workflow and you have an Nvidia GPU, we highly recommend running the depth mapping steps. Check out the difference in resolution in Figure 5.

Figure 5 – With depth mapping on the left; without on the right
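If you’d like a quick overview of those feature counts without clicking through each image, the cache written by the FeatureExtraction node can be inspected from a script. This sketch assumes the default SIFT describer and that the .feat files are plain text with one feature per line – that has been true in the Meshroom versions we’ve tried, but treat it as an assumption, along with the example cache path.

# Rough feature count per image, read straight from the Meshroom cache.
# Assumes the default SIFT describer writes plain-text *.feat files with one
# feature per line, and that the cache path below matches your project.
from pathlib import Path

cache = Path("my_scan/MeshroomCache/FeatureExtraction")  # hypothetical project location
for feat_file in sorted(cache.rglob("*.feat")):
    count = len(feat_file.read_text().splitlines())
    print(f"{feat_file.stem:25s} {count:6d} features")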

Sculpt fiction

So you’re looking at a great model in the Meshroom 3D viewer – now what? Before we can 3D-print our new model, we’ll need to clean it up using mesh editing/sculpting software. We’re going to use Autodesk Meshmixer (download it for free at meshmixer.com), but if you’ve used similar software before, feel free to use whatever you’re most comfortable with.

Open Meshmixer, click Import, then navigate to the folder where you saved your Meshroom project. In the MeshroomCache folder, you’ll find folders for each of the nodes in our pipeline. Each step that outputs a 3D model leaves behind an OBJ file, but we want to lift the model from the MeshFiltering folder. There may be multiple folders in here named with a long ID number – one for each time you’ve run the MeshFiltering step. In each of these folders, you’ll find an OBJ mesh file, which is what we want to import. You can also use the OBJ file from the Texturing folder, but we found the untextured model easier to work with at this stage.
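If you’ve run the pipeline several times and aren’t sure which ID-named folder holds the latest mesh, a couple of lines of Python will point you at the most recently written OBJ (the cache path below is a hypothetical example – use your own project folder).

# Find the most recently generated MeshFiltering output in the Meshroom cache.
from pathlib import Path

cache = Path("my_scan/MeshroomCache/MeshFiltering")  # hypothetical project location
objs = sorted(cache.rglob("*.obj"), key=lambda p: p.stat().st_mtime)
if objs:
    print("Newest mesh:", objs[-1])
else:
    print("No OBJ found - has the MeshFiltering step finished?")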

The model is going to import in a fairly random orientation, so the first thing we need to do is transform it into an upright position. Click on Edit in the Meshmixer toolbar on the left of the UI, followed by Transform, to overlay the transform tool wheel over our 3D model. Adjust the rotation of the model using the green, blue, and red curved lines by clicking on them and dragging, then use the red, blue, and green arrows to move the model relative to the grid (Figure 6). When you’re happy with the positioning, click Accept at the bottom of the Transform menu.

Figure 6 – When transforming the model, align it as if the grid was your build plate to make your life easier when it comes to slicing the model

All about the base

Next, we’re going to cut around our statue to get rid of all the surrounding objects and give it a nice flat base. Click on Edit again, followed by Plane Cut. You’ll notice a second grid has appeared in the viewing pane, along with a tool wheel similar to the one we saw in the Transform step. Use the tool wheel to move this new plane to where you want to cut the model. You can move the plane so that it cuts along any axis if there are other sections you want to discard. When you’re happy with your selection, click on Accept in the Plane Cut menu. Repeat this with each plane around the model until you’re left with just the area you’re interested in (Figure 7). You might notice some small fragments floating around your model. We’re going to remove those by again clicking Edit, followed by Separate Shells. An object browser will appear with a list of all your fragments. Select the fragmented shells in turn and click on the bin icon in the bottom right of the object browser, until you’re left with just your model.

Figure 7 – When plane cutting, you can click the thicker blue arrow above the tool wheel to toggle which side of the plane you want to keep and which will be discarded

Next up, we’re going to plug any holes in your model that aren’t supposed to be there. First, zoom in on any of the holes you wish to fill in. Click on Select from the toolbar on the left, then adjust the brush size to one that you’re happy with and draw around the edges of the hole. The area should turn orange as you select it (Figure 8). When you’ve selected all around the hole, release the mouse button and press F on your keyboard to fill the area. If you wish to delete a selection, you can follow this same process and then press X. You can also make the model solid at this stage by clicking on it, followed by Edit, then Make Solid.

Figure 8 – You can also use the brush to select and delete areas around the base of your model that were hard to delete using the plane cut

Optionally, you can smooth out any areas on the surface of your model by selecting Sculpt from the toolbar on the left, then clicking on Brushes, and selecting the RobustSmooth brush. At this stage, it’s important not to brush away any of the details of your model that we worked so hard to extract – use the strength slider to reduce/increase the smoothing effect. 

Finally, click on File, then Export, and save your model in STL format – your final model is now ready for 3D printing. If the model is tiny or huge when you bring it into your slicer, simply adjust the scale until you're happy with the size. Check out our finished printed model in Figure 9.

Figure 9 – Marble filament was the only choice for replicating this statue
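If you’d rather fix the scale before the file ever reaches your slicer, you can do it from a script as well. This is a minimal sketch that assumes the optional trimesh package (pip install trimesh) and uses placeholder file names – adjust the scale factor to suit your print.

# Rescale an exported STL without going back into Meshmixer.
# Assumes the optional 'trimesh' package; file names are placeholders.
import trimesh

mesh = trimesh.load("statue.stl")   # your exported model
print("Bounding box:", mesh.extents)
mesh.apply_scale(0.5)               # e.g. halve the size - pick your own factor
mesh.export("statue_scaled.stl")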

Photography 101

The choice of camera you use will, of course, affect the quality of your scan, but while a market-leading digital SLR is better, modern smartphones are capable of producing images of more than good enough quality for photogrammetry. As a general rule, try to keep everything in focus (the bokeh effect looks great on social media, but not so much for photogrammetry) and make sure nothing in your photos is overexposed, or those areas will certainly end up as gaps in your final model. Unlike other 3D scanning techniques, photogrammetry works best with a feature-filled background, as this gives the software frames of reference when working out where different features sit in 3D space. Don’t have too many moving subjects in the background, though the odd car passing by shouldn’t be too detrimental.

Lighting, as with any photography, is very important – if you’re taking photos of a subject outdoors, cloudy days work best, as there won’t be any harsh shadows to throw off the software. Walk in circles around the subject, taking plenty of overlapping photos (more is better here – anything from 30 to 300+, depending on the target), and try to capture different heights, angles, and distances from the subject.
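Before you commit a couple of hours of processing to a photo set, it can be worth screening it for obviously blurry or blown-out shots. The sketch below is one way to do that, assuming the optional opencv-python package; the thresholds and folder name are rough guesses you’ll want to tune for your own camera.

# Flag photos that look blurry (low Laplacian variance) or largely overexposed
# before feeding them to Meshroom. Thresholds are rough guesses - tune them.
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0       # below this, the image is probably out of focus
OVEREXPOSED_FRACTION = 0.05  # more than 5% near-white pixels is suspicious

for photo in sorted(Path("my_photos").glob("*.jpg")):  # hypothetical folder name
    gray = cv2.imread(str(photo), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    overexposed = (gray >= 250).mean()
    if sharpness < BLUR_THRESHOLD or overexposed > OVEREXPOSED_FRACTION:
        print(f"Check {photo.name}: sharpness={sharpness:.0f}, overexposed={overexposed:.1%}")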
