Andrew Sink interview

By Andrew Gregory

Andrew Sink is a shining evangelist for 3D printing. He was there during the 3D printing boom of 2014, and kept the faith while the buzzwords died down and people refocused on what the technology could actually do. He’s written software to make designing for 3D printing more accessible to new users, and his work is on the shelves of big retailers, without you even realising it’s 3D-printed. He’s on a mission: to bring 3D technology to the world in a way that people can understand intuitively. We’re very lucky to have got a few minutes’ worth of his insights to share with you here.

HackSpace: You’re obviously not just a guy who uses 3D printing from time to time: you’re an evangelist. What excites you about 3D printing?

Andrew Sink: So, first of all, in my day job, I am a senior applications engineer for Carbon, a 3D printing technology company based outside San Francisco. My job is primarily interfacing with customers and designing parts for additive manufacturing specific to the Carbon process. I work on a team that’s primarily involved with lattice structures, so I spend a lot of time thinking about how to design something that used to be a uniform block of material as a conformal lattice rather than a solid shape.

I was just at an expo in Detroit, Michigan, and I actually brought a bicycle with us to our booth. And it’s got a 3D-printed seat, which was designed by people on the team that I work on. It’s designed for mechanical response; it’s designed for comfort; it’s designed for printability. Thousands of pages of thought have been dialled into this lightweight bike seat, and then you look at it and go ‘Oh, that’s pretty neat’.

But what’s really exciting for me about working at Carbon is that I went into a bike shop to pick up that seat fairly recently, and it occurred to me that it was the first 3D-printed part I’d seen on a store shelf as a retail product. And when I was in Detroit with my wife for the expo, we stopped in at a Dick’s Sporting Goods [a big sports shop in the USA]. They had the Adidas 3D-printed midsoles – shoes with a 3D-printed sole – so I bought my wife a pair. It’s cool to be able to walk into a store and buy something that was 3D-printed.

HS: What got you into 3D printing in the first place?

AS: I’ve been involved in additive manufacturing for about a decade now. I first used a 3D printer when I was in college; we had one in the engineering lab, and I saw it and instantly felt that this was the thing I was going to do for the rest of my life – it was such a clear and amazing technology.

The applications were just immediately apparent. I was taking a class on SOLIDWORKS, a 3D CAD program, at the same time; taking a model out of SOLIDWORKS, sending it to the printer, and holding it in my hands that same day was an absolute revelation. It tied together all the work I was doing in the digital world with the physical world. And it was immediately apparent to me that it was such an untapped technology.

HS: And do you still have that same sense of wonder now that you use it day in, day out?

AS: I can’t walk by a 3D printer. I’ll stop and watch them for a couple of minutes. My house is filled with printers. It’s just fascinating to see the industry evolve so rapidly in ten years – materials, software, hardware. It’s a really cool experience.

HS: You’ve also been experimenting with photogrammetry, which goes one step further – taking an object from the real world, putting it into the computer, then taking it back out again. What are the applications for that?

AS: It’s hard to pin down the applications of photogrammetry. You could be sitting in a room looking at the first ever camera and saying, ‘Well, what are we gonna do with it?’ It’s such a limitless technology. And it’s also very much in its infancy; we’re still figuring out what it’s good for.

One of my favourite books is Timeline by Michael Crichton. It doesn’t discuss 3D printing directly, but it talks about people who accumulate these things called transcription errors: they’re essentially turned into digital data, streamed to a different place, and then re-materialised as physical objects. And over time, veins don’t line up all the way, bones are slightly misaligned… you accumulate these errors. When I started getting involved in 3D scanning, I immediately thought of that book, because it breaks down a lot of these core concepts really well. With photogrammetry, the thing you’ve scanned is going to look like the thing you scanned, but it’s going to be different. It becomes its own tangible item.

The question you get asked all the time with 3D scanning is, ‘How do I scan a boat?’ And that’s a very different workflow from ‘How do I scan a screw?’, where you’re talking about precision on the order of microns versus metres. 3D scanning, just like 3D printing, means a lot of different things to a lot of different people.

So, where I spend a lot of my time is on accessibility and new users. I think a lot about somebody who’s never heard of 3D scanning before. They’re going to go to Google and type in ‘3D scanning’; what’s the first thing that they’re going to see? And how is that going to be applicable to them? So I’ve spent a lot of time making videos showing the process of photogrammetry. How do you stitch photos into a model? How do you use a LiDAR scanner’s sensors to detect how a model exists in space? I really try to break it down in a way where somebody with no practical experience can solve their particular problem.

HS: Photogrammetry sounds like something that’s really expensive to get into. Do you need to have access to a lot of specialist gear?

AS: Historically, it’s been very expensive, but that’s because it’s been computationally expensive. The hard part is the software; that’s where the industry has really lagged behind. A big part of that is conflicting demands from users, you know: I want to scan really small, very detailed parts, and I also want to fly a drone around my house and make a 3D model. Those two use cases accumulate very different amounts of data, and that data is going to be processed differently.

Right now, there are apps on the App Store for under $10 that you can use on pretty much any iPhone, and a lot of Android phones, to create a 3D model within minutes. There’s this really great picture – my wife and I were in Chinatown in Boston, and it was a perfect day for 3D scanning: it was overcast, so the light was very diffused. Typically, you get really harsh directional light from the sun, which makes it hard to scan stuff without adjusting the aperture settings on your camera, and it’s kind of a mess.

We got this picture taken, and I saw the statue right next to us and thought, ‘Oh, that would be a great 3D scan’. And it is, without a doubt, one of the best scans I’ve ever made; you can make out individual teeth on the sculpture. This technology is so accessible now that it took me maybe ten minutes to walk around and get the photo set, and then maybe another five minutes after it finished rendering to clean up the base. So, photogrammetry has absolutely gotten to a point where it’s affordable, it’s usable, and you can get up and running very quickly.

With photogrammetry, it’s really hard to get pictures of shiny stuff: bronze, for example, is terrible. Because it swings between dark and very bright, there are almost no gradients in bronze – you’re either looking at gold, or you’re looking at black. A marble statue on an overcast day, though, is just perfect; it worked out great. 3D scanning a shiny object is sort of like taking a picture of a mirror: you don’t really get a picture of the mirror, you just get a picture of everything in front of it.

HS: Did you use an iPhone when you made the mask of your own face?

AS: That was actually done using a fairly high-end industrial 3D scanner. A friend of mine did it while we were at a trade show and had a bit of extra time. I sat on it for a while – what are you gonna do with a scan of your own face? Then, early in the pandemic, a company launched a bounty programme for a facial recognition spoof: something like $10,000 if you could beat their biometric verification. I thought, well, I’ve got a really nice-looking scan of myself; it probably wouldn’t be a stretch to try and beat this thing. That became my Covid quarantine project. I was at home, I had some time on my hands, and so I thought, what would a normal person do? I will make a hyper-realistic mask of my face.

I was going back and forth to make-up stores to get all kinds of different products to try and bring out highlights in the model. I was putting paint on the lips and stuff. It was creepy. It was absolutely awful. But it worked, in the sense that Google Photos will recognise the mask as a picture of me, so it will automatically tag me, which is cool. I’m able to set up an iPhone with the fake mask, but I’m not able to unlock it, so whatever Apple uses for Face ID does recognise it as a face, but it won’t recognise it as the same face twice. Google recognising it as a face, though – that was a huge win.

HS: You have a YouTube channel where you talk about 3D technology; you’ve also written some software to make it easier to create 3D models, haven’t you?

AS: Yeah, so I’m a big fan of low poly art. I grew up with GoldenEye on the Nintendo 64, so I have a very deep appreciation for doing more with less. You look at the textures of those models, and it’s like: there’s a guy, and that guy has a texture on him, so if you see a face, it’s just the eyes and the nose. And then he moves his head and you realise, oh, his head is basically a rectangle, you know?

One of the things I found was that there weren’t a lot of really intuitive workflows for creating low poly art without using programs like Blender, which is very powerful but not very intuitive, so it’s very difficult for beginners. I set out with a goal to design a web-based app for making low poly models. The URL is lowpoly3d.xyz, and the whole thing is written in JavaScript. You can basically upload a 3D model, select the decimation amount, and it removes a certain number of edges, collapsing the model into a low poly model, which you can then download.
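To give a sense of the edge-collapse workflow Andrew describes, here’s a minimal browser-side sketch using three.js and its STLLoader, SimplifyModifier, and STLExporter helpers. It only illustrates the general technique – it isn’t the actual lowpoly3d.xyz code – and the file name and the 75% reduction ratio are placeholder assumptions.

```typescript
// Minimal sketch of an edge-collapse decimation workflow with three.js.
// Illustrative only -- not the actual lowpoly3d.xyz implementation.
import * as THREE from 'three';
import { STLLoader } from 'three/examples/jsm/loaders/STLLoader.js';
import { STLExporter } from 'three/examples/jsm/exporters/STLExporter.js';
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';

const loader = new STLLoader();
loader.load('model.stl', (geometry) => {   // 'model.stl' is a placeholder path
  // Decide how aggressive the decimation should be: here we remove 75% of the vertices.
  const vertexCount = geometry.attributes.position.count;
  const verticesToRemove = Math.floor(vertexCount * 0.75);

  // SimplifyModifier collapses edges until the requested number of vertices is gone,
  // which is what gives the model its faceted, low poly look.
  const modifier = new SimplifyModifier();
  const lowPolyGeometry = modifier.modify(geometry, verticesToRemove);

  // Wrap the result in a mesh and export it back out as an STL for printing.
  const mesh = new THREE.Mesh(lowPolyGeometry, new THREE.MeshNormalMaterial());
  const stlString = new STLExporter().parse(mesh);
  console.log(`Reduced from ${vertexCount} to ${lowPolyGeometry.attributes.position.count} vertices`);
  // In a browser you would offer stlString as a download (e.g. via a Blob and an <a> element).
});
```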

One of the reasons I wanted to work on the low poly site is that, at this point, hardware has pretty well outstripped software in 3D printing. We have very well-made machines, built very inexpensively, across a wide range of technologies. But on the design side, you kind of have two options: professional parametric CAD, or sculpting programs like ZBrush or Maya. Both have pretty steep learning curves. So I wanted to create a tool that was easy to use, where somebody could say, ‘Hey, I downloaded a model, I’m going to reduce the poly count, and now I’m going to print it out, and then it’s done.’ That was the goal. And I’m always excited to see what people are making with it.

And then there’s also the STL to ASCII generator. Again, if you grew up around the time GoldenEye was out, you’ve probably seen ASCII art as well. You can upload a 3D model, and it’ll basically apply a filter on top of that model so it shows up as ASCII art. It’s pretty cool. They’re both designed to be as approachable as possible: you just drag in a model and start hitting buttons; you really can’t mess it up. And I think that’s very encouraging for beginners – maybe they got a 3D printer for Christmas and they want to learn more about the modelling side, but they don’t have access to expensive or difficult-to-learn tools.
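The ASCII side can be sketched in a similar way: three.js ships an AsciiEffect that renders the scene and remaps blocks of pixels to characters by brightness, which is one way to get the ‘filter on top of the model’ effect described above. Again, this is a rough illustration rather than the actual STL to ASCII generator code; the model file name and character ramp are assumptions.

```typescript
// Rough sketch of rendering a 3D model as ASCII art with three.js's AsciiEffect.
// Illustrative only -- not the actual STL to ASCII generator code.
import * as THREE from 'three';
import { STLLoader } from 'three/examples/jsm/loaders/STLLoader.js';
import { AsciiEffect } from 'three/examples/jsm/effects/AsciiEffect.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 100;
scene.add(new THREE.PointLight(0xffffff, 1).translateZ(200));

// AsciiEffect wraps a normal WebGL renderer and replaces its output with text:
// brighter pixels map to denser characters in the ramp below.
const renderer = new THREE.WebGLRenderer();
const effect = new AsciiEffect(renderer, ' .:-=+*#%@', { invert: true });
effect.setSize(innerWidth, innerHeight);
document.body.appendChild(effect.domElement);

new STLLoader().load('model.stl', (geometry) => {   // placeholder file name
  const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
  scene.add(mesh);
  // Spin the model and re-render the ASCII "filter" every frame.
  renderer.setAnimationLoop(() => {
    mesh.rotation.y += 0.01;
    effect.render(scene, camera);
  });
});
```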

HS: What do you see as the future, or possible futures, in the 3D printing multiverse?

AS: Let’s split this into two separate parts. On the hobbyist side, I think hardware is commoditised at this point. You can buy a printer that’s mechanically sound for about $200. That part’s really easy. The improvements will come in the form of ease-of-use and quality-of-life upgrades – things like: can the printer tell you when it’s out of filament? Can the printer detect when it’s not actually printing? Those features are available right now on more expensive machines, and as they become more prevalent, printers will become easier to use, and consumers will be more incentivised to try them.

There was this big boom in around 2014, when every newspaper had a think-piece asking ‘Is this the dawn of 3D printing?’ And it turned out that nobody knew how to design for 3D printing, so nobody was actually making their own stuff. For the last eight years or so, hardware has just continuously improved, and a lot of the slicing software has seen incremental improvement, but we’re still not at a place where the design software is matching that speed of improvement. So on the consumer side, as those quality-of-life features make their way down to the less expensive printers, and the software becomes more intuitive, I think that’s really going to help drive adoption.

HS: And how about the industrial side?

AS: As the technology gets faster and the materials become more durable, with better mechanical properties, you’ll get to a point where you buy a car, and your shifter knob is 3D-printed, and you’re not going to know it. Let’s say there’s a factory that makes cars, and they find out that one of their tail-lights – the bracket they ordered 5000 of – doesn’t fit onto the frame, and they need to print a shim. That’s where 3D printing will come in and save the day, because you can solve the problem quickly, cheaply, and reliably, without having to make tooling to mould the part. I see a real future for 3D printing in solving problems that are below the surface, not necessarily in people buying things for the novelty of owning something that’s 3D-printed.

HS: A lot of the time when I see 3D printing projects online, someone’s made the 1,000,000th Dungeons and Dragons or Warhammer miniature. I look at it and think I’ve seen this so many times before, and it’s just the same as an injection-moulded thing, but less efficient. Do you think 3D printing is still exciting?

AS: My Nana, my Italian grandmother – the first time she saw a 3D printer, it was an old printer made of wood, and it was printing a fork. She took one look at it and immediately said, ‘Oh, so you just draw something and it pops out’. Zero explanation required. It just immediately clicked in her head, and I thought that was such a powerful thing. So I try to make sure that, when I’m working on things like photogrammetry, or 3D scanning, or the low poly generator, I’m thinking about what people are going to think when they see it.

Let me take you back to that first camera analogy I used earlier. Just like there are millions of little Baby Yodas and Dungeons and Dragons figures floating around, there are also millions of photos of flowers online, right? But people aren’t going to stop taking pictures of flowers just because it’s already been done. There’s something really uniquely personal about making something yourself. When you download something and print it out, it’s an accomplishment – you made this; this is something you’ve brought into the world. I think that’s a really special thing. It also leads to all sorts of things – maybe they get curious; maybe they learn CAD and go and create more things. Or maybe they decide to become a photographer, and take pictures of flowers.
