Let's see...

Vectorisation for plots.

A few posts ago I plotted a bunch of toruses, which was my first attempt at using a view-graph to render the contours of 3D models. I was pretty happy with the result, except that a) it was in C++, which was becoming a chore, and b) it was unoptimised and had a whole lot of unwanted artefacts because the pen lifted and dropped way too often.

Since then my code has switched from Janet + C++ to Halfp + Rust, and I've been able not only to implement the view-graph again but also to optimise the output. Well, I thought so... it seemed fine in my tests, but when trying again to render the toruses there are still a few unwanted marks due to the pen lifts and drops. Plus it's always super wobbly on the far side of the plots (usually the top), since the arm holding the pen on my plotter just sticks out about 20cm and has no counter-weight or any other form of stability. It's tempting to maybe buy a new, better plotter, but we're talking probably $1,000 AUD for something decent, which is not quite justified. Yet. :)

So here's this weekend's attempt. Better, but it still needs work. While watching it plot I was wondering why it was still lifting the pen so often, especially since it usually just puts the pen straight back down without moving anywhere. I need to debug the optimiser.

toruses_slightly_better
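To make the problem concrete while I debug the real optimiser: the pass I'd expect to catch the worst of it is one that joins paths which end exactly where the next one begins, so the pen never lifts just to drop again in the same spot. Here's a minimal sketch of that idea -- not my actual optimiser, and the names and tolerance are just illustrative:

```rust
// Sketch of one optimiser pass: if a path ends (within a small tolerance)
// where the next one begins, join them so the pen stays down instead of
// lifting and dropping in place. Illustrative only, not the real optimiser.

type Point = (f64, f64);

fn dist(a: Point, b: Point) -> f64 {
    ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2)).sqrt()
}

/// Merge consecutive polylines whose endpoints coincide within `eps`.
fn merge_paths(paths: Vec<Vec<Point>>, eps: f64) -> Vec<Vec<Point>> {
    let mut out: Vec<Vec<Point>> = Vec::new();
    for path in paths {
        if path.is_empty() {
            continue;
        }
        // Does the previous path end where this one begins?
        let continues = match out.last() {
            Some(prev) => dist(*prev.last().unwrap(), path[0]) <= eps,
            None => false,
        };
        if continues {
            // Append without the duplicated point: no pen lift needed.
            out.last_mut().unwrap().extend(path.into_iter().skip(1));
        } else {
            out.push(path);
        }
    }
    out
}

fn main() {
    let paths = vec![
        vec![(0.0, 0.0), (1.0, 0.0)],
        vec![(1.0, 0.0), (1.0, 1.0)], // starts exactly where the previous one ended
        vec![(5.0, 5.0), (6.0, 5.0)],
    ];
    let merged = merge_paths(paths, 1e-6);
    assert_eq!(merged.len(), 2); // the first two paths become one continuous stroke
    println!("{:?}", merged);
}
```

The real job is messier than this, of course -- paths arrive in arbitrary order and may need reversing before they can be joined -- but this is the basic move the plot above is clearly missing in places.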

After generating solids like dodecahedrons and toruses from code, I thought I could next try an imported object, like the Utah Teapot!

The problem, though, is that it's made up of Bézier surface patches which need to be converted to triangles or quads, which is actually great in a way -- you can decide how finely to tessellate the surfaces for your purpose. Maybe you want it chunky, maybe you want it smooth. But the official geometry isn't purely convex, or even particularly solid. The spout and handle intersect with the main pot, just pushing through to the inside, unseen. This doesn't work very well with my view-graph code.
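For the curious, evaluating and tessellating one of those patches is pretty straightforward. Here's a rough sketch, assuming a bicubic patch stored as a 4x4 grid of control points (the teapot is famously 32 of these); names and layout are illustrative, not the teapot file format:

```rust
// Sketch: evaluate a bicubic Bézier patch on a regular (u, v) grid and emit
// triangles. The resolution `n` is the "chunky vs smooth" knob.

type Vec3 = [f64; 3];

/// Cubic Bernstein basis functions at parameter t.
fn bernstein3(t: f64) -> [f64; 4] {
    let s = 1.0 - t;
    [s * s * s, 3.0 * s * s * t, 3.0 * s * t * t, t * t * t]
}

/// Evaluate a 4x4 control-point patch at (u, v).
fn eval_patch(ctrl: &[[Vec3; 4]; 4], u: f64, v: f64) -> Vec3 {
    let bu = bernstein3(u);
    let bv = bernstein3(v);
    let mut p = [0.0; 3];
    for i in 0..4 {
        for j in 0..4 {
            let w = bu[i] * bv[j];
            for k in 0..3 {
                p[k] += w * ctrl[i][j][k];
            }
        }
    }
    p
}

/// Tessellate the patch into n x n quads, each split into two triangles.
fn tessellate(ctrl: &[[Vec3; 4]; 4], n: usize) -> Vec<[Vec3; 3]> {
    let at = |i: usize| i as f64 / n as f64;
    let mut tris = Vec::new();
    for i in 0..n {
        for j in 0..n {
            let a = eval_patch(ctrl, at(i), at(j));
            let b = eval_patch(ctrl, at(i + 1), at(j));
            let c = eval_patch(ctrl, at(i + 1), at(j + 1));
            let d = eval_patch(ctrl, at(i), at(j + 1));
            tris.push([a, b, c]);
            tris.push([a, c, d]);
        }
    }
    tris
}

fn main() {
    // A flat dummy patch; the real teapot supplies actual control points.
    let flat = [[[0.0; 3]; 4]; 4];
    println!("{} triangles", tessellate(&flat, 8).len()); // 8 x 8 x 2 = 128
}
```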

The view-graph takes a mesh which I've generated in a DCEL format, which is really efficient and powerful for well-formed meshes, like those you'd generate or use in a video game. But for arbitrary geometry, the view-graph is lost. So I had a choice: fix the teapot to fit into a DCEL, or come up with a different way to render it.
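For context, a DCEL (half-edge) mesh looks roughly like this -- a sketch with illustrative field names, not my actual structs. The important property is that every edge is two directed half-edges, each knowing its twin, its face, and the next half-edge around that face, which only really holds together when the mesh is closed and every edge borders exactly two faces:

```rust
// Minimal half-edge (DCEL) layout. Indices into flat Vecs stand in for
// pointers; field names are illustrative.

struct Vertex {
    position: [f64; 3],
    half_edge: usize, // one outgoing half-edge
}

struct HalfEdge {
    origin: usize, // index of the vertex this half-edge starts at
    twin: usize,   // the opposite half-edge on the neighbouring face
    next: usize,   // next half-edge around this face
    face: usize,   // the face this half-edge borders
}

struct Face {
    half_edge: usize, // any half-edge on this face's boundary
}

struct Mesh {
    vertices: Vec<Vertex>,
    half_edges: Vec<HalfEdge>,
    faces: Vec<Face>,
}

impl Mesh {
    /// Walk the boundary of a face, collecting its vertex indices.
    fn face_vertices(&self, face: usize) -> Vec<usize> {
        let start = self.faces[face].half_edge;
        let mut out = Vec::new();
        let mut he = start;
        loop {
            out.push(self.half_edges[he].origin);
            he = self.half_edges[he].next;
            if he == start {
                break;
            }
        }
        out
    }
}
```

That twin/next structure is exactly what the intersecting spout and handle break: where geometry pushes through a surface, edges no longer have a tidy pair of faces to point at.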

And I chose the latter, because I'm thinking that in the future I may want to render 3D objects and scenes which just aren't easily put into a DCEL or which, like the teapot, come from a third party. So how to render non-DCEL objects? Well, here's my first go -- instead of toruses, here are 25 randomly rotated teapots.

teapots_rough

It looks a bit stylised, which is definitely a goal for this project eventually, but for now I'm aiming for simplicity. Still, this is an entirely new method of rendering and I think it's working pretty well. The approach is to ray-trace the scene to a z-buffer and then perform edge detection and vectorisation. The ray-tracing and edge detection via a Sobel filter are pretty straightforward (thanks to a BVH crate for the rays and a cool skeletonise crate for the post-processing), but I was worried about vectorising the raster image containing the edge pixels.
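The Sobel pass over the z-buffer is simple enough to sketch in full. This isn't the exact code -- it skips the BVH ray-tracing and the skeletonising entirely -- but it's the idea: mark any pixel where the depth gradient is large, i.e. a silhouette or a sharp depth discontinuity.

```rust
// Sketch of depth-buffer edge detection with a 3x3 Sobel filter.

/// `depth` is a row-major z-buffer of size width x height.
/// Returns a boolean mask of edge pixels.
fn sobel_edges(depth: &[f64], width: usize, height: usize, threshold: f64) -> Vec<bool> {
    let at = |x: usize, y: usize| depth[y * width + x];
    let mut edges = vec![false; width * height];
    for y in 1..height - 1 {
        for x in 1..width - 1 {
            // Sobel kernels for the horizontal and vertical depth gradients.
            let gx = -at(x - 1, y - 1) + at(x + 1, y - 1)
                - 2.0 * at(x - 1, y) + 2.0 * at(x + 1, y)
                - at(x - 1, y + 1) + at(x + 1, y + 1);
            let gy = -at(x - 1, y - 1) - 2.0 * at(x, y - 1) - at(x + 1, y - 1)
                + at(x - 1, y + 1) + 2.0 * at(x, y + 1) + at(x + 1, y + 1);
            edges[y * width + x] = (gx * gx + gy * gy).sqrt() > threshold;
        }
    }
    edges
}

fn main() {
    // A 6x6 buffer with a "near" square in the middle of a far background.
    let (w, h) = (6, 6);
    let mut depth = vec![10.0; w * h];
    for y in 2..4 {
        for x in 2..4 {
            depth[y * w + x] = 1.0;
        }
    }
    let edges = sobel_edges(&depth, w, h, 1.0);
    println!("{} edge pixels", edges.iter().filter(|&&e| e).count());
}
```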

There are a couple of libraries, even a decent Rust crate, which will do this for me. But they're not really designed for my purpose -- they generally work with a 2D image, group similar colours together into areas, and then try really hard to smooth out any pixel-oriented artefacts like aliasing and staircasing. They're not trying to generate only lines.

But then I realised I still have the geometry for the scene. All of my contours will actually have equivalent triangle edges, so all I need to do is take the edge pixels in the raster image and work out which edge they belong to. This is actually pretty tricky, and it's why the teapots above look so scratchy (along with persistent faults in the line-optimiser, like the toruses up top). Working out which edge pixel belongs to which triangle edge needs a bunch of heuristics for cases where the pixels don't quite hit the mesh but just miss it, or where we're looking at a surface almost perpendicular to the eye and several geometry edges could be the one we're after.
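To give a flavour of the matching step, here's a simplified sketch. It assumes the ray-trace pass already recorded a hit point and triangle for each edge pixel; the tolerance and names are illustrative, and the hard heuristics -- near-misses and glancing surfaces -- are exactly the part this doesn't handle.

```rust
// Sketch: given a pixel's hit point on a triangle, pick whichever of that
// triangle's three edges the pixel most likely belongs to, or give up if
// nothing is convincingly close.

type Vec3 = [f64; 3];

fn sub(a: Vec3, b: Vec3) -> Vec3 { [a[0] - b[0], a[1] - b[1], a[2] - b[2]] }
fn dot(a: Vec3, b: Vec3) -> f64 { a[0] * b[0] + a[1] * b[1] + a[2] * b[2] }
fn len(a: Vec3) -> f64 { dot(a, a).sqrt() }

/// Distance from point `p` to the segment `a`..`b`.
fn point_segment_dist(p: Vec3, a: Vec3, b: Vec3) -> f64 {
    let ab = sub(b, a);
    let t = (dot(sub(p, a), ab) / dot(ab, ab)).clamp(0.0, 1.0);
    let closest = [a[0] + t * ab[0], a[1] + t * ab[1], a[2] + t * ab[2]];
    len(sub(p, closest))
}

/// Return the index (0, 1 or 2) of the triangle edge nearest the hit point,
/// or None if the hit is too far from every edge (a near-miss, or a surface
/// seen almost edge-on where several candidates are equally plausible).
fn nearest_edge(hit: Vec3, tri: [Vec3; 3], tolerance: f64) -> Option<usize> {
    let edges = [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])];
    let mut best: Option<(usize, f64)> = None;
    for (i, &(a, b)) in edges.iter().enumerate() {
        let d = point_segment_dist(hit, a, b);
        if d <= tolerance && best.map_or(true, |(_, bd)| d < bd) {
            best = Some((i, d));
        }
    }
    best.map(|(i, _)| i)
}

fn main() {
    let tri = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]];
    // A hit point very near the edge from tri[0] to tri[1].
    let hit = [0.5, 0.01, 0.0];
    assert_eq!(nearest_edge(hit, tri, 0.05), Some(0));
}
```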

But I think there are ways to improve this, which is what I'll work on next.