Can the use of voxels instead of polygons expand the possibilities for 3D printing? Yes! Here I will explain why, and show an example.
The widespread adoption of volumetric representations can provide many benefits for additive manufacturing, both in the design stages and for production. In this post I will give some examples to demonstrate the practical value of using voxels over the polygon meshes that have become the standard way of storing and showing three dimensional objects.
Having worked as a specialist in 3D visualization tasked with preparing data for viewing on 3D displays, and with many years of 3D design and modeling experience, I have had the opportunity to face many challenges working with large data sets. After hitting the limits of memory and processing power again and again, I have developed a well-tested set of strategies for working with 3D data, and one of the most important lessons is that different data structures suit different situations, depending on the nature of the information.
Generating Test Data
I have generated a sample dataset from a mathematical object known as the Mandelbulb fractal. This object has a large amount of surface detail. While the actual fractal surface is infinitely detailed, here a volumetric rendering has been made, creating a cube of 8.6 billion binary (black or white) voxels that measures 2048 voxels on a side. I processed that data to smooth out the noise and down-sampled it to a 512-cubed set, which contains a more manageable 134 million voxels. This set is 8-bit greyscale, so its memory footprint is 134MB. I then used the Marching Cubes algorithm to generate a triangle mesh that approximates the shape. The resulting mesh has about 7 million polygons, and its size depends on the format. Typically, meshes in memory use about 12 bytes per vertex; the number of vertices is about half the number of faces, and each face occupies a few bytes itself, so for our example we can approximate the memory usage at around 120MB, roughly the same as the voxel version.
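To give a sense of how such a dataset is produced, here is a minimal sketch of the voxelization step in Python, at a tiny resolution so it runs quickly. The escape-time test is the standard power-8 Mandelbulb iteration (the White/Nylander formula); the grid size, extent, and iteration count here are illustrative choices, not the settings used for the real 2048-cubed render.

```python
import math

def mandelbulb_inside(cx, cy, cz, power=8, max_iter=8, bailout=2.0):
    """Escape-time test: iterate z -> z^power + c in spherical coordinates;
    points that never escape the bailout radius count as inside."""
    x = y = z = 0.0
    for _ in range(max_iter):
        r = math.sqrt(x * x + y * y + z * z)
        if r > bailout:
            return False
        theta = math.acos(z / r) if r > 0 else 0.0
        phi = math.atan2(y, x)
        rp = r ** power
        x = rp * math.sin(power * theta) * math.cos(power * phi) + cx
        y = rp * math.sin(power * theta) * math.sin(power * phi) + cy
        z = rp * math.cos(power * theta) + cz
    return True

def voxelize(n=16, extent=1.2):
    """Sample [-extent, extent]^3 into a flat list of n^3 binary voxels,
    ordered z-major (index = (z*n + y)*n + x)."""
    grid = []
    for k in range(n):
        for j in range(n):
            for i in range(n):
                # map voxel indices to coordinates at the cell centers
                px = extent * (2.0 * (i + 0.5) / n - 1.0)
                py = extent * (2.0 * (j + 0.5) / n - 1.0)
                pz = extent * (2.0 * (k + 0.5) / n - 1.0)
                grid.append(mandelbulb_inside(px, py, pz))
    return grid
```

At n = 2048 this loop becomes 8.6 billion tests, which is why the full render is a batch job rather than an interactive one.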
Comparing The Results
The memory footprints of our test datasets are well matched between the voxel and polygon versions, and the object is not an unrealistic example of a form one might want to print. The object is actually topologically simple in that it has no holes. While it has many tiny details, we are only concerned with details at a scale that can be printed. For the sake of this example, we will assume the voxels are 0.1mm in size, corresponding to an object roughly 2 inches wide. I also created compressed versions of the data, since that is a common way of preparing files for transmission, and it gives a better estimate of each file's "entropy": compression removes redundancy, giving us insight into how efficient each representation can be. Here are the file sizes for the different versions:
Original 2048 cubed binary set: 1,073,741,824 bytes (1GB)
Processed 512 cubed greyscale set: 134,217,728 bytes (134MB)
PLY isosurface mesh: 7 million polygons, 132,070,054 bytes (132MB)
STL isosurface mesh (internal voids and small shells removed): 327,767,084 bytes (328MB)
I have provided links to the compressed data on Thingiverse for your examination. Looking at the space used on disk, the benefit is obvious: the RAW data is 134MB, while the STL is 328MB. After compression the difference is even more dramatic. The STL shrinks to 135MB, but the RAW data drops to only 10MB! It is also important to remember that the RAW data still contains much more information. The isosurface mesh is only a two-dimensional membrane representing a single value, while the volume data is greyscale and contains 256 discrete values, from which a membrane can be generated at any threshold. The data essentially contains an extra "dimension", which in mathematical terms is not insignificant and has important real-world implications for 3D printing.
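The "compression as an entropy estimate" idea is easy to demonstrate. The sketch below uses synthetic stand-in data, not the actual Mandelbulb files: it zlib-compresses a smooth, structured byte volume and a pure-noise volume of the same size. The structured data collapses, while the noise barely shrinks at all.

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more redundancy."""
    return len(zlib.compress(data, 9)) / len(data)

n = 64
# A smooth, structured "volume": the byte value depends only on position,
# much like a greyscale voxel set of a coherent object.
structured = bytes((x + y + z) % 256
                   for z in range(n) for y in range(n) for x in range(n))
# Incompressible noise of the same size, for comparison.
noise = os.urandom(n ** 3)

print(compressed_ratio(structured))  # far below 1.0
print(compressed_ratio(noise))       # essentially 1.0
```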
How is this useful?
There are two important ways in which 3D printing is especially well matched to volumetric representation. First, it is very common to use lattice structures in additive manufacturing, because detail is often essentially free and is sometimes even beneficial to the efficiency of the production process. These structures can make an object so complicated that it becomes difficult to work with. While this is sometimes manageable with polygons, the solid model representation usually used in product design (boundary representation, a.k.a. B-Rep) is much more unwieldy, and generating dense internal lattices is simply not practical in Solidworks and other CAD applications. Since the detail today's printers can actually reproduce, in proportion to the object sizes they produce, can be represented as voxels within a memory footprint that is workable on today's computers, it is possible to generate and interact with these complicated models in volumetric form.
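As an illustration of how naturally lattices fall out of a volumetric representation, here is a sketch that carves a gyroid lattice into a voxelized solid (a sphere standing in for an arbitrary part) with nothing more than a per-voxel function evaluation, and no boolean solid-modeling operations at all. The cell size and wall thickness are arbitrary illustrative parameters.

```python
import math

def gyroid_lattice(n=32, cell=8.0, thickness=0.35):
    """Return a flat z-major list of n^3 booleans: voxels that lie inside
    the part (here, a sphere) AND near the gyroid minimal surface."""
    out = []
    for z in range(n):
        for y in range(n):
            for x in range(n):
                # gyroid implicit function, one period per `cell` voxels
                gx, gy, gz = (2 * math.pi * v / cell for v in (x, y, z))
                g = (math.sin(gx) * math.cos(gy)
                     + math.sin(gy) * math.cos(gz)
                     + math.sin(gz) * math.cos(gx))
                # the "part": a sphere centered in the grid
                in_part = ((x - n / 2) ** 2 + (y - n / 2) ** 2
                           + (z - n / 2) ** 2) < (n / 2 - 1) ** 2
                out.append(in_part and abs(g) < thickness)
    return out
```

Doing the same thing in a mesh pipeline means booleaning millions of triangles; here it is one comparison per voxel, and the loop parallelizes trivially.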
The second characteristic has to do with material representation. The voxel data could easily correspond to a lookup table referring to hundreds of materials. Currently, if you want to take advantage of Objet's multi-material printing, you must provide a separate STL mesh for each material combination. If you want a smooth transition from hard to soft material, for example, you have to dissect your model into many complicated non-planar volumetric segments, a near-impossible feat for all but the simplest forms. Those segments must then be converted into polygons, using extremely small facets to ensure continuity of the material.
Let us imagine we want to print our fractal object using Objet's multi-material printing: we want to modify the surface finish by creating a "skin" of soft material over a harder structure. In this case, the volume version of the object remains unchanged, but the polygonal version not only doubles but triples in size, because the outer skin must have both an inside and an outside surface. The model in STL format is now about a gigabyte in size and has more than 20 million polygons. The voxel data, however, is still 10MB zipped, and can fit in video memory to be displayed at interactive rates (modern GPUs have 3D texturing capability).
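To make the "soft skin over a hard core" example concrete, here is a sketch of what it looks like in voxel terms: the material assignment is a one-voxel morphological erosion, after which each voxel simply holds a material ID. The IDs (0 = empty, 1 = hard core, 2 = soft skin) and the flat-list layout are illustrative conventions of mine, not a real printer format.

```python
def add_skin(solid, n):
    """Given a flat z-major list of n^3 booleans, return a flat list of
    material ids: 0 = empty, 1 = hard core, 2 = one-voxel soft skin."""
    neighbors = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1))

    def at(x, y, z):
        # out-of-bounds counts as empty
        if 0 <= x < n and 0 <= y < n and 0 <= z < n:
            return solid[(z * n + y) * n + x]
        return False

    mats = []
    for z in range(n):
        for y in range(n):
            for x in range(n):
                if not at(x, y, z):
                    mats.append(0)
                elif all(at(x + dx, y + dy, z + dz)
                         for dx, dy, dz in neighbors):
                    mats.append(1)  # fully surrounded: hard interior
                else:
                    mats.append(2)  # on the boundary: soft skin
    return mats

# Example: a 6x6x6 solid cube inside an 8^3 grid.
n = 8
solid = [1 <= x <= 6 and 1 <= y <= 6 and 1 <= z <= 6
         for z in range(n) for y in range(n) for x in range(n)]
mats = add_skin(solid, n)
```

The mesh equivalent of this single pass is two nested watertight shells; in voxel form the skin costs nothing extra, and a thicker skin is just more erosion passes.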
How do we use it?
Much like the infrastructure we've built around combustion-engine vehicles instead of electric ones, we are now finding that the transition to the next technology is challenging. There are several software packages that can help work with this type of data, but the ecosystem is not yet mature, and there is no integration between the data types. For example, both Materialise and Netfabb have software for generating internal lattices (what Netfabb calls "Selective Space Structures"), but all of this software accepts only polygon meshes as both input and output, which throws away the potential benefit of volumetric representation. I have discussed with Alexander Oster, the CEO of Netfabb, the possibility of completely eliminating polygons from the 3D printing pipeline. I sent him some test data, and he said he would look into supporting voxel-based input that would be converted directly to output usable by 3D printers. [Edit: July 19th, 2012 - I've just received more information on the "Volumetric Printing" feature in the new Netfabb 4.9.2 release. This feature refers to a method of calculating the feed rate of the filament based on its volume rather than trial and error, for better control of material deposition. Slice import is not yet implemented (three weeks would have been a miraculous pace to get that done). Hopefully we'll see it in the next round.] Unfortunately, home printers use FDM technology, which means a tool path (a drawing pattern for the extruder nozzle) must still be generated from the volume data. This limits the utility of the approach, though it will still have advantages for multi-material printing. Volume input should still be adopted so the technology can mature; when a voxel-based printing technology becomes affordable for the home, the software to support it will be ready.
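For completeness, here is what "slice import" would actually consume: each printer layer is just one z-plane of the volume, thresholded at the desired isovalue. This is a minimal sketch with a made-up gradient volume; a real pipeline would also resample to the printer's native layer thickness and pixel pitch.

```python
def slice_layer(volume, n, z, threshold=128):
    """Return one layer of a z-major n^3 greyscale volume as a binary
    raster: True where the voxel value reaches the threshold."""
    base = z * n * n
    return [[volume[base + y * n + x] >= threshold for x in range(n)]
            for y in range(n)]

# Toy greyscale volume whose value simply increases with height.
n = 4
volume = [64 * z for z in range(n) for _ in range(n * n)]

bottom = slice_layer(volume, n, 0)  # entirely below the threshold
top = slice_layer(volume, n, 3)     # entirely at or above it
```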
The next step from here is integrating this functionality directly into solid modeling applications, so we can define volumetric material properties and structures while maintaining the fidelity of the solid model. The application that comes closest to this level of integration is Sensable's FreeForm haptic modeling system. Unfortunately, the user can output only polygon meshes or B-Rep solids from this system, but, as Sensable was just purchased by Geomagic, I am sure we will be seeing some changes soon.
There are other benefits of voxel representations related to manifoldness and hole-filling; I will cover these in future posts.
You can find the Mandelbulb on Thingiverse here.
Read more on the subject in my post on Engineering.com here: