
ThreeForm Is Now On Patreon


ThreeForm is now on Patreon, creating design on the body: Wearable Tech, printable body scans, and fashion accessories.

https://www.patreon.com/ThreeForm

Here is my write-up for my profile there:

What is ThreeForm?
I describe ThreeForm as the physical interface between the body and technology. Creations that interact with the body should respect the body, and the approach used by ThreeForm makes this possible. As a long-term initiative it pursues the esoteric technical goals described below, but as my guest here on Patreon, you’d probably rather know how these ideas are manifested in forms you can download, print, and enjoy right now. ThreeForm creates:

 - 3D printable Figurines and Characters. These sculptures are based on real people. They are professionally optimized and designed for 3D printing at home. I do “3D Photoshoots” with a variety of performers, dancers, and models to create a library of poses. These are carefully sculpted to perfection and tested to ensure they print beautifully. Some are further developed to include custom outfits and accessories that are used in real life fashion shows and performances, and many of those will be shared here for you to print in miniature. Your contributions are shared with the models, so continued support ensures my team and I can create many more unique poses and outfits.
 - Wearable Technology. These models take the form of cases, accessories and designer packaging for popular devices and computing platforms like Arduino, Raspberry Pi, Adafruit products, and OpenBCI.


 - Fashion Accessories. Smoothly transitioning from wearable technology, the designs are morphing into fashion accessories. 3D printing enables universal customization, so our technology no longer needs to take the same form. It can be stylized to our liking. Expect to find customization templates that may be entirely about style, or that contain a functional element from the wearable tech category.

The mission of ThreeForm is to explore design on the body by leveraging two synergistic technologies: 3D scanning and 3D printing. I conceptualize these as a portal between the digital world and the physical world. In the context of the body, this enables design to respect the body by conforming perfectly to our shape and movement. ThreeForm seeks to develop this approach by solving mechanical and material challenges in 3D printing, refining design techniques, and developing software to optimize workflows, making the combination of these technologies practical and useful by reducing time, labor, and cost while increasing functional capabilities.

In addition to experimental styles and gadgetry (which is the fun part), I also develop research tools and assistive devices such as the Ultracortex platform developed in collaboration with OpenBCI. Current laws limit the extent to which 3D printing can be applied in a medical context, and devices take a lot of time and money to gain approval, so while legislation will take several years to catch up, I am not waiting around. For five years I’ve been developing solutions to make these technologies more applicable to improving people’s lives, and these will all transition from research and experimentation to daily life as soon as possible.

While technology and society will continue to evolve and adapt to new technologies, a lot is possible right now, and I would like to share the excitement of exploring that with you. Your support is greatly appreciated.

 

Reward Tiers Offered on Patreon:

$3 Patreon Tier:
At this tier you’ll receive several new designs per month, as well as access to all previous designs released at this tier. The figurines at this tier feature basic editing to ensure printability, and enough detail to print around 6″ high at the resolution of a typical home printer.

  • 4 printable reconstructed scans with basic edits
  • 4 raw body scan poses per month
  • Preview renderings, animations, and photos
  • 1 new wearable tech electronics case per month

$5 Patreon Tier: Body Scan Package

Editing scan data is a lot of work. At this tier, scans have had 2-5 hours of editing to enhance detail. These scans are optimized to print around 8-9″ tall. Manual support structures are often added for easier, more reliable, higher quality prints. Wearable tech packages are more complete, with a variety of options included.

  • 4 1/8th scale sculpted figurines per month
  • 4 additional (for 8 total) raw body scan poses per month
  • 1 Wearable tech variety archive with at least 4 variants included.
  • Hi-res artwork based on scans
  • 1 new printable small wearable accessory per month
  • Works in progress and early releases
  • Plus all previous rewards

$10 Patreon Tier: Digital Avatar Package

This tier features fully developed avatars/characters with complete outfits, at professional quality. These are delivered as a package with a variety of 3D formats, part configurations, and polygon counts, including versions that take advantage of multi-material, dual-extrusion, or full-color printers.

  • 1 new high-detail digital avatar package per month
  • 2 additional 1/8th scale sculpted body scans
  • 2 bonus 3D printable designs
  • Plus all previous rewards

$15 Patreon Tier: Sculpture Package

In addition to figurines and wearable designs, I also create original work based on the scans.

  • 1 new sculpture per month
  • Plus all previous rewards

$20 Patreon Tier: Digital Fashion Package

More about the scans and process:

The scans used in this work were made in association with various other artists and models. The scans capture their natural shape and pose with limited detail depending on which scanning method is used. When I started ThreeForm I used a method called photogrammetry which is based on photographs.

I still occasionally use this for clients who do not have access to a scanner, but it is fairly crude and requires a lot of cleanup. I now use laser- or structured-light-based systems.

There is a multi-stage process of cleaning up the data (removing noise and errors) and reconstructing the scan to fill in missing areas. At this stage the model is printable, but for select poses I do additional sculpting work to add missing details like fingers and toes. I also smooth out the seams of any garments to restore their natural shape. The models are not nude during scanning, but wear minimal undergarments to ensure accurate measurements and shape for custom outfits. At this stage each scan has taken 5-15 hours of work. For exceptional poses that will be used to build an avatar, full replacement of the head is required to add realistic eyes, hair, etc. Many outfits have several layers made of hundreds of spline curves and surfaces, composed of several thousand individual control points.
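For anyone curious what that first clean-up stage can look like in code, here is a minimal sketch using the open-source trimesh library. It illustrates the general idea under simple assumptions (a single STL scan, placeholder filenames); it is not my actual pipeline.

```python
# Minimal scan clean-up sketch using trimesh (illustrative, not my exact workflow).
# Assumes the raw scan loads as a single mesh; filenames are placeholders.
import trimesh

scan = trimesh.load('raw_scan.stl')

# Remove disconnected noise: keep only the largest connected piece.
parts = scan.split(only_watertight=False)
body = max(parts, key=lambda m: m.area)

# Fill small holes left by occlusion during scanning.
trimesh.repair.fill_holes(body)

# Light Laplacian smoothing to reduce scanner noise without losing shape.
trimesh.smoothing.filter_laplacian(body, lamb=0.5, iterations=5)

body.export('cleaned_scan.stl')
```

The real work, of course, is in the manual reconstruction and sculpting that follows this kind of automatic pass.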

By the time final meshes are exported, a full outfit could take around 100 hours. The separate meshes are imported into rendering software to create realistic lighting and materials, which may take several dozen more hours, and may take several days to process the renderings in the case of an animation. Bringing the designs into reality with 3D printing is an even more elaborate process. There are many different printing processes, materials with varying properties, a variety of post-processing techniques to change those properties, and many solutions available for mechanical enhancement such as coatings, textiles and straps.


Assembly can include adhesives, fasteners, sewing, or heat bonding. I do want to better document this if there is interest. If you have any questions or requests, please ask.

Designing The Ultracortex

The 10-20 electrode placement map

This is a blog post I wrote for the OpenBCI Kickstarter.

Designers who create products that function on and around the body have long used statistical measurements derived from anthropometric studies. In cases where custom fitting is impractical, the goal is to properly fit as many people as possible, perhaps offering a selection of sizes or adjustment features for fine-tuning. For the Ultracortex project, it was apparent that simple measurements like head circumference were not nearly good enough for our purposes.

One of the most important features of the Ultracortex is accurate electrode placement, which uses the 10-20 research standard. This standard establishes the locations of the electrodes using a parametric definition based on relationships between key dimensions, much like those used in CAD software. The system is defined by starting with two anatomical reference points, the Nasion (the depression at the top of the nose) and the Inion (the bump at the back of the skull). From these points, a series of divisions of the connecting lines establishes the electrode locations. A research-grade system requires these measurements to be taken on the scalp during set-up, which is a time-consuming and laborious process. An EEG headset is much faster than manual placement, but often compromises the accuracy of the locations, and so affects the accuracy of the data. To make use of the vast quantity of research data collected using the 10-20 system, the electrode locations need to be as accurate as possible. In most cases, flexible arms or stretchable fabric are used to roughly approximate the placement; the trade-off of fast set-up time for accuracy is often worth it.

OpenBCI needed a headset that would be part of a platform usable as a serious research tool. While the application is more rigorous than a toy or game interface, it still needed to be quick and easy to use. Accuracy and stability of the electrodes were given a very high priority, along with the requirements that the headset be comfortable, fully adjustable, and, finally, producible on a desktop 3D printer. Bringing all these components together in one design, the Ultracortex was born.
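To make the proportional logic of the 10-20 definition concrete, here is a small sketch covering only the midline electrodes along the nasion-to-inion arc. The fractions are the standard 10-20 proportions, but the code itself is just a toy illustration, not the Ultracortex placement tooling.

```python
# Midline 10-20 electrode placement as fractions of the nasion-to-inion arc.
# Real placement works over the full 3D scalp surface; this only shows the
# proportional logic the standard is named after.
NASION_TO_INION_FRACTIONS = {
    'Fpz': 0.10,  # 10% back from the nasion
    'Fz':  0.30,
    'Cz':  0.50,  # vertex, midway between nasion and inion
    'Pz':  0.70,
    'Oz':  0.90,  # 10% forward of the inion
}

def midline_positions(arc_length_mm: float) -> dict:
    """Distance (mm) of each midline electrode from the nasion, measured
    along the scalp arc, for a given nasion-to-inion arc length."""
    return {name: frac * arc_length_mm
            for name, frac in NASION_TO_INION_FRACTIONS.items()}

# Example: a 360 mm nasion-to-inion arc puts Cz 180 mm back from the nasion.
print(midline_positions(360.0))
```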

Every head is different, not just in size but in shape. Rather than simply measuring heads, we knew that 3D scanning would be the best method to capture the shape of the cranium and establish 3D locations for the electrodes. For the final headset design, a framework would hold the electrodes at the correct position and angle, and adjustable electrode holders would allow fine-tuning for different head shapes. Having created many wearable 3D printed designs under my ThreeForm brand, I had seen that in many cases, the more accurately a design fits one person, the less accurately it fits everyone else. To create the best possible balance across all head shapes, we used 3D scanning to fit the parametric model to many heads, then averaged the 3D electrode locations together.

We had considered using available MRI datasets, but they tend to be fairly low resolution (256^3 voxels), and they often crop out the scalp, since the brain is the area of interest. Surface scanning is far more accurate; using this approach, we were able to establish electrode locations with precision better than a millimeter when using highly accurate white-light scanning. I happened to have my own head already scanned from a previous project to get the ball rolling, but to create our database of heads to average together, we sought out volunteers willing to shave their heads for scanning. We made an effort to get the most diverse set of samples possible.

 

Structured light scanning at 3D Systems in NYC

 

Since high-accuracy industrial scanners were not always available, we double-scanned some subjects to compare the high-quality data with more readily available consumer-grade 3D sensors like the 3D Systems Sense. While the detail of these devices is not nearly as high, the accuracy on a head is surprisingly good. As long as the scene contains sufficient geometry for alignment, the continuous surface of a shaved scalp captures nicely.

 

Blending craniums for the Meta Dome

 

We aligned the different heads and visualized the differences. You would think one head is much like another, but once you see them in 3D, the differences are readily apparent, and the surface distance can be very large even though the heads are about the same size. This confirmed that our process, while a bit elaborate, would be much more accurate than simply choosing one random “medium size” head, and it also gave us data on how much adjustment range was needed for the electrodes.
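For readers who want to reproduce this kind of comparison, below is a minimal sketch of a per-vertex surface-distance measurement, assuming two already-aligned scalp meshes and using the open-source trimesh library (placeholder filenames; not the exact tools we used).

```python
# Per-vertex distance from one aligned scalp scan to another, assuming both
# meshes share a coordinate frame and are in millimeters.
import numpy as np
import trimesh
from matplotlib import cm

head_a = trimesh.load('head_a_aligned.stl')
head_b = trimesh.load('head_b_aligned.stl')

# For every vertex on head A, find the closest point on head B's surface.
_, distances, _ = trimesh.proximity.closest_point(head_b, head_a.vertices)

print(f"mean surface distance: {distances.mean():.2f} mm")
print(f"max surface distance:  {distances.max():.2f} mm")

# Color head A by distance to visualize where the shapes diverge most.
rgba = (cm.viridis(distances / distances.max()) * 255).astype(np.uint8)
head_a.visual.vertex_colors = rgba
head_a.export('head_a_distance_colored.ply')
```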

To create our average geometry, we created a curve network similar to the 10-20 system on each head. Each head resulted in a mesh with the same structure but a different shape. The consistent topology allowed us to morph between different heads or groups of people, or to blend all the heads into a single average. The resulting average of all heads created what we call the “Meta Dome.” As we scan more heads, they are added to the Meta Dome database and accuracy increases. This also allows us to select various population groups, so we could, for example, make a headset tuned for the average woman of a certain demographic, or a child of a certain age, rather than subjecting everyone to a single population-wide average. For the Ultracortex Mark III, our average was very consistent with typical circumference measurements, according to the anthropometric data we referenced. We offset these appropriately for large and small sizes, and provide typical head-measurement ranges to help people choose. The adjustment range of the electrodes creates enough overlap that there is a properly fitting Ultracortex for just about everyone.
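Because every remeshed head shares the same vertex count and ordering, the blending itself reduces to simple arithmetic on vertex positions. Here is a hedged sketch of that idea; the function names are illustrative and this is not the actual Meta Dome tooling.

```python
# Topology-consistent blending/morphing, assuming every remeshed head is an
# (N, 3) vertex array with identical vertex count and ordering.
import numpy as np

def blend_heads(vertex_sets: list, weights=None) -> np.ndarray:
    """Weighted average of (N, 3) vertex arrays with identical topology.
    Equal weights give a population-wide, 'Meta Dome' style average."""
    stack = np.stack(vertex_sets)               # shape (num_heads, N, 3)
    return np.average(stack, axis=0, weights=weights)

def morph(head_a: np.ndarray, head_b: np.ndarray, t: float) -> np.ndarray:
    """Linear morph between two heads: t=0 gives A, t=1 gives B."""
    return (1.0 - t) * head_a + t * head_b
```

Selecting a population group is then just a matter of choosing which vertex sets (or which weights) go into the average.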

 

The full design process from 3D scan to custom-fit Ultracortex

 

Moving forward, we will increase the size of our head-scan database and consider any other factors we can find that influence function and comfort, such as hair styles and different types of electrodes. We are very excited to also have the opportunity to design fully custom headsets, which frees us from the constraints of general fitting and allows us to truly focus on getting the best possible performance. In some cases we will also have MRI data to target particular brain regions as accurately as possible. We have already done preliminary analysis on general datasets measuring brain-to-scalp distance, to get as close as possible to the area of focus.

 

Brain-to-scalp distance visualization from MRI data

 

Measuring electrical activity at the scalp surface is made much harder by the distance and the material between the location of the activity and the spot where we measure. Hair, the skin of the scalp, the skull, cerebrospinal fluid, and other layers all diffuse and attenuate the signal we seek to explore. There is a lot of room for improvement in how the brain is analyzed using EEG techniques. With the accurate electrode placement of our 3D-printed headsets, the high-quality data produced by the OpenBCI board, and the advanced signal processing techniques our software team is applying in their custom algorithms, OpenBCI is pushing forward the state of the art in DIY brain research.

Digital Apparel Defined

The BodyHub avatar generator interface from Body Labs

The field of development known as “Wearable Technology” suffers from a labeling issue. The technologies referred to are usually specific products that are not just wearable, but must be worn; they are designed to function in the context of the body. What constitutes “technology” is also fairly vague, since the phrase refers not just to electronic gadgets, but to all sorts of innovations like specialized textiles, materials with engineered properties, prosthetic devices, or sensors and displays, in addition to “smart” products that have some level of computational ability and often communicate with other technology.

I’ve broken Wearable Tech down into categories before to provide an organizational framework for my work in custom-fit 3D printed products, such as in my presentation at the 2013 Rapid convention (PDF transcript, video). Rather than defining these things for the audience every time we mention or discuss this family of technologies, it would be much easier to find relevant subsets that can be grouped under a single label. The phrase I use to describe my area of work is “Digital Apparel”. Of the many word choices available, “apparel”, from the Old French “apareillier”, meaning “to make fit”, is an ideal pairing with the digital and technological processes that distinguish the subject.

Digital Apparel includes a collection of information that defines a product worn on the body. The definition includes a three-dimensional geometric definition that can be adjusted for different individuals. Customization can be defined through simple sizing variations or through parametric and generative components. In addition to geometry, an item of Digital Apparel must be designed with specific processes and materials in mind, and with specific intent in terms of function and aesthetics. It may also include transformational, iterative, or reactive geometries, which are currently covered by the oddly chosen phrase “4D [printing]”. The final criterion is that the product is made primarily through industrial processes.

The best way to address the many possible interpretations and misinterpretations is as a question-and-answer. So here we go:

Is Digital Apparel clothing made on the computer?
It is the definition of the “clothing” while it is still on the computer. After production it is currently called either clothing (apparel), or wearable technology.

Is Digital Apparel referring to designs for 3D printed clothing and wearable technology?
It can, if the definition of those things is complete. Most advanced commercial wearable technology is produced along with PLM (Product Lifecycle Management) systems (such as the ones from Siemens that work with their NX CAD software), and so contains the required information to meet the definition. 3D printed clothing is usually just a static 3D model, but if all the requirements are completed, such as a selection of sizes and design for a specific material and 3D printing process, the design will meet the definition. As a simple example, a 3D design for something like shoes or a hat in a selection of sizes (while maintaining all key dimensions and features), designed with appropriate tolerances for FDM printing in PLA on a home printer, would meet the definition.
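To make the “complete definition” idea concrete, here is a toy sketch of the kind of information such a design would carry alongside its geometry. There is no standard format for this today, so every field name and value below is purely illustrative.

```python
# A toy illustration (not a standard format) of what a "complete" Digital
# Apparel definition might carry beyond geometry: sizes, process, material,
# and tolerances. All names and numbers are hypothetical.
hat_design = {
    "geometry": "hat_base.step",          # placeholder source geometry
    "sizes": {                            # key dimension: inner circumference (mm)
        "S": 550, "M": 570, "L": 590, "XL": 610,
    },
    "fixed_features_mm": {                # preserved across all sizes
        "wall_thickness": 2.4,
        "brim_width": 40.0,
    },
    "process": "FDM",
    "material": "PLA",
    "tolerances_mm": {
        "fit_clearance": 0.4,             # clearance added to the head circumference
        "xy_compensation": 0.15,          # allowance for typical FDM over-extrusion
    },
}
```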

What about simple items like jewelry?
While they could be available in sizes and have material and process specified, the spirit of the definition is geared toward enabling discussion of pieces that are distinguished by a larger size or a complex shape that necessitates accurate sizing, or by specific functional requirements that demand some knowledge on the designer’s part about the context on the body. It is also meant to refer to objects that are not purely decorative (that have some technology component), so this usage would not be appropriate.

Smart watches and necklaces have tech, are they digital apparel?
Wearable technology, while an awkward phrase, most pointedly refers to these gadgets, so that is the more appropriate phrase to use. If you wouldn’t call the resulting product “apparel”, you probably should not call the design an example of Digital Apparel. There is not a hard line between them, however, particularly because digital design and customization allow many more variations of form that don’t fall cleanly into categories like “shirt” and “pants”. I do think the Digital Apparel definition should be extended to forms that do not entirely envelop the torso, and so in some sense could be called “accessories”, though that term is broad enough that it also refers to very small items. The phrases are not exclusive, but refer to different things. The core features of Digital Apparel are the customization and production information that accompany the design intent.

Are there examples on the market already?
Yes. I offer 3D printed accessories and garments that are customized based on body scans and manufactured via 3D printing. There is software on the market, such as Marvelous Designer, that can be used to create a three-dimensional design and output 2D patterns for production. These patterns are cut by CNC but are still assembled by hand, which falls short of being produced “by an industrial process”. However, there can be no hard line here, since even 3D printed designs are partially processed by hand. The important characteristics that help distinguish Digital Apparel from traditional manufacture are the production specifications: which material is used, and how parts are cut and assembled. All of that information is needed for a complete Digital Apparel design. Downloading a 2D pattern, manually cutting the cloth, and assembling it does not meet this definition: aside from the fact that re-sizing would require experienced knowledge of the craft, there are numerous decisions made by the fabricator that would produce variability from one producer to the next. There is currently a fully automated robotic assembly project sponsored by DARPA for producing military uniforms that are custom fit for each individual; this would be an example of a Digital Apparel system. There are also several 3D weaving projects and 3D printing projects (such as the work I do with ThreeForm) that are carefully designed to be produced as repeatable, variable products. They are rare and expensive, not yet mass-produced, but I would count them. One can also find examples of mass customization online, such as systems from Nike that create customized designs through a web interface. As final assembly of these products becomes more automated, the results become purer examples of Digital Apparel.

If I extract an outfit from a video game for example, size it to myself, and 3D print it, is it Digital Apparel?
No. The model would have to be thickened and heavily modified in many ways to make it print and function. All of that input requires planning and design to match particular materials, processes, and the functionality of the result. It would be possible for a designer to formalize this information and create a derivative work that is Digital Apparel from just about any geometry they choose, as long as it meets all the criteria.

Is Digital Apparel a format?
No. There are a number of systems available for managing product information, and many of them pass production information on to the manufacturer. Again, this is not a black-and-white distinction, since, for example, a whole line of products might be designed to print in nylon 12 using the common selective laser sintering 3D printing process. The actual process and material selection is inherent in the product and planned by the designer, but the producer may only offer that one choice, so there is no need to specify it in any digital file. It should suffice that all the implications of the production process were planned by the designer to achieve a specific result; as long as the design is “fit for the environment” it is released into, it will successfully meet the definition.

How would anyone know what formats and software to use, and how to communicate the correct information?
Right now, all that is required is for the files to be compatible with the equipment, and all relevant information to be communicated along with the file, whether grouped into an archive and sent via FTP, transferred via a PLM system, or simply communicated along with an email attachment. Standards have not yet been created, and all of these things are adapted to the context, which is why this is not a consumer-level activity at this time.

What are the next steps? When do consumers get to buy Digital Apparel?
They can right now. In addition to hiring a specialist to create flexible digital designs, companies are creating software to simplify various parts of the process. The New York company Body Labs has a system that makes it simpler for companies to get input from customers to do customization in a less ‘couture’ arrangement. Ultimately we will all have digital profiles that are compatible with online shopping systems that will place orders and begin to manufacture customized garments – often with integrated technology components – with very little effort and relatively low cost.

The tension between the selective forces comes from the fact that those systems work best when many people want the same thing, while customization works best when the customer wants something different from what is commonly offered. I feel there will always be a spectrum of solutions, from mass-produced simple functionality like a custom-fit piece of clothing with a simple sensor or indicator, to completely bespoke “wearable systems” that may enable complicated sets of interacting functionality, such as treatment for health issues or enhanced capabilities. At the extreme end of this development, one might have most of their biological and functional needs covered, as in examples where a scientist might explore a remote area or even another planet.

All of these products that interact with the body must meet certain requirements to begin to evolve toward these final forms. Leaving out any specifications means work must be repeated for each iteration, and much of what was learned might be lost. It is by formalizing communication and design techniques that standards will emerge to push forward the development of both wearable technologies and Digital Apparel.

 

Body Scanning For Upcoming Fashion Show

Shelly Lynn scanning

I am closely involved in organizing the upcoming fashion show at the Inside 3D printing conference in April. I’ve already cast a team of models who are being scanned to create custom 3D printed outfits. Here, we see Mel and Shelly Lynn being scanned over at Body Labs.

 

The scanner Body Labs uses at their office is an off-the-shelf laser scanning booth that was originally designed to measure soldiers for Army uniforms, I believe. While it is not extremely detailed, it is very fast, scanning top to bottom in just a few seconds. Most models were scanned 10-20 times, but Mel and I spent a good couple of hours and kept having ideas. We created more than 100 scans – a marathon session for sure.

All of these scans have been added to the Body Labs database and are used to refine their algorithm to create more accurate body shape and pose estimations. Obviously this technology is great for customizing designs, but a more short-term goal is for us all to be able to go shopping online with our digital profiles and be sure that everything we buy will fit perfectly.

Another unique capability brought by the pose estimation algorithm is that it enables a point-to-point correspondence between subjects. That means measurements can be compared between individuals, or between the same individual in different poses. By adapting a generic mesh to the shape and pose of a scan subject, the mesh can be adjusted to match, creating a digital avatar that can be posed and animated. By applying this process using the same input mesh, the output avatars are topologically consistent between individuals. Because they share the same structure, the avatars – or designs created from them – can be morphed and blended. Below is a demonstration: a continuous morph between all the models on the team.
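Once the avatars share a common topology, a continuous morph like the one demonstrated is conceptually just interpolation between matching vertex arrays. The sketch below illustrates that idea under those assumptions; it is not the Body Labs code.

```python
# Continuous morph through a team of avatars, assuming each avatar is an
# (N, 3) vertex array with identical topology (the result of fitting the
# same template mesh to every scan).
import numpy as np

def morph_sequence(avatars, frames_per_pair: int = 30):
    """Yield vertex arrays that blend smoothly from each avatar to the next,
    looping back to the first, for rendering as an animation."""
    count = len(avatars)
    for i in range(count):
        a, b = avatars[i], avatars[(i + 1) % count]
        for f in range(frames_per_pair):
            t = f / frames_per_pair
            yield (1.0 - t) * a + t * b
```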

This body-morphing ability was later used to good effect during the show. Since some parts of the lineup were not predictable (in terms of which model would wear which piece at any one time), it was desirable to create generic accessories with a high likelihood of a good fit. This was done by population averaging of our team. Runway models are not average compared to the US population – they tend to be tall and slender. By averaging them together I created a virtual model (dubbed “Ave”), which I used to create general-fit designs.

I later gave a presentation explaining this process, as well as the cranium averaging used for the Ultracortex project, as part of the proceedings at the Rapid 3D technology conference. The presentation was called “Reverse Engineering The Body For Product Design”.