3D Facial Scanner Project

I started this project for Eurocom two years ago, when I was leading the facial department next to our motion capture studio.

Unfortunately, I didn’t have the chance to finish it: Eurocom sadly closed down at the end of 2012.

The main goal of this project was to build a cheap version of the famous “Light Stage”, based on the amazing research of Paul Debevec at USC and other talented scientists. Like many 3D artists, I’m a great admirer of Paul Debevec’s work and his valuable contribution to the 3D industry.

I found the Light Stage technique much more interesting than the usual scanning methods: it captures more accurate detail, and cross-polarisation yields a clean diffuse map free of specular reflections as well as a specular weight map. Tim Hutton, a PhD graduate and very talented programmer, joined me on the project. Tim was in charge of the software side while I designed and built the hardware in between our internal productions. Richard Smith joined us too, and we ran a lot of digital simulations and some small-scale experiments to make sure we understood the scientific papers.


Here’s a video of our very first test. The quality doesn’t match the real potential of the scanner yet, but it shows a promising result: a world-space normal map. This render is just a moving light in front of the normal map projected onto a simple image plane.
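
For anyone curious how the normal map falls out of the lighting, here’s a minimal sketch of the published idea our test drew on (the spherical gradient illumination work from Paul Debevec’s group): with a linear brightness ramp along each axis plus one fully lit frame, the ratio of gradient to full intensity at a pixel encodes one component of the surface normal. The function below is illustration only, not our actual code.

    // Minimal sketch (C++), assuming four aligned photographs of the face:
    // gradient-lit along X, Y and Z, plus a fully lit reference frame.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // gx, gy, gz: pixel intensity under the X/Y/Z gradient patterns.
    // full: the same pixel with every LED at full brightness.
    Vec3 normalFromGradients(float gx, float gy, float gz, float full) {
        // Each ratio lies in [0, 1]; remap it to [-1, 1] to get a
        // world-space normal component.
        Vec3 n = { 2.0f * gx / full - 1.0f,
                   2.0f * gy / full - 1.0f,
                   2.0f * gz / full - 1.0f };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        n.x /= len; n.y /= len; n.z /= len;   // renormalise to unit length
        return n;
    }

Running that over every pixel of the four photographs is what produces the world-space normal map shown in the video.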

The first challenge was to drive the brightness of 156 high-power LEDs in order to produce the different gradient illumination directions during the shooting sequence. The common technique for varying the brightness of a high-power LED is to use a PWM (Pulse Width Modulation) signal. There are a lot of drivers already available on the market, but most of them work with slow PWM (although fast enough for human eyes). The problem is that digital cameras are much more sensitive to strobing lights than human eyes are. After a few experiments we settled on a PWM frequency of 20 kHz, high enough that even a short exposure integrates many complete PWM cycles, and I built the first version of our electronic driver. Because of my very limited electronics skills, I didn’t manage to get a stable linear response out of the prototype, so we asked John Thorpe, a freelance electronics engineer, to help us on the project. After a week, John came back with a modified circuit 1000 times more accurate than the one I had built. I was really amazed!
On the control side, we used four Arduino microcontrollers to generate the 156 PWM signals at 20 kHz each.
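
To give an idea of what 20 kHz means in practice: an Arduino’s stock analogWrite() runs at roughly 0.5–1 kHz, so the hardware timers have to be reprogrammed directly. Here’s a minimal single-channel sketch for an ATmega328-based board (Timer1 on pin 9, assuming the standard 16 MHz clock); it only illustrates the frequency setup, not our actual multi-channel firmware.

    // 20 kHz hardware PWM on pin 9 (OC1A) of an ATmega328-based Arduino.
    void setup() {
      pinMode(9, OUTPUT);

      // Timer1 in fast PWM mode 14: TOP = ICR1, non-inverting output
      // on OC1A, no prescaler (clk/1).
      TCCR1A = _BV(COM1A1) | _BV(WGM11);
      TCCR1B = _BV(WGM13) | _BV(WGM12) | _BV(CS10);

      // PWM frequency = 16 MHz / (ICR1 + 1) = 16,000,000 / 800 = 20 kHz.
      ICR1 = 799;

      // Duty cycle = OCR1A / ICR1; here roughly 50% brightness.
      OCR1A = 400;
    }

    void loop() {
      // OCR1A would be updated here for each gradient pattern.
    }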

One of the 156 LED drivers

The power & control unit that I designed and built.

The geodesic frame wasn’t difficult to design, apart from the crazy angles on the hubs. I checked my design several times before I got the parts manufactured.

I also had to fit a door into the side of this complex geometry.
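
Checking those hub angles is just vector arithmetic once the vertex coordinates come out of the CAD model. The little helper below is purely hypothetical (the real frame data isn’t reproduced here), but it shows the kind of sanity check involved: take two struts meeting at a hub and measure the angle between them.

    // Hypothetical check: angle between two struts meeting at a hub,
    // given endpoint coordinates taken from the CAD model.
    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Unit direction vector from one point towards another.
    static Vec3 dir(Vec3 from, Vec3 to) {
        Vec3 d = { to.x - from.x, to.y - from.y, to.z - from.z };
        double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return { d.x / len, d.y / len, d.z / len };
    }

    double strutAngleDegrees(Vec3 hub, Vec3 endA, Vec3 endB) {
        Vec3 a = dir(hub, endA);
        Vec3 b = dir(hub, endB);
        double dot = a.x * b.x + a.y * b.y + a.z * b.z;
        return std::acos(dot) * (180.0 / 3.14159265358979323846);
    }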

The wiring took me a considerable length of time. I didn’t want to spend too much time on a fancy network to communicate with all 156 LEDs, so I decided to connect each driver with multipair cables. I ended up wiring nearly a mile of cable, and I can’t count the number of connections I soldered.

Cross-polarisation is an important step in the process. One use of cross-polarisation is to separate the specular reflectance from the diffuse light, using a combination of polarising filters in front of the light sources and the camera. It is possible to compute the world-space normal map from the diffuse pictures alone, as we did in our early test, but the result won’t be as accurate as using the specular reflection. Unfortunately, our early tests on the specular side gave poor results, and I didn’t have a chance to investigate the problem. I suspect my cheap circular polariser wasn’t efficient enough to get a good dynamic range on the specular extraction.
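
The arithmetic behind the separation also explains why a weak polariser hurts so much. With the filters crossed, the specular reflection is blocked and (ideally) only diffuse light reaches the sensor; with them parallel, the image holds diffuse plus specular, so a per-pixel subtraction isolates the specular component. A minimal sketch, assuming two aligned exposures and ideal filters:

    // crossPx:    pixel with the camera polariser crossed against the
    //             lights (ideally diffuse only)
    // parallelPx: the same pixel with the polarisers parallel
    //             (diffuse + specular)
    float specularComponent(float parallelPx, float crossPx) {
        float s = parallelPx - crossPx;   // what remains is specular
        return s > 0.0f ? s : 0.0f;       // clamp sensor noise below zero
    }

An inefficient polariser lets specular light leak into the crossed image, so the difference, and with it the dynamic range of the extracted specular map, collapses.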

I think the major constraint with this technique is the time it takes to capture eight pictures with a DSLR camera. It’s almost impossible to stay perfectly still for a second, even without trying to hold an extreme expression. Realigning the pictures was one of the main problems we had to solve, and we knew it would be difficult considering the difference in light direction between the shots. We were using the DSLR EOS D40 at the time, but I was planning to do some tests with a mirrorless camera: with no mirror mechanism to move between frames, a Sony A77 can shoot up to 12 frames per second for a reasonable price, which would bring the eight-frame sequence down to roughly two-thirds of a second.

On a final note, this project was probably the most challenging I’ve ever worked on, but it was also fascinating to be able to build such a unique device. I would have liked to finish this awesome project and use it in production, but fate can be very disappointing sometimes... c’est la vie...

I’d like to thank my ex-managers Jose Garcia and Phil Hackney and my ex-directors Mat Sneap, Hugh Binns and Tim Rogers, who trusted me and gave me the opportunity to work on a project like this.
Thanks to the facial team: Jon Jones, Richard Smith, Mike Boylan, Ed Peretti and Tim Hutton.

Thanks for reading,


Fabrice