MAS.131 : Computational Camera and Photography  
 
MULTI-LAYER 3D DISPLAY FABRICATION

In this project, I created a 3D display from a stack of printed transparencies held between clear acrylic sheets, using the provided software. The method was introduced by Wetzstein and his colleagues in the MIT Media Lab's Camera Culture Group. This page demonstrates how to fabricate a glasses-free 3D display prototype by capturing, modifying, and rendering light fields and then displaying them with the hardware kit and the provided MATLAB source code. Here are the links to the reference paper and the project assignment sheet; a simplified sketch of the layer computation follows the links.

RELATED LINKS:

> Link to the Reference Paper
> Link to Assignment handout
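
To make the pipeline concrete, here is a toy illustration of the idea behind the layer computation. It is my own sketch, not the provided code: for a two-layer attenuation display, a ray with angular index u that enters layer 1 at pixel x leaves layer 2 at pixel x + u, so the displayed light field is roughly L(x, u) = t1(x) * t2(x + u). Taking logarithms turns the product into a sum, which can be fit with simple alternating updates. The paper's MATLAB code solves the full 4D, multi-layer problem; the sizes, random target, and update scheme below are purely illustrative assumptions.

n  = 64;                        % pixels per layer (illustrative size)
us = (-3:3)';                   % 7 angular samples, matching the 7 by 7 capture
rng(0);                         % reproducible random target
logL = -abs(conv2(randn(n, numel(us)), ones(5)/25, 'same'));  % log target, <= 0

a1 = zeros(n, 1);               % log transmittance of layer 1
a2 = zeros(n, 1);               % log transmittance of layer 2

for it = 1:50
    % Layer 1 update: for each pixel x, average the residual over all views
    r = zeros(n, 1);  cnt = zeros(n, 1);
    for k = 1:numel(us)
        x2 = (1:n)' + us(k);                 % pixel each ray hits on layer 2
        ok = x2 >= 1 & x2 <= n;              % keep rays that stay on the layer
        r(ok)   = r(ok) + logL(ok, k) - a2(x2(ok));
        cnt(ok) = cnt(ok) + 1;
    end
    a1 = min(r ./ max(cnt, 1), 0);           % clamp so transmittance <= 1
    % Layer 2 update: same idea, accumulated at the layer-2 pixel instead
    r = zeros(n, 1);  cnt = zeros(n, 1);
    for k = 1:numel(us)
        x2 = (1:n)' + us(k);
        ok = x2 >= 1 & x2 <= n;
        r(x2(ok))   = r(x2(ok)) + logL(ok, k) - a1(ok);
        cnt(x2(ok)) = cnt(x2(ok)) + 1;
    end
    a2 = min(r ./ max(cnt, 1), 0);
end

t1 = exp(a1);  t2 = exp(a2);    % patterns to print on the two transparencies

Printing t1 and t2 and stacking them reproduces the target views approximately; more layers and more angular samples follow the same pattern, which is what the provided code handles.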
Figure 25: Capturing light fields using an iPhone 4G
   
  This project page has two main parts. First, I show an example of capturing light fields with a smartphone camera using a simple rig built from a tripod, a metal beam, tape, and Lego blocks. Next, I show how to 3D-scan an object with a Kinect and capture light fields of the scan in 3D design software.
 
Figure 26: Uniform array of 7 by 7 images
  For this part of the project, the idea was to capture a uniform array of 7 by 7 images spanning a total field of view of 10 degrees both horizontally and vertically. However, the rig made it difficult to set the viewing angles precisely. Here are the results after processing the captured images with the provided MATLAB script.
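
Before the provided scripts can run, the 49 captured photos need to be gathered into a single array. The snippet below is a minimal sketch of that step in MATLAB; the filenames, image format, and array layout are my own assumptions rather than the course code's conventions. Note that with 7 views spanning 10 degrees, adjacent views are about 10/6 ≈ 1.7 degrees apart.

rows = 7;  cols = 7;                                  % angular resolution of the grid
sample = im2double(imread('view_01_01.jpg'));         % assumed naming scheme
[h, w, c]  = size(sample);
lightField = zeros(h, w, c, rows, cols);              % 5D array: y, x, color, grid row, grid col

for i = 1:rows
    for j = 1:cols
        fname = sprintf('view_%02d_%02d.jpg', i, j);  % grid row and column in the name
        lightField(:, :, :, i, j) = im2double(imread(fname));
    end
end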
Figure 27: After running the source code in MATLAB

 

 

Figure 28: Close-up view of the original photo

 

     
  For the second part of the exercise, I used an open-source Kinect 3D scanning application developed by Kyle McDonald. I learned about this tool through Lining Yao's online tutorial on 3D scanning and 3D printing. If you would like to learn more, I suggest visiting the following links; a rough sketch of the depth-to-point-cloud step that such scanners perform follows them.

RELATED LINKS:
> Link to Lining's tutorial
> Link to the Kinect to STL download
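
For context, this is roughly what such a scanner does under the hood: each depth pixel is back-projected through a pinhole camera model into a 3D point, and the resulting point cloud is meshed and written out as an STL. The snippet below is my own illustration in MATLAB, not code from the Kinect application; the intrinsics are typical published values for the Kinect depth camera, and the depth file name is an assumption.

fx = 594.2;  fy = 591.0;        % Kinect depth-camera focal lengths in pixels (assumed)
cx = 339.3;  cy = 242.7;        % principal point (assumed)

depthMM = double(imread('depth_frame.png'));          % assumed 16-bit depth image in millimeters
Z = depthMM / 1000;                                   % convert to meters
[u, v] = meshgrid(1:size(Z, 2), 1:size(Z, 1));        % pixel coordinates

X = (u - cx) .* Z / fx;                               % back-project through the pinhole model
Y = (v - cy) .* Z / fy;

valid  = Z(:) > 0;                                    % drop pixels with no depth reading
points = [X(:), Y(:), Z(:)];
points = points(valid, :);                            % N x 3 point cloud of the scanned face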


Figure 29: 3D scanning using the Kinect and the open-source application
  After 3D scanning my face, I imported the STL file into Rhino for OS X, which is currently available for free. In Rhino, I created a virtual environment similar to the real-world setup by placing four planar lights around the scanned model. Then I manually moved a virtual 50 mm camera along a grid system I had created in order to capture the light field of the scanned model. The final framed masks rendered much more successfully this time. The video illustrates how the 3D display works when backlit by a regular LED screen.
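
The grid itself was laid out by hand in Rhino, but its geometry is simple enough to sketch: 7 by 7 camera positions on a plane at a fixed distance from the model, spanning 10 degrees in each direction, with every camera aimed at the model. The snippet below is my own reconstruction of that layout in MATLAB; the distance and units are assumptions.

d      = 1.0;                              % camera-to-model distance (assumed units)
angles = linspace(-5, 5, 7) * pi/180;      % -5 to +5 degrees, 7 samples per axis

[ay, ax] = ndgrid(angles, angles);         % vertical and horizontal view angles
camX = d * tan(ax);                        % horizontal offset of each camera
camY = d * tan(ay);                        % vertical offset of each camera
camZ = -d * ones(size(camX));              % all cameras lie on a plane facing the model at the origin

positions = [camX(:), camY(:), camZ(:)];   % 49 x 3 list; render one frame per position,
                                           % aiming each camera at the origin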

Figure 30: Uniform array of 7 by 7 images captured using Rhinoceros 3D
   
Figure 31: After running the source code in MATLAB
   
Figure 32: Final result video capture | Vimeo link

Download the source code here: >> click to download <<

     
Source code from the paper's website, used in MATLAB.
   
   

 

     
 
© 2011
Instructors: Ramesh Raskar, Douglas Lanman, cameraculture.media.mit.edu
MIT Media Lab lecture: F 1-4 (E14-525). Assignments by Austin S. Lee, austinslee.com