MAS.131 : Computational Camera and Photography  
 
 
    [final project]
   
 

 

   
DIY LAYERED 3D REFLECTION

For the final project, I present "DIY Layered 3D Reflection", a prototype system for constructing and displaying captured 3D video depth using reflections on layered transparencies.

By demonstrating the use of the Pepper's Ghost technique for fabricating multi-layer 3D displays, we propose an uncomplicated and relatively low-cost building procedure that could enable homemade 3D teleconferencing with illusory holographic effects. The write-up concludes with an overview of the benefits and limitations of the prototype and suggests future directions for the project.

KEY WORDS:
DIY, Design, everyday device, video capture, 3D display, construction kits.
Figure 33 DIY Layered 3D Reflection: real-time 3D capture on a layered reflective display. The current prototype hardware design is based on i3DG.
   
This paper focuses on utilizing online open-source materials to enable individuals to build a DIY version of an existing device that creates 3D content, captures real-time 3D depth data, and renders the 3D information in half-holographic form.
 
Figure 34 Application using MATLAB source code from related work (see Assignment 4)
  RELATED WORK:
Layered 3D by G. Wetzstein et al. [6] introduces glasses-free, attenuation-based light field displays built with an inexpensive fabrication method: separating multiple printed transparencies with acrylic sheets. Given a building toolkit, the display prototype can be easily fabricated using the MATLAB source code, which is available online. After the hands-on fabrication procedure, the builder may use the platform to learn how to modify, render, or capture their own light fields and display them. (see Assignment 4)
I propose to explore the Layered 3D project by using a real-time 3D scanning system and fabricating a display that will work with the Pepper’s Ghost Technique.
Figure 35 Related work: the Layered 3D project building kit, assembled

 

 

The goal is to create a platform for real-time 3D scanning and rendering that is accessible to people with a DIY mindset, using everyday technology and open-source toolkits. For the purpose of quick exploration, our current hardware design builds upon i3DG, developed by media artist Jitsuro Mase. His design borrows the Pepper's Ghost technique to render 3D effects: i3DG is an accessory for smartphones or tablets with large displays that brings instant 3D viewing to the device. However, the accessory does not work without pre-produced visual content, which requires expensive design software. To generate the 3D content instantly, I wrote code that eliminates the manual labor of working in design software.

Figure 36 Real-time source content video generated with C++ code on the openFrameworks platform

 

     
  CONCLUSION AND FUTURE WORK:
We created three applications. First, we take the original MATLAB source code used in the Layered 3D project and apply it to our prototype. Second, we capture 3D information with a depth camera and generate multiple layers of content video, one in front of the other; each content video renders the slice of real space that falls within a certain distance threshold. Third, using a regular webcam, we gradually increase a blur effect on the source video for each successive layer. The content video layers reflect onto the DIY prototype device, creating an illusory 3D effect.


Figure 37 Application using a depth camera and custom code.
The result of the second application, using the depth camera, was the most interesting: it creates the effect of a real-time 3D object rendered in half-holographic form, shifting between layers from front to back and vice versa based on its location in real space.
In the future, we want to combine all three applications. First, we will capture the surrounding light field to achieve a real-time depth-of-field rendering effect and add spatial depth to the prototype display. We also plan to add more layers of reflective surfaces to obtain higher three-dimensional resolution, and to apply the technique to various types of screens using different hardware designs.
Figure 38 Application using a regular webcam and custom code
  ACKNOWLEDGMENTS:
I thank Professor Ramesh Raskar and Douglas Lanman from the MIT Media Lab's Camera Culture group for teaching the Computational Camera and Photography class and providing guidelines for the proposal, and the Tangible Media Group's research assistants Jinha Lee, Lining Yao, and Daniel Leithinger, who supported the prototype project with resources.

Figure 39 Working prototype video documentation | YouTube video link

Source code available upon request >> aslee@media.mit.edu <<

     
The source code was developed on the openFrameworks platform.
   
   

 

     
 
2011 ©
Instructors: Ramesh Raskar, Douglas Lanman, cameraculture.media.mit.edu
MIT Media Lab Lecture: F1-4 (E14-525), assignments done by Austin S. Lee austinslee.com