MAS.131 : Computational Camera and Photography  
 
 
ASSIGNMENT 2

UNDERSTANDING LIGHT FIELD CAMERA

Imagine taking a series of images from different viewpoints on a plane. In each image, every pixel corresponds to a ray that starts at that viewpoint on the camera plane. This database of rays is called the ‘light field.’ I drew a simple diagram to help myself understand the basics of this concept (Figure 9).

Let’s say that ‘y’ is the distance between the target object and the camera’s lens, and ‘x’ is the distance from the lens to the sensor. With simple math, you can work out how the shift applied to each image’s pixels relates to the distance ‘y’ that ends up in focus (if ‘y’ is smaller, the refocused plane sits closer to the camera; if ‘y’ is larger, it sits further away).
Figure9 Very simple drawing to help understand the basics of how light field digital refocusing works  
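
To make that “simple math” concrete, here is a pinhole-camera sketch using the same symbols as above, plus b_i for the sideways offset of camera i on the camera plane (a symbol I am adding for illustration). By similar triangles, a point at distance y appears in camera i displaced by a disparity

Δ_i = (x · b_i) / y

so shifting image i back by Δ_i, an amount proportional to b_i, aligns everything at depth y; summing the shifted images then keeps that depth sharp and blurs everything else.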
   
 

 
Figure11 Variable depth of field, infinity (left) and focused image (right), using the shift + add strategy    
 
With an array of cameras, you can obtain a set of rays by capturing an array of images. Using this light field data, it is possible to generate new images with a different depth of field. This exercise achieves digital refocusing with photos taken by a camera array: the basic idea is to shift each image in proportion to its camera’s position and add the results (a small sketch of this follows the link below). Here is a link to the light field camera array paper.

RELATED LINK:
http://graphic.stanford.edu/papers/CameraArray
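
As a minimal, standalone sketch of the shift-and-add idea (this is not the assignment code further down; the Image struct, the shiftAndAdd function, the camOffsetX offsets and the alpha focus parameter are names invented for illustration, and only horizontal camera offsets are handled):

// Minimal shift-and-add refocusing sketch.
#include <cmath>
#include <cstddef>
#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<unsigned char> rgb;   // interleaved RGB, size w * h * 3
};

// Shift every sub-aperture image in proportion to its camera's horizontal
// offset on the camera plane, then average the shifted images. The focus
// parameter alpha picks which depth ends up aligned, i.e. in focus.
Image shiftAndAdd(const std::vector<Image>& views,
                  const std::vector<float>& camOffsetX,
                  float alpha) {
    Image out;
    out.w = views[0].w;
    out.h = views[0].h;
    out.rgb.assign(static_cast<std::size_t>(out.w) * out.h * 3, 0);

    std::vector<float> accum(out.rgb.size(), 0.0f);

    for (std::size_t i = 0; i < views.size(); ++i) {
        // Pixel shift for this view, proportional to its camera offset.
        int dx = static_cast<int>(std::lround(alpha * camOffsetX[i]));
        for (int y = 0; y < out.h; ++y) {
            for (int x = 0; x < out.w; ++x) {
                int sx = x + dx;                      // source column after the shift
                if (sx < 0 || sx >= out.w) continue;  // shifted outside the frame
                std::size_t src = (static_cast<std::size_t>(y) * out.w + sx) * 3;
                std::size_t dst = (static_cast<std::size_t>(y) * out.w + x) * 3;
                accum[dst + 0] += views[i].rgb[src + 0];
                accum[dst + 1] += views[i].rgb[src + 1];
                accum[dst + 2] += views[i].rgb[src + 2];
            }
        }
    }

    // Average: points on the chosen focal plane line up across views and stay
    // sharp; everything else lands in different places and blurs out. (Border
    // columns that fall outside some views simply come out darker in this sketch.)
    float n = static_cast<float>(views.size());
    for (std::size_t k = 0; k < accum.size(); ++k)
        out.rgb[k] = static_cast<unsigned char>(accum[k] / n);
    return out;
}

The openFrameworks code at the bottom of this page approximates the same averaging on the GPU instead, by drawing each shifted ofImage with a low alpha value so the sixteen images blend together.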

For this part of the exercise, I used an archived photo set from one of Jinyi Yu's lectures. The effect was achieved using C++ on the openFrameworks platform.
Figure12 Video capture of the variable depth of field, from infinity to a focused image, using the shift + add strategy | vimeo link    

 

     
  DIGITAL REFOCUS & SEE-THRU EFFECT:
Stanford University's paper on plane + parallax calibration helped me understand how a light field consists of images of a scene taken from different viewpoints, and how that light field can be used to obtain refocusing and see-through effects. The goal of this exercise was to achieve the same effects using my own photos and code. The original photos for this project were taken with a Nikon Coolpix P80 camera with a 10.1-megapixel sensor. I used C++ and the openFrameworks library; the source code can be found at the bottom of this page.
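
A short note on why the see-through effect works (my own summary of the idea, with N standing for the number of views and c_i for camera i's offset): refocusing forms the average

I(p) = (1/N) · Σ_i I_i(p + α · c_i)

where α selects the focal plane. Points on that plane contribute the same value from every view and keep full contrast, while an occluder at a different depth covers any given output pixel in only a few of the N views, so its weight in the average is small and the subject behind it shows through.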

Figure13 Original images used for exercise 2 and 3    
   
Figure14 Refocus and See-Through result    
   
Figure15 Refocus and See-Through Xcode Run output    
   
Figure16 Video capture of Refocus and See-Through effect using openFrameworks | vimeo link    
   
Figure17 Video capture of Refocus and See-Through effect using openFrameworks | vimeo link    
     
 

main.cpp

#include "testApp.h"
#include "ofAppGlutWindow.h"

//========================================================================
int main(){

    ofAppGlutWindow window;
    ofSetupOpenGL(&window, 1024, 768, OF_WINDOW);
    ofRunApp(new testApp());

}

 

//========================================================================
testApp.h

#ifndef _TEST_APP
#define _TEST_APP

#include <iostream>
#include <sstream>

#include "ofMain.h"

using namespace std;

class testApp : public ofBaseApp{

    public:

        void setup();
        void update();
        void draw();
        void keyPressed(int key);

        ofImage *pic;            // array of sub-aperture images
        int picCount;            // number of images in the array

        int location;            // currently unused
        int fullSize;            // width * height of one image
        unsigned char *pixels;   // raw RGB pixels of the most recently loaded image
        float shifter;           // per-image shift amount (controls the focal plane)

};

#endif

 

//========================================================================
testApp.cpp

#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup(){

    picCount = 16;
    location = 0;

    pic = new ofImage[picCount];
    std::stringstream ss;

    // Load the 16 sub-aperture images (lowtoys1.bmp ... lowtoys16.bmp) from
    // the data folder. The array is indexed from 0, the file names from 1.
    for(int i = 0; i < picCount; i++){
        ss.str("");
        ss << "lowtoys" << (i + 1) << ".bmp";
        cout << ss.str() << endl;
        pic[i].loadImage(ss.str());

        fullSize = pic[i].width * pic[i].height;
        ofSetWindowShape(pic[i].width, pic[i].height);
        pixels = pic[i].getPixels();
        ofSetVerticalSync(true);
        ofEnableAlphaBlending();

        int w = pic[i].getWidth();
        int h = pic[i].getHeight();

        // Walk over the loaded image's pixel buffer. The r/g/b values are only
        // read here to demonstrate per-pixel access; the refocusing in draw()
        // works on whole images.
        for(int y = 0; y < h; y++){
            for(int x = 0; x < w; x++){
                int index = (y * w + x) * 3;
                // printf("%d %d\n", x, y);
                // colorTexture.allocate(w, h, GL_RGB);
                unsigned char r = pixels[index + 0]; // r
                unsigned char g = pixels[index + 1]; // g
                unsigned char b = pixels[index + 2]; // b
            }
        }
    }
    // colorTexture.loadData(pixels, w, h, GL_RGB);
    // shifting();
    shifter = 7.0f;

}

//--------------------------------------------------------------
void testApp::update(){

}

//--------------------------------------------------------------
void testApp::draw(){

    ofSetupScreen();

    // The horizontal mouse position selects a vertical strip of the window;
    // that strip index sets a global offset for the image stack, which sweeps
    // the plane of focus.
    int mouseloc = mouseX;

    shifter = 0.7f + mouseX / 16;   // integer division: shifter changes in coarse steps
    // shifter = 2.3f + mouseX/16;

    for (int i = 0; i < picCount; i++) {
        int w = pic[i].getWidth();

        // Leftmost strip: shift each image left in proportion to its index.
        if(mouseloc < w / 16){
            ofSetColor(255, 255, 255, 35);   // low alpha so all 16 images blend
            pic[i].draw(-(shifter * i), 0);
        }

        // Remaining strips: offset the whole stack by shifter*j, so moving the
        // mouse to the right moves the focal plane.
        for (int j = 1; j <= 16; j++) {
            if (w / 16 * j <= mouseloc && mouseloc < w / 16 * (j + 1)){
                ofSetColor(255, 255, 255, 35);
                pic[i].draw(shifter * j - (shifter * i), 0);
            }
        }
    }

}

//--------------------------------------------------------------
void testApp::keyPressed(int key){
    // 'p' / 'o' nudge the shift amount and re-run setup(), which reloads the
    // images (and resets shifter to its default of 7.0f).
    if(key == 'p'){
        shifter += 0.25f;
        setup();
    }
    else if(key == 'o'){
        shifter -= 0.25f;
        setup();
    }
}

     
The source code was developed on the openFrameworks platform.
   
   

 

     
 
2011 ©
Instructors: Ramesh Raskar, Douglas Lanman, cameraculture.media.mit.edu
MIT Media Lab Lecture: F1-4 (E14-525), assignments done by Austin S. Lee austinslee.com