Kinect

Point Clouds

Read Chapter 3 of Greg Borenstein's Making Things See



The Kinect captures the depth image as a set of 3D points.
SimpleOpenNI can return that set of points.
When you access the set of 3D points, you are dealing with vectors instead of integers.

A vector is a way of storing a point with multiple coordinates in a single variable. It is a collection of values that can describe a position in space, or the difference between two points. You can also think of a vector as something traveling in a direction.

Using vectors will simplify your code, and the vector class provides a set of functions for common mathematical operations that come up over and over again. You'll be able to store a vector and access its x, y, and z components. This is especially handy when you deal with large sets of 3D data.

Processing provides a class to handle vectors. It is called PVector:
PVector p = new PVector(1, 2, 3);
println("x="+p.x+" y="+p.y+" z="+p.z);
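Beyond storing components, a vector class typically bundles operations like addition, subtraction, and magnitude. As a minimal sketch of what such a class does internally, here is a plain-Java stand-in (Vec3 is a hypothetical name for illustration, not Processing's actual PVector):

```java
// Hypothetical stand-in for a vector class like Processing's PVector,
// showing the operations such a class bundles together.
class Vec3 {
    double x, y, z;

    Vec3(double x, double y, double z) {
        this.x = x; this.y = y; this.z = z;
    }

    // component-wise addition: this + other
    Vec3 add(Vec3 other) {
        return new Vec3(x + other.x, y + other.y, z + other.z);
    }

    // subtraction gives the difference between two points in space
    Vec3 sub(Vec3 other) {
        return new Vec3(x - other.x, y - other.y, z - other.z);
    }

    // length of the vector
    double mag() {
        return Math.sqrt(x * x + y * y + z * z);
    }
}

public class Vec3Demo {
    public static void main(String[] args) {
        Vec3 a = new Vec3(1, 2, 3);
        Vec3 b = new Vec3(4, 6, 3);
        Vec3 diff = b.sub(a);           // (3, 4, 0)
        System.out.println(diff.mag()); // prints 5.0
    }
}
```

Having these operations on the class means you never write the component-by-component arithmetic yourself, which matters when you loop over hundreds of thousands of depth points.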
Here is how you could use Processing to draw a point cloud:
/*
http://shop.oreilly.com/product/0636920020684.do
 Making Things See by Greg Borenstein
 */
import processing.opengl.*;
import SimpleOpenNI.*;
SimpleOpenNI kinect;
void setup() {
  size(1024, 768, OPENGL);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}
void draw() {
  background(0);
  kinect.update();
  // prepare to draw centered in x-y
  // pull it 1000 pixels closer on z
  translate(width/2, height/2, -1000); 
  // flip y-axis from "realWorld"
  //otherwise your point cloud will be upside down
  //remember that in Processing the top is 0
  rotateX(radians(180)); 
  
  //points are colored using stroke()
  stroke(255);
  // get the depth data as 3D points
  PVector[] depthPoints = kinect.depthMapRealWorld();
  for (int i = 0; i < depthPoints.length; i++) {
    // get the current point from the point array
    PVector currentPoint = depthPoints[i];
    // draw the current point
    point(currentPoint.x, currentPoint.y, currentPoint.z);
  }
  }
}

Because you are now working in 3D space, the image no longer fits neatly into a rectangle. At the edges of the Kinect's field of view everything gets cut off, regardless of whether an object is in the foreground or the background, and anything cut off from the image doesn't get represented in the point cloud.

You may also notice that you can see through objects when they are represented as a point cloud. The density of the point cloud is limited by the depth image's resolution: while you have many points, you do not have an infinite number of them.

Rotating and Translating

Because translations and rotations applied to the scene are cumulative, you'll need to use pushMatrix() and popMatrix() to isolate a set of transformations so that they don't affect anything drawn outside of them.
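The idea behind pushMatrix() and popMatrix() can be sketched in plain Java as a stack of saved states: push saves the current transformation, pop restores it, so anything drawn in between can move freely without affecting later drawing. This is an illustrative model only (the names and the one-offset "matrix" are assumptions, not the Processing API, whose real matrix also carries rotation and scale):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal model of how pushMatrix()/popMatrix() isolate transformations.
// Here the "matrix" is just a 2D translation offset.
public class MatrixStackDemo {
    static double offsetX = 0, offsetY = 0;            // current "matrix"
    static Deque<double[]> stack = new ArrayDeque<>(); // saved states

    static void pushMatrix() {
        stack.push(new double[] { offsetX, offsetY });
    }

    static void popMatrix() {
        double[] saved = stack.pop();
        offsetX = saved[0];
        offsetY = saved[1];
    }

    static void translate(double dx, double dy) {
        // translations are cumulative, just like in Processing
        offsetX += dx;
        offsetY += dy;
    }

    public static void main(String[] args) {
        pushMatrix();
        translate(100, 50);   // affects only what is drawn before popMatrix()
        // ... draw the point cloud here ...
        popMatrix();
        // back to the untranslated state:
        System.out.println(offsetX + ", " + offsetY); // prints 0.0, 0.0
    }
}
```

Without the push/pop pair, the translate(100, 50) would keep shifting everything drawn afterward, which is exactly why cumulative transformations need to be isolated.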

To make your point cloud interactive, read the mouse position inside draw() and use it to set the rotation each frame.
/*
http://shop.oreilly.com/product/0636920020684.do
Making Things See by Greg Borenstein
 */
import processing.opengl.*;
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  //3D space
  size(1024, 768, OPENGL);
  
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}
void draw() {
  background(0);
  kinect.update();
  
  // prepare to draw centered in x-y
  // pull it 1000 pixels closer on z
  translate(width/2, height/2, -1000);
  
  // flip the point cloud vertically:
  rotateX(radians(180));
  
  // move center of rotation
  // to inside the point cloud
  translate(0, 0, 1000);
  
  // rotate around the y-axis based on the mouse position;
  // map() converts mouseX from the range 0..width to -PI/2..PI/2
  rotateY(map(mouseX, 0, width, -PI/2, PI/2));
  stroke(255);
  
  PVector[] depthPoints = kinect.depthMapRealWorld();
  
  // notice: "i+=10"
  // only draw every 10th point to make things faster
  for (int i = 0; i < depthPoints.length; i+=10) {
    PVector currentPoint = depthPoints[i];
    point(currentPoint.x, currentPoint.y, currentPoint.z);
  }
}
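The i += 10 in the loop above is plain array striding: skip nine of every ten points to cut the drawing work per frame. A rough sketch of the arithmetic in plain Java (the 640×480 point count matches a typical Kinect depth map, but the numbers here are only illustrative):

```java
// Sketch of the "draw every 10th point" speed-up from the loop above.
public class StrideDemo {
    public static void main(String[] args) {
        int pointCount = 640 * 480;  // one point per depth pixel
        int step = 10;               // same as i += 10 in the sketch

        int drawn = 0;
        for (int i = 0; i < pointCount; i += step) {
            drawn++;                 // stand-in for point(x, y, z)
        }
        // 307200 points shrink to 30720: roughly a tenth of the work
        System.out.println(drawn);   // prints 30720
    }
}
```

Increasing the step makes the sketch faster but the cloud sparser, so pick the largest step that still looks acceptable.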

Play with the sketch. The Kinect only sees the surfaces of objects that face it, so the point cloud is a distribution of points on those surfaces; you won't get a full 3D model.