
Camera

The HoloPlay API does not currently have built-in functions for camera control. The assumption here is that cameras in OpenGL can be implemented in a number of ways, so it's better to provide general instructions for camera manipulation that can be applied in any circumstance. If a built-in camera that handles this automatically would be preferable, email or message us and let us know!

The goal with the camera is to emulate what the human eye would see if the 3D scene were real in physical space. The easiest way to think of this is imagining that the screen at the base of the Looking Glass were a window pane: a flat rectangular portal through which we're viewing the 3D objects.

Field of View

The standard model Looking Glass screen is roughly 4.75" tall. If we assume the average viewing distance for a user sitting at their desk is about 36", our field of view should be about 14°. There is no single correct answer, since it all depends on your expected user's distance from the Looking Glass, but we've found the most success using this figure.

[Figure: field of view]

View Cone

The Looking Glass has a valid viewing angle of about 40-50° total, or 20-25° from center. We try to emulate this in software by starting our view rendering at -20° from center and sweeping to +20° from center.
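As a quick sketch of that sweep (a hypothetical Python helper; the view count of 45 matches the quilt settings used in the pseudocode later):

```python
def view_angles(view_cone_deg=40.0, total_views=45):
    """Evenly spaced view angles, in degrees, from -viewCone/2 to +viewCone/2."""
    half = view_cone_deg / 2.0
    step = view_cone_deg / (total_views - 1)
    return [-half + step * i for i in range(total_views)]

angles = view_angles()
# the first view is at -20°, the last at +20°, the middle view near 0°
```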

[Figure: view cone]

The most intuitive approach might be to choose a pivot point and let the camera revolve 40° around it, but that produces an effect called toe-in, which is not what we want. Remember the window pane analogy: if there were a window grille at the same depth as the screen, we'd want it drawn flat on the screen from all viewing angles, not rotated at the sides.

[Figures: offset (no toe-in) vs. toe-in]

Frustum

To achieve this, we want to move the camera horizontally (change the view matrix) and shift the frustum in the opposite direction (change the projection matrix). We also want to keep the near clipping plane close to the real depth of the Looking Glass; it's okay to let it come forward a little, but the effect quickly becomes unbearable if it extends too far out. The far clipping plane can recess a bit more, because the difference between views is less pronounced farther back.
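A minimal sketch of what the projection-matrix side of this looks like, assuming an OpenGL-style perspective matrix (written row-major here for readability; the function name and parameters are illustrative, not part of the HoloPlay API):

```python
import math

def offset_projection(fov, aspect, near, far, offset, camera_size):
    """Standard perspective matrix with the horizontal frustum shift
    applied at row 0, column 2 (the term that skews the frustum sideways)."""
    f = 1.0 / math.tan(fov / 2.0)
    m = [[f / aspect, 0.0, 0.0, 0.0],
         [0.0, f, 0.0, 0.0],
         [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
         [0.0, 0.0, -1.0, 0.0]]
    # shift the frustum opposite to the camera's horizontal translation
    m[0][2] += offset / (camera_size * aspect)
    return m

proj = offset_projection(fov=0.244, aspect=1.6, near=0.1, far=100.0,
                         offset=-14.84, camera_size=5.0)
```

The camera translation itself goes into the view matrix (a simple horizontal offset), which is shown in the pseudocode further down.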

[Figure: frustums]

The focal plane, or convergence plane, is where all the views converge. This virtual plane is analogous to the physical screen at the base of the Looking Glass. When framing subject matter to be displayed in the Looking Glass, it's best to center it within this focal plane, because that's where the subject will appear as crisp and in focus as possible.

[Figure: offset]

It's important to keep that in mind when choosing an approach to positioning the camera and calculating the offset. A number of these values are determined by one another, so we want to control the most useful ones and let the rest follow; the most useful are the camera size and the focal plane position. The FOV was already set previously, based on our imagined average user distance.

[Figure: camera offset]

Given a value Camera Size (which is the vertical radius of the focal plane), the FOV, and the center position of the focal plane, we can determine how far back the camera should be positioned locally on the z axis. Once we know that, we can also determine what the offset should be, given the offset angle for each view.
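Plugging in the figures from earlier (the same constants the pseudocode uses; the negative sign follows its convention of placing the camera behind the focal plane on the z axis):

```python
import math

fov = 0.244          # 14° in radians
view_cone = 0.698    # 40° in radians
camera_size = 5.0    # vertical radius of the focal plane

# distance from the focal plane back to the camera along the local z axis
camera_distance = -camera_size / math.tan(fov / 2.0)   # roughly -40.8 units

# the largest horizontal offset, at the edge of the view cone
max_offset = camera_distance * math.tan(view_cone / 2.0)
```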

The pseudocode would look something like this:

fov = 0.244 // 14° in radians
viewCone = 0.698 // 40° in radians
cameraSize = 5 // for example
aspectRatio = 1.6 // derived from calibration's screenW / screenH
totalViews = 45 // derived from the quilt settings
focalPosition = (0, 0, 0) // the center of the focal plane

cameraDistance = -cameraSize / tan(fov / 2)
cameraPosition = focalPosition + (0, 0, cameraDistance)

for (int view = 0; view < totalViews; view++)
{
    // start at -viewCone * 0.5 and go up to +viewCone * 0.5
    offsetAngle = (view / (totalViews - 1.0) - 0.5) * viewCone

    // calculate the offset
    offset = cameraDistance * tan(offsetAngle)

    // modify the view matrix (position)
    viewMatrix[0, 3] += offset

    // modify the projection matrix, relative to the camera size and aspect ratio
    projectionMatrix[0, 2] += offset / (cameraSize * aspectRatio)

    // render and copy the view to the quilt
    render()
    copyViewToQuilt()

    // reset view and projection matrices
    resetViewMatrix()
    resetProjectionMatrix()

}

drawLightfield()
// done!
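For a concrete, self-contained version, the same loop can be sketched in Python, reducing each view to the two numbers the matrices actually need (the render and quilt steps are stubbed out; the function and variable names are illustrative):

```python
import math

def compute_view_offsets(fov=0.244, view_cone=0.698, camera_size=5.0,
                         aspect_ratio=1.6, total_views=45):
    """For each view, return (viewOffset, projOffset): the horizontal camera
    translation and the matching frustum shift from the pseudocode above."""
    camera_distance = -camera_size / math.tan(fov / 2.0)
    results = []
    for view in range(total_views):
        # sweep from -viewCone/2 to +viewCone/2
        offset_angle = (view / (total_views - 1) - 0.5) * view_cone
        offset = camera_distance * math.tan(offset_angle)
        # in a real renderer: viewMatrix[0][3] += offset,
        # projectionMatrix[0][2] += offset / (cameraSize * aspectRatio),
        # then render() and copyViewToQuilt()
        results.append((offset, offset / (camera_size * aspect_ratio)))
    return results

offsets = compute_view_offsets()
```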

Of course, there are ways to do this more efficiently, but this is one of the more straightforward ways to set up multi-view rendering. Feel free to experiment and adapt this approach to your project's needs!