The goal with the camera is to emulate what the human eye would see if the 3D scene were real in physical space. The easiest way to think of this is to imagine that the screen at the base of the Looking Glass is a window pane: a flat rectangular portal through which we're viewing the 3D objects.

The 8.9" Looking Glass screen is roughly 4.75" tall. If we assume the average viewing distance for a user sitting at their desk is about 36", our field of view should be about 14°. There is no single correct answer, since it depends on how far your expected user sits from the Looking Glass, but we've found the most success using this figure.
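As a rough sanity check on where a figure like this comes from: a screen of height *h* viewed from distance *d* subtends a vertical angle of 2·atan(*h* / 2*d*), which for these measurements works out to about 7.5°. The 14° above is a wider, empirically tuned value rather than a strict geometric one (as the paragraph notes, there is no single correct answer). The helper below is just an illustration of the geometry, not part of any SDK:

```cpp
#include <cmath>

// Illustrative helper: the vertical angle (in degrees) subtended by a screen
// of height `screenHeight` viewed from `viewingDistance` (same units).
double subtendedFovDegrees(double screenHeight, double viewingDistance) {
    const double pi = std::acos(-1.0);
    double halfAngle = std::atan((screenHeight / 2.0) / viewingDistance);
    return 2.0 * halfAngle * 180.0 / pi;
}
```

`subtendedFovDegrees(4.75, 36.0)` returns roughly 7.5; a user would need to sit about 19" away for the screen to actually subtend 14°.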

The Looking Glass has a valid viewing angle of about 40-50° total, or 20-25° in each direction from center. Due to the refractive properties of the glass itself, the digital view cone that we find renders best is 35°. We emulate this in software by starting our view rendering at -17.5° from center and sweeping to +17.5° from center.
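That sweep is just an even sampling of the view cone. As a sketch (the names and view count here are illustrative, not from the HoloPlay SDK):

```cpp
#include <cmath>
#include <vector>

// Illustrative: evenly spaced view angles in degrees, running from
// -viewCone/2 to +viewCone/2. The real number of views depends on the
// device and SDK.
std::vector<double> viewAngles(int totalViews, double viewConeDegrees) {
    std::vector<double> angles;
    for (int i = 0; i < totalViews; ++i) {
        double t = (totalViews > 1)
                       ? static_cast<double>(i) / (totalViews - 1)
                       : 0.5;
        angles.push_back((t - 0.5) * viewConeDegrees);
    }
    return angles;
}
```

With a 35° cone, the first view sits at -17.5°, the last at +17.5°, and the middle view (for an odd count) at exactly 0°.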

The most intuitive approach to this might be to choose a pivot point and let the camera revolve 35° around it, but that produces an effect called **toe-in**, which is not what we want. Remember the window pane analogy: if there were a window grill at the same depth as the screen, we'd want it to be drawn flat on the screen from all viewing angles, not rotated from the sides.

*(Figure: offset rendering vs. toe-in rendering)*

To achieve this, we want to move the camera horizontally (changing the view matrix) and shift the frustum in the opposing direction (changing the projection matrix). We also want to keep the near clipping plane close, to emulate the real depth of the Looking Glass; it's okay to let content come forward a little, but it quickly becomes unbearable if it comes out too far. For the far clipping plane we can get away with a little more recess, because the difference between views is less intense far behind the focal plane.
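To see why the offset-plus-shear combination keeps content at screen depth fixed, here is a minimal numeric sketch (plain C++, no GLM; all names are illustrative). It projects a point at the center of the focal plane and computes its normalized-device x coordinate for a given view offset:

```cpp
#include <cmath>

// Normalized-device x of the focal-plane center for one view, using the
// offset-and-shear recipe: translate the view by `offset` in x and add
// offset / (cameraSize * aspect) to the projection's third column, row 0.
double ndcXOfFocalCenter(double offset, double cameraSize,
                         double fovRadians, double aspect) {
    // Camera sits this far behind the focal plane (negative: toward -z).
    double cameraDistance = -cameraSize / std::tan(fovRadians / 2.0);
    // View-space position of the world-space focal center (the origin).
    double vx = offset;
    double vz = cameraDistance;
    // Relevant perspective-projection terms (OpenGL conventions).
    double p00 = 1.0 / (aspect * std::tan(fovRadians / 2.0)); // x scale
    double shear = offset / (cameraSize * aspect);            // P[2][0] term
    double clipX = p00 * vx + shear * vz;
    double clipW = -vz; // standard perspective: w = -z_view
    return clipX / clipW;
}
```

For any offset the result is 0: the focal-plane center lands dead-center on screen from every view, while points in front of or behind the focal plane shift between views, producing the parallax we want.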

The focal plane, or zero parallax plane, is where all the views converge. This virtual plane is analogous to the physical screen at the base of the Looking Glass. When framing subject matter to be displayed in the Looking Glass, it's best to center it within this focal plane, because that's where the subject will appear as crisp and in focus as possible.

It's important to keep that in mind when choosing an approach to positioning the camera and calculating the offset. A number of these values are determined by one another, so we want to control the most useful ones and let the rest be derived: the most useful being the camera size and the focal plane position. The FOV was already set previously, based on our imagined average user distance.

Given the camera size (the vertical radius of the focal plane), the FOV, and the center position of the focal plane, we can determine how far back the camera should be positioned along its local z axis. Once we know that, we can also determine what the offset should be for each view, given that view's offset angle.
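The trigonometry behind those two steps can be sketched with a couple of free functions (illustrative names, not from any SDK):

```cpp
#include <cmath>

// Distance from the camera to the focal plane such that the plane's
// vertical radius equals `cameraSize` at the given vertical field of view.
double cameraDistanceFor(double cameraSize, double fovRadians) {
    return cameraSize / std::tan(fovRadians / 2.0);
}

// Horizontal offset that views the focal-plane center at `offsetAngle`.
double offsetFor(double cameraDistance, double offsetAngleRadians) {
    return cameraDistance * std::tan(offsetAngleRadians);
}
```

For example, a camera size of 5 with a 14° FOV puts the camera about 40.7 units back from the focal plane, and the ±17.5° extreme views then need an offset of about ±12.8 units.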

The code would look something like this:

```cpp
// Set up the virtual camera (view and projection matrices) for one view.
void HoloPlayContext::setupVirtualCameraForView(int currentViewIndex,
                                                glm::mat4 currentViewMatrix)
{
  // Field of view is always 14°, per the measurements above.
  const float fov = glm::radians(14.0f);

  // The camera distance is constant as long as the camera size stays the same.
  float cameraDistance = -cameraSize / tan(fov / 2.0f);

  // Start at -viewCone * 0.5 and sweep up to +viewCone * 0.5.
  float offsetAngle = (currentViewIndex / (qs_totalViews - 1.0f) - 0.5f) *
                      glm::radians(viewCone);

  // Horizontal offset for this view.
  float offset = cameraDistance * tan(offsetAngle);

  // Move the camera by the offset (x) and camera distance (z).
  viewMatrix = glm::translate(currentViewMatrix,
                              glm::vec3(offset, 0.0f, cameraDistance));

  float aspectRatio = getWindowRatio();

  // The base projection is the same for every view...
  projectionMatrix = glm::perspective(fov, aspectRatio, 0.1f, 100.0f);

  // ...then sheared in the opposing direction so the views stay parallel.
  projectionMatrix[2][0] += offset / (cameraSize * aspectRatio);
}
```

Feel free to experiment and adapt this approach to your project's needs!