Developing for Ultraleap Hand Tracking on Looking Glass

This tutorial is provided for your convenience/rapid experimentation, but we do not regularly update the samples. As such, we highly recommend you follow the documentation provided by Leap Motion for production.

First, make sure you have all the right software.

How does the Leap Motion Controller work?

Unlike depth cameras like the Kinect or RealSense, the Leap Motion Controller doesn't actually generate a depth map. It also uses infrared emitters and receivers, but only to capture an isolated, high-contrast image of the hands. It then feeds the last few frames into some pretty amazing computer vision operations to get a plausible idea of what all your hand bones are doing. That happens in the continuously running LEAP service (the runtime), which the LEAP Unity SDK then queries for tracking information. The SDK itself is responsible for higher-level tasks like generating the colliders and meshes, pinch detection, and so on.
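
If you want to peek at that pipeline yourself, here's a minimal sketch that polls the service provider for the current frame and logs each tracked hand. It assumes the SDK's LeapServiceProvider, Frame, and Hand types; exact property names can shift a bit between SDK versions.

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Minimal sketch: poll the service provider each frame and log every tracked hand.
// Assumes the Leap Unity SDK's LeapServiceProvider / Frame / Hand types; exact
// property names can differ slightly between SDK versions.
public class HandCountLogger : MonoBehaviour
{
    // Drag in the "Leap Motion Controller" object that holds the service provider.
    [SerializeField] private LeapServiceProvider provider;

    private void Update()
    {
        Frame frame = provider.CurrentFrame; // latest tracking frame from the LEAP service
        if (frame == null) return;

        foreach (Hand hand in frame.Hands)
        {
            Debug.Log((hand.IsLeft ? "Left" : "Right") + " hand, palm at " + hand.PalmPosition);
        }
    }
}
```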

What is Ultraleap hand tracking good for in the Looking Glass?

The most straightforward use case for hand tracking may also be the most powerful: using the collider hands to interact with simple 3D physics never gets old. More sophisticated interaction is possible with (relatively solid) pinch detection and contextual UIs that pop out and attach to your hands.

The LEAP runtime

If the runtime doesn't appear to be running in the system tray, you can search for the "Leap Motion Control Panel" program, which should start it up. The tray icon turns green when the LEAP is detected, and you can double-click it to open the control panel. If your Leap Motion Controller has been sitting around since 2013 (or if you just want to make sure it's performing optimally), I'd advise running the calibrator on the Troubleshooting tab. If you ever want to check the version, that's in the About tab.

The LEAP SDK

I'd strongly recommend downloading the example scene, which I've configured to be a pretty optimal starting point for LKG + Ultraleap. You could also start from one of the example scenes that come with the LEAP SDK, but most are set up for VR, with some stuff we don't want. Just don't try to set it all up from scratch; that's asking for trouble!

To get it running, import the LEAP SDK, the HoloPlay Unity Plugin, and the example scene into one Unity project. Then open the LEAP+LKG scene. You should see this:

LeapServiceProvider/Controller

The central component of the LEAP SDK is LeapServiceProvider.cs, which usually lives on a "Leap Motion Controller" gameobject. Note that this is distinct from LeapXRServiceProvider.cs, which is for headset-mounted tracking. The service provider draws a cone gizmo with white lines representing the tracking space of the LEAP, which extends 2-3 feet or so in meatspace. Scaling the controller object scales the cone, and thus the range of motion of the hands in the scene. If the hands are a child of this object, everything scales uniformly, letting you fit the hands to your scene.

The tracking space

As for positioning and scaling the tracking space, we generally let the visible area of the Looking Glass correspond to about a foot of physical height, starting at about 4 inches from the LEAP (the minimum for solid tracking). A more subtle consideration is the angle relative to the Looking Glass, which sits at a 61-degree angle to the table. Picture it this way: when you move your meat hands around on a plane parallel to your table, do you want the 3D hands to look like they're moving parallel to the table as well, or parallel to the bottom of the Looking Glass block? A conceptually accurate -29 degree offset always feels like too much for some reason, so I usually set it somewhere in between, -15 degrees or so.

Angled 0 degrees

Angled 15 degrees

Angled 30 degrees

If you're moving or animating the HoloPlay Capture, you can make the controller a child of it so the hands stay localized in-frame as the camera moves around. Keep in mind that changing the capture size will require you to manually scale and reposition the controller (and adjust the pinch distance; see below).
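
If you'd rather drive that tilt and scale from a script than eyeball it in the inspector, here's a rough sketch. The captureSize field is just a hypothetical stand-in for however you read the size off your HoloPlay Capture in your plugin version, and the tilt value mirrors the roughly -15 degree compromise described above.

```csharp
using UnityEngine;

// Minimal sketch of the tilt/scale setup described above. Assumes this sits on the
// "Leap Motion Controller" object, parented under the HoloPlay Capture. The
// captureSize field is a hypothetical stand-in for however you read the size off
// your HoloPlay Capture component.
public class LeapControllerFraming : MonoBehaviour
{
    [SerializeField] private float captureSize = 1f;    // mirror your capture's size here
    [SerializeField] private float tiltDegrees = -15f;  // compromise between 0 and the "true" -29
    [SerializeField] private float scalePerUnitSize = 1f;

    private void OnValidate()
    {
        // Tilt the tracking cone so table-parallel hand motion reads well in the display.
        transform.localRotation = Quaternion.Euler(tiltDegrees, 0f, 0f);

        // Scale the controller (and its child hands) with the capture size so the hands
        // keep filling roughly the same portion of the visible volume. Remember the
        // pinch trigger distance still needs a matching manual tweak (see below).
        transform.localScale = Vector3.one * captureSize * scalePerUnitSize;
    }
}
```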

The hands

The hands in the example scene are made up of a few separate components:

  • Graphical (meshes, renderers)

  • Physical (colliders, rigidbodies)

  • Attachments (transforms for connecting things to joints)

  • Pinch detector (pinchy pinch)

Graphical hands:

There are basic Capsule Hands made up of primitives, and the somewhat nicer rigged meshy ones called LoPoly Rigged Hands, which I use for everything.

Physics hands:

There are hands that use box colliders and hands that use capsule colliders. I think the capsules are almost always superior, unless you happen to have some particularly boxy fingers? Some of the prefabs contain wrist/arm colliders too, but only the capsule hands can actually draw meshes for the wrist/arm, so I'd suggest simply deleting the "Forearm" collider objects if you're using the LoPoly Rigged Hands. NOTE: If you ever do want to use these alternate prefabs, pull one of each hand (L/R) into the scene and replace the old references on the HandModels object.

Attachment hands:

These are very useful if you want to have something following or pinned onto a specific hand joint. ALWAYS USE THESE instead of trying to attach something directly to the graphics or physics hands; the SDK sometimes regenerates those, so any attachments can get lost.

The Attachment Hands component has a nice little window where you can toggle which bones have attachments, which summons new objects into existence. Any children you add to these will then be attached, and will pivot around the attachment point unless you zero out their local position and rotation.

In many cases, the palm transform is the smoothest/most reliable point, but there are plenty of times when you want something to follow or interact with only a fingertip. For that sort of thing, it's usually way easier to set up specific colliders and functionality on an attached object rather than on a part of the hand prefab itself.
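
As a concrete example, here's a minimal sketch that pins an object to one of the generated attachment transforms at runtime and zeroes out its local position and rotation:

```csharp
using UnityEngine;

// Minimal sketch: pin an object onto one of the transforms generated by Attachment
// Hands. "attachmentPoint" is whichever joint transform you toggled on (palm, index
// tip, etc.); drag it in via the inspector.
public class PinToAttachment : MonoBehaviour
{
    [SerializeField] private Transform attachmentPoint;

    private void Start()
    {
        transform.SetParent(attachmentPoint, worldPositionStays: false);

        // Zero out the local transform so the object sits exactly on the attachment
        // point instead of pivoting around it at an offset.
        transform.localPosition = Vector3.zero;
        transform.localRotation = Quaternion.identity;
    }
}
```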

There's also a script called AttachmentHandEnableDisable.cs, which is critical in many cases: it enables/disables the attachments to match the state of the hand they're on. It's usually preferable to turn off an attachment when its hand disappears, rather than having it stick at the last tracked position. You need one for each hand, and each needs a reference to its hand. (Yes, I do think this simple functionality should just be a bool in AttachmentHands.cs instead.)
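
If you'd rather roll that into a single script, here's a rough sketch of the same behavior. Note this is not the SDK's AttachmentHandEnableDisable.cs, just an equivalent approach that polls the service provider to see whether a given hand is currently tracked.

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Rough sketch of the same idea rolled into one script: hide a target object whenever
// its hand isn't in the current tracking frame. This is NOT the SDK's
// AttachmentHandEnableDisable.cs, just an equivalent behaviour built on the frame data.
public class HideWhenHandLost : MonoBehaviour
{
    [SerializeField] private LeapServiceProvider provider;
    [SerializeField] private GameObject target;      // e.g. an object attached to a joint
    [SerializeField] private bool isLeftHand = true; // which hand this target belongs to

    private void Update()
    {
        bool tracked = false;
        Frame frame = provider.CurrentFrame;
        if (frame != null)
        {
            foreach (Hand hand in frame.Hands)
            {
                if (hand.IsLeft == isLeftHand) { tracked = true; break; }
            }
        }

        if (target.activeSelf != tracked) target.SetActive(tracked);
    }
}
```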

Pinching:

PinchDetector.cs references a HandModel (either physics or graphics) and triggers UnityEvents on pinch and release. The trigger distance is in world space, so you'll need to change it if you scale the controller. If the "ControlsTransform" bool is true, the PinchDetector object moves around relative to the hand, which is a useful way to get the pinch position: the midpoint between the thumb and index fingertips.
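
Here's a minimal sketch of wiring that up: it listens to the detector's OnActivate/OnDeactivate UnityEvents (from the SDK's detection utilities) and reads the pinch position off the detector's transform, assuming ControlsTransform is enabled.

```csharp
using Leap.Unity;
using UnityEngine;

// Minimal sketch: react to pinch/release and read the pinch point. Assumes the
// detection utilities' OnActivate/OnDeactivate UnityEvents on PinchDetector, and that
// "ControlsTransform" is enabled so the detector's transform follows the pinch point.
public class PinchLogger : MonoBehaviour
{
    [SerializeField] private PinchDetector pinchDetector;

    private void OnEnable()
    {
        pinchDetector.OnActivate.AddListener(OnPinch);
        pinchDetector.OnDeactivate.AddListener(OnRelease);
    }

    private void OnDisable()
    {
        pinchDetector.OnActivate.RemoveListener(OnPinch);
        pinchDetector.OnDeactivate.RemoveListener(OnRelease);
    }

    private void OnPinch()
    {
        // With ControlsTransform on, this is the midpoint between thumb and index tips.
        Debug.Log("Pinched at " + pinchDetector.transform.position);
    }

    private void OnRelease()
    {
        Debug.Log("Pinch released");
    }
}
```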

Final notes

  • The attachment and pinch scripts don't actually need to be their own objects if you prefer to condense the whole setup.

  • There's a key binding in the LEAP SDK that will override the HoloPlay Capture's ctrl+E toggle binding.

  • We've occasionally seen the Leap Motion Controller struggle when going through USB hubs or questionable cables.

  • It's also sensitive to infrared blasts from other equipment (Vive lighthouses, depth cameras, Roombas) and from direct sunlight, so keep that in mind.

  • When installing the Leap Motion Controller on a desk or other horizontal surface, check "Auto-orient Tracking" in the Leap Motion Control Panel to prevent mistracking; this setting assumes the LED is at the front. Also, in the Visualizer, make sure you're in "Desktop mode".
