Holograms in the real world
Capturing the real world in 3D is exciting, but can be challenging. The documentation below will guide you along with a few suggested methods for capturing the real world in 3D.
The latest iPhone Pro models use LiDAR to capture depth information along with color data, and apps like Record3D make it easy to record depth video. You can also use dedicated depth cameras like the Azure Kinect, Intel RealSense, or Luxonis OAK, which capture depth using time-of-flight sensors or stereo disparity. The Azure Kinect is one of the highest-quality depth cameras we've tested, and its 4K color image sensor makes it ideal for capturing people. The Azure Kinect also works with software like Depthkit, which lets you take full advantage of its high resolution.
The RealSense and Luxonis cameras tend to be cheaper and smaller than the Azure Kinect, so if you're just getting started or want to experiment, they may be a better starting point.
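To give a feel for how stereo-disparity cameras recover depth, here's a minimal sketch of the standard pinhole stereo relation, depth = focal length × baseline ÷ disparity. The function name and numbers are illustrative only, not the calibration of any particular camera; real devices report their own calibrated intrinsics.

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a stereo disparity (in pixels) to depth (in meters).

    Illustrative pinhole-camera math only -- cameras like the
    RealSense or OAK provide calibrated values to use instead.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Made-up example: 800 px focal length, 7.5 cm stereo baseline,
# a feature shifted 60 px between the two views sits about 1 m away.
print(disparity_to_depth(60, 800, 0.075))
```

The key intuition: nearby objects shift a lot between the two lenses, distant objects barely move, so depth falls off as one over disparity.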
The simplest way to capture a light field for a Looking Glass is to move a camera from left to right. This can be done with a normal camera rail, but it requires a good bit of setup. Light fields are unique in that they capture the scene from all the perspectives you end up seeing on a Looking Glass, so effects like reflections and refractions are preserved.
Learn more about capturing light fields here.
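As a rough sketch of what a rail capture involves, the snippet below computes evenly spaced horizontal camera offsets across a baseline, centered on the subject. The function name and the 40 cm / 5-view numbers are hypothetical illustrations, not parameters from any Looking Glass tool; real captures typically use many more views.

```python
def rail_positions(num_views, baseline_m):
    """Evenly spaced horizontal camera offsets (in meters),
    centered at zero, for a left-to-right rail capture.
    Illustrative sketch only."""
    if num_views < 2:
        return [0.0]
    step = baseline_m / (num_views - 1)
    return [-baseline_m / 2 + i * step for i in range(num_views)]

# Hypothetical example: 5 views spread across a 40 cm rail.
print([round(x, 3) for x in rail_positions(5, 0.4)])  # [-0.2, -0.1, 0.0, 0.1, 0.2]
```

Each offset is one viewpoint the Looking Glass can later show, which is why a smooth, evenly spaced sweep matters more than raw photo count.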
Photogrammetry is a technique that involves taking a large number of photos of an object or scene from many different angles and perspectives. Software then uses these photos to reconstruct what the scene looks like in 3D. This video by Azad goes over RealityCapture and Polycam, and compares the results from these methods.
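At the heart of that reconstruction is triangulation: once the software knows where two photos were taken from, it can intersect the viewing rays to the same feature and recover its 3D position. Here's a toy midpoint-triangulation sketch in plain Python; real photogrammetry packages do this across thousands of features and refine everything jointly, so treat this as an illustration of the geometry, not how any specific tool is implemented.

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(c1, d1, c2, d2):
    """Estimate a 3D point seen from two cameras (midpoint method).

    c1, c2: camera centers; d1, d2: viewing-ray directions.
    Finds the closest points on the two rays and averages them.
    Toy illustration of the geometry behind photogrammetry.
    """
    w = tuple(b - a for a, b in zip(c1, c2))       # vector from cam1 to cam2
    a, b, c = _dot(d1, d1), _dot(d1, d2), _dot(d2, d2)
    det = a * c - b * b                            # zero only for parallel rays
    t = (_dot(w, d1) * c - _dot(w, d2) * b) / det  # parameter along ray 1
    s = (_dot(w, d1) * b - _dot(w, d2) * a) / det  # parameter along ray 2
    p = tuple(ci + t * di for ci, di in zip(c1, d1))
    q = tuple(ci + s * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p, q))

# Two cameras 2 m apart, both looking at a point 5 m in front of them.
print(triangulate_midpoint((-1, 0, 0), (1, 0, 5), (1, 0, 0), (-1, 0, 5)))
```

The hard part in practice is estimating the camera poses (c and d for every photo) in the first place, which is exactly what the feature-matching stage of these tools does.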
Neural Radiance Fields (NeRFs) are a cutting-edge area of innovation in computer vision. They use similar techniques to photogrammetry to determine the pose of each input photo, then use machine learning to interpolate between those poses.
The results are quite impressive and allow for a whole new way to capture scenes with pretty much any camera. Tools like Luma AI and NVIDIA's Instant NeRF make it possible to turn your videos into 3D scenes that you can then turn into holograms. Definitely check out Wren's video on Neural Radiance Fields here if you want to learn more.
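Under the hood, a NeRF renders an image by marching along each camera ray, sampling a density and color at each step, and alpha-compositing them front to back. The toy scalar version below illustrates that compositing step; the function name and sample values are made up for illustration, and real NeRFs do this per color channel over millions of rays.

```python
import math

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray, NeRF-style:
    each sample contributes T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i is the light remaining after earlier samples.
    Toy single-channel illustration."""
    transmittance = 1.0   # fraction of the ray not yet absorbed
    out = 0.0
    for sigma, color, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        out += transmittance * alpha * color
        transmittance *= 1.0 - alpha            # light left for later samples
    return out

# Empty space (density 0) contributes nothing; the first dense
# sample the ray hits dominates the final color.
print(composite_ray([0.0, 1e9], [1.0, 0.25], [1.0, 1.0]))
```

Training a NeRF is just adjusting the densities and colors the network predicts until rays rendered this way match the input photos.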
Photogrammetry and Neural Radiance Fields are great for capturing static scenes, but they don't currently allow for capturing dynamic motion. For motion you'd need a volumetric capture solution; apps like Volu work great if you just want to capture from your phone. If you're looking for higher production quality, studios like Microsoft's Mixed Reality Capture Studios make some of the best volumetric captures in the industry today. Check out this awesome WebXR demo of a soccer player captured with Microsoft's Mixed Reality Capture software below. Bonus: you can view it directly in a Looking Glass with WebXR.
A few hologram enthusiasts are building their own light field capture rigs using everything from arrays of smartphones and GoPros to custom camera circuitry. If you like the look of these, be sure to join our Discord to see even more cool creations from our community.