Smartphones could soon be able to generate photorealistic 3D holograms, thanks to an AI model developed by researchers at MIT. The model learns how best to generate holograms from a series of input images, and the technology could have applications for VR and AR headsets. The holograms can even be generated on a smartphone.
Unlike traditional 3D and VR displays, which merely produce the illusion of depth and can cause nausea and headaches, holographic displays can be viewed without eye strain. A major roadblock to holographic media is handling the data needed to actually generate a hologram: every hologram encodes a massive amount of data to create its apparent depth, so generating holograms typically requires enormous computing power. To make holographic technology more practical, the MIT team applied deep convolutional neural networks to the problem, creating a network capable of quickly generating holograms from input images.
The traditional approach to generating holograms slices an image into many chunks, computes a hologram for each chunk, and then uses physics simulations and a series of lookup tables to combine the chunks into a complete representation of the object or image, with the lookup tables marking the boundaries between chunks. Defining those boundaries with lookup tables is both time-consuming and computationally intensive.
According to IEEE Spectrum, the MIT team designed a different method of generating holograms. Using deep learning, they were able to split images into chunks that could be recombined into holograms using far fewer slices. The new technique takes advantage of the ability of convolutional neural networks to analyze images and separate them into discrete chunks, greatly reducing the total number of operations the system has to carry out.
To build their AI-powered hologram generator, the research team began by constructing a database of around 4,000 computer-generated images, each paired with a corresponding 3D hologram. A convolutional neural network was trained on this dataset, learning how each image related to its hologram and which image features were most useful for generating it. Once trained, the system could generate new holograms from unseen data that included depth information. That depth information is supplied by either lidar sensors or multi-camera setups and rendered as a computer-generated image. Some newer iPhones already include these components, meaning they could potentially generate holograms if linked to the right type of display.
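At a high level, the pipeline described above maps a four-channel RGB-D image (color plus depth) to a hologram, commonly represented as per-pixel amplitude and phase. The sketch below illustrates that input-to-output mapping with a single convolutional layer in NumPy. The layer shapes, random weights, and activation choices are placeholders for illustration only, not the MIT team's actual architecture.

```python
import numpy as np

def conv2d(image, kernels):
    """Valid-mode 2D convolution: image (H, W, C_in), kernels (k, k, C_in, C_out)."""
    k = kernels.shape[0]
    h, w, _ = image.shape
    out_h, out_w = h - k + 1, w - k + 1
    out = np.zeros((out_h, out_w, kernels.shape[3]))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + k, j:j + k, :]  # (k, k, C_in) window
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
rgbd = rng.random((32, 32, 4))                 # toy RGB-D input: color + depth
kernels = rng.standard_normal((3, 3, 4, 2)) * 0.1  # untrained placeholder weights

features = conv2d(rgbd, kernels)               # (30, 30, 2)
amplitude = np.abs(features[:, :, 0])          # hologram amplitude channel
phase = np.tanh(features[:, :, 1]) * np.pi     # phase squashed into [-pi, pi]
print(amplitude.shape, phase.shape)
```

A real network of this kind would stack many such layers and learn the kernel weights from image-hologram pairs like the roughly 4,000 in the MIT dataset; the point here is only the shape of the problem: a spatial map in, a two-channel spatial map out.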
The new AI-driven hologram system needs far less memory than classic methods. Running on a single commonly available GPU, it can generate full-color 3D holograms at 60 frames per second at a resolution of 1920 x 1080 while using around 620 kilobytes of memory. The researchers also ran the system on an iPhone 11, where it produced around one hologram per second, and on a Google Edge TPU, where it rendered two holograms per second. This suggests the system could be adapted to smartphones and to AR and VR devices more generally. It could also have applications in volumetric 3D printing and in the design of holographic microscopes.
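As a rough sanity check on those figures, the snippet below converts the reported frame rates into raw pixel throughput. It assumes, purely for illustration, that every device renders at the same 1920 x 1080 resolution; the article only states that resolution for the GPU.

```python
# Frame rates reported in the article; resolution assumed identical across
# devices for this back-of-the-envelope comparison.
WIDTH, HEIGHT = 1920, 1080
PIXELS_PER_FRAME = WIDTH * HEIGHT  # 2,073,600 pixels per hologram frame

rates = {"desktop GPU": 60, "Google Edge TPU": 2, "iPhone 11": 1}  # frames/sec
throughput = {device: fps * PIXELS_PER_FRAME for device, fps in rates.items()}

for device, px in throughput.items():
    print(f"{device}: {px / 1e6:.1f} M pixels/sec")
```

Even at one frame per second, a phone would be producing over two million hologram pixels per second, which helps explain why the compact 620-kilobyte model is significant for mobile hardware.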
In the future, the technology could be combined with eye-tracking hardware and software, enabling holograms to dynamically vary in resolution depending on where the user is looking.