A quick digression into a fun topic. A common limitation in present-day electronics is the size and 2D nature of the display. If the device must be mobile, the display is further constrained, and the constraint becomes especially severe in wearable electronics such as watches. Workarounds include projection and bringing the display closer to the eye, as in the newly re-emergent Google Glass and other AR applications. But even here we still seem to be constrained by a traditional 2D display paradigm, while sci-fi movies routinely tempt us with unaided (no special viewing devices required) holographic projections – Iron Man and Star Wars provide just a couple of recent examples.
Of course sci-fi and reality don’t have to connect, but the genre has been very productive in motivating real application advances, so why not here also? Following the movies, these could deliver room-sized projections (the Death Star plans or the internals of Jarvis) or small projections (R2D2’s projection of Princess Leia). By no longer tying image size to device size, we could make compute devices as inconspicuous as we please while delivering images of significantly greater size and richness (3D) than those we can view today on flat screens.
One nature-inspired development in this direction starts with fixed holograms, the kind usually seen on credit cards and used for security. These are built on the way Morpho butterfly wings display color: not through pigment but through diffraction and interference in the scales on the wings, which produces constructive interference only for blue light. Instead of using the conventional split-laser-beam technique, this method combines images taken from multiple perspectives, in a computed combination that sounds rather like a form of tomography, into a pattern of bumps and dips which can be written lithographically onto a sheet of plastic. The resulting hologram can be viewed in normal light, supports a full spectrum of colors and doesn’t require special viewing aids. This team (University of Utah) plans to extend the work to use liquid crystal modulators to support varying images. I imagine this direction could be interesting to the TI DLP group, which already dominates the projection world.
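To make the idea of “computing” a hologram a little more concrete, here is a minimal Python sketch of a classic computer-generated hologram: it sums the wavefronts from a few 3D scene points across a sampled plane and converts the resulting phase into a surface-relief height map (the bumps and dips). The wavelength, sample pitch, scene points and refractive index are all illustrative assumptions; this is not the Utah team’s multi-perspective algorithm, just the general flavor of the computation.

```python
import numpy as np

# Illustrative sketch only -- not the Utah team's algorithm. A classic
# computer-generated hologram sums the complex field contributed by each
# 3D scene point at every sample on the hologram plane, then converts the
# resulting phase into a surface-relief (bump/dip) height map.
wavelength = 532e-9            # assumed design wavelength (m)
k = 2 * np.pi / wavelength
pitch = 1e-6                   # assumed hologram sample pitch (m)
N = 512                        # hologram is N x N samples

# Hypothetical scene: a few points at different depths (x, y, z in meters)
points = np.array([[0.0,   0.0,  5e-3],
                   [2e-4, -1e-4, 7e-3],
                   [-3e-4, 2e-4, 4e-3]])

xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Superpose a spherical wavefront from each scene point
field = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r

# Convert phase to a physical relief depth: a full 2*pi of phase in a
# material of refractive index n corresponds to wavelength / (n - 1) of height.
n_material = 1.5               # assumed refractive index of the plastic sheet
phase = np.angle(field) + np.pi                        # 0 .. 2*pi
relief_nm = phase / (2 * np.pi) * wavelength / (n_material - 1) * 1e9
print(f"relief depth range: 0 .. {relief_nm.max():.0f} nm")
```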
HoloLamp introduced their AR projection system at CES this year, where they won an award. Again, no special visual aids are required, but this system already supports programmability and movement. There is some interesting and revealing technology in this solution. Apparently it uses face-tracking and motion parallax to create a 3D perception (it’s not clear how that would work for multiple viewers). They also claim it allows you to manipulate the 3D objects with your hands via stereo detection at the projector (an earlier report said that manipulation was only possible through the controller). Unsurprisingly, these guys use DLP ultra-short-throw projectors (again TI, I assume). HoloLamp is running on one seed round raised last year.
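A minimal sketch of how face-tracking plus motion parallax can fake depth on a flat projection: for a tracked eye position, each virtual 3D point is re-projected onto the surface along the line of sight, so the drawn position shifts as the head moves. The surface plane, eye positions and virtual point below are assumptions for illustration, not HoloLamp’s implementation.

```python
import numpy as np

# Minimal sketch, assuming a flat projection surface at z = 0 and a single
# tracked viewer. Motion parallax works by re-projecting each virtual 3D
# point onto the surface along the line from the viewer's eye, so the point
# appears to float above (or sink below) the surface as the viewer moves.
def project_for_viewer(eye, point):
    """Intersect the eye->point ray with the z = 0 projection plane."""
    eye, point = np.asarray(eye, float), np.asarray(point, float)
    t = eye[2] / (eye[2] - point[2])      # ray parameter where z reaches 0
    return eye + t * (point - eye)        # (x, y, 0) position to draw

# Hypothetical virtual object point floating 5 cm above the table surface
virtual_point = [0.00, 0.00, 0.05]

# As the tracked head moves, the drawn position shifts -- that shift is what
# the brain reads as depth.
for eye in ([0.0, -0.4, 0.5], [0.1, -0.4, 0.5]):
    print(eye, "->", project_for_viewer(eye, virtual_point)[:2])
```

The sketch also makes the multi-viewer question concrete: the re-projection is only correct for the tracked eye, so a second viewer at a different position would see a distorted image.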
Voxon Photonics, with their VoxieBox, takes a different approach. They sweep a projection surface up and down very fast, so fast that the surface effectively disappears and all you see is the volume projection. (They also introduce some new terminology: instead of pixels they talk about voxels, or volume pixels, which define degrees of resolution and required refresh rates.) They are early stage, with one seed round of funding so far, and are now looking for VC funding.
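A quick back-of-the-envelope sketch (all numbers below are illustrative assumptions, not Voxon’s specs) shows why the sweep has to be fed by a very fast projector: the 2D slice rate is the number of slices per sweep times the volume refresh rate, and the voxel throughput multiplies that by the slice resolution.

```python
# Back-of-the-envelope sketch of why swept-volume displays need very fast
# projectors. All numbers below are illustrative assumptions, not Voxon specs.
x_res, y_res = 256, 256        # assumed voxels per horizontal slice
z_slices = 200                 # assumed slices drawn per sweep of the surface
volume_refresh_hz = 30         # assumed full-volume refresh rate

slice_rate = z_slices * volume_refresh_hz          # 2D frames per second
voxel_rate = x_res * y_res * slice_rate            # voxels per second

print(f"{slice_rate:,} slices/s, {voxel_rate / 1e9:.2f} Gvoxel/s")
# ~6,000 slices/s and ~0.39 Gvoxel/s with these assumptions -- far beyond a
# conventional 60 Hz 2D panel, but reachable with fast binary DLP patterns.
```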
Finally, how can we put a holographic display on a watch or a phone? A team at RMIT University in Australia is working on an answer. The trick, of course, is to build a solution on something resembling modern semiconductor processes. But there’s a catch: visible light wavelengths range from ~390 to 700nm, rather larger than modern process feature sizes. That matters in this application because standard holographic methods use phase modulation to create the illusion of depth, and a structure much thinner than the wavelength can only generate a small fraction of the needed phase shift, significantly limiting the 3D appearance that can be produced. The Australian team has solved this using a topological insulator material (based on graphene oxide), at a 25nm dimension, to build an optical resonant cavity which accumulates sufficient phase shift for 3D projection. Images produced so far are low resolution but multi-color. This team also sees potential to overlay a resonant-cavity film on a liquid-crystal substrate to support varying images.
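A rough sketch of the phase budget makes the problem, and the cavity’s role, clearer. The refractive index and round-trip counts below are assumed for illustration, not RMIT’s measured values: a single pass through a 25nm film at visible wavelengths accumulates only a small fraction of the 0-to-2π phase range that holographic phase modulation needs, and multiple passes inside a resonant cavity are what close the gap.

```python
import numpy as np

# Minimal sketch of the thin-film phase budget. The refractive index and
# round-trip counts are illustrative assumptions, not RMIT's measured values.
wavelength_nm = 550            # mid-visible wavelength
thickness_nm = 25              # film thickness quoted in the article
n_film = 3.0                   # assumed refractive index of the film

# Single pass through the film accumulates phase 2*pi * n * d / wavelength
single_pass = 2 * np.pi * n_film * thickness_nm / wavelength_nm
print(f"single pass: {single_pass / np.pi:.2f} * pi")   # well short of 2*pi

# A resonant cavity lets light traverse the film many times (down and back
# per round trip), multiplying the accumulated phase until the full 0..2*pi
# modulation range needed for a hologram becomes reachable.
for round_trips in (1, 4, 8):
    total = round_trips * 2 * single_pass
    print(f"{round_trips} round trips: {total / np.pi:.2f} * pi")
```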
Perhaps yet again, technology is catching up with the movies.