This is a multi-lens camera system that lowers cost and increases accuracy in self-driving vehicles and robotics. The system should also bring dramatic improvements to standard photography, and could drive demand for a number of semis, including whatever processing it uses, depending on function. It is just one of many approaches to the challenges of machine vision, and an ideal example of the semi/nanotech sector's ability to increase functionality while lowering cost at a dramatic rate. I expect the cost of even this system to drop significantly over the coming years. This technology will definitely become part of a platform used in everything from transportation to robotic surgery; multiple cameras of various types, all linked to a processing unit, hold tremendous promise in numerous areas.
Any thoughts or comments on this are solicited and welcome, for I feel this could literally be a groundbreaking approach, combining existing technologies in a way that ends up having a very wide range of uses.
LIDAR can see through fog, but nothing in this article suggests that the technology from startup Light can match that, so no, it wouldn't really replace LIDAR in automobiles.
Seems a stretch. Lidar covers a range out to 100 m; the Light camera is not going to give you range and velocity information anything like that far away. And others have used multiple cameras at wider separation to get that effect. Cars will use sensor fusion across camera, lidar, sonar, and the vehicle network to create multiple views, and will drive differently from humans.
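The point about camera separation can be made concrete with the standard stereo formula, depth z = f·B/d (focal length times baseline over disparity). This is my own back-of-envelope sketch, not from the article; the focal length and sub-pixel matching error are assumed values chosen only to illustrate why a small-baseline camera array struggles to match lidar at 100 m:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from disparity for a rectified stereo pair: z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def range_error_m(focal_px: float, baseline_m: float, depth_m: float,
                  disparity_err_px: float = 0.25) -> float:
    """Depth uncertainty for a given sub-pixel matching error.
    Error grows with the *square* of distance: dz = z^2 * dd / (f * B)."""
    return (depth_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Phone-scale baseline (~10 cm) vs. car-width baseline (~1.5 m),
# assuming a 2000-pixel focal length and 0.25 px matching error:
for b in (0.10, 1.5):
    print(f"baseline {b} m -> range error at 100 m: {range_error_m(2000, b, 100.0):.1f} m")
```

With these assumed numbers the 10 cm baseline gives roughly ±12.5 m of range uncertainty at 100 m, versus under a meter for a car-width baseline, which is why wider separation (or lidar) matters at highway distances.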
I feel a combination of cameras, including thermal imaging/night vision, together with limited lidar augmentation might be the best mix and get around the limitations of both. Also, what about an infrared or ultraviolet laser to augment the cameras' vision in fog and darkness? Any comments or thoughts on this would be appreciated.
Daniel, I think you're right. I think the founders know it too. From the article:
"By our second meeting in Tokyo, he said, 'Where I believe this has the most value is in autonomous driving,' " Grannan said. "We were like, 'Oh, wow.’'Applying it to automotive hadn't been a thought of ours."
True thermal vision is much harder than illuminated IR. Array-sensor LIDAR, where every pixel measures a distance, also yields an image. Its efficiency for image production in the IR, even out to 1300 nm, is far better than true thermal, which needs to be sensitive out to tens of microns and is generally based on nanoscale bolometers or similar, since the photon energy will not displace electrons. If you pulse the LED or laser generating the illumination, you can get powerful flashes and can eliminate background: if you pulse at a 1% duty cycle, you can ignore 99% of daytime background glare. Or you can use both passive illumination and the flash to differentiate different kinds of object in the daytime. But in the daytime you also have regular color cameras (which, unless you are offroading without headlights, also work at night).
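The duty-cycle arithmetic above can be sketched in a few lines. This is my own illustration of the gating idea, with made-up photon counts, not numbers from the thread: if the sensor only integrates during the 1% duty-cycle flash window, it still collects all of the (synchronized) flash signal but only 1% of the steady ambient background.

```python
def gated_snr_gain(duty_cycle: float) -> float:
    """Background suppression factor from gating exposure to the flash window."""
    return 1.0 / duty_cycle

signal = 100.0       # photons from the pulsed illuminator (all arrive inside the gate)
background = 1000.0  # steady ambient photons over a full frame time
duty = 0.01          # 1% duty cycle

gated_background = background * duty  # only 1% of the ambient light leaks into the gate
print(f"background rejected: {1 - duty:.0%}")
print(f"signal/background: ungated {signal / background:.2f}, "
      f"gated {signal / gated_background:.1f}")
```

With these assumed counts, gating turns a signal-to-background ratio of 0.10 into 10, a 100x improvement, which is the "ignore 99% of daytime glare" point in concrete terms.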
In general there are lots of signals to work with. The mix of cost and coverage of various scenarios will take a while to settle.
Tanj, thanks for the info; it will be interesting to see which combination of technologies wins. This should make for some very interesting SoCs involving advanced packaging and various MEMS. TSMC will be the obvious choice to make these, for they have the broadest skill set and technology of any fab. This will open up a whole new ecosystem applied in numerous areas with an array/combination of sensing technologies. Exciting times!
TSMC are not a big name in MEMS, are they? I thought other fabs specialized in that, and even some of the sensor work is done by specialists. I would expect IR sensors to need some exotic chemistry. The sensor-fusion part is a fairly conventional chip (most of the heavy lifting is done by algorithms a DSP is good at), and if it all needs to be integrated into an MCM, there are companies outside TSMC who provide that service. If there is one winner, it is likely to look more like a Broadcom or a Marvell, buying up successful parts to assemble a complete set.