This is an indicator that Apple will need ever more sophisticated processors and memory for running AI on the phone. Apple intends to keep widening the gap between itself and the competition, fueling an accelerating arms race in the semiconductor sector. That will benefit not only the semi sector but much else besides. Opportunities will multiply, and Silicon Valley will be their epicenter. The SemiWiki community has a front-row seat as this knowledge revolution plays out.
The founders of the acquired startup are experts in image recognition, so it's likely that future Apple iPhone and iPad devices will automatically detect your friends' faces and let you tag them in photos, much like what Facebook does today. It's an incremental feature for cameras, but not general-purpose AI where we talk to our iPhones and hold a conversation, like with HAL 9000 in Stanley Kubrick's 2001: A Space Odyssey.
I see no reason AI can't be split between the device and the cloud. Any comments or thoughts on this approach would be appreciated. Also, how would TSM putting Crossbar memory in the SoC play into this?
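To make the device/cloud split concrete, here is a rough sketch of one possible hand-off pattern; the function names, threshold, and stubbed results are all invented for illustration, not anything Apple or TSM has announced:

```python
# Sketch of a hybrid inference pattern: try a small on-device model first,
# and only fall back to a larger server-side model when the local result is
# not confident enough. All names and values here are hypothetical.

CONFIDENCE_THRESHOLD = 0.8  # assumed tuning parameter

def run_on_device(image_bytes):
    """Placeholder for a small on-device model; returns (label, confidence)."""
    return "cat", 0.55  # stubbed result for the sketch

def run_in_cloud(image_bytes):
    """Placeholder for a request to a heavier server-side model."""
    return "snow leopard", 0.97  # stubbed result for the sketch

def classify(image_bytes):
    label, confidence = run_on_device(image_bytes)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                        # fast path: no network round trip
    return run_in_cloud(image_bytes)[0]     # slow path: bigger model remotely

print(classify(b"...jpeg data..."))
```

The idea is simply that the phone handles the common, easy cases itself and only pays the latency and server cost when the on-device model isn't sure.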
I was at the Linley Processor Conference this week and there was a lot of talk about Convolutional Neural Networks (CNNs) for doing visual recognition. The main takeaway is that these systems learn and can be retrained with no code changes. The learning phase is compute-intensive, but running the CNN for recognition can be done on mobile platforms once the learning is done. So servers would be needed for the learning stage, but the actual application can run locally. Maybe Apple is looking at this kind of play.
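For what it's worth, here is a minimal sketch (plain NumPy, with random stand-in weights rather than anything actually trained) of why the recognition side is so much lighter than the learning side: once training is done, inference is just a fixed forward pass of multiply-accumulates over frozen weights, which is well within reach of a mobile SoC:

```python
# Minimal sketch of CNN *inference* with fixed, pre-trained weights.
# The weights below are random stand-ins; in practice they would be the
# output of the compute-heavy training phase done on servers.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Stand-in "learned" parameters; on a phone these would ship as a
# read-only blob and never change at run time.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))
fc_weights = rng.standard_normal((13 * 13, 10))  # assumes a 28x28 input image

def classify(image):
    """Forward pass only: multiply-accumulates against frozen weights."""
    features = max_pool(relu(conv2d(image, kernel)))
    scores = features.reshape(-1) @ fc_weights
    return int(np.argmax(scores))

print(classify(rng.standard_normal((28, 28))))
```

Retraining for a new task just means swapping in a different weight blob; the forward-pass code above doesn't change, which is the "no code changes" point made at the conference.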