
Expedera Proposes Stable Diffusion as Benchmark for Edge Hardware for AI

by Bernard Murphy on 02-05-2024 at 6:00 am


A recent TechSpot article suggests that Apple is moving cautiously toward releasing some kind of generative AI, possibly with iOS 18 and the A17 Pro. This is interesting not just for Apple users like me but also as broader validation of a real mobile opportunity for generative AI, which honestly had not seemed like a given, for multiple… Read More


WEBINAR: An Ideal Neural Processing Engine for Always-sensing Deployments

by Daniel Nenni on 05-29-2023 at 10:00 am


Always-sensing cameras are a relatively new method for users to interact with their smartphones, home appliances, and other consumer devices. Like always-listening audio-based Siri and Alexa, always-sensing cameras enable a seamless, more natural user experience. However, always-sensing camera subsystems require specialized… Read More


Deep thinking on compute-in-memory in AI inference

by Don Dingee on 03-09-2023 at 6:00 am

Compute-in-memory for AI inference uses an analog matrix to instantaneously multiply an incoming data word

Neural network models are advancing rapidly and becoming more complex. Application developers using these new models need faster AI inference but typically can’t afford more power, space, or cooling. Researchers have put forth various strategies in an effort to wring more performance out of AI inference architectures,… Read More
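The caption above describes compute-in-memory as an analog array that multiplies an incoming data word in place. A minimal sketch of that idea follows; the function name and structure are illustrative only, not Expedera's design. The key point is that the weight matrix stays resident in the memory array, so the multiply-accumulate happens where the weights live rather than after moving them to a separate ALU.

```python
# Conceptual sketch (not any vendor's implementation) of the compute-in-memory
# idea: weights are "programmed" into the memory array once, and each
# incoming data word is multiplied against every row in place. In analog CIM
# the per-column currents sum physically; here we model that with sum().

def cim_matvec(weights, activations):
    """Model one in-memory operation: each stored row multiplies the
    incoming data word element-wise, and the results accumulate."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

# A 2x3 weight array held in the memory cells.
W = [[1, 0, -1],
     [2, 1, 0]]
x = [3, 4, 5]            # incoming data word (activations)
print(cim_matvec(W, x))  # → [-2, 10]
```

The design point the teaser alludes to is that this scheme avoids the weight-fetch traffic that dominates power in conventional inference datapaths.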


Area-optimized AI inference for cost-sensitive applications

by Don Dingee on 02-15-2023 at 6:00 am

Expedera uses packet-centric scalability to move up and down in AI inference performance while maintaining efficiency

Often, AI inference brings to mind more complex applications hungry for more processing power. At the other end of the spectrum, applications like home appliances and doorbell cameras can offer limited AI-enabled features but must be narrowly scoped to keep costs to a minimum. New area-optimized AI inference technology from… Read More


Ultra-efficient heterogeneous SoCs for Level 5 self-driving

by Don Dingee on 09-14-2022 at 6:00 am

Ultra-efficient heterogeneous SoCs target the AI processing pipeline for Level 5 self-driving

The latest advanced driver-assistance systems (ADAS) like Mercedes’ Drive Pilot and Tesla’s FSD perform SAE Level 3 self-driving, with the driver ready to take back control if the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges unsolvable with existing technology… Read More


CEO Interview: Da Chuang of Expedera

by Daniel Nenni on 12-03-2021 at 6:00 am


Da is co-founder and CEO of Expedera. Previously, he was co-founder and COO of Memoir Systems, an optimized memory IP startup that was successfully acquired by Cisco. At Cisco, he led datacenter switch ASIC development for the Nexus 3/9K, MDS, and CSPG products. Da brings more than 25 years of ASIC experience at Cisco, Nvidia, and Abrizio.… Read More


A Packet-Based Approach for Optimal Neural Network Acceleration

by Kalar Rajendiran on 11-08-2021 at 10:00 am

Optimal Work Unit Designed for DLA

The Linley Group held its Fall Processor Conference 2021 last week. There were a number of very informative talks from various companies updating the audience on the latest research and development happening in the industry. The presentations were grouped by focus into eight sessions. The sessions… Read More