Ultra-efficient heterogeneous SoCs for Level 5 self-driving
by Don Dingee on 09-14-2022 at 6:00 am

Ultra-efficient heterogeneous SoCs target the AI processing pipeline for Level 5 self-driving

The latest advanced driver-assistance systems (ADAS) like Mercedes’ Drive Pilot and Tesla’s FSD perform SAE Level 3 self-driving, with the driver ready to take back control if the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges unsolvable with existing technology… Read More


Samtec is Fueling the AI Revolution
by Mike Gianfagna on 09-07-2022 at 6:00 am

It’s all around us. The pervasive use of AI is changing our world. From planetary analysis of weather patterns to monitoring your vital signs to assess health, it seems as though smart everything is everywhere. Much has been written about the profound impact AI is having on our lives and society. Everyone seems to agree that… Read More


Coherency in Heterogeneous Designs
by Bernard Murphy on 09-01-2022 at 6:00 am

Ncore application

Ever wonder why coherent networks are needed beyond server design? The value of cache coherence in a multi-core or many-core server is now well understood. Software developers want to write multi-threaded programs for such systems and expect well-defined behavior when accessing common memory locations. They reasonably expect… Read More
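
For a concrete flavor of that expectation, here is a minimal sketch (mine, not from the article): two C++ threads update a single shared counter, and it is the combination of hardware cache coherence and std::atomic that makes the final value well defined on a multi-core machine.

    // Minimal illustration: both threads hammer the same memory location.
    // Coherent caches plus std::atomic guarantee every increment is observed,
    // so the program always prints 2000000.
    #include <atomic>
    #include <iostream>
    #include <thread>

    int main() {
        std::atomic<long> shared_counter{0};  // one location visible to both cores

        auto worker = [&shared_counter]() {
            for (int i = 0; i < 1000000; ++i)
                shared_counter.fetch_add(1, std::memory_order_relaxed);
        };

        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
        std::cout << "counter = " << shared_counter << "\n";  // 2000000 every run
        return 0;
    }

Swap the atomic for a plain long and the final count becomes unpredictable, which is exactly the class of surprise a coherent fabric and a well-defined memory model exist to prevent.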


A clear VectorPath when AI inference models are uncertain
by Don Dingee on 08-22-2022 at 10:00 am

Achronix VectorPath Accelerator Card with Speedster 7t1500 FPGA for running AI inference models and more

The chase to add artificial intelligence (AI) into many complex applications is surfacing a new trend. There’s a sense these applications need a lot of AI inference operations, but very few architects can say precisely what those operations will do. Self-driving may be the best example, where improved AI model research and discovery… Read More


Intelligently Optimizing Constrained Random
by Bernard Murphy on 07-12-2022 at 6:00 am

Potential coverage problems

“Who guards the guardians?” It’s a question from Roman times that struck me as relevant to this topic. We use constrained random to get better coverage in simulation. But what ensures that our constrained random testbenches are not themselves wanting, perhaps over-constrained or deficient in other ways? If we are improving with a faulty… Read More
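
As a toy illustration of the over-constraint risk (my own sketch, not from the article or any particular tool), the C++ below draws random burst lengths under a constraint that is tighter than the legal range and tallies coverage bins; the long-burst bin is never hit, yet nothing in the run itself flags that.

    // Hypothetical example: the legal burst range is 1..64, but the testbench
    // has been over-constrained to 1..8. The run looks busy, coverage of the
    // "long" bin silently stays at zero.
    #include <iostream>
    #include <map>
    #include <random>
    #include <string>

    int main() {
        std::mt19937 gen(42);
        std::uniform_int_distribution<int> dist(1, 8);  // over-constrained; should be (1, 64)

        std::map<std::string, int> bins{
            {"short(1-8)", 0}, {"medium(9-32)", 0}, {"long(33-64)", 0}};

        for (int i = 0; i < 10000; ++i) {
            int burst = dist(gen);
            if (burst <= 8)       bins["short(1-8)"]++;
            else if (burst <= 32) bins["medium(9-32)"]++;
            else                  bins["long(33-64)"]++;
        }
        for (const auto& [name, hits] : bins)
            std::cout << name << ": " << hits << " hits\n";  // long bin stays at 0
        return 0;
    }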


Stalling to Uncover Timing Bugs. Innovation in Verification
by Bernard Murphy on 06-29-2022 at 6:00 am

Artificially stalling datapaths and virtual channels is a creative method to uncover corner case timing bugs. A paper from Nvidia describes a refinement to this technique. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue… Read More
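
The gist of the technique can be sketched in a few lines (a hypothetical model, not Nvidia's code): a producer feeds a small FIFO, random stalls are injected on the consumer side, and a bug that silently drops data when the FIFO is full only becomes visible once the injected stalls create back-pressure.

    // Toy stall-injection model: depth-4 FIFO between a producer and a consumer.
    // The deliberate bug is that the producer drops a word when the FIFO is full.
    // Without injected stalls the FIFO never fills and the bug hides; with random
    // consumer stalls the end-to-end count check fails.
    #include <iostream>
    #include <queue>
    #include <random>

    int run(bool inject_stalls) {
        std::mt19937 gen(1);
        std::bernoulli_distribution stall(0.5);
        std::queue<int> fifo;
        const size_t DEPTH = 4;
        int sent = 0, received = 0;

        for (int cycle = 0; cycle < 1000; ++cycle) {
            // Producer: one word per cycle; silently dropped if the FIFO is full (the bug).
            if (fifo.size() < DEPTH) fifo.push(cycle);
            ++sent;
            // Consumer: pops unless a stall is injected this cycle.
            if (!fifo.empty() && !(inject_stalls && stall(gen))) {
                fifo.pop();
                ++received;
            }
        }
        received += static_cast<int>(fifo.size());  // drain words still in flight
        return sent - received;                      // 0 if nothing was lost
    }

    int main() {
        std::cout << "lost without stalls: " << run(false) << "\n";  // 0, bug hidden
        std::cout << "lost with stalls:    " << run(true)  << "\n";  // > 0, bug exposed
        return 0;
    }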


Qualcomm’s AI play
by Anand Joshi on 06-21-2022 at 10:00 am

Intel, Nvidia, Qualcomm

Qualcomm is a household name in the mobile chip industry. The company generated $33 billion in revenue in 2021 and continues to march ahead with its innovations. However, Qualcomm doesn’t get the same visibility and mention as Nvidia and Intel in the world of AI chips. By our estimate, Qualcomm’s contribution to … Read More


A Fresh Look at HLS Value
by Bernard Murphy on 06-21-2022 at 6:00 am

I’ve written several articles on High-Level Synthesis (HLS): designing in C, C++ or SystemC, then synthesizing to RTL. There is unquestionable appeal to the concept. A higher level of abstraction enables a function to be described in fewer lines of code (LOC), which immediately offers higher productivity and implies fewer bugs… Read More
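
To make the LOC point tangible, here is a hedged example (mine, not from the article) of a small FIR filter written at the level of abstraction an HLS tool such as Catapult or Vitis HLS would consume; the equivalent hand-coded RTL, with its shift register, MAC datapath and control, typically runs to many times more lines.

    // A 4-tap FIR filter described algorithmically. An HLS tool would unroll the
    // loops, insert pipeline registers and generate the RTL from this function.
    #include <array>
    #include <cstdint>
    #include <iostream>

    constexpr int TAPS = 4;

    int32_t fir(int16_t sample, const std::array<int16_t, TAPS>& coeff) {
        static std::array<int16_t, TAPS> delay{};        // shift register of past samples
        for (int i = TAPS - 1; i > 0; --i) delay[i] = delay[i - 1];
        delay[0] = sample;
        int32_t acc = 0;                                 // multiply-accumulate across taps
        for (int i = 0; i < TAPS; ++i) acc += int32_t(delay[i]) * coeff[i];
        return acc;
    }

    int main() {
        const std::array<int16_t, TAPS> coeff{1, 2, 2, 1};
        const std::array<int16_t, 5> impulse{1, 0, 0, 0, 0};
        for (int16_t s : impulse)
            std::cout << fir(s, coeff) << " ";           // prints 1 2 2 1 0
        std::cout << "\n";
        return 0;
    }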


How to Cut Costs of Conversational AI by up to 90%
by Dave Bursky on 06-20-2022 at 10:00 am

20 Tbps 2D NoC

The burgeoning use of conversational artificial intelligence (CAI) in consumer and business applications places a heavy computational burden on both front-end and back-end systems that provide the natural language processing (NLP). NLP systems rely on deep learning (a subset of machine learning) to automate speech recognition,… Read More


HLS in a Stanford Edge ML Accelerator Design
by Bernard Murphy on 06-16-2022 at 6:00 am

I wrote recently about Siemens EDA’s philosophy on designing quality in from the outset, rather than trying to verify it in. The first step is moving up the level of abstraction for design. They mentioned the advantages of HLS in this respect and I refined that to “for DSP-centric applications”. A Stanford group recently presented… Read More