The most advanced driver-assistance systems (ADAS), such as Mercedes’ Drive Pilot and Tesla’s FSD, are approaching SAE Level 3 self-driving, where the driver must be ready to take back control whenever the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges that existing technology cannot solve… Read More
Samtec is Fueling the AI Revolution
It’s all around us. The pervasive use of AI is changing our world. From planetary-scale analysis of weather patterns to monitoring your vital signs to assess your health, it seems as though smart everything is everywhere. Much has been written about the profound impact AI is having on our lives and society. Everyone seems to agree that… Read More
Coherency in Heterogeneous Designs
Ever wonder why coherent networks are needed beyond server design? The value of cache coherence in a multi-core or many-core server is now well understood. Software developers want to write multi-threaded programs for such systems and expect well-defined behavior when accessing common memory locations. They reasonably expect… Read More
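To make that expectation concrete, here is a minimal C++ sketch (not from the article; the counter, thread count, and loop bound are illustrative) of the kind of shared-memory code developers take for granted on a cache-coherent machine:

```cpp
// Minimal sketch: two threads increment one shared counter. On a coherent
// multi-core system, every increment is observed exactly once by all cores.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    std::atomic<long> shared_counter{0};

    auto worker = [&shared_counter] {
        for (int i = 0; i < 1000000; ++i) {
            // On a coherent memory system this read-modify-write is seen
            // consistently by every core; without hardware coherence, a core
            // could keep operating on a stale cached copy of the line.
            shared_counter.fetch_add(1, std::memory_order_relaxed);
        }
    };

    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();

    // Prints 2000000 on any coherent multi-core machine.
    std::cout << shared_counter.load() << "\n";
    return 0;
}
```

Extending that same well-defined behavior to accelerators and other non-CPU agents in a heterogeneous design is where coherent networks come in.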
A clear VectorPath when AI inference models are uncertain
The race to add artificial intelligence (AI) to many complex applications is surfacing a new trend. There’s a sense these applications need a lot of AI inference operations, but very few architects can say precisely what those operations will do. Self-driving may be the best example, where improved AI model research and discovery… Read More
Intelligently Optimizing Constrained Random
“Who guards the guardians?” It’s a question from Roman times that struck me as relevant to this topic. We use constrained random to get better coverage in simulation. But what ensures that our constrained-random testbenches are not wanting, perhaps over-constrained or deficient in other ways? If we are improving with a faulty… Read More
Stalling to Uncover Timing Bugs. Innovation in Verification
Artificially stalling datapaths and virtual channels is a creative method to uncover corner case timing bugs. A paper from Nvidia describes a refinement to this technique. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue… Read More
Qualcomm’s AI play
Qualcomm is a familiar name in the mobile chip industry. The company generated $33 billion in revenue in 2021 and continues to march ahead with its innovations. However, Qualcomm doesn’t get the same visibility and mention as Nvidia and Intel in the world of AI chips. By our estimate, Qualcomm’s contribution to … Read More
A Fresh Look at HLS Value
I’ve written several articles on High-Level Synthesis (HLS): designing in C, C++ or SystemC, then synthesizing to RTL. There is unquestionable appeal to the concept. A higher level of abstraction enables a function to be described in fewer lines of code (LOC), which immediately offers higher productivity and implies fewer bugs… Read More
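To illustrate that abstraction gap, here is a hypothetical sketch of the style of C++ an HLS flow consumes: a small FIR filter written as a plain function. The names (fir4, TAPS) and coefficients are illustrative, not taken from any particular article or tool.

```cpp
// A 4-tap FIR filter in untimed C++. One call consumes one input sample and
// produces one filtered output. An HLS tool would unroll the loops, map the
// multiplies to DSP blocks, and turn shift_reg into a register chain in RTL.
#include <array>
#include <cstdint>

constexpr int TAPS = 4;

int32_t fir4(int32_t sample, std::array<int32_t, TAPS>& shift_reg) {
    // Illustrative low-pass-ish coefficients.
    static constexpr std::array<int32_t, TAPS> coeff = {3, 7, 7, 3};

    // Shift the new sample into the delay line.
    for (int i = TAPS - 1; i > 0; --i) {
        shift_reg[i] = shift_reg[i - 1];
    }
    shift_reg[0] = sample;

    // Multiply-accumulate across the taps.
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i) {
        acc += coeff[i] * shift_reg[i];
    }
    return acc;
}
```

A handful of lines like these replace pages of hand-written RTL for the same datapath, which is the productivity argument in a nutshell.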
How to Cut Costs of Conversational AI by up to 90%
The burgeoning use of conversational artificial intelligence (CAI) in consumer and business applications places a heavy computational burden on both front-end and back-end systems that provide the natural language processing (NLP). NLP systems rely on deep learning (a subset of machine learning) to automate speech recognition,… Read More
HLS in a Stanford Edge ML Accelerator Design
I wrote recently about Siemens EDA’s philosophy on designing quality in from the outset, rather than trying to verify it in. The first step is moving up the level of abstraction for design. They mentioned the advantages of HLS in this respect and I refined that to “for DSP-centric applications”. A Stanford group recently presented… Read More
Flynn Was Right: How a 2003 Warning Foretold Today’s Architectural Pivot