
Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier

by Daniel Nenni on 06-13-2025 at 10:00 am

Dan is joined by Moshe Sheier, Ceva’s vice president of marketing. Moshe brings with him more than 20 years of experience in the semiconductor IP and chip industries in both development and managerial roles. Prior to this position, Mr. Sheier was the director of strategic marketing at Ceva.

Dan explores the history of Ceva with Moshe and how that history has created a strong force in the industry. Moshe describes the many markets and application areas that Ceva supports for edge computing, including DSP, audio, radar, vision and motion in the sensing area and inference with its scalable NPU architecture. This flexible and scalable architecture allows Ceva to support many customers in the implementation of efficient workloads from low power to safety critical.

Moshe describes the experience Ceva has developed over 10 years with its scalable NPU development, delivering consistent performance and efficiency improvements while also ensuring its hardware and software are future-proof. Its self-contained hardware architecture and broadly adopted software allow Ceva to deliver end-to-end solutions across many markets and customers.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Krishna Anne of Agile Analog

by Daniel Nenni on 06-13-2025 at 6:00 am


Krishna has over 30 years of expertise in the semiconductor industry, holding senior roles at Rambus, AMD and Broadcom. As a serial entrepreneur, he co-founded SCI Semi Ltd and previously established DataTrails and Secure Thingz.

Tell us a bit about your career background. What are you most proud of?

Over the course of my 30-year career in the global semiconductor industry, I have been fortunate to work across a broad range of roles. My first job was as a digital circuit designer, then I gradually transitioned into marketing, business development and corporate management positions with major semiconductor players such as Rambus, AMD and Broadcom. I am proud of the fact that I have been able to draw on my extensive experience, network and industry knowledge to help establish cutting-edge companies, including SCI Semiconductor, DataTrails and Secure Thingz.

Why did you decide to join Agile Analog?

I was invited to advise Agile Analog on its product roadmap and Go-To-Market strategy. After spending time with the team and technology I was highly impressed by the capabilities of the Composa tool and the quality of IP generated through the automation platform. The strength of customer engagements and recent project deliveries further validated the company’s great potential. I am passionate about working with entrepreneurial teams, and Agile Analog represents a unique opportunity to leverage my experience in engineering, GTM and P&L management to help drive revenue growth and scale the business.

What key problem is Agile Analog solving?

The shortage of skilled analog engineers in the semiconductor industry is a significant problem. Composa addresses this gap by enabling the rapid, automated design and redesign of mixed-signal IP blocks – whether adapting to changing specifications or migrating across process nodes. This is a solution the industry has long been waiting for – helping to radically reduce the complexity, time and costs associated with traditional analog design.

What are your main focus areas and challenges over the next six months?

Agile Analog offers an expanding portfolio of IP across data conversion, power management, IC monitoring, security and always-on IP. Over the next six months, the focus will be on building strategic industry partnerships and delivering integrated subsystem-level IP solutions. There is clear market demand for our customizable, process-agnostic products. As a lean organization, our key challenge is prioritizing our roadmap and aligning our efforts with the industry verticals that present the greatest potential for scalable growth.

What markets and applications are the company’s strongest?

Agile Analog has secured business wins across a wide range of verticals and applications, including consumer, enterprise data centers, security, space and industrial sectors. Analog security IP and anti-tamper IP are a big focus for us at the moment. Security has become a critical consideration for every SoC being developed today and our security IP products are ideally suited to address this.

What new product developments are your team working on?

The Agile Analog team is working to expand our portfolio of security IP and anti-tamper IP beyond our existing voltage glitch detector and clock attack monitor IPs, in order to be able to detect a wider range of physical attacks. Another important area for us is developing our range of data conversion solutions to include higher resolution and higher sample rate solutions. We are seeing great interest in our existing products across process nodes and we are just about to start working on bringing our ADC to the latest TSMC node for one of our strategic customers.

What is the long-term vision for Agile Analog?

Agile Analog’s unique technology, Composa, is our key differentiator from other analog IP companies. With the power of automation, we can quickly deliver IP tailored to precise specifications and process nodes. Our vision is to become the leading provider of high-value analog subsystem IP and licensable automation tools that enable accelerated IP development.

Which industry events are Agile Analog attending this year and why?

2025 has already been a busy year for Agile Analog, with the team taking part in many of the major foundry events held by Intel, TSMC and Samsung. In late June we will be at DAC and then in the second half of the year we will be going to the GlobalFoundries events, as well as more TSMC events. It’s pretty full-on!

Contact Agile Analog

Also Read:

2025 Outlook with Christelle Faucon of Agile Analog

Overcoming obstacles with mixed-signal and analog design integration

Agile Analog Partners with sureCore for Quantum Computing


Caspia Technologies at the 2025 Design Automation Conference #62DAC

by Mike Gianfagna on 06-12-2025 at 10:00 am


Security will be an important topic at DAC this year. The hardware root of trust is the foundation of all security for complex systems implementing AI workloads. Thanks to new and sophisticated attack techniques, the hardware root of trust is now vulnerable and must be protected. But adding deep security verification to existing design flows is challenging. Security experts are needed, and there just aren’t enough people with the requisite skills to make an impact.

Caspia is changing this by combining massive data about weaknesses and vulnerabilities with unique GenAI technology to deliver expert security verification to all teams and all design flows. You can learn more about Caspia’s ground-breaking approach to security verification at several events during DAC.

Sunday Workshop

Two of Caspia’s founders, Dr. Mark Tehranipoor and Dr. Farimah Farahmand, along with Dr. Hadi Mardani Kamali will host the Third AI/CAD for Hardware Security Workshop at DAC on Sunday, June 22, from 9:00am – 5:00pm in room 3001 on level 3 of Moscone West.

Building on the success of the first and second CAD for Security Workshops, this third workshop aims to embrace the transformative intersection of AI, CAD, and hardware security. The goal is to drive innovation at the intersection of AI-driven solutions and hardware design security.

Tuesday SKYTalk

Dr. Mark Tehranipoor, co-founder of Caspia Technologies, will present an inspirational SKYTalk in the DAC Pavilion (second floor of Moscone West) on Tuesday, June 24 from 1:00 – 1:45pm entitled Opening a New Innovation Frontier with Large Language Models for SoC Security.

As complex SoCs become prevalent in virtually all systems, these devices also present a primary attack surface. The risks of cyberattacks are real, and AI is making them more sophisticated. As we also deploy AI into the SoC design process, it is imperative that secure design practices are incorporated as well.

Existing security solutions are inadequate to provide effective verification of complex SoC designs due to their limitations in scalability, comprehensiveness, and adaptability. Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding, advanced reasoning, and program synthesis tasks.

Recognizing this opportunity, Dr. Tehranipoor proposes leveraging the emergent capabilities of Generative Pre-trained Transformers (GPTs) to address the existing gaps in SoC security, aiming for a more efficient, scalable, and adaptable methodology. In this presentation he will offer an in-depth analysis of existing work, showcasing achievements, prospects, and challenges of employing LLMs in SoC security design and verification tasks.

Meet with Caspia Executives and Technologists

Caspia will have senior architects and executives on hand to provide demos and discuss Caspia’s GenAI security capabilities at the InterContinental Hotel on June 23 and 24. On Monday, June 23 at 4:30 you can also meet with the company’s Chairman of the Board, Dr. Wally Rhines during a special happy hour event.

If you’re interested in reserving a spot for a discussion/demo or to attend the happy hour, drop a note to Caroline McCarthy at cmccarthy@caspiatechnologies.com. Please mention you saw the invitation on SemiWiki.

To Learn More 

You can learn more about Caspia’s ground-breaking security technology in this interview with the company’s CEO on SemiWiki. You can also visit the Caspia website here.  See you at DAC.

DAC registration is open.

Also Read:

CEO Interview with Richard Hegberg of Caspia Technologies

Podcast EP245: A Conversation with Dr. Wally Rhines about Hardware Security and Caspia Technologies


Defacto at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-12-2025 at 8:00 am


Defacto has been a leading provider of SoC integration tools for large-scale designs for years. Most major semiconductor companies already use their solutions, and several customers will be presenting how they leverage the Defacto solution (SoC Compiler) at the upcoming DAC conference.

This year, Defacto is announcing a major release of their tool: SoC Compiler 11.0. We just blogged about it—check it out here.

This new version reaches a new level of automation. The tool can now interoperate with IP configuration tools and auto-connect IPs to generate a complete SoC ready for both simulation and logic synthesis in minutes. Format complexity isn’t an issue for Defacto’s SoC Compiler, since it can digest and generate RTL (SystemVerilog, Verilog & VHDL), IP-XACT, UPF, and SDC formats.

With this major release, IP-XACT users will find compelling reasons to migrate to Defacto. The Defacto tool has reached a production level of maturity while covering all essential IP-XACT mechanisms including packaging, querying, linting, checking, assembling, editing using TGI, and reporting (memory maps, registers, etc.) – all while maintaining a strong link with RTL.

Another key differentiator of the Defacto tool is physical awareness of the generated RTL, pre-synthesis. Specifically, Defacto software can capture physical requirements to improve PPA by providing design refactoring and restructuring capabilities.

With this major release, Defacto demonstrates that very complex design hierarchy manipulations can be performed incredibly quickly.

At this year’s DAC, they’ll be presenting a poster jointly with Arm on Monday, June 23, where Arm will explain how they restructured an 18,000-instance design with millions of connections and multiple hierarchy levels in less than an hour.

They’ve even been selected to participate in the DAC Poster Gladiator competition (also on Monday, June 23).

The presentation title is: A New Methodology to Generate Multiple SoC Configurations Quickly

Last but not least, Defacto is unveiling AI-based capabilities at this DAC. They’ve built an AI assistant to help with tool usage, and more importantly, assist Defacto users to easily generate Tcl and Python scripts. They’ll be providing live demonstrations at their booth.

You should definitely stop by their booth on the first floor (booth #1527). They have delicious French chocolates to sample.

How do you find them at DAC? They typically have a giant ball floating above their booth—you can’t miss it on the first floor.

Make sure to contact them here to schedule a meeting at their booth. Hope to see you there!

DAC registration is open.

Also Read:

SoC Front-end Build and Assembly

Video EP8: How Defacto Technologies Helps Customers Build Complex SoC Designs

2025 Outlook with Dr. Chouki Aktouf of Defacto


LUBIS EDA at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-12-2025 at 6:00 am


At the Design Automation Conference (DAC) 2025 in San Francisco, LUBIS EDA returns as an exhibitor to showcase its latest innovations in formal verification automation, helping semiconductor teams move faster and with more confidence through the most complex verification challenges.

LUBIS EDA is a fast-growing EDA startup based in Germany, dedicated to unlocking the full potential of Formal Verification (FV). We work with leading semiconductor companies worldwide — from semiconductor heavyweights and AI chip startups to established IP vendors — helping them prove correctness for their most mission-critical IP.

Our team consists of experts in formal methods, tool automation and leadership, and we’re known for combining high-performance consulting with user-friendly EDA software. If you’ve ever hit a wall in your verification project and wished FV was easier, faster, or more scalable — come talk to us.

Turnkey formal sign-off services

Getting formal verification right is hard — and scaling it across multiple projects is even harder. That’s why LUBIS EDA offers Turnkey Formal Sign-Off Services for companies who want to accelerate time-to-coverage closure without building a full in-house FV team.

We provide a complete formal sign-off flow tailored to your RTL and verification goals: from property definition to coverage analysis and documentation. Our engineers bring battle-tested Assertion IP and domain expertise across AI, data center, processor and other designs.

Customers using our turnkey services report:

  • Effort reduction in reaching formal closure and coverage goals
  • Rapid ramp-up for critical blocks (caches, interconnects, decoders, etc.)
  • Confidence in verification completeness with formal checklists and traceable requirements

At DAC, you’ll get a firsthand look at how our process works and how it helps teams avoid costly late-stage bugs.

ReCheck – Regression management tailored for Formal Verification

Formal Verification teams often struggle to keep pace with design changes and limited regression results. They’re usually tasked with integrating their assertions into existing UVM regression runs. ReCheck is our response: a smart regression engine built specifically for formal, supporting scenario-based debugging, incremental runs, and coverage tracking.

With ReCheck, teams can cut the risk of rework late in the project, while staying lean on manual efforts. We’ll be demoing ReCheck in live sessions at our booth and sharing lessons learned from deploying it in production.

AppBuilder – Generation framework for Assertion IP

Formal doesn’t scale without automation. AppBuilder is our development framework that helps engineering teams build Assertion IP (AIP) systematically, bind their AIP, and integrate it with existing EDA tools and CI pipelines. Whether you’re a startup building your first IP or a seasoned team formalizing an SoC, AppBuilder makes AIP development predictable and repeatable.

Booth #2620

  • See live demos of ReCheck and AppBuilder
  • Learn from real customer case studies in AI and data center chips
  • Discover how to accelerate your verification plans
  • Meet our Founders and formal engineers

Whether you’re just getting started with formal or trying to bring structure to your advanced flows, come find us.

Let’s Connect

We’re excited to meet you at DAC 2025!

Contact LUBIS EDA

DAC registration is open.

Also Read:

CEO Interview: Tobias Ludwig of LUBIS EDA

Automating Formal Verification


Legacy IP Providers Struggle to Solve the NPU Dilemma

by Admin on 06-11-2025 at 10:00 am


At Quadric we do a lot of first-time introductory visits with prospective new customers.  As a rapidly expanding processor IP licensing company that is starting to get noticed (even winning IP Product of the Year!) such meetings are part of the territory.  Which means we hear a lot of similar sounding questions from appropriately skeptical listeners who hear our story for the very first time.  The question most asked in those meetings sounds something like this:

“Chimera GPNPU sounds like the kind of breakthrough I’ve been looking for. But tell me, why is Quadric the only company building a completely new processor architecture for AI inference? It seems such an obvious benefit to tightly integrate the matrix compute with general purpose compute, instead of welding together two different engines across a bus and then partitioning the algorithms. Why don’t some of the bigger, more established IP vendors do something similar?”

The answer I always give: “They can’t, because they are trapped by their own legacies of success!”

A Dozen Different Solutions That All Look Remarkably Alike

A long list of competitors in the “AI Accelerator” or “NPU IP” licensing market arrived at providing NPU solutions from a wide variety of starting points. Five or six years ago, CPU IP providers jumped into the NPU accelerator game to try to keep their CPUs relevant with a message of “use our trusted CPU and offload those pesky, compute hungry matrix operations to an accelerator engine.”  DSP IP providers did the same.  As did configurable processor IP vendors.  Even GPU IP licensing companies did the same thing.  The playbook for those companies was remarkably similar: (1) tweak the legacy offering instruction set a wee bit to boost AI performance slightly, and (2) offer a matrix accelerator to handle the most common one or two dozen graph operators found in the ML benchmarks of the day: Resnet, Mobilenet, VGG.

The result was a partitioned AI “subsystem” that looked remarkably similar across all the 10 or 12 leading IP company offerings: legacy core plus hardwired accelerator.

The fatal flaw in these architectures: always needing to partition the algorithm to run on two engines.  As long as the number of “cuts” of the algorithm remained very small, these architectures worked very well for a few years.  For a Resnet benchmark, for instance, usually only one partition is required at the very end of the inference. Resnet can run very efficiently on this legacy architecture.  But along came transformers with a very different and wider set of required graph operators, and suddenly the “accelerator” doesn’t accelerate much, if any, of the new models, and overall performance becomes unusable.  NPU accelerator offerings needed to change. Customers with silicon had to eat the cost – a very expensive cost – of a silicon respin.

An Easy First Step Becomes a Long-Term Prison

Today these IP licensing companies find themselves trapped.  Trapped by their decisions five years ago to take an “easy” path towards short-term solutions. The reason all of the legacy IP companies took this same path has as much to do with human nature and corporate politics as it does with technical requirements.

When what was then generally referred to as “machine learning” workloads first burst onto the scene in vision processing tasks less than a decade ago, the legacy processor vendors were confronted with customers asking for flexible solutions (processors) that could run these new, fast-changing algorithms.  Caught flat-footed with processors (CPU, DSP, GPU) ill-suited to these new tasks, the quickest short-term technical fix was the external matrix accelerator.  The option of building a longer-term technical solution – a purpose-built programmable NPU capable of handling all 2000+ graph operators found in the popular training frameworks – would take far longer to deliver and incur much more investment and technical risk.

The Not So Hidden Political Risk

But let us not ignore the human nature side of the equation faced by these legacy processor IP companies.  A legacy processor company choosing a strategy of building a completely new architecture – including new toolchains/compilers – would have to explicitly declare, both internally and externally, that the legacy product (CPU, DSP, or GPU) was simply not as relevant to the modern world of AI as it once was.  The breadwinner of the family that currently paid all the bills would need to pay the salaries of the new team of compiler engineers working on the new architecture that effectively competed against the legacy star IP.  (It is a variation on the Innovator’s Dilemma problem.) And customers would have to adjust to new, mixed messages that declare “the previously universally brilliant IP core is actually only good for a subset of things – but you’re not getting a royalty discount.”

All of the legacy companies chose the same path: bolt a matrix accelerator onto the cash cow processor and declare that the legacy core still reigns supreme.  Three years later, staring at the reality of transformers, they declared the first-generation accelerator obsolete and invented a second one that repeated the same shortcomings of the first.  And now, faced with the struggles of the second-iteration hardwired accelerator having also become obsolete in the face of continuing evolution of operators (self-attention, multiheaded self-attention, masked self-attention, and more new ones daily), they either have to double down again and convince internal and external stakeholders that this third time the fixed-function accelerator will solve all problems forever; or admit that they need to break out of the confining walls they’ve built for themselves and instead build a truly programmable, purpose-built AI processor.

But the SoC Architect Doesn’t Have to Wait for the Legacy IP Company

The legacy companies might struggle to decide to try something new.  But the SoC architect building a new SoC doesn’t have to wait for the legacy supplier to pivot.  A truly programmable, high-performance AI solution already exists today.

The Chimera GPNPU from Quadric runs all AI/ML graph structures.  The revolutionary Chimera GPNPU processor integrates fully programmable 32-bit ALUs with systolic-array style matrix engines in a fine-grained architecture.  Up to 1024 ALUs in a single core, with only one instruction fetch and one AXI data port.  That’s over 32,000 bits of parallel, fully-programmable performance.  The flexibility of a processor with the efficiency of a matrix accelerator.  Scalable up to 864 TOPS for bleeding-edge applications, Chimera GPNPUs have matched and balanced compute throughput for both MAC and ALU operations, so no matter what type of network you choose to run, they all run fast, low-power and highly parallel.  When a new AI breakthrough comes along in five years, the Chimera processor of today will run it – no hardware changes, just application SW code.

Contact Quadric

Also Read:

Recent AI Advances Underline Need to Futureproof Automotive AI

2025 Outlook with Veerbhan Kheterpal of Quadric

Tier1 Eye on Expanding Role in Automotive AI


A Novel Approach to Future Proofing AI Hardware

by Bernard Murphy on 06-11-2025 at 6:00 am


There is a built-in challenge for edge AI intended for long time-in-service markets. Automotive applications are the obvious example, while aerospace and perhaps medical usage may impose similar demands. Support for the advanced AI methods we now expect – transformers, physical and agentic AI – is not feasible without dedicated hardware acceleration. According to one report, over 50% of worldwide dollars spent on edge AI by 2033 will go to edge hardware rather than software, cloud or services. The challenge is that AI technologies are still evolving rapidly, yet hardware capability once built is baked in; it cannot be upgraded on the fly like software. However, AI must be upgradable through the 15-year life of a typical car to continue to align with regulatory changes and not drift too far from user expectations for safety/security, economy, and resale value. Reconciling these conflicting needs is placing a major emphasis on future-proofing AI hardware – anticipating and supporting (to the greatest extent possible) the AI advances that we can imagine. Cadence has a novel answer to that need: a co-processor that supports an edge NPU and handles offload for (non-NPU) AI tasks, both those that are known and those that are not yet known.

How can you future-proof hardware?

CPUs can compute anything that is computable, making them perfect for general-purpose computing but inefficient for matrix or vector arithmetic as measured by performance and power consumption. NPUs excel at solving the large linear arithmetic problems common in transformers in attention and multi-layer perceptron operations, but not the non-linear operations required in AI for functions like activation and normalization, or custom vector or matrix operations not built into the NPU instruction set architecture (ISA). These operations are generally handled through specialized hardware.

Unfortunately, AI model evolution isn’t limited to the core functions represented in, say, the ONNX operator set. New operations continue to appear – sometimes for fusion, sometimes for complex algorithms, perhaps for agentic flows – operations which can’t be mapped easily onto the NPU. A workaround is to offload such operations to a master CPU as needed, but there can be a big performance (and power) overhead where repeated offloads are necessary, potentially very damaging to ultimate product performance.

Much better is a programmable embedded co-processor which can be tightly coupled to the NPU. Programmable so it provides full flexibility to add new operations, and tightly coupled to minimize overhead in offloading between the NPU and the co-processor. That is the objective of the Cadence NeuroEdge AI Co-Processor – to sit right next to one or more NPUs, delivering low latency non-NPU computation as a complement to NPU computation.

The Tensilica NeuroEdge 130 AI Co-Processor

This co-processor approach is already proven in leading chips from companies like NVIDIA, Qualcomm, Google and Intel. What makes the Cadence solution stand out is that in embedded applications you can pair it with your favorite NPU (later with an array of NPUs). You can build as tight an NPU subsystem as you’d like between the NeuroEdge Co-Processor and your NPU of choice, perhaps with shared memory, to deliver the differentiation you need.

Also of note, this IP builds on the mature Cadence Tensilica Vision DSP architecture and software stack, so it is available today. What Cadence has done here is interesting. They trimmed back the Vision DSP architecture to focus just on AI support, reminiscent of how NVIDIA transitioned their general-purpose GPU architecture to AI platforms. This reduction results in a co-processor with 30% smaller area for a similar configuration and 20% lower power consumption for equivalent workloads, while maintaining the same performance. Designers can also use the proven NeuroWeave software stack for rapid development. Connection to your NPU of choice is through the NeuroConnect API across AXI (or a high bandwidth direct interface). Cadence adds that this core is ISO26262 FUSA-ready for automotive applications.

One more interesting point: A common configuration would have NeuroEdge acting as co-processor supporting an NPU as the primary interface. Alternatively, NeuroEdge can act as the primary interface controlling access to the NPU. I imagine this second approach might be common in agentic architectures.

My Takeaways

This is a clever idea: a ready-made, fully programmable IP to accelerate non-NPU functions you haven’t yet anticipated, while working in close partnership with an NPU. You could try building the same thing yourself, but then you have to figure out how to tie a vector accelerator (DSP) and a CPU (with floating point support) tightly around your NPU, then run exhaustive testing on your new architecture, and also rework your software stack, etc. Or you could build around a proven and complete complement to your NPU.

Can this anticipate every possible evolution of AI models over the next 15+ years? No one can foresee every twist and turn in edge AI evolution, but a tightly coupled co-processor with fully programmable support for custom vector and scalar operations seems like a pretty good start.

You can learn more about the Cadence Tensilica NeuroEdge 130 AI Co-Processor HERE.

Also Read:

Cadence at the 2025 Design Automation Conference

Anirudh Fireside Chats with Jensen and Lip-Bu at CadenceLIVE 2025

Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000


Mixel at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-10-2025 at 10:00 am


Mixel, Inc., a leading provider of mixed-signal interface IP, will exhibit at booth #2616 at the Design Automation Conference (DAC) 2025 on June 23-25. The company will demonstrate its latest customer demos featuring Mixel’s MIPI PHY IP and LVDS IP. Mixel’s customers include many of the world’s largest semiconductor and system companies targeting mobile and mobile-adjacent applications such as automotive, consumer/industrial IoT, VR/AR/MR, wearables, healthcare, and others.

At DAC, Mixel will be showcasing many demos from MIPI customers who have integrated Mixel’s IP into their end product or SoC. One example is NXP, which leverages Mixel’s IP in multiple different applications. At DAC, Mixel will be showcasing an IoT application in the i.MX7ULP Applications Processor featuring Mixel’s MIPI D-PHY DSI TX IP. This SoC is integrated into the Garmin Edge 830 Cycling GPS. Another MIPI D-PHY customer is Lattice, whose Crosslink-NX low-power FPGA features Mixel’s MIPI D-PHY Universal IP. Lattice FPGAs were announced to enable advanced driver experiences on Mazda Motor Corporation’s CX-60 and CX-90 models.

Mixel’s MIPI IP can be configured as a transmitter (TX), receiver (RX), or universal configuration, which is the superset supporting both TX and RX in the same hard macro. Mixel also offers unique MIPI implementations called TX+ and RX+. These proprietary MIPI IP configurations (patented in the case of RX+) allow for full-speed production testing without requiring a full MIPI Universal configuration, resulting in a substantial reduction in area and in standby power.

Mixel recently announced the availability of its latest MIPI C-PHY/D-PHY combo IP. This supports MIPI C-PHY v2.1 at 8.0Gsps per trio and MIPI D-PHY v2.5 at 6.5Gbps per lane. With 3 trios and 4 lanes, the IP supports over 54 Gbps aggregate bandwidth in C-PHY mode and 26 Gbps in D-PHY mode. At DAC, Mixel will have multiple customer demos featuring previous generations of its MIPI C-PHY/D-PHY combo IP. The Hercules Microelectronics HME-H3 low-power FPGA leverages Mixel’s C-PHY/D-PHY combo RX IP and is in mass production. Hercules Microelectronics’ customers include Blackview, which is in production with the HME-H3 FPGA in the Blackview Hero 10 foldable smartphone. Synaptics has also integrated Mixel’s MIPI C-PHY/D-PHY combo TX IP in Synaptics’s DisplayPort to dual MIPI VR bridge IC, the VXR7200.

Mixel has focused on providing automotive-grade IP for many years. To further support its automotive customers, Mixel achieved ISO 9001 Quality Management System certification and ISO 26262 certification for automotive functional safety. The Mixel MIPI product development process was certified up to ASIL-D, with multiple products certified up to ASIL-B. Mixel will be showcasing one automotive customer demo from GEO Semiconductor (acquired by indie Semiconductor in 2023), the GEO GW5 geometric processor, featuring Mixel’s MIPI D-PHY CSI-2 TX and MIPI D-PHY CSI-2 RX IPs.

Mixel invites you to visit booth #2616 at DAC 2025, to meet their experts and explore how these technologies can benefit your SoC designs.

If you can’t make it to DAC, you can learn more at https://mixel.com/

DAC registration is open.

Also Read:

Cadence at the 2025 Design Automation Conference

proteanTecs at the 2025 Design Automation Conference

Breker Verification Systems at the 2025 Design Automation Conference


Analog Bits at the 2025 Design Automation Conference #62DAC

Analog Bits at the 2025 Design Automation Conference #62DAC
by Daniel Nenni on 06-10-2025 at 6:00 am


Analog Bits attends a lot of events. I know because I see them a lot in my travels. Lately, the company has been stealing the show with cutting-edge analog IP on a broad range of popular nodes and a strategy that will change the way design is done. Analog Bits is quietly rolling out a new approach to system design, one that delivers a holistic approach to power management during the architectural phase of design. The company believes this is the only way to achieve the required power and performance for demanding next-generation AI systems.

In their words, “Analog Bits is the leading energy management IP company, making power safe, reliable, observable and efficient.” There is a lot to unpack in that statement, and a lot to see at the Analog Bits booth #1320 at DAC. Let’s look at the IP Analog Bits has available across several foundries. There is much more in the works.

TSMC 2nm

Analog Bits recently completed a successful second test chip tapeout at 2nm, but the real news is that the company will be at DAC with multiple working analog IPs at 2nm. A wide-range PLL, PVT sensor, droop detector, 18-40MHz crystal oscillator, and differential transmit (TX) and receive (RX) IP blocks will all be on display.

TSMC 3nm

Four power management IPs on TSMC’s CLN3P process will also be demonstrated. These include a scalable low-dropout (LDO) regulator, a spread-spectrum clock generation PLL supporting PCIe Gen4 and Gen5, a high-accuracy thermometer IP using Analog Bits’ patented pinless technology, and a droop detector for 3nm.

Other TSMC Nodes, 4nm to 0.18u

Analog Bits has been an OIP partner with TSMC since 2004 and has a large portfolio of clocking, sensor, SERDES and IO IPs. You can check availability on the company’s product selector page at https://www.analogbits.com/product-selector/.

GlobalFoundries 12LP, 12LP+, 22FDX

An integer PLL, FracN/SSCG PLL, PCIe Gen3 reference clock PLL, PVT sensor, and power-on reset are all available in both GF 12LP and 12LP+. A PCIe Gen4/5 reference clock PLL is available on GF 12LP+. A broad array of automotive IP is also available in GF 22FDX, including voltage regulators, power-on reset, PVT sensors and IOs.

Samsung 4LPP, 8LPU and 14LPP

An integer PLL, PVT sensor, power-on reset, and droop detector are available on Samsung 4LPP. An automotive-grade PLL, PVT sensor, and PCIe Gen4/5 SERDES are available on Samsung 8LPP/8LPU. An integer PLL, PVT sensor, and PCIe Gen4 SERDES are available on Samsung 14LPP.

About the Intelligent Power Architecture

Optimizing performance and power in an on-chip environment that is constantly changing with on-chip variation and power-induced glitches can be a real headache. Multi-die design compounds the problem across many chiplets.

The Analog Bits view is that this problem cannot be solved as an afterthought. Plugging in optimized IP or modifying software late in the design process won’t work. The company believes that developing a holistic approach to power management during the architectural phase of the project is the answer.

So, Analog Bits is rolling out its Intelligent Power Architecture initiative. There is a lot of IP and know-how that work together to make this a reality. If power optimization is a challenge, you should stop by booth #1320 at DAC and see what solutions are available from Analog Bits.

To Learn More

You can find extensive coverage of Analog Bits on SemiWiki here. You can also visit the company’s website to dig deeper. See you at DAC.

DAC registration is open.

Also Read:

Analog Bits Steals the Show with Working IP on TSMC 3nm and 2nm and a New Design Strategy

2025 Outlook with Mahesh Tirupattur of Analog Bits

Analog Bits Builds a Road to the Future at TSMC OIP


SoC Front-end Build and Assembly

SoC Front-end Build and Assembly
by Daniel Payne on 06-09-2025 at 10:00 am


Modern SoCs can be complex, with hundreds to thousands of IP blocks, so there’s an increasing need for a front-end build and assembly methodology that eliminates manual, error-prone steps. I’ve been writing about an EDA company that focuses on this area of design automation, Defacto Technologies, and we met by video to get an update on the latest release of its SoC Compiler, v11.

With SoC Compiler, an architect or RTL designer can integrate all of their IP, auto-connect some blocks, define which blocks should be connected, and create a database for simulation and logic synthesis tools. Both the top level and subsystems can be built, or you can easily restructure your design before sending it to synthesis. Using SoC Compiler ensures that design collateral such as UPF, SDC and IP-XACT stays coherent with the RTL. Here’s what the design flow looks like with SoC Compiler.
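To make the auto-connect idea concrete, here is a toy sketch of name-based connection between IP blocks. This illustrates the general concept only; it is not Defacto’s actual algorithm or API, and the block and port names are hypothetical:

```python
from collections import defaultdict

def auto_connect(blocks: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Toy auto-connect: pair identically named ports across blocks.

    Not Defacto's algorithm -- just an illustration of matching
    ports by name to build top-level nets.
    """
    by_port = defaultdict(list)
    for block, ports in blocks.items():
        for port in ports:
            by_port[port].append(block)
    connections = []
    for port, owners in by_port.items():
        # Connect every other block sharing this port name to the first owner.
        for other in owners[1:]:
            connections.append((f"{owners[0]}.{port}", f"{other}.{port}"))
    return connections

nets = auto_connect({
    "cpu":  ["clk", "rst_n", "axi_m"],   # hypothetical block/port names
    "dram": ["clk", "rst_n", "axi_s"],
})
print(nets)  # [('cpu.clk', 'dram.clk'), ('cpu.rst_n', 'dram.rst_n')]
```

A real assembly tool would of course also consult bus definitions (e.g. IP-XACT bus interfaces) and direction/width rules rather than matching on names alone.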

Another use of the Defacto tool is when physical implementation needs to be linked to RTL pre-synthesis. More precisely, when place and route of all the IP blocks doesn’t fit within the area goal, you capture the back-end requirements and create physically-aware RTL to improve PPA during synthesis; the tool also has power and clock domain awareness. When building an SoC it’s important to keep all of the formats coherent: IP-XACT, SDC, UPF and RTL. Using a tool to maintain that coherence saves time by avoiding manual mistakes and miscommunication.

In the new v11 release there has been a huge improvement in runtime performance, where customers report seeing an 80X speed-up to generate new configurations. This dramatic speed improvement means that you can try out several configurations per day, resulting in faster time to reach PPA goals. What used to take 3-4 hours to run, now takes just minutes.

One Defacto customer had an SoC design with 925 IP blocks, consisting of 4,900 instances, 5k bus interface connections, and 65k ad hoc connections, and the complete integration ran in under one hour.

V11 extends IP-XACT support with management of TGI, vendor extensions, and multi-view descriptions. The latest UPF 3.1 is supported. Improvements to IP-XACT also include support of parameterized add_connection and Insert IP-XACT Bus Interface (IIBI).
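For readers unfamiliar with IP-XACT (IEEE 1685), here is a minimal sketch of what the format looks like and how a script might read it. The component and bus interface names are hypothetical, invented for illustration; this shows the standard’s general shape, not Defacto’s tooling:

```python
import xml.etree.ElementTree as ET

# Minimal hand-written IP-XACT 1685-2014 fragment (hypothetical names).
IPXACT_NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"}

component_xml = """\
<ipxact:component xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>demo</ipxact:library>
  <ipxact:name>uart_ip</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:busInterfaces>
    <ipxact:busInterface>
      <ipxact:name>apb_slave</ipxact:name>
    </ipxact:busInterface>
  </ipxact:busInterfaces>
</ipxact:component>
"""

root = ET.fromstring(component_xml)
# List the component's bus interface names.
names = [bi.findtext("ipxact:name", namespaces=IPXACT_NS)
         for bi in root.iterfind(".//ipxact:busInterface", IPXACT_NS)]
print(names)  # ['apb_slave']
```

Keeping machine-readable descriptions like this in sync with the RTL, SDC and UPF is exactly the coherence problem the article describes.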

There are also new AI-based features that improve tool usability and code generation. You can use your own LLM or inference engines, and there is no requirement to train the AI features.

Users of SoC Compiler can run the tool from the command line, from the GUI, or through an API in Tcl, Python or C++. Defacto has seen customers use the tool in diverse application areas: HPC, security, automotive, IoT and AI. The more IP blocks in your SoC project, the greater the benefit of using SoC Compiler. Take any existing EDA tool flow and add the Defacto tool to get more productive.

Summary

During the past 17 years the engineering team at Defacto has released 11 versions of the SoC Compiler tool to help system architects, RTL designers and DV teams become more efficient during the chip assembly process. I plan to visit Defacto at DAC in booth 1527 on Monday, June 23 to hear more from a customer presentation about using v11.

Related Blogs