CEO Interview with Eelko Brinkhoff of PhotonDelta
by Daniel Nenni on 12-14-2025 at 1:00 pm

In 25 years of working in economic development, Eelko Brinkhoff has gained extensive experience and knowledge in Foreign Direct Investment (FDI), the internationalisation of SMEs, innovation cooperation, and economic development. He has built a strong network in the Netherlands and abroad spanning business, government, knowledge institutes, and universities.

In his role as CEO of PhotonDelta, his challenge is to mature the organisation after a period of rapid growth and to make it an internationally recognised accelerator for the photonic chip industry. PhotonDelta plays a key role in making the integrated photonics ecosystem indispensable to the goals and challenges we face today. Photonic chips will become critical in applications such as quantum computing, robotics, sustainable agriculture, and autonomous driving. As a Dutch, world-leading ecosystem, PhotonDelta will be a driving force in making this happen.

Tell us about your organisation.

PhotonDelta is a non-profit organisation supporting an end-to-end value chain for photonic chips that designs, develops, and manufactures innovative solutions that contribute to a better world. We do so by creating global awareness and promoting the benefits and potential of the Dutch and European photonic chip industry and its technologies. Leveraging funding from the National Growth Fund, alongside strategic investments, we catalyse the acceleration of the photonic chip industry.

What problems are the companies that you work with solving?

The PhotonDelta Ecosystem is an end-to-end value chain for photonic chips that designs, develops, and manufactures innovative solutions that contribute to a better world. The ecosystem is at the very forefront of photonic chip research, pioneering new products and solutions.

What application areas are you seeing the most exciting developments in?

Right now, we see adoption in markets like datacom, where photonic chips move more data while using less energy (think AI demand); sensing solutions for healthcare diagnostics; and photonic chips for quantum computing at room temperature.

What keeps businesses in this industry up at night?

Companies in this industry often worry about how to scale fast enough to deliver affordable chips without sacrificing quality or efficiency. That pressure is amplified by the need to secure necessary funding to support expansion. Businesses also struggle with a complex regulatory environment that can slow progress and increase costs. On top of that, finding and retaining the specialized talent required for chip design, engineering, and manufacturing remains a major challenge.

How do companies normally engage with your organisation?

We support an ecosystem of more than 70 startups and scale-ups with programmes on Talent, Tech, Funding, and Internationalisation. Via our internationalisation effort, we initiate, guide, and support new partnerships in business development and technology cooperation in key markets across North America, Europe, and Asia.

What success have companies within your ecosystem seen recently?

Companies across the PhotonDelta ecosystem have seen a wave of meaningful progress recently, reflecting both technological maturity and growing commercial traction. Several startups have advanced from promising R&D to concrete milestones. Recent examples include Aluvia Photonics, which secured new funding that will allow the company to expand its aluminium-oxide photonic integrated circuit technology and accelerate collaboration with partners throughout the ecosystem. The photonics specialists at Surfix have been working with leading oncologists at the world-renowned NKI (Netherlands Cancer Institute) to create a photonics-based point-of-care testing platform that is helping save lives today from cortisol-related conditions such as hypercortisolism and Addison’s disease. And in another example, PHIX has partnered with Ligitek, Leverage, and ITRI to develop next-generation high-speed, energy-efficient optical transceivers that address global challenges in data connectivity. This advancement in high-speed optical engines strengthens the Netherlands-Taiwan collaboration and prepares new semiconductor packaging innovations for scalable volume manufacturing.

You can find more success stories here – https://www.photondelta.com/news/

What’s next for the industry?

The industry is heading into a phase of accelerated growth driven by stronger public and private investment, including initiatives like a potential EU Chips Act 2.0. This funding will be key to reducing PIC production costs and simplifying packaging, both essential for wider market adoption. At the same time, photonics technologies are set to expand rapidly in sustainability-focused sectors such as food, health, and energy, where real-world demand is increasing.

To keep pace, companies will need to deepen their capabilities in hybrid integration, quantum-ready technologies, and scalable design tools that streamline development. Equally important is cultivating a global talent pool and strengthening alignment between international ecosystems to avoid fragmentation. Finally, the industry will continue pushing for shared design and manufacturing standards, enabling greater compatibility across sectors and faster time-to-market.

Overall, the next phase will be defined by scaling through investment, better technology platforms, coordinated talent development, and standards that support broad commercial deployment.

Contact PhotonDelta

Also Read:

CEO Interview with Pere Llimós Muntal of Skycore Semiconductors

CEO Interview with Brandon Lucia of Efficient Computer

CEO Interview with Dr. Peng Zou of PowerLattice


CEO Interview with Haber Ma of ADCERAX
by Daniel Nenni on 12-14-2025 at 12:00 pm

Haber Ma is the CEO of ADCERAX and leads the company’s global strategy in advanced ceramic materials for semiconductor and high-performance industrial applications. With a background in engineering, precision manufacturing, and international supply chain development, he has overseen ADCERAX’s transition from a traditional ceramics producer to a specialized supplier of semiconductor-grade alumina, zirconia, SiC, and Si₃N₄ components.

Haber has played a key role in establishing ADCERAX’s material engineering capabilities, machining precision standards, and customer collaboration model focused on reliability, purity, and long-term stability. Under his leadership, the company has expanded its portfolio to include ceramic robot arms, ESC-related components, plasma-erosion-resistant ceramics, and advanced furnace materials. He advocates for deeper integration between ceramic material science and semiconductor equipment engineering to support the industry’s scaling and purity demands.

Tell us about your company.

ADCERAX is an advanced ceramics manufacturer specializing in semiconductor-grade alumina, zirconia, silicon carbide (SiC), silicon nitride (Si₃N₄), aluminum nitride (AlN), and ZTA materials. Our mission is to deliver high-purity, high-reliability ceramic components that support the performance, stability, and cleanliness requirements of modern semiconductor equipment.

We focus heavily on engineering collaboration with tool OEMs and subsystem suppliers, providing fully custom mechanical parts, chamber insulation ceramics, ceramic end effectors, electrostatic-chuck-related components, and structural parts for vacuum, etch, deposition, and thermal processing systems. With vertically integrated machining, precision grinding, and testing capabilities, ADCERAX helps customers accelerate development, improve system reliability, and achieve long-term supply chain resilience.

What problems are you solving?

Semiconductor manufacturing environments push materials to extremes—high plasma energy, aggressive chemistries, rapid thermal cycling, and strict particle control. Many traditional materials cannot maintain dimensional stability, surface integrity, or dielectric reliability under these conditions. The smallest contamination event or micro-crack can jeopardize yield.

ADCERAX addresses these challenges with engineered ceramics that maintain high mechanical strength, low particle generation, and exceptional corrosion resistance. Our SiC and Si₃N₄ components withstand plasma erosion in etch and CVD chambers, while high-purity alumina and AlN ensure dielectric stability for wafer handling and isolation applications. In thermal processing, components such as precision alumina tube solutions support consistent temperature distribution and long-term furnace reliability. Our goal is to remove material-related failure modes so customers can focus on equipment performance rather than replacement cycles.

What application areas are your strongest?

Our strongest applications lie in semiconductor equipment—particularly where extreme environments demand stable, clean, and long-lasting ceramic components. Key product categories include:

  • Electrostatic chuck (ESC) ceramics: high-purity dielectric materials and structural components for vacuum and plasma environments.
  • Ceramic robot arms and end effectors: ultra-clean, lightweight, and thermally stable components for wafer transport and automation.
  • LPCVD / diffusion furnace tubes: alumina and quartz-alternative ceramics for long-cycle thermal processing.
  • Chamber insulation and isolation ceramics: rings, plates, and liners engineered for plasma-erosion resistance.
  • SiC and Si₃N₄ mechanical components: ideal for corrosive chemistries and high-temperature subsystems.

These applications leverage our expertise in purity control, microstructure engineering, and precision machining for semiconductor equipment platforms.

What keeps your customers up at night?

Semiconductor equipment manufacturers face growing challenges around reliability, contamination control, and supply chain stability. A single ceramic component failure—whether from plasma erosion, thermal shock, or insufficient machining precision—can lead to extended downtime or yield loss.

Customers worry about:

  • particle generation from material microfractures
  • long-term erosion in aggressive plasma chemistries
  • surface roughness drift affecting chamber cleanliness
  • CTE mismatch causing assembly instability
  • inconsistent global supply of critical ceramic parts

At ADCERAX, we help mitigate these risks through rigorous material qualification, tight-tolerance machining, and repeatable production processes. Our engineering collaboration model ensures each component is optimized for the customer’s system environment rather than treated as a generic ceramic part.
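One of the worries listed above, CTE mismatch, is easy to estimate to first order. Below is a minimal sketch (Python) using the simplified biaxial relation σ ≈ E·Δα·ΔT with typical handbook values; the numbers are illustrative rather than ADCERAX data, and the relation ignores geometry and joint compliance, so treat the result as an order-of-magnitude check only.

    # Rough CTE-mismatch stress estimate for an alumina part bonded to silicon.
    # sigma ~= E * delta_alpha * delta_T (simplified; ignores constraint geometry)
    E_ALUMINA = 370e9      # Young's modulus, Pa (typical handbook value)
    CTE_ALUMINA = 7.2e-6   # 1/K (typical)
    CTE_SILICON = 2.6e-6   # 1/K (typical)
    DELTA_T = 150.0        # K, e.g. a thermal-cycling excursion

    stress_pa = E_ALUMINA * (CTE_ALUMINA - CTE_SILICON) * DELTA_T
    print(f"~{stress_pa / 1e6:.0f} MPa")  # ~255 MPa across the bond line

Even this crude estimate shows why CTE-matched material pairs and compliant joints matter so much in assembly design.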

What does the competitive landscape look like and how do you differentiate?

The market for semiconductor ceramics is highly specialized, with a small group of manufacturers offering semi-grade purity and machining accuracy. Many suppliers focus on standard industrial ceramics, but semiconductor applications require an entirely different level of microstructural control, polishing precision, and contamination management.

ADCERAX differentiates by combining:
  • semi-grade purity control for low alkali and low metallic contamination
  • tight-tolerance machining for high-precision wafer handling and chamber components
  • advanced materials portfolio (Al₂O₃, AlN, SiC, Si₃N₄, ZTA) aligned with semiconductor needs
  • custom component engineering rather than fixed catalog items
  • fast prototype-to-production cycles, enabling faster OEM development timelines

Our integration of engineering, machining, and materials expertise allows us to serve as both a supplier and a technical partner.

What new features or technology are you working on?

ADCERAX is expanding its R&D around high-purity ceramics for next-generation etch and deposition platforms. This includes improvements in:

  • plasma-erosion-resistant SiC and Si₃N₄ for advanced etch chemistries
  • high-thermal-uniformity alumina structures for diffusion and LPCVD systems
  • ultra-flat, ultra-clean ceramic plates for wafer handling subsystems
  • advanced polishing and surface engineering to reduce particle generation

We are also developing enhanced metrology and inspection methods to support OEM qualification requirements, including micro-defect detection and advanced surface analysis. Our long-term direction is enabling ceramic components that last longer, shed fewer particles, and support higher equipment uptime.

How do customers normally engage with your company?

Most customers approach ADCERAX through engineering-driven collaboration. They typically provide drawings, CAD files, or prototype requirements, and our team works closely with their engineers to refine material selection, tolerances, and design features.

We support the full cycle from prototype builds to mass production, offering:
  • material and design consultation
  • rapid sampling and custom machining
  • batch production with strict quality control
  • global logistics support for long-term supply programs

For semiconductor OEMs and subsystem suppliers, we often join early in the design phase to ensure every ceramic part meets performance, reliability, and cleanliness expectations. Website: https://www.adcerax.com

Also Read:

CEO Interview with Pere Llimós Muntal of Skycore Semiconductors

CEO Interview with Brandon Lucia of Efficient Computer

CEO Interview with Dr. Peng Zou of PowerLattice


Podcast EP322: A Wide-Ranging and Colorful Conversation with Mahesh Tirupattur
by Daniel Nenni on 12-12-2025 at 10:00 am

Daniel is joined by Mahesh Tirupattur, chief executive officer at Analog Bits. Mahesh leads strategic planning to develop and implement Analog Bits’ vision and mission of enabling the silicon digital world with interfacing IP to the analog world. Additionally, Mahesh oversees all aspects of Analog Bits’ operations to ensure efficiency, effectiveness, and financial security while maintaining strong relationships with key stakeholders, customers, and employees.

In this far-reaching discussion, Mahesh begins with an overview of some key Analog Bits accomplishments for 2025. He spends some time on the company’s relationship with TSMC, including the awards Analog Bits has won over the years and the latest 2nm and 3nm IP. He describes in some detail the five joint papers Analog Bits presented at the TSMC OIP event with high-profile partners.

Dan also discusses the Intelligent Power Architecture with Mahesh, who explains what it is and how it impacts chip design. Analog Bits Pinless IP is also explored as Mahesh describes how it works and where it becomes very useful for dense, advanced designs. How Analog Bits power sensors are enabling the broad deployment of AI is also discussed.

Dan ends the conversation by exploring Mahesh’s recent presentation that explains how analog designers and winemakers are similar.

Contact Analog Bits

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


MZ Technologies Launches Advanced Packaging Design Video Series
by Daniel Nenni on 12-12-2025 at 6:00 am

In a significant move aimed at empowering semiconductor and systems-design engineers, MZ Technologies has announced the launch of a new video series focused on advanced packaging design. This initiative comes at a time when the semiconductor industry is rapidly shifting toward multi-die, 2.5D/3D integration, heterogeneous chiplets and package-level innovation, and MZ Technologies is positioning itself at the heart of that transformation.

MZ Technologies has built its reputation on offering cutting-edge co-design tools for chiplet/package integration through its GENIO family of products. The company’s solutions enable system-level exploration of packaging choices and dielectric, thermal, mechanical and interconnect tradeoffs, allowing designers to make informed decisions early in the flow. Their focus on “path-finding” for multi-die systems addresses a classic pain point in advanced packaging: which architecture to choose and how it will behave.

With the new video series, MZ Technologies aims to open the black box of advanced packaging: presenting real-world challenges, design strategies and best practices in navigating the complex world of multi-die, 3D integration, thermal/mechanical stress, interposer and heterogeneous assembly. Early episodes already highlight topics such as heterogeneous 3D integration, multi-die design and advanced packaging challenges, often featuring insights from the company’s founder and CEO, Anna Fontanelli.

The timing is notable. As traditional approaches to scaling (i.e., monolithic Moore’s Law) decelerate, system-architects and packaging engineers are increasingly required to rethink design from the package outward. MZ’s video series effectively addresses this shift by presenting the “why” and “how” of advanced packaging, not just the tool. For example, the series helps clarify how thermal gradients and mechanical stresses in 3D-packaged systems can cause warpage, delamination, or interconnect failure, a major barrier for adoption.

In practice, making multi-die designs work requires coordinated decisions — from chiplet placement and interconnect architecture to thermal path planning and substrate materials. MZ’s video series appears to take a system-level storytelling approach: detailing how to break down the complexity, weigh trade-offs, and optimize early in the engineering timeline. The value of such content is especially high for designers who must navigate the sizing of interconnects, plan for mechanical reliability, and reconcile package-PCB interactions. By sharing expert-level conversations and architectural case studies, the series helps demystify what might otherwise remain a niche specialized discipline.

Another key benefit: the series functions as a bridge between EDA tool-capabilities and real-world design problems. Instead of simply promoting software, MZ uses the video format to build thought leadership, establishing the company not only as a tool vendor but also as an educator and industry advocate for advanced packaging design. It supports a marketing strategy that emphasizes deeper engagement with potential users, who may be architects, package engineers, reliability specialists or system integrators.

For companies embarking on advanced packaging projects, the video series offers a practical resource. Engineers might use episodes as discussion starters: internal training, design-kickoff meetings, or guiding cross-discipline teams (chip, package, board) toward a shared understanding of assembly constraints and system-level goals. As 2.5D/3D heterogeneous systems become increasingly common in AI, high-performance computing, and edge devices, the demand for such educational content grows.

View the videos here.

Bottom line: MZ Technologies’ launch of an advanced-packaging design video series is a strategically sound and timely move. It reflects both the evolving needs of semiconductor packaging and the company’s ambition to position itself as a thought leader. For design teams facing the paradigm shift toward multi-die and 3D integration, the series promises to be an invaluable guide, one that goes beyond tool features to address underlying architectural and process challenges.

Also Read:

Video EP4: A Deeper Look at Advanced Packaging & Multi-Die Design Challenges with Anna Fontanelli

Video EP3: A Discussion of Challenges and Strategies for Heterogeneous 3D Integration with Anna Fontanelli

2025 Outlook Anna Fontanelli MZ Technologies


Superhuman AI for Design Verification, Delivered at Scale
by Mike Gianfagna on 12-11-2025 at 10:00 am

There is a new breed of EDA emerging. Until recently, EDA tools were focused on building better chips, faster and with superior quality of results. Part of that process is verifying and debugging the resultant design. Thanks to ubiquitous AI workloads and multi-chip architectures, the data to be verified and debugged is exploding, along with the scope of the specs and test plans. The size of the resultant datasets and the complexity of the relationships to be explored is a task that is simply too large for a design team of any size to handle in a time frame of relevance.

As observed by me and many others, AI turns out to be the problem and the solution for several classes of problems. This is one of them. A new company called Bronco AI focuses on end-to-end design verification with a new breed of AI that can be deployed in an existing design flow and alleviate the verification problem on day 1. No lengthy startup. Bronco’s technology can tackle a specific problem and easily scale for the entire enterprise. And the technology gets better over time in a unique way that protects the customer’s proprietary design knowledge.

I recently had the opportunity to get an overview of what Bronco AI can do, along with a live demonstration. This discussion was truly a new and unique experience. Superhuman AI for design verification, delivered at scale.

Here are some of the details of what I learned.

Who is Bronco AI?

I spoke with David Zhi LuoZhang, co-founder & CEO of Bronco AI. David has a computer science degree from the University of Pennsylvania and an economics degree from The Wharton School. Before co-founding Bronco AI, David was working on AI Fighter Jets at Shield AI, training human-beating F-15s and F-16s on efficient hardware. He then turned down a role at SpaceX working on Starlink embedded algorithms to start Bronco AI. David is something of a renaissance person. He has a knack for seeing problems differently from the rest of us and developing unexpected solutions. I thoroughly enjoyed our meeting.

As I mentioned, Bronco accelerates the end-to-end design verification flow, from specification analysis to verification planning, test bench bring-up, and simulation debug. This process represents a combination of complex planning tasks in the early design phase and a series of highly complex analysis and debug tasks as the design matures. The time and effort required for this work is substantial. Some of the debug tasks require highly detailed analysis of massive data sets with huge numbers of subtle interactions. Bronco has focused on a very real problem here.

Before getting into the demo, I explored some of the things that make Bronco AI unique with David. An important one is securing the customer’s IP. David explained that Bronco’s customers have invested substantial resources building proprietary AI models and flows. These capabilities represent the company’s competitive edge and so must be protected behind the company’s firewall. David went on to point out that Bronco’s tools can operate in the customer’s environment and Bronco never trains on the customer’s data. This approach ensures private data stays private.

Another one is that Bronco’s platform continues to learn and improve as it solves more problems, making it more powerful and valuable over time, just like an expert human design verification engineer, but far more scalable. David explained that Bronco’s architecture facilitates this learning and scalability.

A common AI foundation and a unified data model help the system to learn by optimizing the parameters that control the AI for the specific problems of a given customer. This optimization is unique for each customer and is shared across all the tools in the platform. This adaptability is quite challenging to accomplish in the context of agentic AI systems. It turns out that both David and his co-founder Jeffrey Pan had previously done research on interpretability and robustness of AI algorithms. This background is the foundation for some of Bronco AI’s differentiation.

The diagram below provides some visibility into how the AI agents are organized, enabling the system to improve over time, easily bolt into existing flows, and protect the customer’s sensitive data along the way.

Proprietary AI agents that are secure and specialized for DV

David also explained another important attribute of how Bronco’s tools are used. The system doesn’t take the quality of design data for granted. Rather it can analyze and improve the quality and completeness of all inputs before they are used for subsequent training and optimization. This is the AI era version of preventing garbage in, garbage out.

The Demo

The demo illustrated how Bronco AI performs debug. David explained that debug is one of the most complex and data intensive parts of design verification. So, this is a good pressure test of the entire system.

The example design was an open-source network on chip (NoC) application. David explained that some of Bronco’s customers have similar subsystems or use the same standards in their designs, so a real, mainstream application was being debugged. He began by describing the high-level process Bronco AI uses to perform debug.

All of the existing information about the design is presented to the tool – simulation runs, waveforms, logs, design files, specs, etc. – along with a description of the problem. This is communicated easily in natural language. Bronco AI then makes a playbook of how it will approach the debug task.

This playbook is quite detailed and provides substantial documentation about the design and its issues. The tool then analyzes observed behavior, looking for anomalies and potential root causes. The playbook informs a lot of this work. The system continues its work until either a root cause is found, or the problem is localized to the point where a ticket can be created to point an engineer to the location requiring further analysis. This process is summarized in the diagram below.

Bronco AI debug process
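For readers who think in code, the playbook-then-analyze loop described above can be sketched as follows. Bronco’s actual agents and interfaces are proprietary and not public, so every name below (build_playbook, run_analysis, the stubbed findings) is hypothetical; this only illustrates the orchestration pattern of a drafted plan, parallel analyses, and a root cause or ticket as the outcome.

    # Hypothetical sketch of the debug orchestration pattern described above.
    from concurrent.futures import ThreadPoolExecutor

    def build_playbook(design_inputs: dict, problem: str) -> list[str]:
        # In the real tool an AI agent drafts detailed steps from specs,
        # logs, and waveforms; here the plan is stubbed for illustration.
        return ["check router handshakes", "trace flow-control credits",
                "compare failing vs. passing regression seeds"]

    def run_analysis(step: str) -> dict:
        # Stand-in for waveform/log analysis of one playbook step.
        return {"step": step, "root_cause": None, "suspect": "noc_router_2"}

    def debug(design_inputs: dict, problem: str) -> dict:
        playbook = build_playbook(design_inputs, problem)
        with ThreadPoolExecutor() as pool:      # analyses run in parallel
            findings = list(pool.map(run_analysis, playbook))
        for f in findings:
            if f["root_cause"]:
                return {"status": "root_cause", "detail": f}
        # No definitive root cause: localize and hand off via a ticket.
        return {"status": "ticket", "suspect": findings[0]["suspect"]}

    print(debug({"rtl": "noc/", "logs": "regress.log"},
                "Why is my NoC hanging/timing out? Please debug."))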

The specific debug task was to find the reason for a time-out that was observed in a regression run. David explained that this bug was chosen because time-out problems are particularly difficult to debug, as they provide very little information. David specified the location of the design files (the Inputs block above) and then simply typed:

Why is my NoC hanging/timing out? Please debug.

I then watched as the tool built a detailed playbook of how to approach the debug task. There was clearly a lot of analysis going on in real-time regarding elements such as router blocks, network interfaces and hand-shaking protocols. The tool continued to analyze the circuit in greater detail, running multiple processes in parallel.

After about 25 minutes of elapsed time, the tool localized the root cause and presented the three best explanations, covering both testbench and RTL issues. David confirmed that the tool indeed included the right answer in its list of explanations.

Without a tool like this, the design team would come to work after a night of regression runs and begin sifting through mountains of data to find the anomalies and begin the debug process. This contrasts with arriving at work and being presented with a detailed playbook for each problem found and a localized root cause for each problem. At that point, the power of this tool became quite clear to me.

David shared a quote from one of Bronco’s customers, Etched, a well-funded startup building data center AI ASICs on TSMC 4nm. That really drove home the value.

“Bronco helps our DVs get a head start on their work and helps us bank institutional knowledge, automate tasks and elevate DVs to be like scientists.”

To Learn More

If you want to elevate your design verification team to improve design quality and time-to-market, you should seriously consider adding Bronco AI. This tool will vastly improve your design verification efficiency and quality and get better at it over time.

You can request your own private demo and see the tool for yourself. Just go to the Bronco AI webpage and click on Request a Demo in the upper right. And that’s how you can get superhuman AI for design verification, delivered at scale.


The Quantum Threat: Why Industrial Control Systems Must Be Ready and How PQShield Is Leading the Defense
by Daniel Nenni on 12-11-2025 at 8:00 am

Industrial control systems (ICS) underpin the world’s most critical infrastructure: power grids, manufacturing plants, transportation networks, water systems, oil and gas facilities, and chemical processing operations. For decades, these systems relied on isolation, proprietary communication protocols, and hardware longevity as de facto security measures. But Industry 4.0, cloud integration, and the rapid expansion of industrial IoT have removed traditional boundaries, exposing ICS to a dramatically larger cyberattack surface. Now, an even bigger disruption looms: the coming era of quantum computing.

Quantum computers capable of running algorithms such as Shor’s threaten to break today’s public-key cryptography, specifically RSA and elliptic-curve cryptography. These algorithms currently secure authentication, firmware verification, remote connections, and confidential data flows across ICS and operational technology. Experts estimate that cryptographically relevant quantum computers may appear within the next decade, but adversaries do not need to wait. “Harvest-now, decrypt-later” attacks are already underway, where encrypted ICS traffic is collected today and stored for future decryption once quantum machines reach maturity. The long equipment lifespans common in industrial environments (often 10 to 30 years) mean that many assets being deployed right now will still be in active use when quantum attacks become practical.

ICS environments face several unique challenges that make the quantum transition especially urgent. First, industrial devices are resource constrained. PLCs, RTUs, and embedded sensors often operate with limited memory and compute power, making it difficult to implement next-generation cryptographic algorithms without specialized optimization. Second, industrial networks have become deeply interconnected. The erosion of the once-reliable “air gap” exposes control-layer equipment to the same threats facing enterprise IT. Third, ICS operate in safety-critical environments where any compromise, whether firmware tampering, signal injection, or command spoofing, can have real-world physical consequences, from halted production to public safety hazards.

Governments and security agencies such as CISA, NIST, and ENISA have issued increasingly clear warnings: quantum computing will render today’s cryptography obsolete, and migration will be long, costly, and technically complex. For ICS operators, doing nothing is the riskiest option.

This is where PQShield has emerged as a global leader in practical, deployable post-quantum cryptography (PQC) for industrial systems. Founded in 2018 as a spin-out from the University of Oxford, PQShield has played a central role in the development, standardization, and commercialization of the PQC algorithms now selected by NIST: ML-KEM (Kyber) for key establishment, ML-DSA for digital signatures, and SLH-DSA for hash-based signatures. As one of the few companies contributing to every stage of the NIST PQC process, PQShield brings unmatched cryptographic pedigree to the ICS market.

However, PQShield’s contribution extends beyond research. The company has built the industry’s most complete, production-ready PQC portfolio for embedded and constrained environments, the exact conditions found in ICS. PQShield’s PQMicroLib, for example, brings high-security PQC to microcontrollers with as little as ~13 kB RAM, making it a practical retrofit for brownfield industrial devices. For new greenfield deployments, PQShield provides side-channel-resistant hardware IP cores, quantum-secure boot, and firmware update mechanisms to protect the entire trust chain of industrial systems. Hybrid classical-plus-PQC libraries support gradual migration, maintaining compatibility with existing infrastructure while building quantum resilience.
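To make the hybrid classical-plus-PQC migration pattern concrete, here is a minimal sketch of hybrid key establishment. It assumes the open-source liboqs-python bindings (oqs) built with the ML-KEM-768 mechanism and the widely used cryptography package for X25519 and HKDF; it illustrates the general pattern, not PQShield’s products or APIs.

    # Hybrid key establishment: combine an X25519 secret with an ML-KEM
    # secret so the session stays safe if either scheme is broken.
    import oqs  # liboqs-python (assumed built with ML-KEM enabled)
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Classical half: X25519 Diffie-Hellman between two parties.
    a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    classical_secret = a_priv.exchange(b_priv.public_key())

    # Post-quantum half: ML-KEM-768 encapsulation against B's public key.
    with oqs.KeyEncapsulation("ML-KEM-768") as party_b, \
         oqs.KeyEncapsulation("ML-KEM-768") as party_a:
        pk_b = party_b.generate_keypair()
        ciphertext, pq_secret_a = party_a.encap_secret(pk_b)
        pq_secret_b = party_b.decap_secret(ciphertext)
        assert pq_secret_a == pq_secret_b

    # Derive one session key from both secrets.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"hybrid-ics-session").derive(
        classical_secret + pq_secret_b)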

PQShield is also deeply active in ICS-relevant standards bodies, including the Industrial Internet Consortium (IIC) and ISA/IEC 62443 working groups. Its customers span semiconductor manufacturers, automotive suppliers, energy operators, and defense organizations—sectors where security and long-term reliability are paramount.

To prepare for the quantum era, ICS operators should start by conducting a full cryptographic inventory, identifying where vulnerable algorithms are used in authentication, key exchange, VPNs, and firmware verification. Next, organizations should adopt NIST-standard PQC algorithms and prioritize quantum-resistant secure boot and OTA update mechanisms. Finally, they should work with specialized partners, such as PQShield, to develop crypto-agility strategies and ensure both new and legacy systems can be upgraded over time.
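A cryptographic inventory can start small. The sketch below scans PEM certificates and flags quantum-vulnerable public-key algorithms using the cryptography package; the directory path is an example, not a standard location.

    # Flag quantum-vulnerable public keys across a directory of certificates.
    from pathlib import Path
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa, ec

    def classify(cert: x509.Certificate) -> str:
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            return f"RSA-{key.key_size} (quantum-vulnerable)"
        if isinstance(key, ec.EllipticCurvePublicKey):
            return f"ECC {key.curve.name} (quantum-vulnerable)"
        return type(key).__name__  # e.g. Ed25519 keys also fall to Shor

    for pem in Path("/etc/ics/certs").glob("*.pem"):  # illustrative path
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        print(pem.name, "->", classify(cert))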

Bottom line: The quantum threat is real, inevitable, and approaching fast. Organizations that move early will avoid expensive, last-minute retrofits and significantly reduce operational risk. With companies like PQShield providing the tools, standards alignment, and engineering depth needed to secure industrial systems, the path to quantum-safe ICS is now both achievable and urgent.

Contact PQShield

Also Read:

Think Quantum Computing is Hype? Mastercard Begs to Disagree

Podcast EP304: PQC Standards One Year On: The Semiconductor Industry’s Next Move

Formal Verification: Why It Matters for Post-Quantum Cryptography


AI Deployment Trends Outside Electronic Design
by Bernard Murphy on 12-11-2025 at 6:00 am

In a field as white-hot as AI it can be difficult to separate cheerleading from reality. I am as enthusiastic as others about the potential but not the “AI everywhere in everything” message that some emphasize. So it was interesting to find a survey which looks at the deployment reality outside our narrow domain of electronic and systems design, surveying nearly 800 businesses worldwide who are applying generative AI in financial services, government and healthcare. There will be some differences from our usage/plans but there should be enough in common that we ought to pay attention to the important challenges they find in scaling beyond early trials. The report is quite detailed on several topics. I am just picking a small number that attracted my interest.

Adoption rates are significant, usage still not widespread

Within the survey set, about 30% of employees are using GenAI daily (one or several times a day) and about 50% at least once per week. Perhaps these limits simply reflect corporate restrictions on access to generative tools, perhaps they reflect a learning curve especially in changing habits, both reasons entirely understandable. Then again, 64% of employees said they don’t see value in using AI in their work. Maybe that is an education problem, but it is certainly a barrier to overcome in plans to deploy AI more widely.

Data quality/accuracy remains a problem

Nearly 70% of respondents said they have delayed rollouts due to issues with accuracy. They attribute this to outdated or irrelevant data or hallucinations. Many said that half of their data was more than 5 years old. They continue to add new data without flushing out old data, which inevitably leads to data rot (redundant, obsolete, trivial/low value data) especially in data used for training. Might sound familiar to anyone tasked with pruning regression datasets.

They also point out that this problem is compounded by data generated by GenAI itself, growing by 22-40% per year. They don’t comment further on this point, but I would guess that a non-trivial percentage of that generated data might also be considered rot.

One personal experience here. I have a recent model robot vacuum/floor mop and wanted to know how to remove the mop. I used a Google AI Overview header provided with my search and found a video as a RAG endpoint. Except the video was for replacing the head on a hand floor mop. Complete miss, which surprised me. I usually think of RAG (a search leading to an endpoint on a human-generated text or video) as reliable. But only if the search leading to that endpoint is reliable, it appears. (In a previous blog I pointed to a paper which aims to improve accuracy in RAG relevance.)

Confidence in quality is more important than speed

There is significant concern (67% of respondents) that employees will lose the ability to distinguish truth from fiction in material produced by GenAI tools. Or at least they may become more careless in checking GenAI outputs. If I ask a tool to generate an email response to a customer request, will I check it carefully, line by line, or just scan to make sure it looks OK?

Aside from potential damage to the business caused by generation errors, frequent mistakes will damage in-house confidence in the AI initiative. The report leans toward at least balancing higher quality with speed. I would go further in saying that quality should get more emphasis. It is more important to build a solid base than to grow deployment quickly. I continue to believe that the best applications are those which intrinsically support robust cross-checks.

Still an exciting journey, but one that requires continued oversight and caution.


CAST’s Breakthrough in Automotive IP: The MSC-CTRL Microsecond Channel Controller
by Daniel Nenni on 12-10-2025 at 2:00 pm

In a significant advancement for automotive electronics, semiconductor intellectual property provider CAST has unveiled the MSC-CTRL Microsecond Channel Controller IP core. This new core empowers ASIC and FPGA designers with a deterministic, microsecond-precise serial interface for connecting to smart power and sensor devices. As vehicles evolve toward greater electrification and autonomy, the demand for custom SoCs that integrate advanced control and diagnostics has surged. Traditionally, designers relied on specific MCUs for such interfaces, but CAST’s offering provides a flexible, licensable alternative, enabling broader innovation in automotive subsystems.

The MSC-CTRL core stands out for its multi-protocol support, consolidating several variants of the Microsecond Channel technology into a single configurable block. This includes the standard Microsecond Channel (MSC), the enhanced Microsecond Channel Plus (MSC-Plus), and compatibility with the foundational Microsecond Bus (µSB) concept from early engine-control systems.

Peter Dumin, CAST’s senior product manager, emphasized the core’s role in modern designs: “Customers are pushing more control and diagnostics into their own SoCs, but still need the precise timing and rich feedback they get from MSC-based power and sensor devices.” By embedding this IP, engineers can create custom interfaces that rival those in proprietary MCUs, fostering competition and customization in the automotive sector.

From a technical standpoint, the core integrates seamlessly as a 32-bit AMBA APB peripheral on the system side. It features DMA-friendly triggers, interrupts, and configuration registers, making it straightforward to incorporate into platforms like AUTOSAR-based engine control units. This compatibility allows it to operate beneath the AUTOSAR Microcontroller Abstraction Layer or complex drivers, streamlining software development. The protocol’s heritage in short-reach, chip-to-chip links—connecting controllers to actuators like injector drivers, ignition coils, or sensor front ends—ensures low-latency communication essential for real-time applications.
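To illustrate what a memory-mapped APB peripheral looks like from the software side, here is a hedged sketch of register programming from embedded Linux. The base address and register map below are entirely hypothetical placeholders, not taken from CAST’s MSC-CTRL documentation, which defines the real offsets and bit fields.

    # Poke hypothetical configuration registers of an APB peripheral
    # through /dev/mem (requires root; all offsets/fields are made up).
    import mmap
    import os
    import struct

    MSC_BASE   = 0x40010000  # hypothetical SoC memory-map address
    REG_CTRL   = 0x00        # enable / protocol-variant select (hypothetical)
    REG_TIMING = 0x04        # downstream bit-timing divider (hypothetical)
    REG_IRQEN  = 0x08        # interrupt-enable mask (hypothetical)

    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    regs = mmap.mmap(fd, 4096, offset=MSC_BASE)

    def write_reg(offset: int, value: int) -> None:
        struct.pack_into("<I", regs, offset, value)  # 32-bit write

    def read_reg(offset: int) -> int:
        return struct.unpack_from("<I", regs, offset)[0]

    write_reg(REG_TIMING, 0x10)  # set a bit-time divider (made up)
    write_reg(REG_IRQEN, 0x1)    # enable a completion interrupt (made up)
    write_reg(REG_CTRL, 0x1)     # enable the controller (made up)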

What truly sets MSC-CTRL apart is its emphasis on deterministic timing, a critical factor in real-world actuator control. Unlike generic SPI or network buses, MSC technology guarantees microsecond-accurate actuation aligned with the controller’s timers. Latency and jitter are bounded and predictable, simplifying control-loop design and calibration. Additionally, a dedicated upstream channel delivers detailed status and diagnostics from external devices, eliminating the need for separate return lines per slave as in traditional PWM setups. This shared channel supports multiple slaves efficiently, reducing wiring complexity and enhancing system reliability. Engineers can thus confidently predict that “this injector, coil, or valve channel will switch at this time, every cycle,” even over a shared serial bus.

Functional safety is another cornerstone, with built-in features tailored for ISO 26262-compliant designs. These include communication-integrity checks like parity on critical paths, configurable glitch and spike filtering, timeout detection, and diagnostic flags with interrupts for error reporting. A hardware emergency-stop mechanism allows safe halting of outputs without CPU intervention, vital in safety-critical scenarios. For ASIL-classified systems, CAST offers an optional ISO 26262 ASIL-B safety package, comprising a Safety Manual, Failure Modes, Effects, and Diagnostic Analysis (FMEDA), and Failure Modes and Effects Analysis (FMEA). This package accelerates integration into SoC-level safety assessments, reduces the burden of treating the IP as a “black box,” and helps demonstrate compliance with functional-safety goals for timing-critical communications.

MSC-CTRL complements CAST’s broader automotive IP portfolio, including controllers for TSN, CAN, LIN, and SENT buses. This synergy enables engineers to construct end-to-end interfaces—from vehicle-level networks to the final actuator wiring—all sourced from one vendor, minimizing integration risks and accelerating time-to-market.

Available now royalty-free for ASIC and FPGA implementations, the core positions CAST as a key player in automotive innovation. Founded in 1993, CAST specializes in silicon IP, offering microcontrollers, processors, compression engines, and security modules for diverse applications. For details, interested parties can contact CAST at info@cast-inc.com or visit www.cast-inc.com.

Bottom line: This launch arrives at a pivotal moment, as the automotive industry grapples with the complexities of electric vehicles and advanced driver-assistance systems (ADAS). By democratizing access to high-precision interfaces, MSC-CTRL could accelerate the shift toward more integrated, efficient, and safe automotive electronics.

Also Read:

CAST Simplifies RISC-V Embedded Processor IP Adoption with New Catalyst Program

RANiX Employs CAST’s TSN IP Core in Revolutionary Automotive Antenna System

CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs


Radio Frequency Integrated Circuits (RFICs) Generated by AI Based Design Automation
by Admin on 12-10-2025 at 10:00 am

Fig. 1: Conventional (top) versus AI-enabled automated (bottom) RFIC design flows.

By Jason Liu, RFIC-GPT Inc.

Radio frequency integrated circuits (RFICs) have become increasingly critical in modern electronic systems, driven by the rapid growth of wireless communication technologies (5G/6G), the Internet of Things (IoT), and advanced radar systems. With the desire for lower power consumption, higher integration, and enhanced performance, the complexity of RFICs has escalated correspondingly.

The design of RFICs is considered one of the most challenging areas in IC design due to frequency-dependent parasitic effects and time-consuming simulations, particularly electromagnetic (EM) simulations. To date, RFIC design remains heavily reliant on the expertise and intuition of experienced designers, requiring numerous tuning iterations and manual optimizations due to the nonlinear interactions between active and passive circuits. Conventional design flows, illustrated in the top half of Fig. 1, tend to be time-consuming and inefficient. Therefore, exploring efficient and automated methodologies to streamline RFIC design while ensuring optimal performance has become a key focus of research and industry.

Here we introduce an AI-enabled, end-to-end RFIC synthesis framework that integrates multiple precise modeling and optimization algorithms. As shown in the bottom half of Fig. 1, this flow enables automated circuit synthesis, including placement and routing, of a DRC/LVS-clean layout. Compared to traditional manual design flows that require repeated iterations between circuit design, layout, and EM simulation, the proposed approach enables efficient exploration of the extensive design space, which is one of the most significant challenges in design automation.

The overall framework of the proposed automated RFIC design flow is depicted in Fig. 2. The methodology is organized into three tightly integrated stages: circuit topology selection and specification definition, parameter optimization, and layout synthesis. The flow begins with the selection of an appropriate circuit topology and the definition of key performance specifications that meet the functional requirements. For automation, the specifications are formalized into quantifiable targets and boundaries, which systematically guide the parameter optimization process and enable thorough exploration of the solution space, ensuring that the circuit satisfies all required standards.

Fig.2 Automated Design Flow of RFICs.
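As a hedged illustration of this first stage, formalizing specifications into quantifiable targets and boundaries might look like the Python sketch below; the metrics, targets, and normalization are invented for illustration, not taken from the flow itself.

    # Specs as (target, sense) pairs: sense=+1 means "at least", -1 "at most".
    SPECS = {
        "gain_db":  (15.0, +1),
        "nf_db":    (2.0,  -1),
        "s11_db":   (-10.0, -1),
        "power_mw": (10.0, -1),
    }

    def spec_penalty(perf: dict) -> float:
        """Sum of normalized spec violations; 0.0 means every spec is met."""
        total = 0.0
        for metric, (target, sense) in SPECS.items():
            shortfall = sense * (target - perf[metric])
            total += max(0.0, shortfall) / abs(target)
        return total

    print(spec_penalty({"gain_db": 14.0, "nf_db": 1.8,
                        "s11_db": -12.0, "power_mw": 9.0}))  # small violation

A scalar penalty like this is what lets the black-box optimizers of the next stage compare candidate designs automatically.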

The second stage of the automated flow is circuit parameter optimization based on the collaboration of multiple optimization algorithms, including various black-box optimization approaches. A black-box problem refers to an optimization scenario where the internal structure of the objective function is unknown and only its output for given inputs can be observed. Black-box optimization algorithms are designed to efficiently optimize such functions, especially when evaluations are costly, by adaptively selecting evaluation points. RFIC design inherently involves strongly coupled, nonlinear, multi-objective trade-offs (e.g., NF, gain, matching, linearity, power, and area) over a high-dimensional design space. These characteristics make RFIC design a typical black-box optimization problem, well suited to advanced algorithms such as Bayesian optimization (BO), genetic algorithms (GA), and particle swarm optimization (PSO).

The final stage of the automated design flow focuses on layout synthesis. Once the circuit parameters are optimized, the corresponding schematic is automatically translated into a physical layout using parameterized cells in conjunction with the optimization results. Placement and routing are then performed within an RL-based Actor-Critic proximal policy optimization (PPO) framework, where the state is defined by the position and orientation of each device, the action corresponds to the movement direction and distance for the next placement step, and the reward function is designed to optimize key layout metrics such as area utilization and density. Once placement is finished, routing is performed by an algorithm that efficiently determines the shortest paths for signal wires while avoiding layout-rule violations. The detailed placement and routing algorithms will be presented in future work.
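The black-box optimization stage described above can be illustrated with any derivative-free optimizer. The runnable sketch below uses SciPy’s differential evolution (a GA-family method) as a stand-in; in the real flow, circuit and EM simulators would replace the toy fake_simulate function, and the bounds and performance trends here are made up.

    import numpy as np
    from scipy.optimize import differential_evolution

    BOUNDS = [
        (0.5, 5.0),    # transistor width scale (illustrative)
        (0.1, 2.0),    # matching inductance, nH (illustrative)
        (50.0, 500.0), # bias current, uA (illustrative)
    ]

    def fake_simulate(x: np.ndarray) -> dict:
        w, l_nh, i_ua = x
        return {  # toy trends only, not real device physics
            "gain_db": 10 * np.log10(1 + w * i_ua / 50),
            "nf_db":   1.0 + 2.0 / w + 0.2 * l_nh,
        }

    def cost(x: np.ndarray) -> float:
        perf = fake_simulate(x)
        # Heavily penalize spec violations, then minimize noise figure.
        violation = (max(0.0, 15.0 - perf["gain_db"])
                     + max(0.0, perf["nf_db"] - 2.0))
        return 10.0 * violation + perf["nf_db"]

    result = differential_evolution(cost, BOUNDS, maxiter=50, seed=1)
    print("best parameters:", result.x, "cost:", result.fun)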

To demonstrate the viability and effectiveness of the proposed automated design flow, it is applied to two different LNAs in 40-nm CMOS technology: a 2.4 GHz differential Cascode LNA and a 5.5 GHz two-stage differential CS LNA. For the first case, the automatically generated schematic and layout are presented in Fig. 3, where two transformers are used for input and output matching networks.

A cross-coupled capacitor structure is introduced to neutralize Cgd, enhance gate-drain isolation, and reduce nonlinear distortion. This example features a design space of 18 design variables and 7 optimization objectives. By applying the proposed automated design flow, circuit and layout (DRC/LVS-clean) synthesis is accomplished within minutes. Fig. 3 shows the synthesized layout of the proposed differential cascode LNA, which occupies a die area of 0.38 × 0.94 mm². Fig. 4 illustrates the post-layout simulated NF and S-parameters alongside the pre-layout results; all specifications are satisfied. The post-layout simulated S21 of the proposed LNA shows a 3-dB bandwidth of 2 GHz to 2.7 GHz, and the detailed comparison between pre-layout and post-layout simulations reveals only slight differences.

A 5.5 GHz two-stage differential CS LNA is also generatively designed within a couple of minutes; the generated schematic and layout are shown in Fig. 5, in which three transformers implement the input, interstage, and output matching networks while the cross-coupled capacitor structure is applied to each CS stage. This architecture introduces approximately ten additional design variables (26 in total), substantially expanding the design space and increasing optimization complexity. As shown in Fig. 6, both pre- and post-layout simulated NF and S-parameters meet the targets, and the table in Fig. 6 shows close agreement between the pre- and post-layout simulations.

Finally, an automated design flow for RFICs based on various AI models and algorithms has been presented. This design flow has been implemented in RFIC-GPT, a tool that is ready to be tested online: https://rfic-gpt.com/

Also Read:

Propelling DFT to New Levels of Coverage

AI-Driven DRC Productivity Optimization: Insights from Siemens EDA’s 2025 TSMC OIP Presentation

How PCIe Multistream Architecture Enables AI Connectivity at 64 GT/s and 128 GT/s


Ceva-XC21 Crowned “Best IP/Processor of the Year”
by Daniel Nenni on 12-10-2025 at 8:00 am

In a resounding affirmation of innovation in semiconductor intellectual property (IP), Ceva, Inc. (NASDAQ: CEVA) has been honored with the prestigious “Best IP/Processor of the Year” award at the 2025 EE Awards Asia, held in Taipei on December 4. The accolade went to the Ceva-XC21, a groundbreaking vector digital signal processor (DSP) core that redefines efficiency in 5G and 5G-Advanced communications. This victory underscores Ceva’s unwavering commitment to delivering high-performance, low-power solutions that propel the next era of connected devices, from cellular IoT modems to non-terrestrial network VSAT terminals.

The EE Awards Asia, organized by EE Times Asia and now in its 12th year, stands as Asia’s premier recognition for excellence in electronics engineering. Attracting nominations from global industry leaders, the awards celebrate breakthroughs in categories spanning IP cores, power management, AI accelerators, and more. This year’s event, coinciding with the EE Tech Summit, drew over 500 engineers, executives, and innovators, highlighting Asia’s pivotal role in shaping global semiconductor trends. Ceva’s win in the IP/Processor category, amid stiff competition from giants like Arm and Synopsys, signals the XC21’s transformative potential in an era where power efficiency and scalability are non-negotiable for 5G deployment.

At the heart of the Ceva-XC21 is its advanced architecture, which builds on the proven Ceva-XC20 foundation while introducing true dual-threaded hardware for contention-free multithreading. The design features dual processing elements and dual instruction and data memory subsystems, enabling seamless parallel execution of complex workloads. The processor supports a versatile 9-issue Very Long Instruction Word (VLIW) set, accommodating integer formats (INT8/16/32) alongside half-precision, single-precision, and double-precision floating-point operations. A dedicated instruction set architecture (ISA) accelerates 5G New Radio (NR) functions, making it ideal for enhanced Mobile Broadband, ultra-Reliable Low-Latency Communications, and massive Machine-Type Communications.

What sets the XC21 apart is its scalability and efficiency. It is available in three variants (Ceva-XC210, XC211, and XC212), each offering configurable single- or dual-thread options with 32 or 64 16-bit × 16-bit multiply-accumulate (MAC) units. This modularity allows designers to tailor the core to specific needs, from compact RedCap devices to high-throughput industrial terminals. Compared to its predecessor, the widely adopted Ceva-XC4500, the XC21 delivers up to a 1.8x performance uplift in the XC212 variant while slashing core area by 48% in the XC210 model. The XC211 maintains equivalent performance at just 63% of the previous die size, achieving a CoreMark/MHz score of 5.14 for superior control code execution.

These metrics translate to tangible benefits: unprecedented power savings for battery-constrained IoT endpoints, reduced bill-of-materials costs for consumer gadgets, and enhanced AI/ML integration for smarter edge processing. Interconnectivity is equally robust, with up to six AXI4 bus interfaces via an AMBA matrix for high-bandwidth data flows, ensuring effortless SoC integration. Software support further eases adoption, including a unified programming model compatible with the Ceva-XC4500 ecosystem, an optimizing LLVM C compiler, and comprehensive debug tools like JTAG and real-time trace.

“We are thrilled and deeply honored by this recognition from the EE Awards Asia jury,” said Amir Panush, CEO of Ceva. “The Ceva-XC21 embodies our vision of democratizing 5G-Advanced connectivity, making it accessible, efficient, and future-proof. In a market projected to see 5G connections surpass 2 billion by 2026, this DSP empowers our licensees to innovate without compromise, from smart wearables to satellite backhaul systems.”

This isn’t Ceva’s first triumph at EE Awards Asia; the company previously clinched the same category in 2023 for the Ceva-XC22 and in 2024 for the NeuPro-Nano NPU, cementing its legacy as a trailblazer in edge IP. The XC21’s success reflects broader industry shifts: as 6G horizons emerge, demand for versatile, energy-efficient processors intensifies. Analysts at Gartner forecast that by 2028, 75% of enterprise-generated data will be processed at the edge, necessitating IP like the XC21 to handle multi-protocol stacks (LTE/5G/NTN) with minimal overhead.

Looking ahead, Ceva’s roadmap hints at even bolder integrations, blending XC21’s vector prowess with AI accelerators for hybrid edge-cloud paradigms. For developers, the implications are profound: shorter time-to-market, lower power envelopes, and scalable designs that future-proof against evolving standards. As Gideon Wertheizer, Executive VP of Research and Development, noted, “Winning ‘Best IP/Processor’ validates our relentless focus on architecting for tomorrow’s challenges today.”

In an industry often dominated by raw compute power, the Ceva-XC21 reminds us that true excellence lies in balance: performance without excess, innovation without waste. This award not only elevates Ceva’s profile but also accelerates the proliferation of intelligent, connected ecosystems across Asia and beyond. As 5G matures into a ubiquitous fabric, the XC21 stands as a cornerstone, weaving efficiency into the wireless future.

Contact Ceva Here.

Also Read:

United Micro Technology and Ceva Collaborate for 5G RedCap SoC and Why it Matters

Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots

A Remote Touchscreen-like Control Experience for TVs and More