
CEO Interview: Oshri Cohen of Cybord

by Daniel Nenni on 01-03-2024 at 10:00 am

Oshri Cohen CEO of Cybord

Oshri Cohen is an executive with 20+ years of management experience in operations, engineering, and supply chain at multinational high-tech companies. Before joining Cybord, Oshri headed supply chain at Nvidia Networks. Prior to that, Mr. Cohen led global procurement and logistics at Mellanox (acquired by Nvidia). Oshri holds a BSc in Industrial Engineering from the Ruppin Academic Center and an MSc in Systems Engineering from the Technion.

Tell us about your company.

Cybord is pioneering the use of visual AI to ensure electronic component quality, authenticity, and traceability, and is leading the AI revolution in the electronics manufacturing industry. Cybord’s technology analyzes 100% of the components directly on the assembly line, helping OEMs and EMSs ensure that only reliable, defect-free components are integrated into products, with unmatched 99.9% accuracy. By providing manufacturers with complete transparency, Cybord helps identify defects, counterfeits, and irregularities that could otherwise go unnoticed.

Our solution empowers OEMs and EMSs to minimize costly scrapping, rework, and recalls, while ensuring only reliable, defect-free components are used in production. Designed to be both cost-effective and highly efficient, Cybord is transforming the electronics manufacturing industry by uncovering the “unknown unknowns” and driving unparalleled reliability across the supply chain.

What role does AI have in your product?

AI is at the heart of Cybord’s solution, transforming the way manufacturers ensure the quality and reliability of electronic components. By applying big data and AI across the assembly line, Cybord gives manufacturers unparalleled visibility into their production processes. Leveraging a vast and growing database of billions of unique electronic components, our machine learning model constantly evolves, learning from a diverse data set, such as multi-vendor environments and high-volume production lines across a variety of industries, to provide accurate analysis and detect quality, authenticity, and traceability issues within milliseconds. The visual AI engine analyzes data from 100% of the electronic components on the assembly line, inspecting images of both the top and bottom of each component to detect and alert on even the most subtle abnormalities, including defects, damage, corrosion, and structural irregularities, verify component origins, and prevent defective parts from advancing down the manufacturing line, all in real time.

What problems are you solving?

Electronic components are the lifeblood powering everything from telecoms to automobiles to data centers. By addressing the risk of faulty, damaged, and low-quality electronic components entering production lines, Cybord is the first line of defense for OEMs and EMSs to safeguard product quality and ensure brand reputation. Additionally, geopolitical factors around the world have led to “point of origin” restrictions being levied on components. For instance, Country A may impose restrictions on automobiles that prohibit the use of any components sourced from Country B. Cybord can identify and corroborate a component’s country of origin and original manufacturer, providing OEMs and EMSs with the ability to verify authenticity and standard compliance to align with any governmental regulations and industry benchmarks.

What application areas are your strongest?

Cybord’s solutions serve all industries where electronic component reliability is key (i.e., all industries that rely on electronic circuit boards), including data centers and servers, telecoms, medical devices, automotive, aerospace, and more. One of Cybord’s standout features is its ability to provide “MRI-style” detection and inspection, allowing us to detect even the most subtle abnormalities in electronic components. For example, in automotive applications, we ensure that smart vehicles perform safely and reliably, while in telecommunications, we guarantee continuous network performance. In medical devices, our solutions help maintain the highest standards of component integrity for life-critical applications. These are just a few of many examples of how Cybord is verifying the quality and traceability of the electronics that are so essential to the fabric of modern life.

What keeps your customers up at night?

Recalls—plain and simple. For an OEM, a recall is the worst-case scenario. Not only do they involve a hefty financial cost to address, but they also pose significant reputational damage that can take years to recover from. The disruption to operations, customer trust, and the long-term impact on brand value are substantial.

Cybord helps mitigate these risks by preventing recalls and minimizing their reach when they do happen. Our AI-driven solution ensures that every component is inspected for defects, authenticity, and traceability before it is placed on a board. By providing real-time insights and accurate analysis, Cybord gives OEMs the peace of mind that their products are safe, secure, and of the highest quality. This leads to fewer defects, reduced rework costs, and fewer recalls—allowing companies to focus on innovation without the constant worry of quality control failures.



Alphawave Semiconductor Powering Progress

by Daniel Nenni on 01-03-2024 at 6:00 am

Alphawave Semiconductor Chiplets

Do you know who had another great year? Alphawave Semi did. Despite being relatively young in the industry (founded in 2017), the company has quickly gained recognition for its advancements in high-speed connectivity solutions.

They specialize in developing high-speed connectivity solutions for data centers, AI, 5G wireless infrastructure, data networking, autonomous vehicles, and solid-state storage. Their focus on advanced connectivity and signal processing technologies has positioned them as key players in the industry.

Alphawave Semi is all about pushing the boundaries of high-speed connectivity. They design and develop semiconductor IP (Intellectual Property) solutions that enable fast and efficient data transfer within electronic devices. Their expertise lies in working closely with customers and partners developing advanced connectivity and signal processing technologies.

It has been an honor to work with Alphawave Semi CEO Tony Pialis and his growing team of top-notch professionals. As far as semiconductor ecosystem CEOs go, I would put Tony in my top 10, absolutely.

In my opinion chiplets will be one of the most disruptive semiconductor technologies and Alphawave Semi will be one of the companies leading the way starting with:

IO Chiplets

Reconfigurable ZeusCORE100 SerDes IO with integrated protocol controllers, security IP, and AresCORE (D2D-UCIe) IP that enables up to 1.6T of throughput at MR, XLR, and PCIe/CXL reaches.

  • Medium Reach Optical Driver Chiplet
  • Extra Long Reach Ethernet Chiplet
  • Combo PCIe/CXL/Ethernet Chiplet
  • 1.6T high-speed IO Chiplet

Accelerator Chiplets

High-performance Arm® or RISC-V-based accelerator chiplets, enabling data acceleration through Arm or RISC-V multi-core accelerator SoCs.

Memory Chiplets

Low-latency, high-speed DDR5 and memory controller; includes a multi-core CPU with L1 and L2 caches.

Chiplets can deliver significant cost reductions by shortening chip design time and using wafers more efficiently, while improving flexibility and scalability in both design and manufacturing.

While the adoption of chiplets may involve initial challenges in terms of design and integration, the long-term benefits, particularly the ability to keep Moore’s Law moving along, more than justify the return on investment.

Bottom line: To further reduce the overall risk of implementing a chiplet-based design strategy, working with ecosystem experts like Alphawave Semi is critical, absolutely.

Alphawave Semiconductor in 2023:

Alphawave Semi Partners with Keysight to Deliver Industry Leading Expertise and Interoperability for a Complete PCIe 6.0 Subsystem Solution

Alphawave Semi Elevates Chiplet-Powered Silicon Platforms for AI Compute through Arm Total Design

Alphawave Semi Spearheads Chiplet-Based Custom Silicon for Generative AI and Data Center Workloads with Successful 3nm Tapeouts of HBM3 and UCIe IP

Alphawave Semi Expands Collaboration with Samsung, Adds 3nm Connectivity IP to Meet Accelerated AI and Data Center Demand

Alphawave Semi Showcases 3nm Connectivity Solutions and Chiplet-Enabled Platforms for High Performance Data Center Applications

Alphawave IP Achieves Its First Testchip Tapeout for TSMC N3E Process

About Alphawave Semi

Alphawave Semi is a global leader in high-speed connectivity for the world’s technology infrastructure. Faced with the exponential growth of data, Alphawave Semi’s technology services a critical need: enabling data to travel faster, more reliably and with higher performance at lower power. We are a vertically integrated semiconductor company, and our IP, custom silicon, and connectivity products are deployed by global tier-one customers in data centers, compute, networking, AI, 5G, autonomous vehicles, and storage. Founded in 2017 by an expert technical team with a proven track record in licensing semiconductor IP, our mission is to accelerate the critical data infrastructure at the heart of our digital world. To find out more about Alphawave Semi, visit: awavesemi.com.

Also Read:

Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Interface IP in 2022: 22% YoY growth still data-centric driven


Will the Package Kill my High-Frequency Chip Design?

by Bryan Preble on 01-02-2024 at 6:00 am


Understanding the electromagnetic (EM) coupling between the various elements of a high-frequency semiconductor device is vital for meeting design specifications and ensuring reliable operation in the field. These EM interactions include not only the silicon chip but also extend to the package that encloses it. However, it may be only towards the end of a project that the IC or systems designer gets around to creating and simulating EM models that include both on-die metals and the package layers. It is not uncommon to find that including the package layers with the on-die metals model causes a degradation in performance that may cause specifications to be violated. To avoid this, Ansys provides a solution that can easily add package layers to a silicon technology’s metal stack-up in order to extract complete models, with both on-silicon and package layers, early in the design process.

Ansys’ suite of on-chip electromagnetic analysis tools operates on IC layouts at the pre-LVS design stage (Ansys RaptorX™) and the post-LVS signoff stage (Ansys Exalto™). The chip analysis can include portions of the package layout and/or package layers to extract a complete EM model that can be simulated with a SPICE circuit simulator. The Ansys tools rely on precise information about the interconnect process technology used in the manufacture of each layer. Process information is provided by silicon foundries in various formats, including Design Rule Manuals (DRMs) and technology files – such as iRCX, ITF, and ICT files – that may be unencrypted or encrypted. The process for capturing the technology stack-up compiles a collection of Ansys-format technology files by mapping foundry-provided process technology information onto physical layout information in OpenAccess or GDSII stream format (see Figure 1). These compiled technology files also support other Ansys on-chip EM tools including Ansys VeloceRF™ (inductive device layout synthesis) and Ansys RaptorQu™ (for superconducting quantum design).

RaptorX is a silicon-optimized electromagnetic solver, and it comes with a very useful wizard called Process Configurator that makes it easy to create and modify Ansys technology files, even for complex chip-package configurations. As shown in Figure 1, Process Configurator creates Ansys technology files that can contain just the foundry metal stack-up or can contain the foundry metal stack-up plus selected additional package layers. The input to the Process Configurator wizard for the foundry metal stack-up is the process information provided by the foundry. If die and package layers need to be co-extracted, then the package layer information for the layers of interest also needs to be included.

Figure 1: The Ansys Process Configurator wizard gives designers easy control of the chip-package configuration and enables what-if analyses

If the foundry technology file is unencrypted, or the package layer information is unencrypted, the Process Configurator wizard will let you explore various process-related “what-if” scenarios by editing the properties of the die and/or package layers and compiling different versions of the Ansys technology files. The Process Configurator allows designers to add or subtract substrates, backplanes, conductors, dielectrics, and vias including Through-Silicon Vias (TSV). The technology properties that can be edited with Process Configurator are metal thickness, metal conductivity, dielectric thickness, and dielectric constant. In order to complete the Ansys technology files the compiler also requires the GDS stream layer map file and the layer mapping information.
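To make the idea of a what-if edit concrete, here is a minimal sketch in Python of editing one layer of a toy stack-up. The `Layer` fields, layer names, and values are hypothetical illustrations for this article, not the Ansys technology-file format.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Layer:
    """One layer in a toy stack-up description (not the Ansys file format)."""
    name: str
    kind: str                   # "conductor" or "dielectric"
    thickness_um: float
    resistivity: float = 0.0    # conductors: ohm*m
    er: float = 1.0             # dielectrics: relative permittivity

# Hypothetical stack-up: substrate, one top metal, plus two added package layers
stackup = [
    Layer("substrate", "dielectric", 250.0, er=11.9),
    Layer("M_top", "conductor", 3.0, resistivity=1.7e-8),
    Layer("pkg_diel", "dielectric", 20.0, er=3.5),
    Layer("pkg_metal", "conductor", 15.0, resistivity=1.7e-8),
]

def what_if(stack, layer_name, **changes):
    """Return a new stack-up with one layer's properties edited."""
    return [replace(layer, **changes) if layer.name == layer_name else layer
            for layer in stack]

# Explore coupling through a thinned substrate without touching the original
thinned = what_if(stackup, "substrate", thickness_um=100.0)
```

The point of the sketch is the workflow, not the data: each what-if produces a new stack-up variant while the baseline stays intact, so several scenarios can be compiled and compared side by side.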

Some examples of modifying an unencrypted technology for “what-if” experiments include:

  • modifying the substrate thickness and properties to explore effects of coupling through the substrate
  • adding TSVs in an exploratory 3DIC stack-up
  • setting up a technology file for Wafer-on-Wafer (WoW) technology
  • adding package layers to see their effect on the EM device – as will be shown in the following example

The input files and information for Process Configurator can be processed using both a UI and a batch-mode command script. The outputs of Process Configurator are the compiled Ansys process technology files used by the Ansys EM tool suite. The Process Configurator has the very useful capability to visualize a technology cross-section, which makes it easy to verify the correct sequence and connectivity of the technology layers. Unencrypted technology layer properties like thickness, resistivity, and dielectric constant are also displayed in the cross-section viewer. If the technology is encrypted, then the cross-section viewer shows the layer sequence and connectivity, but the layer thicknesses are not to scale, and material properties are not reported.

Figure 2 below shows a stack-up of a fictional example technology file. The left panel displays the substrate characteristics on the bottom layer, the cumulative layer height starting from the substrate, the layer and via names on the left, and the dielectric thickness and dielectric constant (er) on the right. The Conductor section in the right panel lists the conductors with their thickness and resistivity (r), and the Vias section shows the via resistance and area.

Figure 2: Example of Process Configurator display of an unencrypted silicon stack-up with all parameters reported and conductor thicknesses shown to scale

The red box in Figure 3 below highlights a via and package layer that have been added to the stack-up. This stack-up, with the package layer and via included, was used for the simulation results described in the following paragraphs that show how the package layer can affect the performance of an EM device.

Figure 3: Example of an unencrypted silicon stack-up with added package layers highlighted in the red box

To illustrate how Process Configurator can be used to explore the effect of a package on a chip we created a simple layout example: It consists of an EM device – a single-ended octagonal spiral inductor – that was extracted using RaptorX. The resulting electrical model was then simulated in a SPICE-level circuit simulator to analyze the performance first with, and then again without, a package layer placed above it. Figure 4 below shows RaptorX’s physical mesh for the inductor without the package layer.

Figure 4: Ansys RaptorX’s physical mesh for the inductor without a package layer

Next, the same inductor was used, but a rectangle of the package layer was placed above it. Figure 5 below shows the RaptorX mesh of the inductor with the package layer included.

Figure 5: Ansys RaptorX’s physical mesh for the inductor including a covering package layer

RaptorX generated an S-parameter model for each inductor; these were then simulated for inductance and quality factor across a frequency range. Figure 6 shows the inductance of the two inductors plotted across frequency. The model with the package layer included (green) shows a 28% decrease in inductance at 3 GHz and a 33% decrease in resonance frequency versus the model without the package layer (red).

Figure 6: Inductance over frequency plot showing the significant impact of adding package layers to the simulation

In Figure 7 below, the quality factor (Q) of the two inductors is plotted across frequency. The model with the package layer included (green) shows a 38% decrease in the maximum Q value and a 21% decrease in the frequency of the Q peak versus the model without the package layer (red).

Figure 7: Quality factor over frequency plot showing the significant impact of adding package layers to the simulation
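For readers curious where curves like these come from: for a one-port device, effective inductance and Q follow from the standard conversion of the reflection coefficient S11 to input impedance, L = Im(Z11)/ω and Q = Im(Z11)/Re(Z11). The sketch below (plain Python, not an Ansys tool) shows that arithmetic, checked against a synthetic ideal 1 nH inductor with 1 Ω series resistance.

```python
import math

Z0 = 50.0  # reference impedance (ohms)

def s11_to_z11(s11, z0=Z0):
    """Convert a one-port reflection coefficient to input impedance."""
    return z0 * (1 + s11) / (1 - s11)

def inductance_and_q(s11, freq_hz, z0=Z0):
    """Effective series inductance (H) and quality factor at one frequency."""
    z = s11_to_z11(s11, z0)
    omega = 2 * math.pi * freq_hz
    return z.imag / omega, z.imag / z.real

# Synthetic check: 1 ohm series resistance + 1 nH inductor at 1 GHz
f = 1e9
z_ind = 1.0 + 1j * 2 * math.pi * f * 1e-9
s11 = (z_ind - Z0) / (z_ind + Z0)
L, Q = inductance_and_q(s11, f)   # recovers L of 1 nH and Q of about 6.28
```

Sweeping this over a frequency grid for two S-parameter models (with and without the package layer) reproduces the kind of comparison plotted in Figures 6 and 7.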

In summary, these simulation results illustrate the stark changes in device behavior that appear when package layers are included in a simulation. Modeling package layers together with on-die metals can reveal degradation in performance that may violate a specification or cause the device to fail. Ansys has developed the Process Configurator to make it easy for IC and system designers to capture even the most complex multi-layer packaging configurations and to facilitate quick experimentation. It encourages a shift-left approach, with early what-if exploration helping designers find the best possible solution for their final product and avoid late-stage surprises.

Also Read:

Keynote Speakers Announced for IDEAS 2023 Digital Forum

Ansys Revving up for Automotive and 3D-IC Multiphysics Signoff at DAC 2023

Keynote Sneak Peek: Ansys CEO Ajei Gopal at Samsung SAFE Forum 2023


2024 Big Race is TSMC N2 and Intel 18A

by Daniel Nenni on 01-01-2024 at 6:00 am

Intel PowerVia backside power delivery

There is a lot being said about Intel getting the lead back from TSMC with their 18A process. Like anything else in the semiconductor industry there is much more here than meets the eye, absolutely.

On the surface, TSMC has a massive ecosystem and is in the lead in process technologies and foundry design starts, but Intel is not to be ignored. Remember, Intel first brought us High-k Metal Gate, FinFETs, and many more innovative semiconductor technologies. One of those is backside power delivery (BPD). BPD can certainly bring Intel back to the forefront of semiconductor manufacturing, but we really need to take it in proper context.

Backside power delivery refers to a design approach where power is delivered to the back side of the chip rather than the front side. This approach can have advantages in terms of thermal management and overall performance. It allows for more efficient heat dissipation and can contribute to better power delivery to the chip components. It’s all about optimizing the layout and design for improved functionality and heat distribution.

Backside power delivery has been talked about in conferences but Intel will be the first company to bring it to life. Hats off to Intel for yet another incredible step in keeping Gordon Moore’s vision alive.

SemiWiki blogger Scotten Jones talks about it in more detail in his article: VLSI Symposium – Intel PowerVia Technology. You can see other new Intel technology revelations here on SemiWiki: https://semiwiki.com/category/semiconductor-manufacturers/intel/.

TSMC and Samsung of course will follow Intel into backside power delivery a year or two behind. The one benefit that TSMC has is the sheer force of customers that intimately collaborate with TSMC ensuring their success, not unlike TSMC’s packaging success.

Today any comparison between Intel and TSMC is like comparing an apple to a pineapple; they are two completely different things.

Right now Intel makes CPU chiplets internally and outsources supporting chiplets and GPUs to TSMC at N5-N3. I have not heard about an Intel TSMC N2 contract as of yet. Hopefully Intel can make all of their chiplets internally at 18A and below.

Unfortunately, Intel does not have a whale of a customer for the Intel foundry group as of yet. Making chiplets internally does not compare to TSMC manufacturing complex SoCs for whales like Apple and Qualcomm. If you want to break up the BPD competition into two parts, internal chiplets and complex SoCs, that is fine. But to say Intel is a process ahead of anybody while only doing chiplets is disingenuous, in my opinion.

Now, if you want to do a chiplet comparison let’s take a close look at Intel versus AMD or Nvidia as they are doing chiplets on TSMC N3 and N2. Intel might actually win this one, we shall see. But to me if you want the foundry process lead you need to be able to make customer chips in high volume.

Next you have to consider what the process lead means if you don’t have customer support. It will be one of those ribbons on the wall, one of those notes on Wikipedia, or a press release like IBM does. It will not be the billions of dollars of HVM revenue that everybody looks for. Intel needs to land some fabless semiconductor whales to stand next to TSMC; otherwise they will stand next to Samsung or IBM.

Personally I think Intel has a real shot at this one. If their version of BPD can be done by customers in a reasonable amount of time it could be the start of a new foundry revenue stream versus the NOT TSMC business I have mentioned before. We will know in a year or two but for me this is the exciting foundry competition we have all been waiting for so thank you Intel and welcome back!

There is an interesting discussion in the SemiWiki forum on TSMC versus Intel in regards to risk taking. I hope to see you there:

Intel vs TSMC in Risk Taking

Also Read:

IEDM Buzz – Intel Previews New Vertical Transistor Scaling Innovation

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

How Intel, Samsung and TSMC are Changing the World

Intel Enables the Multi-Die Revolution with Packaging Innovation


Podcast EP200: Dan and Mike’s Top Ten List For the Semiconductor Industry

by Daniel Nenni on 12-29-2023 at 10:00 am

Dan is joined by podcast producer and collaborator Mike Gianfagna for Semiconductor Insiders episode 200. Dan and Mike look over the past two years (and 200 podcasts) to develop a top ten list of changes and innovation in the semiconductor industry. There is a lot of back-story detail on each topic in this far-reaching discussion.

The topics discussed are:

Dan: Innovation and advances at TSMC, Intel, and SMIC, changes in the automotive industry, and the RISC-V movement.

Mike: Semiconductors becoming a household word, the explosion of AI, the impact of AI on chip design, and the CEO change at Synopsys.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Transformation Model for IP-Centric Design

by Kalar Rajendiran on 12-29-2023 at 6:00 am

HelixIPLM Accelerate Semiconductor Development with IP Centric Design

Semiconductor designs have been progressing over time to address wider product varieties and designs with increasing complexity. Organizations have been addressing intense time-to-market pressures by leveraging globally dispersed team resources. The project-centric design methodology, which once worked well with smaller projects and longer timelines, is struggling to meet the demands of the modern semiconductor landscape. It creates isolated silos, discourages reuse, and fuels redundancies. Traceability becomes very difficult at best, and global collaboration strains under the weight of manual coordination and disparate data management systems. Mistakes and design gaps have become prohibitively expensive, and the global distribution of design teams presents new complexities, especially with dynamic geopolitical realities.

More efficient design practices are needed to achieve ambitious goals while controlling costs and meeting time-to-market demands. One promising solution is the IP-Centric Design approach.

IP-Centric Design: A Paradigm Shift

An IP-Centric Design approach reframes the entire design process, placing reusable intellectual property (IP) blocks at the heart of the development cycle. These pre-verified, optimized modules become the building blocks of new chips, fostering a host of benefits. Re-purposing proven IP allows design teams to leapfrog past repetitive tasks and costly respins, propelling them ahead of the competition. A centralized repository of IP fosters seamless collaboration and end-to-end project tracking, streamlining workflows and ensuring accountability. Pre-optimized IP guarantees reliability and performance, translating into robust, dependable products that consumers trust. IP-Centric Design is inherently scalable, effortlessly adapting to accommodate ever-growing design footprints and larger product portfolios.

A Roadmap for IP-Centric Design

The switch from a project-centric design methodology to an IP-Centric Design methodology will not happen overnight, and the transition is not without its challenges. It’s a mindset shift, not merely a methodology change. Legacy systems, ingrained habits, and cultural resistance may pose initial hurdles. Different teams may have varying levels of maturity in specific areas, and implementing the model requires careful planning and organizational buy-in.

Perforce has published a whitepaper presenting a Transformation Model that provides a blueprint for organizations to navigate this journey. The Model describes five key levels of transformation that organizations have to undergo in order to successfully achieve IP-Centric Design methodology.

Level One: Embracing IP-Centric Design

Design blocks are modeled as intellectual properties (IPs), each with a list of dependencies. This creates a versionable, hierarchical model of the design outside regular data management tools. The definition of IP broadens, encompassing not only pre-packaged design blocks from third-party providers but any block in the design, including those specific to a project, shared between projects, or delivered by central teams.
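As a concrete illustration of Level One, here is a minimal sketch in Python of design blocks modeled as versioned IPs with dependency lists, plus a traversal that resolves the full hierarchical bill-of-IP. All block names and versions are invented for illustration; this is not Perforce's data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IP:
    """A design block modeled as a versioned IP with explicit dependencies."""
    name: str
    version: str
    dependencies: tuple = ()   # other IP instances this block uses

def resolve(ip, seen=None):
    """Walk the hierarchy and collect every IP the top-level block depends on."""
    if seen is None:
        seen = {}
    for dep in ip.dependencies:
        if dep.name not in seen:
            seen[dep.name] = dep.version
            resolve(dep, seen)
    return seen

# Hypothetical hierarchy: a top-level SoC reusing a shared SerDes that wraps a PHY
phy    = IP("phy", "2.1")
serdes = IP("serdes", "1.4", (phy,))
soc    = IP("soc_top", "0.9", (serdes,))

bill_of_ip = resolve(soc)
```

The key property this captures is that the design hierarchy becomes a versionable model in its own right, outside any particular data management tool, so the same structure can describe third-party blocks, shared internal blocks, and project-specific blocks alike.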

Level Two: Discovery & Reuse

The modeled IPs, independent of underlying data management, evolve into a dynamic catalog. The catalog allows users to search, filter, and comprehend available IPs based on metadata, irrespective of their location and content. This leads to seamless IP discovery and standardizes project configuration, embedding traceability into the design process. Teams can now make informed build-vs-buy decisions, fostering a culture of reuse that streamlines design efforts.
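A catalog like the one described can be thought of as metadata keyed by IP name, searchable independently of where the design data actually lives. The hypothetical sketch below shows filtering on arbitrary metadata fields; the catalog entries and field names are invented for illustration.

```python
# Hypothetical catalog: IP name -> metadata, independent of data location
catalog = {
    "serdes_112g": {"process": "N5", "owner": "central", "status": "silicon-proven"},
    "ddr5_ctrl":   {"process": "N5", "owner": "team-b",  "status": "in-design"},
    "pcie6_phy":   {"process": "N3", "owner": "vendor",  "status": "silicon-proven"},
}

def find_ips(catalog, **criteria):
    """Return the names of IPs whose metadata matches every given criterion."""
    return sorted(name for name, meta in catalog.items()
                  if all(meta.get(k) == v for k, v in criteria.items()))

# A build-vs-buy query: proven blocks already available on the target process
reusable = find_ips(catalog, process="N5", status="silicon-proven")
```

Queries like this are what turn the modeled IPs into a reuse decision aid: a team can check whether a proven block already exists before committing to building or buying a new one.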

Level Three: Enterprise Development at Scale

Level Three addresses the challenge of scaling the system to meet the needs of a widespread community of users around the globe. The system must be architected for horizontal scaling, allowing teams to add hardware resources as the team and user numbers grow. Local or near-local response times are critical for effective IP discovery and reuse, especially as different teams collaborate across projects and design centers.

Level Four: Built-In Traceability

The central system becomes the single source of all data and metadata pertaining to designs. Users and workflows can find any design data and metadata they need, ensuring full traceability. This level is crucial for industries governed by standards, where compliance requires proof and provenance of designs. Effective integrations at this level enable organizations to confirm adherence to standards like ISO 26262, ITAR, or CFR21.

Level Five: Planning at Platform Scope

The highest level of IP-Centric Design involves modeling all components of a platform in a unified system. It goes beyond tracking work-in-progress to provide a platform for planning projects and anticipating upcoming needs. This level enables users across the enterprise to view existing IPs and plans for IPs in progress, fostering collaboration, influencing designs, and streamlining efforts across design teams.

Summary

The Transformation Model for IP-Centric Design emerges as a strategic blueprint for success to overcome the challenges of exponential complexity, global collaboration, and intense time-to-market pressures. The journey to IP-Centric Design may seem daunting, but the rewards are undeniable. By taking the first step and partnering with trusted solution providers, organizations can navigate this journey and unlock the full potential. Perforce offers semiconductor solutions that provide the foundation, structure, and scalability necessary for successful implementation. Visit the product page.

Organizations leveraging Perforce can expect improved collaboration, accelerated design cycles, informed build-vs-buy decisions, and streamlined efforts across design teams. Click here to request information.

Also Read:

Chiplets and IP and the Trust Problem

Insights into DevOps Trends in Hardware Design

IP Lifecycle Management for Chiplet-Based SoCs


Achieving a Unified Electrical/Mechanical PCB Design Flow – The Siemens Digital Industries Software View

by Mike Gianfagna on 12-28-2023 at 10:00 am


Let’s face it, designs are getting harder, much harder. Gone are the days when the electrical and mechanical design of a system occurred separately. Maybe ten years ago this practice was acceptable. Once the electrical design was completed (either the chip or the board), the parameters associated with the design were handed to the package or PCB design team to implement the physical delivery of the design. The handoff was done once, and each team lived in its own world. Those days are gone. With current design complexity, the electrical design impacts the mechanical design in subtle ways. Similarly, the mechanical design of the system, including things like the choice of materials, has a profound impact on what is possible electrically. One must break down the walls and collaborate continuously, or accept the likelihood of project overruns and failure. A comprehensive and informative white paper was recently published on this topic for PCB design. Read on to understand achieving a unified electrical/mechanical PCB design flow – the Siemens Digital Industries Software view.

Why Now?

Entitled Unifying ECAD-MCAD PCB design through collaborative best practices, the Siemens white paper begins by setting the stage for why a unified PCB design flow is so important now. Most SemiWiki readers will be familiar with this trend. The overall demands for PCB design have also been discussed in detail in this SemiWiki post.  The new Siemens white paper cites four trends in electronic design that are making a unified flow so urgent:

Compute power: Since the advent of the microprocessor, there’s been an astronomical increase in the compute power that chips can deliver – a trillion-fold over six decades. Given the slowing of Moore’s Law, future performance gains in semiconductors will be driven by, among other factors, advanced packaging flows.

Engineering discipline convergence: The “smaller, denser, faster” mantra associated with today’s products is magnifying the importance of ensuring that electromechanical compatibility is addressed prior to the first fabrication – waiting until manufacture to validate electronic and mechanical compatibility is clearly leaving things until too late.

Sustainability: The environmental impact of the manufacture of electronic devices is starting to get more scrutiny, as is the worldwide energy consumption of devices during their working life. This one is quite important to Siemens.

AI in electronics design: The fourth trend is the rise of AI in electronics design. AI might be considered a product of electronics, yet AI can also help with electronics design.

The white paper goes into a lot more detail on these topics. Links are coming so you can see the whole picture, as well as learn more about the Siemens approach.

What’s Needed?

The white paper covers a lot of ground. Here are some of the topics that are examined:

The importance of ECAD-MCAD collaboration: An integrated ECAD/MCAD collaboration environment enables electrical and mechanical design teams to work together throughout the entire design process in real time. And this can spell the margin of victory for a complex design project. The specific benefits of a well-integrated approach are discussed.

Ways ECAD-MCAD collaboration can be improved: A lot of engineering development teams still struggle to break free of legacy practices, which were perfectly good in their day but fall short in the present day. Specific approaches to improving the process are discussed.

A multi-discipline, multi-domain workflow supporting real-time collaboration

Keys to successful ECAD-MCAD collaboration: Efficient collaboration between ECAD and MCAD domains enables both to optimize an electronics design within tight form-factor constraints while still meeting quality, reliability, and performance requirements. In this section, specific approaches to design methodology and data sharing are presented. The goal is a multi-discipline, multi-domain workflow that supports real-time collaboration, as illustrated in the figure.

A toolkit for collaborative engineering: Now that some of the reasons collaboration is so important and some of the obstacles to its adoption have been discussed, this section looks at the solutions available to support ECAD-MCAD collaboration.

Accelerating PCB design: The Siemens Xcelerator business platform ecosystem is presented, with details of scope, capabilities, and benefits for design teams worldwide.

To Learn More

If you’re involved in complex system design this white paper is a must read. You can access the full text here.  There is also a great podcast from the authors of the white paper available here.  You can now learn about achieving a unified electrical/mechanical PCB design flow – The Siemens Digital Industries Software view.


Will Chiplet Adoption Mimic IP Adoption?

by Eric Esteve on 12-28-2023 at 6:00 am

Adoption theory

If we look at the semiconductor industry’s expansion during the last 25 years, the adoption of design IP in every application appears to be one of the major factors of success, alongside the incredible development of silicon technology by a factor of x100, from 250nm in 1998 to 3nm (if not 2nm) in 2023. We foresee the move to chiplet-based architecture soon playing the same role that SoC chip-based architecture and the massive use of design IP played in the 2000s.

The question is how to precisely predict the chiplet adoption timeframe and what the key enablers of this revolution will be. We will see whether the diffusion of innovation theory can help fine-tune a prediction and determine which types of application will be the drivers. Chip-to-chip interconnect protocol standard specifications, allowing fast industry adoption and driving applications like AI or smartphone application processors, seem to be the top enabler, but EDA tool efficiency, new packaging technologies, and dedicated fab creation, among others, are certainly key.

Introduction: emergence of chiplet technology

During the 2010 decade, the benefits of Moore’s law began to fall apart. Moore’s law stated that as transistor density doubled every two years, the cost of compute would shrink by a corresponding 50%. The change in Moore’s law is due to increased design complexity and the evolution of transistor structure from planar devices to FinFETs, which require multiple-patterning lithography to achieve device dimensions at nodes below 20nm.

At the end of this decade, computing needs exploded, mostly due to the proliferation of datacenters and the amount of data being generated and processed. In fact, the adoption of Artificial Intelligence (AI) and techniques like Machine Learning (ML), now used to process ever-increasing data, has led servers to significantly increase their compute capacity. Servers have added many more CPU cores, have integrated larger GPUs used exclusively for ML and no longer for graphics, and have embedded custom ASIC AI accelerators or, complementarily, FPGA-based AI processing. Early AI chip designs were implemented as large monolithic SoCs, some of them reaching the size limit imposed by the reticle, about 700mm2.

At this point, disaggregation into a smaller SoC plus various compute and IO chiplets appears to be the right solution. Several chip makers, like Intel, AMD, and Xilinx, have selected this option for products going into production. The excellent white paper from The Linley Group, “Chiplets Gain Rapid Adoption: Why Big Chips Are Getting Small”, showed that this option leads to better costs compared to monolithic SoCs, due to the yield impact of larger dies. These chip makers have designed homogeneous chiplets, but the emergence and adoption of interconnect standards like Universal Chiplet Interconnect Express (UCIe) IP is easing the adoption of heterogeneous chiplets.
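The yield argument can be sketched with a simple Poisson yield model. This is my own illustration with made-up numbers (not The Linley Group’s figures), and it ignores packaging and assembly costs, which a real comparison would include:

```python
import math

def die_yield(area_mm2, defect_density_per_mm2):
    # Poisson yield model: Y = exp(-A * D0)
    return math.exp(-area_mm2 * defect_density_per_mm2)

def cost_per_good_die(area_mm2, d0, cost_per_mm2=1.0):
    # Silicon cost divided by yield gives cost per *good* die
    return (area_mm2 * cost_per_mm2) / die_yield(area_mm2, d0)

d0 = 0.002  # illustrative defect density, defects per mm^2

mono = cost_per_good_die(700, d0)          # one reticle-limit 700 mm^2 monolithic SoC
chiplets = 4 * cost_per_good_die(175, d0)  # four 175 mm^2 chiplets, same total silicon

print(mono, chiplets)  # the chiplet split costs much less per good system
```

With these assumed numbers the four small dies yield far better than the single large one, which is exactly the cost effect the white paper describes.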

The evolution of newer, faster protocol standards is picking up speed as the industry keeps asking for higher performance. Unfortunately, the various standards are not synchronized by a single organization: a new PCIe standard can come a year (or more) earlier or later than the new Ethernet protocol standard. Heterogeneous integration allows silicon providers to adapt to the fast-changing market by redesigning only the relevant chiplet. Considering that advanced SoC fabrication requires massive capital expenditures at the 5nm, 4nm, and 3nm process nodes, the impact of chiplet architectures on driving future innovation in the semiconductor space is tremendous.

Heterogeneous chiplet design allows us to target different applications or market segments by modifying or adding just the relevant chiplets while keeping the rest of the system unchanged. New developments can be launched faster and with significantly lower investment, as a redesign will only impact the package substrate used to house the chiplets. For example, the compute chiplet can be redesigned from TSMC 5nm to TSMC 3nm to integrate a larger L1 cache, higher-performing CPUs, or more CPU cores, while keeping the rest of the system unchanged. Chiplets integrating SerDes can be redesigned for faster rates on new process nodes, offering more IO bandwidth for better market positioning.

Using heterogeneous chiplets offers better Time-to-Market (TTM) when updating a system, reusing the unchanged parts of the system if they are designed as chiplets. It is also a way to minimize cost by keeping some functional chiplets on less advanced nodes, which are cheaper than the most advanced ones. But the main question is: when will chiplet technology create a significant segment of the semiconductor market? We will review the IP adoption history, as chiplets and IP are similar: both had to break the NIH syndrome to become successful. We will extract the main causes of chiplet adoption and build a forecast using innovation theory and its defined categories (Innovators, Early Adopters, etc.; see Figure below).

Figure 1: Innovation Theory (Reminder)
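As a reminder of where the familiar category percentages in the diffusion-of-innovation curve come from, here is a short sketch (my own illustration, not part of the white paper) deriving them from a normal distribution of adoption time:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Rogers' category boundaries, in standard deviations from the mean
# adoption time: innovators < mu-2s, early adopters < mu-s,
# early majority < mu, late majority < mu+s, laggards beyond mu+s.
bounds = [-2, -1, 0, 1]
cdf = [normal_cdf(b) for b in bounds]
shares = [cdf[0],
          cdf[1] - cdf[0],
          cdf[2] - cdf[1],
          cdf[3] - cdf[2],
          1.0 - cdf[3]]

for name, s in zip(["innovators", "early adopters", "early majority",
                    "late majority", "laggards"], shares):
    print(f"{name}: {100 * s:.1f}%")
# innovators ~2.3%, early adopters ~13.6% -- close to the familiar
# 2.5% / 13.5% figures usually quoted with the curve
```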

We will review ARM CPU IP adoption from 1991 to 2018 and IP adoption history from 1995 to 2027, and check how well these adoption rates align with the Innovation Theory.

We will explain why chiplet adoption will be boosted, reviewing the technology and marketing related reasons:

  • From IP-based SoC to chiplet-based system
  • Interoperability, thanks to a preferred chiplet interconnect protocol standard
  • Why high-end Interface IP is key for chiplet adoption
  • Design-related challenges to solve
  • Last but not least, investments made by foundries

Finally, we build a tentative chiplet adoption forecast based on innovation theory. Notably, the industry has just moved into the “Early Adopters” phase, with numerous IP and chiplet vendors serving HPC and AI.

If you download the white paper, you will enjoy the full text and the numerous figures, some of them created exclusively for this work.

By Eric Esteve (PhD.) Analyst, Owner IPnest

Alphawave sponsored the creation of this white paper, but the opinions and analysis are those of the author. Article can be found here:

https://awavesemi.com/resource/will-chiplet-adoption-to-mimic-ip-adoption/

Also Read:

Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Interface IP in 2022: 22% YoY growth still data-centric driven


Fail-Safe Electronics For Automotive

by Kalar Rajendiran on 12-27-2023 at 10:00 am

MegaTrends Driving the Need for Next Generation Silicon Capabilities

The automotive industry is on the brink of a revolutionary transformation, where predictive maintenance and monitoring are taking center stage. In a recent webinar panel session, industry experts delved into the challenges, current approaches, and future innovations surrounding the guarantee and extension of mission profiles.

proteanTecs hosted that webinar with the following experts as panelists:

Heinz Wagensonner, Sr. SoC Designer, CARIAD (the software division of Volkswagen Group)

Jens Rosenbusch, Sr. Principal Engineer, SoC Safety Architecture, Infineon Technologies

Xiankun “Robert” Jin, Automotive SoC Safety Architect, NXP Semiconductors

Gal Carmel, Executive VP, GM, Automotive, proteanTecs

Ellen Carey, Chief External Affairs Officer, Circulor, moderated the panel session.

The key themes that emerged were the increasing reliance on artificial intelligence (AI), the importance of real-time monitoring, and the need for a paradigm shift in industry thinking. The following are the salient points that came out of that panel session. You can access that entire panel session on-demand from here.

Current Challenges

The conversation began by acknowledging the challenges faced by the automotive sector. For instance, the introduction of a Central Gateway controller connected to the cloud for extended periods poses challenges for reliability and safety. Traditionally, managing uncertainties involved building margins into design, fabrication, and testing processes. However, this approach may become unsustainable in the future.

Current Approaches

To address these challenges, the industry is shifting towards a more proactive and predictive maintenance approach. Rather than relying solely on built-in margins, the emphasis is on implementing health monitors or sensors that continually assess the device’s status. This data is aggregated and analyzed, potentially through machine learning, providing insights that were previously inaccessible. This newfound understanding enables decisions such as swapping devices before imminent failure, a concept known as predictive maintenance.
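As a toy illustration of the trend-based flavor of predictive maintenance described above, here is a sketch that extrapolates a hypothetical scalar health metric to a failure threshold. Production systems use far richer sensor data and ML models; the metric, readings, and threshold below are all invented for illustration:

```python
def predict_failure_cycle(readings, threshold):
    """Fit a straight line to periodic health-monitor readings and
    extrapolate the cycle at which the degradation metric would cross
    the failure threshold. Returns None if no upward trend is seen."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    # Ordinary least-squares slope and intercept
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # device is not degrading; nothing to schedule
    return (threshold - intercept) / slope

# e.g. a slowly rising leakage or delay measurement sampled each service cycle
readings = [1.0, 1.1, 1.19, 1.31, 1.42]
print(predict_failure_cycle(readings, threshold=2.0))  # cycle at which to swap the device
```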

Collaboration and Standardization

The transition to predictive maintenance is not a journey undertaken by individual companies but requires collaborative efforts within the automotive industry. One significant initiative mentioned during the panel session is the creation of a framework for automotive predictive maintenance. A technical report, TR 9839, was published during the past summer, paving the way for the Third Edition of ISO 26262 standard. This collaborative approach involves stakeholders, including semiconductor vendors, original equipment manufacturers (OEMs), and regulatory bodies.

The Role of AI in Predictive Maintenance

The integration of AI emerged as a crucial factor in revolutionizing predictive maintenance. AI’s ability to analyze vast datasets and identify patterns that may elude human observers makes it a valuable tool for predicting failures. Whether optimizing production processes or analyzing failures in the field, AI plays a pivotal role in enhancing efficiency and accuracy.

AI is not just about finding known issues but uncovering latent defects or anomalies that may lead to failures. The application of AI in the analysis of sensor data from millions of vehicles in a fleet opens up possibilities for early detection of potential failures. However, the discussion also highlighted the importance of standardizing AI applications to ensure accuracy and reliability.

On-Chip Monitoring for Real-Time Insights

A critical aspect of transforming automotive maintenance is the adoption of on-chip monitoring. The traditional process of failure analysis, involving sending faulty components back for analysis, was deemed slow and inefficient. On-chip monitoring, if implemented effectively, can provide real-time insights into the behavior of silicon while the vehicle is in operation.

The Future Landscape

As the automotive industry moves towards autonomy and increased connectivity, the need for a flexible and adaptive approach to maintenance becomes paramount. The speakers emphasized a change in thinking, where a cross-platform, data-driven approach is embraced. This involves creating a common language, pooling insights, and utilizing a combination of hardware mechanisms and software analytics to drive proactive maintenance.

Summary

The panel session highlighted the industry’s dynamic shift from reactive to proactive maintenance strategies. The integration of AI and on-chip monitoring represents a leap forward in enhancing reliability, reducing costs, and improving overall product quality. Collaboration among industry stakeholders, standardization efforts, and a change in thinking towards a vertical approach will be key in shaping the future of automotive maintenance. As the industry navigates this transformative journey, the focus remains on leveraging technology to ensure vehicles not only meet but exceed reliability and safety standards.

You can listen to the entire panel session here.

Also Read:

Building Reliability into Advanced Automotive Electronics

Unlocking the Power of Data: Enabling a Safer Future for Automotive Systems

proteanTecs On-Chip Monitoring and Deep Data Analytics System


Information Flow Tracking at RTL. Innovation in Verification

by Bernard Murphy on 12-27-2023 at 6:00 am


Explicit and implicit sneak paths that leak or compromise information continue to represent a threat to security. This paper looks at a refinement of existing gate-level information flow tracking (IFT) techniques, extended to RTL, encouraging early-stage security optimization. Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Register Transfer Level Information Flow Tracking for Provably Secure Hardware Design. This article appeared at DATE 2017 and has gathered an impressive 53 citations. The authors are from the University of California San Diego.

This group earlier developed gate-level IFT (GLIFT) technology, launched as a product under Tortuga Logic (later rebranded as Cycuity). Information flow techniques offer a more general and formal approach to modeling and reasoning about security properties than testing by vulnerability use-cases. The method generalizes by propagating taint information alongside regular logic evaluation, flagging, say, a signal sourced from an insecure domain controlling a condition selection in a secure domain. Combining this with formal verification methods offers potential for strong security guarantees.

Extending the analysis to RTL enables several improvements: scalability to larger circuits, application earlier in the design cycle, and a somewhat improved understanding of higher-level dependencies in design intent without the need for user-supplied annotations. The authors also describe a method by which designers can trade off between security and verification performance to serve differing market needs.

Paul’s view

Security verification is something I care about deeply – we can’t trust digital data without it, and this includes my own personal data! This month’s paper is an easy read highlighting one of the mainstream techniques in security verification: adding “tainted” (i.e. compromised or no longer trusted) bits to all signals in a design and enhancing gate models to propagate tainted bits through gates along with signal values.

Tainted bit propagation is conceptually almost identical to “X-propagation” in mainstream design verification flows: if a signal is tainted then it’s as if we don’t know its value since we don’t trust the value it has.

This paper proposes two things: first, doing tainted bit annotation and propagation at the RTL level rather than the gate level; and second, doing the equivalent of what mainstream EDA tools call “X-pessimism removal”. The latter refers to not marking the result of an operator as X simply because at least one of its inputs is X, but rather marking it as X only if it truly is X based on the definition of that operator. For example, consider c = a & b. If a=0 then c=0 even if b is X. Equivalently, in security verification speak, if a=0 and a is not tainted, then c=0 and is not tainted even if b is tainted. Seems easy for “&”, gets a bit trickier for if, else, and case constructs.
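The “&” example above can be captured in a few lines. This is my own sketch of the precise (pessimism-removed) propagation rule, not the paper’s implementation: an untainted controlling 0 on either input forces a trusted 0 at the output, regardless of the other input’s taint.

```python
def and_taint(a, ta, b, tb):
    """Precise taint propagation for c = a & b.
    Inputs are (value, taint) pairs; taint=True means untrusted.
    Returns the (value, taint) pair for c."""
    c = a & b
    if a == 0 and not ta:
        return c, False  # trusted 0 on a masks b entirely
    if b == 0 and not tb:
        return c, False  # trusted 0 on b masks a entirely
    return c, ta or tb   # otherwise a tainted input can affect c

print(and_taint(0, False, 1, True))  # (0, False): trusted 0 masks the tainted input
print(and_taint(1, False, 1, True))  # (1, True): output depends on the tainted bit
```

The conservative alternative would simply return `ta or tb`, flagging the first case as tainted: exactly the false positive that pessimism removal eliminates.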

As you can expect, the paper concludes with some benchmarking that clearly shows propagating tainted bits at RTL is much faster than at gate level, and that “precise” tainted-bit propagation (i.e. tainted-bit pessimism removal) reduces the false positive rate for tainted bits at design outputs by a significant percentage. All this benchmarking is done in a formal proof context, not a logic simulation context. Case closed.

Wish you all a Happy Holidays!

Raúl’s view

Information Flow Tracking (IFT) is a computer security technique that models how information propagates as a system computes. It was introduced back in the 70s by Denning; a good introduction and survey can be found here. The basic idea is to label data with a security class and then track these labels as the data is used in computations. For the purposes of the reviewed paper, the label is just a bit indicating the data is tainted (label=1, untrusted). The most conservative approach to using this label is that the output of any operation involving tainted data is tainted; or, said inversely, only operations with all inputs untainted yield an untainted output. The paper relaxes this approach somewhat, as I’ll explain later.

Computer security aims at maintaining information confidentiality and integrity. Confidentiality means that information is only disclosed to authorized entities. IFT verifies whether secret information can ever be disclosed by tracking that all locations it flows to are also secret (untainted), e.g., that a secret key does not leak outside a restricted memory space. Integrity is the inverse: to maintain the accuracy and consistency of data, untrusted entities are not allowed to operate on trusted information. How information flows throughout a computing system is crucial to determining confidentiality and integrity. IFT is among the most used techniques for modeling and reasoning about security.

The paper reviews existing IFT approaches at the gate level and the RTL level. At the gate level, reconvergence is a source of imprecision: in a multiplexer, a tainted select signal will yield a tainted output even if both data inputs are untainted. Modeling the multiplexer at the RTL level allows this to be fixed. Existing RTL-level approaches require modifying the RTL code. The system implemented by the authors, RTLIFT, fixes both of the above shortcomings. It provides a library of RTL operators that allows implementing different approaches to IFT, such as tainting the output if any input is tainted (conservative) or a more nuanced approach such as tainting a multiplexer’s output only if one of the data inputs is tainted (which avoids false positives). It also provides an automatic translation of an RTL design in Verilog to an IFT-enhanced version which can be used for verification purposes.
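The multiplexer case can be sketched as follows. This is my own illustration of the conservative-versus-precise contrast, not RTLIFT’s exact semantics (for those, see the paper): when both data inputs carry the same trusted value, a tainted select cannot leak anything through the output.

```python
def mux_taint_conservative(sel, tsel, a, ta, b, tb):
    # Gate-level style: any tainted input taints the output
    out = a if sel else b
    return out, tsel or ta or tb

def mux_taint_precise(sel, tsel, a, ta, b, tb):
    # RTL-level style: exploit the mux semantics to remove false positives
    out = a if sel else b
    if not ta and not tb and a == b:
        return out, False        # identical trusted data: select can't leak
    if not tsel:
        return out, ta if sel else tb  # trusted select: only the chosen input matters
    return out, True             # tainted select choosing differing data

# Tainted select, identical trusted data inputs: the conservative rule
# flags a false positive, the precise rule does not.
print(mux_taint_conservative(1, True, 0, False, 0, False))  # (0, True)
print(mux_taint_precise(1, True, 0, False, 0, False))       # (0, False)
```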

The results on cryptographic cores show RTLIFT to be about 5 times faster than gate-level IFT (GLIFT). On a collection of 8 adders, multipliers, and control-path logic blocks, RTLIFT shows a 5%-37% decrease in false positives (false taints) over GLIFT in a simulation of 2^20 random input samples.

A comprehensive paper on security that extends IFT to RTL, and a very enjoyable read!

Also Read:

ML-Guided Model Abstraction. Innovation in Verification

Cadence Integrates Power Integrity Analysis and Fix into Design

Accelerating Development for Audio and Vision AI Pipelines