First third-party ISO/SAE 21434-certified IP product for automotive cybersecurity
by Don Dingee on 08-14-2024 at 6:00 am

ISO/SAE 21434 and UNECE WP.29 R155

Increased processing and connectivity in automobiles are cranking up the priority for advanced cybersecurity steps to keep roads safe. Electronic vehicle interfaces, including 5G/6G, Bluetooth, Wi-Fi, GPS, USB, CAN, and others, offer convenience features for drivers and passengers, but open numerous attack vectors for hackers. Many vehicles now provide over-the-air (OTA) update capability for infotainment systems and mission-critical vehicle software, adding to security concerns. Synopsys has taken a bold step in achieving third-party ISO/SAE 21434 certification for its ARC HS4xFS Processor IP, with more 21434-certified IP in the pipeline.

Second certification effort for ARC processor IP in automotive

Automotive industry observers are likely familiar with ISO 26262, the functional safety (FuSa) standard that assesses system behavior and any potential degradation in the face of hardware and software faults. ISO 26262 was the initial automotive standard certification focus for the Synopsys ARC Processor IP family, and many ARC core variants, including the ARC HS4xFS Processor IP, now contain FuSa features certified to ASIL D levels.

Cybersecurity presents similar concerns for automotive designers but requires defining a substantially different framework. “For starters, a comprehensive cybersecurity approach for automotive needs much tighter communication between automakers and suppliers,” says Ron DiGiuseppe, Automotive IP Segment Manager for Synopsys. “It also requires commitments at the executive level in creating dedicated cybersecurity assurance teams that partner with the product development teams, overseeing due diligence and working hand-in-hand with product development teams to enforce processes for creating and maintaining ISO 21434-certified IP.”

ISO/SAE 21434 “Road vehicles – cybersecurity engineering” defines such a framework, standardizing roles and responsibilities for various groups during different stages of automotive product development. It comprehensively addresses policies, processes, and procedures in a Secure Development Lifecycle (SDL) with specific criteria that each stage of development must meet before proceeding.

Initial support for ISO 21434 from European stakeholders

A broader European effort is piggybacking on ISO 21434, seeking to harmonize vehicle regulations. UNECE WP.29 extends into cybersecurity and software updates with two recent additions, R155 and R156. R155 sets up a path with uniform provisions for approval of vehicles designed to ISO 21434 and its cybersecurity risk management system.

“Manufacturers and car owners have a vital self-interest in protecting vehicles against cyberattacks,” says Meike Goelder, Product Management Cybersecurity at Bosch (see the full Bosch video “100 Seconds: The Importance of Cybersecurity“). “Attacks aim at manipulating safety-critical parameters, violating privacy by stealing customer data, or even hijacking a car, and new ways of attacking whole fleets only multiply the danger.” She sees both ISO 21434 and UNECE WP.29 R155 helping ensure the cyber compliance of cars.

image courtesy Bosch

DiGiuseppe points out that although UNECE WP.29 R155 is a European effort, taking the lead in defining an approval process, it sets a de facto standard for auto manufacturers selling in global markets. “To help ensure automakers and their suppliers comply with cybersecurity risk management, we selected an appropriate product, the ARC HS4xFS Processor IP, aligned our organization and processes, assessed for compliance, and obtained ISO 21434 third-party certification by SGS-TÜV Saar expediently.”

Installing cybersecurity risk management engineering processes

Achieving ISO 21434 certification involves two distinct phases: assessing the potential vulnerability of a product and providing an organizational structure for ongoing incident response should any occur. Vulnerability is approached by a Threat Analysis and Risk Assessment (TARA), creating a risk score based on four factors. Having a risk score helps drive informed decisions about treating risks, either through the development process or with specific modifications to a product.
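As a rough illustration of how a TARA-style score might combine four factors, here is a minimal Python sketch. The factor names, weights, and 1-5 scales are illustrative assumptions only, not the actual Synopsys process or the ISO/SAE 21434 rating tables.

```python
# Hypothetical TARA-style risk scoring sketch. ISO/SAE 21434 derives
# risk from an impact rating and attack-feasibility ratings; the
# specific factors and the averaging used here are assumptions.

def tara_risk_score(impact: int, elapsed_time: int,
                    expertise: int, window_of_opportunity: int) -> int:
    """Combine an impact rating with three attack-feasibility factors
    (each rated 1=low .. 5=high) into a single 1..5 risk score."""
    feasibility = round((elapsed_time + expertise + window_of_opportunity) / 3)
    # Higher impact and higher feasibility both push the risk score up.
    return max(1, min(5, round((impact + feasibility) / 2)))

# Example: severe impact, moderately feasible attack -> high risk.
print(tara_risk_score(impact=5, elapsed_time=3, expertise=4,
                      window_of_opportunity=3))
```

A score like this is only the input to a decision: the standard then requires choosing a treatment (avoid, reduce, share, or retain the risk) for each threat scenario.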

image courtesy Synopsys

We asked DiGiuseppe the obvious question: did the ARC HS4xFS Processor IP require any modifications to become ISO 21434-certified IP? The assessment found no security vulnerabilities, which SGS-TÜV Saar confirmed in their audit, so no changes were needed. He indicates other products, including ARC NPU and ARC-V processor and interface IP, are in the certification pipeline, and they hope for similar outcomes but are ready to make any necessary product modifications.

The Synopsys cybersecurity risk management engineering processes include the Security Development Lifecycle, Security Risk Assessment, and IP Security Incident Response Team. As is often the case with systems standards certification, vulnerability management expertise is valuable to help identify, diagnose, and communicate vulnerabilities and the best mitigation approach. “This is the first IP product third-party certified to ISO 21434 in the industry – and we now have processes and teams in place to certify more of our IP products,” DiGiuseppe concludes. This breakthrough is welcome news for SoC designers, whether at automakers or third-party suppliers, who must get ISO 21434 certification with their products. More background on the ISO/SAE 21434 standard and the Synopsys ISO/SAE 21434-certified IP details are online.

Technical bulletin on ISO/SAE 21434 by Ron DiGiuseppe:
The Promise of ISO/SAE 21434 for Automotive Cybersecurity

Details on Synopsys’ industry-first third-party ISO/SAE 21434-certified IP product:
Synopsys Advances Automotive Security with Industry’s First IP Product to Achieve Third-Party Certification for ISO/SAE 21434 Cybersecurity Compliance


Circuit Simulation Update from Empyrean at #61DAC
by Daniel Payne on 08-13-2024 at 10:00 am


A familiar face in EDA, Greg Lebsack met with me in the Empyrean booth at DAC this year on opening day to provide an update on what’s new. I first met Greg when he was at Tanner EDA, then Mentor and Siemens EDA, so he knows our industry quite well. The company was a Silver-level sponsor of DAC this year, and Empyrean offers tools for circuit verification covering aging, electrical overstress (EOS), Monte Carlo analysis, cell characterization, RF simulation, co-simulation, GPU-powered simulation, channel simulation, and SPICE simulation. They also have EDA tools for RF, digital, flat-panel design, foundry, and advanced packaging design.

SPICE simulators

I learned that their SPICE simulator running on GPUs has been popular, and whenever NVIDIA releases a new GPU, the ALPS-GT tool is quickly updated and released. In fact, NVIDIA is an Empyrean customer, using ALPS-GT for transient analysis and ALPS-RF for harmonic balance simulations. NVIDIA and Empyrean jointly presented a poster session on Tuesday about using ALPS-GT:

  • GPU ACCELERATED HARMONIC BALANCE SPICE SIMULATION
    Qikun Xue, NVIDIA, San Jose, CA
    Chen Zhao, Empyrean Technology, Santa Clara, CA

Booth presentations showed ALPS and ALPS-RF on both Monday and Tuesday. Other customers Greg mentioned were MPS designing CMOS power supply chips, Diodes doing EM/IR analysis on power management ICs, and WillSemi running reliability analysis. PMIC designs typically use 180nm process nodes, while the ALPS circuit simulator is also certified for the leading-edge Samsung 3nm node.

To support the six product lines at the company, they have grown to 1,200 people, up from 700 just two years ago. Tools in their digital SoC design space cover five areas:

  • Qualib – process library analysis and validation
  • Liberal – standard cell, memory and IP characterization
  • XTop – timing closure and ECO tool
  • XTime – timing and design reliability analysis
  • Skipper – layout integration and analysis
Empyrean DAC Booth

Summary

Every time that I meet with contacts at Empyrean the company has grown, and I learn about new customers and market segments being served. Their booth at DAC looked larger this year and included more staff than ever before. Having a tier-one customer like NVIDIA certainly grabbed my attention, and really cements Empyrean in this IC circuit simulation marketplace as a trusted EDA vendor. It’s a bit poetic how NVIDIA GPUs are being used to simulate new NVIDIA ICs, accurately and faster than ever before for both transient analysis and RF analysis.

Stay tuned on SemiWiki for updated news from this rising EDA vendor in blogs to come.



Why Glass Substrates?
by Sharada Yeluri on 08-13-2024 at 6:00 am

Intel Glass Substrates
Borrowed from Intel’s presentation on glass substrates

The demand for high-performance and sustainable computing and networking silicon for AI has undoubtedly increased R&D dollars and the pace of innovation in semiconductor technology. With Moore’s Law slowing down at the chip level, there is a desire to pack as many chiplets as possible inside ASIC packages and get the benefits of Moore’s Law at the package level.

The ASIC package hosting multiple chiplets typically consists of an organic substrate. This is made from resins (mostly glass-reinforced epoxy laminates) or plastics. Depending on packaging technology, either the chips are mounted directly on the substrate, or there is another layer of silicon interposer between them for high-speed connectivity between the chiplets. Interconnect bridges instead of interposers are sometimes embedded inside the substrate to provide this high-speed connectivity.

The problem with organic substrates is that they are prone to warpage issues, especially in larger package sizes with high chip densities. This limits the number of dies that can be packed inside a package. That is where substrates made of glass could be game changers!

Advantages of Glass Substrates

✔ They can be made super flat, allowing for finer patterning and higher (10x) interconnect densities. During photolithography, the entire substrate receives uniform exposure, reducing defects.

✔ Glass has a thermal expansion coefficient similar to that of the silicon dies above it, reducing thermal stress.

✔ They don’t warp and can handle much higher chip density in a single package. Initial prototypes could handle 50% more chip density than organic substrates.

✔ They could seamlessly integrate optical interconnects, giving rise to more efficient co-packaged optics.

✔ These are typically rectangular wafers, which increases the number of chips per wafer, improving yield and reducing costs.
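The CTE-matching point above can be made concrete with a back-of-the-envelope mismatch-stress estimate. This is a simplified thin-film approximation, and the material values below are rough textbook numbers chosen for illustration, not figures from the article.

```python
# Simplified CTE-mismatch stress estimate for a die on a substrate:
#   sigma ~ E / (1 - nu) * delta_alpha * delta_T
# E (GPa) and nu are rough values for silicon; CTEs are in ppm/K.
# All numbers are illustrative assumptions.

def mismatch_stress_mpa(e_gpa, nu, alpha_sub_ppm, alpha_si_ppm, dt_k):
    """Return the approximate biaxial stress in MPa from a CTE mismatch."""
    return e_gpa * 1e3 / (1 - nu) * (alpha_sub_ppm - alpha_si_ppm) * 1e-6 * dt_k

SI_CTE = 2.6  # silicon, ppm/K

# Glass substrate tuned close to silicon (~3.2 ppm/K): small stress.
print(round(mismatch_stress_mpa(130, 0.28, 3.2, SI_CTE, 100), 1))

# Organic substrate (~15 ppm/K): an order of magnitude more stress.
print(round(mismatch_stress_mpa(130, 0.28, 15.0, SI_CTE, 100), 1))
```

The same arithmetic also explains the PCB-side caveat mentioned later: glass matches silicon well, but its mismatch against board and bump materials remains.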

Glass substrates could potentially replace organic substrates, silicon interposers, and other high-speed embedded interconnects inside the package.

However, there are some challenges: Glass is brittle/fragile and prone to fractures during manufacturing. This fragility needs careful handling and specialized equipment to prevent damage during manufacturing processes. Ensuring proper adhesion between glass substrates and other materials used in semiconductor stacks, such as metals and dielectrics, is challenging. The differences in material properties can lead to stresses at the interfaces, potentially causing delamination or other reliability issues. While glass has a thermal expansion coefficient similar to silicon, it can differ significantly from materials used in PCB boards/bumps. This mismatch can cause thermal stresses during temperature cycling, impacting reliability and performance.

The lack of established industry standards for glass substrates leads to variability in performance across vendors. Because the technology is new, there is not enough long-term reliability data, and more accelerated life testing is needed to gain confidence in using these packages for high-reliability applications.

Despite the disadvantages, glass substrates hold great promise for HPC/AI and DC networking silicon, where the focus is on packing as much throughput as possible inside ASIC packages to increase the overall scale, performance, and efficiency of the systems.

Major foundries like Intel, TSMC, Samsung, and SKC are heavily investing in this technology. Intel is leading the pack with test chips introduced late last year. However, it will be another 3-4 years before this inevitable transition to glass substrates happens for high-end silicon.

I can’t wait to see more innovations that push the boundaries of technology!

Also Read:

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC

TSMC Advanced Packaging Overcomes the Complexities of Multi-Die Design


PieceMakers HBLL RAM: The Future of AI DRAM
by Joe Ting on 08-12-2024 at 6:00 am


PieceMakers, a fabless DRAM product company, is making waves in the AI industry with the introduction of a new DRAM family that promises to outperform traditional High Bandwidth Memory (HBM). The launch event featured industry experts, including a representative from Samsung, highlighting the significance of this innovation.

Today, customers are already exploring the use of low-density HBLL RAM for large language models. According to Dr. Charlie Su, President and CTO of Andes Technology, a leading RISC-V vector processor IP provider, “High-bandwidth RAM, such as HBLL RAM, is widely discussed among AI chip makers. When paired with Andes vector processors and customer accelerators, it creates great synergy to balance compute-bound and memory-bound issues.” Eight HBLL RAM chips can deliver a 4 GB density for smaller language models, with a staggering bandwidth of 1 TB per second and at a low cost.

The Need for Advanced DRAM

Since last year, large language models (LLMs) have grown in size and complexity. These models require varying amounts of memory to store their parameters, but one constant remains: the need for high bandwidth. Currently, the landscape of DRAM includes low-power DDR, GDDR, and HBM. However, there is a notable gap in high bandwidth but lower density options, which is where PieceMakers’ new HBLL RAM comes into play.

The name “HBLL RAM” stands for High Bandwidth, Low Latency, and Random Access. Compared to HBM, HBLL RAM offers two additional characteristics that make it superior: low latency and random access capabilities. This innovation addresses the needs of AI applications by providing lower density with high bandwidth.

The current generation of HBLL RAM, now in production, offers a low density of 0.5 GB and a bandwidth of 128 GB per second. Future generations are being designed with stacking techniques to further enhance performance. The strategy involves increasing data rate vertically and expanding IO width horizontally. Similar to HBM, HBLL RAM uses 512 IO and data on 1K IO, with future generations set to boost the frequency.

When comparing HBLL RAM to HBM, the advantages are clear. At the same density, HBLL RAM provides much higher bandwidth. Conversely, at the same bandwidth, it offers lower density. This improvement is quantified by the bandwidth density index, which measures the maximum bandwidth per unit density (GB). HBLL RAM significantly outperforms HBM, low-power DDR, and GDDR in this regard.
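The bandwidth density index is simple to compute. Here is a quick sketch using the HBLL RAM figures from the article; the HBM comparison numbers are rough public figures assumed for illustration, not from the source.

```python
# Bandwidth density index: maximum bandwidth per unit density,
# in GB/s per GB. Higher is better for bandwidth-bound AI workloads.

def bandwidth_density(bandwidth_gbps: float, density_gb: float) -> float:
    return bandwidth_gbps / density_gb

# HBLL RAM, current generation (from the article): 0.5 GB at 128 GB/s.
print(bandwidth_density(128, 0.5))   # 256 GB/s per GB

# An HBM2e stack (illustrative assumption): 16 GB at ~460 GB/s.
print(bandwidth_density(460, 16))    # under 30 GB/s per GB
```

On this metric, the low-density part wins by roughly an order of magnitude, which is exactly the gap the article describes.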

Bandwidth and Energy Efficiency

Typically, discussions about bandwidth focus on sequential bandwidth. However, the granularity of random access is equally important. HBLL RAM excels in random access performance, outperforming HBM, which has good sequential bandwidth but poor random access capabilities.

In terms of energy efficiency, HBLL RAM is more power-efficient because it delivers the same bandwidth with a smaller array density or page size. This efficiency stems from its innovative low-density architecture, first introduced at ISSCC in 2017. A single HBLL RAM chip provides 128 GB per second bandwidth across eight channels, with all signal bumps located on one side of the chip. This design results in latency that is approximately half of traditional DRAM, with superior random access bandwidth.

Real-World Applications and Simplified Interfaces

Jim Handy, a respected industry analyst, highlighted HBLL RAM’s potential in an article where he illustrated its placement between level three cache and DRAM. In fact, simulations using HBLL RAM as level four cache yielded impressive results: latency was halved, and average bandwidth increased significantly compared to systems without HBLL RAM.

The simplicity of the memory controller is another advantage, as PieceMakers provides it directly to customers. The interface for HBLL RAM is simple and SRAM-like, involving only read and write operations, plus refresh and mode register set.

One of PieceMakers’ demo boards and a customer’s board exemplify this innovation, utilizing an ABF-only design without CoWoS (Chip-on-Wafer-on-Substrate), an advanced packaging technology that can be 2-3 times more expensive than traditional flip-chip packaging. Looking ahead, PieceMakers plans to stack HBLL RAM similarly to HBM but without the need for CoWoS. This 2D stacking approach, as opposed to 2.5D, promises further cost reductions.

In conclusion, PieceMakers’ HBLL RAM represents a significant leap forward in DRAM technology for AI applications. It offers superior bandwidth, lower latency, and enhanced energy efficiency, making it a compelling choice for future large language models. With the potential to scale bandwidth from 128 GB per second up to 16 TB per second, HBLL RAM is set to revolutionize the AI industry.

Joe Ting is the Chairman and CTO of PieceMakers.

Also Read:

Unlocking the Future: Join Us at RISC-V Con 2024 Panel Discussion!

Andes Technology: Pioneering the Future of RISC-V CPU IP

A Rare Offer from The SHD Group – A Complimentary Look at the RISC-V Market


A Post-AI-ROI-Panic Overview of the Data Center Processing Market
by Claus Aasholm on 08-11-2024 at 8:00 am

Datacenter Supply Chain 2024

With all the Q2-24 results delivered, it is time to remove the clouds of euphoria and panic, ignore the performance claims and the bugs, and analyse the Data Center business, including examining the supply chain up and downstream. It is time to find out if the AI boom in semiconductors is still alive.

We begin the analysis with the two main categories, processing and network, and the top 5 semiconductor companies supplying the data center.

The top 5 Semiconductor companies that supply the data center account for nearly 100% of networking and processing. Once again, the overall growth for Q2-24 was a healthy 15.3%, all coming from processing. Networking contracted slightly by -2.5%, while processing grew by 20.3%. As Nvidia stated that the company’s decline in networking business was due to shipment adjustments, the growth numbers likely do not represent a major shift in the underlying business.

From a Year-over-year perspective, the overall growth was massive, 167%, with processing growing by 211% and networking by 66%.

As can be seen from the Operating Profit graph, the operating profit growth was much more aggressive, highlighting the massive demand from Data centers for Nvidia in particular.

The combined annual Operating profit growth was 522%, with processing accounting for a whopping 859% and networking growing 211%.

The quarterly operating profit growth rates aligned with the revenue growth rates, indicating that operating profits have stabilised and favour Processing slightly, as seen below.

Companies and Market shares

Even though Nvidia is so far ahead that market share is irrelevant for the GPU giant, it is vital for the other suitors. Every % is important.

The combined Datacenter processing revenue and market shares can be seen below:

While Nvidia has a strong revenue share of the total market, it has a complete stranglehold on the profits. The ability to get a higher premium is an important indicator of the width of the Nvidia moat. Nvidia’s competitors are trying to push performance/price metrics to convince customers to switch, but at the same time, they are commending Nvidia AI GPUs as Nvidia is running with a higher margin.

The shift in Market share can be seen below:

While this is “Suddenly, Nothing Happened,” the key takeaway is that despite the huff and puff from the other AI suitors, Nvidia stands firm and has slightly tightened its grip on profits.

The noise around Blackwell’s delay has not yet impacted the numbers, and it is doubtful that it will hurt Nvidia’s numbers, as the H100 is still the de facto choice in the data center.

The Datacenter Supply Chain

The shift in the Semiconductor market towards AI GPUs has significantly changed the Semiconductor supply chain. AI companies are now transforming into systems companies that control other parts of the supply chain, such as memory supply.

The supply situation is mostly unchanged from last quarter, with high demand from cloud companies and supply limited by CoWoS packaging and HBM memory. The memory situation is improving, although not all suppliers are approved by Nvidia.

As can be seen, the memory companies have been the clear winners in revenue growth since the last low point in the cycle.

Undoubtedly, SK Hynix has been the prime supplier to Nvidia, as Samsung has had approval problems. The latest operating profit data for Samsung suggests the company is now delivering HBM to Nvidia or other companies, and the HBM supply situation is likely more relaxed.

GPU/CPU Supply

TSMC manufactures almost all of the processing and networking dies. The company recently reported record revenue for Q2-24 but is not yet at maximum capacity. CoWoS is the only area still limited, but TSMC is adding significant capacity every quarter, and the constraint should not impact the key players in the Data Center supply chain.

Also, the monthly revenue for July was a new record.

While nothing has been revealed about the July revenue, it is likely still driven by TSMC’s High-Performance Computing business, which supplies mainly to the data center.

The HPC business added $3B without TSMC revealing the customer behind it. As Apple used to be the only 3nm customer and normally buys less in Q2, this looks like a new 3nm customer, most likely a data center supplier.

It could be one of the cloud companies, which are all trying to leverage their own architectures. Amazon is very active with Trainium, Inferentia, and Graviton, while Google has the TPU.

Also, Lunar Lake from Intel and the MI series from AMD could be candidates. With Nvidia’s Blackwell issues, the company stays on 4nm (5nm) until Rubin is ready to launch.

Apple could also begin using M-series processors in its own data centers.

The TSMC revenue increase is undoubtedly good news for the Data center market, which will continue growing in Q3, no matter what opinions investment banks have on the ROI of AI.

The Demand Side of the Equation

The AI revolution has caused the explosive growth in data center computing. Analysing Nvidia’s current customer base gives an idea of the different demand channels driving the growth.

Two-thirds of the demand is driven by the large tech companies in cloud and consumer, while the final third is more fragmented across enterprise, sovereign, and supercomputing. The last two are not really driven by a short-term ROI perspective and will not suddenly disappear.

A number of banks and financial institutions have recently questioned the large tech companies’ investments in AI, which has helped cause the recent bear run in the stock market.

I am not one to run with conspiracy theories, but it is well known that volatility is good for the banking business. I also know that the banks have no clue about the long-term return on AI, just like me, so I will continue to follow the facts while the markets go up and down.

The primary source of funding for the AI boom will continue to be the large tech companies.

Tech CapEx

5 companies represent the bulk of the CapEx that flows to the Data Center Processing market.

It is almost as if the financial community treats the entire CapEx of the large cloud customers as a brand-new investment in a doubtful AI business model. In reality, the data center investment is not new, and it creates tangible revenue streams while doubling as an AI investment.

From a growth perspective, using a starting point from before the AI boom, it becomes clear that data center investment growth actually follows cloud revenue growth.

While I will let other people decide if that is a good return on investment, the CapEx growth compared to cloud revenue growth does not look insane. That might change later, but right now it can certainly be defended.

The next question is how much processing candy the large cloud companies can get for their CapEx.

The processing share of total CapEx is certainly increasing, although CapEx itself has also grown significantly since the AI boom started. It is worth noting that the new AI servers deliver significantly more performance than the CPU-only servers traditionally used in data centers.

The Q2 increase in CapEx is a good sign for the data center processing companies. It represents an $8.3B increase in CapEx for the top 5, which can be compared with a $4.3B increase in processing and networking revenue for the semiconductor companies.
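For a rough sense of proportion, the quoted figures imply that roughly half of the quarter’s CapEx increase flowed through as semiconductor processing and networking revenue:

```python
# Share of the top-5 cloud CapEx increase that shows up as
# processing and networking revenue, using the figures in the text.

capex_increase_b = 8.3      # top-5 CapEx increase, $B
semi_rev_increase_b = 4.3   # processing + networking revenue increase, $B

share_pct = round(semi_rev_increase_b / capex_increase_b * 100, 1)
print(share_pct)  # roughly half
```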

What is even better is that the CapEx commitment from the large cloud companies will continue for the foreseeable future. Alphabet, Meta and Amazon will have higher CapEx budgets in 2nd half and Meta will have significantly higher CapEx in 2025.

Microsoft revealed that even though almost all of its CapEx is AI and data center related, around half of the current CapEx is used for land and buildings. These are boxes that need to be filled with loads of expensive AI GPU servers later, representing a strong commitment to long-term CapEx.

Conclusion

While the current valuations and share price fluctuations might be insane, the Semiconductor side of the equation is high growth but not crazy. It is alive and vibrant.

Nvidia might have issues with Blackwell but can keep selling H100 instead. AMD and Intel will start to chip away at Nvidia but it has not happened yet. Cloud companies will also start to sneak in their architectures.

The supply chain looks better aligned to serve the new AI driven business with improved supply of memory although advanced packaging might be tight still.

TSMC’s rapidly increasing HPC revenue is a good sign for the next earnings season.

The CapEx from the large cloud companies is growing in line with their cloud revenue and all have committed to strong CapEx budgets for the next 2 to 6 quarters.

In a few weeks, Nvidia will start the Data Center Processing earnings circus once again. I will have my popcorn ready.

In the Meta call, the ROI on AI was addressed with two buckets: Core AI, where an ROI view is relevant, and Gen AI, a long-term bet where it does not yet make sense to talk about ROI.

Also Read:

TSMC’s Business Update and Launch of a New Strategy

Has ASML Reached the Great Wall of China

Will Semiconductor earnings live up to the Investor hype?


Podcast EP240: Challenges and Strategies to Address New Embedded Memory Architectures with Mark Han
by Daniel Nenni on 08-09-2024 at 10:00 am

Dan is joined by Dr. Mark Han, Vice President of R&D Engineering for Circuit Simulation at Synopsys. Mark leads a team of over 300 engineers in developing cutting-edge advanced circuit simulation and transistor-level sign-off products, including characterization and static timing analysis. With 27 years of industry experience, he has a proven track record of driving innovation and growth.

Dan discusses the changing landscape of embedded memory architectures with Mark. High-Bandwidth Memory (HBM) stacks are becoming much more prevalent in semiconductor system design, thanks in part to the substantial demands for high volume and high speed data management required by AI applications.

Mark discusses the ways this type of memory is different from traditional embedded technologies. He discusses the design and verification challenges presented by HBM-based designs. New challenges associated with heat dissipation and mechanical stability are also explored.

Mark describes how Synopsys is using its unique full stack of EDA tools, from TCAD to system architecture to address the growing demands of new memory architectures. He discusses innovations in both speed and accuracy for the Synopsys simulation tools that are making a difference in the design of advanced systems.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Yogish Kode of Glide Systems
by Daniel Nenni on 08-09-2024 at 6:00 am

Yogish Kode

Yogish Kode is a senior solutions architect with more than 20 years of experience in product lifecycle management. His focus has been on semiconductor PLM and IP management. Prior to founding Glide Systems, he was a global solutions architect at Dassault Systèmes, an IT lead at Xilinx, and a senior programmer/analyst at SDRC (now part of Siemens).

Yogish has a passion for product lifecycle management and the impact that an optimized solution can have on the enterprise.

He holds a Professional Certificate in architecture and systems engineering from MIT xPRO and a Master’s Degree in industrial engineering with a minor in computer science from the University of Minnesota.

Tell us about your company

Glide Systems specializes in advanced lifecycle management solutions, optimizing product development from inception to disposal with our cloud-native platform, Glide SysLM. Our adaptive data modeling capabilities and seamless integration of hardware, electrical, electronics, and software domains ensure robust traceability and efficient data flow throughout the product lifecycle.

The key features of Glide SysLM include a comprehensive, cloud-native solution integrating all phases of the product lifecycle. Thanks to our unified platform for hardware, electrical, electronics, and software domains we can deliver a seamless data flow. Our adaptive data modeling capabilities facilitate dynamic alignment with business needs and industry standards.

The bottom line is we reduce implementation costs by 30% to 50%, accelerate time-to-market, and enhance operational efficiencies, delivering substantial cost savings and faster product launches for our customers.

What problems are you solving?

Global manufacturing companies struggle to adopt digital solutions because of siloed enterprise applications and a lack of industry-specific solutions for product design data management. This usually results in extensive customization of generic products.

During my years of implementing PLM solutions, I observed many acute performance and usability issues. The lack of industry specific data platforms required companies to purchase off-the-shelf solutions and customize, resulting in huge consulting, services, and upgrade costs. The effort involved to develop an effective, integrated solution was far too time-consuming and costly.

I felt there was a better way, and that led me to try a new approach with a major semiconductor company. The new approach was successful with enhanced productivity, so I formed Glide Systems to productize the strategies I had uncovered and make the technology broadly available.

What application areas are your strongest?

Our initial focus is the semiconductor industry. The current product is optimized for the unique challenges of semiconductor design – things like IP lifecycle management, operations BOM management, and implementation of a digital thread.

Our semiconductor IP management solution addresses IP cataloging, hierarchical IP bill-of-materials management, versioning, hierarchical defect management, and impact analysis. Our performant hierarchical defect management and rollup features are crucial for effectively dispositioning defects against a system-on-chip before tapeout. They give engineers rapid, comprehensive information and an intuitive UI that enhances both performance and usability, streamlining the debugging process and accelerating time-to-market.

Our operations bill-of-materials solution enables seamless management of the entire process from raw wafer to finished chip, ensuring compliance across the supplier network and with supplier qualifications.

Different teams often use disparate tools, leading to siloed data and fragmented workflows. For example, requirements might be managed in Jama, user stories in Jira, test cases in Verisium, source code control in GitHub, GitLab, or Subversion, and issue management in Jira. Despite these tools being semantically connected, their isolated nature can complicate traceability and integration. Our platform addresses this challenge by seamlessly integrating any application with REST APIs within 2-3 weeks, rather than several months. This integration provides comprehensive traceability across all applications, ensuring cohesive and efficient project management.  

The core technology can be applied to many markets and processes. Going forward, we will diversify into markets such as medical devices, automotive, and aerospace.

What keeps your customers up at night?

In a word, visibility. Disparate enterprise applications lead to fragmented data, inefficiencies, and increased operational costs due to the lack of integration and seamless data flow across departments. The lack of enterprise-wide visibility for complex product development projects can cause substantial time delays, cost overruns and even result in ineffective products.

There is also a growing list of standards that products must adhere to. ISO 26262 in the automotive industry is just one example. Tracking and documenting compliance with these standards becomes another major challenge.

Let me cite just two examples from the 2022 Functional Verification Study, published by Wilson Research Group and Siemens EDA:

  • 66% of ASIC projects are behind schedule, with 27% behind by 30% or more.
  • 76% of ASICs require two or more respins.

What does the competitive landscape look like and how do you differentiate?

Large public companies offer broad, generic, and costly solutions that do not integrate well with the complex electronic system flow. In this context, one size does not fit all. Smaller companies focus more on the problem at hand; they offer solutions for semiconductor design, but these tend to be very limited in scope.

So the options are either to invest time and consulting money attempting to integrate broad-based solutions into the electronic system flow, or to purchase multiple products from more narrowly focused companies and again attempt to integrate that collection of tools into the flow.

Glide SysLM offers an out-of-the-box solution that integrates with all phases of the electronic system flow, so all requirements are covered. And thanks to our no-code technology, fine-tuning the application to the specific needs of each customer can be done quickly and easily.

What new features/technology are you working on?

We have plans to enhance the current release across three major axes:

  • Unified Product Data Ecosystem: Seamlessly integrate with a growing list of industrial IoT devices, systems, and enterprise applications.
  • Sustainability: Track carbon footprint, resource usage, material compliance. Manage recycling, reusability, and end-of-life strategies.
  • Advanced Analytics and AI: Leverage advanced analytics, machine learning, and AI to derive actionable insights from vast amounts of lifecycle data.

How do customers normally engage with your company?

You can reach us at sales@glidesystemsinc.com. You can also contact us and request a demo through our website at https://www.glidesystemsinc.com/#contact-us.

Also Read:

CEO Interview: Orr Danon of Hailo

CEO Interview: David Heard of Infinera

CEO Interview: Dr. Matthew Putman of Nanotronics


Design Automation Conference #61 Results

Design Automation Conference #61 Results
by Daniel Nenni on 08-08-2024 at 10:00 am

This was my 40th Design Automation Conference and, based on my follow-up conversations inside the semiconductor ecosystem, it did not disappoint. The gauge I use for exhibitors is “qualified customer engagements” that may result in the sale of their products. This DAC was the best for that metric since the pandemic, absolutely.

The official numbers are out and support that sentiment:

DAC 2024 reported a remarkable 34% increase in paper submissions for the Research Track and a 32% increase in submissions for the Engineering Track, highlighting the rapid pace of innovation and the growing interest in the field. Additionally, AI sessions now constitute 13% of the conference, reflecting the rising importance of artificial intelligence in electronic design.

Conference attendance also jumped 8% compared to the previous year, as organizers welcomed a vibrant and diverse group of participants from academia, industry, and government sectors. This year’s event hosted 25 new first-time exhibitors, adding fresh perspectives and innovations to the exhibition floor.

DAC returns to San Francisco, June 22-26, 2025. The call for papers and presentations will open October 1, 2024.

Preliminary figures for DAC 2024 in San Francisco:

  • Full Conference & Engineering Track passes: 2,240
  • I LOVE DAC passes: 2,338
  • Exhibitors’ booth staff: 1,708

Total Attendee Registration: 6,286

Personally, I am disappointed it is in San Francisco again next year. If the organizers wanted to pump up the attendance numbers, they should have held it in San Jose or Santa Clara. In previous years, Southern California locations (Orange County and San Diego) were really good as well. Even better, DAC should start traveling the U.S. again. The two DACs in New Orleans were crazy!

My first DAC was in Albuquerque, New Mexico, which was very early for the EDA industry. In fact, I don’t think any of the EDA companies that exhibited then exist today. The next year it was in Las Vegas, and that was a very big year for “networking”. As they say, location, location, location.

I also think partnering with other conferences is a good idea. I don’t think co-locating with Semicon West worked out as planned; it really is two different audiences. I attend both, so this is based on my personal experience, observation, and opinion.

I think partnering with the RISC-V ecosystem would be great. There is good overlap, and it would be a great addition that might even encourage Arm to get back into DAC. IP has always been a popular category on SemiWiki, so DAC should get more aggressive about IP exhibitor recruitment.

It is a shame the foundries abandoned DAC. Samsung Foundry dropped out this year and I expect Intel Foundry to drop out next year. It was a glorious time when TSMC, GlobalFoundries, UMC and even SMIC were at DAC. The foundry business really is the cornerstone of semiconductor design. Hopefully some of the boutique foundries without events of their own can come aboard the DAC train.

And yes, the big EDA companies are downsizing at DAC. I get this since they have their own events. CDNLive was excellent this year, as was SNUG. In my opinion, this presents more opportunity for the rest of the ecosystem and more customer engagement time.

I was on the fence about advising companies to exhibit at DAC this year, and I regret that. Next year, however, I am fully behind it. The number of qualified customer engagements at DAC #61 justifies it, absolutely.

About DAC
DAC, The Chips to Systems Conference (previously known as the Design Automation Conference) is recognized as the premier event for the design and design automation of electronic systems and circuits. A diverse worldwide community representing more than 1,000 organizations attends each year, ranging from system designers and architects, logic and circuit designers, validation engineers, CAD managers, and senior managers and executives to researchers and academicians from leading universities. Over 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area with approximately 150 of the leading and emerging EDA, silicon, intellectual property (IP) and design services providers. The conference is sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) and is supported by ACM’s Special Interest Group on Design Automation (ACM SIGDA).

Also Read:

proteanTecs Introduces a Safety Monitoring Solution #61DAC

CAST, a Small Company with a Large Impact on Many Growth Markets #61DAC

Perforce IP and Design Data Management #61DAC


Application-Specific Lithography: Patterning 5nm 5.5-Track Metal by DUV

Application-Specific Lithography: Patterning 5nm 5.5-Track Metal by DUV
by Fred Chen on 08-08-2024 at 6:00 am

At IEDM 2019, TSMC revealed two versions of 5nm standard cell layouts: a 5.5-track DUV-patterned version and a 6-track EUV-patterned version [1]. Although the metal pitches were not explicitly stated, later analyses of a 5nm product, namely, Apple’s A15 Bionic chip, revealed a cell height of 210 nm [2]. For the 6-track cell, this indicates a metal track pitch of 35 nm, while for the 5.5-track cell, the pitch is 38 nm (Figure 1). Just a 3 nm difference in pitch matters a lot for the patterning approach. As will be shown below, choosing the 5.5-track cell for DUV patterning makes a lot of sense.

Figure 1. A 210 nm cell height means a 38 nm track pitch for 5.5 tracks (left) or a 35 nm track pitch for 6 tracks (right).
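The track pitches quoted above follow directly from dividing the cell height by the track count:

```latex
p_{\text{track}} = \frac{h_{\text{cell}}}{N_{\text{tracks}}}, \qquad
\frac{210\ \text{nm}}{6} = 35\ \text{nm}, \qquad
\frac{210\ \text{nm}}{5.5} \approx 38\ \text{nm}
```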

Extending the 7nm DUV Approach to 5nm

The 5.5-track metal pitch of 38 nm is at the limit of DUV double patterning. It can therefore reuse the same approach used in 7nm, where the 6-track cell metal pitch was 40 nm [3]. This can be as simple as self-aligned double patterning followed by two self-aligned cut blocks, one for each material to be etched (core or gap) (Figure 2). The minimum pitch of the cut blocks (for each material) is 76 nm, allowing a single exposure.

Figure 2. SADP followed by two self-aligned cut blocks (one for the core material, one for the gap material). Process sequence from left to right: (i) SADP (core lithography followed by spacer deposition and etchback, and gapfill); (ii) cut block lithography for exposing gap material to be etched; (iii) refill of cut block for gap material; (iv) cut block lithography for exposing core material to be etched; (v) refill of cut block for core material. Self-aligned vias (not shown) may be partially etched after the block formation [4].
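The single-exposure claim for the cut blocks follows from the line alternation: cuts assigned to one material (core or gap) can land at most on every other metal line, so their minimum pitch is double the metal pitch:

```latex
p_{\text{cut}} \ge 2\,p_{\text{metal}} = 2 \times 38\ \text{nm} = 76\ \text{nm}
```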

SALELE [5] may be used in lieu of SADP. This would add an extra mask for the gap material, for a total of four mask exposures.

Going Below 38 nm Pitch: Hitting the Multipatterning Barrier

For the 3nm node, it is expected that the metal track pitch will go below 30 nm [6]. Any pitch below 38 nm would entail the use of substantially more DUV multipatterning [7]. Yet a comparable amount of multipatterning could also be expected even for EUV, as the minimum pitch from photoelectron spread can be effectively 40-50 nm for a typical EUV resist [8,9]. The edge definition for a 25 nm half-pitch 60 mJ/cm2 exposure is heavily affected by both the photon shot noise and the photoelectron spread (Figure 3).

Figure 3. 25 nm half-pitch electron distribution image exposed with an incident EUV dose of 60 mJ/cm2 (13 mJ/cm2 absorbed), with a 7.5 nm Gaussian blur to represent the electron spread function given in ref. [9]. A 1 nm pixel is used, with 4 secondary electrons per photoelectron.
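The stochastic picture in Figure 3 can be reproduced qualitatively with a short numerical sketch. This is an illustrative model, not the exact calculation of ref. [9]: absorbed photons per 1 nm pixel are Poisson-sampled from an ideal line/space pattern, each photoelectron is assumed to produce 4 secondary electrons, and the electron spread function is approximated by a 7.5 nm Gaussian blur. The mean photon density is estimated from the 13 mJ/cm2 absorbed dose and the ~92 eV EUV photon energy; all parameter names are chosen here for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative parameters (assumptions, see lead-in)
PITCH_NM = 50                          # 25 nm half-pitch line/space pattern
SIZE_NM = 200                          # simulated field, 1 nm pixels
ABSORBED_DOSE = 13e-3                  # J/cm^2 absorbed (from 60 mJ/cm^2 incident)
PHOTON_ENERGY = 1.99e-25 / 13.5e-9     # hc/lambda ~ 1.47e-17 J (~92 eV)
SECONDARIES = 4                        # secondary electrons per photoelectron
BLUR_NM = 7.5                          # Gaussian approximation of electron spread

rng = np.random.default_rng(0)

# Mean absorbed photons per 1 nm^2 pixel (~8.8 at 13 mJ/cm^2)
mean_photons = ABSORBED_DOSE / PHOTON_ENERGY / 1e14

# Ideal aerial pattern: exposed half-pitch alternating with unexposed half-pitch
x = np.arange(SIZE_NM)
mask = ((x % PITCH_NM) < PITCH_NM // 2).astype(float)
aerial = np.tile(mask, (SIZE_NM, 1))

# Photon shot noise: Poisson-sampled absorbed photon count per pixel
photons = rng.poisson(mean_photons * aerial)

# Each photoelectron plus its secondaries contributes to local exposure
electrons = photons * (1 + SECONDARIES)

# Electron spread function as a Gaussian blur (sigma in nm = pixels)
image = gaussian_filter(electrons.astype(float), sigma=BLUR_NM)
```

Plotting `image` shows line edges degraded by both shot noise and blur; reducing `BLUR_NM` or raising the dose visibly sharpens them, which is the trade-off the article describes.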

5nm For All?

The 5.5-track cell provides an easy migration path from 7nm to 5nm using DUV double patterning. Potentially, this is one of the easier ways for Chinese companies to catch up at 5nm, although clearly that would be as far as they could take it.

References

[1] G. Yeap et al., IEDM 2019, Figure 5.

[2] https://www.angstronomics.com/p/the-truth-of-tsmc-5nm

[3] https://fuse.wikichip.org/news/2408/tsmc-7nm-hd-and-hp-cells-2nd-gen-7nm-and-the-snapdragon-855-dtco/#google_vignette

[4] F. Chen, Self-Aligned Block Redistribution and Expansion for Improving Multipatterning Productivity, https://www.linkedin.com/pulse/self-aligned-block-redistribution-expansion-improving-frederick-chen-rgnwc/

[5] Y. Drissi et al., Proc. SPIE 10962, 109620V (2019).

[6] https://fuse.wikichip.org/news/7375/tsmc-n3-and-challenges-ahead/

[7] F. Chen, Extension of DUV Multipatterning Toward 3nm, https://semiwiki.com/lithography/336182-extension-of-duv-multipatterning-toward-3nm/, https://www.linkedin.com/pulse/extension-duv-multipatterning-toward-3nm-frederick-chen/

[8] F. Chen, Why NA is Not Relevant to Resolution in EUV Lithography, https://www.linkedin.com/pulse/why-na-relevant-resolution-euv-lithography-frederick-chen-ytnoc, https://semiwiki.com/lithography/344672-why-na-is-not-relevant-to-resolution-in-euv-lithography/

[9] T. Kozawa et al., JVST B 25, 2481 (2007).

Also Read:

Why NA is Not Relevant to Resolution in EUV Lithography

Intel High NA Adoption

Huawei’s and SMIC’s Requirement for 5nm Production: Improving Multipatterning Productivity


Podcast EP239: The Future of Verification for Advanced Systems with Dave Kelf

Podcast EP239: The Future of Verification for Advanced Systems with Dave Kelf
by Daniel Nenni on 08-07-2024 at 8:00 am

Dan is joined by Dave Kelf, CEO of Breker Verification Systems, whose product portfolio solves challenges across the functional verification process for large, complex semiconductors. Dave has deep experience with semiconductor design and verification with management and executive level positions at Cadence, Synopsys, Novas, OneSpin, and now Breker.

Dave explores the future of automated verification with Dan. He discusses the elusive executable specification and how Breker is providing a way to use this type of technology to automate the semiconductor verification process.

The various applications of AI and generative models to the verification challenge are also discussed, along with an assessment of how the RISC-V movement is impacting system design.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.