
Semico Research Quantifies the Business Impact of Deep Data Analytics, Concludes It Accelerates SoC TTM by Six Months

by Kalar Rajendiran on 06-14-2023 at 10:00 am

Design Costs Comparison

The semiconductor industry has been responding to increasing device complexity and performance requirements in multiple ways. To create smaller and more densely packed components, the industry is continually advancing manufacturing technology. This includes the use of new materials and processes, such as extreme ultraviolet (EUV) lithography and 3D stacking. To meet performance requirements, the industry is developing new chip architectures that enable more efficient data processing and power consumption. This includes open domain-specific architectures (ODSA) incorporating specialized processors and artificial intelligence (AI) accelerators. To reduce costs and improve performance, the industry is integrating more components onto a single chip, resulting in System on Chip (SoC) designs, or opting for multi-die systems using chiplet-based implementations. There are also increasing levels of collaboration within the ecosystem, including equipment suppliers, foundries, and package and assembly houses.

At the same time, time-to-market (TTM) is taking on more and more importance for product companies. In today’s fast-evolving markets, the market window for a product may be just two years. A company cannot afford to be late to any market, let alone these kinds of fast-moving markets. Thus, each company utilizes its own tested and proven ways of deriving TTM advantages to get to market first. Of late, deep data analytics is being leveraged by many companies to accelerate their SoC product development efforts. By leveraging deep data analytics, design issues can be caught early in the development process, reducing the need for expensive and time-consuming re-spins. It can also identify potential performance bottlenecks and optimization opportunities. In essence, deep data analytics can not only reduce TTM but also help improve product performance, increase power efficiency, and enhance the reliability of a product. The product company gets to enjoy bigger market share at significantly improved return on investment (ROI) and longer-term customer satisfaction.

proteanTecs is a leading provider of deep data analytics for advanced electronics monitoring. Its solution utilizes on-chip monitors and machine learning techniques to deliver actionable insights during development through production and in-field deployment. The company hosted a webinar recently where Rich Wawrzyniak, Principal Analyst for ASIC and SoC at Semico Research, presented a head-to-head comparison of two companies designing a similar multicore SoC on a 5nm technology node. One of the two companies in this comparison leveraged proteanTecs technology in its product development and gained a six-month TTM advantage over the other.

The webinar is based on a Semico Research white paper, which we covered in the article, “How Deep Data Analytics Accelerates SoC Product Development.”

Here are some excerpts from the webinar.

The Cost Edge

Below is a design costs comparison table for two competing solutions for the same application based on current industry design and production costs. Company A’s solution leveraged proteanTecs analytics-based design methodology and Company B’s solution used standard methodology. The solution is a data center accelerator SoC product, details of which are shared by Rich in the webinar. Company A’s cost savings amounted to about 9% over Company B.

The Time-to-Market (TTM) Benefit

Using the proteanTecs approach for deep data analytics, Company A met its market window with on-time entry, allowing it to capture the majority of the target market. The company gained a 6-month TTM advantage over Company B. It also recovered its design investment while its market was still growing, allowing for increased revenues and profitability.
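
To see why a six-month lead matters so much in a two-year window, here is a toy back-of-the-envelope model. This is not Semico's methodology; the window length, monthly market size, and share split below are invented purely for illustration.

```python
# Toy TTM model (hypothetical numbers, not Semico's): a vendor that ships
# first serves the whole market alone until the rival arrives, then keeps
# an incumbent's share of the split market for the rest of the window.

WINDOW = 24          # market window in months (hypothetical)
MONTHLY_TAM = 10.0   # addressable revenue per month, $M (hypothetical)

def captured_revenue(entry: int, rival_entry: int,
                     incumbent_share: float = 0.65) -> float:
    """Total revenue ($M) captured over the window for a given entry month."""
    total = 0.0
    for month in range(WINDOW):
        if month < entry:
            continue                      # not shipping yet
        if month < rival_entry:
            total += MONTHLY_TAM          # sole supplier
        else:
            # both shipping: the first mover keeps the larger share
            share = incumbent_share if entry < rival_entry else 1 - incumbent_share
            total += MONTHLY_TAM * share
    return total

company_a = captured_revenue(entry=0, rival_entry=6)   # on-time entry
company_b = captured_revenue(entry=6, rival_entry=0)   # six months late
# company_a ends up with roughly 2.8x company_b's revenue in this toy model
```

Even with these made-up inputs, the shape of the result matches the webinar's point: the late entrant loses both the sole-supplier months and the majority share for the remainder of the window.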

In-Field Advantage

As highlighted in the Figure below, the proteanTecs analytics solution helps not only during the design, bring-up, and manufacturing phases but also after a product has been deployed in the field. This helped Company A monitor for and correct potential problems in the field under real-world operating conditions. These analytics insights could be used for preventive maintenance and for fine-tuning power consumption and product performance in the field. Marc Hutner, Senior Director of Product Marketing at proteanTecs, presented this information during the webinar.

Cloud-Based Platform Demo

To conclude the webinar, Alex Burlak, Vice President, Test & Analytics at proteanTecs, showed a demo of the proteanTecs cloud-based analytics platform. He highlighted the platform’s capabilities and revealed the different types of insights users receive from proteanTecs’ on-chip monitors, also called Agents.

Summary

Anyone involved with semiconductor product development will find the information presented in the webinar very useful. You can watch the webinar on-demand here.

Also Read:

Maintaining Vehicles of the Future Using Deep Data Analytics

Webinar: The Data Revolution of Semiconductor Production

The Era of Chiplets and Heterogeneous Integration: Challenges and Emerging Solutions to Support 2.5D and 3D Advanced Packaging


TSMC Doubles Down on Semiconductor Packaging!

by Daniel Nenni on 06-14-2023 at 6:00 am

TSMC 3DFabric Integration

Last week TSMC announced the opening of an advanced backend fab for the expansion of the TSMC 3DFabric System Integration Technology. It’s a significant announcement as the chip packaging arms race with Intel and Samsung is heating up.

Fab 6 is TSMC’s first all-in-one advanced packaging and testing fab, part of the increasing investment in packaging TSMC is making. The fab is ready for mass production of the TSMC SoIC packaging technology. Remember, when TSMC says mass production they are talking about Apple iPhone-sized mass production, not engineering samples or internal products.

Today packaging is an important part of a semiconductor foundry offering. Not only is it a chip level product differentiator, it will take foundry customer loyalty to a whole new level. This will be critical as the chiplet revolution takes hold making it much easier for customers to be foundry independent. Chiplet packaging however is very complex and will be foundry specific which is why TSMC, Intel, and Samsung are spending so much CAPEX to secure their place in the packaging business.

The TSMC 3DFabric is a comprehensive family of 3D Silicon Stacking and Advanced Packaging Technologies:

  • TSMC 3DFabric consists of a variety of advanced 3D Silicon Stacking and advanced packaging technologies to support a wide range of next-generation products:
    • On the 3D Si stacking portion, TSMC is adding a micro bump-based SoIC-P in the TSMC-SoIC® family to support more cost-sensitive applications.
    • The 2.5D CoWoS® platform enables the integration of advanced logic and high bandwidth memory for HPC applications, such as AI, machine learning, and data centers. InFO PoP and InFO-3D support mobile applications and InFO-2.5D supports HPC chiplet integration.
    • SoIC stacked chips can be integrated in InFO or CoWoS packages for ultimate system integration.
  • CoWoS Family
    • Aimed primarily for HPC applications that need to integrate advanced logic and HBM.
    • TSMC has supported more than 140 CoWoS products from more than 25 customers.
    • All CoWoS solutions are growing in interposer size so they can integrate more advanced silicon chips and HBM stacks to meet higher performance requirements.
    • TSMC is developing a CoWoS solution with up to 6X reticle-size (~5,000 mm²) RDL interposer, capable of accommodating 12 stacks of HBM memory.
  • InFO Technology
    • For mobile applications, InFO PoP has been in volume production for high-end mobile since 2016 and can house larger and thicker SoC chips in smaller package form factor.
    • For HPC applications, the substrateless InFO_M supports up to 500 square mm chiplet integration for form factor-sensitive applications.
  • 3D Silicon stacking technologies
    • SoIC-P is based on 18-25μm pitch μbump stacking and is targeted for more cost-sensitive applications, like mobile, IoT, client, etc.
    • SoIC-X is based on bumpless stacking and is aimed primarily at HPC applications. Its chip-on-wafer stacking schemes feature 4.5 to 9μm bond pitches and have been in volume production on TSMC’s N7 technology for HPC applications.
    • SoIC stacked chips can be further integrated into CoWoS, InFo, or conventional flip chip packaging for customers’ final products.

“Chiplet stacking is a key technology for improving chip performance and cost-effectiveness. In response to the strong market demand for 3D IC, TSMC has completed early deployment of advanced packaging and silicon stacking technology production capacity, and offers technology leadership through the 3DFabric™ platform,” said Dr. Jun He, Vice President, Operations / Advanced Packaging Technology & Service, and Quality & Reliability. “With the production capacity that meets our customers’ needs, we will unleash innovation together and become an important partner that customers trust in the long term.”

TSMC’s customer centric culture will be a big part of the chiplet packaging revolution. By working with hundreds of customers you can bet TSMC will have the most comprehensive IC packaging solutions available for fabless and systems companies around the world, absolutely.

TSMC Press Release:
TSMC Announces the Opening of Advanced Backend Fab 6, Marking a Milestone in the Expansion of 3DFabric™ System Integration Technology

Also Read:

TSMC Clarified CAPEX and Revenue for 2023!

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5


The Opportunity Costs of using foundry I/O vs. high-performance custom I/O Libraries

by Stephen Fairbanks on 06-13-2023 at 10:00 am

The original vision for Certus Semiconductor in 2008 was to leverage production I/O libraries from larger, more established partners, starting with Freescale, and take them to smaller external customers for licensing.  This IP was proven and validated, with an excellent silicon track record and big-company support; in our minds, we thought, “What small company wouldn’t want to use it!”  A year later, we had not sold a single I/O library license.

Instead, every customer looked at the offerings and said, “This is not much different from the foundry IP, which is free, and despite a few minor advantages, we see no benefit in licensing it.”  This could have been the end of our story and the original business model. Still, our customers were very good at pointing us toward our future business model with this final statement, “Now, if the IO had this feature or higher performance, then we would license it.”

Our vision of an IP company shifted, and we fine-tuned our core business: designing custom, best-in-class, high-performance I/O libraries that meet or exceed our customers’ market needs. Custom design services and support were added to the mix, and over time our standard IP offerings grew into a significant library of leading IP.

Today we are firmly both an IP licensing company and a custom design services company. Still, the most popular I/O Libraries grow from our custom portfolio, offering features, benefits, and capabilities our customers want that do not exist anywhere else in the semiconductor industry.

When asked, “How do you compete against free IP?” about the foundry or freely available third-party I/O libraries, I respond, “Opportunity Cost.”

In economic theory, opportunity cost is the value of what you lose when choosing between two or more options. Ideally, when you decide, you feel your choice will have better results for you regardless of what you lose by not choosing the alternative.

Engineers are very good at understanding opportunity costs regarding I/O tradeoffs when they are quantifiable.  They can quickly determine the specific benefits of a custom I/O when they consider its lower power, freeing up their power budgets for other blocks, or its smaller footprint, where they can compare the cost of saved silicon area against the licensing fees.  Such metrics allow simple calculations to guide the decision to use free I/O libraries or purchase a custom I/O license.
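
Those "simple calculations" can be sketched as a one-line break-even check. The inputs below (area saved, finished-die cost per mm², license fee) are hypothetical placeholders, not Certus figures.

```python
# Hypothetical break-even sketch: at what shipped volume does the silicon
# area saved by a smaller custom I/O library pay back its license fee?

def license_break_even_units(area_saved_mm2: float, die_cost_per_mm2: float,
                             license_fee: float) -> float:
    """Units needed before per-die silicon savings cover the license fee."""
    saving_per_die = area_saved_mm2 * die_cost_per_mm2   # $ saved per die
    return license_fee / saving_per_die

# Placeholder inputs: 2 mm^2 saved, $0.10/mm^2 finished-die cost, $500k fee
units = license_break_even_units(2.0, 0.10, 500_000.0)
# -> 2.5M units; above that volume the custom library is the cheaper choice
```

The same structure works for the power argument: replace saved area with saved milliwatts and price the power budget freed up for other blocks.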

Conversations with sales, marketing leads, and product architects are where the hidden, and often more significant, opportunity-cost discussions surrounding I/O libraries happen.  They understand the subjective benefits of a custom I/O library better than the design engineers.   For example, adding an additional I/O protocol to a bank of I/O may open a new market or industry for the product.  If by licensing a custom I/O library you can double or triple your available market space, is that not worth it?  By choosing a free I/O library, what does the loss of that potential market cost you?

Discussing with a marketing lead what we can do with an I/O design is always fun.  As soon as you start to mention new features or electrical interfaces that can easily be added to a set of I/Os, you can begin to see them get excited about potential new markets, potential new customers, and new business opportunities!

One of my favorite questions is, “If you had a wish list for this product’s I/O capability, what would it include?”  There have been many situations where a program director or marketing lead begins mentioning a feature they wanted because they saw a market opportunity but didn’t think it possible.  As soon as we offer to add it in, the excitement is real!  Some of the best custom designs we have done in the past were built off of a wish list of features given to us by the customer, many times with features they didn’t think possible but also features we wouldn’t have considered adding without their input.

Personal favorites among such collaborative designs are our 12V-30V interfaces in standard 40nm and 28nm low-voltage CMOS processes, with no special masks, used for MEMS and RF products.  Additional fun examples are precision tristate-able PWM GPIOs and a specialty die-to-die low-power high-speed interface for MCMs.

Few areas of chip design can enable new markets and unique design socket wins the way I/O features can.  I/O design flexibility and options directly impact the variety of systems and markets a part can be sold into.  By allowing our team to collaborate with our customers’ marketing leads, we have been lucky to design many fascinating libraries for the industry.

At a conference in 2017, I gave a presentation titled “Fear not to Customize.”   In that presentation, I explored several examples of how I/O custom features enabled our customers to leverage new opportunities, grow their markets, and expand their design wins.  The principles of that presentation are still valid today.  The last statement is one of my favorites, “Fear not to customize, instead let your competitors fear it.”

I still stand by that belief, telling my customers always to be bold and open to discussing with us, or requesting, custom I/O features.  In many cases, we have already implemented that feature in a different node; the only fear they should have is the opportunity cost of not customizing.   Product architects and marketing must dream big and consider any design requests that enable new markets and opportunities and expand product impacts on the industry, even if those features seem implausible.  We never know what unique products will come from such collaborations and dreams.

Certus Semiconductor will be present at DAC 2023, so you’ll have an opportunity to learn more about the opportunity costs of using foundry I/O versus high-performance I/O libraries.  More importantly, you’ll have the chance to brainstorm with us new ideas about how a unique I/O design could reimagine your product and your market.

Also Read:

CEO Interview: Stephen Fairbanks of Certus Semiconductor

Certus Semiconductor releases ESD library in GlobalFoundries 12nm Finfet process

Certus Semiconductor becomes member of Global Semiconductor Alliance (GSA)


WEBINAR: Revolutionizing Chip Design with 2.5D/3D-IC Design Technology

by Daniel Nenni on 06-12-2023 at 10:00 am

In the 3D-IC (Three-dimensional integrated circuit) chip design method, chiplets or wafers are stacked vertically on top of each other and are connected using Through Silicon Vias (TSVs) or hybrid bonding.

The 2.5D-IC design method places multiple chiplets alongside each other on a silicon interposer. Microbumps and interconnect wires establish connections between dies whereas TSVs are used to make connections with the package substrate.

Figure 1: 2.5D IC design block diagram
Why do we need 3D-ICs?

Emerging technologies like Artificial Intelligence, machine learning, and high-speed computing require highly functional, high-speed, and compact ICs. 3D-IC design technology offers ultra-high performance and reduced power consumption, making it suitable for multi-core CPUs, GPUs, high-speed routers, smartphones, and AI/ML applications. As the high-tech industry evolves, the need for smaller size and more functionality grows. The heterogeneous integration capability of 3D-IC design provides more functional density in a smaller area. The vertical architecture of 3D-ICs also reduces the interconnect length, allowing faster data exchange between dies. Overall, this advanced packaging technology is a much-needed IC design method to meet the growing demand for speed, more functionality, and less power consumption.

Benefits of 3D-ICs

One key advantage of 3D-ICs is heterogeneous integration. It allows the integration of chiplets in different technology nodes in the same space. Digital logic, analog circuits, memory, and sensors can be placed within a single package. This enables the creation of highly customized and efficient solutions tailored to specific application requirements.

Higher integration density is another benefit of 3D-IC design. By vertically stacking multiple layers of interconnected chiplets or wafers, the available chip area is utilized more efficiently. This increased integration density allows for the inclusion of more functionality within a smaller footprint, which is particularly beneficial in applications where size and weight constraints are critical, such as mobile devices and IoT devices.

3D-ICs also exhibit higher electrical performance. The reduced interconnect length in vertically stacked chips leads to shorter signal paths and lower resistance, resulting in improved signal integrity and reduced signal delay. This translates to higher data transfer rates, lower power consumption, and enhanced overall system performance.
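
A rough way to quantify the interconnect-length point: in a distributed RC wire, resistance and capacitance both grow linearly with length, so the first-order (Elmore) delay estimate grows with length squared. The per-micron resistance and capacitance below are generic placeholders, not any foundry's numbers.

```python
# Sketch: Elmore delay of a distributed RC wire scales ~ length^2, so a
# 10x shorter vertical (3D) route gives ~100x lower wire delay in this
# first-order estimate. The r/c per-micron values are generic placeholders.

def elmore_wire_delay(length_um: float, r_per_um: float = 1.0,
                      c_per_um: float = 0.2e-15) -> float:
    """First-order distributed-RC (Elmore) delay: 0.5 * R_total * C_total."""
    r_total = r_per_um * length_um      # total wire resistance, ohms
    c_total = c_per_um * length_um      # total wire capacitance, farads
    return 0.5 * r_total * c_total      # seconds

planar_route = elmore_wire_delay(1000.0)   # 1 mm cross-die 2D route
stacked_route = elmore_wire_delay(100.0)   # 100 um die-to-die 3D route
# planar_route / stacked_route is ~100: the quadratic payoff of shorter wires
```

In practice repeaters, vias, and bond interfaces temper the quadratic scaling, but the direction of the benefit is exactly what the paragraph above describes.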

With the latest configuration methods like TSMC’s CoWoS (Chip On Wafer on Substrate) and WoW (Wafer on Wafer), which utilize hybrid bonding techniques, the interconnect length is further minimized, leading to reduced power losses and improved performance.

3D-IC technology provides a range of exceptional advantages, including heterogeneous integration, higher integration density, smaller size, higher electrical performance, reduced cost, and faster time-to-market. These advantages make 3D-ICs a compelling solution for advanced chip designs in various industries.

Challenges of 3D-IC Design

Although 2.5D/3D-IC design methods have numerous advantages, these new methodologies have also introduced new physics-related challenges. The structural, thermal, power, and signal integrity of the entire 3D-IC system is more complicated. 3D-IC designers are at the beginning of the learning curve in mastering these integrity challenges during physical implementation of the system. Accurate simulation methods are a must for any chip designer, especially when dealing with 3D-IC. Each component in the 3D-IC system should be examined and validated using highly accurate simulation tools.

Learn more about the latest developments in 3D-IC design, challenges, and simulation, and the key to a successful 3D-IC design, by registering for the replay: Design and Analysis of Multi-Die & 3D-IC Systems by Ansys experts. The presenters also discuss advanced simulation methods to predict possible structural, thermal, power, and signal integrity issues in 3D-IC.

Also Read:

Chiplet Q&A with John Lee of Ansys

Multiphysics Analysis from Chip to System

Checklist to Ensure Silicon Interposers Don’t Kill Your Design


VLSI Symposium – Intel PowerVia Technology

by Scotten Jones on 06-12-2023 at 6:00 am

At the 2023 VLSI Symposium on Technology and Circuits, Intel presented two papers on their PowerVia technology. We received a pre-conference briefing on the technology, embargoed until the conference began, along with the papers.

Traditionally, all interconnects have been on the front side of devices, with signal and power interconnects sharing the same set of interconnect layers. There is a fundamental trade-off between signal routing, where small cross-sectional-area routing lines are required for scaling, and power delivery, where large cross-sectional-area routing lines are needed for low resistance and low voltage drop. Moving power delivery to the backside of the wafer, a Backside Power Delivery Network (BS-PDN), enables optimized signal routing layers on the frontside and optimized power delivery layers on the backside with big, thick power interconnects, see figure 1.

Figure 1. Frontside Versus Backside Power Delivery

As logic technology has advanced the number of interconnect layers required has been steadily growing, see figure 2.

Figure 2. Intel Interconnect Layers

Please note that for recent nodes interconnect layers may vary by a few layers depending on the device.

Connections from the outside world to a device are made through the top interconnect layers, which means that to reach the devices, power must travel down through the via chain of the entire interconnect stack, see figure 3.

Figure 3. Power Routing Challenges

The example in figure 3, from TSMC’s 3nm technology, shows a via chain resistance of 560 ohms, versus imec reports of ~50 ohms for a backside nano-via. One of the key advantages of BS-PDN becomes clear.
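
A quick Ohm's-law sketch makes the gap concrete. The two resistances are the figures quoted above; the 1 mA delivery current is an assumed placeholder for illustration, not a number from the papers.

```python
# V = I * R drop across the power-delivery path, using the resistances
# quoted above. The 1 mA delivery current is a hypothetical placeholder.

FRONTSIDE_VIA_CHAIN_OHMS = 560.0   # frontside via chain (TSMC 3nm example)
BACKSIDE_NANO_VIA_OHMS = 50.0      # backside nano-via (imec report)

def ir_drop_volts(current_amps: float, resistance_ohms: float) -> float:
    """Voltage lost across the delivery path (Ohm's law)."""
    return current_amps * resistance_ohms

frontside_drop = ir_drop_volts(1e-3, FRONTSIDE_VIA_CHAIN_OHMS)  # 0.56 V
backside_drop = ir_drop_volts(1e-3, BACKSIDE_NANO_VIA_OHMS)     # 0.05 V
```

At sub-1V supply voltages, losing over half a volt through the delivery path (versus a twentieth of a volt) is the whole argument for moving power to the backside.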

Another advantage that Intel is talking about is cost. BS-PDN relaxes the requirements for metal zero, lowering cost for the most expensive interconnect layer, at the expense of relatively large-pitch backside metal layers.

There are multiple approaches to BS-PDN. Imec is advocating Buried Power Rails (BPR) as the connection point for BS-PDN. In figure 4, Intel shows a density advantage for PowerVia versus BPR.

Figure 4. Buried Power Rail Versus Power Via

I have two comments about this. First, my sense is the industry is reluctant to implement BPR because it requires metal buried in the wafer before transistor formation. In my discussions with imec, they admit this reluctance but believe BPR will eventually be needed. Second, imec believes BPR can also connect into the side of the device without going up to metal 0 and achieve the same or better density than PowerVia; this is an area of contention between the technologies.

To minimize risk, instead of running its first PowerVia tests on Intel’s 20A process, which also introduces RibbonFET (horizontal nanosheets), Intel has run PowerVia on the i4 FinFET process it is currently ramping up in production.

Figure 5 summarizes the results seen with PowerVia on i4. PowerVia has demonstrated improved Power, Performance, and Area (PPA).

Figure 5. Power Via integrated into i4

Figure 6 illustrates the area improvement and figure 7 illustrates the power and performance advantages.

Figure 6. Power Via Scaling

From figure 6 it can be seen that PowerVia reduces the cell height while also relaxing metal 0 from a 30nm pitch to a 36nm pitch. The relaxation in pitch likely allows a single-patterned EUV layer versus multi-patterned EUV.

Figure 7. IR Droop and Fmax

In figure 7 it can be seen that IR droop is reduced by 30% and Fmax is increased by 6%.

Finally, in figure 8 we can see that i4 + PowerVia yield is tracking i4 yield offset by 2 quarters.

Figure 8. i4 + Power Via Yield

With PowerVia due to be introduced in 2024 on Intel’s 20A process in the first half and 18A in the second half, it appears that PowerVia should have minimal impact on yield.

It is interesting to note that Intel is planning to introduce PowerVia in 2024. Samsung and TSMC have both announced BS-PDN for their second-generation 2nm nodes due in 2026, giving Intel a 2-year lead in this important technology. My belief is twofold: one, Intel is continuing to make progress on the timely introduction of new technologies, and two, Intel likely prioritized BS-PDN because they are more focused on pure performance than the foundries.

Here is the official Intel press release:

https://www.intel.com/content/www/us/en/newsroom/news/powervia-intel-achieves-chipmaking-breakthrough.html

Also Read:

IEDM 2022 – Ann Kelleher of Intel – Plenary Talk

Intel Foundry Services Forms Alliance to Enable National Security, Government Applications

Intel and TSMC do not Slow 3nm Expansion

How TSMC Contributed to the Death of 450mm and Upset Intel in the Process


Podcast EP167: What is Dirty Data and How yieldHUB Helps Fix It With Carl Moore

by Daniel Nenni on 06-09-2023 at 10:00 am

Dan is joined by Carl Moore, a Yield Management Specialist at yieldHUB. Carl is a semiconductor and yield management expert with more than 35 years of experience in the industry. Carl has held technical management positions across product and test engineering, assembly, manufacturing, and design.

Carl explains what “dirty data” is from a semiconductor test and yield management perspective. He explains the sources of dirty data, the negative impact it can have on an organization and its customers, and how yieldHUB partners with its customers to analyze and fix dirty data at the source.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Sean Wei of Easy-Logic

by Daniel Nenni on 06-09-2023 at 6:00 am

Dr. Wei has served as CEO & CTO of Easy-Logic since 2020.  Prior to this role, Dr. Wei had served as CTO since 2014, during which time he constructed the core algorithm and the tool structure of EasyECO.  As CEO, Dr. Wei focuses on building a strong company infrastructure.  In his CTO role he interfaces with strategic ASIC design customers and leads the field support efforts to seamlessly align EasyECO’s technology with emerging industrial needs.  Before pursuing his PhD degree, Wei worked at Agate Logic as an FPGA P&R algorithm developer.

Dr. Wei received his PhD in Computer Engineering from the Chinese University of Hong Kong and both his MS and BS degrees in Computer Science and Technology from Tsinghua University.

Tell us about Easy-Logic

Easy-Logic was founded in 2014 by a group of PhD graduates and their supervisor from the Chinese University of Hong Kong.  While at the university, they analyzed the EDA solutions available to the ASIC design industry and realized that functional ECO demands were growing at an alarming rate, but the EDA industry wasn’t responding.

They participated in the CAD contest at the ICCAD International Conference using the functional-ECO-related algorithms developed in their research and won the world championship three times in a row (2012-2014).  It is worth mentioning that in 2012 the contest subject was functional ECO, provided by Cadence, and their algorithm performed twice as well as any other contender’s.

With a strong combination of the required product development expertise, Easy-Logic set its course for empowering the ASIC project teams to quickly react to functional ECOs at a substantially lower overall cost.

After the product EasyECO was first introduced in 2018, the positive response from the design industry surprised the young entrepreneurs.  The number of customer evaluation requests overwhelmed the startup, and Easy-Logic quickly became a rising star in the EDA industry.  Currently, the customer base extends across Asia and North America and includes many of the world’s top-tier semiconductor providers.

What problems are you solving?

Easy-Logic Technology is a solution provider for Functional ECO issues in the ASIC design.

A functional ECO requirement occurs when there is a change in the RTL code that fixes, or modifies, the chip function.  A functional ECO means inserting only a small patch into the existing design (i.e., pre-layout, cell routing, or even post-mask) to make sure the logic function of the patched circuit is consistent with the revised RTL.  The purpose is to quickly implement the RTL change without re-spinning the whole design.

The design team may receive Functional ECO requests at any stage of the design process.

Depending on the design stage, the required RTL change ripples through design constraints such as multi-clock domains and low-power design rules, DFT test coverage requirements, physical restrictions on the layout change, eventual metal changes, and timing closure.  There is no reliable correlation between the complexity of an RTL change and the success of the layout ECO, even if the RTL change looks simple; an ECO failure means a project re-spin.

At present, most IC design companies still need to invest a lot of manual work in functional ECO because market-leading EDA tools are not yet capable of effectively addressing challenging ECO issues.  Each design revision mentioned above requires a skilled engineer to break down the problem based on the nature of the RTL change and the characteristics of the ASIC design.

Easylogic ECO’s automatic design flow efficiently solves functional ECO problems for design teams.

What application areas are your strongest?

Almost all ASIC designs require functional ECOs; however, each application has its own unique ECO challenges.  Fortunately, EasylogicECO is structured to handle them all.

For example,

  1. HPC designs undergo deep optimization, which leads to larger differences between the netlist and the RTL structure, posing greater challenges for ECO algorithms.
  2. AI chips comprise a significant amount of arithmetic logic, requiring specialized algorithms for arithmetic logic ECO.
  3. The automotive area has challenges with scan chain fixing, as test coverage is critical.
  4. Consumer products, such as panel controllers, have challenges adopting subsequent functional ECOs, as their products need to be versatile and are revised frequently.

EasylogicECO’s core optimization algorithm lays the foundation for all general optimizations, and algorithms designed for each specific application scenario, built on top of the general algorithm, enable the tool to identify and handle each application’s challenges automatically.

What keeps your customers up at night?

As mentioned earlier, there is no guarantee of success for a functional ECO, and each failed functional ECO job means a project delay of weeks to months. The closer it gets to the tape-out stage, the greater the challenges in achieving success.  A re-spin when the design is close to tape-out might even kill the product, so the enormous pressure to complete the ECO task successfully, within the shortest turnaround time, sometimes pushes designers over the edge.

Functional ECO is never a simple job.  Its importance has become an industry consensus, and yet, to this day, major EDA companies still cannot provide a satisfactory solution.  The nagging uncertainty of whether an ECO task will succeed is extremely stressful.

What does the competitive landscape look like and how do you differentiate?

Most ASIC design companies still must invest significant manpower in complex functional ECO cases, as the solutions provided by major EDA vendors cannot get the job done efficiently.

Easy-Logic is a newcomer in the functional ECO landscape.  Easy-Logic’s flagship product, EasylogicECO, deploys patented optimization algorithms to create a combination of

  1. The smallest ECO patch
  2. The easiest tool to address complex cases
  3. The most suitable tool flow to address the depth of ECO design changes

This combination differentiates EasylogicECO from other solutions.

What new features are you working on? 

Functional ECO requires a complete design flow and toolchain.  Following a functional ECO, DFT ECO, P&R ECO, timing ECO, and metal ECO are also required.  Currently, no complete solution covers all these needs.  Easy-Logic is committed to developing a toolchain for the complete functional ECO process, enabling customers to easily navigate from an RTL change to a GDSII change.

How do customers normally engage with Easy-Logic?

The easiest way is to reach the Easy-Logic Customer Response Team through the Contact Us form on the Easy-Logic website.  The Easy-Logic field team will reach out shortly.

Now that travel is open, Easy-Logic will appear at many conferences, the next being DAC 2023 in San Francisco.  Please make an appointment before the event, or simply drop by, for a detailed solution discussion.

Also Read:

CEO Interview: Issam Nofal of IROC Technologies

CEO Interview: Ravi Thummarukudy of Mobiveil

Developing the Lowest Power IoT Devices with Russell Mohn


Getting the most out of a shift-left IC physical verification flow with the Calibre nmPlatform

Getting the most out of a shift-left IC physical verification flow with the Calibre nmPlatform
by Peter Bennet on 06-08-2023 at 10:00 am

Correct Verify Debug

Who first came up with the term shift-left? I’d assumed Siemens EDA, as they use it so widely. But their latest white paper on the productivity improvements possible with shift-left Calibre IC verification flows sets the record straight: a software engineer called Larry Smith bagged the naming rights in a 2001 paper (leapfrogging hardware engineers, who have been doing prototyping for decades).

It’s well known that catching problems earlier in the design process can reduce the rework cost by orders of magnitude.

While the detect, debug and correct costs might not vary much through the flow, it’s the rework costs that escalate, as fixes require longer correction and verification loops, with potential hardware respins.
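To make the escalation concrete, here is a minimal sketch of a rework-cost model. The stage names and 10x-per-stage multipliers are hypothetical, chosen only to illustrate the orders-of-magnitude effect the paper describes, not figures from Siemens.

```python
# Illustrative model: the cost of reworking a violation grows roughly
# an order of magnitude for each stage it slips past before detection.
# Multipliers are hypothetical, for illustration only.
STAGES = ["RTL", "synthesis", "place-and-route", "signoff", "silicon"]
REWORK_MULTIPLIER = {stage: 10 ** i for i, stage in enumerate(STAGES)}

def rework_cost(stage: str, base_cost: float = 1.0) -> float:
    """Relative cost to rework a violation first detected at `stage`."""
    return base_cost * REWORK_MULTIPLIER[stage]

for stage in STAGES:
    print(f"{stage:>15}: {rework_cost(stage):>8.0f}x base cost")
```

Under this toy model, a violation caught at RTL costs 1x to fix, while the same violation surviving to silicon costs 10,000x, which is the economic argument for shifting verification left.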

Not all design errors and violations are created equal – some have higher impact and fixing costs than others. Design checks fall into two categories: strictly functional checks (binary pass/fail) and attribute checks, which are qualitative (checked against values) and may offer more scope to over-design earlier or perhaps waive later.

Shift-left strategies assume that early violation detection is as reliable as the 100% level achieved at signoff. Let’s consider what actually happens.

Differences in verification engines or rule interpretation between early and signoff checks may produce false positive and/or false negative violations. So Siemens make a strong point that a shift-left strategy benefits hugely from using the same engine for physical design checks throughout the flow.

But the toughest challenge in handling large, dirty (early-stage or incomplete) designs is the sheer volume of violations. Unless we do something smart, the signal-to-noise ratio here can get pretty bleak. And anyone who’s done much verification will know that huge error and warning reports often cluster into similar types with common causes. Figuring out these patterns takes time, even for experienced designers.

Features we’d like in a shift-left flow might include:

Optimizing shift-left in the Calibre nm Platform flow

How does a Calibre flow measure up to these challenges?

A Calibre shift-left flow must include the whole range of physical verification checks – LVS, DRC, ERC, PERC, DFM and reliability – as well as design modifications like metal fill and in-design fixing (DRC: Design rule checking; LVS: Layout-vs-schematic; ERC: Electrical rule checking; PERC: Programmable ERC; DFM: Design for manufacturing).

We can’t just optimize the flow without first making sure the tools have the capabilities to do the necessary early design checking. Calibre has added many features here and considers these from four aspects:

Early-stage verification includes equation-based design rule checking, intelligent pattern matching, advanced property extraction and clustering, and embedded machine learning. Reliability is addressed with a set of pre-formatted Calibre PERC™ checks and the Calibre language infrastructure supports signoff verification capability during design and implementation.

Execution optimization covers run configuration and management, including automated run invocation and simplified setup. Calibre nmDRC™ Recon and nmLVS™ Recon tools minimize the rules and data needed for early-stage DRC and LVS verification. Automated check selection and design partitioning allow designers to quickly find and fix the real errors while filtering out the irrelevant errors in incomplete designs.

Debug includes color mapping to help minimize and group results and identify root causes quickly, efficiently, and accurately. Intelligent debug signals speed up determining optimal corrections. Calibre RealTime Custom and Digital tools give immediate DRC feedback during design and implementation using standard foundry-qualified Calibre rule decks. Smart automated waiver processing avoids repeating already waivered violations.

Correction is improved with automated, Calibre-correct layout enhancements and repairs that are back-annotated to implementation tool design databases. Calibre’s DFM toolsuite provides a wide range of correct-by-construction layout modifications and optimizations that enhance both manufacturing robustness and design quality. Combining fixing and verification in the same tool also saves license usage and run time.

Another recent Calibre white paper provides a detailed summary around this diagram.

Some of these operations – like smart automation and recognizing patterns in complex result sets – are a natural fit for Artificial Intelligence (AI) techniques, so it’s no surprise to see these are widely used. There’s more detail around the diagram below in the paper.

Summary

Design flows often feel like they were built “tools-up”, with usability aspects added as an afterthought. It’s refreshing to see a more “flow-down” approach here and perhaps no surprise that comes from Siemens as a historically system-centric EDA company.

Much as we’ve seen flows try to consolidate around common timing engines, Siemens argue a strong case for making signoff qualified Calibre PV checks available throughout the design flow.

Siemens have made some really interesting progress with these Calibre shift-left capabilities and clearly see this as a continuing journey with plenty more to come.

Find out more in the original white paper here:

Improve IC designer productivity and design quality with Calibre shift-left solutions; published 3 May 2023

https://resources.sw.siemens.com/en-US/white-paper-calibre-shift-left-solutions-optimize-ic-design-flow-productivity-design

Related Blogs and Podcasts

I found these closely-related white papers very useful:

Michael White, “Optimize your productivity and IC design quality with the right shift left strategy,” Siemens Digital Industries Software; published 01 July 2022, updated 10 March 2023.

https://resources.sw.siemens.com/en-US/white-paper-optimize-your-productivity-and-ic-design-quality-with-the-right-shift-left

The four foundational pillars of Calibre shift-left solutions for IC design & implementation flows, published 4 May 2023.

https://resources.sw.siemens.com/en-US/white-paper-the-four-foundational-pillars-of-calibre-shift-left-solutions-for-ic-design

Here’s the original software engineering article introducing the shift-left concept:

Larry Smith, “Shift-Left Testing,” Dr. Dobb’s, Sept 1, 2001.

https://www.drdobbs.com/shiftleft-testing/184404768

Also Read:

Securing PCIe Transaction Layer Packet (TLP) Transfers Against Digital Attacks

Emerging Stronger from the Downturn

Chiplet Modeling and Workflow Standardization Through CDX


Democratizing the Ultimate Audio Experience

Democratizing the Ultimate Audio Experience
by Bernard Murphy on 06-08-2023 at 6:00 am

3D Audio

I enjoy talking with CEVA because they work on such interesting consumer products (among other product lines). My most recent discussion was with Seth Sternberg (Sensors and Audio software at CEVA) on spatial, or 3D, audio. The first steps toward a somewhat immersive audio experience were stereo and surround sound, placing sound sources around the listener. A little better than mono audio, but your brain interprets the sound as coming from inside and fixed to your head, because it’s missing important cues like reverb, reflection, and timing differences at each ear. 3D audio recreates those cues, allowing the brain to feel the sound source is outside your head, but still fixed to it: move your head to the left and the band moves to the left; move to the right and the band moves to the right. Connecting head movements to the audio corrects this last problem, fixing the sound source in place. When you move your head, you hear a change the same way you would in the real world. This might seem like a nice-to-have, but it has major implications for user experience and for reducing the fatigue induced by lesser implementations.

Why should we care?

Advances in this domain leverage large markets, especially gaming (~$300B), and gaming doesn’t just drive game sales. If you doubt gaming is important, remember that last year gaming led NVIDIA revenues and is still a major contributor. As a further indicator, the headphones/earphones market is already above $34B and expected to grow to $126B by 2030. Apple and Android 13 provide proprietary spatial audio solutions for music and video services that are already attracting significant attention. According to one reviewer, there are already thousands of Apple Music songs encoded for 3D. Samsung calls its equivalent 360 Audio, working with its Galaxy Buds Pro and content encoded for Dolby Atmos (also supported by Apple’s Spatial Audio). Differentiating on the user audio experience is a big deal.

The music option is interesting but I want to pay special attention to gaming. Given an appealing game, the more immersive the experience the more gamers will be drawn to that title. This depends in part on video action of course, but it also depends on audio well synchronized both in time and in player pose with the video. You want to know the difference between footsteps behind you or in front. When you turn your head to confirm, you expect the audio to track with your movement. If you look up at a helicopter flying overhead, the audio should track. Anything less will be unsatisfying.

Though you may not notice at first, poor synchronization in timing and pose can also become tiring. Your brain tries to make sense of what should be correlated visual and audible stimuli. If these don’t correspond, it must work harder to make them align. An immersive experience should enhance excitement, not fatigue, and game makers know it. Incidentally, long latencies and position mismatch between visual and audio stimuli are also thought to be a contributing factor in Zoom fatigue. Hearing aid wearers watch a speaker’s lips for clues to reinforce what they are hearing; they also report fatigue after extended conversation.

In other words, 3D audio is not a nice-to-have. Product makers who get this right will crush those who ignore the differentiation it offers.

To encode or not to encode

In the early days of surround sound, audio from multiple microphones was encoded in separate channels, ultimately decoded to separate speakers in your living room. Then “up-mixing” was introduced, using cues in the audio to infer a reasonable assignment of source directions to support 5.1 or 7.1 surround sound. This turns out to be a pretty decent proxy for pre-encoding and is certainly much cheaper than re-recording and encoding original content in multiple channels. If richer information is available – stereo, true 5.1 or 7.1, or ambisonics – 3D audio should start with that. Otherwise, up-mixing provides a way for 3D audio to deliver a good facsimile of the real thing.
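The basic idea behind up-mixing can be sketched with a simple passive matrix: correlated (mid) content steers toward a center channel, decorrelated (side) content toward surround. This is a deliberately crude illustration, not how CEVA or any particular product up-mixes; real up-mixers add filtering, steering logic, and psychoacoustic cues.

```python
# A minimal passive-matrix up-mix sketch: derive center and surround
# channels from a stereo pair using mid/side sums and differences.
# Illustrative only; production up-mixers are far more sophisticated.
def upmix_stereo(left, right):
    """Map stereo samples to a crude (L, R, center, surround) channel set."""
    center = [(l + r) / 2 for l, r in zip(left, right)]    # mid: correlated content
    surround = [(l - r) / 2 for l, r in zip(left, right)]  # side: decorrelated content
    return left, right, center, surround

# Identical L/R samples land in center; opposing samples land in surround.
L, R, C, S = upmix_stereo([1.0, 0.5], [1.0, -0.5])
print(C, S)
```

The first sample (identical in both ears) maps entirely to center, while the second (opposite-phase) maps to surround, which is the directional cue a real up-mixer would exploit.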

The second consideration is where to render the audio: on the phone/game station or in the headset. This choice matters for head tracking and latency. Detecting head movements obviously must happen in the headset, but most commonly the audio rendering is handled in the phone/gaming device. Sending head-movement information back from the headset to the renderer adds latency on top of rendering. This round trip over Bluetooth can add 200-400 milliseconds, a very noticeable delay between the visual and audible streams. Apple has some proprietary tricks to work around this issue, but these are locked into an Apple-exclusive ecosystem.

The ideal and open solution is to do the audio rendering and motion detection in the headset for minimal total latency.
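A back-of-the-envelope latency budget shows why headset-side rendering wins. The specific numbers below are hypothetical placeholders (only the 200-400 ms Bluetooth round-trip range comes from the article); the point is the structure of the two paths, not the exact values.

```python
# Hypothetical motion-to-audio latency budget for the two architectures.
# Numbers are illustrative assumptions, not measured product values.
def total_latency_ms(render_in_headset: bool,
                     bt_one_way_ms: float = 150.0,  # one Bluetooth hop
                     render_ms: float = 10.0,       # audio rendering time
                     imu_ms: float = 2.0) -> float: # head-motion detection
    """Head-motion-to-audio latency for phone-side vs headset-side rendering."""
    if render_in_headset:
        # IMU and renderer share the headset: no Bluetooth round trip.
        return imu_ms + render_ms
    # Head pose travels headset -> phone, rendered audio travels back.
    return imu_ms + 2 * bt_one_way_ms + render_ms

print(total_latency_ms(render_in_headset=False))  # phone-side rendering
print(total_latency_ms(render_in_headset=True))   # headset-side rendering
```

With these assumed numbers the phone-side path costs 312 ms against 12 ms in the headset, an order-of-magnitude gap dominated entirely by the Bluetooth round trip.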

The RealSpace solution

In May of this year, CEVA acquired the VisiSonics spatial audio business. They have integrated it with the CEVA MotionEngine software for dynamic head tracking, providing precisely the solution described above. They also provide plugins for game developers who want to go all the way to delivering content fully optimized for 3D audio. The product is already integrated in chips from a couple of Chinese semiconductor companies and a recently released line of hearables in India. Similar announcements are expected in other regions.

Very cool technology. You can read about the acquisition HERE, and learn more about the RealSpace product HERE.

Also Read:

DSP Innovation Promises to Boost Virtual RAN Efficiency

All-In-One Edge Surveillance Gains Traction

CEVA’s LE Audio/Auracast Solution


Nominations for Phil Kaufman Award, Phil Kaufman Hall of Fame Close June 30

Nominations for Phil Kaufman Award, Phil Kaufman Hall of Fame Close June 30
by Paul Cohen on 06-07-2023 at 10:00 am

PK Generic

Plan ahead now because Friday, June 30, is the deadline to submit nominations for the Phil Kaufman Award and the Phil Kaufman Hall of Fame for anyone you think is deserving of these honors. If you haven’t given it any thought, please consider nominating someone.

Before we look at both and the nomination requirements, here’s a thumbnail sketch of Phil Kaufman (1942-1992) and the reasons why we continue to honor his memory. Phil Kaufman was an industry pioneer who turned innovative technologies into commercial businesses that have benefited electronic designers. At the time of his death, he was president and CEO of Quickturn Systems, developer of hardware emulators. Quickturn’s products helped designers speed the verification of complex designs. Previously, he headed Silicon Compiler Systems, an early provider of high-level EDA tools that enabled designers to efficiently develop chips.

The annual Phil Kaufman Award for Distinguished Contributions to Electronic System Design was first presented in 1994 to Dr. Herman Gummel (1923-2022) of Bell Labs (now Nokia Bell Labs). Since then, an impressive list of notables from across the spectrum of our ecosystem have received the award.

Sponsored by the Electronic System Design Alliance (ESD Alliance) and the IEEE Council on Electronic Design Automation (CEDA), it honors individuals who have made a visible and lasting impact on electronic design. Their influence could be as a C-level executive or someone setting industry direction or promoting the industry, a technologist or engineering leader or a professional in education and mentorship. Dr. Gummel, for example, was honored for his fundamental contributions to central EDA areas including the integral charge control model for bipolar junction transistors known as the Gummel-Poon model.

Per a policy set with the IEEE, only living contributors are eligible to receive awards. Thus, the Phil Kaufman Hall of Fame was introduced in 2021 by the ESD Alliance and the IEEE CEDA to honor deceased individuals who made significant and noteworthy creative, entrepreneurial and innovative contributions and helped our community’s growth. As Bob Smith, executive director of the ESD Alliance, said at the time: “Many contributors to our success died before being recognized for their efforts shaping our community. The Phil Kaufman Hall of Fame changes that.”

Our first recipients in 2021 were Jim Hogan (1951-2021) and Ed McCluskey (1929-2016). Jim Hogan was managing partner of Vista Ventures, LLC, and an experienced senior executive who worked in the semiconductor design and manufacturing industry for more than 40 years. Ed McCluskey, a professor at Stanford University, sustained a relentless pace of fundamental contributions to efficient and robust design, high-quality testing and reliable operation of digital systems. Mark Templeton (1958-2016) was the 2022 recipient. Artisan Components (now Arm), where he served as CEO, catalyzed the increasing use of IP as major components in chip designs. At the time of his death, he was managing director of investment firm Scientific Ventures and a Lanza techVentures investment partner and board member.

How to Nominate
Selections for the Phil Kaufman Award and the Phil Kaufman Hall of Fame are determined through a nomination process reviewed by the ESD Alliance and IEEE CEDA Kaufman Award selection committees. To download a nomination form, go to: Phil Kaufman Award or Phil Kaufman Hall of Fame.

About the ESD Alliance
The ESD Alliance, a SEMI Technology Community, acts as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry. With a variety of programs for member companies, it represents the electronic system and semiconductor design ecosystem for technical, marketing, economic and legislative issues affecting the entire industry.

Follow SEMI ESD Alliance

www.esd-alliance.org

ESD Alliance Bridging the Frontier blog

Twitter: @ESDAlliance

LinkedIn

Facebook

Also Read:

SEMI ESD Alliance CEO Outlook Sponsored by Keysight Promises Industry Perspectives, Insights

Cadence Hosts ESD Alliance Seminar on New Export Regulations Affecting EDA and SIP March 28

2022 Phil Kaufman Award Ceremony and Banquet Honoring Dr. Giovanni De Micheli