
Project-Centric Design Process, or IP-centric
by Daniel Payne on 04-14-2020 at 10:00 am


How do most IC design teams organize their work during the design process?

Most design teams would say that they organize their work into a project-centric view, and that at the beginning of the process they use a tool for requirements management, maybe a bug tracker, or some design management tool. On the four IC designs that I worked on in the ’70s and ’80s, each one took a project-centric view, and there was virtually zero IP reuse going on.

Let’s take a closer look at some common issues that arise with a project-centric approach to SoC design.

Scalability

A team starts out on a new SoC, and then someone in the CAD group sets up a new project in each of their tools, like:

  • Requirements management
  • Bug tracking
  • Design Management

Most electronic products tend to reuse cells, blocks, modules and sub-systems from previous products, but how does a project-centric flow account for any of this IP reuse?

Any dependencies between these IP blocks are not really handled by point tools that silo design data. Each new project then gets a new DM server instance, and who is going to maintain these servers for years or even decades?

If your company has four concurrent projects going on, then who is tracking what is commonly used between the projects, when all of the tools are set up per project?

If your tools only understand the scope of what’s inside each project, then there’s a gap in knowing what happens when a common cell, block, module or sub-system (IP) is changed or a bug is fixed, creating a new version.

Collaboration

Using common IP blocks within multiple projects makes collaboration a challenge, because each project has its own permission settings, with individual servers set up per project. Who wants to stop an ongoing project to request access to all of the IP blocks being used?

Traceability

When purchasing design or verification IP you have to sign a license agreement with each vendor, and these vendors want to track how many instances of their valuable IP are being used to ensure that the agreement terms are being met. You really want to know how all IP is being used, across all projects, not just within one project.

Countries have laws in place regarding how silicon IP is being used or exported, and for American companies the U.S. Department of State has defined the International Traffic in Arms Regulations (ITAR). Your company needs to know how each IP block complies with ITAR or other local requirements.

If a bug is found inside some IP block, and that block is re-used in multiple projects, then how does each project team hear about the bug fix?

IP-centric Design Process

There is an alternative to a project-centric design process and its challenges: an IP-centric design process. Instead of each project being a silo of design data, each project can be treated as an IP block within a connected hierarchy of other IP blocks, as shown below:

With this IP-centric approach, each project continues to have its own permissions and DM backend as desired. IP metadata travels with each IP block, so that all users of an IP block have the information they need when reusing it. Even dependencies from bug tracking tools and requirements tools are integrated into this IP-centric view.

Scaling works well because there’s a centralized server that can be quickly updated once there’s an IP update, and its effects are then seen in all projects. This is the approach that Methodics has taken with Percipient, their IP Lifecycle Management (IPLM) tool. Shown below are four projects being managed with the Percipient central server.

Your company can even follow a Zero-Downtime upgrade policy while using a central server approach.

When a bug is found for an IP block in Project A, an engineer files a bug report under Project A. Engineers on Projects B and C then see that a new bug was just filed against the re-used IP block.
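To make that flow concrete, here is a minimal sketch, purely illustrative and not Methodics’ implementation, of how a central IP catalog could track which projects consume a shared block and notify all of them when a bug is filed (the project and IP names are invented):

```python
# A minimal sketch (not Methodics' actual implementation) of how a central
# IP catalog could record which projects consume which IP blocks and
# notify every consumer when a bug is filed against a shared block.
from collections import defaultdict

class IPCatalog:
    def __init__(self):
        # IP name -> set of project names that instantiate it
        self.consumers = defaultdict(set)

    def add_usage(self, project, ip_name):
        """Record that a project re-uses a given IP block."""
        self.consumers[ip_name].add(project)

    def file_bug(self, ip_name, description):
        """File a bug against an IP block and return every affected project."""
        affected = sorted(self.consumers.get(ip_name, []))
        for project in affected:
            print(f"[{project}] bug filed on '{ip_name}': {description}")
        return affected

catalog = IPCatalog()
catalog.add_usage("Project A", "usb3_phy")     # hypothetical names
catalog.add_usage("Project B", "usb3_phy")
catalog.add_usage("Project C", "usb3_phy")
catalog.file_bug("usb3_phy", "FIFO overflow under back-to-back transfers")
```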

Summary

Times have changed, and IC designs are getting larger every day, so the approach that your company and teams take makes a difference. The project-centric approach worked well enough for small designs with little IP reuse; however, for today’s SoC projects you’d be better served by the IP-centric approach offered by Methodics. I like how they’ve integrated with other bug tracking and DM tools, so you don’t have to ask your CAD group to customize lots of point tools to play well together.

Here’s a final view of how Percipient provides several useful management features.

To read the complete 10-page white paper, browse here.




Innovation in Verification April 2020
by Bernard Murphy on 04-14-2020 at 6:00 am


This blog is the next in a series in which Paul Cunningham (GM of the Verification Group at Cadence), Jim Hogan and I pick a paper on a novel idea we appreciated and suggest opportunities to further build on that idea.

We’re getting a lot of hits on these blogs, but we would really like to get feedback as well.

The Innovation

Our next pick is Metamorphic Relations for Detection of Performance Anomalies. The paper was presented at the 2019 IEEE/ACM International Workshop on Metamorphic Testing. The authors are from Adobe, the University of Wollongong, Australia and the Swinburne University of Technology, Australia.

Metamorphic testing (MT) is a broad principle for getting around the oracle problem – not having a golden reference to compare against for correctness. Instead it checks relationships expected to hold between related tests: maybe a distribution of runtimes, or correspondence between two software runs with code changes, or many other examples.

The authors applied the principle to test performance in software called a tag manager. Tags are slivers of JavaScript inserted in a web page to collect information from page views. Consumer-focused companies may have 50+ tags on a page, a maintenance headache. Tag managers allow marketing to quickly update these without web expertise, at the expense of some added page load time.

The authors tested load times for an Adobe tag manager. Since multiple factors influence load, they expected a distribution. The metamorphic relationship they chose was that load times with tagging should be shifted (by tag support overhead) from load times without tagging, but that distributions should otherwise be similar.

The relationship held in most cases except one where the managed distribution became bimodal. This they tracked to a race condition between different elements of the code. Depending on execution order a certain function would or would not run, causing the bimodal distribution. This was a bug; the function should have run in either case. When fixed, the distribution again became unimodal. The authors also describe how they automated this testing.
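To make the idea concrete, here is a minimal sketch using synthetic data, not the authors’ harness, of how such a metamorphic check might be automated: align the tagged and untagged load-time distributions by the median overhead, compare them with a two-sample KS test, and flag bimodality by comparing one- and two-component Gaussian mixture fits:

```python
# A minimal sketch, with synthetic data, of automating a metamorphic check
# on page-load-time distributions. Not the paper's actual code.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
untagged = rng.normal(loc=1.00, scale=0.10, size=500)   # load times in seconds
tagged   = rng.normal(loc=1.25, scale=0.10, size=500)   # ~0.25 s tagging overhead

# Metamorphic relation: after removing the median overhead, the two
# distributions should look alike.
shift = np.median(tagged) - np.median(untagged)
stat, pvalue = ks_2samp(tagged - shift, untagged)
print(f"KS p-value after shift alignment: {pvalue:.3f}")   # small p-value -> relation violated

def looks_bimodal(samples, margin=10.0):
    """Crude bimodality flag: does a 2-component Gaussian mixture fit
    much better (lower BIC) than a single Gaussian?"""
    x = samples.reshape(-1, 1)
    bic1 = GaussianMixture(n_components=1, random_state=0).fit(x).bic(x)
    bic2 = GaussianMixture(n_components=2, random_state=0).fit(x).bic(x)
    return bic2 + margin < bic1

print("tagged distribution bimodal?", looks_bimodal(tagged))
```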

Paul

I like this. I see it as a way to do statistical anomaly-based QA. You compare a lot of runs, looking at distributions to spot bugs. I see a lot of applications: anything performance-related, heuristic-based, or machine-learning-based will be naturally statistical. Distribution analyses can then reveal more complex issues than pass/fail analyses. MT gives us tools to find those kinds of problems.

For functional verification, this is a new class of coverage we can plan and track alongside traditional static and dynamic coverage metrics. I’m excited by the idea that a whole new family of chip verification tools could be envisioned around MT, and I welcome any startups in this space who want to reach out to me.

The main contribution of this paper assumes that, given some performance metric with random noise, you’re going to have a distribution. Mu and sigma alone don’t fully characterize that distribution. If it’s multi-modal, maybe there’s a race? Now I’m looking at distribution modality to detect things like race conditions. That’s great, and it got me thinking about how we might use this in our QA.

They discuss mechanics to automate detecting bimodality, but then raise another possibility – using machine learning to check for changes between distributions. A mathematical characterization may not be as general as training a neural network to detect anomalies between different sets of runs. This is similar to what credit card companies do in analyzing your spending patterns: if an anomaly is detected, maybe you’ve been hacked.
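As a rough illustration of that direction, here is a minimal sketch of anomaly detection between sets of runs; it swaps in a simple isolation forest for the neural network suggested above, and the per-run features are invented:

```python
# A minimal sketch of statistical anomaly detection between sets of runs,
# swapping in an isolation forest for the neural network suggested above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-run feature vectors: [load time (s), bytes transferred, request count]
reference_runs = np.column_stack([
    rng.normal(1.0, 0.1, 1000),
    rng.normal(500e3, 20e3, 1000),
    rng.normal(40, 3, 1000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(reference_runs)

# New runs to screen: the last one is deliberately anomalous.
new_runs = np.array([
    [1.02, 505e3, 41],
    [0.98, 492e3, 39],
    [2.40, 900e3, 95],
])
print(detector.predict(new_runs))   # +1 = looks normal, -1 = anomaly
```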

MT could find problems sooner and at finer levels than traditional software testing. The latter will find obvious memory leaks or race conditions, but MT plus statistical analysis may probe more sensitively for problems that might otherwise be missed.

Finally, the authors discuss outliers in the distribution, noting that these should remain similar between distributions. I’m excited to see how they develop this further, how they might detect differences in outliers, and what bugs those changes might uncover.

Generally, I see significant opportunity in exploiting these ideas.

Jim

This is the first of the papers we’ve looked at in this series which to me is more than just a feature. This paper would definitely be worth putting money behind, trying to get to production. It looks like a product, perhaps a new class of verification tool. It might even work as a startup.

It reminds me of Solido and SPICE. We used similar techniques to get beyond the regular statistical distributions – they were at six sigma already, very hard to get better. They had to start doing stuff like this to go further. I heard “no-one’s going to buy more, they already have SPICE”. Well, they did buy a lot more. There is appetite out there for innovation of this kind.

I’m also very interested in the security potential, especially for the DoD. Another worthy investment area.

Me

As Paul says, MT is a rich vein, too rich to address in one blog. I’ll add one thought I found in this paper. We invest huge amounts of time and money in testing. For passing tests, the only value we get is that they didn’t fail. Can we extract more? Maybe we can through MT.

To see the previous paper click HERE.



Linley Spring Processor Conference Kicks Off – Virtually
by Mike Gianfagna on 04-13-2020 at 10:00 am


The popular Linley Processor Conference kicked off its spring event at 9AM Pacific on Monday, April 6, 2020. The event began with a keynote from Linley Gwennap, principal analyst and president at The Linley Group. Linley’s presentation provided a great overview of the application of AI across several markets. Almost all of the conference is focused on AI.

Before getting into Linley’s keynote, I want to comment on the overall event. Delivering a live event through the internet is challenging. Holding attention spans, dealing with network glitches and capturing the spontaneous nature of the interaction between the speaker and the audience is not easy to accomplish. I suspect there are a lot of newly minted web meeting aficionados these days, so you know what I mean.

Simply put, the Linley Processor Conference appears to be doing a thoughtful and well-planned job of delivering the closest thing to a live, in-person event. Each presentation is followed by a relatively short Q&A.  Questions are queued from written requests from the audience. This definitely works much better than opening everyone’s audio and hoping you can hear just one person at a time. After the short Q&A, there are separate break-out meetings with each speaker at the end of each day. These tend to be smaller meetings and some speakers do open up audio for these events to foster an interactive discussion.

Mike Demler, senior analyst at The Linley Group, moderated several presentations on ultra-low power AI during the first day. Each presentation was quite engaging, using slides, real-time demos and full-motion video of the speaker. I dropped in on all of the break-out sessions. All had good attendance (with Linley having the largest audience). These Q&A sessions were less formal than the presentations.

Thanks to the strong presenters and highly engaged audience, these sessions touched on all sorts of relevant and useful topics. I particularly liked the way Jonathan Tapson, chief scientific officer at GrAI Matter Labs, demonstrated how his company achieves sparse processing with a real-time self-driving car demo. There are also breaks sprinkled throughout the event with slide shows from the various sponsors, a good time to check out these technologies or get another cup of coffee. The sessions run from 9AM to 12:45PM over four days. Another good move, as a full-day web meeting is too much for most.

If you weren’t able to register for the event, keep watching the Linley site. The Linley Group will develop presentation materials and videos of the conference and make them available sometime after the event concludes.

Back to Linley’s keynote. The topics covered include:

  • Deep learning trends
  • AI in the data center
  • AI in automotive
  • AI at the edge
  • Ultralow power AI

I won’t attempt to capture all the information presented here. You can catch the replay of Linley’s keynote for that. I will offer a few nuggets presented on each topic.

Deep learning trends: Model growth is exploding. Image processing models are growing at 2X per year – increased accuracy means increased size. The same is true for natural language processing.  Some models have 17 billion parameters. That’s not a typo. Architectures support both large numbers of simple processors (hundreds of thousands per chip) as well as a smaller number of complex processors. The decision of which way to go depends on the workload and your business plan. Convolution accelerators, systolic arrays, sparse computing, in-memory computing, binary neural networks, analog computing and more are all touched on.

Data center: NVIDIA is still the leader, but there is a lot of competition in this multi-billion-dollar market. What will the new announcements from NVIDIA be? Competitors discussed in this market include Cerebras, Intel (with its Habana acquisition to replace Nervana), Huawei, Graphcore, Groq, Xilinx, SambaNova, Alibaba, Google, Microsoft, Amazon and Baidu. The challenges of developing a new software stack are discussed as well.

Automotive: Autonomous driving deployment is taking longer than expected. Limited Level 3 capability is available now. Level 4 is next, likely implemented as commercial fleets (taxis, trucking, etc.). Vendors discussed include GM, Tesla, Waymo, Intel/Mobileye and NVIDIA.  Will Level 5 ever happen?  Listen to the keynote.

The edge: The general move from the cloud to the edge, motivated by things like power, latency, scalability, reliability and privacy, was discussed. The edge is really a hierarchy of processing capability. AI accelerators in smartphones were discussed, as was AI for embedded applications. The barrier to entry here is lower, so this is a potential area of large growth. There is a long list of companies mentioned.

Ultralow power: Power optimization was discussed throughout the presentation. The TinyML Foundation and the TinyML Summit were discussed. Much of this work focuses on embedded applications.

That’s a quick overview of Linley’s keynote. If you missed it, I highly recommend you watch the replay. All event proceedings and video replays are available here.



Webinar on Transient Simulation of Power Transistors in Converter Circuits
by Tom Simon on 04-13-2020 at 6:00 am


Magwel is offering a webinar that takes a deeper look at how power transistors can be simulated more accurately in converter circuits to provide precise information about switching efficiency. DC converter circuit efficiency has a big effect on the battery life of mobile devices and can affect performance and efficiency for wall-powered circuits. One large consideration is that PowerMOS devices themselves do not operate as ideal devices. Performing circuit analysis at the device pins leaves out important information about what is happening inside these devices.

PowerMOS devices are really an assembly of a large number of parallel intrinsic devices with a complex, distributed structure. As such, switching does not occur simultaneously across all the intrinsic devices. In converter circuits, PowerMOS RC delays can affect Vgs over time at the gate contacts of the low-side and high-side transistors. Until now it has been difficult to run full circuit simulations that take this into consideration, because fine-grained extraction of gate, source and drain interconnect is difficult for traditional circuit-level extractors, and designers have struggled with this lack of visibility.
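To see why the distributed structure matters, here is a toy sketch, with made-up values and nothing to do with Magwel’s solver, of a gate modeled as an RC ladder; segments farther from the gate contact see a delayed Vgs, so the intrinsic devices do not switch at the same time:

```python
# A toy RC-ladder model of a distributed PowerMOS gate (made-up values,
# not Magwel's extraction or solver). A step is applied at the gate contact
# and the voltage on each gate segment is integrated with forward Euler.
import numpy as np

N  = 5          # gate segments from the contact (0) to the far end (N-1)
R  = 2.0        # ohms of gate metal/poly between adjacent segments
C  = 50e-12     # farads of gate capacitance per segment
dt = 1e-12      # seconds per time step
steps = 4000
v_drive = 5.0   # gate-drive step voltage

v = np.zeros(N)
trace = np.zeros((steps, N))
for t in range(steps):
    i_in = np.empty(N)
    i_in[0] = (v_drive - v[0]) / R       # current from the driver into segment 0
    i_in[1:] = (v[:-1] - v[1:]) / R      # current from the previous segment
    i_out = np.zeros(N)
    i_out[:-1] = i_in[1:]                # current passed on down the ladder
    v += dt * (i_in - i_out) / C         # dV/dt = net current / C
    trace[t] = v

# Time for each segment to reach half the drive voltage: later segments lag.
t_half = (trace >= 0.5 * v_drive).argmax(axis=0) * dt
print("t(Vgs = 2.5 V) per segment [ns]:", np.round(t_half * 1e9, 3))
```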

Magwel offers a tool specifically targeted at comprehensive and accurate simulation of converter circuits, including the complex internals of PowerMOS devices. Magwel’s PTM-TR does several unique things to provide transparency into the detailed switching behavior of PowerMOS devices. PTM-TR uses a solver-based extractor to accurately determine parasitics for the internal metallization within PowerMOS devices. The gate regions are divided up according to user-set parameters, and the intrinsic device model is applied to create a simulation view of the device that incorporates its full internal structure. This model is known as a Fast3D model and is used by PTM-TR with Cadence Spectre® to co-simulate dynamic gate switching behavior at each time step of circuit operation.

Because the Fast3D model is used in conjunction with Spectre circuit simulation, it can be used with test benches or to perform any desired simulation, such as corner analysis. PTM-TR comes with the additional benefit of graphically showing a field view of the device internals at each time step. During early switching, with only small sections of the device turned on, higher-than-expected current densities are possible, leading to EM and thermal issues. With PTM-TR, designers can modify and test PowerMOS devices to achieve optimal performance and reliability.

To learn more and see how Magwel’s PTM-TR helps engineers optimize switching performance in converter circuits, sign up for the free webinar replay. Magwel Application Engineer Allan Laser will present an overview of the tool and then go through a demo that shows the simulation results and detailed insight into internal device operation during transient operation.



Online Class: Advanced CMOS Technology 2020 (The 10/7/5 NM Nodes)
by Daniel Nenni on 04-12-2020 at 9:00 am


Our friends at Threshold Systems have a new ONLINE class that may be of interest to you. It’s an updated version of the Advanced CMOS Technology class held last February. This is normally a classroom affair but to accommodate the recent COVID-19 travel restrictions it is being offered virtually.

As part of the previous class we did a five-part series on The Evolution of the Extension Implant, which you can see on the Threshold Systems SemiWiki landing page HERE. Registration is HERE. And here is the updated course description:

Course Description:
The central theme of this seminar is an in-depth presentation of the key 10/7/5 nm node technical issues for Logic and Memory, including detailed process flows for these technologies.

This course addresses the issues associated with Advanced CMOS manufacturing with technical depth and conceptual clarity, presents leading-edge process solutions to the new and novel set of problems presented by 10nm and 7nm FinFET technology, and previews the upcoming manufacturing issues of the 5nm Nanowire.

A key part of the course is a visual survey of leading-edge devices in Logic and Memory presented by the Fellow Emeritus of the world’s leading reverse engineering firm, TechInsights. His lecture is a visual feast of TEMs and SEMs of all of the latest and greatest devices being manufactured and is one of the highlights of the course.

An update on the status of EUV lithography will also be presented by a world-class lithographer who manages an EUV tool. His explanations of how this technology works, and of the latest EUV breakthroughs, are as enlightening as they are insightful.

Finally, a detailed technology roadmap for the future of Logic, SOI, Flash Memory and DRAM process integration, as well as 3D packaging and 3D Monolithic fabrication will also be discussed.

Each section of the course will present the relevant technical issues in a clear and comprehensible fashion as well as discuss the proposed range of solutions and equipment requirements necessary to resolve each issue.

Course Notes:
The course notes are technically current, reproduced in high-resolution color and profusely illustrated with 700+ pages of high-quality 3D graphics and TEMs of real-world devices.

In addition, dynamic 3D models of semiconductor micro-structures are presented on-screen to clarify the structural details of FinFETs, 3D Flash and Nanowires. Click on the link below to preview a typical dynamic 3D image.

Date: May 27, 28, 29, 2020
Location: This course is held ONLINE
Class Schedule:
Wednesday: 8:30 AM – 5:00 PM, PDT
Thursday: 9:00 AM – 5:00 PM, PDT
Friday: 9:00 AM – 5:00 PM, PDT
Tuition: $1,695

Online Instruction:
The Covid-19 pandemic and the travel restrictions it has imposed have changed the landscape of technical instruction, away from traditional instructor-led classroom learning and toward a distributed learning experience that enables the instructor and the students to be in different locations. This learning solution eliminates the high cost and inconvenience associated with having students travel to one location for instruction. It offers a flexibility and convenience that traditional classroom instruction does not have. It is ideal for companies with a global business model whose employees are scattered around the world.

The benefits of Online Learning are numerous:

  • a safe environment for both the students and the instructor
  • elimination of instructor travel costs
  • elimination of employee travel and lodging costs
  • minimal disruption of employee work schedules
  • reduced enrollment costs
  • the students do not all have to be in a single location. Remote Learning permits student attendance even if they are located in different countries.
  • unlike watching streamed video, remote instruction closely simulates the classroom experience and permits student questions and real-time student/instructor interaction

How Online Learning Works:

Shortly after you register, a link will be emailed to you that will take you to the online classroom on the day of the course, and the week before the class begins, a binder of color course notes will be shipped to you via FedEx or UPS.

The class is three days long and will begin on May 27, 2020 at 8:30 AM PDT. On that day you simply click on the link that has been provided and you will be seamlessly taken to the online classroom. There you will see the opening slide of the course and a small inset image of the instructor. Once the class begins you will hear the instructor’s voice as he addresses the technical issues on each slide being presented. There will be a one-to-one correspondence between the slides being presented on the screen and the slides in your binder of course notes. Questions can be asked at any time by typing in the software’s question box, and each question will be answered verbally by the instructor as soon as it is received.

In the unlikely event that you experience technical difficulties an IT expert will be on-hand to resolve any issues for you in real time.

Online learning is a simple, safe and convenient way to learn that offers a highly effective classroom experience without having to leave the safety and comfort of your own home or office.

What’s Included:

  • Three days of instruction by industry experts with comprehensive, in-depth knowledge of the subject material
  • A high quality set of full-color lecture notes, including SEM & TEM micrographs of real-world IC structures that illustrate key technical points
  • A Diploma stating that you have successfully completed the seminar will be mailed to you at the end of the course

Who is the seminar intended for:

  • Equipment Suppliers & Metrology Engineers
  • Fabless Design Engineers and Managers
  • Foundry Interface Engineers and Managers
  • Device and Process Engineers
  • Design Engineers
  • Product Engineers
  • Process Development & Process Integration Engineers
  • Process Equipment Marketing Managers
  • Materials Supplier Marketing Managers  & Applications Engineers

Course Topics:

1. Process integration
The 10/7nm technology nodes represent a landmark in semiconductor manufacturing, employing transistors that are faster and smaller than anything previously fabricated. However, such performance comes at a significant increase in processing complexity and requires the solution of some very fundamental scaling and fabrication issues, as well as the introduction of radical new approaches to semiconductor manufacturing. This section of the course highlights the key changes introduced at the 10/7nm nodes and describes the technical issues that had to be resolved in order to make these nodes a reality.

  • The enduring myth of a technology node
  • Market forces: the shift to mobile
  • The Idsat equation
  • The motivations for High-k/Metal gates, strained Silicon
  • Device scaling metrics
  • Ion/Ioff curves, scaling methodology

2. Detailed 10nm Fabrication Sequence
The FinFET represents a radical departure in transistor architecture. It brings dramatic performance increases as well as novel fabrication issues. The 10nm FinFET is the 3rd generation of non-planar transistor and involves some radical changes in manufacturing methodology. The FinFET’s unusual structure makes its architecture difficult for even experienced processing engineers to understand. This section of the course drills down into the details of the 10nm FinFET structure and its fabrication, highlighting the novel manufacturing issues this new type of transistor presents. A detailed step-by-step 10nm fabrication sequence is presented (front-end and back-end) that employs colorful 3D graphics to clearly and effectively communicate the novel FinFET architecture at each step of the fabrication process. Key manufacturing pitfalls and specialty material requirements are pointed out at each phase of the manufacturing process, as well as the chemistries used.

  • Self-Aligned Quadruple Patterning (SAQP)
  • Fin-first and Fin-last integration strategies
  • Multiple Vt High-k/Metal Gate integration strategies
  • Cobalt Contacts & Cobalt metallization
  • Contact over Active Gate methodology
  • Advanced Metallization strategies
  • Air-gap dielectrics

3. Nanowire Fabrication – the 5nm Node
Waiting in the wings is the Nanowire. This new and radically different 3D transistor features gate-all-around control of short channel effects and a high level of scalability. A detailed process flow for Horizontal Nanowire fabrication will be presented, one that is beautifully illustrated with colorful 3D graphics and technically accurate.

  • A step-by-step Horizontal Nanowire fabrication process flow
  • Key fabrication details and manufacturing problems
  • Nanowire SCE control and scaling
  • Resolving Nanowire capacitive coupling issues
  • Vertical versus Horizontal Nanowire architecture: advantages and disadvantages

4. DRAM Memory
DRAM memory has evolved through many generations and multiple incarnations. Despite claims that DRAM memory is nearing its scaling limit, new technological developments keep pushing the scaling envelope to extremes. This part of the course examines the evolution of DRAM memory and presents a detailed DRAM process fabrication flow.

  • DRAM memory function and nomenclature
  • DRAM scaling limits
  • A DRAM process flow
  • The capacitor-less DRAM memory cell

 

5. 3D NAND Flash Memory
The advent of 3D NAND Flash memory is a game changer. 3D NAND Flash not only dramatically increases non-volatile memory capacity, it will also add at least three generations to the life of this memory technology. However, the structure and fabrication of this type of memory is radically different, even alien, to any traditional semiconductor fabrication methodology. This section of the course presents a step-by-step visual description of the unusual manufacturing methodology used to create 3D Flash memory, focusing on key problem areas and equipment opportunities. The fabrication methodology is presented as a series of short videos that clearly demonstrate the fabrication operations at each step of the process flow.

  • staircase fabrication methodology
  • the role of ALD in 3D Flash fabrication
  • controlling CDs in tall, vertical structures
  • detailed sequential video presentation of Samsung 3D NAND Flash
  • Intel-Micron 3D NAND Flash fabrication sequence
  • Toshiba BICS NAND Flash fabrication sequence

6. Advanced Lithography
Lithography is the “heartbeat” of semiconductor manufacturing and is also the single most expensive operation in any fabrication process. Without further advances in lithography, continued scaling would be difficult, if not impossible. Recently there have been significant breakthroughs in Extreme Ultra Violet (EUV) lithography that promise to radically alter and greatly simplify the way chips are manufactured. This section of the course begins with a concise and technically correct introduction to the subject and then provides in-depth insights into the latest developments in photolithography. Special attention is paid to EUV lithography, its capabilities and characteristics, and the recent developments in this field.

  • Physical Limits of Lithography Tools
  • Immersion Lithography – principles and practice
  • Double, Triple and Quadruple patterning
  • EUV Lithography: status, problems and solutions
  • Resolution Enhancement Technologies
  • Photoresist: chemically amplified resist issues

7. Emerging Memory Technologies
There are at least three novel memory technologies waiting in the wings. Unlike traditional memory technologies that depend on electronic charge to store data, these memory technologies rely on resistance changes. Each type of memory has its own respective advantages and disadvantages, and each one has the potential to play an important role in the evolution of electronic memory.

This section of the course will examine each type of memory, discuss how it works, and explain what its relative advantages are in comparison with other new memory types.

  • Phase Change Memory (PCRAM), Cross-point memory; separating the hype from the reality
  • Resistive RAM (ReRAM) – a novel approach that comes in two variations
  • Spin Torque Transfer RAM (STT-RAM) – the brightest prospect?

8. Survey of leading edge devices
This part of the course presents a visual feast of TEMs and SEMs of real-world, leading edge devices for Logic, DRAM and Flash memory. The key architectural characteristics of a wide range of key devices will be presented, and the engineering trade-offs and compromises that resulted in their specific architectures will be discussed. The Fellow Emeritus representative of the world’s leading chip reverse engineering firm will present this section of the course.

  • How to interpret Scanning and Transmission Electron microscopy images
  • A visual evolution of replacement gate metallization
  • DRAM structural analysis
  • 3D FLASH structural analysis
  • Currently available 14nm/10nm/7nm Logic offerings from various manufacturers

9. 3D Packaging Versus 3D Monolithic Fabrication
Unlike all other forms of advanced packaging, which communicate by routing signals off the chip, 3D packaging permits multiple chips to be stacked on top of each other and to communicate with each other using Thru-Silicon Vias (TSVs), as if they were all one unified microchip. An alternative is the 3D Monolithic approach, in which a second device layer is fabricated on a pre-existing device layer and electrically connected to it employing standard nano-dimensional interconnects. Both approaches have advantages and disadvantages, and both promise to create a revolution in the functionality, performance and design of electronic systems.

This part of the course identifies the underlying technological forces that have driven the development of Monolithic fabrication and 3D packaging, how they are designed and manufactured, and what the key technical hurdles are to the widespread adoption of these revolutionary technologies.

  • TSV technology: design, processing and production
  • Interposers: the shortcut to 3D packaging
  • The 3D Monolithic fabrication process
  • Annealing 3D Monolithic structures
  • The Internet of Things (IoT)

10. The Way forward: a CMOS technology forecast
Ultimately, all good things must come to an end, and the end of FinFET technology appears to be within sight. No discussion of advanced CMOS technology is complete without a peek into the future, and this final section of the course looks ahead to the 5/3.5/2.5 nm CMOS nodes and forecasts the evolution of CMOS device technology for Logic, DRAM and Flash memory.

  • Is Moore’s law finally coming to an end?
  • New nanoscale effects and their impact on CMOS device architecture and materials
  • The transition to 3D devices
  • Future devices: Quantum well devices, Nanowires, Tunnel FETs, Quantum Wires
  • The next ten years …

REGISTER NOW!



SiFive in a Virtual World Webinar Series 2020
by Swamy Irrinki on 04-10-2020 at 6:00 pm


Introducing the SiFive Connect Webinar Series – A Platform Designed for Continued Engagement with the Global Hardware and Software Community Developing RISC-V Based Semiconductor Solutions

After hosting the SiFive Tech Symposiums in a record 52 cities in 2019, it became amply evident that the RISC-V revolution has reached all corners of the globe and is here to stay. RISC-V cores are being designed into many SoCs and domain-specific custom silicon. To take our previous engagement with the global community to the next level, we’re launching the SiFive Connect Webinar Series as a highly educational and interactive platform for SoC developers to connect directly, on an ongoing basis, with industry experts. Targeted at engineers, architects, developers, researchers and students, the series will teach attendees about the RISC-V ecosystem and the latest RISC-V based cores and software, security solutions, SoC and system IP, various end-market solutions and development platforms.

These bimonthly webinars will be one hour in duration, and each will take place twice on the same day – once at 9 a.m. PDT and again at 6 p.m. PDT – enabling the global community to choose the time that works best for them.

Registration is now open for the first two SiFive Connect webinars of 2020!

  • Thursday, April 16, 2020

Embedding Intelligence Everywhere with SiFive 7 Series Core IP

View Abstract & Register to Attend

  • Thursday, April 30, 2020

Rapid Embedded Prototyping with SiFive Software

View Abstract & Register to Attend

To view an extended list of upcoming topics, please visit https://www.sifive.com/resources/webinars/sifive-connect.

We look forward to engaging with you and sharing knowledge!

About SiFive

SiFive is on a mission to free semiconductor roadmaps and declare silicon independence from the constraints of legacy ISAs and fragmented solutions. As the leading provider of market-ready processor core IP and silicon solutions based on the free and open RISC-V instruction set architecture, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers in all markets to build customized RISC-V based semiconductors. Founded by the inventors of RISC-V, SiFive has 16 design centers worldwide and backing from Sutter Hill Ventures, Qualcomm Ventures, Spark Capital, Osage University Partners, Chengwei, Huami, SK Hynix, Intel Capital, and Western Digital. For more information, please visit www.sifive.com.

Stay current with the latest SiFive updates via Facebook, Instagram, LinkedIn, Twitter, and YouTube.



Wally Rhines: Mentoring Generations of Semiconductor and EDA Professionals
by Mike Gianfagna on 04-10-2020 at 10:00 am

Wally Then and Now

I had the good fortune to catch a live webinar recently that was quite compelling – Conversation with Dr. Walden Rhines: Predicting Semiconductor Business Trends After Moore’s Law! Dr. Rhines, known to most as Wally, doesn’t need much of an introduction. Any semiconductor or EDA professional knows who he is and what he’s accomplished. His discussion about predicting the future didn’t disappoint. Wally is the rare individual who is articulate, knowledgeable and able to explain complicated phenomena in a way that is accessible to all. If you missed the live event, you will want to catch the replay here.

You may wonder about the significance of the photo above.  Read on…

Wally’s discussion was based on his new book, “Predicting Semiconductor Business Trends After Moore’s Law!” The book is available on Amazon here. As an added incentive to watch the webinar, there may be some free stuff in store for you, like a PDF version of the book.

I don’t want to repeat the insights offered in the webinar here; it’s much more entertaining to hear them from Wally. I will offer a few topics just to whet your appetite. Do you know what the semiconductor learning curve is and how it informs the predictions of Moore’s Law? What about the Gompertz curve? This one has been around since 1825 and can be used to predict everything from the growth of tumors to population growth to product adoption. Want to understand how all this relates to semiconductors and Moore’s Law? Watch the webinar.
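For readers who haven’t met it, the Gompertz function is an S-shaped growth curve of the form N(t) = K·exp(−b·e^(−ct)); the short sketch below evaluates it with made-up parameters purely to show the slow start, rapid growth and eventual saturation:

```python
# The Gompertz growth curve, N(t) = K * exp(-b * exp(-c * t)), evaluated
# with made-up parameters purely to illustrate its S-shape and saturation.
import math

def gompertz(t, K=100.0, b=5.0, c=0.5):
    """K is the saturation level; b sets the initial displacement, c the growth rate."""
    return K * math.exp(-b * math.exp(-c * t))

for year in range(0, 21, 4):
    print(f"t={year:2d}  adoption={gompertz(year):6.1f}")
```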

Do you ever wonder when silicon transistors will finally need to be replaced? Wally explains that with solid analytics. What about IC silicon revenue per unit area over time? What does that curve look like?  Again, watch the webinar. I could go on, but I’ll stop here.  By now, you should be looking to click one of the webinar links above. The entire event is under 40 minutes, including a very robust Q&A session. The questions posed to Wally are included below to further whet your appetite.

And regarding the photo above – this was the final slide for the final question below – How does Wally manage to look so young after 50 years in this dog-eat-dog business?  Wally’s answer to this one might be the best nugget of all.

  • Can you predict the revenue impact of Covid 19 on both the semiconductor and EDA industries?
  • Does your book suggest that Moore’s Law is not coming to an end?
  • What will drive scale for semiconductors going forward if Moore’s Law will not be true forever? Will it be innovative packaging, materials or some other factor?
  • If your consolidation vs specialization trend plays out, what will the company structure of the EDA industry look like in ten years?
  • How can you explain some of those remarkable ratios, like the constancy of revenue per unit area of silicon?
  • Will China be successful in its quest for self-sufficiency in semiconductors? How will semiconductor companies in the West be affected?
  • What are some future challenges for this next generation of neuromorphic computers?
  • Why has the EDA industry accelerated its growth in recent years?
  • Compute servers in the cloud were once dominated by Intel. Now NVIDIA, Google Tensor Flow and other hardware is showing up in the cloud.  What will things look like in the future?
  • How will the semiconductor industry cope with the need for better data security?
  • How should the semiconductor companies prepare their teams for transitioning from a “component / bottom up” focus to a “top down” approach? How can the many IC design experts transition to “system-level” thinking and learn about many, likely very diverse, applications?
  • The semiconductor industry has relied on parallel cost and learning curves (seen in chapter 2 of the blog), and past deviations from this have been quickly corrected, as in the case of Test Equipment. With the introduction of EUV, the lithography curve is expected to go almost flat as ASML increases the price of its tools almost in line with their productivity. What do you think is the impact of this very significant cost inflation on the industry? Is it possible to bring the lithography curve back to trend despite a technological monopoly? If not, who bears the extra cost / sub-trend deflation?
  • In your opinion, what is going to be the role of Europe in the future semiconductor business? Any segment that can be led by European companies such as Infineon, Soitec or ST?
  • How does he see the role of all the backend next 10 years vs. past 10 years? Thinking about packaging and PCBs/Substrates as well?
  • What does he think is the impact of a move to chiplets architectures on the EDA industry?
  • What could change the shape / inflection points of his prediction for the transistor curve (chapter 4, fig 7, the S curve of Silicon)? Or what does he see that could materially change the Si/GDP penetration curve?
  • Ask Wally how he manages to look so young after 50 years in this dog-eat-dog business.


Webinar: Design Methodologies for Next-Generation Heterogeneously Integrated 2.5/3D-IC Designs
by Herb Reiter on 04-10-2020 at 6:00 am


I had the opportunity to preview the upcoming SemiWiki webinar titled: Design Methodologies for Next-Generation Heterogeneously Integrated 2.5/3D-IC Designs. John Park’s message, describing this powerful Cadence solution, really impressed me. That’s why I want to encourage you to register for it and join this SemiWiki webinar on Thursday, April 23, at 10 am PDT. You’ll get in-depth information about how Cadence makes planning, design and verification of next-generation heterogeneously integrated 2.5/3D-ICs and wafer-level packages cost-effective, easier and faster.

Our customers value the semiconductor industry for its fast pace of innovation as well as for providing better and cheaper solutions for an ever-broader range of – often heterogeneous – applications. A well-coordinated design and manufacturing supply chain, with domain experts at every stage, is the basis for all these accomplishments. Dozens of recent announcements of heterogeneously integrated 2.5/3D-IC designs, primarily from larger companies, have demonstrated how well heterogeneous integration can improve performance per Watt and increase functionality in a single IC package. However, until now, the high development cost, resource requirements and long development times have stopped many engineers from using these powerful solutions to bring their ideas to market.

That’s a very familiar scenario to me. During my ASIC years (1980 to 2000), I saw our design center engineers working 80+ hour weeks to meet tape-out schedules for what are, in today’s view, really small designs. How did our industry get from 10 million gate designs then to today’s designs of up to 10 billion gates – a 1000x improvement?

The short answer is: AUTOMATION! In more depth: this level of improvement was only possible because TSMC and other wafer manufacturers developed, together with their Electronic Design Automation (EDA) partners, process design kits (PDKs). These specified exactly what the process technologies were capable of and what was not allowed. This PDK data (e.g. libraries, SPICE decks, design rules, layer information, etc.) enabled their mutual customers to accurately simulate what’s technically feasible and quickly iterate to improve performance and/or reduce the unit cost of a design. Over time, design tools and methodologies became more user-friendly, managed larger design complexities and drove down cost per function. In addition to more powerful EDA tools, the initially very simple library elements became more and more complex building blocks and, available as verified soft IP (RTL code) or silicon-proven hard IP (GDSII), they simplified and accelerated ASIC design even further.

Back to the webinar. John Park will outline why and how Cadence, in cooperation with the big assembly and test houses (a.k.a. OSATs) and IC packaging experts at wafer foundries, developed a design environment for heterogeneously integrated 2.5/3D-ICs and wafer-level packages. Cadence is also simplifying the use of chiplets (silicon proven hard IP, implemented in bare die) as design-productivity enhancing building blocks.

In my view, the biggest advantage of the Cadence multi-die IC solution is that it links their proven and well-known design tools for IC, package and board (Innovus®, Virtuoso®, Allegro®), uses OrbitIO® and other proven tools, as well as recently introduced tools (e.g. Clarity®), to plan, design and verify your 2.5/3D-ICs. This Cadence solution will enable you to quickly become productive as a developer of heterogeneously integrated 2.5/3D-ICs and wafer-level packages.

Please use this opportunity, register here and view this SemiWiki webinar replay.  A link to the replay will be sent to all registered people in case you miss it or want to review it again. It’s time well spent… Herb



Why I’m Lowering Semiconductor Equipment Revenue Growth to -6.9% in 2020
by Robert Castellano on 04-09-2020 at 10:00 am


Because of a significant $4 billion in equipment pull-ins into Q4 from sales in Asia, I had already reduced my 2020 semiconductor wafer front-end (WFE) equipment revenue growth forecast from an earlier +5% to 0%. Now, based on COVID-19, I am further reducing revenue growth to -6.9%.

Chart 1 shows the cyclical nature of semiconductors and semiconductor equipment. In principle, semiconductor manufacturers increase production to meet customer demand. If demand increases, manufacturers make capacity purchases of processing equipment to make more chips. This is why there is a strong correlation between semiconductor revenue changes (blue line) and semiconductor equipment (red line).


Chart 1

GDP (black line), which measures the value of economic activity within a country, is often a factor in halting the increases in semi and semicap revenues. GDP is important because it gives information about how an economy is performing. The growth rate of GDP is often used as an indicator of the general health of the economy.

Thus, if a country has a healthy economy (upward slope of the black line), we often see an upward slope in semi and semicap revenues. The converse is also true. If the economy is doing well, individuals have money to make purchases of products using chips – smartphones, cars, TVs, etc.
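As a purely hypothetical illustration of the kind of relationship described above (the growth rates below are invented, not the article’s data), a few lines of code show how the correlation between two such revenue series can be quantified:

```python
# Hypothetical YoY growth rates (invented numbers, not the article's data) to
# illustrate how closely semiconductor and equipment revenues tend to track.
import numpy as np

semi_growth  = np.array([20.0, 14.0, -12.0,  5.0,  7.0, -6.0])   # % YoY, invented
equip_growth = np.array([35.0, 15.0, -20.0,  2.0,  9.0, -7.0])   # % YoY, invented

r = np.corrcoef(semi_growth, equip_growth)[0, 1]
print(f"correlation of YoY growth rates: r = {r:.2f}")   # near 1.0 -> the series move together
```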

Besides the significant drop in GDP, there are other factors corroborating my forecast.

Drop in Capex Spending
Table 1 shows announced and estimated capex spending for the top five semiconductor manufacturers. There are three caveats readers must recognize. First, capex includes buildings as well as equipment. Second, capex is a variable expenditure, and the numbers planned at the beginning of the year are never the amount actually spent. Third, Samsung’s (OTC:SSNLF) capex spend covers both its memory and foundry businesses. These five companies represent about 60-65% of total spend, so they are a good representation.

According to Table 1, expected capex spend for these top five companies will decrease 12.1% in 2020, according to The Information Network’s report entitled “Global Semiconductor Equipment: Markets, Market Shares, Market Forecasts.”

Drop in Semiconductors and Semiconductor Equipment
Table 2 compares GDP, semiconductor and semiconductor equipment revenues for 2008-2009 and 2019-2020. A striking observation in the comparison of the two crises is the similarity of the YoY changes in the year leading up to and including the year that GDP dropped in 2009 and is forecast to drop in 2020. That’s despite the difference in the origins of the two recessions.

The Global Financial Crisis began in 2007 with a depreciation in the subprime mortgage market in the United States, and it developed into an international banking crisis with the collapse of the investment bank Lehman Brothers on September 15, 2008. The crisis was followed by a global economic downturn. The coming recession in 2020 stems from COVID-19.

At this time, I forecast semiconductor equipment revenues will drop 6.9% in 2020. I also forecast that semiconductor revenues will decrease 6.1% in 2020 to $434.0 billion.

Drop in Consumer Confidence
When consumer confidence is high, consumers make more purchases. When confidence is low, consumers tend to save more and spend less. Consumer confidence typically increases when the economy expands, and decreases when the economy contracts.

The University of Michigan’s consumer sentiment index for the US was revised down to 89.1 in March of 2020, from a preliminary reading of 95.9 and from 101 in February (Chart 2). It is the lowest reading since October of 2016 and the fourth largest one-month decline in nearly half a century.


Chart 2

ECRI’s Weekly Leading Index (WLI) is in free fall. According to ECRI, the nine-week drop in the WLI is more pronounced than anything it’s ever seen at this stage of a recession. Chart 3 shows WLI growth going back half a century. It looks like WLI growth has just fallen off a cliff; it has not been this low since the immediate aftermath of the Lehman Brothers collapse in 2008.


Chart 3

This drop in revenues will primarily impact the processing equipment companies AMAT and LRCX. ASML is another processing equipment company; it currently does not see an impact from COVID-19 beyond Q1, but the company announced in a press release on March 30 that it could affect Q2. The company noted:

“Due to the uncertainties regarding COVID-19, ASML has decided not to execute any share buybacks in Q2 2020. This decision follows the pause in the execution of the program in the first quarter, after having already performed share buybacks under the new program for an amount of approximately €507 million.”

KLA’s metrology/inspection equipment is very different from processing equipment such as lithography, etch, or deposition sold by peers.

KLA’s metrology/inspection equipment sales fare better during technology purchases versus capacity purchases of equipment to make more of the same chip. With the slowdown in global economies in 2020, there will be minimal capacity purchases but major technology purchases for the next processing node. KLA will benefit, as will ASML with its EUV systems, but these represented only 31% of total revenues in 2019.

Once the pandemic is halted, large companies in the semiconductor and high-tech businesses should resume business as usual. Other large companies, like airlines and hotels, will be slower to recover. Small companies and mom-and-pop stores may never recover. Along with the elimination of their businesses goes their purchasing power – money that could have been spent on smartphones, cars, and TVs.



Learning to Live with the Gaps Between Design and Verification
by Tom Simon on 04-09-2020 at 6:00 am


Whenever I am asked to explain how chip design works to someone who is unfamiliar with the process, I struggle to explain the different steps in the flow. It also makes me aware of the discrete separations between each phase of activity. Of course, when you speak to a novice it is not even possible to get more than one layer down in the explanation. Folks in the industry are painfully aware of the separate steps and partitions in the process. Not only does the design flow from high-level front-end representations through transformations to silicon transistors and geometry, there are also parallel processes that involve internal and external IP. Woven throughout this is the design verification process, which must work within each step of the process and also across the entire flow.

As designers, our first urge is to smooth over all the gaps and try to make them disappear. However, in a paper given by Mentor at DVCon in Silicon Valley this year, they suggest acknowledging the gaps in the design flow and, in some cases, embracing them to improve the speed and effectiveness of the verification process. I had a chance to talk to Chris Giles, one of the co-authors of the paper along with Kurt Takara, about their view of the issue of gaps and their approach to dealing with them. Fortunately, in light of present-day circumstances, Mentor is offering a replay of the paper online for anyone interested in seeing its entire contents.

In their presentation they cover the main sources of gaps, such as documentation issues, models that are inaccurate or incomplete, changes to any aspect of the project, organizational boundaries, or team member skill sets. They point back to the time when RTL was first coming into use, when designers would code the design and test benches in RTL sequentially. Without a gap between design and verification, they would run these to verify the design. Of course, things are much different today, with design teams and verification teams working as separate entities and using different tools for their jobs.

There is a stark choice to make. Do you throw the design over the wall and assume that the information needed to fully and properly verify the design intent and functionality made it through the gap? Or do you hope you have a sufficient number of engineers with expertise in both design and verification to pull the project through? Chris spoke of embracing this gap by moving intent verification into the design group and handing functional verification to the verification team. Mentor tools play a role in this by ensuring that the tools each team uses are suited to the task and the expertise of the users.

The presentation in the video goes into detail discussing which techniques are useful for various verification tasks by each prospective user. These include tools to help with code writing, static and formal lint tools, and CDC and RDC checkers. During the video Chris highlights each of these activities and how they might work when design gaps are used constructively.
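For readers unfamiliar with what a CDC checker looks for, here is a minimal, hypothetical sketch, not Mentor’s tools or algorithms: walk a toy netlist of flops annotated with clock domains and flag any crossing that does not land on a synchronizer:

```python
# A minimal, hypothetical sketch of the idea behind a CDC check: flag any
# flop-to-flop path whose source and destination are in different clock
# domains unless the destination is marked as a synchronizer stage.
# Not Mentor's tools or algorithms; the netlist format is invented here.

flops = {
    # name          clock domain    first stage of a 2-flop synchronizer?
    "tx_data_ff": {"clk": "clk_tx", "sync": False},
    "rx_meta_ff": {"clk": "clk_rx", "sync": True},
    "rx_data_ff": {"clk": "clk_rx", "sync": False},
    "ctrl_ff":    {"clk": "clk_tx", "sync": False},
}

# Directed flop-to-flop connectivity (driver -> receiver).
paths = [
    ("tx_data_ff", "rx_meta_ff"),   # crosses clk_tx -> clk_rx, lands on a synchronizer: OK
    ("rx_meta_ff", "rx_data_ff"),   # same domain: OK
    ("ctrl_ff",    "rx_data_ff"),   # crosses domains with no synchronizer: violation
]

def cdc_violations(flops, paths):
    """Return every domain crossing whose receiving flop is not a synchronizer."""
    return [
        (src, dst)
        for src, dst in paths
        if flops[src]["clk"] != flops[dst]["clk"] and not flops[dst]["sync"]
    ]

for src, dst in cdc_violations(flops, paths):
    print(f"CDC violation: {src} ({flops[src]['clk']}) -> {dst} ({flops[dst]['clk']})")
```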

Because all designs are hierarchical, verification flows need to not only work with hierarchy, but also resolve issues that can arise from the use of hierarchy. Mentor has worked out flows for hierarchical verification that work not only with black box views, but also use white box models to manage convergence issues efficiently and accurately. The presentation talks about their Hierarchical Data Model (HDM) for intent verification. The video also covers situations where designers need to handle intent verification, yet subsequent design transformations and optimizations alter the design such that the intent is lost. This is a case where there is an unavoidable gap that must be acknowledged and dealt with. Chris and Kurt provide examples of how this can be done by applying specific techniques.

Verification is a huge topic with many facets. It is incumbent on engineering teams to understand potential sources of errors and methods of addressing them. Because gaps in the flow are unavoidable, being savvy about minimizing or taking advantage of them is a necessity. The video shows a range of specific cases and how they can be handled. Mentor is making the DVCON presentation available for viewing through the web. It goes into much more detail than is possible here and I strongly suggest checking it out.