Power Transistor Modeling for Converter Design
by Tom Simon on 04-18-2022 at 10:00 am

Voltage converters and regulators are a vital part of pretty much every semiconductor-based product. They play an outsized role in mobile devices such as cell phones, where there are many subsystems operating at different voltages with different power needs. Many portable devices rely on lithium-ion batteries whose output voltage can vary from 4.2 volts down to 3.0 volts as they discharge. The power distribution systems in these devices need to operate with extremely high efficiency to meet battery life requirements.

As an example, a typical cell phone contains CPU cores, DRAM, RF radio, display backlight, camera, audio codec and other subsystems that need voltages ranging from 0.8V to ~4V – all from the single voltage source of the lithium-ion battery. A combination of buck and boost converters is needed to precisely produce all these voltage levels from the battery regardless of its state of charge. Because switching-based converters can be noisy, low-dropout (LDO) voltage regulators are also needed for several power supplies.
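
As a rough illustration of the regulation problem (with invented numbers, not from any particular phone design), an ideal buck converter holds its output at Vout = D × Vin, so its duty cycle D must continuously track the discharging battery:

```python
# Ideal (lossless) buck converter: Vout = D * Vin, so the control loop
# must adjust the duty cycle D as the battery discharges.
def buck_duty_cycle(v_out: float, v_in: float) -> float:
    if v_out > v_in:
        raise ValueError("a buck converter can only step down")
    return v_out / v_in

for v_batt in (4.2, 3.7, 3.0):              # Li-ion from full charge to cutoff
    d = buck_duty_cycle(0.8, v_batt)        # holding a 0.8 V core rail
    print(f"Vin = {v_batt:.1f} V -> duty cycle = {d:.1%}")
```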

In the converters and regulators listed above, one of the most important elements is the pass device, which handles all the current to the load and controls the final output voltage. Pass devices can be made from a wide range of materials and can be designed as bipolar or MOS devices. Regardless of material and device type, the design of the pass device has a major effect on power loss and thermal behavior.

Power devices typically have many fingers and large channel widths (W). Connections to the semiconductor layers are made through a complex interconnection of metal and via layers that connect all the active areas in parallel. The size and topology of these devices leads to complex electrical behaviors. There are a large number of gate/base contacts, which often have maze-like connections to the external device terminal(s). The same is often true for connections to the source and drain, or emitter and collector.

These complex metal connections contribute to device resistance and can also introduce non-uniform delays within the device. To model this electrical behavior, designers need tools like the Magwel Power Transistor Modeler (PTM) suite. Traditional circuit extractors are not designed to deal with the wide metal, large via arrays and unusual shapes found in power devices. In addition, point-to-point resistance values are needed, along with efficient and accurate ways to model the channel.

Magwel’s PTM tools use a solver based extractor that is optimized for the complex metal shapes and vias found in power devices. PTM can automatically identify the channel and will segment it according to user settings to create multiple parallel devices that can be used for full device modeling.

Usually, when power devices are used in switching converters, the active area can be modeled effectively as a linear resistive value based on the foundry device model and operating conditions, such as temperature and stimulus. However, low-dropout (LDO) regulators are often used to get as much working voltage as possible out of a discharging battery. The lower the dropout voltage, the longer the LDO regulator can use a battery and the less overall power is wasted on internal resistance and converted to heat. For this reason, LDO regulator pass device performance is extremely important, necessitating more sophisticated device modeling for the active region. Magwel’s PTM has the option to use non-linear models to accurately predict the behavior of the active area during LDO power device operation.
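
A back-of-the-envelope sketch of why dropout matters, with illustrative numbers rather than Magwel data: essentially all of the headroom across the pass device is dissipated as heat, so LDO efficiency is bounded by Vout/Vin.

```python
# For an LDO, roughly (Vin - Vout) * Iload is burned in the pass device,
# so efficiency is bounded by Vout / Vin (ignoring quiescent current).
def ldo_pass_device(v_in: float, v_out: float, i_load: float):
    p_loss = (v_in - v_out) * i_load      # heat in the pass transistor
    return p_loss, v_out / v_in

for v_batt in (4.2, 3.7, 3.3):
    loss, eff = ldo_pass_device(v_batt, 3.0, 0.5)   # 3.0 V rail at 500 mA
    print(f"Vin = {v_batt:.1f} V: {loss:.2f} W dissipated, efficiency {eff:.0%}")
```

With a 300mV dropout device, for example, regulation holds until the battery sags to 3.3V; a lower-dropout pass device extends the usable battery range further.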

Another important aspect of power transistor modeling is the stimulus used at the external device pins for simulation. Magwel’s PTM offers a wide range of easy-to-use options for this. The most basic method is to simply set a constant voltage or current. The user can select the operating temperature for each simulation. There is also a voltage controlled voltage source (VCVS) mode for modeling the device pin voltage as a proportional function of a probe voltage in the device. This is exceptionally useful for working with circuits that have replica or sensing devices.

With the inputs described above, PTM can provide voltage values at every point in the device. Designers can also view the current density throughout each layer. Thresholds for current density can be set to flag potential electromigration violations. In addition to output reports and exportable CSV files, users can open a field view for full visualization of the device, easing debugging and optimization.

Magwel’s PTM is used by many leading converter circuit design companies. Silicon validation results show correlation within a percent or two. When optimizing the design, designers can make provisional changes to the device geometry and pin locations and quickly rerun simulations for what-if analysis, without iterating back through the layout tools. More information on the PTM suite of tools is available on the Magwel website.


Bespoke Silicon is Coming, Absolutely!
by Daniel Nenni on 04-18-2022 at 6:00 am

It was nice to be at a live conference again. DesignCon was held at the Santa Clara Convention Center, my favorite location, and to me it felt like a back-to-normal crowd. The sessions I attended were full and the show floor was busy. Masks and vaccinations were not required; maybe that was it. Or maybe there was pent-up demand to re-engage with the semiconductor ecosystem? Either way it was a great conference, absolutely.

SemiWiki stalwart companies Cadence, Ansys, Siemens, and Samtec were all there. We will have more coverage of their talks over the next week or two. SemiWiki newcomer Xpeedic was there and we will be covering their new announcement as well.

The first panel I attended in the Chip Head Theater was titled Bespoke Silicon: How System Companies are Driving Chip Design. The panelists were John Lee, GM, Semiconductor, Electronics, Optics BU at Ansys; Rob Aitken, Arm Fellow and Director of Technology at Arm Research; and Prashant Varshney, Head of Product, Silicon Vertical at Microsoft Azure.

This panel was set up to explore the trend of system/software companies deciding they need semiconductor solutions that cannot be bought off the shelf. Some prominent examples of this are Meta, Amazon, Microsoft, and Google, which are all defining and designing their own chips. An understanding of what is driving this market trend also gives insight into how it impacts the technical demands on Ansys’ simulation/analysis products.

Why are these companies doing this? The background enabler is, of course, the internet and the pervasive digitalization of society and the economy. But more specifically, it is a confluence of advances in AI/ML algorithms together with semiconductor systems that have become big and complex and capable enough to actually move the needle for an entire business division. Take, for example, Meta’s vision for a VR-enabled future: it all depends critically on the technical capability of the optical headset as well as the power of the AI algorithms driving it – which itself requires a lot of silicon to execute.

Microsoft’s gaming division is only competitive to the degree that its Xbox can stay at the cutting edge of graphics processing. Amazon Web Services finds its cost structure tied to the price, performance and power profile of the CPUs it uses to power its data centers, so it developed its proprietary Graviton2 microprocessor in collaboration with Arm. There are very interesting business dynamics resulting from this that the panel explored.

At the lower, technical level this evolution is driven on the one hand by advances in AI/ML techniques, and on the other hand by advances in integration density with 3D-IC that have accelerated past reliance on just Moore’s Law. We see that the latest HPC products from AMD, Nvidia, and Intel are all multi-die chiplet systems. The recent industry collaboration on the release of the UCIe spec indicates how seriously these companies take the 3D-IC revolution as an enabler for the systems they want to build. Not to mention that AI/ML algorithms are driving a leap in design sizes all on their own – see the wafer-scale engine from Cerebras, which is explicitly targeted at ML training.

What this means from the Ansys point of view is that they are being called on to analyze increasingly large and complex multi-die systems. That is where the analysis/signoff market is going. However, the technical challenge extends well beyond sheer capacity (which alone makes a cloud strategy a must-have for EDA tools). Even more challenging is the emergence of new physical effects that need to be simulated. So, the 3D-IC problem is not just quantitatively bigger, it is also qualitatively different. We call this the multiphysics challenge of 3D-IC.

The primary new physics is thermal analysis since heat dissipation is often the #1 limiting factor on these advanced designs (part of Cerebras’ secret sauce is how they manage to cool their ~15kW wafer). Of course, thermal analysis is not new but it is to most chip designers. It is an example of how chip, package, and PCB design is collapsing into a single design problem. Furthermore, thermal analysis screams out for a computational fluid dynamics simulation engine to model how the air flow and heatsink interact to set boundary conditions for the 3D-IC module. That’s another modeling physics pulled into the mix. And then there are the mechanical stress/warpage issues from having differential thermal expansion in various parts of the 3D-IC stack. Add a mechanical modeling engine to the mix.

One last example of new physics being jammed into the 3D-IC design problem space: electromagnetic analysis of high-speed signals. You see, what makes 3D-IC integration fundamentally different from just placing two packaged chips next to each other on a PCB is that the inter-chip communication is very low-power and very high-bandwidth. If that can be done, then we can minimize the power/performance cost of going off-chip. But these interconnect traces absolutely require electromagnetic simulation for interference and coupling. How many digital designers are familiar with EM simulation?

Bottom line: The manufacturing process allows us to produce very fine-grain electrical integration of multiple chips. But the success of this market, which is driven in large part by bespoke silicon projects, is gated by the ability of designers to model, simulate, and verify the electrothermal interactions. I believe that is where the true bottleneck to adoption lies, and something Ansys tools are uniquely positioned to alleviate.

Also read:

Webinar Series: Learn the Foundation of Computational Electromagnetics

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 4

The Clash Between 5G and Airline Safety


Quantum Computing Trends
by Ahmed Banafa on 04-17-2022 at 10:00 am

Quantum Computing is the area of study focused on developing computer technology based on the principles of quantum theory. Tens of billions of dollars of public and private capital are being invested in quantum technologies. Countries across the world have realized that quantum technologies can be a major disruptor of existing businesses, and they collectively invested $24 billion in quantum research and applications in 2021 [1].

A Comparison of Classical and Quantum Computing

Classical computing relies, at its ultimate level, on principles expressed by Boolean algebra. Data must be processed in an exclusive binary state at any point in time, in the units we call bits. While the time that each transistor or capacitor needs to be in a 0 or 1 state before switching is now measurable in billionths of a second, there is still a limit to how quickly these devices can be made to switch state.

As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold for classical laws of physics to apply. Beyond this, the quantum world takes over. In a quantum computer, elementary particles such as electrons or photons can be used, with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit, and the nature and behavior of these particles form the basis of quantum computing [2]. Classical computers use transistors as the physical building blocks of logic, while quantum computers may use trapped ions, superconducting loops, quantum dots or vacancies in a diamond [1].

Physical vs Logical Qubits

When discussing quantum computers with error correction, we talk about physical and logical qubits. Physical qubits are the actual hardware qubits in a quantum computer, whereas logical qubits are groups of physical qubits that we use as a single qubit in our computation to fight noise and improve error correction.

To illustrate this, let’s consider an example of a quantum computer with 100 qubits. Let’s say this computer is prone to noise; to remedy this, we can use multiple physical qubits to form a single, more stable qubit. We might decide that we need 10 physical qubits to form one acceptable logical qubit. In this case we would say our quantum computer has 100 physical qubits which we use as 10 logical qubits.

Distinguishing between physical and logical qubits is important. There are many estimates of how many qubits we will need to perform certain calculations, but some of these estimates talk about logical qubits and others talk about physical qubits. For example, to break RSA cryptography we would need thousands of logical qubits but millions of physical qubits.
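
A toy calculation makes the distinction concrete; the overhead factors below are assumptions for illustration, since real error-correction overheads depend on the code and error rates:

```python
# Toy arithmetic for the physical-vs-logical distinction. The overhead
# factor (physical qubits per logical qubit) is an illustrative assumption.
def to_logical(physical: int, overhead: int) -> int:
    return physical // overhead

def to_physical(logical: int, overhead: int) -> int:
    return logical * overhead

print(to_logical(100, 10))       # the example above: 10 logical qubits
print(to_physical(4000, 1000))   # thousands of logical qubits become millions
                                 # of physical qubits at an assumed 1000x overhead
```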

Another thing to keep in mind: in a classical computer, compute power increases linearly with the number of transistors and clock speed, while in a quantum computer compute power increases exponentially with the addition of each logical qubit [4].

Quantum Superposition and Entanglement

The two most relevant aspects of quantum physics are the principles of superposition and entanglement.

Superposition: Think of a qubit as an electron in a magnetic field. The electron’s spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. According to quantum law, the particle enters a superposition of states, in which it behaves as if it were in both states simultaneously. Each qubit utilized could take a superposition of both 0 and 1. Where a 2-bit register in an ordinary computer can store only one of four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a quantum computer can store all four numbers simultaneously, because each qubit represents two values. If more qubits are added, the capacity expands exponentially.
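
A minimal sketch of that exponential growth, using a plain NumPy state vector (each added qubit doubles the number of amplitudes the register holds):

```python
import numpy as np

# A 2-qubit register in equal superposition holds amplitudes for all
# four configurations 00, 01, 10, 11 at once; n qubits give 2**n.
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # one qubit: (|0> + |1>)/sqrt(2)
register = np.kron(plus, plus)             # tensor product of two qubits
print(register)                            # [0.5 0.5 0.5 0.5]
print(len(np.kron(register, plus)))        # add a qubit: 8 = 2**3 amplitudes
```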

Entanglement: Particles that have interacted at some point retain a type of connection and can be entangled with each other in pairs, in a process known as correlation. Knowing the spin state of one entangled particle – up or down – allows one to know that the spin of its mate is in the opposite direction. Quantum entanglement allows qubits that are separated by incredible distances to interact with each other instantaneously (not limited to the speed of light). No matter how great the distance between the correlated particles, they will remain entangled as long as they are isolated. Taken together, quantum superposition and entanglement create an enormously enhanced computing power [3].
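
A small simulation of the correlated readout described above, again as a plain NumPy sketch (the Bell state chosen here reproduces the always-opposite spins of the example):

```python
import numpy as np

# Bell state (|01> + |10>)/sqrt(2): measuring one qubit fixes the other
# to the opposite value. Amplitude order is |00>, |01>, |10>, |11>.
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)

rng = np.random.default_rng(seed=1)
probs = np.abs(bell) ** 2                    # Born rule: |amplitude|^2
for outcome in rng.choice(4, size=5, p=probs):
    q0, q1 = (outcome >> 1) & 1, outcome & 1
    print(q0, q1)                            # always opposite: 0 1 or 1 0
```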

Quantum computers fall into four categories [1]:

  1. Quantum Emulator/Simulator
  2. Quantum Annealer
  3. Noisy Intermediate Scale Quantum (NISQ)
  4. Universal Quantum Computer – which can be a Cryptographically Relevant Quantum Computer (CRQC)

Quantum Emulator/Simulator

These are classical computers that you can buy today that simulate quantum algorithms. They make it easy to test and debug a quantum algorithm that someday may be able to run on a Universal Quantum Computer (UQC). Since they don’t use any quantum hardware, they are no faster than standard computers.

Quantum Annealer

A special-purpose quantum computer designed to run only combinatorial optimization problems, not general-purpose computing or cryptography problems. While they have more physical qubits than any other current system, they are not organized as gate-based logical qubits. Currently this is a commercial technology in search of a future viable market.

Noisy Intermediate-Scale Quantum (NISQ) Computers

Think of these as prototypes of a Universal Quantum Computer – with several orders of magnitude fewer qubits. They currently have 50-100 qubits, limited gate depths, and short coherence times. Because they are several orders of magnitude short on qubits, NISQ computers cannot perform any useful computation; however, they are a necessary phase in the learning curve, especially to drive total system and software learning in parallel with the hardware development. Think of them as the training wheels for future universal quantum computers.

Universal Quantum Computers / Cryptographically Relevant Quantum Computers (CRQC)

This is the ultimate goal. If you could build a universal quantum computer with fault tolerance (i.e., millions of error-corrected physical qubits resulting in thousands of logical qubits), you could run quantum algorithms in cryptography, search and optimization, quantum systems simulations, and linear equation solvers.

Post-Quantum / Quantum-Resistant Codes

New cryptographic systems would be secure against both quantum and conventional computers and can interoperate with existing communication protocols and networks. The symmetric key algorithms of the Commercial National Security Algorithm (CNSA) Suite were selected to be secure for national security systems usage even if a CRQC is developed. Cryptographic schemes that commercial industry believes are quantum-safe include lattice-based cryptography, hash trees, multivariate equations, and supersingular isogeny elliptic curves [1].

Difficulties with Quantum Computers [2]

• Interference – During the computation phase of a quantum calculation, the slightest disturbance in a quantum system (say a stray photon or a wave of EM radiation) causes the quantum computation to collapse, a process known as decoherence. A quantum computer must be totally isolated from all external interference during the computation phase.

• Error correction – Given the nature of quantum computing, error correction is ultra-critical: even a single error in a calculation can cause the validity of the entire computation to collapse.

• Output observance – Closely related to the above two, retrieving output data after a quantum calculation is complete risks corrupting the data.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

1. https://www.linkedin.com/pulse/quantum-technology-ecosystem-explained-steve-blank/?

2. https://www.bbvaopenmind.com/en/technology/digital-world/quantum-computing-and-ai/

3. https://phys.org/news/2022-03-technique-quantum-resilient-noise-boosts.html

4. https://thequantuminsider.com/2019/10/01/introduction-to-qubits-part-1/

Also read:

Facebook or Meta: Change the Head Coach

The Metaverse: A Different Perspective

Your Smart Device Will Feel Your Pain & Fear


Tesla: Canary in the Coal Mine
by Roger C. Lanctot on 04-17-2022 at 6:00 am

The automotive industry is tied up in knots over cybersecurity. Consumers expect their cars to be secure. Car makers spend millions on securing cars, but don’t know how, what, or even whether to charge consumers for security.

Meanwhile, most cyber penetration reports to organizations such as the Auto-ISAC are related to enterprise attacks. The only cars being regularly hacked are Teslas. Tesla is effectively the automotive industry’s canary in a coal mine.

Like the proverbial canary in a cage in a coal mine – whose asphyxiation might serve as a warning to miners – the high-profile attacks on Teslas – the latest reported by a German teenager – are a persistent reminder of what is in store for the rest of the industry. While there have been other notorious attacks (like the infamous Jeep hack of 2015), Tesla has been the target of everyone from teenagers to professional Chinese hacking organizations.

A consumer survey was released by vehicle infrastructure supplier Sonatus last week under the provocative headline: “Sonatus Survey Shows Majority of Consumers Would Spend Big to Alleviate Automotive Cybersecurity Concerns.” The survey found that “despite seemingly constant headlines about automotive cybersecurity breaches, over a third of respondents are not concerned about their vehicles being hacked.” Most automotive industry participants would consider that percentage of unconcerned respondents a little on the low side.

The Sonatus release continues: “Most of the surveyed consumers who did have cybersecurity concerns expressed a willingness to pay a premium for added security features, with nearly 60% of all consumers willing to spend at least $250, and 30% willing to spend at least $1,000.” This finding would be greeted with skepticism by most. The disconnect may reflect a definition of “security” within which Sonatus appears to have included vehicle theft.

Says the Sonatus press release: “With regards to specific concerns over what a hacker might do if they were to infiltrate a vehicle’s security system, consumers are most concerned about their vehicle being physically stolen, which is not something typically associated with cybercrime. 60% cited this as a key concern, compared to 55% that reported concerns of hackers gaining access to their personal data, 53% that have concerns about location tracking, and 52% that are concerned about hackers interfering with driving capabilities.”

Alas, Sonatus polluted its cybersecurity interest level findings (too high) with stolen vehicle and privacy violation concerns – after all, your phone is more likely to be tracked than your car. Cybersecurity is a problematic issue because consumers are less familiar with the likely scenarios associated with cyber vehicle crime – such as ransomware that might lock out a vehicle owner or brick the car by preventing it from being started.

While the spectacular Jeep hack, with its demonstration of remote control was alarming, anyone hacking a car is more likely to be after financial gain of some kind – not a remote joy ride of someone else’s car. What is really remote is the potential for a terrorist attack.

Most vehicle attacks in the news have been for sport and have typically involved disabling a car or remotely activating functions for fun. This contributes to the uneasy confidence of auto makers that continue to invest in hardening their vehicles and their networks in anticipation of an attack that has yet to materialize.

The low level of threat activity directed at vehicles is deceptive. With thousands of suppliers working with the typical auto maker, the level of vulnerability is extraordinarily high. This is especially so when taking into account networks of dealers and independent servicers.

You can add to the risk profile dozens of in-vehicle electronic control units, multiple in-vehicle networks, and a dozen or more wireless connections to the car. In addition, electric vehicles are not only interacting with network operating centers and telematics service providers, they are also plugging into the power grid.

The list of companies providing cybersecurity solutions is long and growing. These companies are targeting everything from in-vehicle gateways and ECUs to car maker network operating centers and engineering operations. At the same time, semiconductor suppliers themselves are building secure elements directly into their devices.

All of this points toward the dashboard-ization of vehicle management. Any car maker worth its welds is going to want a command center where the entire connected fleet can be monitored in real time for physical crashes or cyber penetrations. Some have had this in place for years.

But day after day it is Tesla bearing the brunt of vehicle-centric attacks, while legacy auto makers contend with hackers targeting their enterprise operations. The Sonatus survey highlights the growing awareness of cybersecurity among consumers, but it misrepresents the willingness of consumers to pay for cyber protection.

Consumers expect this protection and auto makers must provide it. In the end, it boils down to the value and reputation of a brand and how it is perceived by consumers. This is a question of consumer confidence, customer retention, and cost avoidance.

It’s time for auto makers to start establishing their cybersecurity credentials – along with theft and privacy protection. Tesla has established a reputation for paying hacker bounties for finding vulnerabilities and also for rapidly fixing them. It’s time to pay attention to that canary.

Also Read:

ISO 26262: Feeling Safe in Your Self-Driving Car

Chip Shortage Killed the Radio in the Car

A Blanche DuBois Approach Won’t Resolve Traffic Trouble


Podcast EP71: Critical Enablers for the Custom Silicon Revolution
by Daniel Nenni on 04-15-2022 at 10:00 am

Dan is joined by Dr. Elad Alon, CEO and co-founder at Blue Cheetah Analog Design. Elad’s experience includes serving as Professor of EECS at UC Berkeley and co-director of the Berkeley Wireless Research Center, along with consulting or visiting positions with many global semiconductor companies.

Dan and Elad explore the trends driving increasing custom chip development, the complexity of the process, and how analog mixed-signal and chiplet strategies are becoming critical enablers for success.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Lost Opportunity for 450mm
by Scotten Jones on 04-15-2022 at 6:00 am

I spent several days this week at the SEMI International Strategy Symposium (ISS). One of the talks was “Can the Semiconductor Industry Reach $1T by 2030,” given by Bob Johnson of Gartner. His conclusion was that $1 trillion is an aggressive forecast for 2030, but certainly we should reach $1 trillion in the next 10 to 12 years. He also noted that the industry would need to nearly double to achieve this forecast (a 73% increase in wafer output). He further forecast ~25 new memory fabs at 100K wafers per month (wpm) and 100 new logic or other fabs at 50K wpm (300mm). It immediately struck me: where are we going to build all these fabs, where will the people come from to run them, and where will we get the required resources? Wafer fabs are incredibly energy and water intensive and produce large quantities of greenhouse gases.

At the same conference there was a lot of discussion of environmental impact. Across the entire semiconductor ecosystem there is growing awareness and actions to reduce our environmental impact – reuse, reduce, recycle.

What does this have to do with 450mm wafers, you ask?

A 450mm wafer has 2.25 times the area of a 300mm wafer. If you build 450mm wafer fabs with the same wpm output as 300mm fabs, you need approximately 2.25 times fewer fabs (even fewer due to lower edge-die losses): 25 memory fabs becomes 11 memory fabs, and 100 logic or other fabs becomes 44 fabs. These are much more manageable numbers of fabs to build.
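
The arithmetic is easy to check in a couple of lines; the rounding matches the counts above, and the edge-die gains would improve the effective ratio slightly further:

```python
area_ratio = (450 / 300) ** 2       # wafer area scales with diameter squared
print(area_ratio)                   # 2.25

for fabs_300mm in (25, 100):        # the forecast's memory and logic fab counts
    print(fabs_300mm, "->", round(fabs_300mm / area_ratio), "450mm fabs")
# 25 -> 11 and 100 -> 44, matching the numbers above
```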

The number of people required to run a fab is largely driven by the number of wafers; by running fewer, bigger wafers, the headcount required is reduced.

When 450mm was being actively worked on, the goals were the same tool footprint for the same wafer throughput (likely not achievable) and the same chemical, gas, and utility usage per wafer – a 2.25x reduction in usage per unit area. There was a recognition that beam tools, such as exposure, implant, and some metrology tools where the wafer surface is scanned, would have lower throughput, but even accounting for this my simulations projected a net cost reduction per die for 450mm of 20 to 25%.

Unfortunately, the efforts to develop 450mm have ended and the only 450mm wafer fab has been decommissioned. The 450mm effort was different from past wafer size conversions: at 150mm Intel was the company that led the transition and paid for a lot of the work, and at 200mm it was IBM. At 300mm a lot of the cost was pushed onto the equipment companies, and they were left waiting a long time to recover their investments. At 450mm, once again the costs were being pushed onto the equipment companies, and they were very reluctant to accept this situation. In 2014 Intel (one of the main drivers of 450mm) had low utilization rates and an empty Fab 42 shell, so they pulled their resources off 450mm, TSMC backed off, equipment companies put their development efforts on hold, and 450mm died.

At this point it is likely too late to revive 450mm; ASML has its hands full just trying to produce enough EUV systems and getting high-NA into production. High-NA EUV systems for 300mm are already enormous, difficult-to-transport systems; making much bigger 450mm versions would be an unprecedented engineering challenge. I do think there is an important lesson for the semiconductor industry here. The semiconductor companies have a long history of short-sighted squeezing of their suppliers on price, often to their own long-term detriment. Starting wafers are an excellent example: prices have been driven down so low that it isn’t economical for the wafer manufacturers to invest in new capacity, and now the industry is facing shortages. It is only shortage-driven price increases that are now finally making new investment economical.

Over the next decade, as we potentially double our industry while trying to reduce our environmental footprint, our task would be much easier with 450mm wafers, but unfortunately our inability to work together and unwillingness to take a long-term view has left us without this enhancement in our tool kit.

Also Read:

Intel and the EUV Shortage

Can Intel Catch TSMC in 2025?

The EUV Divide and Intel Foundry Services


5G Core – Building An Open, Multi-Vendor Ecosystem
by Kalar Rajendiran on 04-14-2022 at 10:00 am

For those not familiar with Fierce Technology, this firm offers a one-stop place for news, analysis and education in the areas of telecom, wireless, sensors and all related electronics markets. They organize popular events such as 5G Blitz Week, the Sensors Innovation Week series, Sensors Converge and many more. These events facilitate the exchange of critical information among industry professionals, with the intended goal of accelerating the advancement of the industry as a whole.

Fierce Technology hosted their annual “5G Blitz Week: Spring Edition” as a virtual event in March. The topics covered included fixed wireless access (FWA), Open RAN, private networks, open core and more, along with the related opportunities, applications and deployment challenges. The 4-day event was packed with interesting talks and panel discussions on 5G evolution and the roadmap to 6G. One such discussion was a panel session titled “5G Core – Building An Open, Multi-Vendor Ecosystem,” with participants representing Achronix, Google, HCL, Meta and Red Hat.

The discussion was moderated by Dave Bolan, Research Director at the Dell’Oro Group. The session started with an opening keynote by Ersilia Manzo, Director, Global 5G Solutions at Google. The panelists were Nick Ilyadis, senior director of product planning at Achronix, Parviz Yegani, VP CTO, Office Industry Software Division at HCL, Xinli Hou, Connectivity Technology & Ecosystem Manager at Meta and Fatih Nar, Chief Architect Telco Solutions at Red Hat. Fatih also delivered a closing keynote.

It is appropriate to present a couple of slides from Fatih’s closing keynote before synthesizing the panel discussion.

In the age of open-source, open architecture, open ASIC, etc., organizations such as the Telecom Infra Project (TIP) have been working toward accelerating the development and deployment of open standards. The question is, will an open core ecosystem become a reality for communications service providers (CSPs)? What is needed from the different players within the ecosystem? These are some of the questions that this panel session addressed. The following are excerpts; you can listen to the entire session on demand by registering at Fierce Technology.

Google – Ersilia Manzo (opening keynote)

Over the last decade or so, the relationship between telcos and vendors has changed. The focus has shifted from “what to build” to “how to build.” The telco industry has been moving toward a cloud-native deployment and evolving to a service delivery platform, eliminating the boundaries between network and services. Elements that were shipped with the network functions such as the orchestrator and operating systems are no longer integrated but are delegated to the cloud-native platform. Cloud-native networking offers the benefits of agility, flexibility and cost-efficiencies to CSPs and is expected to become the dominant approach in the future.

With the open-standard based approach and the resulting disaggregation, more parties have to get involved to build the network. In the past, the industry relied on interoperability of finished products. Now, the industry needs co-development and cooperation between the parties that are building the networks.

Google believes there are three main elements that are critical to the success of the cloud-native, multi-vendor 5G network.

  • Standards: These are no longer just documents that are produced over months or years by industry organizations. Now, standards include code releases such as Kubernetes and PDKs that help accelerate services deployment and shorten validation cycles.
  • Contributions: The process of fixing and contributing code back as a community is key.
  • Partnerships: Must truly involve cooperation and co-development.

Achronix – Nick Ilyadis

The demand for high performance at low latencies without compromising on power has led to the use of heterogeneous compute platforms to support many of today’s applications. A heterogeneous compute platform is a data processing unit (DPU) that may include a combination of CPUs, GPUs, ASICs, and FPGAs. The DPU is essentially an offload engine for the main CPU, performing hardware acceleration of data processing. As the 5G Core is deployed and the standards evolve, there is going to be a need to offload the CPU and accelerate the 5G Core. These offload engines will allow the 5G Core to scale to higher capacities without the cost and burden of building more and more server installations.

Achronix’s products are data accelerators, with their current high-end FPGAs supporting multiple 400G ports, PCIe Gen5, 4 Terabits per second of memory bandwidth, machine learning inference and more. These capabilities enable field-level adaptability and extensibility for the 5G Core as the standards and customization requirements evolve. Incorporating reprogrammable hardware within a 5G Core implementation is a great way to accelerate deployment of open core 5G infrastructure in a multi-vendor ecosystem. For more details, you can download a recently published whitepaper titled “Enabling the Next Generation of 5G Platforms.”

HCL – Parviz Yegani

There are many use cases/scenarios to handle in a multi-vendor, multi-technology, multi-domain 5G environment. The domains include the Radio Access Network, the Transport Network and the 5G Core Network. A good solution should be able to support this requirement and allow any vendor to plug their offering into the platform.

HCL is working on Augmented Network Automation (ANA), a next-generation evolution of the Self-Organizing Network (SON). This network management platform enables proactive network management, which is key to the success of 5G Core adoption and deployment. ANA facilitates and allows for the inclusion of various software solutions from 5G radio vendors, RAN vendors, and network management vendors. A key feature of the ANA platform is its unified management console, centered around comprehensive data visibility.

Meta – Xinli Hou

While Meta is not a 5G vendor or service provider, they do drive the advancement of connectivity through their involvement in projects such as the Telecom Infra Project (TIP). Compared to the wireless market of the past, the 5G marketplace attracts many varied use cases, thereby fragmenting the market. This necessitates customizing solutions per use case. The Open Core Network initiative within the TIP effort is focusing on what can be done to enable faster adoption of open core, cloud-native 5G by more service providers serving these fragmented market segments.

Red Hat – Fatih Nar

Zero touch provisioning is a hot topic these days. A key aspect of zero touch provisioning is ensuring security and trust. A Zero Trust architecture should be foundational to 5G. Red Hat is deeply involved in how Zero Trust can be implemented with Open Core, 5G solutions in a multi-vendor environment.

The 3rd Generation Partnership Project (3GPP) as a telecom body, focuses on standards that dictate how mobile applications work with each other. But defining applications’ scalability and maintainability falls on the shoulders of the vendors. Red Hat works with vendors on implementing scalability, to manage costs depending on the traffic demands.

Also Read:

Benefits of a 2D Network On Chip for FPGAs

5G Requires Rethinking Deployment Strategies

Integrated 2D NoC vs a Soft Implemented 2D NoC


WEBINAR: How to Improve IP Quality for Compliance
by Daniel Nenni on 04-14-2022 at 6:00 am

Establishing traceability is critical for many organizations — and a must for those who need to prove compliance. Too often, the compliance process is manual, leading to errors and even delays. A simple clerical mistake can invalidate results and lead to larger issues throughout the product’s lifecycle. Developing a unified, IP-centric platform can help organizations improve overall quality while meeting compliance standards, like ISO 26262.

Compliance standards, such as ISO 26262, require the SoC developer to collect and document evidence of compliance during the design process. These documents need to prove that requirements have been met by tracing tests and test results back to the requirements on an IP. They also need to show that “defensive” design techniques have been used.

Save your seat >>

The Perforce/Methodics IPLM platform is designed to have IP at the center of a compliance workflow. So, what is an IP? An IP is an abstract model that combines the design data files (that define its implementation) and metadata (that defines its state). Although this model is well-known in the semiconductor industry, it can revolutionize a business by creating full transparency into how IP objects evolve across projects and teams.

By centralizing IP management, designers and developers can collaborate inside their tools while creating a traceable flow from requirements, through design, to verification. This is because all software, firmware, and hardware IP metadata is stored in a single layer on top of a data management system. This metadata comprises information such as dependencies, permissions, hierarchy, properties, usage, and more. With IPLM, organizations can automate processes by using metadata collected and stored through the design and verification steps to automatically build FuSa compliance documentation.

There are other advantages when moving to an IP-centric workflow besides meeting compliance. By attaching relevant metadata to each IP, organizations can have a single source of truth that enables reuse across projects and teams. Making the design transparent allows individual blocks to evolve at their own pace, boosting innovation and cutting down on development costs. Because all the information around an IP is managed with Perforce IPLM Software, organizations can retain the context and connection back to the rest of the design, as well as the requirements. This improves overall quality while meeting regulatory standards.

Furthermore, since everything inside of IPLM is treated as an IP, this enables the creation of a full system-level hierarchical Bill of Materials (BoM). This facilitates the generation of correct-by-construction full system configurations, including the desired versions of all hardware, software, and firmware design IPs as specified by the project-level IP hierarchy. This enables traceability from the silicon back to the exact IP BoM used for tape-out. It also helps to eliminate costly errors introduced by manual and outdated methods of configuration management, such as spreadsheets or simple text files. These errors could lead to delayed tape-outs, improperly functioning silicon, ECOs, and mask re-spins.
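
To picture the idea, here is a purely hypothetical sketch; it is not the Perforce/Methodics IPLM API, just an illustration of how a versioned, hierarchical IP BoM can be flattened for traceability:

```python
# Hypothetical sketch of the concept only: each IP pins exact versions
# of its children, so flattening the hierarchy gives a reproducible BoM.
from dataclasses import dataclass

@dataclass(frozen=True)
class IP:
    name: str
    version: str
    children: tuple["IP", ...] = ()

def flatten_bom(ip: IP, depth: int = 0):
    """Walk the IP hierarchy, yielding (depth, 'name@version') entries."""
    yield depth, f"{ip.name}@{ip.version}"
    for child in ip.children:
        yield from flatten_bom(child, depth + 1)

# Invented example hierarchy: the SoC pins exact child versions.
soc = IP("soc_top", "2.1", (
    IP("cpu_cluster", "1.4", (IP("l2_cache", "1.4.2"),)),
    IP("boot_firmware", "0.9"),
))
for depth, entry in flatten_bom(soc):
    print("  " * depth + entry)
```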

Learn more about IP quality and compliance from Wayne Kohler — Senior Solutions Engineer at Perforce. Join a live discussion with him on Wednesday, April 27, 2022, at 12:00 PM – 1:00 PM CDT. He’ll review how to build a platform to improve traceability and what you need to consider when complying with ISO 26262.

Save your seat >>

Also read:

Future of Semiconductor Design: 2022 Predictions and Trends

Webinar – SoC Planning for a Modern, Component-Based Approach

You Get What You Measure – How to Design Impossible SoCs with Perforce


Intel and the EUV Shortage
by Scotten Jones on 04-13-2022 at 10:00 am

In my “The EUV Divide and Intel Foundry Services” article available here, I discussed the looming EUV shortage. Two days ago, Intel announced that the first EUV tool installed at their new Fab 34 in Ireland is a tool they moved from Oregon. This is another indication of the scarcity of EUV tools.

I have been tracking EUV system production at ASML to date, along with forecast output looking forward. I have also been looking at fabs that have been built and equipped, and at fab announcements, to estimate the future requirement for EUV tools.

My approach is as follows (a toy sketch of this bookkeeping appears after the list):
  • List out each EUV-capable fab by company with process type/node and capacity by year. I estimate how many EUV exposures are required for each process and convert this to an EUV layer count forecast by year (exposures x capacity).
  • For each year, I look at the type(s) of EUV tools ASML produces and estimate the throughput by tool type for logic and memory processes.
  • Offset the required tools in time to account for the gap between a tool's delivery and the tool being in production.
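
A toy version of this bookkeeping might look like the sketch below; every fab name, capacity, exposure count, and throughput figure is a placeholder assumption for illustration, not an input from my actual forecast.

```python
# Toy EUV demand model: demand in wafer-layers/month divided by the
# wafer-layers/month one tool can expose. All numbers are placeholders.
fabs = [
    # (fab, wafers per month, assumed EUV exposures per wafer)
    ("logic_fab_A",   50_000, 15),
    ("memory_fab_B", 100_000,  5),
]
TOOL_WPH, UPTIME = 160, 0.75          # assumed throughput and availability
layers_per_tool_month = TOOL_WPH * 24 * 30 * UPTIME

demand = sum(wpm * exposures for _, wpm, exposures in fabs)
print(f"{demand:,} wafer-layers/month "
      f"-> ~{demand / layers_per_tool_month:.1f} EUV tools")
```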
Some notes about demand:
  • Intel currently has 3 development fab phases that are EUV capable and 1 EUV-capable production fab, although only the development fab has EUV tools installed. Intel is building 8 more EUV-capable production fabs.
  • Micron Technology has announced they are pulling in EUV from the one delta node to one gamma. Micron’s Fab 16-A3 in Taiwan is under construction to support EUV.
  • Nanya has talked about implementing EUV.
  • SK Hynix is in production of one alpha DRAM using EUV for approximately 5 layers and has placed a large EUV tool order with ASML.
  • Samsung is using EUV for 7nm and 5nm logic and ramping up 3nm. Samsung also has 1z DRAM in production with 1 EUV layer and 1 alpha ramping up with 5 EUV layers. Fabs in Hwaseong and Pyeongtaek have EUV tools with significant expansion in Pyeongtaek underway and the planned Austin logic fab will be EUV.
  • TSMC has Fab 15 phases 5, 6, and 7 running 7nm EUV processes. Fab 18 phases 1, 2, and 3 are running 5nm with EUV. 5nm capacity ended 2021 at 120k wpm and has been projected to reach 240k wpm by 2024. Fab 21 in Arizona will add an additional 20k wpm of 5nm capacity. 3nm is ramping in Fab 18 phases 4, 5, and 6 and is projected to be a bigger node than 5nm. Fab 20 phases 1, 2, 3, and 4 are in the planning stages for 2nm, and another 2nm site is being discussed.

Based on all of these fabs and our estimated timing and capacity, we get Figure 1.

Figure 1. EUV Supply and Demand.

Figure 1 leads to a couple of key observations:

  • There will be more demand for EUV tools than supply in 2022, 2023, and 2024. Our latest forecast is a shortage of 18 tools in 2022, 12 tools in 2023 and 20 tools in 2024.
  • Looking at the logic companies, where the bulk of EUV demand is, TSMC has the most EUV systems with roughly one half of the systems in the world; Samsung is next, and then Intel. Of the three companies, Intel will likely be the most constrained by the supply of EUV tools. It wasn’t that long ago that Intel was pushing out EUV tool orders, likely a mistake they wish they could take back.

In summary, over at least the next three years, leading edge EUV based capacity will be constrained by the scarcity of EUV tools with Intel likely to be hardest hit.

Also read:

Can Intel Catch TSMC in 2025?

The EUV Divide and Intel Foundry Services

Samsung Keynote at IEDM


Python in Verification. Veriest MeetUp
by Bernard Murphy on 04-13-2022 at 6:00 am

Veriest recently held a meetup on a topic that has always made me curious – the use of Python in verification. The event, moderated by Dusica Glisic (technical marketing manager at Veriest), started with an intro from Moshe Zalcberg (CEO of Veriest) and talks by Avidan Efody (Apple verification) and Tamás Kállay (team leader, Veriest). I know Moshe is a fan of this concept as an example of extending gains in SW development to the HW world. This meetup digs deeper into Python in verification.

Flows and stupid verification tasks

Avidan has a background as a verification expert from Amazon, Intel and Apple, which makes him a serious authority in my view. He was careful to stress that none of what he talked about here should be interpreted as methodology at his current employer. Here, he was simply synthesizing the know-how he has gained over many years of using Python in his day-to-day verification activities. He also stressed that he is a verification expert using Python, not a Python expert drafted into verification.

This talk was an excellent introduction to “Why Python?” in verification. Consider Python’s assets. Many of us, not just in hardware design, already know and use the language. Python supports access to version control systems and readers and writers for virtually any format. It has increasing support from EDA companies and is already used in many production CAD flows. It has support for databases, CI/CD flows, etc., and is widely understood and supported for questions on, for example, StackOverflow.

From an applications point of view, Avidan cited five (the last one a stretch). First, building production flows such as tool wrappers and regression runners. Second, what he called stupid verification tasks: connectivity checks, clock/power gating, register checks. He made the point that tests of this type require design knowledge and spreadsheets but really don’t need SystemVerilog testbenches or randomization. Python can drive all of this. He pointed to the fact that Python can read RTL directly. There is a nice package called cocotb for testbenches, good enough for these purposes. And Python can read waveform files.
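
To give a flavor, here is a minimal sketch of such a “stupid” connectivity test in cocotb; the DUT and its signal names (clk, a, y) are hypothetical:

```python
# Minimal cocotb connectivity check: drive an input, verify it reaches
# the output. cocotb attaches this coroutine to an RTL simulator run.
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def passthrough_connectivity(dut):
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    for value in (0, 1, 0, 1):
        dut.a.value = value
        await RisingEdge(dut.clk)
        await RisingEdge(dut.clk)     # allow one cycle of pipeline delay
        assert dut.y.value == value, f"y != a for a={value}"
```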

Python for designers who hate the verification team!

I really liked his next point – “Python for designers who hate us”. His point here is that a couple of decades ago, designers were doing verification themselves but stopped, because verification split off into a separate team and became very complicated. Designers stopped verifying not because they wanted to, but because the whole process with UVM and the rest became too complicated and too slow to respond. Python provides them a way to return to unit/block testing, again using cocotb and similar packages, without having to wait on the verification team.

Avidan mentioned using Python to boost UVM flows by isolating stuff that can change quickly – sequences, configuration, assertions, checkers, etc. – minimizing recompile requirements. The final application he mentioned is the “one language to rule them all” concept – that Python could replace UVM. He’s not a believer, but he does know smart people who are pushing in this direction 😎.

Developing bringup tests

Tamás described another interesting application – developing bringup tests before silicon arrives. In this context he needs to be able to support multiple platforms such as simulation, emulation, FPGA prototyping and of course silicon when available. What is important here is a unified development interface, supporting communication over standard hardware interfaces such as PCIe, JTAG and UART. In the early stages of development this supports development and debug of the tests, and later I would guess in support of post-silicon debug.

UVM obviously plays a role in test development but needs to sit under a superstructure which can span all these platforms – and which, especially, must work equally well with first silicon. For this reason the team built a client-server structure in which the servers are the various simulation platforms or silicon. These communicate through sockets with a client written in Python and running Python tests. The rationale for using Python was that the low-level SW team were already using Python to write tests in pytest. Also, they found that many HW engineers already have at least some Python expertise, which made adoption quite painless across both teams.
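
A hypothetical sketch of what such a client might look like; the JSON-over-socket message format, port number, and register addresses are all invented for illustration:

```python
# Sketch of the client side: pytest tests talk to whichever server
# (simulator, emulator, FPGA board, or silicon) is listening.
import json
import socket

class DutClient:
    def __init__(self, host: str = "localhost", port: int = 5555):
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile("r")

    def _rpc(self, **msg):
        """Send one JSON message per line, read one JSON reply per line."""
        self.sock.sendall(json.dumps(msg).encode() + b"\n")
        return json.loads(self.reader.readline())

    def write(self, addr: int, data: int):
        self._rpc(op="write", addr=addr, data=data)

    def read(self, addr: int) -> int:
        return self._rpc(op="read", addr=addr)["data"]

def test_scratch_register():
    dut = DutClient()                 # same test runs against any backend
    dut.write(0x1000, 0xDEADBEEF)
    assert dut.read(0x1000) == 0xDEADBEEF
```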

Tamás included more detail on how they architected this system. He wrapped up by saying the approach worked well for them, with some limitations. Perhaps not surprising for an in-house development built to serve a custom purpose.

My takeaway

A few dreamers aside, production verification engineers are not aiming to replace UVM with Python. There will always be many clever things that UVM can do that Python cannot (easily). The purpose of Python development and usage around verification is to plug the holes in mainstream verification methodologies: for stupid tests, to support designers running their own verification, to speed up standard verification flows, and to support silicon bringup test development. Could you do all that in standard UVM (or PSS) flows? Perhaps as an exercise, but would it have the flexibility of Python for these often-custom applications, with minimal learning across diverse HW and SW teams? That would be a stretch, I think.

You can watch the meetup replay HERE.

Also read:

5 Talks on RISC-V

Ramping Up Software Ideas for Hardware Design

Verification Completion: When is Enough Enough?  Part II