Lam Research performing like a Lion – Chip equip on steroids
by Robert Maire on 04-29-2021 at 10:00 am

– Business is about as good as it gets- $75B WFE in 2021?
– China remains strong at 32% despite SMIC lack of license
– NAND remains 48% of revs versus 31% foundry
– DRAM steady @ 14% – Service was record $1.3B

Strong results in a strong market

Lam reported revenues of $3.85B and EPS of $7.49 for the March quarter, with record performance in many categories. Guidance was also strong at $4B ± $250M in revenues and $7.50 ± $0.50 in EPS.

The stock was down in aftermarket trading, likely in part due to an interpretation of flattish guidance. We think it’s likely Lam is being a bit more conservative, as is usual for them.

Lam remains a “memory” poster child

Memory was 62% of Lam’s business while foundry was half that at 31%. While it’s clear that the vast majority of shortage-related spending will be on the foundry side, we would not exclude Lam from getting its fair share of foundry business as well.

Investors concerned that Lam will see less of that benefit may be nervous, but the results seem to belie that concern.

A memory comeback in the second half?

DRAM spending remains reasonable (as pointed out in our earlier note and by ASML management earlier today). There likely remains room for more business there, while NAND continues its strength in high-aspect-ratio etch for multi-layer technology.

Even with record NAND revenue, foundry revenue was a record as well.

Service is large at $1.3B

Confirming a trend we have been seeing with companies in the space, the percentage of revenue from “service” continues to climb. Older fabs and equipment are being pushed hard, as are new tools.

Customers don’t want downtime and want to squeeze as much out of tools as is possible.

This trend will help build a strong defense against the eventual down cycle which always follows these exuberant times.

A fairly “boringly” good quarter despite Covid

Other than shipping costs being high and overall capacity tight, we don’t see any problems caused by Covid or related supply chain issues. There was nothing remarkable other than the record performance in several categories.
We think Covid issues are behind the industry for the most part and Lam does not have the supply chain issues of more complex technology like EUV.

Share gains in dep and etch

Management talked about share gains in both dep and etch, especially conductor etch. While good, the gains were likely not huge; we haven’t heard of big “takeaway” wins, so it is more likely a case of customer mix.

The stocks

Lam’s stock was off in the aftermarket after having a strong day on the coattails of ASML’s good news, which drove all the equipment stocks.

The flattish guide likely weighed the stock down in the aftermarket, but we think that is likely prudent, conservative guidance.

Concerns about Lam seeing less benefit from foundry-related chip shortage spending are somewhat real, but it’s not as if Lam is missing out on its fair share of foundry spend.

Lam remains a bit of a memory story, which could develop further if DRAM comes back more strongly.

At this point we think the onus is on KLA, the poster child for the foundry business, which should echo the strong foundry performance seen by ASML when it reports the quarter.

Following that, AMAT is the house that built TSMC, so they should also have a great quarter.

Also Read:

ASML early signs of an order Tsunami – Managing the ramp

It’s not a Semiconductor Shortage It’s Demand Delirium & Poor Planning

Foundry Fantasy- Deja Vu or IDM 2?


Accelerating Cache Coherence Verification
by Bernard Murphy on 04-29-2021 at 6:00 am

It would be nice if there were a pre-packaged set of assertions which could formally check all aspects of cache coherence in an SoC. In fact, formal checks do a very nice job for the control aspects of a coherent network. But that covers only one part of the cache coherence verification task. Dataflow checks are just as important, where many things can go wrong, such as unintended reads of stale data. And where performance bottlenecks will inevitably appear.

The need for directed test

In other words, directed test verification is unavoidable, whether through simulation or emulation. So what? You have to create testbenches for lots of other verification objectives. This is just another testbench, right? But directed coherence testing can get very complicated very fast. First consider the most basic test: from a single processor in a single cluster, through coherent interconnect. Does the cache in the interconnect behave correctly? Returning the value in the cache if found there, otherwise falling through to the memory controller to collect that value from off-chip memory. Repeat for every coherent master on every coherent interconnect.
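
To make the shape of that first check concrete, here is a minimal, tool-agnostic Python sketch. Every name in it is invented for illustration, not any vendor’s API: a read through the coherent interconnect must return the cached value on a hit, and otherwise fall through to a memory model.

```python
# Minimal sketch of the basic directed check: a read must be served
# from the system-level cache on a hit, and fall through to off-chip
# memory on a miss. All names are illustrative.

class CoherentInterconnectModel:
    def __init__(self, memory):
        self.cache = {}        # addr -> value, system-level cache
        self.memory = memory   # addr -> value, off-chip memory model

    def read(self, addr):
        if addr in self.cache:        # hit: serve from cache
            return self.cache[addr]
        value = self.memory[addr]     # miss: fall through to memory
        self.cache[addr] = value      # allocate on fill
        return value

def check_basic_read(master_id, addr, model, scoreboard):
    """Observed read data must match the expected value, hit or miss."""
    observed = model.read(addr)
    assert observed == scoreboard[addr], (
        f"master {master_id}: got {observed:#x} at {addr:#x}, "
        f"expected {scoreboard[addr]:#x}")

# Repeat for every coherent master on every coherent interconnect:
memory = {0x1000: 0xAB, 0x2000: 0xCD}
model = CoherentInterconnectModel(memory)
for master in range(4):
    for addr in memory:
        check_basic_read(master, addr, model, dict(memory))
```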

Now get a bit more sophisticated. Two masters, maybe two CPUs, are reading and writing at the same time. Except they’re doing so on two separate coherent networks (which must be mutually coherent). CPU-A writes a value to a memory location in cache in its network, then CPU-B wants to read the equivalent location from cache in its network. Snooping should detect the change and ensure that CPU-B picks up the correct value. Does it? Repeat for every possible such sequence and every pairwise combination of who goes first and who goes second (or even at the same time)? Also permute in the slaves that also need to access memory.
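
A hedged sketch of that pairwise scenario, again with invented names: enumerate every ordered pair of masters, have the first write a location, the second read the equivalent location, and assert that the snoop path forwards the new value rather than stale memory.

```python
from itertools import permutations

# For every ordered (writer, reader) pair, the writer updates its own
# cache; the reader's read must snoop peer caches before falling
# through to memory, or it risks returning stale data.

def run_scenario(writer, reader, addr, new_value, caches, memory):
    caches[writer][addr] = new_value      # write lands in writer's cache
    for peer, cache in caches.items():    # snoop the peer caches
        if peer != reader and addr in cache:
            return cache[addr]            # snoop hit: forward fresh data
    return memory.get(addr)               # no snoop hit: stale risk

masters = ["CPU-A", "CPU-B", "GPU", "DMA"]
memory = {0x40: 0x00}                     # stale value still in memory
for writer, reader in permutations(masters, 2):
    caches = {m: {} for m in masters}
    got = run_scenario(writer, reader, 0x40, 0x5A, caches, memory)
    assert got == 0x5A, f"{reader} read stale data after {writer} wrote"
```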

Automating test generation

The number of combinations here could spin out of control very fast. Especially when you may have more than 1000 interfaces on the network, not uncommon in datacenter SoCs. This needs thoughtful planning. Avoid confusing interaction between bugs in multiple blocks, and run faster, by using VIPs for most blocks except the ones you want to include in a given test. Avoid having to test all N²+ combinations through a carefully considered plan to meet coverage.

Which also means you need a way to automatically generate a series of tests in which first only A, B and C blocks are RTL and everything else is VIP. Then B, C and D are RTL and everything else is VIP. And so on. For each test you’d like to be able to draw on pre-defined coherency test sequences, either at the simple test level or at the system level. And monitors to check across interconnect ports. All ready to apply.
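
As a rough illustration of such a schedule (not the actual tool mechanism, which is described next), here is a Python sketch that slides an RTL window across the block list and emits one test configuration per window:

```python
# Slide a window of `window` blocks modeled as RTL across the design;
# everything outside the window is replaced with VIP. Each window is
# one test configuration.

def rtl_vip_schedule(blocks, window=3):
    for start in range(len(blocks) - window + 1):
        rtl = set(blocks[start:start + window])
        yield {b: ("RTL" if b in rtl else "VIP") for b in blocks}

blocks = ["A", "B", "C", "D", "E"]
for i, config in enumerate(rtl_vip_schedule(blocks)):
    print(f"test {i}: {config}")
# test 0: A, B, C are RTL; test 1: B, C, D are RTL; and so on.
```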

All of this is provided through the VC SoC AutoTestbench, building tests starting from a DUT IP-XACT model (or Verdi KDB) and VIP IP-XACT models. Which can then feed into the SoC Interconnect Workbench. An automated workflow to run all those zillions of tests.

Coverage can be monitored through a verification plan you can define in Verdi. Debug is handled through the Verdi Protocol Analyzer, with a protocol view, a transaction view and some pretty sophisticated filtering to isolate what is happening in transactions and resulting memory values. This is where you would pick up those (hopefully few) instances of reading a stale value.

Performance testing

The last big verification objective is for performance. It’s nice that your system works well when unstressed, but what happens when you’re running at speed and there’s a lot of traffic churning through those coherent networks? Here Synopsys provides a capability called VC VIP Auto Performance which will generate traffic following the Arm Adaptive Traffic Profile. (You need to create a test profile as input to this tool.) Subsequently you can analyze for latency and bandwidth problems in Verdi Performance Analyzer.
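
As a toy illustration of why stressed-system testing matters (a single-server queue model, not the Arm Adaptive Traffic Profile format), average latency degrades sharply as offered load approaches a port’s capacity:

```python
import random

# Drive randomly-timed packets through one interconnect port that takes
# one time unit per packet, and report mean latency versus offered load.

def mean_latency(arrival_rate, service_time, n_packets, seed=1):
    rng = random.Random(seed)
    clock, free_at, total = 0.0, 0.0, 0.0
    for _ in range(n_packets):
        clock += rng.expovariate(arrival_rate)  # next packet arrives
        start = max(clock, free_at)             # wait if the port is busy
        free_at = start + service_time          # port serves the packet
        total += free_at - clock                # arrival-to-done latency
    return total / n_packets

for load in (0.5, 0.8, 0.95):                   # fraction of port capacity
    print(load, round(mean_latency(load, 1.0, 10_000), 2))
```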

A very comprehensive solution for directed cache coherency testing, especially at the system level. Satyapriya Acharya (Sr. AE Manager at Synopsys) has created a recorded presentation on this topic, with more detail than I have provided. He wraps up with a discussion of how quickly this testing can be mapped over to ZeBu emulation, delivering a 10,000x speedup.

Also Read:

Addressing SoC Test Implementation Time and Costs

Your Car Is a Smartphone on Wheels—and It Needs Smartphone Security

Global Variation and Its Impact on Time-to-Market for Designs


Chip Shortage, COVID-19 Unmasks Transit Gaps
by Roger C. Lanctot on 04-28-2021 at 10:00 am

I haven’t traveled a lot during the COVID-19 pandemic, but I have flown a few times around the U.S. As a former frequent flyer I pride myself on anticipating most travel circumstances and not being surprised or blindsided, but two recent visits to Austin, Texas, changed that when I couldn’t find a rental car.

It was just 12 months ago that rental car operators were stashing cars at baseball stadium parking areas and vacant lots as the travel industry came crashing to a halt. One might have thought that those same rental car companies would be licking their chops about now as flights fill up and travelers return. One would be wrong.

The problem is that rental car companies liquidated large portions of their fleets, expecting to restore them after the pandemic passed. Just as U.S. consumers are emerging from their pandemic dormancy, with stimulus cash in hand and cabin fever on the brain, rental car companies are confronting the reality of a new car shortage as the supply of microchips for automobiles dries up. Press reports are spreading about travelers resorting to renting U-Haul trucks – a stopgap measure that is likely to flummox graduating college seniors and families in the process of spring and summer relocations.

I have to smile to myself as I read these reports. Almost anywhere else in the world, a lack of available rental cars would be a nothing burger, a non-story. From Asia to Europe to South America, foreign travelers can ably get around with the aid of widely available public transportation or taxis. Who needs a rental car?

The U.S. traveler is uniquely dependent, or perhaps reliant is a better word, on rental cars to get around. One might argue this is the result of the vast open spaces in the U.S. not served by public transit or the insistence on the “convenience” and independence enabled by a borrowed vehicle to get from place to place. The reality is that the country has a long history of hostility toward mass transit – with that hostility emanating from the oil and automotive industries and radiating through the political environment.

Let’s consider the wider impacts of these circumstances. The few flights I have been on in the past 12 months have all been completely packed, with the exception of one Delta flight. United was not as generous or rigorous about preserving open middle seats during the pandemic.

Packed flights mean airfares are headed northward with rising demand, as has been widely reported. Combine steeper airfares with limited or unavailable rental cars and you have a recipe for packed highways this summer, as consumers opt for a glorious return to the open road with all of the accompanying risks.

Those still flying will have options should they be unable to locate rental cars at their destinations. Uber and Lyft are likely to see a robust boost in demand – though these operators are facing their own challenges recruiting and retaining drivers who are suddenly able to find better employment opportunities as the economy stirs to life.

Reports are already emerging of renewed interest in taxis, and taxi operators are reporting a recovery in demand. Now they, too, must recruit drivers, as many of their taxis are sitting idle after their drivers were beaten down and kicked to the curb by the predations of Uber and Lyft and the pandemic.

Where Uber and Lyft are unable to fill the transportation gap, the car sharing sector remains vibrant. Turo and Getaround are seeing a resurgence and, despite the departure from the U.S. of Maven, Car2Go, and DriveNow, operators including Avis’ ZipCar, AAA’s Gig, PSA’s FreeNow, Hyundai’s Mocean, Toyota’s Kinto, and a dozen or more other local operators such as Blink’s BlueLA or Good2Go in Boston have arrived to fulfill local driving needs.

The U.S. is at a strange transportation tipping point where the Federal government is seeking to simultaneously rebuild highways, tunnels, and bridges; rejuvenate mass transit; and modulate personal car use with miles-driven taxes – at a time when consumers are shifting away from mass transit and diving back into their cars to road trip. Concerns over emissions and climate change – reflected in proclamations at global summits and EV investments – appear to have taken a back seat to the call of the open road…which is not so open.

The only saving grace is the growing population of workers indicating that they do not intend, or would prefer not, to return to commuting to the office. Meanwhile, due to the ongoing automotive chip shortage, demand for new and used vehicles is outstripping supply, driving up new and used car prices.

The winners in this emerging scenario will be ride hailing and car sharing operators, new and used car dealers, and the airlines and oil companies. The rest of us can expect a return to gridlocked highways and the familiar sticker shock on the dealer lot. What is new, though, is the almost complete absence of available rental cars.

The lack of rental cars is a truly ominous turn – an unmasking of the vulnerability of the traveling public to the limitations of the automobile industry’s supply chain. This reality ought to motivate a reconsideration of the inadequacy of public transportation in the U.S.

With the creation of the interstate highway system, inspired in large part by Germany’s Autobahn, the U.S. struck a deal with the oil and automobile industries. Now, we are confronting the limitations of vehicular travel based on individually owned cars and a failure to build out mass transit.

The country is turning to electrification as the panacea to solve all automobile-based transportation woes, not recognizing that electrification brings its own challenges related to both supply chain and infrastructure issues. The evaporation of the rental car supply is a reminder that the real missing link in U.S. transportation is a comprehensive network of mass transit joining up highway, rail, and airport hubs.

The Biden Administration’s emphasis on electrification ($174B) over mass transit ($30B) seems more than a little off kilter. Mass transit may not be sexy, but it is precisely because of the investments other nations have made in mass transit that a (presumably) temporary rental car shortage is a non-issue – that is, everywhere else but the U.S. where we have come to rely on the rental car.

That dependence is a red flag that ought to be read as the white flag of surrender it represents. COVID-19 and the automotive chip shortage have combined to send us a message: we have become automobile dependent. This is a weakness that puts U.S. transportation in the category of a Third World country. If we are going to build back better, we ought to focus on fixing this first…with more money for mass transit.


Arteris IP Contributes to Major MPSoC Text
by Bernard Murphy on 04-28-2021 at 6:00 am

You might have heard of the Multicore and Multiprocessor SoC (MPSoC) Forum sponsored by IEEE and other industry associations and companies. This group of top-notch academic and industry technical leaders gets together once a year to talk about hardware and software architecture and applications for multicore and multiprocessor systems-on-chip (SoCs). They gather to debate the latest and greatest ideas to meet emerging needs. Kurt Shuler, vice president of marketing at Arteris IP, calls these meetings “The Davos for chips.” They’re held in some pretty nice locations around the world, and he tells me the food and wine at these events are also quite good!

The forum will release a two-volume book to celebrate their 20th anniversary on May 11. You can buy this direct from Wiley, or you can pre-order on Amazon. The first book covers architectures, the second applications. The first book divides into sections on processor architectures, memory architectures, interconnects, and interfaces. K. Charles Janac, president and CEO of Arteris IP, wrote the first chapter in the third section on network-on-chip (NoC) architectures. I’m impressed that what must be considered a definitive technical reference on MPSoCs required a chapter on NoC interconnect, and the editors turned to Arteris IP to write that chapter.

Background

Let me start by emphasizing that these books are a technical reference without marketing or advertising, not surprising given the authors and publisher. Charlie’s chapter kicks off with some background on how chip connectivity has evolved from buses through crossbars to NoCs. I’ve talked about this in a previous blog. He then goes into detail that I think teams new to NoCs will find helpful — considerations in architecting and configuring the network. This spans from architecture to floor planning since you must consider quality of service (QoS) and additional services you must support like debug and safety. Floorplan efficiency is a key advantage for NoCs over crossbars. Naturally you should plan this into the implementation.

System-level Services

The most obvious service a NoC can provide is guaranteed QoS. What may be less familiar to many is the degree of flexibility designers have in that management. You can manage performance statically or dynamically within the NoC or through software-based controls.
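
As a purely illustrative sketch (not Arteris’s implementation), here is a weighted arbiter in Python whose per-master weights could be fixed at configuration time for static QoS, or reprogrammed by software at run time for dynamic QoS:

```python
# Credit-based weighted arbiter: each master accumulates credit in
# proportion to its weight and the highest-credit requester wins.

class WeightedArbiter:
    def __init__(self, weights):
        self.weights = dict(weights)       # master -> share of grants
        self.credit = {m: 0 for m in weights}

    def set_weight(self, master, weight):
        self.weights[master] = weight      # software-driven (dynamic) QoS

    def grant(self, requesters):
        for m in requesters:
            self.credit[m] += self.weights[m]
        winner = max(requesters, key=lambda m: self.credit[m])
        self.credit[winner] -= sum(self.weights[m] for m in requesters)
        return winner

arb = WeightedArbiter({"cpu": 3, "gpu": 1})
print([arb.grant(["cpu", "gpu"]) for _ in range(8)])
# cpu wins roughly 3 of every 4 contested cycles
```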

Debug is another obvious service. Since the NoC sees all data traffic, designers can create probes to inspect data, monitor performance and generate traces for use by CoreSight and other debuggers.

For safety-critical designs, the NoC must also provide support for FMEDA analyses. And for safety mitigation techniques like parity, ECC, duplication and TMR. A NoC can support system-level ISO 26262 ASIL D safety by connecting a safety monitor through the network to each IP and supporting the isolation of connected IP blocks to test those IPs while the design is active in an application. For security, NoCs provide firewalls with the same intent as network firewalls, blocking malware activity inside the chip.
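
Two of the mitigation techniques named above are simple enough to sketch in a few lines of Python, purely for illustration: parity detects a single flipped bit in a payload, and a TMR voter masks a fault in any one of three redundant copies.

```python
def parity_bit(word: int) -> int:
    return bin(word).count("1") & 1        # even parity over the word

def tmr_vote(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)     # bitwise majority of three

payload = 0b1011_0010
# A single flipped bit changes the parity, so the error is detected:
assert parity_bit(payload ^ 0b0000_0100) != parity_bit(payload)
# With one corrupted copy, the majority vote still recovers the payload:
assert tmr_vote(payload, payload, payload ^ 0xFF) == payload
```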

Cache Coherence

I’ve written before about cache coherence support in SoCs. The size and complexity of modern SoCs, driven particularly by computer vision and AI, create a need for coherence across many IPs in the chip. Just think of an ADAS object recognition system with a video front-end. Now coherence must span many non-CPU IPs, distributed across a large die. That wide distribution demands NoC interconnect, which must also support cache coherence. Charlie goes into some details on the mechanism, protocols and messaging here.
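
The chapter covers the actual protocols and messaging; as a compressed illustration of the bookkeeping involved, here is a MESI-style transition table in Python, with events and transitions simplified:

```python
# (current state, event) -> next state, in the style of MESI:
# Modified, Exclusive, Shared, Invalid. Simplified for illustration.
MESI = {
    ("I", "local_read"):  "S",   # fill from memory or a peer
    ("I", "local_write"): "M",   # gain ownership, invalidate peers
    ("S", "local_write"): "M",
    ("S", "snoop_write"): "I",   # a peer took ownership
    ("E", "local_write"): "M",
    ("E", "snoop_read"):  "S",
    ("M", "snoop_read"):  "S",   # supply data, demote to shared
    ("M", "snoop_write"): "I",
}

def next_state(state, event):
    return MESI.get((state, event), state)   # unlisted events: no change

assert next_state("I", "local_write") == "M"
assert next_state("M", "snoop_read") == "S"
```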

Future Developments

NoCs have been doing very well in keeping up with these needs. So well that now they’re the leading interconnect option across the top semis and system builders in many applications. From mobile phones to TVs, cameras, cars, drones, remotes and high-performance servers. You’d be hard-pressed to find an advanced design that isn’t based on NoC technology.

With this lead, NoC providers are being pushed to serve more new demands on interconnect. Among these, topology synthesis and floorplan awareness rank high. The bigger these SoCs become, the more NoC teams need automation to test topologies against trial floorplans.

Proliferating AI architectures push the need for more creative interconnect options in grid-, ring- and torus-based accelerators. Broadcasting weights and aggregating reads across such an architecture in one clock tick requires special support. AI already demands cache coherence support with the controller subsystem. Scalable accelerators want to rely on local cache coherence domains of 1, 2 or 4 accelerators, connecting at the top to the controllers, making hierarchical cache coherence a reality.

The ASIL D “fail-operational” mechanism I talked about earlier is going to grow. Who wants the whole SoC to fail if one subsystem fails? Remember when you had to restart your browser if one website locked up? That’s Stone Age – we expect modern browsers to be resilient to page failures. SoCs will go the same way. Now system builders want to move beyond error detection to prediction. Sound familiar? This further emphasizes the central role the NoC will play in an SoC, moving from a passive interconnect to the heart of communication, monitoring and control within the chip/3D stack/intelligent system.

Well worth reading.

Also Read:

SoC Integration – Predictable, Repeatable, Scalable

Arteris IP folds in Magillem. Perfect for SoC Integrators

The Reality of ISO 26262 Interpretation. Experience Matters


ASML early signs of an order Tsunami – Managing the ramp
by Robert Maire on 04-27-2021 at 10:00 am

Taiwan and Korea represented 43% and 44% respectively, with China at 15% and Japan and the US in the far distance.

ASML a tidal wave of orders

On the call management talked about logic potentially being up 30% in 2021 and memory up potentially 50%. While we think foundry/logic will clearly be on fire, we think memory will lag a bit.

Given that litho has the longest lead times of any semiconductor equipment tool, especially EUV, customers are clearly securing their place in line as quickly as they can. Litho systems are typically the chief bottleneck in a fab.

Capacity constrained

As we have previously mentioned, litho tools are a bit like 15-year-old Scotch in that the production pipeline is both very long and very limited.

Perhaps the biggest limitation to increasing capacity remains the lens systems, which have a very long lead time from partner Zeiss.

Not only are the raw materials in short supply but there are only so many lenses that can be polished at the same time.

At one point, years ago, there were a limited number of young East Germans who wanted to apprentice to learn how to polish glass for the rest of their lives. Now that the process is more automated, the limitation is the number of custom-built polishing machines, which management pointed to on the call.

Much like Scotch, there is not a lot that management can do to increase supply in the short term, the next 2-3 years, given the time and cost it would take to build out infrastructure. The other problem is that if you start spending on capacity infrastructure now, the shortage will likely be long over by the time it comes on line, thereby creating excess supply or wasted capacity spending.

Basically you have to make calculated smaller increases in order not to overshoot the target (remembering that this is a cyclical industry no matter how much everyone tries to forget that).

Software “quick fixes”

Management did point to increased sales of software upgrades which do provide a painless quick hit to increase capacity somewhat.

These are also important in that they do not take the tool down for long periods which would further exacerbate the capacity issue.

The problem is that software is little more than a band aid and nothing replaces more litho tools.

We could see similar software upticks at companies like KLA that offer multiple menus of upgrade options and additions to their tools that could help the yield curve and thus capacity.

Positive collateral impact

It’s pretty clear that everyone in the semiconductor equipment food chain will see a lot more business in 2021.

As usual, litho leads the way, followed by process control, then fabrication tools.

We expect similarly positive outlooks from Applied, Lam and KLA, as well as TEL.
The back end is already seeing an order jump, with BESI showing a strong uptick.

The stocks

As we write this, ASML is up 3.5% along with most of the group up a similar amount.

The stocks have recently had a bit of a pullback after a rip-roaring run.
Applied’s analyst meeting seemed to be the high-water mark, with things falling off after that.

Earnings season, with its associated strong guidance, will likely get us back on track if others follow ASML’s lead, as I expect will be the case.

Also Read:

It’s not a Semiconductor Shortage It’s Demand Delirium & Poor Planning

Foundry Fantasy- Deja Vu or IDM 2?

Micron- Optane runs out of Octane- Bye Bye Lehi- US chip effort takes a hit


Agile and Verification, Validation. Innovation in Verification
by Bernard Murphy on 04-27-2021 at 6:00 am

Agile methods in hardware design are becoming topical again. What does this mean for verification? Paul Cunningham (GM, Verification at Cadence) and I continue our series on research ideas. We’re also honored this month to welcome Raúl Camposano to our blog as a very distinguished replacement for Jim Hogan. As always, feedback welcome.

The Innovation

This month’s pick is An Agile Approach to Building RISC-V Microprocessors. The paper was published in IEEE Micro in 2016. The authors are from UC Berkeley.

Agile development is already well-established in software. As projects have become larger, more complex, more distributed, traditional waterfall development struggles to converge or deliver to expectations. Can agile methods apply to hardware design? So far, hardware teams have mostly resisted this change, arguing that hardware development has very different constraints, complexities and costs. Yet the same trends continue for size, complexity and distributed development, with ever-challenging schedules. The authors of this paper (including David Patterson and Krste Asanović) have adapted a set of agile development principles for hardware over 5 years and multiple design projects. They argue that agile is not only possible but desirable for hardware design.

Breaking from previous Innovation reviews, this is not a deeply technical paper. It covers something much more fundamental – a change in process for development and verification/validation. Our focus here is on the latter. Traditional practice separates pre-silicon verification and post-silicon validation, whereas with agile hardware development product features are continuously validated in the target application, in their case using FPGA prototypes, alongside verification to a spec.

Paul’s view

There’s no question that agile has transformed the software community. We use agile methods heavily at Cadence for EDA software development. I find this RISC-V study intriguing for two reasons: On one hand, continuous incremental feature development has similarity with traditional hierarchical design as units (IPs) are developed and verified in parallel, with continuous integration and validation at the top.

On the other hand, RTL for the RISC-V core here is auto-generated from code generators written in a high-level language called Chisel. The authors combine this raised abstraction with a highly automated physical implementation flow to enable rapid iteration on many different generated RTLs. This iteration is more agile than would be possible in a traditional hierarchical design methodology. The authors also mention that Chisel source code can be compiled to both RTL and a fast cycle-accurate C++ model. They don’t elaborate on how C++ models were used for verification and validation. I would welcome some follow-on publications here.
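
Chisel generators are written in Scala; as a language-neutral sketch of the generator idea, here is a toy Python function that emits Verilog for a parameterized register pipeline. The point is that you iterate on the generator and its parameters rather than patching any one generated RTL.

```python
# Toy "hardware generator": produce Verilog for an N-stage pipeline of
# a given width. Fixing a bug means fixing this function, then
# regenerating every design that uses it.

def gen_pipeline(name: str, width: int, stages: int) -> str:
    decls = "\n".join(f"  reg [{width-1}:0] s{i+1};" for i in range(stages))
    regs = "\n".join(
        f"  always @(posedge clk) s{i+1} <= s{i};" for i in range(stages))
    return (f"module {name}(\n"
            f"  input clk,\n"
            f"  input [{width-1}:0] s0,\n"
            f"  output [{width-1}:0] out);\n"
            f"{decls}\n{regs}\n"
            f"  assign out = s{stages};\n"
            f"endmodule")

print(gen_pipeline("pipe3", width=8, stages=3))
```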

The idea of bringing agile to hardware by raising the abstraction resonates with me. If we can make hardware source code sufficiently software-like, and we can sufficiently automate physical implementation, then agile hardware design could become mainstream.

The challenge in achieving this next leap is to abstract at scale in our industry. Chisel extends the functional programming language Scala. SystemC extends C++ for hardware design and both Cadence and Siemens offer SystemC synthesis and verification solutions. As Hennessy and Patterson explain in their Turing award paper on domain specific architectures, there are multiple languages with domain specific emphasis. These include Halide, Matlab, and TensorFlow, all with active communities and tool development. How many abstractions will we need and how easily can these proliferate? Please let us know what you think!

Raúl’s view

The authors make an important point – this is not high-level synthesis, it’s generation. If something is wrong, they don’t so much fix the design as improve the tools and generators. I see this as a little different from a traditional agile flow and not quite the same as raising the level of abstraction. It’s abstracting differently, in terms of generators. I think that’s interesting.

I teach a class at Stanford, where Rick Bahr and others have a related paper based on the domain-specific language Halide, in which they also talk about an agile process. Both papers introduce intriguing ideas; they have found their way into commercial IP development, e.g., memory generators. In logic, I think flows like this would be great for architectural exploration, for figuring out what we want the system to do. Both can generate the microarchitectural level.

After that you still must deal with all the nitty-gritty details of Verilog, external IP and synthesis, where agile doesn’t obviously have a role. I think there’s merit in verifying chip-level prototypes early on, a point both papers make. If you have a prototype, you have a functioning system useful for functional verification, which is also key in agile software design. Microarchitecture and implementation would be a different topic. Still, the prototype could also be a useful validation reference during microarchitecture design.

My view

The authors are clear that their motivation for using an agile flow is to validate early and often. But the general emphasis in the paper is on design creation rather than verification. Which I understand – agile has a big impact on that phase. It would be very helpful to dig deeper into agile impact on verification/validation processes and where there might be potential for agile-related improvement.

Also Read

Cadence Dynamic Duo Upgrade Debuts

Reducing Compile Time in Emulation. Innovation in Verification

Cadence Underlines Verification Throughput at DVCon


Small EDA Company with Something New: SoC Compiler
by Daniel Payne on 04-26-2021 at 10:00 am

I read the semiconductor press, LinkedIn and social media (Twitter, Facebook) every morning, along with an RSS feed that I set up, staying current on everything related to using EDA tools to make the task of SoC design a bit easier for design teams. A recent press release announced a tool called SoC Compiler, so my curiosity was piqued enough to read through it and then contact the EDA vendor to better understand what it was all about.

The company with SoC Compiler has been around now for 18 years, and is located in beautiful Grenoble, France. I’ve made one business trip to Grenoble, visiting a customer and an EDA vendor, and the mountain views are simply stunning, quite similar to seeing the Grand Tetons in Wyoming. Defacto Technologies is run by Chouki Aktouf, a PhD from Grenoble University, who founded the company in 2003. SoC Compiler is at version 9, because it’s the new product generation for what used to be called STAR, with a history of shipping for 15 years to front-end RTL designers.

What Does it Do?

In EDA there are lots of special interest groups that tend to create their own narrowly defined sub-flows and file formats, eventually gaining enough momentum to form a standard, so that multiple vendors can compete and cooperate in a bigger design flow. Here are some of the standards in use today for SoC design:

  • Design Data, RTL or Gate-level
  • IP block descriptions, IP-XACT
  • Power Intent, UPF
  • Physical cells, LEF/DEF
  • Timing constraints, SDC

The SoC Compiler tool accepts all of these formats as input, then enables an engineer to efficiently and quickly assemble a new SoC, getting the RTL and design collateral ready for logic synthesis. Here’s the Defacto tool flow:

Modern SoCs can have hundreds of IP blocks, and when those blocks use the IP-XACT standard they can be read into SoC Compiler to check for correctness and a hierarchy can be quickly built for a new design. Defacto is a member of Accellera, and their tool works with the IP-XACT 2009 and 2014 standards. You can even write scripts to control how your IP blocks should be connected together, saving time from making manual file edits.

RTL design editing is made easier by the use of scripts or API languages like Tcl and Python, and SoC Compiler understands that when the RTL design is changed that the SDC and UPF collateral files also need to be updated automatically. Error-prone manual editing of SDC and UPF files in coherency with RTL changes is a thing of the past when using SoC Compiler.
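
To make that concrete, here is a purely hypothetical Python sketch of what such an assembly script automates. None of these calls are Defacto’s actual SoC Compiler API; they only illustrate the pattern of reading IP descriptions, stitching by rule, and regenerating SDC/UPF collateral alongside the RTL.

```python
# Hypothetical assembly script, NOT the real SoC Compiler API: load IP
# descriptions, stitch connections from rules, and regenerate the SDC
# and UPF collateral in the same pass so it never drifts from the RTL.

def assemble_soc(ip_xact_files, connection_rules):
    design = {"instances": [], "nets": []}
    for f in ip_xact_files:
        design["instances"].append(f.removesuffix(".xml"))  # load each IP
    for src, dst in connection_rules:                       # stitch blocks
        design["nets"].append((src, dst))
    sdc = [f"# constraint for net {s} -> {d}" for s, d in design["nets"]]
    upf = [f"# power intent for {i}" for i in design["instances"]]
    return design, sdc, upf

design, sdc, upf = assemble_soc(
    ["cpu.xml", "noc.xml", "ddr_ctrl.xml"],
    [("cpu.axi_m", "noc.s0"), ("noc.m0", "ddr_ctrl.axi_s")])
print(design["nets"])
```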

Based on physical information contained in LEF/DEF files, SoC Compiler assists in SoC creation by being physically aware in order to optimize connectivity insertion. This helps reduce the width of routing channels, which in turn reduces chip area.

SoC Compiler also provides the capabilities to explore and understand the Power intent and clock trees. A large set of linting checks are part of the embedded parser to ensure correctness, proper syntax and semantics for UPF and SDC files in conjunction with RTL.

Happy Customers

Two happy customers were mentioned in the press release:

EDA Tool Flows

Since the file formats that SoC Compiler uses are defined by the semiconductor industry, design engineering teams can use this tool with any other EDA vendor tools, like: Synopsys, Cadence, Siemens, etc.

Summary

Defacto has shown that the SoC Compiler tool can be used in a fully automated design assembly flow, with IP insertion, IP stitching and design refactoring while creating top-level views. An internal CAD group could work for years to replicate all of the automation found within SoC Compiler, or they could instead consider an evaluation to witness how a proven tool benefits their SoC design team. Why develop something in-house, when there’s already a commercial product with a good track record?

Related Blogs


PCIe 6.0 Doubles Speed with New Modulation Technique
by Tom Simon on 04-26-2021 at 6:00 am

PCI-SIG has held to doubling PCIe’s data rate with each revision of the specification. The consortium of 800 companies, with its board consisting of Agilent, AMD, Dell, HP, Intel, Synopsys, NVIDIA, and Qualcomm, is continuing this trend with the PCIe 6.0 specification, which calls for a transfer rate of 64 GT/s. PCI-SIG released the final specification for PCIe 5.0 in May 2019. This revision is gaining traction in the marketplace; however, the needs of many market segments, such as HPC, are already creating strong interest in PCIe 6.0. PCIe 6.0 is expected to be finalized this year, but with the completion of draft 0.7 it is already stable enough for development of IP and test silicon.

In PCIe 6.0 many changes are being made to meet the higher data rate requirement. Synopsys offers a very informative technical bulletin titled “Successful PCI Express 6.0 Designs at 64GT/s with IP” written by Gary Ruggles, Senior Product Marketing Manager, that articulates what these changes are. To double the data rate from 32 GT/s to 64 GT/s it is necessary to move from NRZ to PAM-4 signaling. This in turn necessitates a new error correction strategy. Variable-size Transaction Layer Packets (TLPs) used previously are being packed into fixed-size Flow Control Units (FLITs). In addition to this, there is a new low power state, L0p, which will help with rapid power/bandwidth scaling.

The biggest change is the adoption of PAM-4. If PCIe 6.0 had continued to use NRZ, the increased data rate would have had to come from a doubling of the frequency, accompanied by a deterioration in channel losses. The Nyquist frequency would have doubled to 32 GHz, causing channel losses of 60 dB, which is too high for a workable system. PAM-4 doubles the data transmitted by utilizing 4 signal levels instead of 2. The tradeoff is that the eye regions in each transition are smaller, so the signal is more vulnerable to noise.
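
The arithmetic is worth making explicit, in a back-of-envelope Python sketch: four levels carry two bits per symbol, so 64 GT/s needs only a 32 Gbaud symbol rate, and the Nyquist frequency stays at 16 GHz rather than doubling to 32 GHz.

```python
import math

# Nyquist frequency for a given transfer rate and modulation:
# symbol rate = bit rate / bits per symbol; Nyquist = symbol rate / 2.

def nyquist_ghz(transfer_rate_gts, bits_per_symbol):
    symbol_rate = transfer_rate_gts / bits_per_symbol   # Gbaud
    return symbol_rate / 2                              # GHz

print(math.log2(4))        # 4 PAM levels -> 2.0 bits per symbol
print(nyquist_ghz(64, 1))  # NRZ at 64 GT/s   -> 32.0 GHz
print(nyquist_ghz(64, 2))  # PAM-4 at 64 GT/s -> 16.0 GHz
```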

Improved RX design is required, which leads to using an ADC and digital signal processing to ensure improved receive performance. This also makes it easier to provide legacy support for previous versions using NRZ. Even with improved RX design there will be issues in the channel, including the package and board, that will cause higher error rates. The PCIe 6.0 specification therefore includes forward error correction (FEC) to mitigate the higher bit error rate (BER). To avoid higher latency, the PCI-SIG calls for a lightweight FEC coupled with cyclic redundancy codes (CRC) to detect bad packets.

PAM-4 encoding and the addition of FEC required the change to FLITs with fixed size packets of 256 bytes. Multiple TLPs may be combined into a single FLIT or may span several, depending on their size. In PCIe 6.0 TLP and DLP headers have changed and no longer include their own CRC because the CRC checking occurs at the FLIT level now. Also, PHY layer framing tokens are no longer needed.
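
A small sketch of the packing idea, with the framing invented for illustration rather than taken from the spec: variable-size TLPs are placed back to back into 256-byte FLITs, and a TLP that does not fit simply spans into the next FLIT.

```python
FLIT_BYTES = 256

# Pack variable-size TLPs back to back into fixed-size FLITs; a TLP
# that overflows the current FLIT continues in the next one.

def pack_tlps(tlp_sizes):
    flits, current = [], 0
    for size in tlp_sizes:
        remaining = size
        while remaining > 0:
            used = min(FLIT_BYTES - current, remaining)
            remaining -= used
            current += used
            if current == FLIT_BYTES:   # FLIT full: emit it, start the next
                flits.append(FLIT_BYTES)
                current = 0
    if current:
        flits.append(current)           # final, partially filled FLIT
    return flits

print(pack_tlps([100, 300, 60]))        # -> [256, 204]: 460 bytes, 2 FLITs
```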

The Synopsys technical bulletin also covers the new low power state and how it operates to avoid delays during mode changes. There is a section on how increasing the number of available tags helps boost performance. In closing there is a discussion of the testing and debug challenges that come with PCIe 6.0-based designs. PAM-4 is more complex than NRZ, making it important that there is support for built-in loopback modes, and pattern generators and receivers in the PHY and controller IP. Also, to improve the development process, PCIe 6.0 IP should have support for debug, error injection and monitoring capabilities.

Even though there is still some time before the final PCIe 6.0 specification is approved, we can expect that companies which build PCIe-based products will want to hit the ground running. Synopsys is offering PCIe 6.0 IP right now to help those companies prepare for this upcoming version. The technical briefing is a good source of information about PCIe 6.0 and what is changing. It is available on the Synopsys website along with the announcement of their complete IP solution for PCIe 6.0.

Also Read:

How PCI Express 6.0 Can Enhance Bandwidth-Hungry High-Performance Computing SoCs

Why In-Memory Computing Will Disrupt Your AI SoC Development

Using IP Interfaces to Reduce HPC Latency and Accelerate the Cloud


It’s not a Semiconductor Shortage It’s Demand Delirium & Poor Planning
by Robert Maire on 04-25-2021 at 10:00 am

-The semiconductor industry is not to blame, it’s the customers
-How do you fix something that’s not really broken?
-Long taken for granted, semi’s are sexy again
-Pawns in a Political Power Play?

It’s not the chip makers that screwed up. It’s the customers that stressed the system beyond breaking

The semiconductor industry has been humming along for a very long time. Churning out billions and trillions of chips that go into every imaginable device and then some. There has always been more than enough to go around, as evidenced by the fact that the industry goes through regular cyclical patterns based on over- and under-supply that put the industry through the wringer, yet customers never experienced the ups and downs…they always got the chips they wanted…until 2020.

The industry has always had enough built-in resiliency to deal with seasonal/annual and cyclical demand patterns. Sure, prices vary and lead times have stretched at times, but not like we have recently seen.

Did the semiconductor industry that has been doing its thing for 50+ years suddenly go stupid? No

We saw orders for chips drop off a cliff at the beginning of last year due to Covid, which had a much more rapid negative impact on global trade than any prior economic downturn the chip industry has weathered. It was the combination of the rapidity and completeness of the shutdown that hurt the industry beyond its ability to cope.

When the world started to recover demand picked back up just as fast as it slowed and the chip industry just couldn’t respond that rapidly.

Few outside the industry understand just how long it takes, how complex it is and how much planning goes into making chips…and not just the most advanced chips, but the stupid, mundane chips as well

Yes, secular demand has increased but not enough to cause the dislocation

Many will point to 5G, work at home, IoT, the cloud, AI and a myriad of other demand drivers and say that they just overwhelmed the chip industry.
While there are a lot of new applications for chips, they are not the primary root cause of the shortage but rather a contributory factor.

These applications have been growing over a span of years and over a long enough period of time for the chip industry to react if that were the only variable.

It’s not like there is a shortage of fabs, or that existing fabs burned down or were lost

Old fabs never die….they just move to lower labor cost regions and get reused for making cheaper chips. China has been both building more fabs than the rest of the world combined and moving older fabs into the country for trailing-edge capacity.

A few years ago, old fab equipment could be bought for scrap value of pennies on the dollar as there wasn’t nearly enough demand for older technology to make it worth the while to keep them in service.

Companies were virtually giving away old chip fabs that were no longer economically viable to get them off their books.

A few smart operators, such as Tower Jazz, picked up a significant number of these old fabs, which even came with supply agreements, for little cost, especially when compared to the original value of the equipment.
These fabs are still around turning out more chips than ever, so it’s not as if a lot of capacity has come off line.

It’s all about utilization

Given that fab economics are all about maintaining high utilization of a highly capital-intensive asset, a fab full of tools, the goal is to keep utilization as close to 100% for as long as possible to maximize profitability.

Many years ago, over 25, when we were first covering the industry, there seemed to be a simple rule of thumb for running a fab. If you were over 60 or 70% utilization you were making money. 80 to 90% utilization was a sweet spot where profitability was good and lead times kept customers happy. When you got above 90% you ordered more equipment or started building a new fab, as you needed the lead time.

The cyclical problem came in when all the fabs got above 90% and all started ordering new equipment at the same time, which created an oversupply a few quarters down the road when all that equipment was installed.

Last year, TSMC’s utilization rate fell off a cliff in March due to Covid. Now they are likely running 100% or more (not taking tools down for normal service), running flat out trying to make up for the months of lost production due to canceled orders.

The problem is that it’s very difficult to make up for the loss as there just isn’t that much excess capacity in the system…it’s not economic.

Demand Delirium

We have previously compared the chip situation to the toilet paper problem during Covid. It’s not like toilet paper makers had a sudden loss of capacity, conspired to raise prices or didn’t build enough factories. There is a finite amount of toilet paper capacity in the system, as you want to keep the factories running at reasonable utilization.

The toilet paper problem was the opposite of chips in that demand spiked rather than dropped at the outset of Covid out of fear…. it wasn’t the bounce back in demand that caused the problem but the sharp initial uptick.

Delirium: a decline from a previous baseline of mental functioning that develops over a short period of time.

What we had was a sudden departure from the baseline demand in the chip industry over a short period of time.

The Political Power Play

The US freaked out during Covid when it figured out the Chinese controlled all the PPE in the world. We freaked out again when we found out that we don’t make our own drugs anymore. It is that feeling of helplessness that drives people crazy.

When Ford couldn’t make the beloved F-150 pickup truck and we couldn’t get enough chips to power vape “e-cigarettes,” we had a similar freak-out reaction, as it attacked the heartland of America.

We wrote a note a few weeks ago saying that the chip shortages would be to blame across a wide spectrum of industries with even more widespread impact than anyone could predict. So far… I think our prediction was correct.

All this freaking out sets the stage for political finger pointing….who is to blame?? Chip makers? the Chinese? QAnon? Did Bill Gates corner the market on gps tracking chips to surreptitiously inject along with fake vaccine?

We applaud the CHIPS for America Act, as we need more chips made in America, not because we need more chips but because we need a more secure domestic supply. The chip shortage seems to have become a convenient excuse.

We wonder when Intel will hold out their hand to the US government asking for help

GloFo is only spending a bit over $1B in capex, which is barely enough to keep running in place and maintain what they have. They are certainly not expanding US capacity with that low a spend level, which is barely a rounding error compared with what TSMC and Samsung are spending; it’s a joke.

We certainly don’t mind the “shortage” being an excuse to spend more domestically on chip making; we should just understand that shortage is not the real reason.

If it’s not broke, don’t fix it

The semiconductor industry is not broken…at least not the supply side. If anything the industry is way more mature than it used to be and a lot smarter as to how it spends money. The industry has consolidated to a few very successful players…perhaps too few… but that’s hardly broken.

Despite all the whining we continue with Moore’s Law one way or another. Smart Phones are smarter and computers are faster and chips are everywhere. I don’t see a problem here.

The customers got themselves in a tizzy by acting stupidly and not understanding their own supply chain…..they shouldn’t have canceled those orders for 25-cent anti-lock brake chips during Covid because they would need them to ship trucks when things got better.

Semi’s are Sexy….. again

For a long time chips have been taken for granted, much like the oxygen we breathe…it will always be there and always in adequate supply. Valuations have been low, and software and apps have been the sexy tech plays.

Shortage always makes the heart grow fonder…..as Joni Mitchell sang…you don’t know what you’ve got ’til it’s gone.

Valuations are now through the roof and semi’s are sexy once again after more than 20 years or so.

Stocks and the Hangover to follow

You can’t have the kind of fun we are having right now in the chip industry without a huge hangover.

We are partying like it’s 1999….but sooner or later we will get to excess capacity and we will pay the price.

The investment question is how long does the party go on for?
So far it appears that this could be at least a multi-year long party if not much longer. There are multiple positive factors and the spending will take years as we haven’t even built the buildings to house all our new toys yet. It will be hard for Intel to spend $20B…it will take years….ASML can’t even make EUV tools fast enough to help the drunken sailors spend their money.

The end is never pretty but at least this will be a party to remember for a long time. It will likely come to an unnatural end for some reason other than what we expect.

There may be ebbs and flows in the stock prices but there is no reason to leave the party early as we are just getting started.

Also Read:

Foundry Fantasy- Deja Vu or IDM 2?

Micron- Optane runs out of Octane- Bye Bye Lehi- US chip effort takes a hit

Chip Channel Check- Semi Shortage Spreading- Beyond autos-Will impact earnings


Why Tech Tales are Wafer Thin in Hollywood
by Craig Addison on 04-25-2021 at 10:00 am

Mad scientists have been a staple of Hollywood science fiction since Dr Victor Frankenstein created his eponymous monster in 1931. Pre-pandemic, the Marvel Cinematic Universe was the main source of on-screen geeks-turned-superheroes, from Iron Man’s Tony Stark to Ant Man’s Hank Pym.

When it comes to real-life scientists on screen – mad or otherwise – the field gets a lot thinner – and is non-existent in the case of the semiconductor industry.

Benedict Cumberbatch played Alan Turing in The Imitation Game (2014) and Thomas Edison in The Current War (2017), and a decade earlier starred in a 2004 biopic of astrophysicist Stephen Hawking. Ten years later Eddie Redmayne won the best actor Oscar for playing Hawking in The Theory of Everything.

NASA has inspired a handful of true-life screen stories, from The Right Stuff (1983) and October Sky (1999) to Hidden Figures (2016) and First Man (2018).

In the category of Silicon Valley computer geeks, there have only been three biopics in 10 years. Jesse Eisenberg played Facebook co-founder Mark Zuckerberg in The Social Network (2010), while Apple co-founder Steve Jobs has been portrayed by Ashton Kutcher and Michael Fassbender in 2013 and 2015 movies respectively.

While half the Oscar winners in the Best Picture category over the past decade were based on true stories, to find one about a scientist you have to go back to Russell Crowe’s portrayal of Nobel prize-winning mathematician John Nash in A Beautiful Mind (2001).

In contrast, there has been an abundance of biopics on rock stars (Freddie Mercury), sports stars (Muhammad Ali), movie stars (Judy Garland) and political leaders (Margaret Thatcher). So why are there relatively few biopics on scientists and engineers – and none when it comes to chips?

The simple answer is that their work doesn’t put them on stages, arenas and podiums where they become household names. But that has changed in today’s tech-driven world, at least for billionaire geeks like Elon Musk and Jeff Bezos, who are just as well known as the Mercurys, Alis, Garlands and Thatchers were.

The technical nature of the industry doesn’t help either. Hollywood screenwriters are schooled to “write what you know”, and most are not familiar with the tech world. This philosophy also explains why there are so many movies about the movies, the latest being Netflix’s Mank – a biopic of Citizen Kane screenwriter Herman J. Mankiewicz.

Finally, there is the perception that geeks are boring, therefore their lives won’t make good screen stories, unless it’s a comedy like The Nutty Professor. However, the arrogant genius of Steve Jobs, as depicted in his two biopics, disproves that notion.

Although semiconductor stories have been ignored by Hollywood, the industry offers plenty of potential protagonists who can match the likes of Jobs and Nash when it comes to flawed genius. A starting point would be transistor co-inventor William Shockley, who literally put the silicon in Silicon Valley when he started a transistor laboratory in Palo Alto in 1957.

Shockley’s venture did not succeed – and his career ended in disgrace after he preached theories on race and genetics – but he inadvertently spawned the chip industry in the Valley when the so-called “traitorous eight” left to start Fairchild Semiconductor.

In the 1960s, the hard-driving, hard-drinking men (yes, mostly men) of Fairchild pushed the limits of technology as well as their personal lives. The company’s larger-than-life characters like analog eccentric Bob Widlar, cigar-chomping Charlie Sporck, and flamboyant Jerry Sanders would make compelling screen protagonists.

While there are few, if any, notable women pioneers in the chip industry, the potential cast of characters is not just white Americans. Morris Chang, who emigrated from China to the US in 1949, became a major figure in the chip industry with 25 years at Texas Instruments. But his real claim to fame was pioneering the wafer foundry concept with Taiwan Semiconductor Manufacturing Co.

Chang was a typical take-no-prisoners manager at TI, but his epic battle with the late Samsung Electronics chairman Lee Kun Hee for supremacy in the foundry business is a largely untold story that would make a gripping screen narrative even if it weren’t true.

Recent media photos showing President Joe Biden holding a chip and a silicon wafer in the White House were the equivalent of the microchip’s own starring moment. Will Hollywood get the message? Not likely, especially when you consider that tinseltown overlooked one of its own scientists. Actress Hedy Lamarr co-invented spread spectrum technology used in modern cell phone networks.

The author is an independent filmmaker and writer-producer of The Chip Warriors podcast series.