Dynamic Coherence Verification. Innovation in Verification
by Bernard Murphy on 02-16-2022 at 6:00 am

We know about formal methods for cache coherence state machines. What sorts of tests are possible using dynamic coherence verification? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is McVerSi: A Test Generation Framework for Fast Memory Consistency Verification in Simulation. The paper was published at IEEE HPCA 2016. The authors are from the University of Edinburgh.

This is a slightly dated paper but is well cited in an important area of verification not widely covered. The authors’ goal is to automatically generate tests for multicore systems which will preferentially trigger consistency errors. Their focus is on homogeneous designs, using Gem5 as the simulator. Modeling is cycle accurate; in similar work, CPU models may be fast/virtual while the coherent network, caches and pipelines are modeled in RTL running on an emulator.

The method generates tests as software threads, one per CPU, each a sequence of load, store, read-modify-write and barrier instructions. Initial generation is random. Tests aim to find races between threads where values differ between iterations, i.e. exhibit non-determinism. The authors argue that such cases are more likely to trigger consistency errors.

The authors then use genetic programming to combine non-deterministic components from existing sequences into new sequences, strengthening the likelihood of races that are likely to expose consistency failures. Where they find inconsistencies, they run a check to classify them as valid or invalid per the memory consistency model. They also have a coverage measure, used both to guide the genetic algorithm and to decide when to stop testing.
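
The mechanics are easier to see in code. Below is a minimal sketch of the two core ideas, racey-address scoring and selective crossover; the scoring formula, names and data shapes are simplified assumptions for illustration, not the paper’s exact McVerSi implementation:

```python
from collections import Counter

# Simplified sketch (assumptions, not the McVerSi implementation):
# 1) score addresses by how many cross-thread accesses with a write they see,
# 2) build offspring by preferring instructions that hit "racey" addresses.

def racey_scores(trace):
    """trace: list of (thread, op, addr) tuples observed in one run.
    An address scores higher the more accesses and threads it sees,
    provided at least one access is a write (a race needs a writer)."""
    scores = Counter()
    for addr in {a for _, _, a in trace}:
        accesses = [(t, op) for t, op, a in trace if a == addr]
        threads = {t for t, _ in accesses}
        writes = sum(1 for _, op in accesses if op == "st")
        if len(threads) > 1 and writes > 0:
            scores[addr] = len(accesses) * len(threads)
    return scores

def selective_crossover(parent_a, parent_b, scores):
    """Combine two parent tests position by position, keeping whichever
    instruction targets the racier address."""
    return [max(ia, ib, key=lambda ins: scores.get(ins[2], 0))
            for ia, ib in zip(parent_a, parent_b)]

# Toy usage: each instruction is (thread, "ld"/"st", address).
parent_a = [(0, "st", 0x10), (0, "ld", 0x20), (0, "st", 0x30)]
parent_b = [(1, "ld", 0x10), (1, "st", 0x10), (1, "ld", 0x40)]
scores = racey_scores(parent_a + parent_b)
print(selective_crossover(parent_a, parent_b, scores))
```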

The authors describe bugs that such a system should find in a cache coherent network, and their method performs well. They note that formal methods are limited to early, significantly abstracted models. In contrast, this method is suitable for full system dynamic coherence verification.

Paul’s view

Memory consistency verification is hard, especially in pre-silicon. But it is a very important topic that is increasingly center stage in our discussions with many customers.

The heart of this paper is a genetic algorithm for mutating randomized CPU instruction tests to make them have more and more race conditions on memory writes and reads. The authors achieve this using a clever scoring system to rank memory addresses based on how many read or write race conditions there are on that address in a given test. Read or write instructions on addresses scoring high (which they call “racey”) are targeted by the genetic algorithm for splicing and recombination with other tests to “evolve” more and more racey tests. It’s a neat idea, and it really works, which is probably why this paper is so well cited!

The authors benchmark their algorithm on a popular open-source architecture simulator, Gem5, using the Ruby memory subsystem and GARNET interconnect fabric. They identify two previously unreported corner case bugs in Gem5, and show that their clever scoring system is necessary to create tests racey enough to catch them. They also show that their algorithm finds all previously reported bugs much faster than other methods.

Overall, I found this a thought-provoking paper with a lot of detail. I had to read it a few times to fully appreciate the depth of its contributions, but it was worth it!

Raúl’s view

Our October 2021 article, RTLCheck: Verifying the Memory Consistency of RTL Designs, addressed memory consistency by generating assertions and then checking them. This paper takes a different approach, modifying well-known constrained random test generation by generating the tests with a genetic algorithm built around a particularly clever “selective crossover”.

They run experiments for the x86-64 ISA running Linux across 8 cores. They model the cores as simple out-of-order processors with L1 and L2 caches. Tests run in 512MB of memory with either a 1KB or 8KB address range, and each test runs 10 times. Three experiments are run: pseudo-randomly generated tests, genetic without selective crossover, and genetic with selective crossover. The evaluation detected 11 bugs, 9 already known and 2 newly discovered. Within 24 hours only the full algorithm (at 8KB) finds all 11 bugs; the other approaches find 5-9. The full algorithm also beats the other approaches in coverage for the MESI and TSO-CC cache-coherence protocols, though by a small margin.

The paper is highly instructive, although a challenging read unless you’re an expert in MCM. The authors provide their software on GitHub, which no doubt encouraged the subsequent papers that cite this work 😀. Given enough expertise, this is certainly one arrow in the quiver to tackle full system memory consistency verification!

My view

As Raúl says, this is a challenging read. I have trimmed out many important details such as memory range and stride, and I skipped runtime optimization, simply to keep this short summary easily digestible. Methods like this aim to automatically concentrate coherency suspects in a relatively bounded test plan. I believe this will be essential for dynamic coherence verification methods to catch long-cycle coherence problems.

Also read:

How System Companies are Re-shaping the Requirements for EDA

2021 Retrospective. Innovation in Verification

Methodology for Aging-Aware Static Timing Analysis


Intel buys Tower – Best way to become foundry is to buy one
by Robert Maire on 02-15-2022 at 10:00 am

-Intel jumpstarts foundry model with Tower Semi buy @ $5.4B; gets complementary tech to round out offerings
-Approval based on satisfying China’s needs as well
-Margin concerns overblown; will it be allowed to flourish?

Intel pays up to get foundry and technology….

Intel announced a $5.4B acquisition of Tower Semiconductor in Israel. This amounts to $53 per share for a company whose last trade was $33 per share or a whopping 60% premium.

Intel is getting a very well run company that has been a foundry for a long time, as well as technology Intel does not currently offer that will broaden its foundry portfolio beyond bulk CMOS.

In addition, the company gets more China business as well as military business, both of which are needed to support Intel’s strategic direction. In essence this deal kills multiple birds with one stone, which likely accounts for the premium.

Margin concern is overblown and the wrong way to look at it

On the conference call a number of analysts seemed to be negative on the acquisition due to Tower’s lower margins. Almost any company acquired by Intel is going to have lower margins, and being a smaller player in older technology in the land of giant TSMC is not easy on margins.

We don’t think Intel is buying the business for its margins or financials or as a “bolt on” but obviously for its current positioning as a foundry with technology complementary to Intel’s. If Tower can help Intel’s core become a more competent foundry, then that alone is more than worth the acquisition price.

Will Intel let it succeed?

The main reason Intel previously failed in the foundry business was that the upstart foundry group was smothered by the larger IDM corporate mentality. The current foundry effort is relatively weak as Randhir Thakur has never run a foundry or anything close. Our hope is that the management of Tower becomes the more dominant partner rather than being suppressed. In this way Intel can try to get what it paid for… expertise.

Corporate antibodies are strong in this one and Pat Gelsinger may have to take steps to make it so.

We like Tower and its management

We view Tower as a tough, smart, scrappy company in the model of many other Israeli companies. It has been cobbled together over the years from a string of deals and acquisitions done for little to no money, following a much different model.

We have known the CEO, Russell Ellwanger, since his days at Applied Materials and think he has done a great job. If Tower’s mentality and methods can be set loose in a much larger company with large resources, they might get a lot done.

A limited commodity

There aren’t that many foundries in the world that Intel could buy that would potentially make a difference and make sense.

We view Tower as a somewhat smaller but better, more profitable version of GlobalFoundries. Global was cobbled together through a number of acquisitions, stumbled badly in mainstream CMOS, and is now also trying to avoid the big dogs (like TSMC) by doing “specialty” silicon. Abu Dhabi poured a ton of money into GloFo, which is barely break-even, whereas Tower was more of a bootstrap operation that is solidly profitable. We think Intel might have been interested in GloFo until its IPO priced it out of that possibility (but Intel could get a second chance).

Could China have been in the mix?

We would certainly not have been surprised if China was taking a hard look at Tower. China has been sniffing around a number of Israeli-based technology companies because of its concern about being cut off from US technology; CFIUS restrictions on foreign investment/acquisition also make Israeli acquisitions much easier than US ones.

The very high premium could also be a sign of a behind-the-scenes bidding war, or of a wish to avoid someone else (like China) coming in to outbid Intel after the deal was announced.

Technology is needed and complementary

The technology side should be a no-brainer for Intel… just let it run and don’t screw with it. Most of the processes that Tower runs harken back to Gelsinger’s earlier stint at Intel, and most of those Intel people took the buyout long ago, so Tower knows the tech better than Intel does.
Intel can likely help with some older equipment and fabs it has lying around, and Tower could easily contribute to equipment reuse. Tower could likely get a ton of benefit from what Intel has written off.

The stock

While we view this deal very positively and as a step in the right direction, it has to be approved first, which will likely take at least a full year, and then be integrated (or not).

For Intel we think it’s a great trade to sell off the crummy memory business and get a foundry business in exchange… it just makes sense.

We don’t see this as a reason to run out and buy Intel stock, however, as the impact will take years, but we do see it supporting the longer term strategy.
We will likely hear more at Intel’s investor day this Thursday… perhaps in a presentation titled “Goliath buys David”.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.


About Semiwatch
Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch.

Also read:

The Intel Foundry Ecosystem Explained

Intel Embraces the RISC-V Ecosystem: Implications as the Other Shoe Drops

The Semiconductor Ecosystem Explained


The Intel Foundry Ecosystem Explained
by Daniel Nenni on 02-15-2022 at 5:00 am

Exciting times for the semiconductor industry! Last week Intel announced a billion-dollar fund to build a foundry ecosystem, and today Intel announced it is acquiring foundry Tower Semiconductor for $5.6 billion, WOW! Some people doubted Intel’s commitment to the foundry market this time. I think we can now put that to rest. Intel is in the foundry business, period.

First let’s talk about the billion dollar ecosystem investment:

How It Works: As a key part of its IDM 2.0 strategy, Intel recently established IFS to help meet the growing global demand for advanced semiconductor manufacturing. In addition to providing leading-edge packaging and process technology and committed capacity in the U.S. and Europe, IFS is positioned to offer the foundry industry’s broadest portfolio of differentiated IP, including all of the leading ISAs.

A robust ecosystem is critical to helping foundry customers bring their designs to life using IFS technologies. The new innovation fund was created to strengthen the ecosystem in three ways:

  • Equity investments in disruptive startups.
  • Strategic investments to accelerate partner scale-up.
  • Ecosystem investments to develop disruptive capabilities supporting IFS customers.

Who’s Involved: The IFS Accelerator features innovative partner companies across each of the three pillars of the program:

  • EDA Alliance: Ansys, Cadence, Siemens EDA, Synopsys
  • IP Alliance: Alphawave, Analog Bits, Andes, Arm, Cadence, eMemory, M31, SiFive, Silicon Creations, Synopsys, Vidatronic
  • Design Services Alliance: Capgemini, Tech Mahindra, Wipro

Richard Wawrzyniak wrote a nice explanation of the RISC-V investment (Intel Embraces the RISC-V Ecosystem: Implications as the Other Shoe Drops), so let’s continue with the EDA, IP, and Design Services sections.

EDA is an easy one for Intel as they are the biggest consumer of EDA tools. In fact, Intel is one of Synopsys’ largest customers, if not THE largest. And since Synopsys is also the largest IP vendor this relationship is critical. So, EDA is important but not a big challenge since Intel is already tight with the EDA community.

IP is a similar situation for Intel. There are already relationships with the commercial IP vendors through the TSMC-based business. Remember, Intel historically uses TSMC for 20% of its production, soon to be increased to more than 50% with N3, so there is a lot of TSMC-based commercial IP floating around Intel already. Opening that up to Intel-specific processes should not be hard, and that is where the billion dollars is going: to get that IP silicon proven on Intel processes and packaged up for foundry customers. According to the IP ecosystem, they are already gearing up to port military-grade IP to Intel processes.

The big weakness I saw in the announcement is the Design Services Alliance. Design services play an important role inside the foundry ecosystem. I had once suggested that Intel buy a big ASIC company to partner with closely, as TSMC does with GUC, UMC with Faraday, and SMIC with VeriSilicon.

Of course, now that Intel is buying Tower Semiconductor for $5.6B that disrupts the ecosystem quite a bit:

Tower’s expertise in specialty technologies, such as radio frequency (RF), power, silicon-germanium (SiGe) and industrial sensors, extensive IP and electronic design automation (EDA) partnerships, and established foundry footprint will provide broad coverage to both Intel and Tower’s customers globally. Tower serves high-growth markets such as mobile, automotive and power. Tower operates a geographically complementary foundry presence with facilities in the U.S. and Asia serving fabless companies as well as IDMs and offers more than 2 million wafer starts per year of capacity – including growth opportunities in Texas, Israel, Italy and Japan. Tower also brings a foundry-first customer approach with an industry-leading customer support portal and IP storefront, as well as design services and capabilities.

What isn’t mentioned is military embedded systems. Tower is perfectly positioned for US military, aerospace and defense contracts, and I can assure you that is part of the Intel strategy. Jazz Semiconductor, formerly Rockwell, is a key part of this transaction. More leverage for CHIPS Act money? Absolutely.

Jazz Semiconductor Trusted Foundry (JSTF), a wholly owned subsidiary of Tower Semiconductor Newport Beach, Inc., was created and accredited as a Category 1A Trusted Supplier by the United States Department of Defense’s (DoD’s) Defense Microelectronics Activity (DMEA) as a manufacturer of semiconductors that may be used in trusted applications. In addition, as of October 2014, JSTF has been accredited with Category 1B.

JSTF joins a small list of companies accredited by the DoD Trusted Program, established to ensure the integrity of the people and processes used to deliver national security critical microelectronic components, and administered by the DoD’s Defense Microelectronics Activity (DMEA). JSTF is proud to join the DoD Trusted Program to enable trusted access to a broad range of on-shore technologies and manufacturing capabilities.

The Intel investor call is getting ready to start. I will add more thoughts in the comments section.

Also read:

Intel Discusses Scaling Innovations at IEDM

Intel Architecture Day – Part 1: CPUs

Intel Architecture Day – Part 2: GPUs, IPUs, XeSS, OpenAPI


Intel Embraces the RISC-V Ecosystem: Implications as the Other Shoe Drops
by Richard Wawrzyniak on 02-14-2022 at 6:00 am

On Monday, February 7, 2022, Intel Foundry Services (IFS) made a series of major announcements regarding the RISC-V architecture and ecosystem:

  • Intel will be joining the RISC-V International association with a Premier Membership, and Bob Brennan, Vice President of Customer Solutions Engineering for Intel Foundry Services, will be joining both the RISC-V Board of Directors and Technical Steering Committee.
  • Intel has created a $1B strategic innovation investment fund as a collaboration between Intel’s venture capital group and IFS to support established companies and the slew of early-stage start-ups that are looking to create disruptive technologies and to foster further build-out of the RISC-V architecture and ecosystem.
  • IFS launched Accelerator, a foundry ecosystem alliance with three specific areas of focus:
    • The IFS Accelerator – EDA Alliance includes: Ansys, Cadence, Siemens EDA and Synopsys.
    • The Accelerator – IP Alliance includes IP vendors as well as SoC silicon vendors, chiplet vendors, board vendors and accelerator manufacturers. RISC-V vendor members include Andes Technology and SiFive. Other partners in the IP Alliance include Alphawave IP, Analog Bits, Arm, Cadence, eMemory, M31, Silicon Creations, Synopsys and Vidatronic.
    • The Accelerator – Design Services Alliance includes design service providers as well as cloud service providers, and the three partners are Capgemini, Tech Mahindra and Wipro.
  • Finally, Intel announced that IFS is working with several cloud providers to develop an open chiplet platform for creating products with multiple types of computing processor modules aboard.

All these announcements taken together indicate just how serious Intel is about restarting its silicon foundry business and its intent to become a major player. Intel was already supporting the x86 and Arm architectures, but the addition of RISC-V indicates a serious commitment to the silicon foundry customer base and a desire to be open to new innovations going forward. In fact, Intel has already developed a RISC-V CPU as a soft processor for use in its FPGA product lines: the Nios V, based on the RISC-V RV32IA architecture, is designed for performance, with atomic extensions, a 5-stage pipeline, and AXI4 interfaces. However, the current set of announcements goes far beyond this soft CPU offering.

Intel’s support in these announcements of the RISC-V architecture and its evolving ecosystem further validates RISC-V as a viable counter to Arm CPUs in the marketplace and adds even more credibility to the growing momentum behind RISC-V.

The creation of an open chiplet platform is especially notable since it could mean platforms using x86, Arm and RISC-V processors all in the same product. This would be very interesting to many companies looking to accelerate their product introductions and also reduce development costs. A standard created around this concept would go far in influencing future production commitments from prospective customers looking for foundry capacity and wafer starts. Since Intel would be responsible for developing such a standard, they would likely figure prominently in the decisions reached on where to engage such technologies.

The four RISC-V IP vendors partnering with IFS – Andes Technology, Esperanto Technologies, SiFive and Ventana Micro Systems – are all poised to benefit from this announcement. Their IP will be used by future IFS customers to optimize RISC-V CPU cores, chiplets and packaged products. In addition, they would likely gain access to Intel foundry capacity for their own silicon products at advanced nodes at some future point, although the current announcement does not discuss this.

All in all, this series of announcements, adding 16 founding companies to the IFS Alliance, paints a picture of a company – Intel – that is looking to establish a serious presence in the silicon foundry business, to once again be not only a market leader (they are currently #2 behind Samsung in the 2021 revenue rankings), but to also resume their position as a thought leader in our industry. This is reminiscent of the Intel of the 1980s and 90s, when Intel was driving the industry with new products and concepts. Truthfully, much of this was initially driven by the rise of the PC market and then later the rise of the internet, but there was no shortage of innovative ideas and concepts.

Today, the semiconductor market is long past being solely driven by the PC market and the internet, but new horizons beckon, with AI, 5G, IoT, IIoT and automotive all poised to become the new drivers of our industry. Intel seems poised to assume a leadership position once again, and its recent commitments to new fabs and capital expenditures underline this. Adding support for the emerging RISC-V architecture and ecosystem backs up these commitments in a very meaningful way!

However, there are other areas of the semiconductor market that are impacted by these announcements beyond the silicon foundry market.

Implications and Possibilities

The EDA Market

IFS named several EDA vendors to their Accelerator EDA Alliance – Ansys, Cadence, Siemens EDA and Synopsys. Each of these companies has a range of design tools that can be applied to silicon designs using the RISC-V architecture. Semico expects these companies will see increased usage of their tools now that Intel is seeking new customers for its foundry services.

An added benefit to these companies is having early access to process and advanced IC packaging roadmaps, process design kits (PDKs) and technical training. This allows the EDA vendors’ R&D teams to fine-tune EDA tools and IP for the Intel portfolio of process and packaging technologies so customers can meet power, performance and area requirements.

SoC Design

SoC designs are growing around 5% – 6% per year – and this in an environment where wafer starts are constrained due to high demand. With Intel signaling investment in several new fabs over the next few years, and with TSMC and Samsung also committing to higher capex over the short term, Semico believes the industry can expect to see more companies undertake new design activity. Although design starts are not tied directly to wafer capacity, it does help to have a reasonable expectation that your just-completed design will find a home at one of the leading silicon foundries and have a hope of being produced in a reasonable timeframe!

The SoC Market

Creation of an open standard around a chiplet platform that embodies heterogeneous CPU architectures offers the possibility of establishing new criteria for performance and flexibility in SoC silicon solutions. The emerging chiplet market already has a great deal of interest and momentum behind it, and the commitment to drive its evolution further will create even more support for it.

The SIP Market

More designs and more SoC silicon mean more IP being created and licensed. The chiplet phenomenon is going to be one of the main drivers of growth in this market in the coming years. IP is used in the creation of any SoC solution, and chiplets will be no exception. As we have seen previously, designs that start out small grow in performance and complexity over time due to evolving market requirements, requiring even more IP. The prospect of heterogeneous CPU designs based on the chiplet standard and platform Intel is looking to create could enable another uptick in IP market growth and revenue generation. Increasing complexity and performance levels usually mean more IP usage, and the IP market is well-positioned to deliver on those requirements.

RISC-V International

We would be remiss if we didn’t also mention the great job the RISC-V International association has done in shepherding and guiding the RISC-V architecture and its community to this point. By their steady guidance, they have enabled a new CPU architecture to take center stage in an amazingly short period of time. Their leadership should be commended for its vision and service to the semiconductor industry. Good job everyone!

The Other Shoe

“Synchronicity describes circumstances that appear meaningfully related yet lack a causal connection.” – C.G. Jung. Or: nothing ever happens in a vacuum!

On Tuesday, February 8, 2022, SoftBank and NVIDIA officially terminated their agreement for NVIDIA to acquire Arm. SoftBank will revisit the plan to take Arm public, likely via a US listing, sometime within its 2023 fiscal year, ending March 31, 2023.

Simon Segars has stepped down as Arm CEO and will be replaced by Rene Haas, who will oversee Arm’s IPO. Segars will “support the leadership transition in an advisory role for Arm.”

Upon the initial execution of the Purchase Agreement between SoftBank and NVIDIA, NVIDIA paid SoftBank a total of $2.0B. Here is a direct quote from SoftBank’s 2021 annual report:

“Upon the execution of the Purchase Agreement on September 13, 2020, SBGC and Arm received cash totaling $2.0 billion. Of this amount, $1.25 billion was received as a deposit for part of the consideration in the Transaction (refundable to NVIDIA subject to certain conditions until the closing of the Transaction, after which such amount will become non-refundable) and $0.75 billion was received by Arm as consideration for a license agreement that Arm and NVIDIA entered into concurrently with the execution of the Purchase Agreement.”

– Arm-NVIDIA acquisition, SoftBank 2021 Annual Report, page 78

The official ending of the purchase agreement is a recognition of the regulatory headwinds that were aligned against the deal and signals it is time to move to the next phase of this story – the successful IPO of Arm as an independent entity once again.

In this context, the ‘other shoe’ refers to the Intel announcements and now the Softbank-NVIDIA announcement in conjunction with each other. As far as I can see, no one has yet commented on these two events taken together. While coincidences do happen, I suspect many people will take these two events as being tied together; they very well may be, although they don’t appear to be on the surface.

Could Intel have orchestrated these announcements at this time to take advantage of the difficulties being encountered by Arm and NVIDIA? It is possible, but unlikely. The deal was either going to gain approval or not based on its own merits, and the timing of these announcements, while very interesting, is probably not connected to the approval process.

Arm–NVIDIA

Now that the acquisition is officially terminated, what is next for Arm and NVIDIA? Arm has received the $750M from NVIDIA, presumably to account for licensing all of Arm’s IP in a long-term arrangement and to partially offset the costs of collaboration before the two companies became one. Now that the merger is no longer happening, what are the likely next steps for both?

  • One of the arguments Arm put forth in favor of approval was that if SoftBank pursued an IPO for Arm, it would not be able to attain the desired proceeds from the IPO.
  • Arm would not be able to attract the level of investment they feel is necessary to fund their future R&D efforts as an independent company.

While both these statements may be true to a degree, it seems highly unlikely that an IPO of Arm would not generate a reasonable amount of money. Would it be enough to satisfy SoftBank? They originally acquired Arm for $32B and would definitely want to recoup at least that sum. The question is: can they?

Arm is the largest IP vendor today by far, and one could assume that any IPO would take that into account. Both the SoC market and the IP market are growing nicely, and as outlined above, the Intel chiplet initiative will likely add to that growth. It is likely that Softbank would garner a reasonable amount of cash from the IPO—although probably not the $32B+ they are looking for—and they could take the rest in stock in the new company. Over the long term, this would provide them the amount of money they think Arm is worth—it would just take longer than Softbank would want.

Is it really the case that Arm, after an IPO, would be unable to generate adequate capital to fund its R&D efforts? This is more problematic. When Arm was a standalone company, it did have this problem to some degree. But that was several years ago, when Wall Street was not as in tune with the SoC, IP and AI markets as it is today. Things have changed with the rise of AI, which Arm is at the heart of with its Neoverse CPU architecture. Arm now has a competitor about to become more capable thanks to new investments in its ecosystem. Arm will have its work cut out for it, but the company is still the market leader by a wide margin!

With the termination of the acquisition, NVIDIA’s effort to commercialize its IP into licensable form using Arm’s expertise is going to be more difficult, but not impossible. Its effort to move its GPU architecture SIP into SoC silicon and obtain access to some of the markets Arm serves needs to be intelligently orchestrated and managed.

Can NVIDIA still successfully commercialize its GPU as licensable IP? It is hard to believe they will be unable to do so. They could always commission Arm to do the commercialization and not need to pay $40B+ for the privilege. They already have a very tight partnership with Arm. Why not give them another $1B–$2B to complete the task if NVIDIA truly believes this is a must-have for their business going forward? It will be more difficult now, but still mostly doable.

This brings up an interesting point: Arm and NVIDIA have been working on this collaboration for approximately 18 months – how close were they to completion? Even if not fully finished, now that the deal is over, some of the IP being worked on must be close to done even if it still needs some work. What happens to that IP? I’m sure they can’t simply abandon it – too much money and time invested. Once it is finally completed, we could see a cross-licensing deal between the two companies, which would still get the NVIDIA IP into the market with some sort of revenue-sharing arrangement. After all, one of the reasons NVIDIA wanted Arm was to gain its marketing prowess and contacts to market the IP. That can still be done, but as a partnership, not as an acquisition.

It is likely that a larger loss to NVIDIA would be the lack of access to Arm’s markets – the ones that NVIDIA cannot access today. NVIDIA was looking for Arm’s expertise to take the NVIDIA GPU architecture and pare the power requirements down to something more palatable for the mobile market. This is probably still possible, but needs to be carefully done. Not every application requires hundreds of GPU cores, and a judicious scaling of the architecture will probably yield results that fit most applications as IP. Also, now that Arm and NVIDIA will remain separate companies, and NVIDIA does not absolutely need to use Arm exclusively, the door is partially open to using one of the RISC-V CPU cores coming onto the market. At the end of the day, it’s all about fitting the right solution to the right application, and a RISC-V CPU core might be one of those solutions in the future.

Conclusions

With these announcements, Intel has acted both strategically and tactically:

  • Strategically, in aiding and empowering a competing architecture to Arm
  • Tactically, in lining up 16 companies to support its foundry initiative

Intel has satisfied many needs all at one time and adroitly given itself momentum for re-entry into the silicon foundry market. Now it needs to execute on this vision. Given how Intel has handled the process so far, Semico thinks it likely it can do so!

Also read:

The Intel Foundry Ecosystem Explained

The Rising Tide of Semiconductor Cost

Podcast EP61: A deep dive on Intel and semiconductors with Stacy Rasgon


The Rising Tide of Semiconductor Cost
by Doug O'Laughlin on 02-13-2022 at 6:00 am

It Isn’t Transistory

There’s a quiet upheaval happening in the semiconductor industry. The rules that have always governed the industry are fraying, undoing assumptions we took for granted, assumptions that were pounded into us in school. The irreproachable Moore’s Law, the idea that exponential progress will make things cheaper, better, and faster over time, is dead.

People are starting to appreciate that making a chip is not easy. Shortages and the geopolitical concentration of TSMC and ASML have awakened the popular imagination and have highlighted the science-fiction-like process of chipmaking. The road ahead has obstacles that aren’t widely appreciated. Making a semiconductor is going to get even harder, more expensive, and more technical. In other words, the challenges are going to accelerate.

To operate in the future, chipmakers will need more scale, more talent, and more money. I’ve written about this before, but I want to dive deeper into what’s driving the rising cost of making a semiconductor. It affects the entire range of chips, from the most advanced to the most basic. The trend is not new; it’s already been happening, but I believe it will now start to pick up speed. And the price increases will likely impact every person on earth. This inflationary cost is not transitory.

To understand how we got here, I want to first refresh you on the death of Moore’s Law. We’ve topped out transistor-energy scaling and frequency scaling, and we’re starting to hit the end of multi-core scaling even as transistor density increases. But more important than the end of those trends, cost scaling has ended. While we continue to improve transistor density through new techniques, each one layers on additional costs.

ASML, in its investor day, made a bold statement that Moore’s Law will continue with system-level scaling. Another name for this is advanced packaging. But these costs are additive to the already escalating costs of making a smaller transistor.

While I believe Advanced Packaging is going to solve the transistor-density problem, I don’t believe it will make chips cheaper. In fact, transistors per dollar have gotten more expensive since the early 2010s.

I want to focus not just on the technological headwinds, but the cost headwinds. One of the major historical assumptions of Moore’s Law is that not only would your transistors double every two years, but the cost of the transistors would decline. No longer. The chart below is from Marvell’s 2020 investor day. The bar for 28nm was approximately 2011-2012.

What’s interesting is that a qualitative change happened around 28nm, as it was one of the last planar nodes. Planar in plain language is a two-dimensional surface (plane), while FinFET – the technology that replaced planar – introduced a “fin” that juts upward from the transistor, creating a 3D structure instead of a 2D structure. We are now on the verge of yet another gate transition – gate-all-around (GAA), an even more 3D-intensive structure. As we switch to GAA or the next iteration of gate technology, I believe that the cost per 100M gates will continue to increase, just as it did for FinFET over planar. This is driven by the increased complexity of making these chips – namely, the added number of manufacturing steps.

It isn’t just this transition that’s pushing costs higher. The lagging edge – older chips – is starting to get more expensive, too. The story here is not technological but economic: what was once ample capacity with commodity-like returns is starting to become in demand. Businesses are not willing to add capacity unless price increases follow. This is another key driver, not just for the most advanced chips but for older ones as well.

Finally, it’s not just old chips and new chips: the companies that make the chips (semiconductor fabs) are becoming more consolidated and more strategic. There really isn’t a lot of room at the leading edge, where the most advanced chips are made. This is the third driver of semiconductor costs: fabs that offer a one-of-a-kind product are passing their rising costs on to customers. TSMC is not a price-taker, and the world is reliant on its products. Despite the increasing costs, they’re starting to extract larger profits. Fabless companies have no choice but to pay more.

Each of these themes deserves a deeper dive. I’ll start first with my favorite topic: Semicap – or the tools that are required to make a chip.

Industry Consensus: Semicap Cost Intensity Will Go Up

One of the universal themes this earnings season was the higher cost of tools to make semiconductors. The real warning shot was this slide that Tokyo Electron presented at its investor day.

The drastic price increase is broad-based and is across DRAM, NAND, and Logic. Given that 5nm is in production today, this is not a prediction, but a trend that’s going to continue. The primary driver is not only the rising costs of tools such as EUV, but the rising number of steps to make a chip. Below is a graphic that shows the increase of steps over time.

It’s not just Tokyo Electron making the call for higher intensity. This most recent earnings season, TSMC, Lam Research, KLAC, and other semicap companies called out rising intensity. All else being equal, I think the cost of 100K wafer starts at the leading edge should start to rise low- to mid-single digits per chip. Another way to look at this is from the top-down perspective. I compared the total capacity shipped in Million Square Inches (MSI) to wafer-fab equipment (WFE) spending growth. Think of this as total volume versus the spending to make more wafers.

Excuse the 5-year averages; the numbers year to year are much bumpier. The workbook is attached at the end of this post, by the way.

The relationship used to be that spending on new fab equipment grew more slowly than MSI, as each tool purchased would give more capacity. I want to highlight that, if you squint, you can see the relationship of WFE and MSI seems to have flipped somewhere around 2012. This coincides with 28nm, the crucial node where the cost increases started. Going forward, I think we should expect WFE to increase faster than historical averages.

From 2012 to 2020, MSI increased at a compound annual growth rate of 3.6%, while WFE increased at a compound annual growth rate of 8.0%, roughly double the rate of capacity additions. If demand for semiconductor devices continues to increase, then MSI should grow faster, and thus WFE at a higher multiplier. I am unsure whether WFE’s relationship holds at 2x MSI growth, but it should be a multiplier higher – say, 1.5x-2x MSI. That’s an acceleration compared to its historical rate, driven by the demand for semiconductors and the technological headwinds we’re facing.
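
As a quick sanity check on those numbers, here is a minimal sketch assuming only the quoted 3.6% and 8.0% CAGRs and the 2012-2020 window:

```python
# Sanity-check the MSI vs. WFE comparison using the CAGRs quoted above.
years = 2020 - 2012  # 8-year window

msi_multiple = (1 + 0.036) ** years  # capacity shipped, ~1.33x over the window
wfe_multiple = (1 + 0.080) ** years  # equipment spend, ~1.85x over the window

print(f"MSI grew {msi_multiple:.2f}x, WFE grew {wfe_multiple:.2f}x")
print(f"WFE CAGR is {0.080 / 0.036:.1f}x MSI CAGR")  # ~2.2x, i.e. 'roughly double'
```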

I want to move now to another trend that got called out repeatedly this last earnings season that frankly surprised me but is surprisingly logical: old chips are more expensive.

Old Chips Cost More

It’s not just the newest, fastest, and most expensive chips that are costing more because of technological problems. The most interesting recent trend is that old chips are starting to see price increases. The recent inflection in automotive semiconductors is driving demand not for the latest and greatest, but for older, more mature technologies. The problem is that there has never been meaningful capacity added for older technologies; most of the time, fabs would just become “hand-me-downs” as the leading edge pushed forward, and the fab equipment would continue to be used and depreciated. Using fully depreciated fab equipment meaningfully lowered the cost to make a semiconductor, especially after 10+ years.

That’s been an important aspect of pricing. A fab being maintained without much incremental capital, yet still producing chips, is what has driven down the price of older chips so much over time. Often there would even be improvements in yield, which further lowered costs. The thinking went that a leading-edge chip that cost hundreds of dollars in 2000 would cost pennies in 2021 because the fab would be fully depreciated.
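
A stylized illustration of that depreciation effect, with every number below invented purely to show the mechanics:

```python
# Hypothetical numbers only: why a fully depreciated fab can sell "old" chips
# cheaply, and why a greenfield lagging-edge fab cannot match that price.
capex = 2_000_000_000      # assumed fab construction cost, $
dep_years = 10             # assumed straight-line depreciation period
wafers_per_year = 500_000  # assumed fab output
variable_cost = 1_000      # assumed per-wafer materials/labor/overhead, $

dep_per_wafer = capex / dep_years / wafers_per_year  # $400 while depreciating

print(f"new fab wafer cost:         ${variable_cost + dep_per_wafer:,.0f}")
print(f"depreciated fab wafer cost: ${variable_cost:,.0f}")
# The ~40% gap is the price umbrella a new lagging-edge fab has to close,
# which is why greenfield trailing-edge capacity implies higher prices.
```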

But there’s a problem with that. We have a situation that’s never happened before. Historically, demand was clustered toward the leading edge. Today, we have demand at the lagging edge, too. In fact, demand for older chips is starting to rise sharply. This is driven by automotive and IoT production, as most of these applications use older, more mature technologies, which have better yield, cost, and, importantly, reliability. And even though demand has spiked, no capacity has been added. It’s very rare to add lagging-edge production.

Up until very recently, there was ample capacity at the lagging edge. It would have been unheard of to add capacity to the lagging edge. A fab was considered “full” if it was running at 80%. Now, trailing-edge fabs are running at close to 100%. Something must change.

This is one of the drivers of the automotive semiconductor shortage: higher demand and no supply or incentives to add more supply. Most semiconductor firms and fabs are obsessed with the leading edge because they can make higher profits. Now firms are waking up to maintaining the lagging edge. The “old” chips are becoming just as important as new chips, and to add capacity, firms had to start making large capital additions again.

There are obvious pricing problems in this scenario. With a new fab, a company can’t turn a profit selling a lagging-edge chip at the price previously dictated by a fully depreciated fab. Prices have to go up. This dynamic was discussed by Silicon Labs’ CEO last quarter:

just in terms of the cost increases and the durability of that, I think that we are going into kind of a new phase of the semiconductor industry, where we’ve got Moore’s Law and advanced nodes becoming more and more expensive and you’ve got mainstream technology now full. And it used to be that the digital guys would move out and the N minus 1, N minus 2, N minus 3 nodes would be fully depreciated fabs that you would move into. We have now reached a point where the mix, the ratio between advanced and mainstream is causing fabs TSMC and others to build new mainstream technology, and that means that those fabs are not fully depreciated. So a large element of the cost increases that the industry is seeing right now is because of the additional CapEx that’s having to be put in to build new capacity across the nodes, not just at the advanced nodes, but across.

And so if you look at the cost increases that we’re seeing in other — it’s across the industry, there is a certain element of that, that’s durable over time. And so this is a step function in terms of the cost structure of the industry to match the demand that we’re seeing and the increasing content of electronics throughout the economy and the acceleration of demand that we’ve seen through the pandemic has really pushed that forward and driven us into the supply constraints. We’ll work through that, but to work through that is requiring a lot of CapEx, and that’s got to be recouped. And that’s got to flow upstream from our suppliers to us, to our customers and that’s what you’re seeing right now.

What’s great is that this wasn’t a single company saying this in a vacuum. ON Semi, NXP Semiconductors, Microchip, and other “mainstream” or lagging-edge semiconductor companies affirmed that this is going to continue because demand at the lagging edge is higher than supply. This is completely new territory for the industry, and not only a confirmation of the strategic importance of semiconductors but also of the broad-based demand. There is really only one solution — adding capacity — but fabs are uncertain they could make a profit adding greenfield lagging-edge capacity without raising prices. In turn, they’re concerned about the durability of demand for these “new” lagging-edge chips.

It’s a standoff. Fabs and semiconductor companies are uncertain that this will last, but Auto OEMs and the like seem to have insatiable demand. For a fab, it’s hard to change your behavior in one year against a trend that has lasted decades. Even harder is to go through those difficult changes and expect customers to accept higher prices. UMC put it well in its Q2 2021 call:

But for any greenfield capacity expansion, for the mature node, you are competing with a fully — most likely, you’re going to be competing with a fully depreciated capacity. Unless the demand is significantly important to the customer, but the economics — if the economics stay the same, it will be very difficult to have a justified ROI. However, if the customer is willing to face those challenges together with us, we will definitely explore those opportunities.

Put differently, pay up or continue the shortage. And given that the lead times aren’t really improving, I think that paying up marginally will be the outcome. There has to be some kind of compromise here, and right now it’s coming in the form of non-cancelable, non-returnable orders (NCNR). Fabs are not going to expand trailing-edge capacity unless their customers truly commit to orders that cannot be canceled or returned. It’s uncertain how much double ordering is going on, but NCNRs are a pretty strong commitment to adding long-lived demand.

While the addition of capacity should eventually smooth out price increases over time, this will take multiple years, especially if the demand from automotive and IoT devices continues. I don’t think that prices will rise forever in lagging edge, but I think that we should expect the relationship of historical price decreases to not be as large as they used to be. Lastly, that leaves me with the 800-pound gorilla — Taiwan Semiconductor Manufacturing Company.

TSMC Is Not Lowering Its Prices

Tool prices for leading-edge products are increasing, and this should hurt the margins of TSMC and other leading-edge fabs. While we can see gross margin contraction at Intel, that is more a function of company-specific investment to catch up to TSMC. At TSMC, the exact opposite is happening. TSMC is convinced that it can improve its margin with smaller nodes. The company reiterated this multiple times on its calls, and given TSMC’s one-of-a-kind product, the higher gross margin is very justified. The reality is that TSMC is not a price-taker but a price-setter. The primary reason: it has a stranglehold on all leading-edge manufacturing.

100% of sub-10nm logic in the world was made at TSMC in 2019. It’s likely that Samsung has caught up, but in terms of the leading edge, it’s just TSMC leading the pack. Given the concentration of fabrication power at TSMC, customers really don’t have another option. If a fabless company wants to sell its latest and greatest chips at a higher margin, it must go to TSMC. Meanwhile, TSMC’s gross margin has slowly drifted higher over the years.

Despite intense CapEx increases, which flow through to cost of goods sold as depreciation, their gross margin is still high, and the company believes it will stay that way.

TSMC put it this way on its most recent call, and given its staggering capital expenditures, I believe it’s justified:

Even as we shoulder a greater burden of the investment for the industry, by taking such actions, we believe we can achieve a proper return that enables us to invest to support our customers’ growth and deliver long-term profitable growth with 50% and higher gross margin for our shareholders.

Right now, in order to shoulder the higher CapEx burden, the company is passing costs on meaningfully. It recently hiked chip prices by 20%, and it can and will raise prices more if the cost of making an advanced semiconductor continues to climb. Until the situation changes, there’s no other meaningful competition in town. If Samsung and Intel get their act together in foundry, maybe they will be willing to take lower gross margins, but until then it’s TSMC’s world we’re living in. It’s going to raise prices on the back of higher capital costs.

The Price of a Semiconductor is Rising and It’s Not Transitory

Each of these factors on its own would be a tailwind to rising prices. Together, they make it a certainty that prices are going to rise from here. Between more steps leading to higher tool costs, the declining economic benefit of Moore’s Law, and lagging-edge capacity problems, I believe the historical cost declines we have seen in semiconductors will start to slow. In some places, I believe the price increases at the trailing edge are here to stay. Before you go wild with inflation speculation, I want to reiterate that semiconductors are only a ~$530 billion industry, or about half a percent of global GDP. This is not going to cause rampant inflation everywhere.

The demand side of the equation doesn’t seem to be slowing in the slightest. The higher demand is coming from more verticals than ever, with PC, mobile, data center, automotive, and IoT each a credible, large, and growing end market. Certain applications, such as machine learning, are becoming more compute-intensive going forward. As software ate everything in our lives, silicon made the meal possible. All this is happening as Moore’s Law is screeching to a halt. In simple terms: higher demand + lower supply improvements = price goes up.

I’m going to dive into who I think benefits in the paid-only section. Thanks for reading this far.

Also read:

Podcast EP61: A deep dive on Intel and semiconductors with Stacy Rasgon

Podcast EP61: A deep dive on Intel and semiconductors with Stacy Rasgon
by Daniel Nenni on 02-11-2022 at 10:00 am

Dan is again joined by Stacy Rasgon, Managing Director and Senior Analyst, U.S. Semiconductors at Bernstein Research. Stacy is an unusual semiconductor analyst in that he holds a Ph.D. in chemical engineering from MIT. His substantial technical knowledge allows for a deep dive on Intel and other semiconductor topics that you will find refreshing and quite interesting.

Podcast EP12: A Close Look at Intel with Stacy Rasgon

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Verific Sharpening the Saw
by Bernard Murphy on 02-11-2022 at 6:00 am

Verific is an unusual company. They are completely dominant in what they do – providing parsers for Verilog/SV, VHDL and UPF – yet they have no ambition to expand beyond that goal. Instead, per Michiel Ligthart (President and COO), they continue to “sharpen the saw”. This is an expression I learned in sales training, habit #7 from The 7 Habits of Highly Effective People: constantly refining and polishing (or sharpening) the tools you already have rather than launching out into building new tools. That’s a great way to keep existing customers loyal and to steadily grow a business. They are still investing in interesting development, but it is all around these core tools.

The Core Business

Michiel told me that their SystemVerilog business continues very strong, as does VHDL. A blow perhaps for those who hope VHDL is approaching its end. Apparently this is partly a function of popular IPs available only in VHDL and partly a function of markets or organizations which are still VHDL-centric (think defense, aerospace, FPGA).

Speaking of markets, Verific continues strong with all the major and many of the minor EDA vendors. It also continues to enjoy popularity among semiconductor and systems development groups in support of utility applications – glue, proprietary transforms, analytics and other functions. Michiel mentioned that demand for their UPF parser was originally low but has been picking up recently. I would guess the slow start reflects EDA vendors wanting to keep UPF parsing in-house for now; the recent pickup in demand then reflects design teams extending to analysis and manipulation of UPF for their own purposes.

Michiel acknowledged that they continue to polish for corner cases (I remember this was a constant battle at Atrenta). Verific may not be perfect (even the standards aren’t perfect) but they continue to pursue that goal as their priority.

Beyond the Basics: Cross Module References

Support for these is a good example of sharpening the saw with a little new development. Cross module references (XMRs) are those references you can embed in your RTL, like top.A.B.x. XMRs are essential for assertions and more generally for verification/debug. I can even imagine them being useful for quick what-if experiments within functional code, without having to go through the pain of making a change synthesis-legal.

It turns out that handling XMRs efficiently in a parser is trickier than it might appear. The logic for a simulator is easy enough, but uniquification can blow memory consumption up massively. Now Verific is finishing up an improved data structure to better support path-based uniquification, which can dramatically limit this growth.
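
A toy model shows why the data structure matters; the counts below illustrate the principle only and are not Verific’s actual implementation:

```python
# Toy illustration (not Verific's data structure): uniquifying a hierarchy.
# Naive uniquification clones every instance so each gets a unique definition;
# path-based uniquification clones only the modules along the XMR's path.

def naive_clones(fanout, depth):
    """Uniform tree: fanout^1 + fanout^2 + ... + fanout^depth module copies."""
    return sum(fanout ** d for d in range(1, depth + 1))

def path_based_clones(depth):
    """One XMR like top.A.B.x touches one instance per level of its path."""
    return depth

fanout, depth = 8, 6
print("naive clones:     ", naive_clones(fanout, depth))   # 299,592 copies
print("path-based clones:", path_based_clones(depth))      # 6 copies
```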

Scripting in Python

This one should attract a lot of attention – it certainly has among system clients. EDA vendors and CAD teams want to work with Verific at the C++ level, but that only makes sense when developing or extending tools that will have a wide audience and be needed for many years. Design groups have always needed utilities to manipulate design content for a smaller audience and a likely smaller half-life. They need scripting capabilities, and the scripting language of choice these days is Python, which is nicely reflected in Verific’s INVIO add-on to its parsers. (Sorry, Tcl fans: you are strong in implementation, but you have no play in pure RTL manipulation.) I predict Python manipulation of SV, VHDL and UPF is likely to attract a very eager following.
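
For flavor, here is a hypothetical sketch of the kind of short-half-life utility a design group might write in Python; the stub classes and the tiny netlist below are invented stand-ins for a parsed design database, not the actual INVIO API:

```python
# Hypothetical illustration only - NOT the INVIO API. A stub netlist stands in
# for a parsed design, to show the flavor of small Python RTL utilities.

class Instance:
    def __init__(self, master):
        self.master = master  # name of the module/cell being instantiated

class Module:
    def __init__(self, name, instances):
        self.name = name
        self.instances = instances

# Stub "parsed design": top instantiates core twice; core contains a latch.
design = [
    Module("top",  [Instance("core"), Instance("core")]),
    Module("core", [Instance("LATCH_D"), Instance("AND2")]),
]

# Utility: report every module that instantiates a latch cell - a few lines
# of script instead of a C++ application.
for module in design:
    latches = [i for i in module.instances if i.master.startswith("LATCH")]
    if latches:
        print(f"{module.name}: {len(latches)} latch instance(s)")
```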

In short, Michiel and team continue to sharpen the saw in interesting ways, but with a continued focus on keeping loyal customers happy above all. You can learn more about Verific HERE.

Also read:

COO Interview: Michiel Ligthart of Verific

2021 Retrospective. Innovation in Verification

Topics for Innovation in Verification


S2C’s FPGA Prototyping Solutions
by Kalar Rajendiran on 02-10-2022 at 10:00 am

Prototyping solutions have been in the news a lot lately, and the FPGA-based prototyping approach is pretty widely used. On a panel session at DAC 2021, Amir Salek, Head of Silicon for Cloud and Infrastructure at Google, had the following to say: “Prototyping FPGA is a tremendous platform for testing and validation. We are doubling down on emulation and prototyping every year to make sure that we close the gap between the hardware and software.”

With unrelenting time to market pressures, companies deploy different tactics to compress their product development cycles. As a product launch requires having both the hardware and software ready at the same time, companies seek a head start on software development. Rather than waiting for their chips to be ready in final form, they leverage FPGA prototyping to develop and test their software. Hardware/software co-verification is an important aspect of product development, enabling integration of software with hardware well before final chips and boards become available.

While FPGAs, with their flexibility, lend themselves nicely to implementing prototypes of very complex, high-performance SoCs, they do come with drawbacks. One is that they trail in supporting the most advanced connectivity interface speeds. System interfaces such as PCIe, DDR and HBM have been evolving rapidly to support faster and faster data transfer speeds. Mainstream FPGAs can fit large SoCs but lag on the most advanced interface speeds, while more advanced FPGAs can support the latest PCIe speeds but cannot yet fit large SoCs.

S2C solved the above dilemma through a partnered solution with Avery Design Systems. Avery’s PCIe and memory speed adapters can be synthesized into S2C’s FPGA prototyping platforms to support up to PCIe 6.0, HBM3 and LPDDR5 protocol interfaces. You can learn more about this S2C-Avery partnered solution from a recent SemiWiki post. That post provided just an overview of S2C’s product offerings, as its primary focus was on Avery’s speed adapters. The speed adapters cannot do their part without a complete FPGA prototyping platform.

A global leader in FPGA prototyping, S2C has been delivering rapid prototyping solutions since 2003, with many different product lines leveraging both Intel and Xilinx FPGAs. Their solutions can fit a wide range of design sizes and support advanced interface speeds as well. Here is a short video clip about their Prodigy Logic Matrix LX2 prototyping solution. The following is a summary of their range of platforms and accompanying productivity tools.

S2C’s Prototyping Platforms

S2C’s offerings cover prototyping needs across the spectrum of high-end, mainstream and cost-sensitive designs.

Prodigy Logic Matrix

The Prodigy Logic Matrix is built to address the highly complex, high-performance, large SoCs used in applications such as data center, AI/ML, 5G and autonomous driving. It is a high-density FPGA prototyping platform that is optimized for space and connectivity. It is designed for multi-system expansion to support billions of ASIC gates of capacity.

Prodigy Logic System

The Prodigy Logic System series is a compact, sleek, all-in-one design that offers maximum flexibility, durability and portability. Both Xilinx-FPGA-based and Intel-FPGA-based versions of these systems are available to match customers’ preferences.

Prodigy Logic Module

The Prodigy Logic Module series is specially designed with low profile and affordability in mind. The Prodigy Logic Modules are based on Xilinx’s KU and K7 FPGA devices.

S2C’s Productivity Tools

S2C’s prototyping solutions are accompanied by a suite of productivity tools that eases the setup, development, test and validation process.

 

PlayerPro

Prodigy PlayerPro is a tool that offers an integrated GUI environment and a Tcl interface, and works with S2C’s prototyping platforms. It makes it easy to take an existing design, compile it, partition it across multiple FPGAs, and generate the individual bit files. It also handles remote system management and provides setup for multi-FPGA debugging.

ProtoBridge

Traditionally, realizing the full benefits of a prototyping platform meant investing in additional hardware and other resources. With ProtoBridge, customers can leverage as much of their existing resources as possible. For example, they can use a PC’s memory to store data rather than providing additional memory on the design under test, and they can drive the prototyping platform from other commercial or in-house verification tools through a C-based API over standard AXI. IP blocks connected to the AXI bus can be verified without processor cores or peripheral blocks. For all the details, refer to the Prodigy ProtoBridge page.
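
The concept is easy to sketch. In the toy Python model below (all names are assumptions for illustration; this is not S2C’s ProtoBridge API), a plain host-side buffer services AXI-style read and write transactions on behalf of the prototype:

```python
# Conceptual sketch only; names are assumptions, not S2C's ProtoBridge
# API. The idea: the host PC's memory stands in for memory on the
# prototype, servicing AXI-style read/write transactions from the DUT.

class HostBackedMemory:
    """Host-side store standing in for memory on the prototype board."""
    def __init__(self, size: int):
        self.mem = bytearray(size)

    def axi_write(self, addr: int, data: bytes) -> None:
        """Service an AXI write burst from the DUT."""
        self.mem[addr:addr + len(data)] = data

    def axi_read(self, addr: int, length: int) -> bytes:
        """Service an AXI read burst from the DUT."""
        return bytes(self.mem[addr:addr + length])

mem = HostBackedMemory(1 << 20)                 # 1 MB of host-backed memory
mem.axi_write(0x1000, b"\xde\xad\xbe\xef")      # DUT writes...
assert mem.axi_read(0x1000, 4) == b"\xde\xad\xbe\xef"   # ...and reads back
```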

Neuro

Neuro enables multi-dimensional management of resources over multiple projects and users. It helps optimize resource utilization by enabling users to quickly access FPGA compute power and CPU clusters wherever they may reside. This allows S2C’s customers to leverage their in-house resources as well as cloud-based resources.

Prototype Ready IP

S2C provides a large library of off-the-shelf interfaces and accessories for FPGA-prototyping. All interfaces and accessories work with the Prodigy Logic Modules to further speed up and simplify the system prototyping process. Accessory modules are supplied as daughter boards that plug into the Prodigy Logic Module, providing pre-tested interfaces and reference design flows for easy bring-up.

S2C also provides professional services to customize interface and accessory modules to meet their customers’ application needs.

Multi-Debug Module

The S2C Prodigy Multi-Debug Module is an innovative debug solution for FPGA prototyping. It can run deep-trace debugging simultaneously on multiple FPGAs, tracing up to 32K signals per FPGA in eight groups of 4K signals each without re-compiling, and can store 8GB of waveform data without consuming user memories.

Availability

S2C FPGA prototyping products are in stock now, and customers can get their hands on them with a short lead time. With the industry facing supply chain constraints, this is refreshing to hear. For more information, contact S2C.

Also Read:

DAC 2021 Wrap-up – S2C turns more than a few heads

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions

S2C EDA Delivers on Plan to Scale-Up FPGA Prototyping Platforms to Billions of Gates


Agile SoC Design: How to Achieve a Practical Workflow

Agile SoC Design: How to Achieve a Practical Workflow
by Andy Tan on 02-10-2022 at 6:00 am

Picture1

“The only sustainable advantage you can have over others is agility, that’s it.” (Jeff Bezos)

The best workflow is the one that has been working – until it doesn’t. In their quest to tape out the next design, SoC teams rely on their proven workflow to meet design constraints and project milestones. As is often the case, design constraints or project milestones change during different phases of a workflow, and prior workflows may be rendered obsolete by new technology nodes adding new challenges. That’s why many SoC teams have had to constantly tweak their workflows in order to meet their elusive tape-out milestones and constraints.

Agile was first introduced in the ‘90s (yes, it’s been around for that long) to improve software development teams’ responsiveness to change, productivity, predictability, and quality. Since then, 25 years of data show that Agile projects are nearly 2X more likely to succeed than Waterfall projects. Moreover, Agile has replaced Waterfall as the project management approach of choice for software development, with more than 70% of companies using it (Djurovic, 2020). It has built a strong support community, such as the Agile Alliance, a non-profit organization of over 72,000 members that provides global resources to practitioners.

Unfortunately, the success of Agile in the hardware domain is much less heralded than in software. As Paul Cunningham suggested in his EE Times article “Agile Verification for SOC Design”, agility trends in hardware are perceived to be limited, likely because they are not explicitly labeled as such. Another likely reason is that Agile started as a software development concept, and the plethora of jargon (Scrum, Kanban, Extreme Programming, FDD, Agile Manifesto, etc.) can be intimidating to SoC teams that just want to tape out their projects ASAP, not adopt seemingly academic software development concepts.

So, although a typical SoC workflow consists of multiple distinct tasks with short cycles relative to the project timeline, serves internal customers who expect flawless and timely handoff, uses IPs developed by different teams, needs multiple ECO flows to tape out, requires collaboration with many teams often in different geographies, and is likely to meet with many plan changes, what we often hear is that Agile is not suitable for hardware development.

Instead of focusing on the potential benefits of Agile, such as responsiveness to change, better project success rates, better collaboration, and better-quality deliverables, critics tend to focus on the differences between software and hardware projects, as if adopting Agile were an all-or-nothing proposition. The truth of the matter is that adopting Agile is a journey that often starts small, driven by practical needs such as improving current processes that are no longer patchable.

With such a well-established track record in software development, focusing on the similarities and benefits of Agile for hardware development through incremental, practical steps seems like a good idea. The good news is that even in the seemingly sequential, waterfall-style workflows of SoC design, there are opportunities for improvement by adopting Agile principles. The following figure depicts a typical high-level workflow for a worldwide SoC development team.

The front-end design team (1) is in US-West and uses soft IPs developed by a team in the US-Central region (3). The back-end design team is in India (2) and uses hard IPs developed by a team in Europe (4). Each team’s compute resources can be fully on-premises, entirely in the cloud, or using a hybrid on-premises/cloud configuration (5).

The front-end design team’s workflow goes through distinct phases such as design specification and modeling in HDL (Hardware Description Language), followed by a verification phase where functional simulation and formal verification are performed, followed by logic synthesis and testability. This design phase typically consists of running thousands of jobs in parallel on equally numerous but relatively small files, iterating through each of the phases before finally handing over to the back-end team, likely after running out of allocated time.

Although they typically must deal with much larger file sizes, including binary files, hard IPs, macros, and memories, the back-end team also must operate in iterative distinct phases. Physical design tasks such as floorplanning, place & route, and ECO (Engineering Change Order) are iterative in nature. Likewise, parasitic extraction, DRC/LVS (Design Rule Checks/Layout Vs. Schematic), timing ECO, power and signal integrity analysis typically must be done multiple times prior to the signoff phase.

Keeping in mind that each of the IP teams likely will have its own iterative workflows, and each team can be in different locations, it’s not hard to imagine that a common design and collaboration platform with features to help make iterations and changes safe and easy, naturally scalable through cloud computing, and integrated with productivity tools to help fuse teams together can be very beneficial as a foundation of a practical Agile SoC workflow.

This is where HCM (Hardware Configuration Management) platforms come into the picture. HCM has been around for a while and many SoC design teams are already using it in their projects. However, instead of using its full potential to improve the team’s agility, HCM may be relegated to a specific task such as creating a Bill of Materials (BoM) or revision control. This is a pity, because leading HCM platforms continue to advance in response to the latest technology needs, and every new release brings features that make them increasingly suitable as a first step towards adopting Agile in an SoC workflow. Consider the following figure.

Using the same high-level SoC workflow example previously mentioned, the figure depicts an Agile-friendly HCM platform. The platform offers code, documentation, and graphics review capability through integration with Review Board, a widely popular review collaboration tool (1). Integration with open-source automation tools such as Jenkins allows the platform to easily be used to drive iterative tasks (2). It connects to SCM tools such as Git, Perforce, and Subversion to allow the use and management of IPs created in the SCM tool’s repository (3). To leverage cloud computing and enable virtually unlimited horizontal scalability, the platform is well supported by top cloud providers, with the flexibility to be deployed fully in the cloud, on-premises, or in hybrid mode (4). Finally, the platform also connects to popular issue tracking tools so that any issue found during design can be automatically reported and tracked (5).
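
As a concrete flavor of item (2), the sketch below shows a hypothetical post-checkin hook that queues a Jenkins regression build. The server URL, job name and the idea of wiring this to a design-management trigger are assumptions for illustration; only the Jenkins remote-build REST endpoint itself is standard:

```python
# Hypothetical post-checkin hook: the server URL, job name and trigger
# wiring are assumptions for illustration, not any vendor's actual
# mechanism; only Jenkins' standard remote-build endpoint is real.

import requests

JENKINS_URL = "https://jenkins.example.com"    # assumed server
JOB_NAME = "soc_smoke_regression"              # assumed job name

def trigger_regression(change_id: str, user: str, api_token: str) -> None:
    """Queue a parameterized Jenkins build for a new design checkin."""
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        params={"CHANGE_ID": change_id},
        auth=(user, api_token),    # API-token auth, no CSRF crumb needed
        timeout=30,
    )
    resp.raise_for_status()        # Jenkins returns 201 Created on success
```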

As with software, adopting Agile requires more than just tools. However, tools that automate processes, interpret them consistently for all team members, are conducive to frequent changes and workflow improvements, and encourage collaboration go a long way in helping adoption. It has been said that without the right tools, adopting Agile can feel like driving a race car on a dirt road; sooner or later the driver will blame the car.

An Agile-friendly HCM platform such as Cliosoft SOS is an ideal first step to a practical Agile SoC workflow as it will ensure a positive adoption experience. Key characteristics include:

  1. Built from the ground up for hardware designers, instead of bolted on top of an SCM built for software development purposes. A native HCM platform is built with hardware designers in mind, with key features friendly to hardware design practices such as:

A. Efficiently handles all the data types and sizes present in an SoC design, from millions of small text or binary files to GB-size files, layout blocks, composites, etc.

B. Ability to configure design phases into project views, and the flexibility to easily support branching and merging of projects and files when needed

C. Can easily tag/label data to communicate design-state readiness, and facilitate iterations during various SoC workflow phases such as design, verification, and test

D. Can group (abstract) objects into composites to allow hierarchical data management

E. Secure yet flexible work area configuration that supports checkout-based and writable work area use models under full ACL control

  2. Comprehensive CLI suitable for all phases of SoC design, and an intuitive GUI developed with simplicity and ease of use in mind, for GUI-centric design tasks and to accelerate adoption
  3. Integration with tools from major EDA vendors to further minimize user ramp-up time
  4. Integration with leading automation tools such as Jenkins to orchestrate tasks such as simulation and test, and with the Review Board code and document review collaboration tool
  5. Integration with popular issue tracking tools such as Bugzilla, Jira, Redmine and TRAC to automatically log/report issues from the design management platform
  6. Connection to other SCM tools such as Git, Perforce, and Subversion to enable working with blocks and IP stored in diverse repositories and to manage the entire design in progress
  7. Ability to scale from managing activities for a co-located team of a few people to a worldwide organization with multiple teams and hundreds of members collaborating on projects
  8. Horizontal scalability to improve compute efficiency through cloud computing, supporting hybrid and full cloud deployments with major cloud computing providers
  9. Continuous improvements, such as “sparsely populated” work areas that users can populate instantaneously while significantly reducing disk and inode usage, and the ability to programmatically configure projects, conducive to process automation

In addition to ease of adoption by SoC team members, these characteristics help ease Agile adoption by offering practical benefits for the workflow. In other words, although how they are deployed together matters in building an Agile SoC workflow, each of them will bring practical benefits to the project.

Jeff Bezos of Amazon once said, “In today’s era of volatility, there is no other way but to re-invent. The only sustainable advantage you can have over others is agility, that’s it…” This line of thought resulted in Amazon’s Day 1 principles on being agile, which have guided the company for over two decades. Even though Amazon owns cloud infrastructure, it took over three years after the 2015 acquisition of Annapurna Labs to reach full SoC development in the cloud; this transformation did not happen overnight.

Adopting Agile for SoC design may seem daunting. However, as with everything else, taking the first step is critical. A practical Agile SoC workflow can be achieved using a leading HCM platform such as Cliosoft SOS by deploying pertinent features and enhanced by regularly upgrading to the latest release to benefit from improvements. So why wait?

Also read:

Cliosoft and Microsoft to Collaborate on the RAMP Program

DAC 2021 – Cliosoft Overview

Cliosoft Webinar: What’s Needed for Next Generation IP-Based Digital Design


The Roots Of Silicon Valley

The Roots Of Silicon Valley
by Malcolm Penn on 02-09-2022 at 6:00 am

The Roots of Silicon Valley

 

The transistor was successfully demonstrated on December 23, 1947, at Bell Laboratories in Murray Hill, New Jersey, the research arm of American Telephone and Telegraph (AT&T).  The three individuals credited with its invention were William (Bill) Shockley Jr., the department head and group leader, John Bardeen and Walter Brattain.  Shockley continued to work on the development at Bell Labs until 1955 when, having foreseen the transistor’s potential and rather than continue to work for a salary, he quit to set up the world’s first semiconductor company, becoming the industry’s de facto father.

The Men … The Legend … The Legacy
Graphic Attribution: Dr Jeff Software
“It wasn’t scary, when you are in your late twenties, you don’t know enough to be scared. We just did it. We just knew what we had to do, and we did it.” – Jay Last

 

William Shockley Jr.  Shockley was born in London, England on February 13, 1910, the son of William Hillman Shockley, a mining engineer born in Massachusetts, and his wife, Mary (née Bradford), who had also been engaged in mining as a deputy mineral surveyor in Nevada.

The family returned to the United States in 1913, setting up home in Palo Alto, California, when Mary joined the large Mining Engineering Department faculty at Stanford University.  But for this twist of fate, given that both Shockley’s parents were mining engineers, the family could have easily settled in Colorado, Nevada or West Virginia instead.

In the event, William Jr. was educated in California, taking his BSc degree at the California Institute of Technology (CalTech) in 1932, before moving to the East Coast to study at the prestigious Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts under Professor J.C. Slater.  He obtained his PhD there in 1936, submitting a thesis on the energy band structure of sodium chloride, and joined Bell Telephone Laboratories, where he remained until his resignation in 1955.

On leaving Bell Labs, Shockley moved back to Palo Alto, where his sick and aging mother still resided, initially as a visiting professor at Stanford University but with the vision to establish his own semiconductor firm making transistors and four-layer (Shockley) diodes.  Had he decided instead to stay on the East Coast, close to Bell Labs (New Jersey), MIT (Boston) or IBM (Burlington), then Silicon Valley might well have developed on the East rather than the West Coast of America, with almost certainly a very different DNA and personality.

On moving back to Palo Alto, Shockley found a sponsor in Raytheon, but Raytheon discontinued the project after a month.  Undeterred, Shockley, who had been one of Arnold Beckman’s students at CalTech, turned to him for advice on how to raise the one million dollars of seed money needed.  Beckman was an American chemist, inventor, entrepreneur, founder and CEO of the hugely successful Beckman Instruments, and by now also a budding financier.  Believing that Shockley’s new inventions would be beneficial to his own company, rather than pass the opportunity to his competitors he agreed to create and fund a laboratory, under the condition that its discoveries be brought to mass production within two years.

He and Shockley signed a letter of intent to create the Shockley Semi-Conductor Laboratory (the hyphenation was common practice back then) as a subsidiary of Beckman Instruments, under William Shockley’s direction. The new group would specialise in semiconductors, beginning with the automated production of diffused-base transistors.  Shockley’s original plan was to establish the laboratory in Palo Alto, close to his mother’s home, but this changed when Frederick Terman, provost at Stanford University, offered him space in Stanford’s new industrial park at 391 San Antonio Road, Mountain View.  Beckman bought licenses on all necessary patents for US$25,000 and the firm was launched in February 1956.

The seeds for Stanford’s hi-tech relationship with industry were sown much earlier, in 1936, when Sigurd and Russell Varian, together with William Hansen, Russell’s former college roommate and by then a professor at Stanford, approached David Webster, head of Stanford’s Physics Department, for help in developing the Varian brothers’ idea of using radio-based microwaves for aircraft detection in poor weather conditions and at night.  Webster agreed to hire them to work at the University in exchange for lab space, supplies, and half the royalties from any patents they obtained.  The group’s work eventually led to the development of the Klystron in August 1937, subsequently adopted by Sperry, and a decade later, in 1948, to the formation of Varian Associates.

In 1938, shortly after the Klystron’s development, Bill Hewlett and David Packard, who had graduated three years earlier with degrees in electrical engineering from Stanford University, formed Hewlett Packard in their home garage at 367 Addison Avenue in Palo Alto under the mentorship of Stanford professor Frederick Terman.  In some circles this garage has been celebrated as the “Birthplace of Silicon Valley” which, whilst not wishing to undermine the importance of Hewlett Packard’s contribution to the industry, understates both the role Stanford played in creating the catalytic environment for Californian hi-tech ventures and the explosive role Shockley Semiconductors would subsequently play.  From a semiconductor perspective, 391 San Antonio Road in Mountain View is more appropriately the real Silicon Valley birthplace, as recognized by IEEE.

Shockley Semiconductors.  Given his own high IQ, Shockley embarked on an ambitious hiring campaign, seeking to employ the smartest and brightest scientists available; not just PhDs, but PhDs from the finest universities at the very top of their class, bringing together a veritable brain trust of brilliant people.  The hiring process was not that straightforward, however, given that the majority of electronics-related companies and professionals were at that time based on the East Coast, thus requiring ads to be posted in The New York Times and the New York Herald Tribune.  He did initially try to recruit from his Bell Labs peers but, knowing his reputation as a difficult manager, no-one would join him.

Early respondents included Sheldon Roberts of Dow Chemical, Robert Noyce of Philco, and Jay Last, a former intern of Beckman Instruments.  Each candidate was required to pass a psychological test followed by an interview.  Julius Blank, Jay Last, Gordon Moore, Robert Noyce, and Sheldon Roberts started working in the April-May timeframe, and Eugene Kleiner, Victor Grinich, and Jean Hoerni during the summer; by September 1956, the lab had 32 employees, including Shockley.

Although never medically diagnosed by psychiatrists, Shockley’s state of mind has been characterised as paranoid or autistic.  All phone calls were recorded, and staff were not allowed to share their results with each other, which was hardly feasible since they all worked in one small building.  At one point he sent the entire lab for a lie detector test, which everyone refused.  He also lacked experience in business and industrial management, and unilaterally decided that the lab would research an invention of his own, the four-layer diode, rather than develop the diffused silicon transistor that he and Beckman had agreed upon.

Barely six months in, discontent boiled over, leading seven of the employees to voice their concerns to Arnold Beckman, not to get rid of Shockley but to put a more rational boss between him and them.  The seven in question were Julius Blank, Victor Grinich, Jean Hoerni, Eugene Kleiner, Jay Last, Gordon Moore and Sheldon Roberts.  Their request might well have been granted had Shockley’s Nobel prize not been announced in November, fanning the flames of Shockley’s fame and already inflated ego.  Rather than rock the boat, Beckman chose not to interfere, telling the seven instead to accept things as they were.  At that time, Noyce and Moore stood on different sides of the argument, with Moore leading the dissidents and Noyce standing behind Shockley, trying his best to resolve conflicts.  Shockley appreciated that and considered Noyce his sole supporter in the group, but the team started to lose members, beginning with Jones, a technologist, who left in January 1957 due to a conflict with Grinich and Hoerni.

In March 1957, Kleiner, who was also beyond Shockley’s suspicions, asked permission ostensibly to visit an exhibition in Los Angeles.  Instead, he flew to New York to seek investors for a new company that he and the six others were by now contemplating.  Kleiner’s father, who was involved in investment banking, introduced Eugene to his broker, who in turn introduced Kleiner to Arthur Rock at Hayden Stone & Co.  The team’s original idea was to join an existing company, and Rock, who had already developed a side interest in investing in new companies, what today would be called startups, together with Alfred Coyle, also from Hayden Stone, took a strong interest in Kleiner’s proposition of a seven-strong, pre-packaged team, believing that trainees of a Nobel laureate were destined to succeed.  Finding prospective investors, however, proved to be very difficult, given that the US electronics industry was at that time concentrated on the East Coast and the California Group, as the seven became known, wanted to stay near Palo Alto.  Rock presented the group to 35 prospective employers, but no one was interested.

With the task of finding a backer proving hard, as a last resort on May 29, 1957, the group, led by Moore, presented Arnold Beckman with an ultimatum – solve the ‘Shockley problem’ or they would leave.  Moore suggested finding a professor position for Shockley and replacing him in the lab with a professional manager.  Beckman again refused, believing that Shockley could still improve the situation, later regretting this decision.

In June 1957, Beckman finally put a manager between Shockley and the team but by then it was too late as the seven were now emotionally committed to leave and embark on Plan B, namely creating their own startup.  Recognising, however, that they were followers not leaders, the group persuaded Bob Noyce, a born leader, to join them.  The now enlarged California Group met up with Rock and Coyle at the Hill Hotel in California and these ten people became the core of a new company.  Coyle, a ruddy-faced Irishman with a fondness for ceremony, pulled out 10 newly minted US$1 bills and laid them carefully on the table.  “Each of us should sign every bill”, he said.  “These dollar bills covered with signatures would be our contracts with each other.”

In August 1957, in a final throw of the funding dice, Rock and Coyle met with the inventor and businessman Sherman Fairchild, founder of Fairchild Aircraft and Fairchild Camera & Instrument Co.  Sherman, son of a rich entrepreneurial father who had made his fortune as a big investor in IBM, was a bright and equally entrepreneurial engineer who had amassed a small fortune during the war selling cameras for reconnaissance planes and suchlike.  Given that he had already developed a curious interest in semiconductors, Sherman sent Rock to meet his deputy, Richard Hodgson, who, risking his reputation, accepted Rock’s offer and within a few weeks all the paperwork and funding for the new company, Fairchild Semiconductor, had been sorted.

The capital was divided into 1,325 shares with each member of the eight receiving 100 shares, 225 shares went to Hayden, Stone & Co and the remaining 300 shares were held back in reserve.  Fairchild provided a loan of US$1.38 million and, to secure the loan, the eight gave Fairchild the voting rights on their shares with the right to buy them back at a fixed total price of US$3 million.

The eight left Shockley on September 18, 1957, and Fairchild Semiconductor was born.  Whilst there is no documentary evidence that he ever used the term, the group quickly became known as ‘The Traitorous Eight’.  Shockley never understood the reasons for their defection, considering it to have been a betrayal, and allegedly never talked to Noyce or the others again.

With the help of a new team, Shockley brought his own diode to mass production the following year but, by then, time had been lost and competitors were already close to the development of integrated circuits (ICs).  In April 1960, Beckman sold the unprofitable Shockley Labs to the Clevite Transistor Company based in Waltham, Mass, bringing his association with the semiconductor industry to an end.

On July 23, 1961, Shockley was seriously injured in a car crash and, after recovery, left the company and returned to teaching at Stanford.  Four years later, Clevite was acquired by ITT who, in 1969, decided to move the Labs to West Palm Beach, Florida where it had an already established semiconductor plant.  When the staff refused to move, the lab ceased to exist.

Fairchild Semiconductors.  Founded in intrigue, and setting up shop at 844 East Charleston Road on the border of Mountain View and Palo Alto, Fairchild has a long history of innovation, having produced some of the most significant technologies of the second half of the twentieth century.  It quickly grew into one of the top semiconductor industry leaders, spurred on by the successful development of the silicon planar transistor.

Transistors, however, were already starting to develop their own ‘tyranny of numbers’ problem.  If you wanted to make a simple flip-flop, it needed four transistors but around ten wires to connect them up.  If two flip-flops were then interconnected, this needed not only twice the number of transistors and wires but also four or five additional wires to connect the two together.  So, four transistors needed ten wires, eight needed 25, and 16 needed 60-70 wires.  In other words, as the transistor count increased linearly, the number of connections grew super-linearly, scaling roughly as the transistor count raised to a power between one and two.
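
A quick back-of-the-envelope check with the figures quoted above (a sketch, taking 65 as the midpoint of the 60-70 wires quoted for 16 transistors) confirms the growth exponent:

```python
# Back-of-the-envelope check of the wiring growth quoted above:
# 4 transistors -> 10 wires, 8 -> 25, 16 -> ~65 (midpoint of 60-70).
from math import log

points = [(4, 10), (8, 25), (16, 65)]

# If wires grow as transistors**k, then k = log(w2/w1) / log(n2/n1).
for (n1, w1), (n2, w2) in zip(points, points[1:]):
    k = log(w2 / w1) / log(n2 / n1)
    print(f"{n1} -> {n2} transistors: growth exponent k = {k:.2f}")
# Prints k of roughly 1.3-1.4: faster than linear, slower than quadratic.
```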

Whilst transistors were relatively easy to mass produce, connections were much more difficult, as the wires had to be soldered together by hand and took up a lot of space.  The industry’s desire to build bigger and more complex systems was being held back by the difficulty of wiring everything together.  Up until now, no-one had really paid much attention to wiring, but connections were soon to become public enemy number one, driving the need for the integrated circuit.

Jack Kilby, at the rival Dallas-based semiconductor firm Texas Instruments, demonstrated in 1958 the possibility of building two transistors in the same piece of semiconductor material, but his transistors were wire-bonded together, with no practical solution for the connection problem at the time.  That problem was solved by Bob Noyce with the help of Jean Hoerni (who provided the technique) and Jay Last (who eventually made it work).

Jean Hoerni had been working on a solution to transistors going bad because the transistor surface inside the package was unprotected, allowing particles to contaminate and degrade the device over time.  His solution was to protect the transistor surface with a passivation (protection) layer of silicon dioxide (SiO2), grown or deposited on top of the structure.  He then further realized that, rather than depositing the emitter and base regions on top of the semiconductor substrate, as with the then-current Mesa process, if the surface was completely covered with silicon dioxide the emitter and base areas could be selectively diffused.  The net result was a much flatter surface and a more readily automated process.

This (Planar) technology, announced in January 1959, would become the second most important invention in the history of microelectronics, after the invention of the transistor, and laid the blueprint for all future integrated circuits.  But in 1959 it went virtually unnoticed, except by Noyce, who recognised that such a layer of glass was an insulator, opening the door for the connecting wires to be laid on top and patterned just like a printed circuit board.

When Noyce filed his patent in April 1959, it triggered a corporate patent battle between Texas Instruments and Fairchild, though not between Kilby and Noyce, who were friends with high regard and respect for each other.  Texas Instruments claimed that Kilby’s patent wording ‘electrically conducting material such as gold laid down on the insulating material to make the necessary connections’ was a pre-existing description of Noyce’s patent claims and that Kilby had only used wire bonds as the quickest way to a prototype.  Had this assertion been upheld by the court, Noyce’s later-dated patent would have been declared invalid.  As it transpired, Texas Instruments lost the argument, both patents were declared valid, and a cross-licensing agreement was reached between the two firms.

Kilby, by nature, was a very humble person and, even though his patent pre-dated Noyce’s, he generously announced publicly that he felt both he and Noyce jointly invented the integrated circuit, contrary to Texas Instruments’ management position.

In 1959, Sherman Fairchild exercised his right to purchase the founding members’ shares, an event that turned former entrepreneurs and partners into ordinary employees, thereby destroying the team spirit and sowing the seeds of future tension.

There was still, however, one big problem yet to solve before integrated circuits could become a commercial reality, namely isolation; how to stop adjacent transistors interfering with each other.  Noyce delegated this thorny problem to Jay Last, who was running the R&D group at the time.  It was no easy task, taking some eighteen months before the first working device was produced on September 27, 1960.

Development also met with strong internal political resistance: Tom Bay, then Vice President of Marketing at Fairchild, accused Last of squandering money and, in November 1960, demanded termination of the project, with the money saved to be spent on transistor development instead.  Moore refused to help, and Noyce declined to discuss the matter, leaving Last to fight the battle on his own.  Timing-wise, the conflict had flared up barely a month after Fairchild had announced the transition of its transistor production from Mesa to Planar technology, but Moore had refused to credit this achievement to Hoerni, fanning the flames of the already developing tensions between the eight founding partners.

Last continued to develop six more parts but these conflicts were the last straw and, flushed with their planar and isolation process success, he and Hoerni left Fairchild on January 31, 1961, to set up Amelco in Mountain View, California, with financing from Teledyne Corporation arranged by Arthur Rock.  Their plan was to build integrated circuits in support of Teledyne’s military business.  Kleiner and Roberts joined the pair a few weeks later.  With this high-level defection, the eight founding members had been split into two groups of four.

With just seven parts, Fairchild announced the world’s first standard logic family of integrated circuits, direct-coupled transistor logic (DCTL), in March 1961, based on Hoerni and Last’s resistor–transistor logic (RTL) planar process, under the µLogic trademark.  One of these devices, the µL903 3-input NOR gate, became the basic building block of the Apollo guidance computer.  Designed by MIT and built by Raytheon, the computer needed 5,000 devices and was the first major integrated circuit application.

Fairchild’s lead, however, was to prove short-lived as David Allison, Lionel Kattner and some other technologists also left at around the same time as Hoerni and Last to start up Signetics (Signal Network Electronics).  One year later, in 1962, the firm announced a much-improved, second-generation, logic family, the SE100 Series diode-transistor-logic (DTL).  Fairchild quickly counter-attacked with their own DTL family, the 930 series, undercutting Signetics and rendering them unable to compete against Fairchild’s marketing strength.

Signetics’ most famous legacy part was the NE555 timer.  Designed in 1971, the 555, along with the ubiquitous TTL 7400 Quad 2-input NAND Gate, was probably the most popular integrated circuit ever sold.  Signetics was subsequently bought by Philips in 1975.

Early integrated circuits were housed mainly in either TO-5 or TO-18 adapted metal can transistor packages.  These worked fine for 3-lead devices but scaling them up to provide more and more connections proved to be limiting, given the can could only be made so large and the radial leads could only be packed so tight.  Ten leads were about the practical limit and that would not support the more complicated integrated circuits in the pipeline.  It fell to Fairchild’s Don Forbes, Rex Rice, and Bryant “Buck” Rogers to provide a solution to this problem in 1964, via the invention of the now familiar dual-in-line package (DIP), the little oblong ‘millipedes’ that would crawl across circuit boards for the next 40 years.

The idea for the package came from the ceramic flatpack design devised in 1962 by Yung Tao, a Texas Instruments engineer, as an industry standard for surface-mount integrated circuits for the US military.  This concept was adapted for through-hole, rather than surface mounting, with an eye for ease of handling for electronics manufacturers, ease of PCB layout design getting power to the ever-increasing number of integrated circuits and routing their signals around the board, and low cost, given the growing consumer integrated circuit market.  The 0.1″ (2.54 mm) package pin spacing left plenty of room for PCB tracks to be routed between pins and the 0.3″ (7.62 mm) spacing between rows of pins offered room for other tracks.

Fairchild launched the dual in-line package in 1965, originally in ceramic, but it took off with a vengeance when Texas Instruments introduced a plastic resin version, driving the unit cost down dramatically.  As a result of great design, low cost and support for increasingly complex integrated circuits, the plastic dual-in-line package became the mainstay industry standard, with its basic 14-pin design extended to support more leads, up to 64 pins in a 0.6” wide form factor, and more complex integrated circuits.  It was eventually surpassed by second-generation surface mount devices in the late 2000s as integrated circuit complexity and pin count requirements surpassed the dual-in-line package’s capability.

In the early days, with as many as 15,000 die on a single wafer, wafer fab costs were not as significant as assembly and test, hence the need to find ways to reduce labour costs as a matter of survival.  After some early failed ventures in the US, e.g. in Shiprock, New Mexico, at a Navajo Indian reservation, and early attempts at automation, it was the Far East that ultimately proved successful, in what was also Fairchild’s third innovative move, when Bob Noyce, who had an investment position in a small radio company in Hong Kong, suggested to Charlie Sporck that he and Jerry Levine take a look at Hong Kong.

Attracted by the low labour costs, non-unionised facilities, low-cost western-educated technicians, good engineering schools and favourable government and tax incentives, Fairchild set up the industry’s first Far East assembly and test operation in 1963, in an old rubber shoe factory on the Kowloon side of Hong Kong, under the direction of Ernie Freiburg and run by Norm Peterson, previously manager of Fairchild’s crystal growing operation.  Hong Kong also had the added benefit that any fall-out from testing could be sold to the local toy industry.  Other semiconductor manufacturers subsequently followed Fairchild to the Far East, mostly to Malaysia.

Blank, Grinich, Moore and Noyce stayed with Fairchild throughout most of the 1960s but in March 1968, Moore and Noyce decided to leave, turning to Arthur Rock for funding, setting up NM Electronics in the summer of 1968.  One year later, NM Electronics bought the trade name rights from the hotel chain Intelco and took the name of Intel.

Grinich also left in 1968, first for a short sabbatical and then to teach at Berkeley and Stanford, where he published the first comprehensive textbook on integrated circuits.  But he never really lost the startup buzz and quit academia in 1985 to co-found and run several new companies, including Escort Memory Systems developing industrial RFID tags.

Blank, the last of the Eight, eventually left Fairchild in 1969 to become a consultant to new startup companies.  Having also grown tired of this, seeking a more hands-on role, he too quit and co-founded Xicor in 1978 to make EEPROMs.

As for the original four defectors, Hoerni headed Amelco until the summer of 1963 when, after a conflict with the Teledyne owners, he left for Union Carbide Electronics.  In July 1967, supported by the watch company Société Suisse pour l’Industrie Horlogère (SSIH), the predecessor of Swatch Group, he went on to found Intersil, pioneering the market for low-power custom CMOS circuits, some of which were developed for Seiko, kick-starting the Japanese electronic watch industry.

Hoerni then went on to set up a European version of Intersil, called Eurosil, financed in great part by SSIH’s desire to build a fab in Munich, not far from the Swiss watch manufacturing centres.  Eurosil was eventually sold to Diehl in late 1975, and Hoerni left in 1980, returning to the West Coast to form a new startup called Telmos, producing specialised semicustom products covering the linear interface between sensors and the microprocessor/digital-logic core, plus high-voltage, high-current drivers.

Last continued at Amelco, taking on a twelve-year tenure as Vice President of Technology at Teledyne, Amelco’s parent, before founding Hillcrest Press, specialising in art books, in 1982.  Roberts also left to set up his own business and in 1973-1987 served as a trustee of the Rensselaer Polytechnic Institute.

That left just Kleiner, who also left to pursue a career financing the many early-stage entrepreneurial firms that were starting to spring up on the West Coast of America, teaming up with Thomas (Tom) Perkins, head of R&D at Hewlett Packard, to form Kleiner Perkins, with an office on Sand Hill Road, the area that would become the venture capitalists’ home.  Thus, whilst Arthur Rock and Hayden Stone could arguably be credited with setting up the first venture capital firms, Kleiner Perkins was the first venture capitalist to have a physical office in Silicon Valley.  The firm would go on to fund Amazon, Compaq, Genentech, Intuit, Lotus, Macromedia, Netscape, Sun Microsystems, Symantec and dozens of other companies.

As for today, Amelco, the original Fairchild spinout, after numerous mergers, acquisitions and renaming, no longer exists, but its remnant IP has survived and is now owned by Microchip.

Silicon Valley.  Last, Roberts, Kleiner and Hoerni’s collective decision to leave and compete against Fairchild, just over three years after the company was founded, was the first of what would be many subsequent defections and spinouts, eventually known as ‘Fairchildren’, directly or indirectly creating dozens of corporations, including Intel and AMD.  In doing so, Fairchild sowed the seeds of innovation across multiple companies in an area that would eventually become known as Silicon Valley.

Local pubs, restaurants, and social gathering hot spots played a key role in the ‘work hard, play hard’ Silicon Valley ethos at the time, where industry folk would head after work to have a drink, gossip, brag, trade war stories, talk shop, exchange ideas, change jobs, party and develop new business ventures.  Key venues included the Wagon Wheel, Lion & Compass and Ricky’s, along with the Peppermill and Sunnyvale Hilton.

Stanford University, or more accurately Frederick Terman, also played a huge catalytic role, propelled by his farsighted vision for academia to develop a new relationship with the science and technology-based industries dependent on brainpower as their main resource.  More than that, he further recognised the need to develop local industry, not just by building a community of interest between the faculties and industry but also by encouraging new enterprises, what today we would call startups, to cluster around the university via the provision of low-cost premises, often rent-free other than the local property taxes.

Whilst it is unclear who came up with the Silicon Valley name, Don Hoefler, a technology news reporter for the industry tabloid Electronic News, is credited with popularising the name in a column he wrote in 1971 about the valley’s semiconductor industry.  He also played a fundamental role in promoting the area’s innovative qualities and was one of the first writers to describe the Northern Californian technology industry as a community.

The Fairchild Legacy.  Throughout the first half of the 1960s, Fairchild was the undisputed semiconductor leader, setting the bar for others across all aspects of the industry, be it design, technology, production or sales.  Early sales and marketing efforts had been relatively small and military-oriented but that changed in 1961 when Noyce and Bay recruited a group of bright and aggressive salesmen and marketing specialists including Jerry Sanders III and Floyd Kvamme.  These two newcomers transformed Fairchild’s sales and marketing departments into one of the most effective in the industry.

One of the industry’s pivotal moments was Fairchild’s dramatic entry into the consumer TV market.  Attracted by the high-volume potential, Jerry Sanders wanted to replace the then tube (valve) CRT driver with a transistor, but the target price needed was US$1.50.  Transistors at that time were selling to the military for US$150.  In what can only be regarded as a massive leap of faith, Noyce’s instruction to Sanders was “Go take the order Jerry, we’ll figure out how to do it later.  Maybe we’ll have to build it in Hong Kong and put it in plastic, but right now let’s just do it.”

In 1963, Fairchild hired Robert (Bob) Widlar to design analog operational amplifiers using Fairchild’s digital IC process.  Despite its unsuitability, Widlar, in partnership with process engineer Dave Talbert, succeeded, going on to adapt the process to produce two revolutionary parts, the world’s first monolithic operational amplifiers: the µA702 in 1964 and the µA709 in 1965.  With these two parts, Fairchild now dominated both the analog and digital integrated circuit markets, first with its µLogic RTL family and then with its 930 series DTL, and in April 1965 Gordon Moore famously published his article ‘Cramming More Components onto Integrated Circuits’ in Electronics Magazine.  Later to be known as Moore’s Law, this was basically an extrapolation of four data points on a graph of IC transistor density over time.

Fairchild’s digital technology lead was, however, being overtaken by Texas Instruments who, having fallen behind in RTL and DTL, had decided to copy Sylvania’s Universal High-Level Logic (SUHL) transistor–transistor logic (TTL) circuit design and adapt it to its own process to counter the announcement of Fairchild’s third generation 9000 series TTL logic.

Headed up by Stewart Carrell, Texas Instruments set up a ‘design factory’ that could churn out several new designs a week, mostly by guessing the W/L ratios, laying out the circuits, correcting them if the prototypes did not work, and zeroing in on a specification that manufacturing could support.  The design factory was supported by an optical photomask generator, as opposed to manual rubylith layout, that could create a photographic chip layout very quickly, and a ‘quick-turn’ fab line to rapidly turn out parts.

To strengthen their attack, Texas Instruments masterminded a marketing coup over Fairchild by persuading other semiconductor firms to second source its TTL rather than Fairchild’s competing product.  In this one masterly move, Texas Instruments established its 74 Series version of TTL as the de facto third generation industry standard, leaving Sylvania’s SUHL, Fairchild’s 9000 series and other proprietary alternatives behind.  It then proceeded to masterfully neutralise the entire second-source movement by providing every engineer with a copy of its ubiquitous orange book (The TTL Data Book) and running its twice-yearly ‘must attend’ TTL seminars in all major cities, not just in the US but globally, supported by an aggressive new product introduction programme.

By always ensuring any bill of materials (BOM) included at least one TTL part that was only available from it, Texas Instruments was able to stay one step ahead of the competition and ‘own’ the TTL market for the best part of 30 years, until standard logic eventually fell victim to the 1980’s Application Specific Integrated Circuit (ASIC) revolution.

Meanwhile, starved of CapEx, Noyce found his position on Fairchild’s executive staff consistently compromised by Sherman Fairchild’s corporate interference and lack of company support.  Many of the Fairchild management team were increasingly upset by Sherman’s corporate focus on unprofitable ventures at the expense of the semiconductor division.  The firm then suffered its ultimate humiliation in July 1967 when the semiconductor industry fell victim to the first of its legendary recessions, during which the company both became unprofitable and was forced to concede its technology leadership to Texas Instruments.

Charles Sporck, Noyce’s Operations Manager, reputed to run the tightest operation in the world, together with Pierre Lamond left in early 1968 to join the already departed Widlar and Talbert at National Semiconductor, both having grown disillusioned with the way things were going.  This triggered Noyce and Moore’s departure later that same year and was to prove a pivotal moment in the eventual demise of the firm.  The collective exodus of Sporck, Noyce and Moore, along with so many other iconic executives, signalled the end of an era and prompted Sherman Fairchild to bring in a new management team, led by C. Lester Hogan, then vice president of Motorola Semiconductor.  Of the eight original founders only Julius Blank now remained, although he too would be gone within a year.

Hogan’s arrival, and the subsequent displacement of Fairchild managers, demoralised the firm even further, prompting a further exodus of employees to start up a host of new companies.  Nicknamed ‘Hogan’s Heroes’, the ultra-conservative Motorola executives immediately clashed with Jerry Sanders III who, with his boisterous flamboyant style, was responsible for Fairchild’s sales.

Whilst initially slow to respond to the changing market, Fairchild under Sanders’ direction had embarked on a strategy of leapfrogging Texas Instruments by focusing on more complex large scale integration (LSI) parts of 30-plus gates, instead of simpler small and medium scale (SSI/MSI) devices of under 30 gates.  The strategy was proving popular and successful with engineers, forcing Texas Instruments to recognize the threat and copy all of Fairchild’s 9300 series parts under 74 series numbers; for example, the 9300 became the 74195 and the 9341 the 74181.

Sanders’ whole strategy collapsed, however, when Hogan capitulated to Ken Olsen, founder and CEO of Digital Equipment Corporation (DEC) and a key Fairchild customer.  Olsen wanted Fairchild to give up on its proprietary TTL technology and second source Texas Instruments’ 74 Series TTL instead.  Against Sanders’ wishes, Hogan agreed, signing the death warrant for Fairchild’s TTL strategy.  Sanders was, understandably, absolutely livid.  “You’ve just killed the company, Ken”, Sanders fumed.

Hogan’s betrayal was the last straw for Sanders and he, together with a group of Fairchild engineers, quit to start Advanced Micro Devices (AMD).  With Sanders installed as President, one of his first moves was to declare the mantra ‘people first, revenues and profits will follow’ and give every employee stock options in the new company, an innovation at the time.

In a subsequent boardroom coup, Wilf Corrigan, who had moved with Lester Hogan as director of Discrete Product Groups, succeeded Hogan as President and CEO in 1974, but Fairchild’s fate continued to decline, dropping to sixth place in the semiconductor industry by the end of the decade.

In the summer of 1979, with the semiconductor market riding high on its fourth year of successive double-digit growth, Fairchild fell victim to a hostile takeover bid from Gould, a major US producer of electrical and electronic equipment, hell-bent on a diversification strategy.

Unable to fight off the buy-out, Corrigan elected instead to seek the best price for the shareholders, and the firm was eventually sold to Schlumberger, a French oil services company, for US$350 million, or US$66 per share, versus Gould’s US$54 (later increased to US$57) offer.

Schlumberger, however, proved unable to inject vitality into the deteriorating company and it continued to lose money.  Corrigan departed in February 1980 and, once his one-year non-compete severance obligation was over, he and Rob Walker co-founded ASIC pioneer LSI Logic Corporation in 1981.

Schlumberger initially replaced Corrigan with one of its own managers, Tom Roberts, who unsuccessfully tried to run the firm like a heavy equipment company.  Two years later, in 1983, the firm finally called in Donald W. Brooks, a Texas Instruments veteran, to reverse its decline, but by then Fairchild Semiconductor was a legend in trouble, lagging in leading-edge technologies and losing money even as the rest of the semiconductor industry was booming.

The firm was eventually sold to National Semiconductor in 1987 for one-third of the price paid by Schlumberger eight years earlier.  With the Fairchild brand now dead, Brooks left, and the company was back in the hands of former Fairchild General Manager, Charlie Sporck.

Kirk Pond became COO at National Semiconductor in 1994 where he led the successful management buyout in 1997.  With the Fairchild name revived, Pond continued as President and CEO until 2005, when he became Chairman, before retiring a year later in 2006.

He was succeeded by Mark Thompson until the firm was acquired by ON Semiconductor in September 2016.  ON Semiconductor was the discrete, standard analog and standard logic device division spun out from Motorola’s Semiconductor Components Group in 1999.

The Silicon Valley Legacy.  The three key inventions that changed the world in the 1960s were the integrated circuit, startup fever and venture capital.  No doubt these inventions would have happened somewhere else in the world, at some other time, by somebody else, but the fact they all occurred within a short space of time, in the Palo Alto region, driven by the entrepreneurial spirit of the traitorous eight and the many other key contributors, along with the Stanford University ethos, is what made Silicon Valley so special and unique.

But what if Shockley’s parents had moved to Colorado, Nevada or West Virginia to pursue their mining careers on their return to the United States from London, rather than to Palo Alto?  Would Silicon Valley have developed there instead?

What if Shockley had chosen to set up Shockley Semiconductors on the East Coast, where there was an already well-developed infrastructure, rather than in Palo Alto, which had none?  From an infrastructure perspective, the East Coast was far better positioned to have hosted Silicon Valley.

What if the Russians, Europeans or Japanese had invented the integrated circuit first?  These regions were known to be working on this at the time.  Could Silicon Valley have sprung up in the USSR, Europe or Tokyo instead?

What if Frederick Terman had not had the foresight to develop a community of interest between Stanford’s faculties and industry, and to encourage new enterprises to cluster around the university?

What would the world look like today had any of these scenarios happened?

Clearly, fate played a role in bringing Shockley and semiconductors to Palo Alto, but the West Coast proved a far more fertile environment for the risk-taking entrepreneurial spirit of the traitorous eight and their peers than the more risk-averse and mature East Coast business and financial community.

All eight of the original founders eventually left Fairchild and went on to become serial entrepreneurs, co-founding between them a wide variety of new startups, both in semiconductors and in venture capital.  They were surrounded by brilliant engineers who wanted to start new companies, prove themselves and change the world, stoking the startup-fever boom, with Shockley Semiconductor as the embryo, Fairchild Semiconductor as the incubator and the Palo Alto infrastructure as the catalyst.  The rest, as they say, is history.

The Lunch That Changed The World

On February 14, 1956, Arnold O. Beckman and William B. Shockley announced to a luncheon audience of scientists, educators, and the press at San Francisco’s St. Francis Hotel that they were founding Shockley Semiconductor Laboratory in Palo Alto, California.

The entrepreneurial spirit of the Valley, and the rise and fall of Fairchild, is best summed up by the following comment from Rob Walker, co-founder of LSI Logic: “It’s amazing what a few dedicated people can accomplish when they have clear goals and a minimum of corporate bullshit.”

 

Malcolm Penn

22 Dec 2021

With acknowledgment and gratitude to my many industry colleagues, old and new, who proofed early drafts and offered much-appreciated additional insights, fact checks and clarifications.  Happy 74th birthday!

Also read:

Future of Semiconductor Design: 2022 Predictions and Trends

The Semiconductor Ecosystem Explained

Are We Headed for a Semiconductor Crash?