I have written a lot of articles looking at leading-edge processes and comparing process density. One comment I often get is that the process density numbers I present do not correlate with the actual transistor density of released products. Many people want to draw conclusions about Intel’s processes versus TSMC’s processes based on Apple cell phone application processors versus Intel microprocessors, but this is not a valid comparison! In this article I will review the metrics I use for transistor density, why I use them, and why comparing transistor density on product designs is not valid.
The first comment I want to make is that I am not a circuit designer, so I am not familiar with all of the decisions that go into creating a design that may impact the transistor density of the final product, but I do understand the differences in density that can occur across a given process.
Logic designs are made up of standard cells, and the size of a standard cell is driven by four parameters: metal two pitch (M2P), track height (TH), contacted poly pitch (CPP), and single diffusion break (SDB) versus double diffusion break (DDB).
The height of a standard cell is the metal two pitch (M2P) multiplied by the number of tracks (track height, or TH). In recent years, in order to continue to shrink standard cells, TH has been reduced while simultaneously reducing M2P, as part of something called design technology co-optimization (DTCO). One key aspect of reducing TH is that at low track heights the number of fins per transistor must be reduced due to space constraints; this is called fin depopulation. Fewer fins per transistor means less drive current from each transistor unless something else compensates, such as increasing fin height, hence the “co-optimization” in DTCO.
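The height relationship above is simple multiplication. A minimal sketch (the 40 nm M2P below is an assumed, 7nm-class illustrative value, not vendor data):

```python
# Standard cell height = metal-2 pitch (M2P) x number of tracks (TH).
# The M2P value is an assumed 7nm-class illustrative number, not vendor data.
M2P_NM = 40  # assumed metal-2 pitch in nanometers

def cell_height_nm(track_height: int, m2p_nm: float = M2P_NM) -> float:
    """Height of a standard cell in nanometers."""
    return m2p_nm * track_height

print(cell_height_nm(6))  # 6-track cell -> 240
print(cell_height_nm(9))  # 9-track cell -> 360
```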
The width of a standard cell depends on the contacted poly pitch (CPP), on whether the process supports single diffusion break (SDB) or double diffusion break (DDB), and on the type of cell. For example, a NAND gate is 3 CPPs wide with an SDB and 4 CPPs wide with a DDB. On the other hand, a scanned flip flop (SFF) cell might be something like 19 CPPs wide with an SDB and 20 CPPs wide with a DDB (this can vary with SFF designs). As you can see, the choice of SDB versus DDB has a greater effect on a NAND cell’s size than on an SFF cell’s.
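A quick sketch makes the SDB-versus-DDB point concrete. The CPP counts are from the text above; the 57 nm CPP is an assumed 7nm-class illustrative value:

```python
# Cell width = number of contacted poly pitches (CPPs) x CPP.
# CPP counts (3/4 for NAND, 19/20 for SFF) are from the article;
# the 57 nm CPP is an assumed illustrative value, not vendor data.
CPP_NM = 57  # assumed contacted poly pitch in nanometers

def cell_width_nm(cpp_count: int, cpp_nm: float = CPP_NM) -> float:
    """Width of a standard cell in nanometers."""
    return cpp_count * cpp_nm

# NAND gate: 3 CPPs with SDB vs 4 with DDB -> SDB saves 25% of the width.
nand_saving = 1 - cell_width_nm(3) / cell_width_nm(4)
# Scan flip-flop: 19 CPPs with SDB vs 20 with DDB -> SDB saves only 5%.
sff_saving = 1 - cell_width_nm(19) / cell_width_nm(20)

print(f"NAND width saving from SDB: {nand_saving:.0%}")  # 25%
print(f"SFF width saving from SDB:  {sff_saving:.0%}")   # 5%
```

Note the pitch value cancels out of the savings ratios, so the 25% versus 5% contrast holds for any CPP.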
When discussing process density, I always compare the minimum cell size, but processes offer multiple options. For example, TSMC’s 7nm 7FF process offers both a minimum 6-track cell with 2 fins per transistor and a 9-track cell with 3 fins per transistor. The 9-track cell offers 1.5x the drive current of the 6-track cell but is also 1.5x the size. This illustrates one of the problems with comparing two product designs as a way of characterizing transistor density: a high-performance design would have more 9-track cells and therefore lower transistor density than a design targeted at minimum size or lower power built from 6-track cells on the same process. Even the preponderance of NAND cells versus SFF cells affects the transistor density.
Figure 1 summarizes the density difference between 6-track and 9-track cells on the TSMC 7FF process. Please note the MTx/mm2 parameter is millions of transistors per square millimeter, based on 60% NAND cells and 40% SFF cells.
Figure 1. TSMC 7FF Density Analysis
An interesting observation from Figure 1 is that a minimum-area SFF cell has over 2x the transistor density of a high-performance NAND cell on the same process. There are also many other types of standard cells with varying transistor densities.
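As a rough cross-check of that observation, relative density is just transistors over relative area (tracks times CPPs, since the pitches cancel in a ratio). The 4-transistor count for a 2-input NAND is standard; the ~30-transistor scan flip-flop count is an illustrative assumption, as SFF designs vary:

```python
# Relative transistor density = transistors / (tracks x CPP count).
# A 2-input NAND is 4 transistors; the ~30-transistor scan flip-flop
# figure is an illustrative assumption (actual SFF designs vary).
def rel_density(transistors: int, tracks: int, cpps: int) -> float:
    # Area is proportional to tracks * cpps; pitches cancel in a ratio.
    return transistors / (tracks * cpps)

hp_nand = rel_density(4, tracks=9, cpps=4)    # high-performance 9-track NAND, DDB
min_sff = rel_density(30, tracks=6, cpps=19)  # minimum-area 6-track SFF, SDB

# Under these assumptions the ratio comes out above 2x, consistent
# with the observation from Figure 1.
print(f"SFF / NAND density ratio: {min_sff / hp_nand:.2f}")
```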
Most system on a chip (SOC) circuits contain significant SRAM memory arrays, in fact it is not unusual for over half the die area to be SRAM array.
The 7FF process offers a high-density 6-transistor (6T) SRAM cell that is 0.0270 square microns in area, which works out to 222 MTx/mm2. In theory, a lot of memory array area on a design could result in higher transistor density; however, as with many things related to comparing process density, it isn’t that simple.
While doing a project for a customer, I analyzed 3 TSMC SRAM test chips and embedded SRAM arrays in 4 Intel chips and 1 AMD chip. The SRAM arrays were on average 2.93x the size you would expect based on the SRAM cell size for the process and the bit capacity of the array, presumably due to the interconnect and circuitry needed to access the memory. If we base transistor density for SRAM only on the cells in the array, the density drops to 75.84 MTx/mm2, although there are certainly some transistors in the access circuitry that this isn’t counting.
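The SRAM arithmetic above can be reproduced directly from the numbers in the text (6 transistors, 0.0270 um² cell, 2.93x measured array overhead):

```python
# 7FF high-density 6T SRAM: 6 transistors in a 0.0270 um^2 cell.
# Handy unit fact: 1 Tx/um^2 equals 1 MTx/mm^2
# (10^6 um^2 per mm^2 and 10^6 Tx per MTx cancel).
CELL_AREA_UM2 = 0.0270
TRANSISTORS_PER_CELL = 6
ARRAY_OVERHEAD = 2.93  # measured average array-to-cell area ratio from the article

raw_density = TRANSISTORS_PER_CELL / CELL_AREA_UM2  # MTx/mm^2
effective_density = raw_density / ARRAY_OVERHEAD    # MTx/mm^2

print(f"Bit-cell density: {raw_density:.0f} MTx/mm^2")        # 222 MTx/mm^2
print(f"Array density:    {effective_density:.2f} MTx/mm^2")  # 75.84 MTx/mm^2
```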
Certain SOC designs may also include analog, I/O and other elements that have significantly lower transistor density than minimum cells.
The bottom line to all this is that if you could implement the same design, say an ARM core with the same amount of SRAM, in different processes, you could use actual designs to compare process density; since that isn’t available, some type of representative metric that can be consistently applied is needed. When I compare processes, I compare transistor density for a minimum-size logic cell with a 60% NAND cell / 40% SFF cell ratio. This is not a perfect metric, but it compares processes under the same conditions. I also want to mention that for processes that are in production, my calculations are based on dimensions measured on the product, typically by TechInsights, and are not based on information from the individual companies I am covering. I do use information from company announcements when estimating future process density.
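A sketch of how such a metric could be computed follows. All pitches and the SFF transistor count are illustrative assumptions, not measured values, and treating the 60/40 weighting as a cell-count mix is one plausible reading of the metric, not necessarily the exact calculation used:

```python
# A sketch of a minimum-cell density metric with a 60% NAND / 40% SFF mix.
# All pitches and the ~30-transistor SFF count are illustrative
# assumptions, not measured or vendor-published values.
M2P_UM = 0.040   # assumed metal-2 pitch, microns
CPP_UM = 0.057   # assumed contacted poly pitch, microns
TRACKS = 6       # minimum-size cell height in tracks

def cell_area_um2(cpps: int) -> float:
    """Cell area = (tracks x M2P) height times (CPPs x CPP) width."""
    return (TRACKS * M2P_UM) * (cpps * CPP_UM)

# NAND2: 4 transistors in 3 CPPs (SDB); SFF: ~30 transistors (assumed) in 19 CPPs.
nand_tx, nand_area = 4, cell_area_um2(3)
sff_tx, sff_area = 30, cell_area_um2(19)

# Treat the 60/40 weighting as a cell-count mix: total transistors over
# total area. 1 Tx/um^2 equals 1 MTx/mm^2, so no unit conversion needed.
weighted = (0.6 * nand_tx + 0.4 * sff_tx) / (0.6 * nand_area + 0.4 * sff_area)
print(f"Weighted logic density: {weighted:.0f} MTx/mm^2")
```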