
Single HW/SW Bill of Material (BoM) Benefits System Development

by Daniel Payne on 02-23-2021 at 10:00 am


Most large electronics companies take a divide-and-conquer approach to projects, with clear division lines between HW and SW engineers, so the separate teams often have distinct methodologies for designing, documenting, communicating and saving a BoM. This division can lead to errors in the system development process, so what is a better approach?

To learn more, I attended a virtual event from Perforce, their Embedded DevOps Summit 2021, which I blogged about last month. It had three concurrent tracks: Plan, Create and Verify. I chose the Create track and listened to the presentation Implementing a Unified HW/SW BoM to Reduce System Development, given by Vishal Moondhra, whose company Methodics was acquired by Perforce in 2020.

IP is a term used by both HW and SW teams: it is the abstraction of the data files that define an implementation, plus all of the meta-data that defines its state.

 

BoM

A SW IP example would be a USB device driver, and a HW IP example a SRAM block. The Bill of Materials (BoM) shows the versioned hierarchy of all IP used to define a system, both HW and SW.

The SW blocks are shown in green along with their version numbers, while IP2 and IP1 are HW blocks with their own version numbers and hierarchy. Examine the hierarchy carefully and you will find two instances of IP13, one at version 8 and the other at version 9. A version conflict has occurred, and your BoM system needs to identify it so that consistency can be restored.
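As a rough illustration of what such a check involves, here is a minimal sketch; the IP names, versions and hierarchy below are hypothetical, not Methodics data. It walks a BoM tree and flags any IP pinned at more than one version:

```python
from collections import defaultdict

# Each BoM entry: (ip_name, version, list of child entries).
# Hypothetical platform mirroring the IP13 conflict described above.
platform_bom = ("Platform", 1, [
    ("IP2", 3, [
        ("IP13", 8, []),          # one HW subtree pins IP13 at v8 ...
    ]),
    ("IP1", 5, [
        ("IP13", 9, []),          # ... while another subtree pins v9
    ]),
])

def find_version_conflicts(bom):
    """Walk the BoM hierarchy and report IPs referenced at >1 version."""
    versions = defaultdict(set)

    def walk(node):
        name, version, children = node
        versions[name].add(version)
        for child in children:
            walk(child)

    walk(bom)
    return {name: sorted(v) for name, v in versions.items() if len(v) > 1}

print(find_version_conflicts(platform_bom))  # {'IP13': [8, 9]}
```

A real IPLM system does far more (permissions, meta-data, release states), but the core consistency check is this kind of whole-hierarchy sweep.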

Your SW team may be using Git, while the HW team prefers to use Perforce, and a unified BoM allows this mix and match approach.

Meta-data comprises the dependencies, file permissions, design hierarchy, instance properties and usage for each IP, and the Perforce approach uses a single system for both traceability and reuse. Once again, any Data Management (DM) system can be used.

Being able to trace which SW driver applies to a specific HW block is fundamental to maintaining consistency during system design, and a unified BoM takes care of this compatibility requirement. Tracking patches and updates across HW and SW ensures that no mismatches creep into the system during design.

The Platform BoM knows all of the versions being used in both HW and SW BoMs, and it’s fully traceable so that you always know which SW component was delivered with each HW component.

If a SW driver is incompatible with a particular HW block, then you can quickly identify that occurrence with a unified Platform BoM. If your Platform was only a handful of HW and SW blocks, then a simple Excel spreadsheet would suffice to track dependencies, but modern SoC systems have thousands of HW IP blocks, and millions of lines of code, so having a unified BoM system with traceability is the better choice.

Sending out SW patches to your released Platform demands that proper testing has been validated, so keeping track of dependencies is paramount for success.

With IPLM, a SW team can use the concept of private resources, where all of the build details are abstracted away, leaving behind just the results of the build process, while still providing consistency, traceability and dependency tracking. Here's an example of using a private resource for an ARM SW stack:

Working as a team with a unified BoM breaks down the old silo approach that separated HW and SW designers from each other. Design metadata can be managed to ensure traceability, promote transparency across engineering teams, enable IP to be reused, all while separate DM systems continue to be used.

Summary

The Methodics IPLM implements this unified BoM approach, so that your engineering teams can focus on completing their system work, while knowing that their HW and SW IP is fully traceable with centralized management, and that their IP releases are not introducing bugs.

To watch the 25-minute archived presentation online, visit here.



Achronix Demystifies FPGA Technology Migration

by Tom Simon on 02-23-2021 at 6:00 am


System designers who are switching to a new FPGA platform have a lot to think about. Naturally a change like this is usually done for good reasons, but there are always considerations regarding device configurations, interfaces and the tool chain to deal with. To help users who have decided to switch to their FPGA technology, Achronix offers an application note, titled “Migrating to Achronix FPGA Technology”, that explains the differences that may be encountered. As the application note states, Achronix FPGA technology will be familiar to anyone using another platform, but there will be some differences that will be useful to understand.

From my reading, what is interesting is how the application note offers information that could help someone who has not yet decided and wants to see how Achronix FPGA technology compares to other solutions. Indeed, the first section of the app note is useful for understanding which Achronix devices are good candidates as substitutions for the range of Intel and Xilinx devices. Kintex UltraScale, Kintex UltraScale+, Virtex UltraScale and Virtex UltraScale+, along with Arria 10 and Stratix 10 devices, are listed along with suitable Achronix offerings ranging from the AC7t750 up to the AC7t3000. Of course, there are many caveats, such as included memory or DSP blocks.

Achronix hints early in the app note at unique capabilities for AI/ML and network-on-chip (NoC) in its Speedster7t family that have no analog in devices from Intel or Xilinx. Achronix includes a cross-reference of core silicon components, including lookup tables, logic arrays, distributed math functions, block memory, logic memory, DSP and PLLs. Because many of the core components are similar, few, if any, RTL modifications are required during porting.

Noticeable differences appear in the interface subsystems available on various FPGA technologies. Achronix has placed a priority on including hard interface subsystems within the I/O ring. This eliminates the need for soft IP interfaces that use up valuable FPGA fabric, and it also makes interface integration and timing closure easier. Achronix Speedster7t offers higher performance in most interface categories, including up to 4 x 400G Ethernet, PCIe Gen5 x16, and 72-bit DDR4 at 3.2 Gbps/pin, all in hard IP. Their SerDes supports up to 112 Gbps. Lastly, they offer a unique and highly effective NoC.

Aside from physical specifications, a user contemplating migrating to Achronix will want to understand the supported tool flow. Unlike many other FPGA vendors, Achronix has opted to use Synopsys Synplify Pro in conjunction with their standalone ACE place and route tool. Synplify is recognized as an industry leader already and it is used by many users in place of the vendor supplied options. Achronix users benefit from a mature tool flow that includes practically every feature found in any other flow. The app note includes a feature by feature comparison table that bears this out.

FPGA Migration Achronix Tool Flow

So what code changes are required typically when moving to the Achronix tool flow? The Achronix answer to this question in the app note is that few if any RTL changes should be needed. Synplify Pro will automatically handle inferred RLB features such as LUTs and DFFs. The same goes for memories and DSPs so long as their regular inferencing templates are used. RLBs have a dedicated ALU that Synplify will use for generating efficient math and counter operations. Achronix Speedster7t supports a rich combination of DSP, Block memories and shift registers. Wrappers are not needed for primitives such as I/O ports and global buffers. I/Os and buffers are managed by using constraints applied in the I/O designer tool flow.

The app note has extensive sections on memory and DSP instantiation. It also goes into detail on the topic of constraints. It is worth reading these sections in their entirety. Suffice to say that in most cases they are handled in a straightforward way that should make any porting related work fairly easy.

The end of the app note talks about two distinguishing features of the Achronix Speedster7t family: network-on-chip (NoC) support and their machine learning processor (MLP). The NoC relieves the designer of managing and coding high-speed data transfers between the FPGA fabric and I/Os. For instance, the NoC can even populate a GDDR6 or DDR4 memory from the PCIe subsystem without consuming any FPGA fabric resources and with no need to worry about timing closure. The app note includes a reference to the Achronix documentation for the Speedster7t Network on Chip User Guide.

The MLP is a powerful math block available on Speedster7t chips for use in AI/ML applications. Each MLP can have up to 32 multipliers, ranging from 3-bit integer to 24-bit floating point, supported natively in silicon. It is extremely useful for vector and matrix math. It offers integrated memories to optimize neural net operations. They cite an example of a Speedster7t device processing up to 8600 images per second on Resnet-50.

The most interesting aspect of the Speedster7t family is that if users wish they can move their design to the Speedcore embedded FPGA fabric to incorporate it into their own SoC. Speedster7t is very competitive as a standalone FPGA device but as a Speedcore eFPGA integrated directly into an SoC, Achronix FPGA technology presents entirely new opportunities.

As I said at the outset, not only is the app note useful for guidance on migration to Speedster7t, it also shines a light on the competitive differences between Speedster7t and other FPGA technologies. The app note is available on the Achronix website.


Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality

by Mike Gianfagna on 02-22-2021 at 10:00 am


Everyone is talking about 5G these days. The buildout is beginning. The newest iPhone supports the new 3GPP standard. Excitement is building. But there is a back story to all this. Silicon Catalyst recently added a new company called mmTron to their incubator program. These folks are millimeter wave experts, and that turns out to be quite relevant for 5G. I had a chance to catch up with mmTron to explore this new addition to the Silicon Catalyst Incubator. What I discovered is that a critical portion of the 5G buildout faces some serious challenges, challenges that mmTron is uniquely positioned to solve. Read on to learn about the 5G back story and how mmTron's innovative products will contribute to delivering on the promises of 5G. Silicon Catalyst and mmTron are helping to make mmWave 5G a reality.

The Team

Dr. Seyed Tabatabaei

First, a bit about the two folks I spoke with. Dr. Seyed Tabatabaei founded mmTron in 2020. He has substantial expertise in millimeter wave technology, having led design efforts at MACOM, Agilent, Endwave and Teramics before founding mmTron. Seyed has assembled a team with exceptional skills in this specialized and critical area, drawing on experience from satellite and defense applications.

Glen Riley has recently joined mmTron as an advisor. Glen has a storied career in semiconductors that includes TI, AT&T and Qorvo. Glen has held several senior executive positions in general management, marketing, and sales. Glen currently is a board member and advisor for companies in the RF and optical markets. He previously knew Seyed as a customer and recently Silicon Catalyst put Glen back in touch with Seyed to become a key executive advisor.

Glen Riley

The 5G Design Challenge

It turns out much of the 5G build out occurring today is based on sub-6GHz spectrum implementations which are similar to the currently deployed 4G network. The substantial benefits of 5G (e.g., very high bandwidth and very low latency) will be delivered in the millimeter wave spectrum (i.e., 24GHz to 80GHz). Verizon is deploying some of this technology today and the new iPhone 12 can support that technology. These efforts are just the beginning of the process and there is still much to do before the full benefits of 5G are realized.

At these frequencies the speed delivered to your handheld device will be equal to or greater than today’s broadband residential connections. This is where the challenges of transmission for 5G exist. You’ve probably heard about the need for sophisticated antenna systems that support beamforming to make all this work.

Beyond antenna systems, there is also a big challenge to deliver electronics for high bandwidth and high-power transmission systems at reasonable cost. Most millimeter wave electronics available today are based on military and satellite applications, where commercial cost pressures aren’t as severe. This is the area where mmTron delivers significant value over and above what is currently available from the existing RF / mmWave suppliers.

The mmTron Solution

Thanks to its proven, patented architecture, mmTron technology can support 5G millimeter wave applications requiring higher power and higher linearity better than other solutions. These key differentiating features mean fewer base stations and smaller phased array antenna systems are needed to deliver the same or greater capability. mmTron's high linearity products complement existing lower power silicon-based beamformer chips on the market. mmTron estimates that 5G infrastructure costs can be reduced by 40 percent or more using its technology, and that is big news.

mmTron’s outsourced fab and assembly/test ecosystem is already in place. RF silicon-on-insulator, gallium arsenide and gallium nitride technologies are used to deliver mmTron’s products. When compared to other large companies that support this market, mmTron represents a disruptive force in the industry as shown in the figure below.

Competitive Landscape

mmTron is currently in discussions with several very large infrastructure manufacturers. The company will soon close a funding round and tape out its first family of products for first delivery in late 2021. The addition of mmTron to the Silicon Catalyst incubator illustrates the breadth of the program from a technology and market perspective.

You can learn more about mmTron and its new and disruptive technology here. Whether you’re interested in learning more about their product offerings or contributing to the company’s growth, you can inquire here.  It looks like an exciting adventure as Silicon Catalyst and mmTron are helping to make 5G a reality.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

Silicon Catalyst’s Semi Industry Forum – All-Star Cast Didn’t Disappoint

Chip Startups are Succeeding with Silicon Catalyst and Partners Like Arm

Silicon Catalyst Hosts Semiconductor Industry Forum – A View to the Future … it’s about what’s next®


A Perfect Storm for GLOBALFOUNDRIES

by Daniel Nenni on 02-22-2021 at 6:00 am


GF has played some groundbreaking roles in the semiconductor ecosystem: the spin-out of the AMD fabs and the acquisition of the IBM semiconductor division, just to name two. Another big one would be a GF initial public offering (IPO), which may come as early as 2022.

When the IPO was first mentioned during a chat with GF CEO Tom Caulfield, I had my doubts. Today, however, it looks like a perfect storm for a GF IPO, with the ongoing semiconductor supply chain issues and the resulting automotive wafer shortages. There is a renewed push for more US-based semiconductor manufacturing, and other countries are considering the same. With the help of some serious political muscle, GF established a semiconductor manufacturing beachhead in Upstate NY (Fab 8) in 2009, and additional land rights have already been secured for future expansion.

Another strong sign of GF U.S. based semiconductor manufacturing prowess is the recent announcement with the U.S. Department of Defense:

U.S. Department of Defense Partners with GLOBALFOUNDRIES to Manufacture Secure Chips at Fab 8 in Upstate New York

To make a long story short, the IBM semiconductor group acquired by GF was a longstanding trusted contract chip manufacturer to the U.S. Government through the Fishkill fab (IBM Building 323). That relationship was maintained by GF and is now being expanded and transferred to Fab 8 in Malta. Fishkill Fab 10 was sold to ON Semiconductor, so this transfer is an important step for GF. The first chips under this agreement will arrive in 2023 and will be based on a 45nm SOI process. Here are the related quotes:

“GLOBALFOUNDRIES is a critical part of a domestic semiconductor manufacturing industry that is a requirement for our national security and economic competitiveness,” said Senate Majority Leader Chuck Schumer, who successfully passed new federal semiconductor manufacturing incentives in last year’s National Defense Authorization Act (NDAA). “I have long advocated for GLOBALFOUNDRIES as a key supplier of chips to our military and intelligence community, including pressing the new Secretary of Defense, Lloyd Austin, to further expand the Department of Defense’s business with GLOBALFOUNDRIES, which will help expand their manufacturing operations and create even more jobs in Malta.”

In a supporting statement from the U.S. Department of Defense, “This agreement with GLOBALFOUNDRIES is just one step the Department of Defense is taking to ensure the U.S. sustains the microelectronics manufacturing capability necessary for national and economic security. This is a pre-cursor to major efforts contemplated by the recently passed CHIPS for America Act, championed by Senator Charles Schumer, which will allow for the sustainment and on-shoring of U.S. microelectronics capability.”

“GLOBALFOUNDRIES thanks Senator Schumer for his leadership, his ongoing support of our industry, and his forward-looking perspective on the semiconductor supply chain,” said Tom Caulfield, CEO of GF. “We are proud to strengthen our longstanding partnership with the U.S. government, and extend this collaboration to produce a new supply of these important chips at our most advanced facility, Fab 8, in upstate New York. We are taking action and doing our part to ensure America has the manufacturing capability it needs, to meet the growing demand for U.S. made, advanced semiconductor chips for the nation’s most sensitive defense and aerospace applications.”

Given his current political clout, having the Senate Majority Leader as a champion is a tremendous asset for GF. And let's not forget GF Fab 1 in Dresden. I was there when Angela Merkel toured the facility in 2015 and thought for sure there would be serious government investment to strengthen the EU semiconductor supply chain. How times have changed. As I said, a perfect storm for GLOBALFOUNDRIES, absolutely.

Also Read:

Technology Optimization for Magnetoresistive RAM (STT-MRAM)

3DIC Design, Implementation, and (especially) Test

Designing Smarter, not Smaller AI Chips with GLOBALFOUNDRIES


Calculating the Maximum Density and Equivalent 2D Design Rule of 3D NAND Flash

by Fred Chen on 02-21-2021 at 10:00 am


I recently posted an insightful article [1] published in 2013 on the cost of 3D NAND Flash by Dr. Andrew Walker, which has since received over 10,000 views on LinkedIn. The highlight was the plot of cost vs. the number of layers showing a minimum cost for some layer number, dependent on the etch sidewall angle. In this article, the same underlying principles are used to calculate the effective 2D design rule for the 3D NAND array as well as to find the maximum density, both of which are strongly dependent on the sidewall angle of the holes etched through the multilayer stack. A previous article of mine focused on initial estimates of 2D vs. 3D wafer cost [2], but here we will go directly to the impact of 3D processing on the effective 2D density.

Model of 3D NAND cell
The 3D NAND cell has a typical arrangement as shown in Figure 1. The charge storage areas are circular rings containing at least a nitride layer sandwiched between two oxide layers. The rings encircle a silicon channel, typically also ring-shaped. The circular hole structures are taken to be located on a hexagonal close-packed lattice. If we take the minimum distance between holes to be equal to 1/4 the hole diameter [3], the density will be 2/sqrt(3) ~ 1.155 times that of the case where the same diameter holes are placed on a square lattice with the same minimum distance between holes. This proportionality will help in determining the equivalent 2D design rule later, i.e., the design rule of the 2D planar NAND array with the same density (assuming one bit per cell).
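The packing arithmetic above can be checked in a few lines. The only inputs are the hole diameter and the d/4 minimum spacing assumed in [3], giving a 1.25d center-to-center pitch on both lattices:

```python
import math

# Work in units of the hole diameter d; pitch = d + d/4 = 1.25 d.
pitch = 1.25

area_square = pitch ** 2                   # cell area, square lattice
area_hex = pitch ** 2 * math.sqrt(3) / 2   # cell area, hexagonal lattice

# Smaller cell area => higher hole density; the ratio is 2/sqrt(3).
ratio = area_square / area_hex
print(round(ratio, 3))  # 1.155
```

The 1.155 factor is independent of the actual diameter, which is why it carries through to the equivalent 2D design rule derivation below.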


Figure 1. 3D NAND Flash unit cell.

3D NAND Hole Widening
The holes penetrating the layers of the 3D NAND stack ideally have vertical sidewalls. Realistically, the sidewall deviates by a fraction of a degree from normal [4]. As a result, the bottom diameter of the hole is smaller than the top diameter, and it is therefore the top diameter that determines the cell pitch. The widening of the hole diameter from bottom to top is given by:

Top diameter – bottom diameter = cot(sidewall angle) * # layers * layer height.

The top diameter is used to determine the equivalent 2D design rule (E2DDR):

1.25^2 * sqrt(3)/2 * (top diameter)^2 = # layers * 4 * (E2DDR)^2, or

E2DDR ~ 0.58 * top diameter/sqrt(# layers)

This allows us to predict a maximum density or minimum 2D equivalent design rule for some number of layers, at a given sidewall angle. We can still expect the equivalent 2D design rule to reach 10 nm.
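The two relations above can be combined into a short sketch. The 50 nm layer height and 60 nm bottom diameter below are illustrative guesses, not vendor figures; only the 89.7° taper comes from the estimate cited for Figure 2:

```python
import math

def e2ddr_nm(n_layers, bottom_diameter_nm=60.0, layer_height_nm=50.0,
             sidewall_deg=89.7):
    """Equivalent 2D design rule for an n-layer stack (one bit per cell)."""
    # Top diameter - bottom diameter = cot(angle) * n_layers * layer height
    widening = (1 / math.tan(math.radians(sidewall_deg))
                * n_layers * layer_height_nm)
    top_diameter = bottom_diameter_nm + widening
    # 1.25^2 * sqrt(3)/2 * top^2 = n_layers * 4 * E2DDR^2
    return math.sqrt(1.25**2 * math.sqrt(3) / 2 / (4 * n_layers)) * top_diameter

# The design rule shrinks with layer count until hole widening dominates:
rules = {n: round(e2ddr_nm(n), 1) for n in (64, 128, 256, 384, 512)}
best = min(rules, key=rules.get)
```

With these assumed inputs the equivalent design rule bottoms out at a few hundred layers and then worsens again, which is the behavior that motivates the multi-deck stacking discussed next.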


Figure 2. Top: Widening of diameter as stack height increases with number of layers. Bottom: Equivalent 2D design rule vs. number of cell layers, for different bottom diameters, at a sidewall taper angle of 89.7 deg. (visually estimated for Samsung’s 92-layer case from IWAPS 2019 presentation by J. Choe [4]).

Note that the maximum density or minimum equivalent design rule occurs for a smaller number of layers for a smaller diameter. This means taller holes would eventually need to be built up from stacking multilayers supporting shorter holes, with alignment required. It is a vertical analogy to the Litho-Etch-Litho-Etch… multipatterning used by foundries [5]. This is already a common practice among 3D NAND manufacturers [4], with only Samsung holding out so far, but considering it for seventh-generation V-NAND [6].

References

[1] A. J. Walker, IEEE Trans. Semicon. Mfg. 26, 619 (2013).

[2] F. Chen, Toshiba’s Cost Model for 3D NAND: https://www.linkedin.com/pulse/toshibas-cost-model-3d-nand-frederick-chen, also https://semiwiki.com/semiconductor-manufacturers/291971-toshiba-cost-model-for-3d-nand/

[3] A. Tilson and M. Strauss, Intl. Symp. Phys. & Failure Analysis Integ. Circ., 2018.

[4] Some figures for measurement are provided for example in J. Choe’s IWAPS 2019 presentation “Technology Views on 3D NAND Flash: Current and Future.” http://www.chipmanufacturing.org/1-A2-Short%20version%20for%20Publish_IWAPS%202019_Jeongdong%20Choe_TechInsights_3D%20NAND_F_s.pdf

[5] J. Huckabay et al., Proc. SPIE 6349, 634910 (2006).

[6] https://en.yna.co.kr/view/AEN20201201006900320

Related Lithography Posts


Elon Knows When You Crash

by Roger C. Lanctot on 02-21-2021 at 8:00 am


It’s true. Elon Musk, CEO of Tesla Motors, knows when you crash your Tesla. He just isn’t obliged, in the U.S., to do anything about it. And he’s not alone.

Here it is, 2021, and buyers of cars in the U.S. can't count on getting automatic crash notification (ACN) included in their next new car. Even those cars equipped with ACN require a subscription for it to work in most cases.

When European regulators mandated eCall in all new cars years ago, those of us on this side of the Atlantic chuckled at their feeble attempt to “catch up” with the U.S., where OnStar had been launched by GM 20 years before. While the EU was working on eCall, the U.S. was tinkering with “next gen 9-1-1.”

Now, here we are in 2021, and emergency crash notification – an automatic call for help from a car in the event of an airbag deployment, or a button-push request for assistance – is still neither a standard feature on cars sold in the U.S. nor a mandated piece of automotive kit.  If you crash your car in the U.S., you’re pretty much on your own if you haven’t paid for the built-in telematics service.

Tesla is a special case, though.  By now we all know that Musk is collecting buckets of vehicle data throughout the operational life of a typical Tesla via its built-in wireless connection.  We also know that Musk has used that data forensically to get himself and his company “off the hook” in the event of multiple spectacular and fatal Tesla crashes.

Time and again Musk has used vehicle data to demonstrate how drivers have misused Tesla vehicles, violating various warnings and caveats, leading to fatal encounters with other vehicles and inanimate objects.  We’ve all seen multiple Tesla RUD (rapid unplanned disassembly) pictures and videos.  What is missing from all of these events is the timely arrival of assistance in the form of police, fire department, or ambulance personnel – beckoned by a built-in, on-board 911 call – a la OnStar or some equivalent.

This puts Musk in a special category. He is using the wireless connection and the data collected thereby against the misbehaving vehicle owner rather than putting connectivity to work to provide assistance in urgent circumstances.

For the rest of the industry, the failure of auto makers to provide a free, built-in emergency call capability in all cars sold in North America – including General Motors vehicles – is a sad commentary on the industry.  But Tesla’s failure to provide a built-in emergency call function stands out.

In a recent Twitter exchange between Musk and a Tesla owner, who was unable to summon assistance using his phone and also unable to access the vehicle's wireless connection to seek help, the Tesla CEO tweeted "Absolutely" to the suggestion that Tesla ought to enable emergency calling from its vehicles. So Musk likes the idea, and he already offers this solution on vehicles sold in continental Europe and Russia. Tesla owners in the U.S. wait.

Musk’s Twitter exchange with Tesla owner: https://cleantechnica.com/2021/01/03/tesla-vehicles-could-be-able-to-call-911-during-an-emergency/

It was 25 years ago that GM first began the process of introducing emergency call modules developed as part of Project Beacon in Cadillac vehicles – beginning the journey to the introduction of what we now know as OnStar.  At that time GM Executive Chairman Harry Pearce asked the perplexing questions (from OnStar President Chet Huber’s “Detour”): “If one hundred cars crash and they don’t have something like OnStar on board, how many of them will call for help?” “Now, how many out of a hundred OnStar-equipped cars that crash will need to call for help before we’d be more wrong for holding back a potentially lifesaving technology like this than we would be for putting it in?”

The rest is history, as they say.  OnStar was born, but it was another 10 years before it was built into every GM vehicle.  And today, the automatic crash notification feature from GM is still not free.  A friend of mine is fond of saying that making customers pay for automatic crash notification is like a hotel charging you for the fire extinguisher (or sprinkler fire suppression system) in your room.

Musk should correct this embarrassing omission in Tesla vehicles.  If Tesla can deliver cars with eCall in Europe and Russia, the company can deliver an equivalent solution in the U.S.

The same goes for the rest of the automotive industry.  Car makers shouldn’t be de-contenting vehicles of vital safety systems for the U.S. market and up-contenting for Europe and Russia.  Automatic crash notification in passenger vehicles ought to be regarded as standard equipment – a human right maybe?

Automatic crash notification is only a start.  There is further work needed on leveraging vehicle data in the event of a crash to determine crash severity, the condition and number of vehicle occupants, and the accurate location of the vehicle.  It’s not too late for Tesla to show the way forward.  Sad to say, in 2021, automatic crash notification is not a solved problem in the U.S.


How do you plan the best Bitcoin miner in the world?

by Raul Perez on 02-21-2021 at 6:00 am


As many of you know, Bitcoin prices have recently surged to $40,000 USD per bitcoin as of February 2021. We are in the middle of a bit rush! People are noticing Bitcoin's surge and wondering how they can profit from it. In this article we will explore how custom silicon is a vital part of a winning bitcoin mining strategy.

Some people wonder what it would take to make their own Bitcoin mining custom silicon in order to beat everyone else. My quick survey of the field indicates the Bitmain Antminer S19 Pro is the state-of-the-art bitcoin mining equipment as of February 2021. Just as many others, such as Amazon, Apple, Facebook, Tesla and Google, have realized there is a clear competitive advantage to their businesses from custom silicon, Bitmain too decided to make its own custom silicon.

Using the latest silicon process node increases the power efficiency and the processing power of the bitcoin miner system. This is why bitcoin miner systems manufacturers continue to update their custom silicon mining chips.

Some things to consider to plan your next custom silicon mining chip:

  • Selecting a chip supplier to design your custom silicon (i.e. ASIC).

Finding a good chip supplier to design your ASIC is an art in and of itself. You want a reputable company with an excellent design team, but they also want a reputable system company as a customer. So if this is your first project making a bitcoin miner, you will need to convince the chip supplier (among others) that you're a serious customer. There are many chip design houses in the world, but many of them are probably not who you'd want to hire if you want to reduce your technical and schedule risks. One way to mitigate the risk of selecting the wrong supplier, and to present your RFQ professionally, is to hire a silicon manager to assist you in those interactions. As part of CustomSilicon.com's process, we work through the Concept and Requirements phases with the chip supplier candidates and select one candidate after the Si proposal review. I'd go for four chip supplier candidates at Concept, reduce that to two suppliers at Concept phase sign-off, and then down-select to one chip supplier at Requirements phase sign-off.

You want to buy RTL IP that is ready for use, or hire a chip supplier that has it from past projects. There are some companies out there with previous experience designing custom ASICs for bitcoin mining, but you always need to thoroughly vet them before moving forward and writing checks for NRE and masks.

  • Project cost.

There are some costs that are more predictable than others. A disclaimer: all prices below are my gut feeling, based on what I read and hear from others and on my own experience. As you should know, many prices are negotiable and are influenced by your relationships, total volumes, the supplier's opportunity cost, negotiating skills, etc. Here are some:

    1. Masks: To make a chip at the foundry you need to buy masks. ASICs for bitcoin mining are already at the 7 nm node, so if you want to leapfrog the competition you need to shoot for 5 nm or 3 nm. 3 nm is the highest risk since this process node is still in development. In my opinion masks for a 5 nm or 3 nm process will be in the 10 to 14 million USD range; let’s call it 12 million USD for easy math. A project like this will probably take two full mask sets in a good-case scenario. Selecting a good supplier, performing detailed reviews, using state-of-the-art EDA tools, getting direct foundry support, and hiring a silicon manager are all good ways to mitigate the risk of needing to tape out more than twice.
    2. NRE: This is the cost paid to the chip design house. It is quite subjective and speculative before you have gone through the Concept and Requirements phases, since it depends on how closely the RTL the design house already has matches the system company’s requirements, and on what trade-offs you negotiate. It also depends on foundry rule deck accuracy and, simply put, on what else that chip design house could be working on, since your project is an opportunity cost for them. In my opinion this will land in the 2 to 5 million USD range, but it can really vary depending on negotiations and everything mentioned here.
      • Firmware: Here it’s important to decide who will write the firmware for the chip. This can be a significant cost, comparable to the chip design cost and sometimes exceeding it.
      • Assembly and test: It’s possible that the chip design house will not provide a full solution. In that case you need to work with an outsourced assembly and test (OSAT) house.
      • Ideally you don’t need to go through one of the value chain aggregators (VCAs) to work with the wafer foundry, but that could happen. I think working directly with the foundry will help your project go faster and reduce errors, but the foundries don’t want to work directly with startups (i.e. companies with small volume). So the point made further below in this article about buying foundry wafers is key to gaining direct foundry access.
    3. Summary of fixed costs: 24 Million USD (assuming two full mask sets) + 2 to 5 Million USD (NRE) = 26 to 29 Million USD
  • Project schedule.

    1. My gut feeling is that getting from Concept phase start to Requirements phase sign-off is probably a 3-month endeavor.
    2. Time from spec freeze to tape out is probably in the 6-month range. This could be longer or shorter depending on how close the starting RTL is to what the system company wants from its miner. It’s important to highlight that specification freeze requires the system company to develop concise, precise and complete requirements documentation during the Concept and Requirements phases. This matters so that the chip supplier can provide a draft specification quickly after Requirements phase sign-off and the chip specification can be closed at Specification phase sign-off. Constant spec changes during development are a project schedule and cost nightmare that can be avoided with a disciplined process and early attention to detail.
    3. Time from tape out to tested samples is probably 5 months.
    4. Time to first samples: Time to first ASIC samples is the sum of items 1+2+3, which is 14 months, for your late Proto or first EVT build.
    5. You will likely need to spin the silicon one more time to get to final shippable silicon. This likely means 2 months of validation time, 2 months to get ready for the tape out and 5 months of fabrication time. You will have to be thorough in chip- and system-level validation to find all the bugs, and then check that the ECOs are properly root-caused and verified before tape out. So your final silicon samples (i.e. not production quantities) come in at 23 months, for your DVT build.
    6. Mass production risk ramp is another area where you will need to make a judgment call about how much money you want to risk before knowing that the final silicon is good (i.e. before you have completed your DVT build). You can decide to pull in the bitcoin miner system’s mass production ramp date by risk-releasing wafers before the DVT phase build. To do that you need to go through all your validation status and make a risk assessment in preparation for Mass Production phase sign-off, leading to your PVT (i.e. final) build. It takes about 5 months to get mass production parts in volume. So if you waited until 23 months to build systems with final silicon samples and then signed off on ramping the wafers at DVT build exit sign-off, it will take another 5 months (plus some assembly, packaging and test time) to get those mass production chips in quantity to your factory. Risk-released wafers could end up being scrapped if you find bugs at DVT that are unacceptable in your final silicon, so this needs to be done with care, as it can cost you millions of dollars in scrapped wafer material. Sometimes bugs can be fixed with one-time programmable memory (OTP) at final test, which would save you from having to scrap the wafers. So you will want to plan to lock OTP settings sometime before you need to run chips through final test for your mass production ramp.
  • Buying wafers from the foundry.

As you may have heard, there is a shortage of silicon wafer foundry capacity, so you will need to make a compelling case to the foundry for why they should work with you in their 5 nm or 3 nm process nodes. As you know, money is a great facilitator, so it may be that you need to commit to buying wafers ahead of time. If you commit to a large wafer purchase up front, the foundry will need to provide you with direct support, preferential fabrication times (super hot lots, hot lots), etc…

Let’s say you plan on building 100,000 bitcoin miner systems, each containing 200 chips/ASICs. That is 20,000,000 chips. On 300 mm wafers that is probably something like 5,000 wafers. The number of chips per wafer depends on your final die size, your fab yield, your package yield and your test yield; here I assumed you get 4,000 good chips/dies per wafer. Of course these numbers could be different for your system, but I will assume them to illustrate what I think is the likely ballpark. During the process phases all of these details are nailed down and adjusted as needed. The question then is: what is the minimum number of wafers the foundry would ask you to commit to buy upfront to get the kind of support and preferential access you need to get your bitcoin miners built faster?

I am going to guess a 3 nm wafer may end up costing $20,000 USD. So you see that if you end up buying 5,000 wafers, that is a $100,000,000 USD purchase! Maybe you can commit to buying 10% of that upfront and get a direct deal with the foundry, maybe not; you need to negotiate.
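The wafer arithmetic above can be sketched as a quick back-of-the-envelope calculation. All of the inputs (system count, chips per system, good dies per wafer, wafer price) are the illustrative guesses from this article, not supplier quotes:

```python
# Back-of-the-envelope wafer commitment math, using the
# illustrative guesses from this article (not supplier quotes).
systems = 100_000             # bitcoin miner systems planned
chips_per_system = 200        # ASICs inside each miner
good_dies_per_wafer = 4_000   # net of fab, package and test yield
wafer_price_usd = 20_000      # guessed 3 nm wafer price

total_chips = systems * chips_per_system
wafers_needed = total_chips // good_dies_per_wafer
wafer_spend = wafers_needed * wafer_price_usd
upfront_10pct = wafer_spend // 10  # a possible upfront commitment

print(f"{total_chips:,} chips -> {wafers_needed:,} wafers")
print(f"wafer purchase: ${wafer_spend:,}, 10% upfront: ${upfront_10pct:,}")
```

Adjusting any one input (say, a smaller die that yields more good dies per wafer) flows straight through to the wafer count and the size of the commitment you would need to negotiate.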

There are also system level miner considerations.

These are outside of the scope of this article since they are not directly custom silicon items. But it’s worth briefly mentioning them since custom silicon is developed to directly support a custom system hardware project; in this case a bitcoin miner.  Here are some things that will need to be planned:

  • Hiring a CM.

You need to build systems in mass production somewhere. This supplier needs to be able to source all the components, then assemble, test and package the systems to be shipped for you. A lot of companies choose CMs in Asia (China, Taiwan, etc…). They will also develop some or all of your factory test infrastructure. You need to pick wisely: the same CM company offers very different levels of quality and personnel experience to different customers. If you’re a new or small customer you may not get a good team, so you need to shop around for the right CM partner.

  • Pre-silicon deliverables.

    1. FPGA board. Your firmware team needs a platform to start developing code on in preparation for the first build.
    2. Blank packaged chips and form-factor-accurate mock PCBs. Your mechanical engineers may need these so they can build mechanical-only prototypes to mock up the cooling solution well ahead of your first system build at the CM.
  • Designing the hard system level stuff.

    1. Firmware. You’ll need to write the firmware to control the PCBs with all the ASICs on them. So you need some firmware engineers with experience writing firmware for bitcoin miners.
    2. Cooling. You’ll need to cool down your miners. These miners consume thousands of watts each. This means you’ll need to design a customized heatsink system. Some people use fans, others immersion cooling, etc… Whatever you do this is a critical part of the project and you need to hire good mechanical engineers with experience in this type of design.
    3. PCBs. You’ll need to design efficient power supplies. There is no point in making a super power-efficient custom silicon chip and then wasting lots of power in the power converter feeding it from the wall. You’ll also need to design good PCBs with thick copper so that your board losses are not too high. All of this means you need to hire a good electrical engineer to design it for you.

In summary.

Likely time to first samples: 14 months from kick-off.

Likely time to final silicon samples: 23 months from kick-off.

Summary of fixed costs: 26 to 29 million USD. This is for the chip only; there will be some additional costs to develop the bitcoin miner system, as discussed in the system level miner considerations section.
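Putting the article’s schedule and cost guesses together, the roll-up looks like this (all figures are the gut-feel estimates given above, not committed supplier numbers):

```python
# Roll-up of the schedule and fixed-cost estimates from this
# article (gut-feel numbers, not committed supplier figures).

# Schedule, in months
concept_to_requirements = 3   # Concept start to Requirements sign-off
spec_freeze_to_tapeout = 6
tapeout_to_samples = 5        # fab, assembly and test of first samples
first_samples = (concept_to_requirements
                 + spec_freeze_to_tapeout
                 + tapeout_to_samples)   # late Proto / first EVT build

validation = 2                # chip- and system-level validation
tapeout_prep = 2              # ECO root-cause and tape-out readiness
second_fab = 5
final_samples = first_samples + validation + tapeout_prep + second_fab  # DVT

# Fixed costs, in million USD
mask_set = 12                 # guessed 5 nm / 3 nm full mask set
mask_sets = 2                 # good-case scenario: two tape outs
nre_low, nre_high = 2, 5      # chip design house NRE range

cost_low = mask_set * mask_sets + nre_low
cost_high = mask_set * mask_sets + nre_high

print(f"first samples at {first_samples} months, final silicon at {final_samples} months")
print(f"fixed cost: {cost_low} to {cost_high} million USD")
```

A third tape out, or spec churn that stretches the 6-month freeze-to-tape-out window, moves every downstream number, which is why the disciplined phase sign-offs matter so much.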

The estimates above assume the project is run like a tight ship. That can be hard to do in general, and especially when a lot of people and companies are working together for the first time. Without experienced people and a good process to follow, the chances of executing on these timelines are greatly diminished. Managing cross-functional, multinational and multi-company teams is vital for this engagement.

As you can see this project is doable. It’s also a big investment with big risks that need to be mitigated. So the question is: what will be the price of bitcoin by the time you have your miners ready?

 

For more information contact us.

Disclaimers: All prices, schedules and details in this article are my best guesses, my opinions, and what I gather from multiple sources of information. I provide this for illustration and informational purposes only. Use at your own risk. As a project progresses through the phase sign offs all these details are committed/verified with suppliers.


Podcast EP8: A Look Inside Analog IP and Analog Bits

by Daniel Nenni on 02-19-2021 at 10:00 am

Dan and Mike are joined by Mahesh Tirupattur, executive vice president at Analog Bits. Mahesh discusses how he found his way to analog IP design and his long association with Analog Bits. Effective strategies for analog IP design and deployment are discussed, as well as leading-edge applications for analog IP. Mahesh also provides the back story on those Analog Bits gift bottles of wine that are seen each year around the holidays.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Mark Williams of Pulsic

by Daniel Nenni on 02-19-2021 at 6:00 am


Pulsic recently announced its new “freemium” product, Animate Preview, and we had the chance to chat with Mark Williams, President and CEO of Pulsic. Mark explained the details of the product and the new business model that could change the direction of the traditional EDA business model structure.

What brought you to the semiconductor industry?
After attending Cardiff University in the UK (yes – a long time ago), I was fortunate to start work as a software engineer at a company called Racal-Redac. They were an early EDA company focused on PCB design. I was lucky enough to be working in the routing technology group – which pioneered the method of shape-based routing. Working in this area really fueled my appetite for CAD and algorithmic methods, so much so that I am still working in it today.

Can you tell us about the origin of Pulsic?
The other founders and I were keen to explore applying some of the technologies we had worked on to the field of semiconductors. The best way of doing this was to start our own company. We managed to secure some initial funds, added our own, and set up shop in January 2000. After 12 months we were able to generate a small amount of revenue, and by the end of 2002 we had managed to close our first round of VC funding. We still call ourselves a start-up – although having just turned 21, I’m not sure that term can really apply!

What markets does Pulsic address today?
Custom IC, Mixed Signal, Custom Digital, Analog, Memory Periphery

What are some of the customer challenges Pulsic addresses?
Pulsic works with leading analog and custom design teams around the world to solve their toughest custom IC design challenges. Our solutions specialize in automating custom layout tasks that traditionally would be completed manually. Our unique approach to automation delivers custom quality results with automated speed and reduced time to market.

What is the Pulsic competitive positioning?
Pulsic’s technology enables analog and custom IC design teams to accelerate their design flows, and still achieve the same high-quality results.  Our solutions allow design leaders to remove iterations, shrink project timelines, and reach time to market goals.

Can you tell us about the new Animate Preview and how customers will benefit?
Animate Preview gives engineers quick, easy, and accurate physical information about their analog circuit, in real-time, while developing the schematic. Animate Preview gives detailed layout visualizations of the circuit and helps to spot problems, while accurately estimating design size, transferring design intent and creating a black-box layout. With Animate Preview an engineer can make better decisions earlier in the design flow and reduce iterations.

Animate Preview has an interesting new business model for an EDA tool, can you elaborate on how you see this new model resonating with customers?
The current EDA sales model is “enterprise sales”. This typically involves a long evaluation process, usually requiring an evaluation agreement. Traditional EDA tools often need a lot of setup and complicated installation, plus the EDA vendor typically provides an application engineer to the customer to assist with the evaluation. Many of the EDA tools are complex to use and often need onsite training. If “successful” the customer and vendor must then negotiate a purchase and support agreement. What does this mean?  It means the barriers to adopting new tools are extremely high. This traditional model stifles design flow innovation, which delays and prevents customers from realizing the time and cost benefits that design automation can bring to their flow.

The freemium model adopted by Pulsic for Animate Preview removes these barriers. Customers can download and run Animate Preview on their own data within minutes. There is no usage fee for Animate Preview. This allows customers to get the benefits without needing to build a business case and negotiate agreements. Pulsic also provides no cost online support for Animate Preview.

Pulsic does offer an upgraded version of Animate Preview called Animate Preview Plus. Users can see the value in Animate Preview on real data and be confident that the tool works for them in their design flow before choosing to upgrade.

Do you see this type of business model being adopted for other tools in the EDA industry?
It would be fantastic to see this model more widely adopted. The current business model of selling EDA software not only stifles innovation in customer flows but also innovation within the EDA industry.

However, to truly enable this model, the EDA tools must be designed to be simple to use; otherwise, users will get bogged down early in the process and are likely to give up. Pulsic’s Animate technology was designed from the ground up to be simple to use, to enable this approach.

What does the next twelve months have in store for Pulsic?
It is going to be exciting to see Animate Preview roll out over the next 12 months. We have unique technology and, as mentioned, a unique way to get it out to the market – well, unique for EDA, that is. That, coupled with solid growth in 2020 for our Unity platform in what is a strong and growing custom digital market, all points to an interesting year ahead for Pulsic, our customers and partners.

https://www.pulsic.com/

Also Read:

CEO Interview: Sathyam Pattanam

CEO Interview: Pim Tuyls of Intrinsic ID

CEO Interview: Tuomas Hollman of Minima Processor


Synopsys is Enabling the Cloud Computing Revolution

by Mike Gianfagna on 02-18-2021 at 10:00 am


In 2019 I was involved in a major project to move all our engineering and financial systems to the cloud. We succeeded in this endeavor, but it wasn’t easy. We faced a lot of infrastructure challenges during our journey. The freedom from facility management and capital budgeting offered by the cloud was significant, however. If you look a bit deeper, there is a long list of challenges associated with building the massive compute infrastructure needed to fuel the cloud revolution. Synopsys recently published a White Paper on these challenges that is very informative. If you’re involved in technology for cloud computing, you need to read this White Paper to understand the challenges ahead of you and how Synopsys is enabling the cloud computing revolution.

If you talk to folks about moving to the cloud, you will get one of two responses:

I’m on the cloud now.

I don’t think the cloud is ready yet, but it is the future.

The second response is really one of not if, but when. The graphic at the top of this post from a Gartner survey supports this trend. This survey was done about two years ago; I believe the sentiment measured today would be stronger. The Synopsys White Paper shines a light on many of the technical challenges associated with the massive cloud build-out that is occurring around us. It’s good to step back and understand the big picture and this White Paper does that. It’s written by Scott Durrant, Cloud Segment Marketing Manager at Synopsys. Prior to Synopsys, Scott had a 24-year career at Intel and also spent time in the enterprise software market at places like McAfee. Scott understands the technology foundations of the cloud.

He begins with some interesting trends regarding cloud migration – growth rates, the wide deployment of AI and the expansion of edge computing for example.  There is a prediction from IDC regarding the size of the global datasphere in the coming years that will either excite or frighten you, maybe both. Scott then spends quite a bit of time examining six major functional areas in cloud computing – the underlying technology, the challenges and the market trends. You will learn a lot. Here is a brief summary of each area.

Compute Servers

Compute capacity, communication bandwidth and energy efficiency are discussed. The various memory technologies and interfaces are explored, along with standards such as Compute Express Link (CXL) and the requirements of high-speed SerDes channels. It’s interesting to see how HBM2E fits. Compute server market share is also presented. This is a surprisingly balanced market – I see no “900-pound Gorilla”. You can also check out a Synopsys webinar I covered here that discusses high-speed communication in the data center.

Network Infrastructure

The main focus here is network switching. The march toward 400G speeds is discussed, along with the various architectures to get there. The market share view here is different. There is indeed a 900-pound Gorilla.

Storage Systems

Next up is storage systems. The use of AI to optimize these systems is discussed. The impact of non-volatile memory technology is examined, along with cache coherency and the relevant standards. There is a strong player in this market, but not as strong as the network infrastructure market.

Visual Computing

This is something of a new category. It refers to the hardware and software needed to perform real-time video processing and analysis.  Think online collaboration, movie streaming, virtual reality, security and assisted driving. These applications demand some very high-end processing capability.

Edge Infrastructure

The edge is all about reducing latency. The amount of data collected by IoT devices is exploding. You can see estimates of the number of connected devices that will be deployed in the White Paper.  I won’t spoil the details, but I will say these devices are counted in billions. The need to process all this data with latency that fits the application is the challenge. This leads to essentially tiers of edge computing so that the right processing can be done with the right proximity to the application. A three-tier view of such a system is presented. All this challenges what we used to think a data center was.

Artificial Intelligence

Last, but certainly not least is a discussion of AI accelerators. These devices form the very backbone of the whole infrastructure. Some applications demand performance first with power as a secondary requirement while others demand low power first with a required level of performance. The technology and relevant standards in this area are discussed.

How Synopsys Fits

Throughout the discussion there are examples of where Synopsys IP fits into the various architectures presented. It should come as no surprise that Synopsys offers a substantial footprint here. I strongly recommend you download this White Paper to become acquainted with all the changes happening and how Synopsys is enabling the cloud computing revolution. The White Paper, entitled Addressing the Evolving Technology Needs of Cloud Data Centers with IP, can be downloaded here.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

Synopsys Delivers a Brief History of AI chips and Specialty AI IP

The Heart of Trust in the Cloud. Hardware Security IP

Synopsys is Extending CXL Applications with New IP