
Essential Analog IP for 7nm and 5nm at TSMC OIP

by Tom Simon on 10-24-2018 at 7:00 am

When TSMC’s annual Open Innovation Platform Exposition takes place, you know it is time to hear about designs starting on the most advanced nodes. This year we heard about 7nm and 5nm. These newer nodes present even more challenges than previous nodes due to many factors. Regardless of what kind of design you are undertaking at these nodes, clocking IP is essential. This IP is analog and has even trickier design constraints at these smaller nodes. Andrew Cole of Silicon Creations gave a presentation at the Exposition that provided a lot of insight into what is required to produce this important foundation IP.

Silicon Creations has delivered clocking IP, such as PLLs, that has been used literally billions of times in production chips. Achieving success across this many instances requires tremendous verification resources. One of the interesting parts of Andrew’s presentation discussed the size of the server farm that is used for AFS simulation. They have two sites with more than 2,000 cores and a combined 15TB of RAM. They need over 2,000 AFS licenses to run their SPICE simulations. Being analog guys, they have even added their own liquid cooling to the processors so they can overclock them.

So why the need for such enormous resources? Andrew started by mentioning the application target for many of these ICs, which turns out to be IoT. He admitted that it is an overused term with no good definition. However, it’s a useful shorthand for ICs that need to operate on low power, can start and stop quickly, have low leakage, and require few or no external components. Silicon Creations leverages TSMC’s low power processes: 180LP, 40ULP, 22ULL and FinFETs from 16nm to 5nm. These PLLs consume as little as 5uW and can start in as little as 3 clock cycles.

Andrew talked about how analog designs scale as processes shrink. They have seen their PLLs become about 8x smaller in the move from 180nm to 5nm. The limiting factor is noise, which turns out to be proportional to kT/C. As such, capacitor values play a big role in determining noise. The other big challenge is wire resistance. With the significant relative increase in wire resistance, it is no longer possible to use lumped R for simulation. Silicon Creations has moved to performing simulation using fully distributed netlists for R and C. Add to this the need to use fully 3D-aware tools, and the problem grows substantially. For an example PLL, post-layout simulation now takes 100 times longer at 5nm than it did at 40nm.
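The kT/C point can be made concrete with a quick back-of-the-envelope sketch (my own illustration, not from the presentation): the RMS thermal noise voltage on a capacitor is sqrt(kT/C), so every 100x reduction in capacitance costs 10x in noise floor.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_vrms(capacitance_f, temp_k=300.0):
    """RMS thermal (kT/C) noise voltage across a capacitor, in volts."""
    return math.sqrt(K_BOLTZMANN * temp_k / capacitance_f)

# Shrinking the capacitor from 1 pF toward 10 fF raises the noise floor 10x
for c in (1e-12, 100e-15, 10e-15):
    print(f"C = {c * 1e15:6.1f} fF -> {ktc_noise_vrms(c) * 1e6:6.1f} uV rms")
```

This is why the noise-setting capacitors cannot shrink as aggressively as the transistors, which helps explain why the PLL area savings top out around 8x rather than tracking full process scaling.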

PLLs and SerDes face even more simulation obstacles. Their jitter requirements are on the order of 0.1ps. Clock cycles are ~100ps. System level activity can stretch out to 1ms, which is 10 orders of magnitude greater than the resolution needed to see jitter issues. Next, factor in the need to run Monte Carlo transient simulations to ensure good yield and it’s easy to see why Silicon Creations has had to scale up their server farm so extensively.
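That dynamic-range claim is easy to check with a couple of lines (numbers taken straight from the paragraph above):

```python
import math

jitter_resolution_s = 0.1e-12  # 0.1 ps jitter floor to resolve
clock_period_s = 100e-12       # ~100 ps clock cycle
sim_window_s = 1e-3            # 1 ms of system-level activity

orders = math.log10(sim_window_s / jitter_resolution_s)
cycles = sim_window_s / clock_period_s
print(f"{orders:.0f} orders of magnitude between window and resolution")
print(f"{cycles:.0e} clock cycles per transient run")
```

Ten orders of magnitude between resolution and window, multiplied by many Monte Carlo seeds, is what drives a 2,000-core simulation farm.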

The next question is how well all this simulation effort correlates to silicon. The answer is: quite well. For power, the mean and standard deviation match closely – simulated 3.02uA mean with 1.5% sigma versus measured 3.15uA with 1.6% sigma. Below are the PLL fast locking plots.

Lastly, here is the graph for phase noise.

Few IP companies have as much experience and as many instances in the field as Silicon Creations. For digital design teams eager to take advantage of the benefits of 5nm, Silicon Creations offers a compelling solution with proven, well-designed and verified clocking IP. Their 5nm solutions are taping out shortly. More information on advanced node analog clocking IP is available on the Silicon Creations website.


Webinar: ASIC and FPGA Functional Verification Study

by Alex Tan on 10-23-2018 at 12:00 pm

ASIC or FPGA? Each design style has earned designers’ votes depending on the urgency, application complexity and funding of their assigned projects. While it is feasible to transition from ASIC to FPGA design or vice versa, such a move is usually made at a project refresh rather than midcourse.

Both Xilinx and Intel (PSG division, formerly the Altera acquisition) are the dominant FPGA vendors, representing over 80% of FPGA market share – they registered $4.52 billion in combined sales between Q2 2017 and Q1 2018 (equivalent to the entire FPGA market size in 2016, per Gartner). Although it represents only a slice of the global semiconductor market as captured in figure 1, FPGA and ASIC as design implementation vehicles are in high demand and will be around for a while.
As a leading EDA solution provider, Mentor, a Siemens Business, commissions Wilson Research Group every two years to conduct a broad, vendor-independent study of design verification practices around the world. Harry Foster, Chief Scientist of Verification for the Design Verification Technology Division of Mentor, summarized the survey outcomes in this Mentor webinar. To provide some context for the highlights discussed here, let us first review the state of our design space.

The computing landscape
There have been plenty of narratives asserting the lasting footprints left by our computing technology adoptions. Driven by cost and usage personalization in the mid-80s, our compute environment shifted from mainframe-centric systems to workstations. That migration was followed by the increased pervasiveness of mobile computing, and the subsequent introduction of cloud computing (which can be perceived as the ‘next-generation mainframe’ with an advanced architecture). Cloud computing allows scalability and provides elastic support to the increasing network of distributed compute clients, accentuated by the proliferation of the Internet of Things (IoT) in the past few years.

Now, with the ever louder drumbeat of AI and data inferencing moving to the edge, enabled by the upcoming 5G deployment, we once again slide along this network continuum – bringing intelligence closer to the data source, or filtering the data captured from dozens of sensors into something more manageable in size and more structured before transporting it back to the cloud. It is a data-driven computing era with more heterogeneous systems, agile data handling, reasonable compute capacity, shorter development cycles and smaller form factors.

The core enablers
As the workhorse of computing solutions, IC design and its associated implementation methods also align with any paradigm shift in end-to-end technology use. Since the early days of VLSI design, the flexibility provided by Programmable Logic Array (PLA) technology and later the FPGA (Field Programmable Gate Array) has offered alternatives to full-custom or ASIC design approaches.

During the great push for performance and density aided by Moore’s Law, ASIC design was at the top of designers’ selection lists. With the recent transition to more data-centric applications, FPGA design adoption has been on the rise – providing solutions to data centers as dedicated accelerators for networking, search, compute, etc. On the other hand, ASIC designers face shorter development times for high-performance, complex designs while development costs rise rapidly – making ASICs more prohibitive for cost-sensitive applications.

As FPGA devices have become larger and faster, verifying the functionality of costly ASIC designs in FPGAs has become an effective and economical verification method. However, some ASIC structures cannot be implemented directly and efficiently in an FPGA. So the balance of which implementation is more suitable depends on the given project criteria: timeline, complexity and budget.

Highlights of the 2018 study
The 2018 survey outcomes were collected from over 1,200 companies globally. More than 45% of the survey respondents are doing FPGA/high-performance FPGA based designs, compared with only around 12% doing ASIC. In terms of ASIC design size, projects are split almost evenly across three bins (less than 1M gates, 1-40M gates and greater than 40M gates) and across process technologies (0.15u to 7nm) – with a trend toward 14nm and beyond.

The top three FPGA applications are aerospace/military, industrial, and data centers, which together account for almost half of the FPGA-based designs, with a notable increase in server-related applications compared against the 2016 survey data.

In ASIC, design applications related to compute platforms/servers, wireless and automotive account for half of the study participants. There are more hardware/prototyping engineers involved in the implementation of FPGA designs compared with ASIC designs.

The typical regression time incurred by ASIC designs is longer than that of FPGA-based designs, as shown in the tabulated comparison using values extracted from the study. There is a notable increase (7%) in shorter ASIC regressions taking less than 9 hours compared with 2016 data. On the other hand, FPGA regressions taking longer than 9 hours appear to be on the increase (5%) compared with 2016 data.

Both FPGA and ASIC design teams have adopted similar dynamic techniques, such as coverage metrics (code, functional), assertions and constrained random. On the static side, almost a third of ASIC designs have adopted formal property checking or verification, while FPGA is playing catch-up, as only about one-fifth of designs have embraced static techniques.

Most of the reported functional flaws are design error related, which is the common root cause for both FPGA and ASIC. Likewise, verification engineers spend almost half of their verification time doing debug (42-45%). One critical finding from the 2018 study is the potential for FPGA bugs to roll into production: 84% of FPGA design projects have non-trivial bugs escape into production, as shown in figure 4.

The key takeaways from this year’s study are that ASIC projects displayed maturity in their processes, while FPGA projects are being pressured to catch up in order to prevent bugs escaping into production.

There are plenty of interesting data points which can be extracted from the study. For more details on the 2018 Wilson Research Group Functional Verification Study, please check HERE.


Trade war could be the tipping point for American manufacturing

by Vivek Wadhwa on 10-23-2018 at 7:00 am

When Western companies moved manufacturing to China, it was all about minimizing costs. China was a developing country with labor costs among the lowest in the world. It also offered massive subsidies and readily turned a blind eye to labor abuse and environmental degradation.

Today, China is the world’s second-largest economy and has ambitions of overtaking the West. Its labor, real-estate, and energy costs have increased so much that they are comparable to those in some parts of the United States. According to Boston Consulting Group, by 2014, China’s manufacturing-cost advantage over the U.S. had shrunk to less than 5 percent. Add to that the intellectual-property theft and unfair trade practices that China has engaged in, and it becomes clear why it makes sense for companies to bring manufacturing back to America.

Doing that is not easy. It is hard to hire large numbers of skilled manufacturing workers in the U.S.; intricate supply chains pose barriers; and retooling factories is expensive. But with the trade war that President Trump launched and with the Chinese government’s rigging of the deck against foreign companies, businesses may have a strong motivation to bite the bullet and make the investment. The problems they have long had in the U.S. are also now surmountable.

Robots have advanced so far that they can do the work of Chinese workers. Foxconn’s announcement in August 2011 that it would replace a million workers with robots at its Chinese factories never came to fruition, because the robots of that era were not capable of doing fine tasks such as circuit-board assembly and could not work safely alongside human workers. Today, industrial robots can thread a needle and work hand-in-hand with humans. They can do practically every assembly job as well as pack the boxes the goods are shipped in.

Assembling automobiles is one of the hardest of all manufacturing tasks. But with the help of a new generation of robots, Tesla was able to ramp up production at its Fremont, California, factory to produce more than 100,000 cars per quarter. It did this cost effectively in a region that has some of the highest labor costs in the world.

Low-value manufacturing can be moved out of China relatively easily. It’s already being shifted to nearby countries such as Vietnam, Thailand, and Indonesia. The challenge — and the prize — lies in the high-value, high-technology manufacturing such as what Apple does in China for all of its products except the MacBook Pro, manufactured in Austin, Texas.

There is a complex web of supply chains that have developed in China for electronic goods. Products such as the iPhone have hundreds of components, including the display, integrated circuits, optical modules, sensors, and internal memory, which are sourced from suppliers all over the world. Over the past three decades, production of these technologies started moving to China, and many of the key suppliers became closely interconnected. It is not easy to disentangle operations from China’s high-density integrated-circuit ecosystem.

But it is easier than it would have been if Western companies didn’t fear China would steal their intellectual property.

In 2015, according to Seamus Grimes of National University of Ireland and Yutao Sun of Dalian University of China, the supply chain for Apple’s products consisted of 198 global companies, with 759 subsidiaries, located in 16 different countries. The research, which they explained in their forthcoming book on China and Global Value Chains, found that 32.7 percent of these suppliers were Japanese, 28.5 percent American, 19.0 percent Taiwanese, 6.5 percent European, and only 3.95 percent Chinese. Of the 391 subsidiaries providing highest-value “core components,” 40.4 percent were American, 26.8 percent Japanese, 10.7 percent Taiwanese, 9.2 percent Korean, and only 2.2 percent Chinese.

To put it simply, more than half of the components of Apple’s products are imported into China, and practically none of the important core technologies are made by Chinese companies. Nearly all of the intellectual property in Apple’s products originates from outside China. The researchers found that the few subsidiaries that foreign companies located in China that were producing core components were largely involved in the production and testing of products for just-in-time delivery to locations for final assembly.

China surely isn’t happy with this situation. Having spent billions of dollars in state-led investment, its domestic production of semiconductors accounts for less than 13 percent of the country’s demand, and its ability to design and produce this critical input remains seriously constrained according to East-West Center’s Dieter Ernst. That is why the national focus is on moving further up the value chain and creating intellectual property.

American companies no longer have the financial motivation to sell their souls and deal with the risks. That is why I expect the trickle of manufacturing returning to U.S. shores will, over the next few years, become a flood.

More in the video below.


The Latest in Parasitic Netlist Reduction and Visualization

by admin on 10-22-2018 at 12:00 pm

The user group events held by EDA companies offer a unique opportunity to hear from designers and CAD engineers who are actually using the EDA tools “in the trenches”. Some user presentations are pretty straightforward – e.g., providing a quality-of-results (QoR) design comparison when invoking a new tool feature added to a recent release update. Occasionally, a user will convey to the audience an exuberant enthusiasm, describing how a tool enabled new capabilities that changed their methodology – those presentations are especially memorable.

At the recent Silvaco Users Global Event, or SURGE (link), I had the pleasure of attending such a user presentation. The topic was parasitic reduction and netlist qualification – a crucial flow step in the management of signoff electrical and timing analysis. Parasitic netlist reduction is critical to addressing the tradeoff between results accuracy and flow throughput.

Parasitic netlist reduction is somewhat unique, in that it is one of the few steps for which a (best-of-breed) point tool solution is preferred.

Whereas the distinctions between logic synthesis and physical design flows are blurring, and implementation-level timing-noise-power co-optimizations utilize a common data model, parasitic netlists utilize industry-standard file representations – i.e., DSPF and SPEF. Whereas the blended flows require the selection of a broad “platform” of tools from a large EDA vendor, parasitic netlist optimization need not be bound to a specific platform. In that vein, the SURGE user presentation (representing a large semiconductor design company) highlighted how they had integrated the Silvaco Jivaro netlist reduction and Viso netlist visualization tools into their design methodology.

The presentation began with a brief review of parasitic extraction challenges from Jean-Pierre Goujon, AE Manager at Silvaco. “The advanced process technology nodes are resulting in a tremendous increase in the size of extracted netlists that are becoming unmanageable for subsequent signoff analysis flows. Netlist reduction is mandatory.”

Jean-Pierre described some of the unique features required of a reduction tool, including:

  • selective reduction: the magnitude of the netlist reduction versus post-reduction accuracy may need to be applied differently on selected nets or hierarchical subckts
  • accuracy feedback: comparisons of pre- and post-reduction net topologies for equivalent (point-to-point) R, Ceff, and delay are required; also, methods are needed to detect anomalous R and C data, perhaps related to opens/shorts in the original netlist
  • “full custom” extraction netlist reduction options: the netlists associated with full custom extraction result in tremendous data volume compared to cell-based interconnect extraction, so intelligent algorithms for (multiple, parallel) active device reduction are required – more on that shortly
  • visualization: a means of visualizing the extraction netlist is needed for debug
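As a rough illustration of what one reduction step looks like, here is a hypothetical Python sketch of series-resistor merging through internal nodes. The data structure and function are invented for illustration only; a production tool such as Jivaro uses far more sophisticated, accuracy-controlled algorithms and also handles capacitance, coupling, and active device merging.

```python
def series_merge(resistors, ports):
    """resistors: dict {(node_a, node_b): ohms}; ports: nodes that must survive.
    Repeatedly collapses non-port nodes that join exactly two resistors."""
    res = dict(resistors)
    changed = True
    while changed:
        changed = False
        degree = {}  # count resistor connections per node
        for (a, b) in res:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
        for node in list(degree):
            if node in ports or degree[node] != 2:
                continue
            edges = [e for e in res if node in e]
            if len(edges) != 2:
                continue  # a parallel pair; skipped in this sketch
            e1, e2 = edges
            other1 = e1[0] if e1[1] == node else e1[1]
            other2 = e2[0] if e2[1] == node else e2[1]
            merged = res.pop(e1) + res.pop(e2)  # series: R = R1 + R2
            key = tuple(sorted((other1, other2)))
            if key in res:  # parallel combine with an existing element
                res[key] = 1.0 / (1.0 / res[key] + 1.0 / merged)
            else:
                res[key] = merged
            changed = True
            break
    return res

net = {("in", "n1"): 10.0, ("n1", "n2"): 20.0, ("n2", "out"): 30.0}
print(series_merge(net, ports={"in", "out"}))  # {('in', 'out'): 60.0}
```

Collapsing a three-resistor chain between ports to a single 60-ohm element is exactly the kind of node-count reduction that keeps post-layout SPICE runs tractable.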

Jean-Pierre’s overview was followed by the user perspective. Some of the user highlights were:

“We worked closely with Silvaco developers to identify (custom) extracted topologies where aggressive reduction optimizations could be applied. Decoupling capacitors, dummy transistors, and multi-fingered devices are prevalent in custom layouts in advanced nodes, and contribute greatly to the netlist data volume. Unique merging algorithms were used, while maintaining the overall accuracy.”

“There are certainly ‘built-in’ reduction features of commercial extraction tools – yet, we found they did not provide the accuracy versus netlist size targets we were seeking. There were significant net RC differences.”

“And, the (reduced) netlist netnames were impossible to interpret – we used Viso just to try to visualize the topology that was present.”

“Our (custom) extracted netlists include resistive elements with Tc1 and Tc2 temperature coefficients – Jivaro correctly managed temperature-sensitive netlists during reduction.”

“A key feature to us is the capability to compare extracted netlists (for the same layout). We are designing in advanced process nodes, where the PDK collateral is evolving. Designers need to be able to quickly determine and visualize the extraction differences between PDK releases. The same applies to the evaluation of parasitic differences for extraction at different corners.”

The user’s enthusiasm for the application of Jivaro and Viso into their extraction flow was evident.

He ended with the comment, “By the way, I also want to say that the support from the Silvaco AE team as we were integrating the reduction tools into our flows was excellent.” You don’t often hear that sentiment expressed at these user group events.

-chipguy


TSMC Q3 2018 Earnings Call Discussion!

by Daniel Nenni on 10-22-2018 at 7:00 am

The TSMC OIP Forum was very upbeat this year and now we know why. It wasn’t long ago that some media outlets and a competitor said 7nm would not be a popular node because it is too expensive blah blah blah. People inside the fabless semiconductor ecosystem however know otherwise. As I have said before, 7nm will be another strong node for TSMC, déjà vu of 28nm. The difference being that there will not be cloned 7nm processes like 28nm, so TSMC market share and margins will remain strong, in my opinion.

Let’s take a look at the Q3 2018 earnings call transcript and see what else we can learn:

Now let’s take a look at revenue by technology. 7-nanometer process technology contributed 11% of total wafer revenue in the third quarter. 10-nanometer accounted for 6%, while the combined revenue from the 16- and 20-nanometer accounted for 25%. Advanced technologies, defined as 28-nanometer and more advanced technologies, accounted for 61% of the total wafer revenue.

Apple is > 17% of Q3 revenue if you include 20nm (iPhone 6) and 16nm (iPhone 6+ and iPhone 7) legacy products.

Now let me make some comment about capacity and CapEx. At TSMC, we build our capacity according to the customer demand. We are continuing to increase 7-nanometer capacity to meet the strong customer demand. We reiterate our 2018 CapEx to be between US$10 billion and US$10.5 billion. In addition, as I have talked about before, although our leading edges capital cost continue to increase due to increasing process complexity, we are able to offset its impact to our CapEx by productivity improvements and further optimization of our capacity planning.

CAPEX can be further reduced by purchasing the equipment GF has in NY? TSMC will move from 5-layer EUV at N7+ to 14-layer EUV at 5nm, so they will need those extra ASML EUV systems. TSMC will build new fabs for 5nm. In my opinion 5nm will be another big node for TSMC so I expect CAPEX spending to be at the high end for sure.

TSMC CEO C.C. Wei is a very strong leader and from what I am told he is loved by TSMC employees so I expect a very good run under his command. As we know from Intel’s latest debacle, a great CEO is key and C.C. is a great CEO, absolutely. He also has a sharp wit and is approachable and engaging which strengthens his credibility.

Now let me update you about the August 3 virus incident. On August 3, TSMC experienced a computer virus outbreak, which affected a number of computer systems and fab tools. The infection was due to misoperation and insufficient firewall controls. We have since corrected this problem to ensure such viruses will not happen again in the future. Our remediate actions including the following: implementing an automated system to guarantee fool proof execution so that such misoperation will not happen again; enhanced firewall control for fab isolation; and network control to each individual computer. More enhancements now are ongoing, too, for further improve tool immunity against future infections. TSMC sets top priority for such security enhancement.

From what I was told it was a vendor’s fault, but I am glad to see TSMC assume full responsibility and take the appropriate actions. I’m not a big fan of finger pointing as it is a sign of weak leadership.

Now let me talk about the N7 and N7+ and the EUV’s progress. TSMC’s N7 technology is now available for customers to unleash their innovations. This is the first time in the semiconductor industry the most advanced logic technology is available for all product innovations at the same time. We continue to work with many customers on N7, N7+ product design and expect to see more than 100 customer product tape-outs by end of 2019. We expect 7-nanometer to be a long node and will attract multiple waves of customer adoptions.

Absolutely.

N7+ is in risk production now. Since the N7+ has 15% to 20% better density and more than 10% lower power consumption, we are working with many customers for their second wave product designs in N7+. Although the number of tape-outs today account for a small portion of the total 7-nanometer tape-outs, we expect the activity to pick up at a rapid pace in 2020 and beyond. Because the N7+ is using a few layers of EUV photolithography to have better cycle time and pattern control, we have made steady progress on EUV technology development towards high-volume production. Tool availability, EUV power, productivity, defect reduction, mask improvement, material and process optimization are all on schedule. A few customers have already made plans to adopt our N7+ in their 2019 products.

N7+ really is a test bed for EUV. They are doing 5 layers in preparation for a full EUV implementation of 14 layers at 5nm. It should not be hard to figure out the N7+ customers as they are the early adopters of 5nm. This half node approach has worked well since 20nm (Apple coming to TSMC) so I expect it to continue.

Let me move to our N5 status. Our N5 technology development is on schedule. We have completed the design solution development and are ready for customers’ design start. The N5 risk production schedule in first half 2019 stays the same. Compared to N7, TSMC’s N5 delivers 1.8x to 1.86x logic area reduction and close to 15% to 18% speed gain on an ARM A72 core. We expect to receive first customer product tape-out in spring of 2019, followed by production ramp in first half 2020.

Apple will use 5nm in 2020 so you can bet it will be in HVM in the first half of 2020. From what I hear 5nm test chips are meeting/exceeding expectations and the PDK is solid so I see no reason to doubt TSMC’s 5nm schedule at this time.

Now let me talk about advanced packaging update. TSMC has been developing advanced wafer-level packaging technologies to integrated advanced SoCs, memories, integrated passive device, to enhance system performance. We believe our advanced packaging solutions will contribute to our business growth. We are now expanding the applications of both CoWoS and InFO especially for high-performance computing. Most of the CoWoS products require integration of SoC with High Bandwidth Memory, HBM, in 3D stack. We are making good progress in qualifying multiple HBM sources through close collaboration with customers and the DRAM suppliers. We are also working with a few leading customer on SoIC, which stands for system on integrated chips, where multiple heterogeneous chipsets will be integrated with close proximity to deliver better performance. And we target to start production in 2021 time frame.

TSMC has really done a nice job on packaging. I remember when CoWoS came out there were quite a few doubters. Visionaries like myself and Herb Reiter saw this coming but even we are surprised at the amount of resources TSMC has committed to packaging and the excellent results. TSMC now has the MOST sticky foundry process in the world.

Now to the Q&A. Sometimes there are some very funny interactions but this is not one of them:

Michael Chou Deutsche Bank AG, Research Division – Semiconductor Analyst Is it fair to say that 7-nanometer sales portion will be more than 20% of total sales for the whole year next year?

Lora Ho Taiwan Semiconductor Manufacturing Company Limited – CFO and Senior VP of Finance Let me answer that. You have seen our report. The third quarter 7-nanometer accounts for 11%. The fourth quarter will be more than 20%. So for whole year 2018, 7-nanometer will contribute close to 10% of total TSMC revenue. Go beyond 7 — 2018, and we will have very, very strong ramp, in 2019 as well, we expect the revenue contribution will be much higher than 20%.

Randy Abrams Crédit Suisse AG, Research Division – MD and Head of Taiwan Research in the Equity Research Department Okay. The second question I wanted to ask was about the 7+ versus 5-nanometer. You mentioned 2020 would see the very strong ramp-up of tape-out and activity in volume on 7+. Is it your view — I think last conference, Mark said 5 was a little bit more conservative at this stage. So how’s your view now for interest activity and expectation for a steep ramp-up of 5 into 2020?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman We still expect very fast ramp on 5. The reason is simple. Because of a lot of products developed in the AI area, you need the speed, you need the lower power, and you also need a small footprint. So from this — from today, we can see when we work with our customers, the ramp will be steep again.

Roland Shu Citigroup Inc, Research Division – Director and Head of Regional Semiconductor Research Okay. Can you just reiterate the growth breakdown for this 4 platforms next year?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman Okay, let me give you some color on it. In the next few years, if we look at ahead, actually, the smartphone is going to be in our daily life even more and more. So we have a 4 growth engine: one is a mobile phone, actually it’s a high-end smartphone; second one is a high-performance computing; automotive; IoT. The mobile phone probably for TSMC will have a 5 year CAGR, if I look at it right from today, it will be mid-single digit growth. And the all others 3 platforms will have a very comfortably double-digit growth in the 5 year time frame.

Bill Lu UBS Investment Bank, Research Division – MD and Asia Semiconductors Analyst Great. I know 2018 is not over yet, but if you think about the next couple of years, I know TSMC has talked about a long-term growth rate of 5% to 10%. Now I feel like more recently, you’ve talked a lot more about the progress on 7-nanometers. We all know about Intel’s struggles with their process technology. And it’s public information. They’ve announced it, right? So — and then you’ve got some good design wins. Can you talk about your long-term outlook in 2019? Given these drivers you just said, out of the — 3 out of the 4 new drivers will be above 10%. So are we looking at something more towards the high end of that? Or how do you think about that?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman We continue to say 5% to 10% growth rate. Probably I would like to — following your question, I would like to say probably tends to be at the higher side of that 5% to 10%.

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman Okay, actually the question is about the EUV and how much of the benefit we can get from the EUV, right? Usually, if we are not using the EUV, sometimes for the very critical dimension on the N7, you have to — or N7+, you have to use the 4 layers of lithography to pattern one of the critical dimension. Now using the EUV, you’re just using 1 layer so that you reduce the cycle time by 4x of photolithography, 4x of etch. Now you become 1 lithography, 1 etch. In total, how many layers we reduced? That depends on the customer’s requirement, but usually I just give you a hint already, right, 4 layer can become 1 and we are replacing some of the 3 layers to become 1 and we have a few layers of that. So that give you a hint. Cycle time reduction, definitely, because you do 4x into 1x, that’s a big advantage. Productivity-wise, today, EUV is progress very well — up to our expectation. And in fact, TSMC has turned on the 250-watt power and we believe we are the only one company continuously run the 250 watts EUV power so far today.

From what I hear, ASML has a 500-watt source working in the lab, so 5nm EUV throughput should not be a problem. The question I have, now that EUV is in production: will ASML actually make money on EUV? After so many years of R&D and broken EUV promises? Billions of dollars must have been spent…

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman Okay. Actually, I don’t want to comment on my competitors’ strategy. But let me, again, stress our mature nodes’ strategy. We continue to develop some of the specialty technology to meet the customers’ requirement, right, I just stated in that. And yes, a lot of specialty technology we are doing, I give you some example already, power management IC, CMOS, MEMS, everything. So that will help us to compete with our competitor. Actually, this kind of specialty technology particularly we have to work with the customer. And so that’s why I say working with the customer to meet their requirement. And that, in turn, to keep TSMC’s business. And that’s a way that we migrate the logic technology — pure logic technology to the more advanced node. But for the existing capacity, we develop into the specialty technology. And so our strategy is still meet customer’s requirement, but we don’t increase the existing logic capacity.

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman I think the AI’s application would be everywhere, actually, from the edge server or to the end device that’s just like the smartphone of everybody. So this kind of a development is to our advantage because TSMC certainly have a technology leadership. In order the AI would be effective, you need a very advanced technology for the highest performance computing. So I don’t see the effect that you are talking about, this application is better than that so that affected the growth or something. No, it will be continues to grow. And I expect this growth much faster than I predicted here.

This is what I have been saying: AI everywhere. And with AI comes increased demand for performance, low power, and density, which in turn drives demand for leading-edge process technology. This is really good news for TSMC and, of course, the fabless semiconductor ecosystem.

These are the questions that interested me. You can see the rest HERE. There is a lot more to discuss so let’s do that in the comments section.


ASML most immune to slow down due to lead times Not LRCX

ASML most immune to slow down due to lead times Not LRCX
by Robert Maire on 10-21-2018 at 7:00 am

ASML reported EUR2.78B in revenues with EUR2.08B in systems. 58% was for memory. EUV was EUR513M with 5 systems. Importantly, orders were EUR2.20B in systems, 64% memory, including 5 EUV tools. This was likely better than expectations given the overall industry weakness. EPS of EUR1.60 was more or less in line with expectations. Guidance of roughly EUR3B in revenues for the December quarter is very good.
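The reported figures allow some quick per-tool arithmetic. This is just a rough sketch; all inputs come from the paragraph above:

```python
# Quick arithmetic on ASML's reported quarter (figures from the text, EUR).
systems_rev = 2.08e9   # system sales
euv_rev     = 513e6    # EUV revenue
euv_units   = 5        # EUV systems recognized
memory_mix  = 0.58     # share of system sales going to memory

avg_euv_price = euv_rev / euv_units       # ~EUR 103M per EUV tool
memory_rev    = memory_mix * systems_rev  # ~EUR 1.21B of systems to memory

print(f"average EUV ASP: EUR {avg_euv_price / 1e6:.1f}M")
print(f"memory systems:  EUR {memory_rev / 1e9:.2f}B")
```

At roughly EUR 100M per EUV system, the 5 tools booked this quarter alone are a material slice of the order book, which is why EUV unit counts get so much attention on these calls.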
Continue reading “ASML most immune to slow down due to lead times Not LRCX”


Portable Stimulus enables new design and verification methodologies

Portable Stimulus enables new design and verification methodologies
by Jim Hogan on 10-19-2018 at 12:00 pm

My usual practice when investing is to look at startup companies and try to understand if the market they are looking to serve has a significant opportunity for a new and disruptive technology. This piece compiles the ideas that I used to form an investment thesis in Portable Stimulus. Once collected, I often share ideas to get feedback. Please feel free to offer up suggestions and critique. Thanks – Jim
Continue reading “Portable Stimulus enables new design and verification methodologies”


Honey I Shrunk the EDA TAM

Honey I Shrunk the EDA TAM
by Daniel Nenni on 10-19-2018 at 7:00 am

The “20 Questions with Wally Rhines” series continues

Throughout the history of the EDA industry, pricing models have caused discontinuities in the way the industry operates. For a variety of competitive reasons, individual companies have developed ways to change the pricing model in an attempt to secure competitive advantage. Following are some of the most memorable:

  • Valid Logic (1988) – Remove the premium for “global float” and allow all licenses to “float” around the world. This one sounds pretty reasonable in today’s computing server environment but in 1988, software licenses were “node locked”. You purchased a design software license for one workstation and it could “float” only within a reasonable distance, say around a single corporate site. Valid Logic offered their customers free float of the license to any of the customer’s worldwide locations through a program called “ACCESS”. It was a big hit. It also destroyed a significant portion of the total available market for EDA software, more than half by some estimates, as other EDA companies followed suit.
  • AVANTI Subscription Licensing – In the mid-1990’s, AVANTI introduced a three-year time-based licensing model. I am told by Daniel Nenni it was driven by AVANTI’s observation that customers purchased perpetual licenses that lasted for about 3 years (two Moore’s Law process nodes) before they had to upgrade and buy new perpetual licenses (although Red Herring magazine reported that Gerry told them he got the idea from car leasing plans). At this time, the industry model was a combination of perpetual licenses plus ongoing maintenance. The maintenance fee was 15-20% annually of the cost of the perpetual license, similar to what most of the non-EDA software industry offers today, except for the more recent introduction of SAAS (software as a service) models. The perpetual license cost was high and the revenue was all recognized “up front” because the customer now owned the software. For the AVANTI three year subscription model, the entire EDA industry followed the example like lemmings because of pressure from customers. It also had an attraction for the EDA companies since it offered a continuing revenue stream and EDA companies were worried about what would happen when perpetual license sales slowed to a smaller percentage of their revenue and maintenance revenue became the primary ongoing revenue source. The problem with the three year subscription model was that competitive discounting quickly drove the subscription price down to about the same level as the previous annual maintenance cost. Now the customers were receiving product plus maintenance for the same cost as they previously paid just for maintenance. A good deal for the customers but questionable for the EDA companies.
  • Cadence FAM (Flexible Access Model) – This was introduced in the late 1990’s. It was essentially a three year “all you can eat” approach to software from a single EDA company. It was a hit with the Cadence sales force and the customers, but it caused lots of disruption in the industry, although I don’t think other companies offered anything similar. It led to internal management disruption at Cadence. At the Cadence earnings call on April 20, 1999, the company announced that “the company has run into a ‘one to two quarter delay in absorption of 0.18 micron design tools’ among semiconductor makers.” Many in the EDA industry translated this as: “A large number of our best customers have purchased three year FAM licenses so we can’t collect additional revenue from them for a while”.
  • Cadence Re-Mix – Once again, Cadence sets the pace of innovation in pricing with the introduction of “Re-Mix”. A customer specifies the mix of software products desired on the date of contract renewal but, if the customer chooses to change the relative mix of one product versus another, he can do so within the limits of the original contract value. Up until this time, customers had to guess what their mix of product needs would be for the next three years. Typically, they had to buy twice as much software as they would use on an ongoing basis because they couldn’t predict the mix of products they would need. The result: By some estimates, this re-mix approach eliminated as much as half of the EDA TAM because customers didn’t have to predict their future mix of needs and didn’t have to buy licenses sufficient for peak usage.
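The economics behind the AVANTI subscription shift can be sketched with made-up numbers. Only the 15-20% maintenance rate comes from the text above; the $300K perpetual license price is a hypothetical example:

```python
# Sketch of how competitive discounting eroded EDA revenue after the move
# to 3-year subscriptions. The 15-20% maintenance rate is from the text;
# the $300K perpetual price is a hypothetical example, not a real figure.
perpetual_price  = 300_000
maintenance_rate = 0.18  # within the article's 15-20% range
annual_maint     = round(perpetual_price * maintenance_rate)  # $54K/yr

# Old model over one 3-year node cycle: license up front plus two more
# years of maintenance (year-1 maintenance assumed bundled).
old_3yr = perpetual_price + 2 * annual_maint  # $408K

# New model, once discounting drove subscription pricing down to roughly
# the old annual maintenance level: product + support for that price.
new_3yr = 3 * annual_maint                    # $162K

print(f"old 3-year revenue: ${old_3yr:,}")
print(f"new 3-year revenue: ${new_3yr:,}")
print(f"revenue retained:   {new_3yr / old_3yr:.0%}")
```

Under these assumed numbers the vendor keeps roughly 40% of the old revenue per customer per node cycle, which is exactly the "good deal for the customers but questionable for the EDA companies" dynamic described above.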

Foundry IP libraries – Until the late 1990’s, silicon foundries like TSMC left the entire design process to their customers. TSMC received a verified GDSII file from the customer and they checked it and then generated photomasks, fabricated wafers and shipped parts to the customer. Companies like Artisan were in the business of creating physical libraries of standard cell blocks that were checked for correctness and modeling by being fabricated on a test wafer by the foundry. They were then sold to customers doing the designs to speed design of the standard, undifferentiated parts of their chips. Wouldn’t it be great if customers could have access to the entire Artisan library during the design phase and then only be charged based upon the number of cells that were actually used in their designs multiplied by the number of chips produced?

Artisan thought so. And they convinced TSMC to adopt the model, providing software to trace the usage of Artisan cells. Artisan consequently developed a stable stream of royalty revenue from TSMC, making them an attractive acquisition for ARM. I’m told that the deal was not so good for TSMC. High volume customers negotiated discounts to wafer pricing with TSMC and the standard cell libraries became part of those negotiations. As a result, the additional money that TSMC expected to receive from their customers by charging them for the use of standard cells turned out to be elusive. The bundled price of wafers plus photomasks plus IP, etc. was included in the wafer price and any incremental revenue for the cell libraries was hard to find.

How can I be so cavalier about this whole topic when, during the last twenty-five years as CEO of Mentor, my company was subjected to so much cost and revenue pressure by these model changes? The reason can be seen in my previous blog EDA Cost and Pricing on October 12th. The revenue of the EDA industry has continued to be 2% of semiconductor revenue for more than twenty years. These model changes were simply part of the way that discounts were provided to customers so that the EDA companies could stay on the learning curve and give semiconductor companies a reduced cost per transistor for design software. If the pricing models hadn’t changed, we would have had to provide those discounts in some other form because the EDA industry had to reduce its software price per transistor at the same rate that the semiconductor industry reduced its revenue per transistor.

The 20 Questions with Wally Rhines Series


Musk the Magician and Data Monetizer

Musk the Magician and Data Monetizer
by Roger C. Lanctot on 10-19-2018 at 7:00 am

Tesla Motors CEO Elon Musk pulled a rabbit out of his hat last month, making thousands of cars vanish and converting reports of production and delivery hell into market leadership for premium sedan deliveries. Following SEC legal action and a podcast on which he appeared to be drinking scotch and smoking marijuana, and after surrendering – temporarily – the chairmanship of the company, Musk stands vindicated by empty factory parking lots and a spiking stock price.

But that spiking (upward) stock price soon took another nose dive as Musk mocked the SEC’s decision and investors fled again. What endured, though, was the report from Tesla’s subsequent shareholders’ meeting showing that the Model 3 is now the best-selling mid-sized premium sedan in the U.S.

As amazing and impressive as this achievement was, it does not obscure the deeper issue of Tesla’s pursuit of automated driving. While it appears, now, that Musk misled investors and the public generally regarding the orchestration of a move to take the company private, he continues to play fast and loose with the automated driving capabilities of his cars.

According to a report in Bloomberg News, Musk sent an email indicating that Tesla needed 100 more employees to join an internal testing program linked to rolling out the full self-driving capability. Bloomberg reported that Musk wrote: “any worker who buys a Tesla and agrees to share 300-400 hours of driving feedback with the company’s Autopilot team by the end of next year won’t have to pay for full self-driving – an $8,000 saving – or for a premium interior, normally costing $5,000.

“This is being offered on a first come, first served basis,” Bloomberg reports Musk writing in his email. “Given the excitement around this, I expect it will probably be fully subscribed by noon or 1 p.m. tomorrow.”

The problem is that Musk is still referring to Autopilot as full self-driving capability when it, in fact, remains at best Level 2 with enhancements only likely to shift it to a Level 3 still requiring driver re-engagement with the driving task. This matters because the growing range of Level 2 systems – steadily advancing the capabilities of adaptive cruise control with lane keeping etc. – all appear to perform slightly differently and consumer confusion can be fatal or at least dangerous.

There is the now-infamous video of the Volvo customer at a dealership being run over by an XC60 with City Safety because a) the driver accelerated and b) the system did not have pedestrian detection. Subaru with EyeSight and Nissan’s ProPILOT are similar enhancements but with slight variations in performance. Daimler and Audi are also in the game.

GM’s Super Cruise goes all out with enhanced vehicle positioning technology from Ushr, Swift and Trimble; location data updates via wireless connections; and a driver monitor from Seeing Machines. All of these systems are Level 2, but only the GM system monitors the driver while also integrating enhanced localization.

Musk has shown he can make cars disappear and yo-yo his stock price to the dismay of short sellers and regulators. What is missing from the Tesla high wire act, though, is some candor regarding the current capabilities and long-term expectations for Autopilot.

It is worth noting the value of automated driving data imputed by Musk’s email. Musk has put a marker on data monetization at $20/hour of automated driving – but that’s probably a lowball estimate. That data is priceless to an organization that has none. Investors, meanwhile, can run that against the millions of hours of such data gathered by Cruise Automation, Waymo and others as a validation of their high-flying valuations. Thanks, Elon, but it’s still not “Autopilot.”
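The $20/hour figure falls straight out of the offer's arithmetic. The dollar amounts and the 300-400 hour range are from Bloomberg's report; which perks to count against the hours is a judgment call:

```python
# The implied price Musk puts on an hour of Autopilot driving data.
# Dollar figures and the hour range are from Bloomberg's report.
fsd_discount = 8_000  # full self-driving, given away
interior     = 5_000  # premium interior, also given away
hours_lo, hours_hi = 300, 400

print(fsd_discount / hours_hi)               # 20.0  -> the $20/hr marker
print(round(fsd_discount / hours_lo, 2))     # 26.67 at the low end
print((fsd_discount + interior) / hours_hi)  # 32.5 if the interior counts
```

So $20/hour is the most conservative reading; counting the interior and the low end of the hour range pushes the implied value above $40/hour, which is why the article calls $20 a lowball.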


ARM Turns up the Heat in Infrastructure

ARM Turns up the Heat in Infrastructure
by Bernard Murphy on 10-18-2018 at 7:00 am

I don’t know if it was just me, but I left TechCon 2017 feeling, well, uninspired. Not that they didn’t put on a good show with lots of announcements, but it felt workmanlike. From anyone else it would have been a great show, but this is TechCon. I expect to leave with my mind blown in some manner and it wasn’t. I wondered if the SoftBank acquisition had knocked them a little off their game.

This year they seem to have got their mojo back, at least judging by a press announcement I joined before the show. ARM is again swinging for the fences, this time announcing a major initiative for cloud-to-edge infrastructure support and a new processor roadmap to support that direction. Of course ARM is already well-known at the edge, but they’re also deeply embedded in base-stations, top of rack switches, gateways and WAN routers. In fact ARM claims the largest market share in units for processor IP in infrastructure (I’m sure intended as a reminder that RISC-V is still a toddler in markets ARM already dominates).

They’re also starting to have some impact in the server space (the cloud in its various manifestations), though another person on the call asked if that initiative is struggling given e.g. Qualcomm’s exit. Drew Henry, the speaker and VP/GM for the Infrastructure BU, acknowledged the QCOM change but said there will be announcements at TechCon on new server entrants which apparently will demonstrate ample continuing momentum.

So what’s the big deal with infrastructure? ARM anticipates significant growth in this area given their expectation of a trillion devices in the IoT. Smart parking and city lighting, retail, transportation and many other applications will create many wireless edge nodes needing to communicate ultimately with the cloud. And that will require layers of intelligent data reduction and traffic management between those levels; mega-servers and 5G alone won’t be enough to manage the data volume.

So ARM has announced NEOVERSE, which diverges from the Cortex world. NEOVERSE, we were told, is a brand, inclusive of technologies and services that partners will bring to the space. ARM’s contribution under this brand starts with high-performance secure IP and architectures. Drew showed us a roadmap for these IP, starting with the Cosmos platform, related to the A72, A75 and available today in 16nm. The next step up will be the Ares platform, available in 2019 at 7nm. Following Ares, in 2020 we’ll get the Zeus platform in 7nm+ and in 2021 we’ll see the Poseidon platform at 5nm. Drew expects about 30% improvement per generation in performance and features.
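If the roughly 30% per-generation claim holds, it compounds quickly across the roadmap. A sketch, assuming a uniform 1.3x gain at each step (the uniformity is my simplification, not ARM's):

```python
# Compounding Drew Henry's ~30% per-generation performance claim across
# the NEOVERSE roadmap. A uniform 1.3x per step is an assumption.
GAIN = 1.30
roadmap = ["Cosmos (16nm, today)", "Ares (7nm, 2019)",
           "Zeus (7nm+, 2020)", "Poseidon (5nm, 2021)"]

perf = 1.0
for platform in roadmap:
    print(f"{platform}: {perf:.2f}x vs Cosmos")
    perf *= GAIN
# Poseidon lands at ~2.2x Cosmos (1.3**3), if the 30% holds each step.
```

That would more than double performance between the platform shipping today and the 2021 part, which is the kind of cadence ARM needs to make a dent in infrastructure sockets.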

The next component of NEOVERSE is a wide range of solutions and, of course, an extensive ecosystem. The solutions don’t look too dissimilar from what you know around standard ARM offerings, though perhaps with more networking options and multiple accelerators, from ML and embedded FPGA to video. In the ecosystem they already have endorsements from multiple silicon and EDA providers, most of the US and Chinese cloud providers (I didn’t notice Google), big systems names like Ericsson, Nokia, Huawei and Cisco, and operators including Sprint, Orange and Vodafone.

In OS they include RedHat, Suse and Oracle; in container/virtualization they have Docker, OpenStack, VMWare, etc. In language and library support they have endorsements from OpenJDK, Python, NodeJS and GO; in devtools they listed codefresh, shippable and more. Finally, in open-source projects for networking and server they include Linaro (naturally), LF Networking and the Cloud Native Computing Foundation. Apparently some of this work is already underway, e.g. Ares is starting to appear in Linux builds. Overall not a bad starting point.

The third component of NEOVERSE is scalability. Naturally the level of solutions you want to see in the cloud will be different from what you would expect in connectivity, or the fog, or near the edge. In the cloud you want TBps and lots of cache, handling datacenter workloads. In networking, storage and security you still need performance but not at the same scale and you need the hardware features required to support workloads like NFV, SDN, IPSec and compression. At the edge, or close, demands are not nearly as challenging, though still possibly requiring support for virtualization and certainly support for wireless and/or wired upload.

This feels like more than just another arrow in ARM’s quiver. It’s an additional quiver, with arrows crafted for a very distinct target, again with strong ecosystem support. Which should help them further expand the gap between ARM and alternative solutions. At least I’m sure that’s how ARM sees it.