Detail-Route-Centric Physical Implementation for 7nm
by Alex Tan on 10-10-2018 at 12:00 pm

For many years TSMC has provided IC design implementation guidance as viewed from the process and manufacturing standpoints. The last time the TSMC Reference Flow was incremented was version 12.0, back in 2011. Since then, the increased design, process and packaging complexities of the advanced nodes have demanded more focused efforts, which have translated into an incremental set of directives such as DPT (Dual Patterning Technology), advanced packaging CoWoS with HBM2, reliability analysis, EUV, etc.

Advanced Nodes and Physical Design
Finer geometries in advanced nodes have introduced timing degradation from wire and via resistance. Reference Flow 12.0 recommends an enhanced routing methodology (such as via count minimization, routing layer segregation and wire widening) to mitigate the impact of wire and via resistance. Concurrently, there has been an increase in EDA efforts to tighten the correlation between timing optimization attained early (during synthesis/placement) and that of the post-route stage. The heuristic nature of, and boundaries created by, feed-forward point tools have partly contributed to the loss of predictability (figure 1a) and introduced a design capability gap, according to Prof. Andrew Kahng from UCSD, as seen in figure 1b.
While many block-level place and route tools have made a shift-left move to account for numerous physical effects during placement, ranging from mainstream congestion awareness to SI-, IR- and DRC-awareness, the complexity of advanced-node DRC rules is making the effort of producing a clean and optimal route more painful.

The original Avatar products
Aprisa and Apogee are two products from Avatar Integrated Systems, a leading provider of physical design implementation solutions. Aprisa is a complete place-and-route (P&R) engine, including placement, clock tree synthesis, optimization, global routing and detailed routing. Its core technology revolves around its hierarchical database and common “analysis engines,” such as RC extraction, a DRC engine, and a fast timer to solve complex timing issues associated with OCV, SI and MCMM analysis. Aprisa uses multi-threading and distributed processing technology to further speed up the process. The other product, Apogee, is a full-featured, top-level physical implementation tool that includes prototyping, floor planning, and chip assembly, integrated with the unified hierarchical database. Its unique in-hierarchy-optimization (iHO) technology is intended to close top-level timing during chip assembly through simultaneous optimization of the design top and block levels.

Why a refresh is needed

The performance impact of both wire and via resistance is more pronounced in sub-16nm process nodes. Routes are much narrower and longer because wire widths have shrunk faster than standard cell dimensions, and complex design rules do not make redundant via insertion any easier. As a result, the transition waveform shape is affected for both short and medium length routes. Furthermore, increased cross-coupling capacitance and net resistance simultaneously induce timing impact through larger crosstalk effects. In the end, wire delay takes an increasing percentage of cycle time. By the 7nm process node, for nets of significant length, wire delay is measured as more than half of total stage delay and critical paths are much harder to close (as shown in figures 2a, 2b).
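To make the trend concrete, here is a minimal first-order sketch of a single driver-plus-wire stage using an Elmore-style estimate. The stage_delay helper and every resistance and capacitance number are hypothetical placeholders chosen only to show the qualitative behavior, not foundry data.

```python
# Illustrative only: a first-order Elmore-style estimate showing how the wire
# term grows relative to the gate term as wire resistance per micron rises.
# Every number below is a hypothetical placeholder, not foundry data.

def stage_delay(r_drv_ohm, c_load_ff, r_wire_ohm_per_um, c_wire_ff_per_um, length_um):
    """Return (gate_delay_ps, wire_delay_ps) for one driver + wire + load stage."""
    r_wire = r_wire_ohm_per_um * length_um
    c_wire = c_wire_ff_per_um * length_um
    gate_delay = r_drv_ohm * (c_wire + c_load_ff) * 1e-3               # ohm*fF -> ps
    wire_delay = (0.5 * r_wire * c_wire + r_wire * c_load_ff) * 1e-3   # distributed wire + far-end load
    return gate_delay, wire_delay

for r_per_um in (2.0, 8.0, 20.0):   # assumed rise in wire resistance at tighter metal pitches
    g, w = stage_delay(r_drv_ohm=1000, c_load_ff=2.0,
                       r_wire_ohm_per_um=r_per_um, c_wire_ff_per_um=0.2, length_um=100)
    print(f"Rwire={r_per_um:5.1f} ohm/um  gate={g:6.2f} ps  wire={w:6.2f} ps  "
          f"wire share={w / (g + w):.0%}")
```

With these assumed values the wire share of the stage delay climbs from roughly 10% to just over half as the per-micron wire resistance grows tenfold, which is the qualitative behavior the figures describe.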

The conventional placement-centric place-and-route architectures, with their separate sequential flows, are no longer adequate to address these interconnect-related effects, as they cause significant pre-route versus post-route timing correlation issues, excessive design iterations, and suboptimal QoR.

Aprisa re-engineered for 7nm and beyond
During the TSMC 2018 Open Innovation Platform in Santa Clara, Avatar announced the availability of a new architecture for its Aprisa and Apogee solutions. With its breakthrough detailed-route-centric architecture addressing advanced-node challenges, the new place-and-route provides more than 2X faster design closure with better QoR than conventional counterparts.

As one of Avatar’s customers, Mellanox provides end-to-end Ethernet and InfiniBand intelligent interconnect solutions for servers, storage, and hyper-converged infrastructure. Their SoC designs have both unique characteristics and challenges, involving vast I/O interconnects fabric and utilizing advanced process nodes to reduce their switching latency.

“Advanced place-and-route technology is important for our silicon design activities as we move to more advanced processes,” said Freddy Gabbay, vice president of chip design at Mellanox Technologies. “The detailed-route-centric technology introduced with the new release of Aprisa consistently delivered better quality-of-results and predictable DRC and more than two times faster design time.”

Another customer, eSilicon, is a leading provider of semiconductor design and manufacturing solutions. eSilicon guides customers through a silicon-proven design flow, from concept to volume production. Its solutions target optimal PPA for ASIC/system-on-chip (SoC) designs, custom IP and manufacturing.

“eSilicon has used Aprisa on several very large and complex FinFET chips across several process nodes, including 16nm and 14nm,” said Sid Allman, vice president, design engineering at eSilicon. “We have successfully used Aprisa at both the block and top level with very good results. We expect to apply the new release to our advanced 7nm work as well.”

Avatar re-architected Aprisa and Apogee using a three-pronged approach, as illustrated in figure 3:

Unified Data Model (UDM) is the single database architecture for placement, optimization, routing, and analysis. All Aprisa engines utilize the same data models, objects, and attributes in real time.

Common Service Engine (CSE) enables analysis engines and optimization engines to work in concert. Any implementation engine can make dynamic real-time calls to analysis engines at will. Optimizations are made with accurate data the first time. Extraction and timing data gets updated dynamically and seamlessly.

Route Service Engine (RSE) provides proper routing information on a per-net basis to any engine within the system that needs it. The RSE manages the route topology during all phases of optimization and reports to the calling optimization engine the net routing topology, such as metal layers used, RC parasitics and crosstalk delta delay.
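As a purely conceptual illustration of the calling pattern described above (this is not Avatar’s actual API), a route-aware optimization step might look something like the following sketch, in which the optimization engine queries a route service for current per-net data instead of relying on a stale pre-route estimate. All class names, fields and values here are hypothetical.

```python
# Conceptual sketch only -- this is NOT Avatar's API. It illustrates the calling
# pattern described above: an optimization engine asks a route service for
# per-net topology data instead of relying on a stale pre-route estimate.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NetTopology:
    layers: List[str]        # metal layers the net is (or will be) routed on
    wire_r_ohm: float        # extracted or predicted net resistance
    wire_c_ff: float         # extracted or predicted net capacitance
    xtalk_delta_ps: float    # crosstalk delta delay reported for the net

class RouteServiceEngine:
    """Hypothetical stand-in for a route service that owns route topology."""
    def __init__(self, topologies: Dict[str, NetTopology]):
        self._topologies = topologies

    def query(self, net_name: str) -> NetTopology:
        return self._topologies[net_name]

def needs_upsizing(net_name: str, rse: RouteServiceEngine, target_delay_ps: float) -> bool:
    """Toy optimization step: upsize the driver only if the route-accurate
    delay estimate (lumped RC plus reported crosstalk) misses the target."""
    topo = rse.query(net_name)                          # dynamic, always-current call
    est_delay = topo.wire_r_ohm * topo.wire_c_ff * 1e-3 + topo.xtalk_delta_ps  # ohm*fF -> ps
    return est_delay > target_delay_ps

rse = RouteServiceEngine({"clk_div": NetTopology(["M3", "M4"], 800.0, 15.0, 3.0)})
print(needs_upsizing("clk_div", rse, target_delay_ps=10.0))   # True with these made-up numbers
```

The point is the protocol rather than the data structures: because the optimizer asks the route service at the moment it needs an answer, there is no separate snapshot of routing data to drift out of date.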

Only by predicting and guiding the route topology early in the design can optimization be performed effectively and efficiently, as reflected in the comparison of results with and without the route-centric version in figure 4.

The new release also includes full 7nm support; IR-aware/hotspot avoidance; auto EM violation avoidance and fixing; native support of path-based analysis; design pessimism reduction and up to 20% power reduction.

“We are committed to developing new breakthrough technologies to address the most challenging designs in the industry,” said Dr. Ping-San Tzeng, Chief Technology Officer at Avatar Integrated Systems. “This breakthrough architecture to our flagship products provides leading design teams with much faster design closure while improving the quality of results at 16nm and below.”

To recap, timing accuracy and predictability throughout placement and routing are imperative to ensure timing closure convergence. The detailed-route-centric architecture in the new Aprisa/Apogee, coupled with the unified data model and integrated optimization/analysis engines, keeps optimization data consistent and up-to-date throughout the flow, which helps deliver improved quality-of-results, reduces iterations and speeds design convergence more than 2X over the competition.

Avatar will be highlighting Aprisa and Apogee’s new architecture at Arm TechCon, October 16 – 18, 2018, at the San Jose Convention Center in booth #827.


Crossfire Baseline Checks for Clean IP Part II
by Daniel Nenni on 10-10-2018 at 7:00 am

In our previous article bearing the same title, we discussed the recommended baseline checks covering cell and pin presence, back-end checks, and some front-end checks related to functional equivalency. In this article, we’ll cover the extensive list of characterization checks, which includes timing arcs, NLDM, CCS, ECSM/EM, and NLPM.


Timing Arc Checks
The recommended timing arc checks should include checking equivalence of WHEN and SDF conditions in a given liberty file, condition consistencies across different timing corners, and Liberty vs. Verilog/VHDL arc consistencies. This is essential in order to ensure accurate digital simulations and timing analysis.
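As a minimal sketch of what such a condition-consistency check might look like (this is not Crossfire’s implementation), the snippet below treats two WHEN/SDF condition strings, assumed to have already been extracted from the Liberty files, as Boolean expressions and compares them by brute-force truth table.

```python
# Minimal sketch (not Crossfire's implementation): check that two timing-arc
# condition strings are logically equivalent by brute-force truth table.
# Assumes the WHEN/SDF strings were already extracted from the Liberty files
# and use simple Liberty-style operators: !, &, | and plain pin names.

import itertools
import re

def _to_python(expr: str) -> str:
    return expr.replace("!", " not ").replace("&", " and ").replace("|", " or ")

def conditions_equivalent(cond_a: str, cond_b: str) -> bool:
    pins = sorted(set(re.findall(r"[A-Za-z_]\w*", cond_a + " " + cond_b)))
    pa, pb = _to_python(cond_a), _to_python(cond_b)
    for values in itertools.product([False, True], repeat=len(pins)):
        env = dict(zip(pins, values))
        if eval(pa, {"__builtins__": {}}, env) != eval(pb, {"__builtins__": {}}, env):
            return False
    return True

print(conditions_equivalent("!A & B", "B & !A"))   # True: same condition, different spelling
print(conditions_equivalent("!A & B", "A | B"))    # False: would flag a corner mismatch
```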

NLDM Characterization Checks
NLDM related characterization QA should include consistency checks between delay and transition tables, ascending capacitance and transition index values, correct number of indices, and range checks for delay, transition, setup/hold, and minimum period. Ensuring that cell rise and fall delay values don’t vary too much can be a valuable check for clocks, as well as other ports that may require a balanced delay.

It may also be prudent to ensure that delays increase with increasing output capacitance and input transition times, and with decreasing supply voltage. At the same time, checking that both transition and capacitance values don’t fall outside the range of the defined maximum transition and capacitance is also a necessity. This ensures that no extrapolation is needed when characterized data is used in a design flow environment. In terms of transition trip points, one must ensure that they are symmetrical and lie a given percentage outside of the delay trip points. Other accuracy checks should include checking for non-changing or zero delay or transition values in a given table.
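The index and monotonicity rules above translate naturally into simple table assertions. Below is an illustrative sketch (again, not Crossfire’s implementation) that assumes the NLDM table has already been parsed into index lists and a 2-D value matrix.

```python
# Illustrative sketch (not Crossfire's implementation) of two NLDM table checks
# described above, assuming the table was already parsed into index lists and a
# 2-D value matrix (rows = input transition, columns = output load).

def indices_strictly_ascending(index_values):
    """Capacitance / transition index values must be strictly ascending."""
    return all(a < b for a, b in zip(index_values, index_values[1:]))

def delays_monotonic(table, trans_index, load_index):
    """Delay should not decrease with a larger input transition (down a column)
    or a larger output load (across a row); also checks the index counts."""
    if len(table) != len(trans_index) or any(len(row) != len(load_index) for row in table):
        return False                                   # wrong number of indices
    rows_ok = all(all(a <= b for a, b in zip(row, row[1:])) for row in table)
    cols_ok = all(all(table[i][j] <= table[i + 1][j] for i in range(len(table) - 1))
                  for j in range(len(load_index)))
    return rows_ok and cols_ok

# Hypothetical 3x3 cell_rise table.
trans = [0.01, 0.05, 0.20]
load  = [0.5, 2.0, 8.0]
delay = [[0.02, 0.04, 0.09],
         [0.03, 0.05, 0.11],
         [0.06, 0.08, 0.15]]
print(indices_strictly_ascending(trans), indices_strictly_ascending(load))   # True True
print(delays_monotonic(delay, trans, load))                                  # True
```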

When comparing two or more PVT corners, large delay deviations should be closely monitored, exact values should not repeat, pin properties and parameters should be consistent, and mode definitions should match. Capacitance and transition properties should be consistently defined for all pins. More importantly, ensuring the same tables and arcs are defined across all given corners will provide a more stable and error-free timing analysis down the line.

Constraint values and related timing information such as setup and hold tables should be defined in matching pairs. Each matching pair of setup and hold tables should have equal indices, and the sum of setup and hold values should be greater than zero. For clock-related pins, ensuring that pulse width definitions are present is also necessary.

Additional consistency checks should flag cases where duplicated or illegally defined parameters are used and ensure user-defined parameters are correct. Temperature, voltage, and process corner parameters should be consistent with the library and file name. On top of this, units must be consistent and defined as expected. Pin-related checks should guarantee the presence of arcs, the use of required tables, and the omission of obsolete ones. An important, yet often overlooked, check should ensure that related pin terminals are not defined as outputs.

For standard cell libraries, cell-to-cell trends with respect to changes related to increasing output drive should be closely monitored. They include area, cell footprint, pin attributes, arc consistency, delay, and power monotonicity. Also, ensuring consistency among the attributes pertaining to power switch cells and their pins will guarantee correct usage of specific cells.

On-chip variation related timing checks should include table presence, monotonicity, and guarantee that all files are paired correctly (when comparing NLDM to AOCV/POCV files).

CCS Characterization Checks

CCS power characterization can also benefit from many of the above checks, along with ensuring that all given templates follow the Liberty specification guidelines. The nominal voltage must match the operating condition voltage; this is essential in guaranteeing correct data for a given voltage corner. The dynamic current group must be present for all cells, and the current must be positive for power pins and negative for ground pins. Additionally, the reference time must be greater than zero since it’s related to physical circuit behavior. The same checks also apply to leakage current. In the absence of gate leakage values, current conservation must hold within the same leakage current group. If all power and ground pins are specified with leakage currents, the sum of all currents should be zero. Finally, when dealing with intrinsic resistance, total capacitance, or intrinsic capacitance, values should not be negative or zero.
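As an illustration of the sign and conservation rules just described, a check over per-pin leakage currents could look like the following sketch. The data and function are hypothetical and simplified, not Fractal’s implementation.

```python
# Illustrative sketch of the sign and conservation rules described above,
# using hypothetical per-pin leakage currents (not Fractal's implementation).

def leakage_currents_ok(pin_currents, tol=1e-12):
    """pin_currents maps pin name -> (pg_type, current_amps). Power-pin currents
    must be positive, ground-pin currents negative, and (absent gate leakage)
    the currents should sum to zero within a small tolerance."""
    for pin, (pg_type, current) in pin_currents.items():
        if pg_type == "primary_power" and current <= 0:
            return False, f"{pin}: power pin leakage current must be positive"
        if pg_type == "primary_ground" and current >= 0:
            return False, f"{pin}: ground pin leakage current must be negative"
    residual = sum(current for _, current in pin_currents.values())
    if abs(residual) > tol:
        return False, f"currents do not sum to zero (residual {residual:.3e} A)"
    return True, "ok"

cell_leakage = {"VDD": ("primary_power", 4.2e-9), "VSS": ("primary_ground", -4.2e-9)}
print(leakage_currents_ok(cell_leakage))   # (True, 'ok')
```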

ECSM/EM Characterization Checks
Effective Current Source Model (ECSM) and Electro-Migration (EM) related checks are in line with those specified for CCS. Besides ensuring that all tables are consistent across all corners, current values must also be checked for monotonicity across the given capacitive loads. Checking for the presence of a given combination of average, peak, and RMS current types may be a design-specific requirement that would need to be qualified as well.

NLPM Characterization Checks
Last, but not least, power-related characterization checks should include the standard and expected trends related to capacitance, transition times, and voltage. Power is expected to increase when load capacitance or transition times increase, and to decrease when supply voltages decrease. In terms of leakage power, one might want to ensure that values fall within an expected range and check whether pins are correctly defined for a given condition, whether required or missing.

Conclusion
Although we have already mentioned this in the previous article, it is important to repeat it… IP qualification is an essential part of any IC design flow. A correct-by-construction approach is needed, since fixing a few bugs close to tapeout is a recipe for disaster. Given that, IP designers need a dedicated partner for QA solutions that keeps pace with the QA needs of the latest process nodes. In-house QA expertise increases productivity when integrated with Fractal’s Crossfire validation tool: all framework, parsing, reporting, and performance optimization is handled by the software. On top of that, with a given list of recommended baseline checks, we ensure that all customers use the same minimum standard of IP validation for all designs.

Also Read: Crossfire Baseline Checks for Clean IP


Should Companies be Allowed to Hack Back after a Cyberattack?
by Matthew Rosenquist on 10-09-2018 at 12:00 pm

Potential for Hack-Back Legislation. Government officials and experts are weighing in on the concept of ‘hacking back’, the practice of potentially allowing U.S. companies to track down cyber attackers and retaliate.

The former head of the CIA and NSA outlined his thoughts to the Fifth Domain on the hack-back issue currently being debated by Congress. He is cautious but has expressed an openness to allowing some level of retaliation by private organizations.

General Hayden is very sharp and brings unprecedented national intelligence experience to the table, but I must disagree with his position on the risks of enabling businesses to ‘hack back’.

I have had the pleasure of an in-depth 1:1 discussion with him regarding the long-term nation-state threats to the digital domain and have always been impressed with his insights. However, this is a different beast altogether.

Allowing U.S. companies latitude to hack back against cyber attackers is very dangerous. I believe he is underestimating the unpredictable nature of business management when they find themselves under attack. Unlike U.S. government agencies, which firmly align themselves to explicit guidance from the Executive branch, the guard-rails for businesses are highly variable and can be erratic. Decisions can be made quickly, driven by heated emotion.

The average American business does not understand the principles of active defense, proportional damage, or have insights to establish and operate within specific rules of engagement. They certainly don’t have the capacity to determine proper attribution, gather necessary adversarial intelligence, or even understand the potential collateral damage of weapons they may use.

Instead, we can expect rash and likely volatile responses that lash out at perceived attackers. Unfortunately, cyber adversaries will quickly seize on this behavior and make their attacks appear as if they are coming from someone else. It will become a new sport for miscreants, anarchists, social radicals, and nation states to manipulate their targets into hacking-back innocent parties. As the meme goes, “On the Internet, nobody knows you’re a dog”.

Hack Back Consequences
What happens when threats impersonate hospitals, critical infrastructure, or other sensitive organizations when they attack? The hack-back response may cause unthinkable and unnecessary damage.

Congress is also considering allowing companies to ‘hack back’. Senator Sheldon Whitehouse recently indicated he is considering a proposal to allow companies to “hack back” at digital attackers.

Weaponizing Businesses
I think the whole “hack back” movement is entirely misguided.

Many compare it to ‘stand your ground’ situations as they try to convince others to join in public support. But such verbal imagery is just not applicable. A better analogy is saying that if someone breaks into your house, you should have the right to break into their home, or the home of whomever you think did it (because you really won’t know). Most would agree it is not a good idea when framed that way.

Now consider whom you will be empowering to make such decisions. Businesses that were not able or responsible enough to manage the defense of their environment in the first place will be given the authority to attack back. Yet it is unlikely they will truly understand where the actual attack originated. They will be acting out of rage and fear, with weapons whose potential collateral and cascading damage they do not grasp.

Every time I have heard an executive wanting to be able to ‘hack back’, it was someone who was not savvy in the nuances of cybersecurity and lacked the understanding of how incredibly easy it is to make an innocent 3rd party look like they are the ones conducting an attack. When I brought up the fact that it is easy to make it appear like someone else was behind the strike, such as a competitor, government agency, or hospital, the tone radically changed. Attribution for cyberattacks can take experts months or even years. Businesses have neither the expertise nor the patience to wait when they want to exact revenge.

Simple Misdirection
If allowed, hacking back will become a new sport for miscreants, anarchists, social radicals, and nation states to manipulate their adversaries into making such blunders or be hacked-back by others who were fooled into thinking they were the source.

Allowing companies to Hack Back will not deter cyberattacks, rather it will become the new weapon for threats to wield against their victims.

Interested in more insights, rants, industry news and experiences? Follow me on Medium, Steemit and LinkedIn for insights and what is going on in cybersecurity.


Top 10 Highlights from the TSMC Open Innovation Platform Ecosystem Forum
by Tom Dillinger on 10-09-2018 at 7:00 am

Each year, TSMC hosts two major events for customers – the Technology Symposium in the spring, and the Open Innovation Platform Ecosystem Forum in the fall. The Technology Symposium provides updates from TSMC on:
Continue reading “Top 10 Highlights from the TSMC Open Innovation Platform Ecosystem Forum”


Closing Coverage in HLS
by Alex Tan on 10-08-2018 at 12:00 pm

Coverage is a common metric with many manifestations. During the ‘90s, both fault and test coverage were mainstream DFT (Design For Testability) terms used to indicate the percentage of a design that is observable or tested. Their pervasive use then spilled over into other design segments such as code coverage, formal coverage, STA timing analysis coverage, etc.

Motivation for code coverage
While the term coverage may provide the management team with a sense of how much testing was done on a targeted code base or design, to the code developers or system designers it offers an additional measure of how stable their code is before it is fully realized or used in production. In the software development domain, code coverage measures the percentage of source code exercised during testing by a collection of test suites; the underlying axiom is that a high percentage indicates a lower chance of undetected bugs.

The primary code coverage criteria include function coverage (measures how often a program function is called); statement coverage or line coverage (measures the number of statements that are executed); branch coverage (measures the number of branches executed, such as if-else constructs); and condition coverage or predicate coverage (measures whether each Boolean sub-expression has been evaluated to both true and false).
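The difference between these criteria is easiest to see on a tiny example. The sketch below is generic and not tied to any particular coverage tool; the function and the test calls are purely hypothetical.

```python
# Generic illustration of the coverage criteria above, not tied to any tool.
# With only the two calls shown, every statement executes (full statement
# coverage), but the False branch of the 'if' is never taken (incomplete
# branch coverage) and 'b > 0' never evaluates to False (incomplete
# condition coverage).

def clamp_sum(a: int, b: int) -> int:
    total = a + b                 # statement: executed by every call
    if a > 0 and b > 0:           # one branch, two conditions to cover
        total = min(total, 100)   # reached only when both conditions are True
    return total

clamp_sum(60, 70)   # takes the True branch: a > 0 True, b > 0 True
clamp_sum(80, 90)   # same path again -- adds no new coverage
# Still missing: a call taking the False branch (e.g. clamp_sum(-1, 5)) and one
# where 'a > 0' is True but 'b > 0' is False (e.g. clamp_sum(5, -1)).
```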

Although it is hard to achieve full coverage, and the relationship between the attained code coverage level and the probability of a program being bug-free is non-linear, code coverage data offers many benefits. These include improving the completeness and robustness of the existing test suites (for example, guiding the generation of missing test cases, minimizing a test suite to reduce runtime, or guiding fuzz testing) and enhancing the regression sensitivity of a test suite.


SoC design, HLS and Catapult

Emerging applications such as 5G, automotive and IoT have introduced more embedded sensors, specialized processors and communication blocks such as visual processing, AI neural networks and high-speed communication units. To speed up development and lower the inception cost of such products, more system designers are migrating their code abstraction from RTL to high-level synthesis (HLS), as shown in figure 1. The overall development time can be cut by half with high-level synthesis. While traditional software coverage tools can be run on the C++ source code, the generated results are inaccurate and may mislead designers into believing they have good coverage, as discussed in more detail below.

Mentor’s Catapult® HLS Platform provides a complete C++/SystemC verification solution that interfaces with Questa® for RTL verification, as shown in figure 2. The platform consists of a design checker (Catapult Design Checker or CDesign Checker), a coverage tool (Catapult Code Coverage or CCOV), a high-level synthesis tool (Catapult HLS) and a formal tool (Sequential Logic Equivalence Check).

To gain better clarity on how these platform sub-components interact, we should track the code as it is being processed. Initially, the code (containing Assert and Cover statements) is applied as input to CDesign Checker for linting and formal analysis to uncover any coding or language related bugs. A subsequent CCOV run is done on the checked code to provide hardware-aware coverage analysis. Once the code is cleanly analyzed, HLS can be performed to produce power-optimized and verification-ready RTL. HLS propagates assertions and cover points to the output RTL and generates the SCVerify infrastructure for RTL simulation with Questa, allowing the reuse of the original C++ testbench to verify the RTL. As a final step, SLEC-HLS formally verifies that the C++ exactly matches the generated RTL code.

Catapult vs traditional Code Coverage
Code coverage analysis in CCOV has been enhanced to be more hardware-context aware. The intent is also to enable designers to achieve the same coverage level for both the high-level source code and the post-HLS RTL. The four main coverage techniques used to analyze the C++ code are statement, branch, focused expression, and toggle coverage. At first glance they look similar to those of a generic coverage tool, but in actuality they are not the same, as captured in the left table of figure 3. The right table shows differences between CCOV and a traditional code coverage tool such as GCOV (GNU Coverage).

Furthermore, mainstream software coverage tools are not hardware-context-aware in nature.

Using Catapult Code Coverage
A complete HLS-to-RTL coverage flow starts with stimulus preparation for the HLS model in either C++ or SystemC. It is followed by running CCOV to assess whether the targeted coverage is met; otherwise more tests are added. Optionally, the designer can exclude areas of the design from coverage analysis. CCOV captures all the coverage metrics into the UCDB, and HLS is run to produce power-optimized RTL.

The generated coverage results, captured in the Questa Unified Coverage Database (UCDB), can later be used within the context of the proven Questa verification management tools, as shown in figure 5. This CCOV integration with the UCDB provides assurance of complete and accurate RTL coverage analysis.

If the Questa simulator is used with code coverage turned on, all coverage metrics are added into the UCDB. For cases requiring further verification, such as an unreachable code segment, the designer can use Questa CoverCheck to help formally prove unreachability and apply the necessary directives to the simulator (or to the UCDB) for its exclusion. The previously generated SCVerify settings and makefile can be used to simulate the RTL. The process is iterated with more tests until the RTL code coverage target is reached.

In conclusion, migrating to HLS delivers proven cost savings and significantly shortens RTL development time. Mentor’s Catapult hardware-aware code coverage is a key metric in the HLS flow and bridges coverage of higher-abstraction code (C++/SystemC) with RTL, enabling fast convergence to verification sign-off.

For more details on Catapult HLS please check HERE.


TSMC and Synopsys are in the Cloud!
by Daniel Nenni on 10-08-2018 at 7:00 am

EDA has been flirting with the cloud unsuccessfully for many years now, and it really comes down to a familiar question: Who can afford to spend billions of dollars on data center security? That is similar to the question that started the fabless transformation: Who can afford to spend billions of dollars on semiconductor manufacturing technology?

TSMC has partnered with cloud vendors Microsoft and Amazon to bring EDA into the 21st century. I have said it before, if anybody could do it TSMC could, which makes TSMC all the more sticky as a pure-play foundry. What other foundries have the ecosystem and trust of the semiconductor industry to do this?

The one issue that is still in process is the software business model. From what I am told EDA software licensing has not changed to a traditional pay-per-use cloud model yet. It really is uncharted territory so let’s look at how we got EDA licensing to where it is today.

We started with perpetual licenses that were locked to a specific machine (not good for EDA). Next came WAN licensing, which would let a perpetual license float around using a license server (good), followed by the flexible access model (FAM), a three-year all-you-can-eat approach offered by a specific vendor (horribly not good). The software subscription licensing that we use today came next, where you lease a software license for three years (very good). One company added a remix clause that allowed customers to change the license counts from one product to another (not good). The EDA company that I previously worked for added weekly tokens that can be used for peak simulation/verification times (very good). The token model worked quite well, added much more total revenue than previously thought, and gave chip designers more time simulating and verifying. I feel that pay-per-use cloud pricing would have a similar result: additional revenue above and beyond annual EDA budgets, and better chips, absolutely.

The other thing that I want to point out is how important your relationship with the foundry is. I have made a career of it, helping emerging EDA and IP companies work with the foundries, creating revenue streams inside the foundry and outside with the top foundry customers. It is interesting to note that Cadence and Synopsys are the two EDA partners TSMC chose to start with. I’m sure the others will follow, but take note: Synopsys, the number one EDA and IP company, does not offer its own cloud; it is all-in with TSMC.

One of the keynotes at the TSMC OIP conference last week was Kushagra Vaid, GM and Distinguished Engineer at Microsoft Azure (cloud). Before joining Microsoft in 2007 he spent 11+ years designing microprocessors at Intel. It is always nice to talk semiconductor design with someone who actually designed semiconductors. I spoke with Kushagra and Suk Lee after lunch and am convinced that, after numerous failed attempts, EDA is finally in the cloud and will stay there, in my opinion.

“Microsoft Azure is pleased to be a TSMC premier partner in the OIP Cloud Alliance, and we’re honored to receive a 2018 partner of the year award from TSMC for our joint development of the VDE cloud solution,” said Kushagra Vaid, GM and Distinguished Engineer, Azure Hardware Infrastructure, Microsoft Corp. “Our collaboration with TSMC will help usher in modern silicon development that leverages the capabilities of the Azure cloud platform.”

“Synopsys has been a TSMC OIP Alliance member for EDA flows and IP for 11 years, and we have expanded our partnership with TSMC to enable IC design in the cloud,” said Deirdre Hanford, co-general manager, Synopsys Design Group. “We have collaborated with Amazon Web Services and Microsoft Azure to provide a secure and streamlined flow for TSMC VDE. The Synopsys Cloud Solution has passed the rigorous TSMC security and performance audits and is ready for our mutual customers to design in the cloud with TSMC collateral using Synopsys tools and IP.”

Synopsys Announces Availability of TSMC-certified IC Design Environment in the Cloud

TSMC Recognizes Synopsys with Four Partner Awards at the Open Innovation Platform Forum

Synopsys Design Platform Enabled for TSMC’s Multi-die 3D-IC Advanced Packaging Technologies

Synopsys and TSMC Collaborate to Develop Portfolio of DesignWare IP for TSMC N7+ FinFET Process

Synopsys Digital and Custom Design Platforms Certified on TSMC 5-nm EUV-based Process Technology

Synopsys Delivers Automotive-Grade IP in TSMC 7-nm Process for ADAS Designs


The real race for superiority is TSMC vs Intel
by Robert Maire on 10-07-2018 at 7:00 am

Recent talk of AMD vs Intel market share is misguided; the real race for superiority is TSMC vs Intel and, underlying that, tech dominance between the US and China.

There has been much discussion of late about market share between Intel and AMD and how much market share AMD will gain at Intel’s expense due to Intel’s very late 10NM technology node. On the surface, these may be the minor symptoms of a deeper conflict between Intel and TSMC, and ultimately between the US and China for technology dominance.

The real root cause of AMD’s resurgence may not be only Intel’s stumble but GlobalFoundries’ stumble as well. GloFo’s inability to deliver to AMD allowed AMD to go outside of that relationship and hook up with TSMC, which “leapfrogged” it into a truly competitive position.

At some point in time (it’s not a question of if but rather when), TSMC will be subsumed by China, along with Taiwan. This makes the AMD versus Intel story really all about future technology dominance between the US and China.

Right now TSMC (China) appears to be in the lead even though Intel (USA) appears to be finally recovering from its large 10NM stumble. TSMC long ago won the foundry wars as it makes Apple chips, communications and video chips. On the logic side, the only thing left for TSMC to win is CPU/PC/server chips which AMD will serve as the vehicle for.

Although it can be argued that Samsung is still the number one chip maker by dollar volume, that is obviously due to its dominance of memory as it doesn’t compare to TSMC in the more technologically important foundry market.

Although memory is also obviously extremely important in today’s data obsessed world, we think China can more easily copy both NAND and DRAM than logic in the future. There are a number of memory fabs in China already starting on that quest.

With GloFo (owned by Abu Dhabi) out of the race, Intel is the only real participant in the semiconductor race for the US. Intel in recent years has been focused on everything and anything not core semiconductor: software, AI, AR, VR and drones… and of course Mobileye. One could argue that BK took his eye off the ball, distracted by the lure of new shiny toys. We would hope that Intel’s new CEO returns focus to its core technology heritage and doubles down on that focus.

Maybe Jerry Sanders’ (founder of AMD) quote “real men have fabs” is even more important today, as the real “fab king”, TSMC, owns the foundry industry and is the barbarian at the gates of the CPU industry.

TSMC will make more money from AMD than AMD will
We have pointed out in previous articles that TSMC is the dominant partner in the relationship with AMD. AMD needs TSMC more than TSMC needs AMD. Now that the good ship GloFo has been burned to the ground by its 7NM abandonment, AMD has no escape from TSMC island (Samsung is not a viable rescue). TSMC can control AMD’s profit margins by controlling their chip supply costs. They can also control AMD’s market share by dropping costs to AMD to increase share versus Intel or raising costs to tighten the competition. TSMC is at the controls…

AMD is now largely a captive puppet of TSMC
What is the correct market valuation of a captive puppet? We had suggested in previous notes that AMD was getting overly expensive. The market appears to finally, belatedly, be figuring this out. AMD’s stock seems to have discounted huge share gains and profitability, both of which are still further out in the future.

We think analysts would be mistaken to assume that AMD’s supply costs from TSMC will fall sharply or even remain constant as TSMC ramps up supply. AMD’s model is not that of a fab owner with high fixed costs and marginal incremental costs. TSMC controls AMD’s financial model going forward.

The relationship between AMD and GloFo was much more even handed as both sides needed one another more equally than the one sided TSMC/AMD relationship.

Design versus Fabrication
Although modern chip design is more critically complex and important than ever, the fabrication of chips into silicon (Moore’s Law) has become exponentially more complex and has hit many more costly and technically difficult barriers. Just compare the revenues of EDA companies versus chip tool companies. Look at the cost of building a fab or mask sets.

While AMD has great CPU designs and great video and AI capabilities, those designs have no value without the ability to fab them. TSMC brings that to the table, and at a level equal to or better than Intel can.

At the end of the day, it’s the fab’s capability that wins the race….. a great design is worthless without a fab that can make it a reality…..

10NM finally out of the woods?
We were the first to break the news of Intel’s 10NM delay years ago. At the time we never would have imagined that the delay would be this long. This was far from normal as previously Intel’s “Tick Tock” strategy was as precise as a Swiss watch. Something happened, not clear what, but we think it was beyond just a technical barrier.

High on the rumor mill of causes of the delay are discussions of cobalt insertion into the manufacturing process. This sounds somewhat similar to the industry’s switch from aluminum to copper, only a few orders of magnitude harder. It is interesting to note that TSMC has not done cobalt yet, so perhaps it has not suffered the same pain but may in the future. It is also interesting to note that GloFo was trying cobalt as well.

In any event it seems that Intel has “broken the code” and is now yielding better or at least enough to ramp production starting next year.

As most investors know, Intel’s 10NM and TSMC’s 7NM are rough equivalents in geometry and performance. Intel ramping in Q1 2019 would put them about 9 months behind TSMC’s leading edge which has already ramped.

It sounds like TSMC is already pushing hard at 5NM so Intel is going to have to go even harder to make up for lost time. So far TSMC has not gotten tripped up but it has yet to do cobalt and/or EUV. Perhaps if TSMC hits some hurdles Intel may have a chance to catch up but I would not count on it.

The US versus China in the fight for technology dominance
China has already proven to be a formidable competitor in software, Internet, AI, etc. It has a $100B checkbook to advance its semiconductor industry. So far they have made good (perhaps not great) progress. TSMC alone is a big enough prize in the technology race to justify getting even more aggressive on Taiwan. TSMC coupled with some memory fabs on the mainland would be a lethal combination, way bigger than Samsung.

Maybe the US government will wake up and support the US semiconductor industry more. Maybe encourage Intel and Micron to merge. Maybe seek a buyer for GloFo Malta that will pick up the baton. Maybe put export restrictions on key US tool technology to delay China’s ascent.

So far, there is no such support from the government for the industry that holds the key to future technology dominance. There have been some very small import tariffs on Chinese goods related to semiconductors but nothing more.

Even though China is obviously no longer a communist country, we think a Lenin quote is appropriate: “The capitalists will sell us the rope with which to hang them.” In other words, we will continue to supply China with the technology that they will eventually dominate us with.

The Stocks
As we have previously suggested, we think AMD is overvalued, as investors have stampeded into the stock on a good story without digging much deeper. While we think there is a lot of upside in AMD, much of it is already in the stock, leaving not a lot of room for any negative news.

As for Intel, ramping 10NM is obviously good news, but it won’t really impact things until at least Q1 2019 at best, and realistically speaking Q2. In the meantime, Intel is still short of 14NM capacity, which has caused it to perform unnatural acts to serve unexpected demand. It has also ticked up capex to try to help out, but this won’t fix the 14NM shortage for at least a couple of quarters as equipment has to roll in. This suggests that AMD will get some opportunistic share gains near term.

Semi Equipment Stocks – News remains negative…RTEC preannounces
As for the equipment companies, the news flow continues to be negative. RTEC pre-announced a roughly 10% shortfall in revenue that will cut EPS by a third or so.

The weakness for Rudolph appears broad based, not just Samsung and not just memory, and sounds like a fair number of tools have either slipped into next year or been canceled.

This also fits with our view of another 10% down leg for the quarter, following on KLAC’s reset of Q4 expectations. The chip equipment flu which started with Samsung has spread to Micron, GloFo, TSMC and others. Intel seems to be the only chip company increasing capex near term, but obviously not enough to make up for all the other cuts.

It has been a while since the chip equipment industry last had a negative pre-announcement. Things have been that good for too long. RTEC’s pre-announcement underscores that we are in a standard cyclical downturn. At this point 2018 H2 is all downhill, and the real question is when will we hit bottom (trough)? Q1 2019, Q2 2019 or further out? In looking at previous down cycles and current demand, it feels a lot like a 3 or 4 quarter downturn.

We would not be surprised to see further negative news either in pre-announcements or quarterly reports as the weakness will show up in revenues and EPS.

We may have another leg down in the stocks. We are also seeing analysts downgrading the stocks, as we did earlier this week, as we get closer to a bottom. This obviously fits the “locking the barn door after the horse has bolted” pattern, as the stocks are way down from their peaks, but many bought into the one-quarter air pocket fantasy.

Buckle your seat belts for a bumpy earnings season…..


AVANTI: The Acquisition Game
by Daniel Nenni on 10-05-2018 at 7:00 am

This is the eighteenth in the series of “20 Questions with Wally Rhines”

Gerry Hsu’s departure from Cadence to form Avanti (originally named ArcSys) is chronicled in legal testimony, as accusations of theft of software were followed by legal battles, financial awards and even prison terms. Mentor and Synopsys were simply onlookers as the drama unfolded, but both had an interest in the outcome. The outcome of the trial pointed to substantial civil damages that Avanti would have to pay to Cadence. Mentor went to work with some of the top legal advisors at O’Melveny & Myers to estimate just how much those damages would be. Synopsys was reluctant to engage but was worried that, if Mentor acquired Avanti, the EDA balance of power could shift.

Gerry Hsu had taken up residence in Taipei, having avoided criminal charges for which some of his employees were not so lucky. Chi-Foon Chan, then EVP of Synopsys, suspected that Mentor was negotiating with Gerry Hsu to buy Avanti and Chi-Foon has since told me that he called every major hotel in Taipei to see if I was registered as a guest. In reality, we were much more serious about buying Avanti than Chi-Foon imagined. I rented an apartment in Taipei and spent more than a month living there and regularly meeting with Gerry. Meanwhile, Greg Hinckley, who was then Mentor CFO but effectively becoming COO, conducted meetings with the investment bankers to determine how we could put together a successful proposal to buy Avanti.

The bankers paid a lot of attention to two issues: 1) Negotiating how much they would be paid for the transaction and 2) Removing the absurd benefit in Gerry Hsu’s contract as Avanti CEO that would pay him $10 million if he left Avanti for any reason. Why would a Board of Directors approve such a condition? The Board of Avanti at that time consisted of five people, four of whom were employees who reported to Gerry, and the fifth was a forestry major whose knowledge of semiconductors and EDA was very limited. Securing approval for this condition couldn’t have been very difficult for Gerry, even though it seemed to stand in the face of most responsible corporate governance. Greg, who is one of the best “out-of-the-box” thinkers I’ve ever known, addressed the bankers with a different question. “Why don’t we triple the amount”, suggested Greg, “and offer to pay Gerry $30 million instead of $10 million”? The bankers were aghast. Why would we do that? Greg’s response: “There’s obviously only one decision maker for the sale of the company, so why don’t we appeal to his self-interest?” The bankers were skeptical but we put together a proposal that incorporated this feature. As justification, we asked that Gerry extend his non-compete agreement from one year to three years in trade for tripling the severance payment.

I arranged to have dinner with Gerry in Taipei. He brought his son along and I presented the proposal. When I highlighted the change in severance arrangement for Gerry, he quickly became suspicious and began arguing with me that he was entitled to the $10 million severance payment. I had to repeat twice that I didn’t dispute his right to the payment; I just wanted to extend his non-compete agreement to three years and triple the severance payment. Once Gerry understood, he became enthusiastic about the proposal and asked how quickly we could close an agreement. I cautioned Gerry that the terms of the agreement must be confidential and I had Gerry approve the letter of intent and confidentiality agreement. We shook hands on the deal and I called the Mentor team to join us in Taipei to finalize the agreement.

I can’t be sure how Gerry communicated with Synopsys but, by the time the Mentor negotiating team arrived, Gerry was already expressing second thoughts about his agreement to be purchased by Mentor. It became apparent that he was talking to another potential buyer despite his commitment to Mentor. So we returned to the U.S. with no deal. Subsequently, Gerry’s team contacted our bankers to re-start negotiations but we held firm, responding that we didn’t feel we could trust him based upon our previous experience. We didn’t engage again. Negotiations between Synopsys and Avanti continued and a deal was announced on December 3, 2001. A long period of review by the International Trade Commission ensued. After more than six months, the transaction was approved. Details were then published in a joint S4A filing by Synopsys and Avanti – https://www.sec.gov/Archives/edgar/data/883241/000095012302004502/0000950123-02-004502-index.htm Among the most interesting details for me were:
  • Synopsys hired attorneys to estimate the cost of the civil damage award that would likely be incurred, just as Mentor had done, and the answer came out nearly the same as the estimate that Mentor had received. This was somewhat remarkable when you consider the uncertainty of outcomes in the U.S. legal system for disputes in high technology.
  • The agreement between Synopsys and Avanti included a $30.6 million cash payment to Gerry Hsu for his employment agreement. He didn’t ever thank me.

There was a benefit for Mentor, however. Cirrus Logic was one of the first to detect anomalies in the Avanti software that led them to believe that the Cadence accusation of theft was credible. Under certain conditions, wavy lines appeared on the screen with the Avanti place and route software in the same manner as Cirrus had experienced with Cadence place and route software. Mike Hackworth, CEO of Cirrus Logic, became concerned and talked with Joe Costello, CEO of Cadence, about switching back to Cadence for place and route. Mike’s limitation was that Cadence would have to develop a tighter integration with Mentor’s Calibre design rule checking software, which Cirrus had adopted. We had a three-way conference call among Mike, Joe and me where I insisted that we needed to obtain detailed specifications for Cadence’s LEF and DEF standards. Joe readily committed and assigned Bob Wiederhold, previously CEO of HLD, a company that had been acquired by Cadence, to effect the transfer of information. That’s when we found out that DEF was not one standard, even in Cadence. There were many versions and interpretations. Despite all this, we were able to work together and Calibre became tightly integrated with Cadence, and also Synopsys, making it successful in most of the design flows in the industry.

    The 20 Questions with Wally Rhines Series


Hiring has been strong this year so why is hiring so difficult?
by Mark Gilbert on 10-04-2018 at 12:00 pm

Let’s start with hiring through Q3 and what to expect for Q4…

Hiring for Q3 (and the year as a whole) has been strong and robust. The EDA/Semi hiring needs are indeed stronger this year than they have been in a long time, yet more exacting than ever. We have had an exceptionally strong year even though it has been exponentially more difficult to find the right candidates. Even with so much demand for good people, companies continue to be extremely picky about their preferred candidate requirements. I have said it before and it bears repeating: if a company finds a smart, talented, capable engineer, who is eager to learn whatever is needed and has several of the primary prerequisites, it is smarter to hire the proverbial bird in the hand and help get them up to speed now than to hope a better option comes along. While a better candidate might follow, they also MIGHT NOT! When passing on a candidate that could fit, in lieu of that perfect fit, companies must think about the valuable time lost in searching and interviewing for the more perfect candidate (with no guarantee of how long that might take), when they could have had someone else already up and running, learning what is necessary. The time it takes to search, find, interview, and (hopefully) hire can be considerable, especially in today’s environment. I have seen companies pass on a relatively decent candidate, in hopes of finding someone that is a closer fit, only to be looking months, or even a year, later…it is not uncommon.

As we move into Q4, it is looking to be yet another robust quarter and hiring will remain strong. Here is what I know… All the economic numbers for EDA/Semi are strong, targets are being met and exceeded, and growth is occurring across a wide array of sectors. As I said earlier and it bears repeating… It continues to get harder and harder to find the right candidates for the overly-exact specifications that exist today. People are not leaving quite as fast as in years past and that makes it harder to “recruit” them out; harder, but not impossible. Comps seem to be going up, which is good news and a direct result of a strong market, and which should hopefully entice more people to consider alternative opportunities.

Because hiring is so difficult, companies need to ask if they can afford to wait for the exactly right candidate. I realize that for the most part, hiring managers want someone that can come in and make an immediate contribution with the least amount of training and resources. Certainly that makes sense on paper. The reality is, that is rarely the case. Even with the best of hires, ramp-up time can be considerable and more of a drain on internal resources than contemplated. All tech companies work in varying general domains of one category or another, but each has a specialty, shall we say a new frontier, that they are tackling. That newness inherently requires a learning curve. Strong internal training and support for the new hire is mandatory for them to learn the specific specialized domain. Sometimes, the new hire requires more bring-up-to-speed time than anticipated. Reality shows that to be the norm more times than not: more training time was needed than planned. That fact brings to light this question: Is it worth the risk to pass on the decent, fits-most-of-the-specs candidate, or to wait and hope that a better one comes along? This is a big question, and one every company should consider when they have pressing, critical hiring needs. Certainly I am not saying hire someone that MIGHT be able to do the job, but I am saying that if they have most of what you need, you should be weighing your options carefully.

Candidates, too, need to learn how to impress the hiring managers during the interview process with their willingness to learn. They need to be compelling and convincing enough that hiring managers are confident about their commitment to excel. It is essential to convince the team that you have what it takes and will do what is necessary to get up to speed quickly and succeed. Even on your own time, after hours!

ARM TechCon is right around the corner and has a good mix of technology and a decent attendance. I will be there in my famous white jacket, walking the aisles, seeing clients all day. I already have several off-site interviews scheduled with both new and existing clients. It seems like both ARM TechCon and my quick visit in and out will be quite busy…I hope to see you there and you should always feel free to call me with any questions you may have. Perhaps we can meet during the conference.

http://eda-careers.com/


Accellera Tackles IP Security
by Bernard Murphy on 10-04-2018 at 7:00 am

I recently learned that Accellera has formed an IP security working group. My first reaction was “Great, we really need that!”. My second reaction was “But I have so many questions.” Security in the systems world is still very much a topic in its infancy. I don’t mean to imply that there isn’t good work being done in both software and hardware domains. But it still mostly feels reactionary and ad-hoc. Where’s the ISO 26262 for security? How do we quantify strong security versus weak security? And so on. Here, in no particular order, are some questions that I hope the working group will eventually answer.


How does IP security tie to SoC security and then to system security? In part this feels like the system element out of context (SEooC) topic in ISO 26262. How can you demonstrate security in a sub-component when you don’t know how it will be used in the larger system? Moreover, we still don’t have a good handle on defining security for the whole stack. Even if we have a well-defined measure for the IP, how do we compose such measures into a system-level measure?

Which raises a scope question. I see the chair is from Intel, which is a great start. They probably know more about security than most, despite their recent stumbles. And Synopsys is involved, which is also good, not just for their IP expertise but also for their software security expertise. I hope Rambus will join, and also maybe someone from Google ProjectZero (you see where I’m going with this). I hope Accellera will become a regular presenter at BlackHat. Meantime, it would be good to know how the WG plans to connect with existing compliance requirements from PCI, the NSA and others.

But even given a WG loaded with experts from the industry, how much will they share, and will that be enough to build an effective standard? Security through obscurity is still important and will likely always be important. What you don’t share is harder to attack, because that makes it harder to guess at vulnerabilities. So how much can be shared in a standard? Mechanisms almost certainly not, because that would limit innovation and differentiation, which hackers would love and the industry would hate. Measures of security seem more likely, as long as they’re fairly general. Targeted metrics might be clues to likely weak areas. Or maybe these could be a good way to demonstrate strengths against a spectrum of possible attacks? (I said I had questions, not answers.)

Back to the element out of context point, how effective can security measures at this level be? Consider timing-channel attacks. I can run these from inside a VM nowhere near the IP, as long as I have access to an accurate timer. I just have to launch an operation that will use the IP. You could argue that attention to such attacks is out of scope for this work and should be the responsibility of a different standard. But that begs the question – how useful will this standard be if it does not consider such attacks? Answering that question requires a way to compare, at least approximately, the class of attacks that will be covered versus the class of all likely attacks (as anticipated within the lifetime of a device using the IP).

I could go on, but I do want to stress that, despite all my questions, I am very much a fan of this effort. Certainly the people contributing on the WG will know far more about security than I do and must see further and more clearly than I can. And frankly security is a huge problem, so every possible angle is worth exploring. I look forward to learning more as this develops. You can learn more about the Accellera WG HERE.