
2018 Women in Engineering Achievement Award
by Daniel Nenni on 05-02-2018 at 7:00 am

Having spent my entire thirty-plus-year career in semiconductors and design enablement, I have seen quite a change in diversity. When I first started, I remember thinking that height and weight were the only diversity here in Silicon Valley. My wife really noticed it when she attended her first Design Automation Conference in 1985 and was shocked to find she was one of the few women in a sea of men.

That changed quite quickly during the fabless semiconductor transformation, when the foundries and the design enablement ecosystem became an integral part of the semiconductor industry. This brought all sorts of diversity, and that diversity is clearly responsible for the amazing success we have experienced over the last 30 years, absolutely.

The Marie R. Pistilli Women in Electronic Design Award (named after the late Marie R. Pistilli, organizer of DAC) is an annual honor that recognizes individuals who have contributed to the advancement of women in EDA. I am quite honored to say that I have worked with quite a few of the recipients, including this year’s winner, Anne Cirkel:

Past Recipients of the Marie R. Pistilli Award

  • 54th DAC, 2017 – Janet Olson, Synopsys, Inc.
  • 53rd DAC, 2016 – Soha Hassoun, Tufts University
  • 52nd DAC, 2015 – Margaret Martonosi, Princeton University
  • 51st DAC, 2014 – Diana Marculescu, Carnegie Mellon University
  • 50th DAC, 2013 – Nanette Collins, Nanette V. Collins Marketing and Public Relations
  • 49th DAC, 2012 – Dr. Belle W. Y. Wei, San Jose State University
  • 48th DAC, 2011 – Limor Fix, Intel Corp.
  • 47th DAC, 2010 – Mar Hershenson, Magma Design Automation, Inc.
  • 46th DAC, 2009 – Telle Whitney, Anita Borg Institute
  • 45th DAC, 2008 – Louise Trevillyan, IBM Research Center
  • 44th DAC, 2007 – Jan Willis, Cadence
  • 43rd DAC, 2006 – Ellen Yoffa, IBM
  • 42nd DAC, 2005 – Kathryn Kranen, Jasper Design Automation, Inc.
  • 41st DAC, 2004 – Mary Jane Irwin, Penn State Univ.
  • 40th DAC, 2003 – Karen Bartleson, Synopsys, Inc.
  • 39th DAC, 2002 – Ann Rincon, AMI Semiconductor
  • 38th DAC, 2001 – Deidre Hanford, Synopsys, Inc.
  • 37th DAC, 2000 – Penny Herscher, Previously Cadence

Anne Cirkel to Receive Marie R. Pistilli Women in Engineering Achievement Award
Past DAC General Chair and Technology Marketing Executive honored for her contributions to the EDA industry

In response to the award, Dr. Wally Rhines, CEO and President of Mentor, a Siemens Business, said, “I’m delighted that Anne’s contributions to our industry are being honored with this prestigious award. She has been a tireless advocate within Mentor for the design automation community and its flagship conference. As one of our most senior female executives, she is a superb role model for the leadership generations to come.”

Exactly!

Before joining Mentor 18+ years ago, Anne started her career in EDA with Viewlogic and Analogy. The Award will be presented during the 55th DAC General Session Awards on Monday, June 25, 2018 at Moscone West, San Francisco, CA. I hope to see you there.

Congratulations Anne!

About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community representing more than 1,000 organizations attends each year, ranging from system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives to researchers and academicians from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging EDA, silicon, intellectual property (IP) and design services providers. The conference is sponsored by the Association for Computing Machinery’s Special Interest Group on Design Automation (ACM SIGDA), the Electronic Systems Design Alliance (ESDA), and the Institute of Electrical and Electronics Engineers’ Council on Electronic Design Automation (IEEE CEDA).


Virtuoso at CDNLive – A Press Briefing With Yuval Shay
by Alex Tan on 05-01-2018 at 12:00 pm

At CDNLive Silicon Valley 2018, I talked with Yuval Shay, Director of Product Management for the Cadence Custom IC & PCB Group, to scope out some more details on the recent Virtuoso product refresh announced earlier in the morning by Tom Beckley, Cadence Sr. VP & GM of the same group.

Tom shared his view on enabling the fourth industrial revolution (4IR). He illustrated the challenges faced by both system and semiconductor companies, such as the growing need for complex integration across chip, package and board; the use of advanced technologies; designing systems that are constantly connected to the cloud through the emerging 5G standard; and the changes in the automotive industry with autonomous cars.

Elaborating further, Tom talked about System Design Enablement (SDE), the effort his organization is putting into SDE, and how Virtuoso has been enhanced to help address the challenges of designing complex systems.

Yuval provided some Virtuoso press-release materials, which I also used as reference for my interview. In this release, Cadence revamped the Virtuoso design platform, dubbed ICADVM 18.1, to include the following updates, delivering productivity improvements across multiple segments.



Commenting on the announcement of the Virtuoso upgrades, which include 5nm process support, Yuval first articulated the increasing complexity of system integration and the effort to drive down system costs. One example is an ADAS application with LIDAR technology, combining laser diodes, MEMS, photonics, analog sensors and advanced-node SoCs, all integrated into a single system to deliver a sub-$100 LIDAR system. Not only do leading providers need state-of-the-art IC tools and solutions across multiple technologies, customers also need system solutions.


Tell me about the Virtuoso System Design Platform. What’s new in ICADVM 18.1?

We first introduced the Virtuoso System Design Platform (VSDP) in May 2017. It was an immediate hit; the rate of adoption has been amazing. Only a few months after the announcement, customers were already using it in production. It showed us how strong the need is.

What VSDP really does is allow a “golden” schematic to capture the system design (SiP) using Virtuoso Schematic Editor and to drive downstream applications. Using Virtuoso’s schematic, the environment can drive the Cadence SiP tool for layout implementation, automating the layout-versus-schematic (LVS) verification flow. At the same time, analysis done on the SiP layout, using Sigrity 3D-EM extraction to generate parasitic models, can be back-annotated into Virtuoso. Virtuoso automatically manages the simulation environment, eliminating the highly manual and error-prone process of integrating system-level layout parasitic models back into the IC designer’s flow. All of this is done while maintaining a single “golden” schematic for both LVS and verification.
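To make the LVS step concrete, here is a minimal Python sketch of what a layout-versus-schematic check conceptually does: reduce both the golden schematic and the extracted layout to comparable connectivity signatures and confirm they match. The netlist format and helper names are hypothetical illustrations, not the Cadence tools; production LVS engines match device graphs topologically rather than relying on matching net names.

    # Minimal LVS sketch: compare the connectivity extracted from the layout against
    # the "golden" schematic. Illustration only; real tools do graph matching and do
    # not assume net names agree between the two netlists.
    from collections import Counter

    def netlist_signature(netlist):
        """Reduce a netlist to a comparable signature: device types and pin-to-net bindings."""
        devices = Counter(dev["type"] for dev in netlist)
        connections = Counter(
            (dev["type"], pin, net)
            for dev in netlist
            for pin, net in sorted(dev["pins"].items())
        )
        return devices, connections

    def lvs_compare(schematic, layout):
        """Return True if the extracted layout matches the golden schematic."""
        return netlist_signature(schematic) == netlist_signature(layout)

    golden_schematic = [
        {"type": "nmos", "pins": {"g": "IN", "d": "OUT", "s": "VSS"}},
        {"type": "pmos", "pins": {"g": "IN", "d": "OUT", "s": "VDD"}},
    ]
    extracted_layout = [
        {"type": "pmos", "pins": {"g": "IN", "d": "OUT", "s": "VDD"}},
        {"type": "nmos", "pins": {"g": "IN", "d": "OUT", "s": "VSS"}},
    ]

    print("LVS clean:", lvs_compare(golden_schematic, extracted_layout))  # True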

New in Virtuoso ICADVM 18.1 is the ability to edit the SiP layout using Virtuoso Layout Suite, to edit each die in the SiP layout, and to manage multiple process design kits (PDKs) simultaneously.

Could you comment on the concurrent layout access that was mentioned in the keynote?

Custom IC design depends on a hierarchical design methodology, where each layout hierarchy (cellview) can be edited by only a single user at a time. For example, if during a sign-off review there are thousands of DRC violations to be fixed at the top level, the ability to distribute the job across multiple users is limited. We developed a new technology that allows multiple users to edit the same cellview/hierarchy and allows selective integration of edits back into the master cellview. The cellview owner can accept or reject edits from other users and add notes in case edits get rejected. This new technology allows non-destructive editing of the layout artwork; the data being touched is handled much like editing a picture on your smartphone.
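The accept/reject merge described here can be pictured with a small sketch. The data structures and method names below are my own invention to illustrate the workflow, not the actual Virtuoso implementation.

    # Illustrative sketch of selective integration: users propose edits against the same
    # master cellview, and the owner accepts or rejects each one before it is merged.
    from dataclasses import dataclass, field

    @dataclass
    class Edit:
        user: str
        shape_id: str
        change: dict          # e.g. {"layer": "M2", "width": 0.10}
        note: str = ""        # owner can attach a note, e.g. when rejecting

    @dataclass
    class MasterCellview:
        shapes: dict = field(default_factory=dict)
        log: list = field(default_factory=list)

        def review(self, edit: Edit, accept: bool, note: str = ""):
            """Owner decision: merge the edit into the master, or record a rejection."""
            if accept:
                self.shapes.setdefault(edit.shape_id, {}).update(edit.change)
                self.log.append(("accepted", edit.user, edit.shape_id))
            else:
                edit.note = note
                self.log.append(("rejected", edit.user, edit.shape_id, note))

    master = MasterCellview(shapes={"via_12": {"layer": "V1", "width": 0.05}})
    proposals = [
        Edit("alice", "via_12", {"width": 0.07}),
        Edit("bob", "met_3", {"layer": "M3", "width": 0.20}),
    ]
    master.review(proposals[0], accept=True)
    master.review(proposals[1], accept=False, note="violates M3 density rule")
    print(master.shapes, master.log)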

Could you share a recent example of an ML application in the custom design space?

Design integrity and robustness need to be guaranteed for custom analog IP while the design is completed in a timely manner. Virtuoso’s Electrically Aware Design (EAD) environment enables users to perform parasitic extraction and EM analysis on a partial layout. We use machine learning (ML) to generate a unique technology database with a highly trained model for fast, highly accurate parasitic extraction. Incremental RC extraction can be done within seconds as the design is being developed. For example, we have a customer using this on a top-level design with thousands of nets to check coupling between noisy and sensitive nets and prevent crosstalk. This can be done in-design within seconds. In this new release of Virtuoso, we enhanced our layout environment to be driven by simulation data, introducing for the first time “Simulation-Driven Layout” to mitigate EM problems in-design.
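As an illustration of the general idea of model-based parasitic estimation (not Cadence’s actual EAD technology database), a regression model can be trained offline on solver data and then queried in-design almost instantly. Everything below (features, data and model choice) is a synthetic assumption.

    # Sketch of ML-assisted parasitic estimation: learn coupling capacitance from simple
    # layout features so an incremental estimate is available while the layout is edited.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 2000
    spacing = rng.uniform(0.05, 1.0, n)        # um, spacing between aggressor and victim
    parallel_run = rng.uniform(1.0, 50.0, n)   # um, length the two wires run in parallel
    width = rng.uniform(0.05, 0.2, n)          # um, wire width

    # Synthetic "golden" coupling capacitance: roughly proportional to run length,
    # inversely proportional to spacing, plus noise (stand-in for field-solver data).
    cap_fF = 0.02 * parallel_run / spacing + 0.5 * width + rng.normal(0, 0.05, n)

    X = np.column_stack([spacing, parallel_run, width])
    model = GradientBoostingRegressor().fit(X, cap_fF)

    # In-design query: estimate coupling for a partially routed net without re-running
    # full extraction.
    print(model.predict([[0.1, 20.0, 0.07]]))   # estimated coupling cap in fF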

What are the top three challenges in the custom design space?

First, in the analog design space, there is a need for increased automation in order to shorten turnaround time. For example, people aspire to do top-down design and floorplanning and to explore ways to reduce die size. In reality, for a given project, there is no time to explore multiple floorplans and run multiple routing jobs to see if a new plan will converge, so designers usually go with prior design knowledge and floorplans, or make a best educated guess. We developed a new capability in Virtuoso to efficiently plan the design hierarchies, floorplan the design, and analyze it for congestion and convergence, enabling multiple floorplanning exercises to occur in a short amount of time. This allows designers to find opportunities for die-size reduction.

Second, custom design gets extremely challenging at advanced process nodes. Layout effort keeps increasing to the point where there are not enough physical designers (PDs) to complete the jobs. We advocate a smarter methodology to better manage the sheer number of design rules, EM, layer density requirements and design variation. Virtuoso has the environment to address all of these. The 3X speed-up mentioned in the keynote is conservative; in reality it could be even higher.

Third, design is no longer an isolated effort. Designers need to care about packaging and its effects on their design, and they need to manage multiple ICs and the complexity of advanced packages, including the associated critical effects (for example, parasitic and magnetic effects on the PCB that affect the IC design). The question is how to capture those effects early, bringing the SiP design into the analysis. We enhanced our Virtuoso System Design Platform to help with those challenges and are collaborating with EDA connection partners to help customers with the solution.

Can you highlight a recent customer engagement experience?

At this CDNLive we had Bosch presenting the results of our collaboration on Simulation-Driven Layout. As you can imagine, Bosch cares greatly about design reliability and wanted a solution to mitigate electromigration problems without dragging down overall productivity. We worked together on this and came out with a solution we are proud of. The new environment allows users to examine and plan their layout based on both current flow and current density. In addition, when using Virtuoso Wire Editor, wires are sized automatically to meet current density requirements.
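The wire-sizing step reduces to simple arithmetic: given the simulated current and the maximum current density the process allows, the minimum width follows from the cross-section. A back-of-the-envelope sketch is below; the current, EM limit and metal thickness are illustrative assumptions, not Bosch or Cadence data.

    # Back-of-the-envelope wire sizing from a current-density (EM) limit.
    def min_wire_width_um(current_mA, j_max_mA_per_um2, thickness_um):
        """Width needed so that current density I/(W*t) stays below the EM limit."""
        return current_mA / (j_max_mA_per_um2 * thickness_um)

    current_mA = 5.0           # simulated current through the wire (assumed)
    j_max = 2.0                # assumed EM current-density limit, mA per um^2
    thickness_um = 0.5         # assumed metal thickness for this layer

    print(f"minimum width: {min_wire_width_um(current_mA, j_max, thickness_um):.2f} um")
    # 5.0 mA / 2.0 mA/um^2 = 2.5 um^2 of cross-section, i.e. 5.0 um wide at 0.5 um thick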

At the end of the interview, I asked whether he anticipated expanding cloud availability, similar to the OrCAD Entrepreneur case. He tactfully mentioned that Cadence currently offers hosted design solutions through design services, and that cloud readiness is being considered across Cadence.


ARM IoT Mbed Update
by Bernard Murphy on 05-01-2018 at 7:00 am

Normally press release events with ARM tend to be somewhat arms-length – a canned pitch followed by limited time for Q&A. Through a still unexplained calendar glitch I missed a scheduled call for a recent announcement. To make up for it, I had the pleasure of a 1-on-1 with Hima Mukkamala, GM of IoT cloud services at ARM. Hima is a heavy hitter, previously head of engineering and a senior exec for GE Digital’s Predix platform. So while the rest of us pronounce on the IoT, he’s actually lived it, particularly the IIoT.

The topic was an announcement ARM made recently at the Hanover Messe, on their Mbed solutions for the IoT, from edge devices to the cloud. Hima walked me through the standard pitch of course but with much more opportunity for me to ask questions. Thanks to which I now have (I think) a better understanding of the strategy and advantages of the solution.

This starts with the very rapid growth ARM is seeing: they’re anticipating 100B devices shipped from 2017 to 2021, equal to the number shipped in the preceding 26 years. Since they should have better insight than most into what’s coming out over the coming three years, this clearly supports exponential growth, which makes management and security a big potential headache. I’ve talked before about security and the advantages of diversity, but at this scale I can also see the appeal of putting all your eggs in one basket and watching that basket very carefully.
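For a rough sense of what those numbers imply, the arithmetic below compares average annual shipment rates; the per-year figures are my own illustrative averages derived from the totals above, not ARM’s own breakdown.

    # Rough arithmetic: ~100B devices over the preceding 26 years versus ~100B expected
    # in roughly the 2017-2021 window.
    prior_devices, prior_years = 100e9, 26
    new_devices, new_years = 100e9, 5

    prior_rate = prior_devices / prior_years    # ~3.8B devices/year on average
    new_rate = new_devices / new_years          # ~20B devices/year on average
    print(f"average shipment rate increases ~{new_rate / prior_rate:.1f}x")  # ~5.2x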

You should know the essentials of the solution by now – Mbed OS running on the edge device and at the other end, Mbed cloud providing the platform for management, provisioning and authorizations. ARM naturally points to the ubiquity of their hardware footprint and their strength in building partnerships as essential in delivering solutions to a very wide variety of needs in the inevitably diverse IoT space.

That said, I was encouraged to hear that this isn’t a monopolistic play, not just because we need healthy competition or because a monopoly would be impractical anyway, but also because I’m a big believer that focused targets lead more quickly to tangible results. So Mbed OS is free and open (at least to run on Cortex-M class cores) and provides the security and communication features you would build around such designs. Meanwhile in Mbed Cloud, Hima acknowledged that the customer landscape is already very mixed; customers are already running multiple apps at this level, so Mbed Cloud has to coexist, bridging gaps where needed but otherwise letting customer solutions call the shots. He noted a partnership they have established with IBM Watson IoT as an example where they are collaborating in a broad solution.


On the focus topic, Hima’s pitch opened with what I had assumed was a generic IoT applications slide, but he clarified these are ARM’s IoT solution focus areas. As one example he cited work they have done with Alphatronics to support building a solution for waste recycling management in Belgium. You’ve probably heard of a similar concept before; trash containers at centralized disposal sites signal when they are empty, near full or full, enabling trash disposal trucks to optimize pickup. This reduces transportation costs and staffing at these trash facilities and also checks that only approved customers are using the facility.

Another example Hima offered was in utilities. This area bristles with regulations. One aspect of those restrictions, at least today, is that management cannot be in a public cloud (I can imagine this being a concern in many IoT applications, whether or not constrained by regulations). To support this kind of need, ARM is now offering Mbed on Premises, a way to run Mbed Cloud, with all the connectivity, security and certification management, in your private cloud.

ARM also announced an out-of-the-box Mbed solution they call Mbed Client Lite, for constrained applications such as pallet management (in the logistics focus area). Part of the advantage of this being a canned solution is that it comes pre-packaged with channel security and remote secure update. Solutions for pallet management and other high-volume, low-cost uses are exactly where providers will be tempted to cut security corners unless the right solutions are handed to them on a plate. ARM has also partnered with a couple of established certificate authority/identity service providers, so there’s hope that IoT security may eventually be brought under control (and that day can’t come too soon).

It’s a good story – device-to-cloud integration allowing for private as well as public clouds, out-of-the-box security for low-end devices, broad partnerships as always with ARM, and application examples already in deployment. How well competitive solutions (e.g., RISC-V-based devices) can work in this environment is less clear to me, but I guess we’ll find out. You can learn more about the Mbed solutions HERE.


Hard IP for an embedded FPGA
by Tom Dillinger on 04-30-2018 at 12:00 pm

As Moore’s Law enables increased integration, the diversity of functionality in SoC designs has grown. Design teams are seeking to utilize outside technical expertise in key functional areas, and to accelerate their productivity by re-using existing designs that others have developed. The Intellectual Property (IP) industry has emerged, where IP vendors offer design cores to SoC teams for integration. Specifically, “hard IP” refers to a core with a physical implementation, which has been optimized for performance, power, and area in a specific target process technology. Correspondingly, Moore’s Law also implies that the amount of programmable logic available in an embedded FPGA has expanded. It would make sense that SoC design teams leveraging the unique capabilities of an eFPGA would likewise be seeking to incorporate/re-use existing IP designs.

I’ve been having coffee periodically with the team at Flex Logix, to try to stay current on the rapidly expanding eFPGA industry segment. I asked Geoff Tate, CEO, and Cheng Wang, Senior VP Engineering, whether there is indeed a confluence of eFPGA customers and IP providers.

“Absolutely,” Geoff replied. “Our customers are seeking the expertise of outside sources of IP, such as for functional accelerator cores.” (Refer to an earlier article on how eFPGA technology can provide a significant speedup on specific algorithms, compared to a software implementation on a microprocessor core – link.)

I asked Cheng, “How does the IP provider develop the implementation? What models are delivered to the end customer?” He replied, “The IP for an embedded FPGA is developed using the same model compilation flow as the customer, with only a single switch setting to differentiate the physical implementation.”

Figure 1. Illustration of the connectivity of a “hard IP” design in a customer eFPGA design.

“Like the customer design, the IP is based on a set of eFPGA tiles,” Cheng highlighted. “The physical distinction for the IP design is that the input and output pins of the IP are designed for internal connections, not to the top-level drivers/receivers.”

(For more on tile design, and the allocation of internal connections and receiver/driver circuits at the tile edge for primary inputs/outputs, here’s a link to an earlier article.)

“Other than the floorplan pin location assignment restriction, the compilation of the IP design is the same,” Cheng said. “This is indeed a ‘hard IP’ approach. The end customer receives the eFPGA bitstream for the IP to merge with the personalization of their design. PPA optimizations for the IP have been completed by the IP provider, as reflected in the delivered personalization. Within the compilation environment, the customer has no visibility into the black-box physical model.”

Figure 2. eFPGA compilation platform — the IP model is delivered as a physical black box, with no internal visibility.

“What IP models are delivered?” I asked. “In addition to the final bitstream, the IP compilation flow automatically generates the physical abstract and the timing abstract for the customer. We collaborate with the IP provider on a power model,” Cheng replied.

“Although we’ve discussed an IP vendor delivering models to an end customer, this same methodology certainly applies to any customer seeking the productivity benefits of re-using an existing optimized design block,” Geoff added. “They would create an IP library for internal use in the same manner.”

The Flex Logix team refers to their hard IP flow as the design of an eFPGA “virtual array”. The figure below highlights a recent test chip of a 7×7 tile configuration incorporating a 4×4 IP virtual array.

Figure 3. Illustration of a 4×4 virtual array integrated into a 7×7 tile eFPGA design.
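Conceptually, the virtual-array flow pictured in Figure 3 can be sketched as follows: the same compiler handles both the IP block and the customer design, with one switch that constrains the IP’s pins to internal tile connections and emits the hard-IP deliverables (bitstream plus physical and timing abstracts). The function and field names below are invented for illustration only and are not the Flex Logix tools or API.

    # Hypothetical sketch of an eFPGA "virtual array" hard-IP compile flow.
    def compile_efpga(design, tiles, ip_mode=False):
        """Compile an eFPGA design; ip_mode pins I/O to internal tile connections only."""
        pin_style = "internal-tile-connections" if ip_mode else "top-level-io"
        deliverables = {
            "bitstream": f"{design}.bit",
            "physical_abstract": f"{design}.lef",
            "timing_abstract": f"{design}.lib",
            "pin_style": pin_style,
            "tiles": tiles,
        }
        return deliverables

    # IP provider compiles a 4x4 virtual array; the customer later merges its bitstream
    # into the personalization of their 7x7 array.
    ip_block = compile_efpga("accel_core", tiles=(4, 4), ip_mode=True)
    customer = compile_efpga("customer_top", tiles=(7, 7))
    print(ip_block["pin_style"], "->", customer["pin_style"])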

The IC industry has rapidly adopted the technical (and economic) model of hard IP integration and re-use, to augment the expertise of the design team and boost their productivity. Thus, it is a natural extension that this same model is also being adopted as part of the increasing capacities of embedded programmable logic.

For more information on the Flex Logix virtual array methodology for hard IP design, please follow this link.

-chipguy


imec and Cadence on 3nm
by Daniel Nenni on 04-30-2018 at 7:00 am

One of the more frequent questions I get, “What is next after FinFETs?”, is finally getting answered. Thankfully I am surrounded by experts in the process technology field, including Scotten Jones of IC Knowledge. I am also surrounded by design enablement experts, so I really am the man in the middle, which brings us to a discussion between Rod Metcalfe, product management group director in the Digital & Signoff Group at Cadence, Peter Debacker, R&D team leader at imec, and SemiWiki on the 3nm testchip announcement.
Continue reading “imec and Cadence on 3nm”


Intel 10nm Yield Issues
by Scotten Jones on 04-29-2018 at 4:00 pm

On their first quarter earnings call Intel announced that volume production of 10nm has been moved from the second half of 2018 to 2019 due to yield issues. Specifically, they are shipping 10nm in low volume now, but yield improvement has been slower than anticipated. They report that they understand the yield issues but that improvements will take time to implement and qualify.

During the question and answer segment they said it is “really tied to this being the last technology tied to not having EUV and the amount of multi patterning and the effects of that on defects”.

This has led to a lot of speculation about what is causing the yield issues. I have seen comments that, since everyone is doing multi-patterning, Intel’s explanation doesn’t make sense, and there is some speculation that the yield issues are related to cobalt. I thought it would be useful to explore multi-patterning usage and cobalt usage, how these differ between companies, and what the impact may be on yields.

There are four companies currently pursuing leading edge logic: GLOBALFOUNDRIES (GF), Intel, Samsung, and TSMC. The following will explore multi-patterning and cobalt usage by company.

Multi-Patterning
In the front end of the line all four companies are using Self-Aligned Quadruple Patterning (SAQP) with multiple cut masks for fin formation and Self-Aligned Double Patterning (SADP) for gate formation. At contact, some version of litho-etch is used, either Litho-Etch-Litho-Etch (LE2), Litho-Etch-Litho-Etch-Litho-Etch (LE3), or even LE4 (EUV in Samsung’s case). These are all similar between the companies except for Samsung’s use of EUV. The Back End Of Line (BEOL) is where we see significant differences: GF and TSMC both use SADP for critical metal layers, Intel uses SAQP for two metal layers, and Samsung is expected to use EUV for critical metal layers.

We believe GF and TSMC are both ramping yield on schedule. It is possible that the yield issues Intel is seeing are related to SAQP in the BEOL. BEOL metal layers require multiple block layers, and this is complex to implement. The first block layer would remove the layers needed for subsequent block layers; the way this is addressed is that block layers are applied as reverse images, and once all the block layers are done, the whole pattern is reversed. Implementing multiple block layers in concert with SAQP, versus fewer block layers with SADP, is more difficult. This could explain why multi-patterning may be more difficult at Intel. Intel has a 36nm pitch in the BEOL versus GF and TSMC at 40nm, and the smaller pitch is more difficult to achieve. We don’t know much about Samsung’s process yield ramp or exact specifications, but certainly their early use of EUV may present some challenges, and we wouldn’t be surprised if Samsung encounters yield issues as well.

Cobalt Usage
I have also seen comments suggesting Intel’s use of cobalt may be the issue. The first comment I want to make here is that everyone is using cobalt, not just Intel, although there are differences in usage.

  • Liners/caps – we believe all four companies are using cobalt for liners and caps on critical metal layers. Historically liners are Ta/TaN and switching to Co/TaN improves electromigration and the copper “wetting” during processing. Cobalt caps on top of the metal lines also improve electromigration.
  • Contacts – we believe all four companies will also use cobalt-filled contacts, although there may be differences in how it is deposited (more on this later).
  • Local interconnect – Intel uses cobalt-filled metal lines for M0 and M1; GF does not, and we don’t think TSMC does either. A key here is that as interconnect pitch shrinks, copper resistance goes up and eventually cobalt becomes a lower resistance solution (a rough illustration follows this list). We believe Intel went to cobalt because it is beneficial for resistance at 36nm; with GF and TSMC at 40nm, they likely didn’t see the need. We are curious to see what happens with Samsung; we believe they may also have a 36nm minimum metal pitch and it will be interesting to see if they use cobalt interconnect. They are co-authors on technical papers for 7nm with cobalt M0, so they have certainly looked at it.
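The copper-versus-cobalt crossover can be sketched with simple resistance arithmetic. The numbers below are illustrative assumptions (effective resistivities, barrier thicknesses, aspect ratio), not foundry data, but they show the qualitative argument: copper’s non-scaling barrier and rising effective resistivity eventually hand the advantage to cobalt at very narrow line widths.

    # Rough illustration only. Two effects favor cobalt as lines shrink: (1) copper needs
    # a non-scaling barrier/liner that eats into the conducting cross-section, and
    # (2) copper's effective resistivity rises sharply from surface and grain-boundary
    # scattering, while cobalt's rises less and its barrier can be much thinner.
    def resistance_ohm_per_um(width_nm, thickness_nm, rho_uohm_cm, barrier_nm):
        """R = rho * L / A for a 1 um segment; barrier consumed from both sidewalls."""
        area_nm2 = (width_nm - 2 * barrier_nm) * thickness_nm
        return (rho_uohm_cm * 10.0) * 1000.0 / area_nm2   # 1 uohm-cm = 10 ohm-nm, L = 1000 nm

    for w in (40, 24, 16, 12):                # line width in nm, thickness assumed 2x width
        t = 2 * w
        rho_cu = 1.8 + 90.0 / w               # assumed size-dependent effective resistivity
        rho_co = 6.0 + 30.0 / w
        cu = resistance_ohm_per_um(w, t, rho_cu, barrier_nm=2.0)
        co = resistance_ohm_per_um(w, t, rho_co, barrier_nm=0.5)
        print(f"{w:2d} nm line: Cu ~{cu:5.0f} ohm/um, Co ~{co:5.0f} ohm/um")
    # With these assumptions copper wins at relaxed widths and cobalt wins below roughly
    # 20 nm, which is the qualitative argument for cobalt local interconnect at a 36 nm pitch.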

We know that GF uses CVD to deposit cobalt for their cobalt-filled contacts, and we have heard that Intel deposits cobalt with plating. We have also heard that Intel may have void issues. Perhaps plating cobalt is creating some cobalt issues, but we do not think there are fundamental issues with cobalt.

Conclusion

I believe Intel’s comment on multi-patterning issues is probably the driver of their yield problems. They were more aggressive in their shrink than others and getting to 36nm minimum metal pitches with SAQP and multiple block layers is in my opinion the likely problem.

Cobalt may also be a contributor but I don’t think it is the main problem.

Also Read: Intel delays mass production of 10nm CPUs to 2019


Data Center Powers Intel but 10NM Still Slow
by Robert Maire on 04-29-2018 at 12:00 pm

Intel (INTC) blew away expectations based on strong performance in the data center. Revenues of $16.1B versus street of $15.05B and EPS of $0.93 versus street of $0.72. While revenue was up 9% over prior year, earnings were 50% higher. Guidance is for Q2 revenue of $16.3B and EPS of $0.85 versus street of $15.55B and EPS of $0.81. IOT, NSG and PSG were also up a nice 18%.

We probably could have guessed that Intel would be a big beneficiary of the huge uptick in capital spending at Google who is obviously rolling out data centers and capacity as quickly as possible on the back of strong business.

Data center spending has long legs in our view and we think 2018 could be a super year for Intel’s data center group. Intel is well positioned to capitalize on this.

Intel’s financial performance and discipline has been very good and we think management will keep a tight rein on the model and profitability.

The only fly in the ointment is that the long-delayed 10NM roll out is still not rolling out. Though management talked about some products shipping, it’s pretty clear it’s not in volume. Intel and 10NM sounds a lot like ASML and EUV. It’s coming, it’s just around the corner and the check is in the mail….

The delay hasn’t hurt Intel yet, and this quarter’s financial performance obscures the technology failings.

KLAC had a great quarter, breaking the $1B mark, beating both revenue and EPS expectations.
The most important fact that may be overlooked is that KLAC is projecting an increasing year with H2 higher than H1, versus Lam, which is projecting the opposite, a softening year. KLAC has three things going for it: Memory, China & EUV.

KLAC is seeing well over half its business from memory, plus the additional driver of China, which needs to buy metrology and yield management tools ahead of process tools made by Applied and Lam. Additionally, KLAC gets a benefit from the transition to EUV, as lots of new metrology and inspection tools are needed to deal with the new problems associated with EUV. This is compared to AMAT and Lam, which see etch and dep steps reduced when the process flow is replaced with a shorter, but more difficult, EUV process flow.
KLAC remains in a very good position for 2018.

Tick, Tock, Tock, Tock, Tock, Tock…….
Moore’s Law in hibernation at Intel….It’s groundhog day all over again.

Intel, the company built on Moore’s Law and maintaining a technology leadership position, has been stalled at 14NM going on 10NM since 2015. We were one of the first to point out Intel’s delay way back when. In the meantime both TSMC and Samsung seem to have caught up and may be about to pass Intel. Intel’s 10NM is about the same as TSMC’s 7NM and so far it looks neck and neck.

Also Read: Leading Edge Logic Landscape 2018

Broadwell, Skylake, Kaby Lake, Coffee Lake…we are getting tired of being under water at 14NM….maybe the next 14NM parts should be “Coffee Cake”…enough already, get on with it

Uptick in capex from $14B to $14.5B is probably EUV high NA

We coincidentally saw an uptick of $800M in capex at TSMC for new mask capacity and a $300M pre-payment to ASML as a placeholder for a high-NA system sometime in the future of ASML’s product line. We would be willing to bet that at least part, if not most ($300M), of Intel’s uptick is earmarked for a placeholder on ASML’s high-NA waiting list.

KLAC has three strong drivers – Memory, EUV & China

KLAC has great positioning in that it has a more diverse set of business drivers as compared to others in the industry.

Both EUV and China are going to be multi-year, long-term secular growth stories that will go on despite what happens in memory or with Samsung. While a lot of new capacity in China is directed at memory, we think China will continue to spend even if the existing players such as Samsung, SK and Micron slow in the face of overcapacity. China needs and wants to get a foothold in the memory market and essentially has to build.

Also Read: SPIE Advanced Lithography 2018 – ASML Update on EUV

Transitioning to EUV is another “must have” for the industry. Sooner or later, Samsung, TSMC and Intel will all go to EUV, and it’s clearly going to be very painful and expensive, but there is simply no choice. The industry can delay the inevitable only so much; sooner or later (probably at 5NM) it’s going to happen, and KLA will sell a lot of metrology and yield management tools to sort it all out.

An improving year is better than a slowing year
KLA is looking at high single digit to low double digit growth in 2018 with H2 bigger than H1. This is certainly better than a weakening H2 projected by Lam even though it was just a slow softening.
We think KLAC’s diversified drivers put them in a much less risky position compared to Lam, at 84% memory and with nowhere to grow.

We are also concerned that even though Applied has diversification in display tools, it’s highly likely that display revenue will be down sharply as Samsung cuts off display spending very quickly. This suggests that KLAC has better market positioning than either AMAT or LRCX for the near balance of the year.

The stocks
We saw both Intel and AMD blow away numbers as data center spend has been great as evidenced by Google. We think both stocks could still be buyable here as the data center spend is not going away any time soon.

Likewise, we think KLAC is one of the better positioned, less risky and more diversified plays of the semiconductor equipment tool makers. Business remains strong and future upside from Orbotech will add to the story in the fall.

And we still like Micron……

Also read: TSMC Adds Negative Semiconductor News


Samsung has another record quarter in chips
by Robert Maire on 04-29-2018 at 7:00 am

Samsung throws further gas on the fire of weak handsets; capex is not set but will be down versus 2017. Samsung reported revenues of KRW 60.56 Trillion and KRW 15.64 Trillion operating profit ($56B and $15B). Chips accounted for a whopping KRW 11.55 Trillion in operating profit on revenues of KRW 20.78 Trillion ($11B and $19B)….a huge margin!
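A quick calculation on the reported figures shows just how much the chip business is carrying the company; the short check below uses only the KRW numbers quoted above.

    # Straight arithmetic on the reported KRW figures; no additional data assumed.
    total_rev, total_op = 60.56, 15.64       # KRW trillion, whole company
    chip_rev, chip_op = 20.78, 11.55         # KRW trillion, semiconductor business

    print(f"company operating margin: {total_op / total_rev:.0%}")  # ~26%
    print(f"chip operating margin:    {chip_op / chip_rev:.0%}")    # ~56%
    print(f"chips' share of profit:   {chip_op / total_op:.0%}")    # ~74%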

Capex was KRW 8.6 Trillion, of which semiconductor was KRW 7.2 Trillion and display was KRW 0.8 Trillion ($8B, $6.7B and $740M). Although capex for the year has not been set yet, the company forecasts a decline:

“Samsung has not yet finalized its capex plan for 2018, but the company expects it to decline YoY. Capex rose substantially in 2017 due to efforts to respond to market growth and emerging technologies, which included expanding the production capacity for flexible OLED panels.”

The company expects its memory business to continue strong into the second quarter, but the mobile handset and display businesses will be down owing to weak demand (obviously pointing a finger both at their own handsets as well as their supply of OLED to Apple).

Samsung is projecting continued strength in both NAND and DRAM going into Q2 as demand maintains good pricing. It’s clear that memory products will be making up for the rest of the company for the next few quarters as handsets slow and displays follow. So far there seems to be no end in sight for memory, but we still have concern about the weak handset disease spreading into DRAM.

More bad news for smartphones and semi equipment
Samsung’s report clearly blames weak handset sales as the culprit for holding back performance. Display sales to others, such as Apple, just mean that Samsung gets a double whammy of weak handset problems. The saving grace is that memory is so strong that it easily overpowers even the bad handset news.

Moderating capex is a good thing
Samsung may be slowing capex to make sure it doesn’t get into an oversupply condition given all the capacity coming on line. The weak handset market, which has only developed lately, is likely the reason they haven’t set the new, lower capex, as they are likely still figuring out the impact of weak sales.

This may be a little hard to do as we are going through a seasonally weak period for handsets, so it may be harder to figure out what’s seasonal and what’s secular decline.

The slowing capex matches up to what we heard out of LAM
Lam’s projection of second half moderation sounds like it matches the expectations of its biggest customer. The only problem is that even the customer doesn’t know how much it will slow, so Lam’s projection is likely an educated guess at this point. The tone from Samsung sounds like the capex cut may be a bit steeper. We are sure that display capex will get an even sharper cut given profitability issues.

Obviously Samsung doesn’t want to hinder its memory business, which is going gangbusters, but it also doesn’t want to get to an oversupply condition, and it’s probably better to come in on the low side of capex.

We’ll see what KLAC says
Given the negative flow of earnings news so far, we have set low expectations for KLAC on Thursday. Metrology and yield management are less memory centric and are bought at different points in the life of a fab or a process, so their view may differ some and probably differs more from Samsung’s than Lam’s, which is in lock step with its biggest customer. KLAC also does not have the 84% memory exposure that Lam does. Samsung does sound like foundry is doing well, just not as well as the blowout performance on the memory side.

Obviously when Applied reports we will likely get similar guidance as we heard out of Lam but that is weeks away.

The Stocks
The Samsung news is just piling on to what we already know and beating a dead horse does not usually push the stocks down further. Sometimes investors view this as getting past all the bad news flow as we finally let all the wind out of the sails.

What it does reinforce is that the recovery from this negative news will not be all that quick and will impact even Samsung over the next few quarters.

Teflon NAND
So far the NAND business and even DRAM have remained largely impervious to other issues in tech and the semi business. The pile-on of negative news just increases our nervousness to even higher levels as things would get really bad really fast if memory fell off….but so far it hasn’t….we are still buyers of Micron because of that.

Also read: TSMC Adds Negative Semiconductor News


Webinar: ASICs Unlock Deep Learning Innovation
by Daniel Nenni on 04-27-2018 at 12:00 pm

In March, an AI event was held at the Computer History Museum entitled “ASICs Unlock Deep Learning Innovation.” Along with Samsung, Amkor Technology and Northwest Logic, eSilicon explored how these companies form an ecosystem to develop deep learning chips for the next generation of AI applications. There was also a keynote presentation on deep learning from Ty Garibay, CTO of Arteris IP.

Over 100 people showed up, including myself, for an afternoon and evening of deep learning exploration and some good food, wine and beer as well. The audience spanned chip companies, major OEMs, emerging deep learning startups and research folks from both a hardware and data science/algorithm point of view. The event covered a lot of ground.

For those who couldn’t make it and interested parties around the world, there will be a webinar version of this event broadcast on May 2 from 8-9AM and 6-7PM Pacific time. I’ll be introducing the webinar and moderating the event. More than 400 people have registered already which is a record number for webinars I have been involved with, absolutely! You can sign up here:

ASICs Unlock Deep Learning Innovation HBM2/2.5D Ecosystem for AI Webinar
Deep learning algorithms, powered by neural nets, hold promise to automate our world in ways previously reserved for science fiction. Computers and cell phones that recognize us and talk to us, along with cars that drive us are just a few of the revolutionary products on the near horizon.

Practical implementation of this technology demands extreme performance, low power and efficient access to massive amounts of data. Advanced ASICs play a critical role in the path to production for these innovations. In fact, many applications cannot be realized without the performance and security that a custom chip provides.

What is needed is an implementation platform supporting 14nm and 7nm FinFET process nodes to address the challenges of deep learning.

Please join Samsung Electronics, Amkor, eSilicon and Northwest Logic as we explore a complete implementation platform for deep learning ASICs. The webinar will be moderated by Dan Nenni, CEO and founder of SemiWiki.

May 2, 2018
8:00-9:00AM or 6:00-7:00PM

8AM Registration and 6PM Registration

And here is my opening statement thus far:

Our insatiable need for data is driving IP traffic to increase by 3X from 2015 to 2020. In the five years following 2015 there will be (as we are seeing now) a dramatic increase in the number of connected devices and an improvement in broadband speeds of almost 100%. This, coupled with the increase in internet users and the huge amount of video that we are posting (and viewing), is driving semiconductor companies to build highly complex chips to meet the underlying requirements for bandwidth. It is also driving WAN redesign: moving applications to the cloud and improving Ethernet switching speeds from 40Gbps to 400Gbps by 2020.
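As a quick sanity check on those growth figures (my own arithmetic, using only the numbers quoted above):

    # Implied annual growth rates behind the statements above.
    traffic_growth, years = 3.0, 5
    cagr = traffic_growth ** (1 / years) - 1
    print(f"3X traffic over {years} years ~= {cagr:.1%} per year")         # ~24.6% per year
    print(f"Ethernet switching speedup: {400 / 40:.0f}x (40Gbps to 400Gbps)")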

Deep learning is a specific machine learning technique that uses neural network architectures, requiring large amounts of data and compute power. There is a need for hardware acceleration for deep learning computation.

Deep learning is deployed today in all major data centers (cloud) and in devices (edge)

Deep Learning chips have 2 main functions: Training & Inference

Deep Learning applications can be divided into 3 main categories covering most industries:

  • Image/Video: (object/image/facial recognition) Main industry: Automotive, Social Media, IoT, Advertising, Surveillance, Medical Imaging
  • Voice: (speech recognition, language translation) Main industry: Social Media, Smart Homes, IoT
  • Text/Data: (data mining, big data analytics, decision making) Main industry: Finance, cloud services, research.

Deep learning chipsets focus on 2 main functions – training and inference.

Training:

Training the neural network requires massive amounts of training data, storage and compute power. Training ICs are typically in the cloud and in some high-end devices.

Inference:

Inference uses the trained network to provide an ‘intelligent’ analysis/result, which requires less data, storage and compute power than training. Inference ICs are typically in AI devices, with some in the cloud for low-latency applications.
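The training/inference split can be seen in even the smallest example. The sketch below uses a tiny logistic-regression model in NumPy purely as an illustration of why training is the compute- and data-heavy phase while inference is a single cheap forward pass; it is not tied to any of the webinar participants’ products.

    # Training loops over a dataset many times and updates weights; inference is one
    # forward pass with the frozen weights.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 8))                    # 1000 samples, 8 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels

    w, b, lr = np.zeros(8), 0.0, 0.1

    # Training: many passes over the whole dataset, gradient update each pass.
    for epoch in range(200):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # forward pass over all samples
        grad_w = X.T @ (p - y) / len(y)               # gradients
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b

    # Inference: one forward pass on a single new sample using the trained weights.
    x_new = rng.normal(size=8)
    prob = 1.0 / (1.0 + np.exp(-(x_new @ w + b)))
    print(f"predicted probability: {prob:.2f}")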

I hope to see you there!


Achronix Momentum Building with Revenue Growth, Product/Staff Expansion, New HQ
by Camille Kokozaki on 04-27-2018 at 7:00 am

5G Wireless, Network Acceleration, Data centers, Machine Learning, Compression, Encryption fueling the Growth

Building on its increasing momentum, Achronix Semiconductor Corporation held a ribbon-cutting ceremony on Tuesday, April 25, attended by Santa Clara Mayor Lisa Gillmor, customers, partners, employees and company executives. President & CEO Robert Blake addressed the attendees with details on the company’s roots, from its founding in 2004 to its move from New York to the Valley in 2006. He thanked the investors and the employees for their continuing support and dedication in getting the company to its current state of fast growth: exceeding $100 Million in revenue, reaching 700% growth year-over-year, achieving profitability and prompting an increase in talent search to develop and support a growing customer base.

He highlighted the evolution from 1980s glue-logic integration to ASIC prototyping to the current state-of-the-art implementation efficiencies that in certain cases replace CPUs, GPUs and ASICs in data acceleration, machine learning and artificial intelligence applications. Blake stressed the importance of, and the contribution to their success by, their partners in semiconductors, EDA, packaging, IP, test and manufacturing, in addition to their customers and employees. He was followed by Santa Clara Mayor Lisa Gillmor, who expressed her delight at seeing Achronix grow in Santa Clara, a city rapidly becoming the technology center of Silicon Valley. She reminded the audience that a testament to Achronix’s success was its selection as Company of the Year at the 2017 Americas ACE Awards.

Santa Clara Mayor Lisa Gillmor and Robert Blake, Achronix CEO inaugurating the new Achronix headquarters

The new HQ facility occupies 25,000 square feet, houses 75-80 of the roughly 100 employees worldwide, and is located at 2903 Bunker Hill Lane, Santa Clara, right by the Santa Clara Convention Center and Levi’s Stadium. Achronix also has a research and development and design center presence in Bangalore, India, and other sales offices and representation in the US, Europe, and China.

Achronix offers both FPGA and embedded FPGA leading technologies with its Speedster® FPGA family and its Speedcore™ eFPGA IP solutions, respectively. These offerings are in high demand in the high-growth market of hardware accelerators designed to offload CPU data processing and supercharge system performance for applications such as 5G wireless, AI/ML, high-performance computing (HPC), data centers, and automotive, stressed Steve Mensor, Achronix VP of Marketing. Speedster22i FPGAs support up to 1 million effective Look-Up-Tables (LUTs) and 86 Mbit of embedded RAM built on Intel’s 22nm FinFET process. Speedster22i HD devices include embedded hard IP for communication applications including 10G/40G/100G Ethernet, 100G Interlaken, PCIe Gen3 x8 and DDR3 ×72. Additionally, Speedster22i FPGAs have up to sixty-four (64) lanes of 10.375 Gbps SerDes and up to 996 high-speed general-purpose I/Os.

Achronix also provides accelerator boards and design tools. The Achronix PCIe Accelerator-6D board is the industry’s highest memory bandwidth, FPGA-based PCIe add-in card for high-speed acceleration applications. On the card is the Speedster22i HD1000 FPGA, which connects to six independent DDR3 memory controllers, allowing for up to 192 GB of memory and 690 Gbps of memory bandwidth. The ACE tools work with industry-standard synthesis tools, easing the mapping of user designs into Achronix devices.

Achronix is currently developing its next-generation FPGAs based on TSMC’s 7nm process node, capable of up to 5 Million LUTs and a core frequency of 750 MHz. The ACE tools will be extended to address Machine Learning (ML), AI, search, compression and encryption solutions and use cases.

About Achronix Semiconductor Corporation
Achronix is a privately held, fabless semiconductor corporation based in Santa Clara, California. The company developed its own FPGA technology, which is the basis of the Speedster22i FPGAs and Speedcore eFPGA technology. All Achronix FPGA products are supported by its ACE design tools, which include integrated support for Synopsys (NASDAQ:SNPS) Synplify Pro. The company has sales offices and representatives in the United States, Europe, and China, and has a research and design office in Bangalore, India.