
I Have Seen the Future – Cornami’s TruStream Computational Fabric Changes Computing

by Mike Gianfagna on 11-03-2020 at 10:00 am


Here is another installment regarding presentations at the Linley Fall Processor Conference. Every now and again, you see a presentation at an event like this that shakes you up. Sometimes in a good way, sometimes not so much. I attended the Cornami presentation on its new TruStream computational fabric and I was definitely shaken up, in a good way. As I watched the presentation, I kept thinking that I was seeing the future. And that brought back memories of the 1964/1965 World’s Fair in New York. I recall General Motors had a pavilion there whose tagline was “I Have Seen the Future”. I did a little digging and found that they used the exact same tagline at the 1939 World’s Fair, also in NY. Now that’s long-range, consistent branding. Enough history. Back to how Cornami’s TruStream computational fabric changes computing.

Paul Master

The presentation was delivered by Paul Master, CTO and co-founder of Cornami. Paul began by talking about post-Von-Neumann algorithms. These are algorithms that cannot be implemented with current computing architectures, typically due to their computational complexity and memory access requirements. Machine learning is like this, but the focus of Paul’s talk was implementing fully homomorphic encryption. This algorithm is definitely a post-Von-Neumann candidate, as it breaks current computing architectures. The Forbes article cited here talks about this barrier. Hang in there, this will all start to make sense in a moment.

Valuable and Fragile Data

Paul then touched on the value of data in the connected, AI/ML-driven world. He pointed out that “data is the new oil” in the AI/ML economy. Vast amounts of data are uploaded to the cloud each minute. The processing of that data creates tremendous potential for new products and services. But how is all this value protected? Paul offered a laundry list of things to consider:

Can you guarantee that…

  • The processor/network/support silicon has not been compromised with a hardware trojan?
  • The PCB has not been compromised by the addition of some trojan hardware?
  • The microcode on your machine/network has not been compromised?
  • The firmware has not been compromised?
  • The software compilers and the RTL synthesis tools have not been compromised?
  • The operating system has not been compromised?
  • The Apps have not been compromised?
  • There are no man-in-the-middle attacks?
  • Your employees have not been compromised?
  • The employees of every company your data touches have not been compromised?

Have a headache yet?  I did. Sadly, the answer to all these questions is NO, and this is the very essence of the fragile, unprotected world we live in. There’s plenty of work going on here and no shortage of ideas on how to secure the supply chain, data storage, the network, the cloud and so on. All these approaches take an “outside in” view. That is, protect the data with safeguards around anything that touches that data. What if there was another way? What if you could inherently protect the data itself? I’m sure encryption is coming to mind as a cure. The problem with traditional encryption is twofold. First, current encryption algorithms are threatened by things like quantum computing, an example of a post-Von-Neumann architecture. While this threat is not here today, it will be soon. Second, the fact that you need to decrypt the data to operate on it is here today.

Ready to give up? Not so fast. Paul explained that fully homomorphic encryption (FHE) makes it possible to analyze or manipulate encrypted data without revealing the data to anyone. DARPA has called FHE the holy grail of cryptography. By protecting the data at its source, you can elegantly solve the massive problem of securing everything the data touches. This is an “inside out” view of the problem and it can change the world. There’s just one problem. FHE needs to run in real time to support the process of operating on encrypted data and, as mentioned, FHE breaks current Von-Neumann architectures.
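To make the homomorphic idea concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so addition happens without the data ever being decrypted. To be clear, this is my own illustration, not Cornami's scheme; Paillier is only additively homomorphic, whereas FHE schemes (such as BGV or CKKS) support arbitrary additions and multiplications. The parameters below are deliberately tiny and completely insecure.

```python
# Toy Paillier encryption: multiplying ciphertexts adds the plaintexts,
# so a server can compute on data it cannot read. Illustration only --
# the key size is absurdly small, and Paillier is additively homomorphic,
# not *fully* homomorphic like the schemes FHE requires.
import math
import random

p, q = 17, 19                          # toy primes (insecure)
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                   # valid because we use g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:         # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n2  # computed on ciphertexts only
print(decrypt(c_sum))                   # 141 == (a + b) % n
```

Real FHE schemes must additionally manage the noise that accumulates in ciphertexts (via "bootstrapping") so that arbitrarily deep circuits can run; that noise management is what makes FHE so computationally expensive and why Paul frames it as a post-Von-Neumann workload.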

The Answer

We’re now in a position to discuss why Cornami’s TruStream computational fabric changes computing and how it will change the world. Analogies help to illustrate concepts and Paul had a good one. Consider the piston engine. It takes in fuel, ignites it to create torque and power, and then expels the exhaust and repeats the process. There are built-in limits to the performance of such a system. The reciprocating mass of the piston is one such limit. At some frequency, the mechanical parts of this system will fail. There is also the movement of fuel in and out of the combustion chamber. There are physical limits to how fast this can be done. These are reasonable proxies for the limitations of Von-Neumann architectures: the raw speed of the processor is the piston limitation and memory access is the fuel limitation. In spite of these shortcomings, the piston engine powered aircraft for many years and things were fine. Then the jet engine was invented. Mechanical limitations were replaced by continuous flow elements and the world changed.

Paul explained that the Cornami TruStream computational fabric is like this. It is composed of basic building blocks called FracTLcores. Each core has a wrapper, memory and a reconfigurable core. These blocks are self-similar at each level of hierarchy and are connected by a unique network-on-chip (NoC). These configurable processing elements can be assembled in many ways to solve specific problems, including running FHE in real time. Below is an example of 16 FracTLcores configured to implement a 64-bit RISC-V core. Part of the secret sauce here is that this type of configuration incurs no speed degradation, and FracTLcores can be configured to optimally solve specific computational problems. If you need more speed, just add more FracTLcores.

Hierarchical assembly of FracTLcores

Paul explained that each FracTLcore can execute seven decisions per clock cycle, and multiple chips can be assembled to solve larger problems with no performance degradation. The current estimates suggest running FHE in real time will require a 1,000,000X speedup. Paul explained that Cornami can deliver that.

The Future

So there is a glimpse into the future. It should be noted that not only can the Cornami TruStream computational fabric run FHE in real time, it can also secure the model and training data for all kinds of AI algorithms and run those algorithms at speeds that are unattainable today. These are indeed bold statements and I’ve only scratched the surface regarding the details behind how all this works.

You can learn more on the Cornami website. I can also recommend an insightful interview Dan Nenni did with Wally Rhines, who happens to be the president and CEO of Cornami. Without a doubt, Cornami’s TruStream computational fabric changes computing. Now you’ve all seen the future. I’ll leave you with a quote on one of Paul’s last slides.

Secure the data – Not the data center

 

Also read:

Conquering the Impossible with Aspiration and Attitude

CEO Interview: Wally Rhines of Cornami

Podcast EP65: Trust But Verify – The Backstory of Applied Materials and Cornami with Wally Rhines


Verifying PCIe 5.0 with PLDA, Avery and Aldec

by Bernard Murphy on 11-03-2020 at 6:00 am


Mike Gianfagna, a fellow SemiWiki blogger and a one-time colleague at Atrenta, shared a useful piece of marketing advice. If your company is not the biggest fish in the pond and you want to appear more significant, team up with other companies to put on an event, say a webinar. Pick your partners so that you can jointly offer a larger, more rounded view of a current topic than any one of you could provide on your own, which will automatically draw more registrations and views. A pretty powerful idea that small companies should exploit more often. Aldec, PLDA and Avery did just that, in a joint webinar on verifying PCIe 5.0. There’s power in numbers.

PLDA

PLDA is a European-based company with offices in Silicon Valley. They specialize in high-speed interconnect IP, primarily PCIe, CCIX, CXL and GenZ, designed for implementation in FPGA or ASIC applications. They’re cagey about customers, as small companies usually are. They have a presence in Europe and the US, and now also in Asia. And they recently announced a design win on 5nm. PLDA supplied the PCIe 5.0 core used at the heart of this webinar/demo.

Avery

I knew of Avery many years ago, founded by Chilai Huang, previously of Gateway Design (the people who started Verilog) and Cadence. Chilai started Avery after he left Cadence, with a focus as I remember on symbolic simulation. This is still offered I believe in SimXACT, with an emphasis on X-verification. More interesting though in this context, they offer quite a broad range of VIP and they claim they are leaders in NVMe and CCIX/CXL, all of which I believe are founded on PCIe. So Avery provided the PCIe 5.0 VIP for this webinar/demo.

Aldec

I’ve written before on Aldec. They are a venerable EDA company, founded in 1984 and based in Henderson, Nevada. Again, they’re not especially open about customers, but I believe they are quite heavily used in FPGA design, especially in mil-aero applications (they have a big focus on DO-254 compliance and requirements tracking). I have no doubt that some of those designs transition to ASIC so they cover both implementation options. They have a full-range solution for FPGA design, simulation, emulation and prototyping. They also offer their own development and prototyping boards, based on Xilinx Zynq devices. Aldec provided their Riviera-PRO simulation platform for this webinar/demo.

The Demo

The demo itself is pretty technical. All about how they build the DUT and testbench. And how they encapsulate the PCIe IP with a pipe box to feed in transactions. With the Avery bus functional model as a front-end to this process, generating transactions to feed that pipe box. While Riviera-PRO simulates the whole thing and monitors transactions, checks and the like. All far above my technical paygrade I’m afraid. You should watch the webinar to get the real meat.

Which brings me back to my original point. These three companies together were able to deliver a complete story. One that none of them could have delivered on their own. Worth remembering. You can register to watch the webinar HERE.

About Aldec
Established in 1984, Aldec is an industry leader in Electronic Design Verification and offers a patented technology suite including: RTL Design, RTL Simulators, Hardware-Assisted Verification, SoC and ASIC Prototyping, Design Rule Checking, CDC Verification, IP Cores, High-Performance Computing Platforms, Embedded Development Systems, Requirements Lifecycle Management, DO-254 Functional Verification and Military/Aerospace solutions. www.aldec.com


SkillCAD Adds Powerful Editing Commands to Virtuoso

by Tom Simon on 11-02-2020 at 10:00 am


Despite the large role of place and route in IC design, there will always be a need for custom layout design. This is particularly true in radio frequency (RF), power management (PM) and power amplifier (PA) circuits, among others. Cadence Virtuoso is by far the leading tool for creating these custom designs. Virtuoso has a sophisticated data model that includes connectivity and a wide range of objects for use in advanced design layout. Yet creating many design elements in Virtuoso can become extremely repetitive, increasing the time required and the likelihood of error. SkillCAD is a provider of tools written in SKILL that make creating custom designs faster and more reliable. Their developers, who are experts in layout design and SKILL language coding, have been working with custom design teams at leading IC companies for many years to refine new commands for custom layout.

WEBINAR: Increase Your Layout Team’s Productivity with SkillCAD

The nice thing about SkillCAD is that its Layout Automation Suite (LAS) runs seamlessly within the Virtuoso environment. The layout produced is completely compatible with all the built-in Virtuoso commands. This means users never have to leave the Virtuoso environment to use SkillCAD commands, and the layout created by SkillCAD tools can be edited like any other layout data generated by Virtuoso. With over 120 commands not offered by Virtuoso, SkillCAD addresses many specific design styles and common tasks that designers must perform.

SkillCAD can also be easily integrated into a layout team’s design methodology. Whatever layout design approach is used (bottom-up, top-down, or any combination of approaches), the power and versatility of the SkillCAD tools will significantly shorten layout cycle times.

  • The powerful pin placement and modification tools can reduce the placement of hundreds of pins from hours to a matter of minutes.
  • The many metal routing and bus routing tools make routing and editing metal routes easy and efficient. Running wide power and ground metals and creating mesh ground planes with the slotted metal tools is as easy as routing a single metal wire.
  • The dummy fill and density checking tools make generating matched dummy metals over critical circuit areas, and quickly checking density percentages in circuit blocks, as easy as specifying the layers and identifying a circuit region.

SkillCAD Creating a Metal Bus

In addition to these commonly used tools, SkillCAD also provides powerful tools for generating and editing guard rings around devices, circuit elements, and even entire circuit blocks.  There are tools for generating shielding around sensitive metal signals, and even creating the complex twisted metal structures, with shielding, that are common for sensitive RF (Radio Frequency) transmission lines.  SkillCAD also includes tools for measuring circuit data, comparison viewing of old versus new circuit data, viewing cross sections of MOS devices, and many other tools not mentioned here.

Cadence provides the base tools and the design framework. SkillCAD provides the versatility, ease of use, and automation. Together, Cadence plus SkillCAD provide a powerful, versatile, and easy-to-use design environment for custom integrated circuit layout. More information is available on the SkillCAD website.

Also Read:

SkillCAD Layout Automation Suite has Over 120 Commands Backed by 60 Customers

CEO Interview: Pengwei Qian of SkillCAD

Webinar: A Practical Approach to FinFET Layout Automation That Really Works


Designing Smarter, not Smaller AI Chips with GLOBALFOUNDRIES

by Mike Gianfagna on 11-02-2020 at 6:00 am


On October 20 at the Linley Fall Processor Conference, GLOBALFOUNDRIES made a compelling case for designing smarter, not smaller AI chips. The virtual conference was filled with presentations on the latest architectures and chips for all types of AI/ML applications. It was therefore a refreshing change of pace to hear the fab technology view from GF. After all, without a fab everyone is just presenting simulation results.

Hiren Majmudar

Hiren Majmudar, vice president and general manager of the Computing Business Unit at GF, gave the presentation. Hiren has a strong command of the subject matter and gave a very well-thought-out presentation. He had a long career at Intel, with some time at Magma/Synopsys and SiFive, before joining GLOBALFOUNDRIES.

Hiren began with data from ABI Research showing a substantial rise in ASIC TAM for the AI silicon market over the next few years when compared to GPU, FPGA and CPU. I have a long history in the ASIC market, so Hiren was singing my song. At the end of the day, a purpose-built ASIC accelerator will out-perform all other approaches by a substantial margin. Hiren presented a nice graphic that drives the point home.

AI accelerators address emerging power and memory bottleneck

Next, Hiren discussed a couple of very relevant, well-known trends. Moore’s Law is slowing at advanced nodes, so cost reduction and speed improvement aren’t what they used to be. At the same time, Dennard scaling has stopped working, so power is no longer reduced by moving to an advanced node either. Hiren then proposed the solution to all this: use domain-specific architectures that address performance and power needs with custom approaches, and build them on a process optimized for the problem at hand. Two GF platforms were presented as fitting the model quite well:

  • 12LP/12LP+ (FinFET), which delivers excellent performance/watt, and improves total cost of ownership while enabling optimal return on investment
  • 22FDXR®  (FD-SOI), which offers the highest TOPS/W for power constrained solutions, economically

Hiren offered some examples of how these platforms are being used. For cloud and edge servers, performance/watt for training and inference is important as these are high performance applications. Two customer applications using 12LP/12LP+ are:

  • Tenstorrent, with 368 TOPS in a 75W PCIe form factor
  • Enflame, who has packaged 14B transistors in an advanced 2.5D package

For edge devices, low power inference is key as most of these applications are battery powered. Two customer applications using FD-SOI from GF are:

  • Perceive, which runs a massive number of neural networks in a 20mW envelope with no external RAM
  • Greenwaves, which is achieving 150 GOPS at 0.33 mW/GOP

Regarding FinFET, Hiren discussed their power-optimized SRAM; memory is a critical building block for AI applications. He also mentioned GF’s design technology co-optimization, where GF works with the customer to tune the process and the design together. He also described GF’s dual work function gates, which deliver several advantages:

  • Lower variability
    • Analog matching
    • SRAM Vmin
    • Improved derate in AOCV
  • Less gate induced drain leakage (better static power)
  • Better mobility (better drive)
  • Lower junction cap (improved AC performance)

An AI-focused logic library and memory structures are also provided, and there is a third-party IP ecosystem for the technology as well.

Hiren described the fully-depleted silicon on insulator 22FDX technology as FinFET-like.  This technology delivers many of the benefits of FinFET in a simpler bulk technology. Thanks to GF’s unique body bias ecosystem, ultra-low leakage designs are possible. The technology is useful in the automotive sector for ADAS applications. GF’s MRAM technology also helps here to deliver low power and low latency.

Hiren concluded with some more color about how GF partners with their customers to deliver optimized chips that target specific AI workloads. It was clear that designing smarter, not smaller AI chips is a winning strategy with GF. You can learn more about GF’s 12LP technology here and their 22FDX technology here. You can also get the views of Linley Gwennap, Principal Analyst, The Linley Group in his piece entitled Building Better AI Chips.

Also Read:

The Most Interesting CEO in Semiconductors!

GLOBALFOUNDRIES Goes Virtual with 2020 Global Technology Conference Series!

Designing AI Accelerators with Innovative FinFET and FD-SOI Solutions


Are TSMC and Intel Partnering in Arizona?

by Daniel Nenni on 11-01-2020 at 10:00 am


After months of back and forth, TSMC finally announced plans to build a fab in Arizona. The announcement was made not in the press or on the most recent investor call but on LinkedIn. A sign of the times I guess, but since they need to hire a bunch of semiconductor people it was more than appropriate.

“We’re delighted to catch up with you that TSMC had announced its intention to build and operate an advanced semiconductor fab in Arizona. This U.S. advanced foundry fab not only enables us to better support our customers and partners, we also wish to attract global talents to work with us to change the world. At TSMC, we are working consistently to provide the most advanced technologies to enrich human life. Join us to initiate and witness the new semiconductor era with remarkable people around the world.” https://lnkd.in/gX-aEre

The question is why? TSMC can build fabs in partnership with the Taiwanese government for pennies on the dollar, which is in fact one of the reasons why TSMC is the dominant semiconductor foundry. TSMC did build a leading-edge fab in China to improve relations with the Chinese government, but spying was a serious problem, so TSMC has slowed that effort and now has extensive security protocols in place, which is for the greater good of TSMC, absolutely.

One rumor is that TSMC is working with the Federal and State Government to better secure the semiconductor supply chain in the United States. The US Government is finally putting up some money to offset costs of semiconductor manufacturing in the US. Let’s not forget that manufacturing started here but was shooed away by the EPA about the time I started in semiconductors in the 1980s.

U.S. lawmakers propose $22.8 billion in aid to semiconductor industry

Another rumor, which I may have just started, is that TSMC and Intel are already working together in Arizona. Arizona is Intel’s home court, so is it a coincidence that TSMC is landing there? Another coincidence: Intel is discussing outsourcing designs to a foundry, a decision to be made by the end of the year according to Intel CEO Bob Swan. I’m sorry, but I really don’t believe in coincidence, especially when it comes to TSMC. TSMC management is the best this industry has seen in decades, so this storyline is all about TSMC.

Another rumor is that Samsung is in the running for the Intel outsourcing gig, which was fortified by an Intel executive giving a keynote at the Samsung Foundry day last week. Or maybe that was just part of what Bob Swan said about Intel working closer with the semiconductor ecosystem? Personally, I think courting Samsung just to keep TSMC on its toes was a bad move by Intel. Given AMD’s recent moves, TSMC holds all of the outsourcing cards, so Intel should keep this negotiation at the utmost professional level, my opinion.

This reminds me of the head-to-head foundry battle between Altera and Xilinx. In the FPGA business, the first to silicon got a market share boost. Altera was partnered with TSMC, and Xilinx with UMC. Xilinx actually had a dedicated floor in the UMC Hsinchu HQ. At 40nm UMC fell behind, so Xilinx jumped to TSMC and actually beat Altera to 28nm. The rest is history: Xilinx beat Altera to first silicon from that day forward and now dominates the $5B+ FPGA market. AMD acquiring Xilinx makes this even more interesting. I will write more about that later because it is a great move by AMD.

Bottom line: To better compete with AMD, Intel will have to closely partner with TSMC like Xilinx did. Just my opinion of course but who would know better than me?



Elon Musk’s Aspirational Automation

by Roger C. Lanctot on 11-01-2020 at 8:00 am


Tesla Motors kicked off its latest earnings release with news of record results and the release of the company’s Full Self Driving (FSD) beta. Leading up to the earnings, most attention was focused on whether Tesla would meet its original target of 500,000 vehicle shipments this year in spite of the negative impact of the pandemic.

Chief Financial Officer Zachary Kirkhorn re-affirmed the company’s commitment to its original guidance of shipping 500,000 vehicles. The real news, though, was Tesla’s ongoing transformation of the automobile industry.

In an automotive world seemingly obsessed with the pursuit of autonomous vehicle operation, Tesla continues to rewrite the rules. In the absence of significant regulatory guidance from the National Highway Traffic Safety Administration or enforceable global limitations, Tesla continues to press its automation case with Autopilot as a beta requiring driver attentiveness.

The function was already slotted as a premium feature with an $8,000 price tag on new Teslas like the Model 3. Tesla is now hinting at a future $2,000 price increase, to $10,000, for FSD. More importantly, though, the new Autopilot beta is being offered “to a small number of people who are expert and careful drivers.”

So this premium vehicle function is not only a luxury, it is a privilege. Tesla owners with suitably outfitted vehicles – capable of supporting the software update – will somehow have to measure up to FSD eligibility. I write “somehow” because only Tesla knows what the qualifications are or how to attain them.

From Tesla’s last earnings call the meaning of this is clear. Tesla is monitoring and evaluating its drivers even as it is working to determine which driving parameters are most predictive for reduced claims allowing for less expensive insurance premiums.

As Tesla CEO Elon Musk stated on the latest earnings call, “insurance could very well be 30% to 40% of the automotive business.” So Tesla is offering multiple tradeoffs for data sharing from Tesla vehicles: 1) FSD availability; 2) lower insurance premiums.

Tesla is driving a narrative whereby FSD capability is an aspirational function that any proud Tesla owner would want to possess. So Tesla owners will be vying for the privilege of buying a car that will be partially capable of driving itself. (Tesla is even offering retrofits for Tesla buyers regretting having NOT purchased the FSD option.)

The reality, of course, is that Tesla owners are actually paying for the privilege of teaching Tesla how to refine its computing algorithms. If Tesla were to overtly ASK its customers to help teach its systems how to drive (like Microsoft asking for your help refining its software by sharing your error messages) the sheer awkwardness would be a non-starter. Simply offering FSD as an expensive option has completely altered the value proposition.

With its connected cars enabled for semi-autonomous operation with the FSD beta Tesla is simultaneously letting drivers teach its systems to refine their self-driving capabilities, while laying the groundwork for the most detailed and accurate insurance underwriting platform in the world. Musk has made it clear that only the best drivers will be invited into the FSD beta domain.

To be clear, Musk is not overtly penalizing bad drivers – i.e. using their data against them. He is rewarding those drivers considered to be, let’s say, least crazy, in Musk’s own words.

Looking across the autonomous vehicle landscape, there is no comparably compelling value proposition under development. Waymo and Cruise Automation are dithering with low-speed robotaxis practically doomed to failure within an operational domain that will not scale and cannot compete with human driven alternatives.

AV Shuttles are glorified robotaxis that are gaining traction but at a pace comparable to their own low speed operation. Aftermarket AV solutions from Comma.ai and Ghost are simultaneously intriguing and terrifying and likely to spur regulatory and insurance industry backlash.

Auto makers Audi and Daimler are struggling to overcome internal legal and external regulatory barriers to bringing Level 3 automation systems into the market – so far, unsuccessfully. The Germans face a third barrier of a driving public raised on the importance of vehicle performance in human hands. It’s not clear that these companies have come to grips with how a self-driven experience will enhance their brand value – especially at lower speeds.

One standout among a vast field of competing auto makers is General Motors and its Super Cruise enhanced cruise control system. Current owners of Super Cruise-equipped Cadillacs have made their enthusiasm for the feature known, and GM is steadily expanding its scope to more highways and a broader range of road classifications, setting the stage to deliver the feature to as many as 22 additional vehicle models.

GM has not yet positioned Super Cruise (or the soon-to-arrive Ultra Cruise) as a system built around both aspiration and data sharing. But GM probably has the best shot at delivering such a Tesla-like value proposition.

Tesla’s industry impact does not end at making autonomous driving aspirational. Like Philippe Petit, Tesla’s Musk likes to engage in his high wire act without a net.

Tesla’s “generalized neural net-based” self-driving system is intended to operate without a map, without a cellular connection, and without LiDAR. “There is no need for high-definition maps or a cellphone connection,” Musk told investors and analysts on the earnings call.

This is certainly where GM and Tesla part company. GM’s Super Cruise is indeed relying on a map, enhanced positioning technology, and wireless connectivity.

Many electric vehicle companies have stepped forward to take on Tesla. None have come to grips with the underlying aspirational value proposition that is increasingly defining the brand. It’s not clear that any competitor has the stomach for the kind of risk threshold demonstrated by Tesla’s Musk. It’s not about batteries or charging stations or gigafactories. It’s about vision and balls.


Downplaying SMIC – Uplaying TSMC

by Robert Maire on 11-01-2020 at 6:00 am

  • KLAC sports solid QTR & Guide- Foundry & Logic drivers
  • Management remains dismissive of SMIC embargo
  • Execution & financials are solid but macro headwinds remain
  • Nice September Quarter

KLA reported revenue of $1.54B and non-GAAP EPS of $3.03 versus street expectations of $1.49B and EPS of $2.77. Guidance is for revenues of $1.585B ±$75M and an EPS range of $2.82 to $3.46, midpoint $3.14.

Revenues and revenue guidance were at the high end, though perhaps a bit conservative compared to street expectations.

Also applying for Chinese export license with no anticipated impact

The company claims that there will be no impact in the December quarter due to the embargo on SMIC.

Obviously SMIC was never a significant amount of business, but nonetheless, our view is that there was likely some small impact. Management said they will be applying for export licenses but, unlike Lam, did not give any body language on the expectation of those licenses being granted.

China remains a large exposure at 32% of business

China was 32% of KLA’s revenues (versus Lam’s 37%) and the largest geographic area with Taiwan (read that as TSMC) second at 24% of business. Korea was 12% and the US was 11%.

At roughly a third of business, China is a very large exposure if the embargo spreads beyond SMIC. Management is clearly trying to downplay the SMIC embargo and potential related issues that could impact business. What licenses, if any, are granted, and the final impact, are yet to be determined; we will start to get a clue when the December quarter is reported.

Foundry at 59% remains the key driver

Not surprisingly, memory was 31% and logic 10%. TSMC and China are the two big buyers, with TSMC going whole hog on all things EUV.

Though management talked about growing faster than industry average, growth seems to be in high single digits with WFE growing about 10% from 2019’s $52B-$53B. This likely suggests that the acquired business is a bit slower than the traditional business.

Reticle Inspection is a big ticket item

High priced reticle inspection tools were a significant driver in the quarter. The industry shift to EUV likely helps this business quite a lot even without actinic inspection. We have heard that Lasertec is growing even faster in reticle inspection as demand is obviously off the charts from customers.

As new products roll out for KLA we could see more upside. In the near term there could be some lumpiness as customers digest both EUV scanners and associated yield management tools.

Acquired business a little soft offsetting strength in core

Specialty Semiconductor & PCB was down 10% Q/Q, while core process control was up 10% Q/Q. Higher margins on the core business likely helped offset some of the revenue weakness. Going forward we might expect some variability in the non-core business.

The Stock

While KLA put up a nice quarter and a good guide, it was by no means a blowout that could offset near-term macro market sentiment or fully make up for concerns over the SMIC embargo and further related problems. It seems to us that these headwinds will act in concert to slow the progress of an otherwise great story.

KLA’s execution remains near perfect, with financial returns that are virtually bulletproof: $329M in buybacks and dividends and 96% free cash flow conversion.

So while it remains best in class, we don’t think investors are going to be adding to existing positions in the near term despite the recent softness.
The macro issues likely remain overwhelming, as COVID, coupled with China and economic-growth concerns, creates a cloud.

Though there is no near-term slowing forecast for semiconductor equipment, we remain cognizant of gathering storm clouds at a higher level.

We see little impact on related names such as AMAT, LRCX, and ASML.

Also Read:

Coronavirus Remains Good for Semiconductors but not China

Is Intel Losing its Memory?

ASML is Strong Because TSMC is Hot!


Free Webinar on SPICE Simulation
by admin on 10-31-2020 at 10:00 am

SPICE Simulation

The world of SPICE simulators is one filled with compromises. Typically, it is possible to choose the highest accuracy and pay a performance and capacity penalty, or to choose high speed and capacity but give up accuracy in the process. Many semiconductor companies have been turning to Primarius Technologies to help escape these compromises. Over the last ten years, with a team of industry veterans and support from tier one VCs, they have developed a comprehensive SPICE simulation offering that speeds up full SPICE, offers extremely accurate high-speed SPICE and adds some unique features to enable statistical analysis and Design for Yield (DFY).

In an upcoming free webinar airing on November 11th at 10AM Pacific time, Yeuh Wang from Primarius will present a complete overview of their NanoSpice, GigaSpice, and NanoSpice Pro products. NanoSpice is a pure SPICE simulator whose parallel processing delivers excellent linear scaling as cores are added. It offers comprehensive model support, wide foundry acceptance, a capacity of over 50 million elements, and more than a 2X speedup over other pure SPICE simulators, and it is ready for FinFET designs from 16nm down to 7nm.
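The "linear scaling as cores are added" claim is stronger than it may sound, because any serial fraction in a parallel simulator caps its multicore speedup. As a generic illustration (not Primarius code; the 95% parallel fraction is an assumed figure for the sketch), Amdahl's law shows how far a partly serial workload falls short of linear scaling:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's-law speedup when `parallel_fraction` of the runtime
    parallelizes perfectly and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 95% parallel falls well short of linear scaling:
for cores in (4, 8, 16, 32):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.95, cores):5.2f}x speedup")
```

At 32 cores the 95%-parallel workload manages only about a 12.5x speedup, which is why near-linear scaling is a meaningful differentiator for a parallel SPICE engine.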

GigaSpice is a high-accuracy alternative to FastSPICE simulators, which can be difficult to set up properly and suffer from reduced accuracy. In the webinar, Yeuh will discuss why, with lower supply voltages and higher speeds, it is not always acceptable to run only the top level of large designs with a less accurate FastSPICE. GigaSpice can handle over 10^9 elements and runs faster than FastSPICE, and Primarius has eliminated the difficult process of tuning the simulator with FastSPICE options that can often lead to inaccurate results.

Fitting between these two offerings, NanoSpice Pro is a dual-engine, multicore SPICE simulator that can run large designs that would normally require FastSPICE. It offers a manifold improvement in runtimes with the accuracy necessary to tackle all aspects of memory design, among other tasks, and it can utilize over 32 cores for parallel processing.

The icing on the cake is that Primarius also offers unique DFY capabilities within their SPICE simulation tools. These include Fast PVT, Monte Carlo and High Sigma analysis. They also have sophisticated design for reliability features that can help predict aging effects.
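As a rough sketch of what Monte Carlo analysis does for Design for Yield (this is a toy model, not Primarius's tools or APIs; the threshold-voltage numbers and the linear delay model are invented purely for illustration), one samples process variation and counts how many trials still meet spec:

```python
import random

def monte_carlo_yield(n_samples=100_000, vth_nominal=0.45, vth_sigma=0.015,
                      delay_spec_ps=120.0, seed=7):
    """Estimate parametric yield: sample threshold-voltage (Vth) variation
    and count the samples whose path delay meets spec. The linear delay
    model below is a toy stand-in for a real SPICE evaluation."""
    rng = random.Random(seed)
    passing = 0
    for _ in range(n_samples):
        vth = rng.gauss(vth_nominal, vth_sigma)
        delay_ps = 100.0 + 800.0 * (vth - vth_nominal)  # higher Vth -> slower
        if delay_ps <= delay_spec_ps:
            passing += 1
    return passing / n_samples

print(f"Estimated yield: {monte_carlo_yield():.1%}")
```

This also hints at why High Sigma analysis exists as a separate capability: plain Monte Carlo needs enormous sample counts to resolve the rare failures that matter at five or six sigma.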

The webinar will cover a lot of material, including benchmark results and a deeper dive into the technology than we can offer here. If you are looking for a way to improve the speed, accuracy, or usability of your SPICE flows, this webinar will be worth your time. You can register here for the webinar replay.

About Primarius
Primarius was founded in 2010 by a group of industry veterans and has offices worldwide in Beijing, Hsinchu, Jinan, Seoul, Shanghai, and Silicon Valley. Primarius is well recognized as an essential EDA partner for advanced process technology development and high-end chip designs. It has over 100 customers worldwide, including all top foundries and tier-one memory houses, and most other leading semiconductor companies. www.primarius.com


Nvidia Arm Acquisition Talking Points
by Daniel Nenni on 10-30-2020 at 10:00 am

Nvidia Arm Acquisition 2020

Before founding SemiWiki I competed with Arm on many different levels throughout my career and I have had various business dealings with them since. SemiWiki also published the definitive book on Arm: “Mobile Unleashed” which goes deep into the history of Arm and the top SoC companies (Qualcomm, Apple, and Samsung).

In my expert opinion, the Softbank acquisition of Arm was disruptive but, in the end, a fizzle for the semiconductor industry. Softbank made significant investments in Arm but was not able to properly monetize them under such a diverse corporate umbrella.

To be clear, Arm is a legend on the edge, always has been and always will be, but artificial intelligence and the HPC (cloud) market are critical to moving semiconductors forward. Unfortunately, after more than 10 years and billions of dollars invested, Arm is not a player in the HPC market, and AI on the edge and in the cloud must be tightly coupled, which brings us to the Nvidia acquisition.

NVIDIA to Acquire Arm for $40 Billion, Creating World’s Premier Computing Company for the Age of AI

I also know Nvidia quite well and have a huge amount of respect for Jensen Huang; in fact, I would never bet against him and his team. The Nvidia-Arm acquisition would definitely be disruptive, and it could have a huge upside for the semiconductor industry, so it is definitely worth further discussion.

Here are the talking points I have gathered thus far:

1.  Does the transaction make sense?

  • For Nvidia, which is using stock at a near-record multiple valuation, it’s a way to expand beyond graphics and AI into general-purpose computing without spending much real money. Given that, the $40B price tag is closer to $27B in real money.
  • Nvidia has already made double digit penetration into the data center. The obvious next step would be displacing more of the Intel-dominated infrastructure.
  • Success at building an ARM-based general-purpose server is a possible path. Many have tried (e.g., Calxeda, AMCC, Qualcomm, et al.) with no success so far.
  • IT professionals will resist a change from an Intel or Intel/AMD infrastructure, so the path to software compatibility is not that clear.
  • It does, however, give Nvidia a way to move from special purpose AI/ML HPC to take over more of the I/O processing that now front ends the Nvidia servers with Intel/AMD.
  • Most users would like to see more competition for Intel.
  • AMD’s recent processing advantages through TSMC is a start but the world would like an additional player.
  • Another potential advantage is the relationship building it creates for Nvidia in a host of application areas dominated by ARM (especially mobile communications, which is 40% of semiconductor revenue).
  • Since usage data may be embargoed from Nvidia access, the insights provided by ARM to Nvidia are probably limited.

2. Will the transaction make it through anti-trust restrictions?

  • Estimates range from 50%-80% probability.
  • There’s no legitimate way to stop it but there are political reasons (China).
  • For Europe, they lose yet another hope of being a driver for the worldwide semiconductor industry but the U.S. is only modestly more distasteful than Japan as an owner of ARM.
  • For the UK, the same is true, but they also have a set of commitments from Softbank, e.g., hiring about 2,500 additional employees in the UK within five years of the Softbank acquisition of ARM, keeping the headquarters in the UK, etc. Nvidia is clearly willing to honor these commitments.
  • For China, it’s probably more useful as a leverage point than as a blocked transaction. Negotiations for approval can resolve the Allen Wu CEO controversy and give increased autonomy to ARM China.
  • While China is totally dedicated to RISC-V standardization at the expense of ARM, they need to support existing designs that are ARM-based and to leave open a choice for applications where the RISC-V product and infrastructure is just not robust enough.
  • For the U.S., politics can interfere but, if this goes through the normal process in the FTC, it’s hard to see how it is turned down.
  • On the positive side, the trigger point for anti-trust restrictions is when either the individual or combined entities exceed 40% market share in the defined area of competition. Politics affects how that area is defined.
  • If the defined area is general purpose computing, Nvidia/ARM are far behind the competition so approval would enhance competition.
  • If the defined area is graphics chips, then the transaction raises a very modest flag for the ARM MALI graphics IP family. Nvidia would likely be willing to spin MALI off if necessary, removing that flag.
  • On the negative side for the transaction, the FTC normally listens to customers (not competitors). Apple, Qualcomm, Broadcom, etc. are unified in opposing the transaction.
  • This will probably require some pledges (with teeth) from Nvidia to sustain the independence and openness. They say they are willing to do this.
  • Assuming Nvidia complies, the advantage of the acquisition is diminished.
  • Additionally, Nvidia is perceived to be the opposite of ARM in terms of openness. That should add to the emotion of discussions with the FTC.

3. If the transaction goes through, how does the semiconductor industry change in the future?

  • Unless Nvidia successfully develops an ARM-based, general-purpose, Intel-compatible server, the answer is “not that much”.
  • ARM’s growth has been somewhat stunted since the Softbank acquisition and Nvidia probably can’t do much to restart the growth that ARM once had.
  • In addition, key SoC customers like Qualcomm, Samsung, MediaTek, et al may be increasingly skeptical of new ARM based designs and will try to find ways to use RISC-V.
  • Meanwhile, China’s path forward to ultimately increasing RISC-V penetration is unchanged and will be part of the national strategy for semiconductor independence.
  • Nvidia adds $2B of revenue (with no COGS but lots of R&D expense) and gains visibility as another potential challenger for the title of leading non-memory semiconductor company in terms of revenue.

Bottom line: Given the pace of semiconductor acquisitions and the race for HPC supremacy I expect to see many more transactions in the near future but probably not as disruptive as this one. Nvidia is an AI pioneer (chips and ecosystem) and an active player in the HPC market. Arm dominates edge computing and has mastered the customer centric business model that will grandfather Nvidia AI into thousands of companies around the world.

Saying this acquisition has a 1+1=3 value proposition is an understatement, absolutely.


Mentor User2User Virtual Event 2020!
by Daniel Nenni on 10-30-2020 at 6:00 am


Now that we have gone virtual, life has never been easier, for me anyway. There are literally events every day beamed into my living room. The question is which should I attend? The answer is I should attend the ones with the most customer-based content, which is what User2User is all about. I will miss attending this one live as it was in my backyard at one of my favorite locations and included some great food and camaraderie.

You for sure don’t want to miss industry legend Malcolm Penn’s talk on “What’s in Store for the Chip Industry in 2021?”. Malcolm and I do not always see eye to eye, so this should be a spirited one for sure. I’m in the trenches and he is in the officers’ club, so to speak.

Take a look at the agenda, put it on your calendar, and I will see you there. Work from home casual attire is recommended and since this is an international event cocktail hour is any hour so stock up the minibar and let’s enjoy the virtual day, absolutely.

User2User North America Info Sheet

Join the Mentor user community virtually at User2User for North America (November 10) and Europe (December 1).

For over 30 years, User2User has provided a forum for sharing best practices and discovering new techniques for tackling EDA’s biggest design challenges. This one-day virtual conference not only includes innovative keynotes from industry leaders, but a host of technical sessions as well as a chance to network with colleagues and industry peers. Thousands of engineers attend U2U events around the globe to hear how their peers are achieving design excellence with Mentor tools and to gain insight on the latest Mentor products.

The technical content at U2U is driven by Mentor tool users. Listen to your peers share their experiences and the solutions they employ when designing, developing, and deploying high-quality products across these domains:

  • IC Design, Physical Verification, Circuit Verification and DFM
  • Analog/Mixed-Signal Verification
  • Functional Verification
  • HW-Assisted Verification and Validation
  • Design-for-Test, Bring-up and Yield
  • High-Density Advanced Packaging Design and Verification
  • Custom/Analog Design, MEMS and Silicon Photonics
  • High-Level Synthesis and RTL Power Estimation/Optimization
  • PCB System Design

Who Should Attend

Electronic design engineers and their managers interested in exploring innovative products and solutions that help solve design challenges in the increasingly complex worlds of board and chip design.

What to Expect

  • Fascinating keynote presentations from EDA industry leaders
  • Leading-edge technical presentations from your peers and colleagues
  • The U2U Exchange, a product expo and hub for face-to-face interaction with Mentor experts
  • Partner showcases of innovative solutions built with Mentor tools
  • Networking opportunities throughout the day with ample time to connect

Agenda Highlights

“Variation-Aware Design Verification of Standard Cells using Solido Variation Designer” – Rajnish Garg, STMicroelectronics

“Characterization of a Fuse-Array using Tessent SiliconInsight” – Hanene Jammoussi, Intel

“Winning the Race to UVM and RTL Debug” – Jagannath Panduranga Rao, Microsoft

“GLOBALFOUNDRIES and Mentor Partnering to Provide HLS Solutions for AI & Edge Applications” – Pratik Rajput, GLOBALFOUNDRIES

“Adopting Best Practices in Printed Circuit Design Layout” – Stephen Chavez, Collins Aerospace

“What’s in Store for the Chip Industry in 2021?” – Malcolm Penn, Future Horizons

Also Read:

ASIC and FPGA Design and Verification Trends 2020

Siemens is the True Catalyst for Secure and Trusted Digital Transformation

Arm Design Reviews add Mentor for Verification Review