
The SiFive Tech Symposiums are Heading To Portland and Seattle Next Week!

by Swamy Irrinki on 10-17-2019 at 2:00 pm

We’re confirming seats in Portland and Seattle for the Pacific Northwest leg of our worldwide 2019 SiFive Tech Symposiums. We are pleased to have Mentor, A Siemens Business as our co-host, and Lauterbach, a leader in microprocessor development tools, as our partner in both cities. The Portland symposium will take place on Tuesday, October 22 at Portland Community College, and the Seattle symposium on Wednesday, October 23 at thinkspace Seattle. The SiFive Tech Symposiums have been instrumental in engaging the hardware community in the RISC-V ecosystem and in spearheading the emergence of new applications. We are constantly in awe of the brilliant minds that convene at these events, and we thrive on watching intense conversations and the sharing of ideas between those already entrenched in RISC-V and others who are simply exploring design alternatives.

The symposiums in Portland and Seattle will both feature presentations by the RISC-V Foundation, SiFive, Mentor and Lauterbach, as well as other ecosystem partners and academic luminaries. There will also be tutorials, demos and presentations on RISC-V development tools, platforms, core IP and SoC IP. As always, we have arranged plenty of time for networking.

Attendance is free, but registration is required!

  • To view the agenda, and to confirm your seat in Portland, please click here.
  • To view the agenda, and to confirm your seat in Seattle, please click here.

We look forward to seeing you!

About SiFive
SiFive is the leading provider of market-ready processor core IP, development tools and silicon solutions based on the free and open RISC-V instruction set architecture. Led by a team of seasoned silicon executives and the RISC-V inventors, SiFive helps SoC designers reduce time-to-market and realize cost savings with customized, open-architecture processor cores, and democratizes access to optimized silicon by enabling system designers in all market verticals to build customized RISC-V based semiconductors. With 14 offices worldwide, SiFive has backing from Sutter Hill Ventures, Spark Capital, Osage University Partners, Chengwei, Huami, SK Hynix, Intel Capital, and Western Digital. For more information, visit www.sifive.com.

About the RISC-V Foundation
RISC-V (pronounced “risk-five”) is a free and open ISA enabling a new era of processor innovation through open standard collaboration. Founded in 2015, the RISC-V Foundation comprises more than 325 members building the first open, collaborative community of software and hardware innovators powering innovation at the edge. Born in academia and research, the RISC-V ISA delivers a new level of free, extensible software and hardware freedom in architecture, paving the way for the next 50 years of computing design and innovation.

The RISC-V Foundation, a non-profit corporation controlled by its members, directs the future development and drives the adoption of the RISC-V ISA. Members of the RISC-V Foundation have access to and participate in the development of the RISC-V ISA specifications and related HW / SW ecosystem. The Foundation has a Board of Directors comprising seven representatives from Bluespec, Inc.; Google; Microsemi; NVIDIA; NXP; University of California, Berkeley; and Western Digital.

In November 2018, the RISC-V Foundation announced a joint collaboration with the Linux Foundation. As part of this collaboration, the Linux Foundation will provide an influx of resources for the RISC-V ecosystem, including training programs, infrastructure tools, community outreach, marketing and legal expertise.

Each year, the RISC-V Foundation hosts global events to bring the expansive ecosystem together to discuss current and prospective RISC-V projects and implementations, as well as collectively drive the future evolution of the instruction set architecture (ISA) forward. Event sessions feature leading technology companies and research institutions discussing the RISC-V architecture, commercial and open-source implementations, software and silicon, vectors and security, applications and accelerators, simulation infrastructure and much more. Learn more by visiting the Event Proceedings page.


eSilicon White Paper on Chiplets – Good Read

by Randy Smith on 10-17-2019 at 10:00 am

eSilicon recently released a paper detailing its experiences and its thoughts on the future of chiplets. The author of the white paper is Dr. Carlos Macián. I have also covered a presentation given by Carlos recently at the AI Hardware Summit, and he is well-spoken and quite knowledgeable. To get the white paper, go to the white paper page on the eSilicon website, where you can access the many white papers they have developed.

Chiplets are more than an interesting concept. Many large companies and start-ups are investing in this approach, and even the US government, in the form of the Defense Advanced Research Projects Agency (DARPA), is trying to develop a useful methodology for it. So, what is a chiplet?

When you design using chiplets, you put multiple dies into the same package. This by itself is not a new technique; we have had multichip modules (MCMs) for quite some time. But MCM designs were usually reserved for high-end, somewhat expensive products. Today’s chiplet market is not about stacking a big memory die over a big processor die; it is about standardizing the connection method used to place multiple dies on a substrate to build a complete system. The problem is that there is no standard specification for chiplets.

One of the benefits of a chiplet approach is that you can develop each chiplet in a different technology node, and of course the chiplets can come from different manufacturers. This would seem to be more efficient than putting everything in one very fast, expensive process when not all of the design needs to be implemented at that expensive node. Analog or RF portions of the design may well be best suited to 28nm or older nodes, and slower portions may be just fine at 90nm. But these chiplets have to “plug in” to the substrate used to connect them, and to make an effective market out of this, you need a standardized “socket.”
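
As a toy illustration of the economics, consider assigning each block to the node it is best suited to rather than fabricating everything at the leading edge. All costs, areas and node choices below are invented numbers for the sketch, not real foundry data:

```python
# Hypothetical relative silicon cost per mm^2 at each node (invented numbers)
node_cost = {"7nm": 10.0, "28nm": 2.5, "90nm": 1.0}

# Each block: (area in mm^2, node best suited to it) -- also invented
blocks = {
    "cpu":    (20, "7nm"),    # only the fast logic needs the leading-edge node
    "rf":     (15, "28nm"),   # RF is often better suited to an older node
    "analog": (10, "90nm"),   # slow analog is fine at 90nm
}

# Monolithic SoC: every block pays the leading-edge cost
monolithic = sum(area for area, _ in blocks.values()) * node_cost["7nm"]

# Chiplets: each block is fabricated at its own node
chiplets = sum(area * node_cost[node] for area, node in blocks.values())

print(f"all-in-7nm silicon cost: {monolithic:.1f}")   # 450.0
print(f"mixed-node chiplet cost: {chiplets:.1f}")     # 247.5
```

Packaging and interconnect costs would of course eat into that difference, which is exactly why a standardized, inexpensive “socket” matters.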

The appropriate socket will depend on the target market. Low-end applications could plug in multiple sensors, small processors, small radios, and a bit of memory to make IoT devices; these could be done using a BGA model, though the pitch and electrical interfaces would need some standardization. There are already companies, such as zGlue, trying to build a design environment around this approach. For higher-end applications you could use faster chip-to-chip interfaces such as those from NVIDIA, Intel, or eSilicon, with 2.5D/3D interconnects and other approaches to get memory closer to the processors, creating a huge benefit. This approach to chiplets is a good method for some designs, but if you want to be a chiplet provider, how do you standardize your chiplet products across the different vendor technologies? Then there is DARPA, which might be able to build a solution that is not as dependent on what is best from a cost perspective. You can read more about DARPA’s Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program here.

I think Carlos’ white paper provides the answer applicable to the sweet spot of this market. I don’t expect an off-the-shelf market to develop anytime soon for high-end applications. But data center, machine learning, domain-specific processors, and other high-performance, high-efficiency solutions are needed right now. Fortunately, this technology is also available now. Grab the eSilicon white paper here.


Virtualizing 5G Infrastructure Verification

by Bernard Murphy on 10-17-2019 at 5:00 am

5G backhaul, midhaul, fronthaul

Mentor has pushed the advantages of virtualized verification in a number of domains, initially in verifying advanced networking devices supporting multiple protocols and software-defined networking (SDN), and more recently for SSD controllers, particularly in large storage systems for data centers. There are two important components to this testing. The first is that simulation is clearly impractical; testing has to run on emulators simply because designs and test volumes are so large. The second is that the range of potential testing loads is far too varied to consider connecting the emulator test platform to real hardware, the common in-circuit emulation (ICE) model in such cases. The “test jig” representing this very wide range must be virtualized for pre-silicon (and maybe even some post-silicon) validation.

Jean-Marie Brunet, Dir. Marketing for Emulation at Mentor, has now released another white paper following this theme for 5G, particularly the radio network infrastructure underlying this technology. This makes for a good yet quick read for anyone new to the domain, in part explaining what makes this technology so complex. In fact, “complex” hardly seems to do justice to this standard. Where in simpler times we became familiar with mobile/edge devices connecting to base stations and from there to the internet/cloud through backhaul, in 5G there are more layers in the radio access network (RAN).

These layers are not only for managing aggregation and distribution. Backhaul to the internet now connects (typically through fiber) to the central unit (CU), which handles baseband processing. The CU then connects to distribution units (DUs), and those connect to remote radio units (RRUs), which may be small cells. The CU-to-DU connection is known as midhaul and the DU-to-RRU connection as fronthaul. (More layers are also possible.) This added complexity allows for greater capacity with appropriate latency in the fronthaul network; for example, ultra-low latencies are only possible if traffic can flow locally without needing to go through the head node.
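
To see why the layering matters for latency, here is a toy Python model of the path a packet takes through these tiers; the per-hop latencies and the two scenarios are illustrative guesses, not numbers from the 5G standard:

```python
# Illustrative one-way latency per link type, in milliseconds (invented numbers)
hop_latency_ms = {"fronthaul": 0.1, "midhaul": 1.0, "backhaul": 5.0}

def round_trip(hops):
    """Round-trip latency over a list of link types."""
    return 2 * sum(hop_latency_ms[h] for h in hops)

# Edge device -> RRU -> DU, handled locally at the DU (fog/MEC style):
local = round_trip(["fronthaul"])

# Edge device all the way to the cloud and back:
to_cloud = round_trip(["fronthaul", "midhaul", "backhaul"])

print(f"local (fog) round trip: {local:.1f} ms")
print(f"cloud round trip:       {to_cloud:.1f} ms")
```

With these made-up numbers the locally handled request completes in 0.2 ms versus 12.2 ms for the cloud round trip, which is the whole argument for keeping latency-critical traffic out of the head node.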

With this level of layering in the network it shouldn’t be surprising that operators want software-defined networking, in this domain applied to something called network slicing to offer different tiers of service. It also shouldn’t be surprising to learn that more compute functionality is moving into these nodes, known here as Multi-Access Edge Computing (MEC), more colloquially as fog computing. If you don’t want to take the latency hit of going back to the cloud for everything, you need this local compute. And I’m guessing the operators like it because this can be another chargeable service.

Then there’s the complexity of radio communication in ultra-high-density edge-node environments. This requires support for massive MIMO (multi-input-multi-output), where DUs and possibly the edge nodes themselves sport multiple antennae. The point is to let communication adaptively optimize, through beamforming, to the highest available link quality. Some indications point to that link adaptation moving under machine-learning (ML) control, since the look-up-table approaches used in LTE are becoming too complex to support in 5G.

ML/AI will also play a role in adaptive network slicing, judging by publications from a number of equipment makers (Ericsson, Nokia, Huawei et al). This obviously has to be robust around point failures in the RAN, and it also needs to be able to adapt to ensure continued quality of service in guaranteed latency provisions. I also don’t doubt that if ML capability is needed anyway in these RAN nodes, operators will be tempted to add access to that ML as an additional service to users (perhaps for more extensive natural language processing for example).

So – multi-tiered RANs, software-defined virtual networks through these RANs, local compute within the network, massive MIMO with beamforming and intelligently adapted link quality, machine learning for this and other applications – that’s a lot of functionality to validate in highly complex networks in which many equipment providers and operators must play.

To ensure this doesn’t all turn into a big mess, operators already require that equipment be proven out in compliance-testing labs to be allowed to sit within and connect to sub-5G networks. This concept will no doubt continue for 5G, now with all of these new requirements added. Unit validation against artificially constructed tests and hardware rigs is necessary but far from sufficient to ensure a reasonable likelihood of success in that testing. I don’t see how else you could get there without virtualized network testing against models running on an emulator.

You can read the Mentor white-paper HERE.


Optimizing High Performance Packages calls for Multidisciplinary 3D Modeling

by Tom Simon on 10-16-2019 at 10:00 am

For all the time we spend thinking and talking about silicon design, it’s easy to forget just how important package design is. Semiconductor packages have evolved over the years from very basic containers for ICs into very specialized and highly engineered elements of finished electronic systems. They play an important role in every aspect of chip operation. New packaging technologies, such as 3D IC and SiP, have made packages integral to chip operation. Package design and analysis is becoming more critical because packages can strongly influence cost, reliability, performance, area and a host of other characteristics.

The list of “care abouts” for package design has become pretty long and without a doubt calls for a multidisciplinary approach. Packages are essentially the cocoon that protects the IC die from the effects of its environment; outside the package there can be threats from moisture, physical shock and vibration. The package also plays a critical role in conducting thermal energy away from the IC. Due to expansion and contraction of materials with different CTEs (coefficients of thermal expansion), thermal stress arises at material interfaces in the package. Ultimately, this stress can cause fractures and cracking, leading to failures. Packages also play a significant role in signal and power integrity, which is extremely important for high-speed RF and digital applications.

Because of the wide range of factors and issues involved in package design, a comprehensive approach is called for. Dassault Systèmes offers an in-depth solution for every aspect of package design. The 3DEXPERIENCE platform allows designers to look at electromagnetic, thermal, and mechanical design considerations using advanced 3D simulators and solvers. With Knowledge Based Modeling, design changes can be quickly updated and analyzed. 3DEXPERIENCE offers the tools and infrastructure to deliver rapid design updates to all stakeholders, accelerating the design process.

The solvers in the CST Studio Suite can be used for a wide range of electromagnetic, thermal and mechanical simulations. Applying them to package design and analysis allows designers to fully understand each of the multidisciplinary aspects of the package design. The integrated environment supports a design-of-experiments (DOE) approach to specifying and verifying the package design to fully understand the performance tradeoffs.

Dassault Systèmes has also thought a lot about the user experience for engineers using the 3DEXPERIENCE platform. They offer advanced HPC capabilities and Cloud computing services for faster throughput and reduced simulation costs. Dassault Systèmes has built lightweight visualization technologies for viewing and sharing 3D models and simulation results in web-based apps.

Packaging can be make-or-break for many semiconductor products and systems. This is especially true when looking at product lifecycle management and reliability. It is one thing to design something that works when brand new, but over time residual stresses from operation and environmental impact can lead to reduced fatigue life or even failure. In applications such as automotive, the expected lifetime of a product in the field extends out decades, way beyond the expected lifetime of many consumer gadgets. The economic or even human cost of a failure can also be incredibly high.

The Dassault Systèmes website has detailed information on their solutions for advanced electronics packaging. For more information click HERE.

Also Read

A Brief History of IP Management

Delivering Innovation for Regulated Markets

Webinar: Next Generation Design Data & Release Management


Automating Timing Arc Prediction for AMS IP using ML

by Daniel Payne on 10-16-2019 at 6:00 am

Empyrean, Qualib-AI flow

NVIDIA designs some of the most complex chips for GPU and AI applications these days, with SoCs exceeding 21 billion transistors. They certainly know how to push the limits of all EDA tools, and they have a strong motivation to automate more manual tasks in order to quicken their time to market. I missed their Designer/IP Track poster session at DAC titled Machine Learning based Timing Arc Prediction for AMS Design, but the good news is that I did attend a webinar from Empyrean covering the same topic. Anjui Shey from Empyrean was the presenter, and he talked about Qualib-AI, an EDA tool with AI-powered timing arc prediction for AMS IP blocks.

First off, let’s look at some of the AMS IP modeling challenges:

  • Complex IP creates higher design risks
  • Difficulty modeling AMS IP
  • A missed timing arc can lead to chip failure
  • Toggling within an IP changes thermal condition and timing
  • Process variation creates a large number of PVT corners

Empyrean updated their Qualib library analysis and validation tool to use ML, creating the Qualib-AI tool as shown below:

A timing arc defines a timing dependence between two pins of an IP block. CAD groups are tasked with adding timing arc information for each IP block, and for AMS designs this has been a time-consuming, error-prone manual effort.

There are three flows in this methodology for timing arc prediction: initial training, prediction and incremental training.

 

The training data comes from previous AMS cell libraries. Benefits to using ML for predicting timing arcs include:

  • Higher coverage of predicted timing arcs
  • Better accuracy for timing type predictions
  • Fewer false positives and negatives, saving time
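
Empyrean has not published Qualib-AI’s features or model, but conceptually arc prediction is a binary classification over candidate pin pairs. Here is a minimal sketch with invented features (distance score, shared-clock flag, name similarity) and a simple k-nearest-neighbour vote standing in for the real ML model:

```python
import math

# Each candidate pin pair: (feature vector, is_arc label) -- invented data
training = [
    ((0.1, 1.0, 0.9), 1),   # CLK -> Q style pair: a real arc
    ((0.2, 1.0, 0.8), 1),
    ((0.9, 0.0, 0.1), 0),   # unrelated pins: not an arc
    ((0.8, 0.0, 0.2), 0),
]

def predict(features, k=3):
    """Majority vote over the k nearest training pairs (Euclidean distance)."""
    nearest = sorted(training, key=lambda t: math.dist(t[0], features))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > len(nearest) else 0

print(predict((0.15, 1.0, 0.85)))  # 1: likely a timing arc
print(predict((0.85, 0.0, 0.15)))  # 0: likely a non-arc
```

The real flow trains on previously characterized libraries and predicts on new blocks, but the shape of the problem, features per pin pair in and an arc/non-arc label out, is the same.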

NVIDIA ran initial training on 411 IP blocks and then used prediction on a set of 30 new IP blocks as shown below, automatically finding 6,774 arcs and identifying 15,124,093 non-arcs:

The CPU runtimes for initial and incremental training plus prediction were fast at about one hour, far shorter than if the engineers had characterized the timing arcs manually:

With an incremental training flow there were three improvements in predicting the timing arcs:

  • Number of false positives decreased 9x, number of false negatives decreased 40x
  • Reached 99.6% coverage to predict timing arc with timing type
  • Reached 98.96% accuracy to predict the timing type

After incremental training there were fewer than 100 false positives for the engineers to review, which is a manageable amount compared to previous efforts.
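
A quick back-of-the-envelope check on those numbers (the exact false-positive count is not given beyond “fewer than 100”, so this only bounds the precision):

```python
predicted_arcs = 6_774                 # arcs found on the 30 new IP blocks
screened_pairs = 6_774 + 15_124_093    # candidate pin pairs classified automatically
fp_bound = 100                         # "fewer than 100" false positives to review

# Worst case: all 100 suspect pairs sit inside the predicted-arc set
precision_lower_bound = (predicted_arcs - fp_bound) / predicted_arcs

# Review burden: engineers inspect ~100 pairs instead of millions
review_reduction = screened_pairs // fp_bound

print(f"precision is at least {precision_lower_bound:.1%}")
print(f"manual review reduced by a factor of ~{review_reduction:,}")
```

Even at the bound, precision stays above 98.5%, consistent with the 98.96% timing-type accuracy quoted above.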

Summary

Engineers creating AMS IP blocks can now automate how timing arcs are created, saving them time and reducing the risk of a silicon re-spin. Libraries of AMS blocks can be run through this flow to uncover missing arcs, thus improving the quality of the library. The number of false positives has been decreased greatly, and run times for this approach are acceptable.

Watch the archived webinar video here.



Cadence and Green Hills Share More Security Thoughts at ARM Techcon

by Randy Smith on 10-15-2019 at 10:00 am

On Wednesday, October 9, 2019, I had the pleasure of spending the day at ARM Techcon at the San Jose Convention Center. In the morning, in addition to getting some sneak peeks into the exhibitor area, I attended some of the morning keynote presentations, which focused on artificial intelligence (AI) and machine learning (ML) topics. Those were great presentations and a special thanks to the ARM marketing team for the high-end production value. Next, I was off to see some of the technical sessions, giving me a chance to get back to my EDA and verification roots.

I attended a joint presentation by Frank Schirrmeister of Cadence and Joe Fabbre of Green Hills titled Pre-silicon Continuum for Concurrent SoC, Software Development and Verification for Safety and Security Critical Systems. At the Cadence Automotive Summit this past July 30th, I also saw a presentation from Dan Mender, Green Hills Software’s VP of Business Development, called Addressing the State of Safety and Security in Today’s Autonomous Vehicles System Designs. These two presentations have demonstrated to me just how serious these two companies are about working together to solve the critical requirement of system security. You can see my blogs from the Cadence Automotive Summit here and here.

One common message from both presentations is the simple theme that you cannot have safety if you do not also have security. The point is that no matter how much you work on safety, a system that is vulnerable to a malicious attack will not remain safe. This concept applies to all types of systems, not just automotive. Working with Cadence and Green Hills allows a system architect to utilize security measures in both the hardware and software design. One nugget I saw in the presentation was the theme shared by both companies to simplify architectures, specifically, “Separation of critical components with an emphasis on simplicity for critical components is key.”

As we all know, the earlier you find design flaws, security gaps, and other bugs, the less expensive they are to fix. This reality increases the importance of two concepts: hardware/software co-verification and virtual platforms. These are areas where Cadence has succeeded and has a thorough product portfolio to support its customers.

As the diagram above shows, several different techniques can be applied to co-verification as we move through the different design stages. You will find the largest number of bugs at the beginning of the design process, and the rate of finding bugs should decrease over time. But these later bugs are still important as they may not show up until the design process reaches its more refined stages. Test coverage and test suites will also get more thorough in time, and you will want to use these tests at the most refined version of your system available at that time. The full range of Cadence verification products can solve that for you.

The presentation also reviewed some of the technology integration points as they are developed and optimized through the Cadence collaboration with Green Hills. For instance, Integrity, the Green Hills hypervisor and RTOS technology – focused on safety and security – can run on the Cadence dynamic verification engines. The software is compiled using safety-aware, certified compilers, and it can be debugged using the Green Hills Multi IDE, which connects via standard interfaces like JTAG. As an example integration, a virtual platform using Arm Fast Models was shown booting Linux and being debugged with the Multi IDE.

The earlier you can do software development, the more time you will have to find and fix bugs – and security flaws are just that, bugs! Being able to run software testing on virtual platforms rather than waiting for functional silicon is a huge benefit. You will only be able to find some security flaws when running software on a model of the hardware or the hardware itself. Cadence’s Virtual System Platform enables you to start testing your software long before RTL or prototypes of the hardware are available. The virtual system platform can be combined with other parts of the Cadence System Development Suite, such as Cadence’s emulation and prototyping products, to give you a fast, reliable environment for doing early software development and validation.

This session was one of several very useful presentations at ARM Techcon. If you missed it this year, make sure to put it on your schedule for next year. If you sign up to be notified when ARM TechCon 2020 registration opens, ARM will give you a $100 discount on the regular price of an All-Access conference pass for the 2020 event.


Formal in the Field: Users are Getting More Sophisticated

by Bernard Murphy on 10-15-2019 at 5:00 am

Formal SIG 2019 meeting at Synopsys

Building on an old chestnut, if sufficiently advanced technology looks like magic, there are a number of technology users who are increasingly looking like magicians. Of course when it comes to formal, neither is magical, just very clever. The technology continues to advance and so do the users in their application of those methods. Synopsys recently hosted an all-day special interest group event on formal in Sunnyvale, including talks from Marvell and others, with a keynote given by my fellow author Manish Pandey together with Pratik Mahajan. A number of points captured my attention.

Regression with ML

Nvidia talked about regressing formal runs. This started with an observation that complexity is growing in many directions, one of which is arbiters, FIFOs and state machines all talking to each other. Proving you have covered all the bases quickly runs out of gas in dynamic verification of possible interactions between these systems. In fact, even trying to do this through bottom-up property checking is dodgy; who knows what interactions between subsystems you might miss? So they chose to go with end-to-end (E2E) property checks to comprehensively cover all (necessary) systems in proving.

The problem with that idea is proof convergence. Taken together these are very big state machines. Nvidia turned to the standard next step: break each property down into sub-properties (with assume-guarantee strategies, for example). The sub-properties are easier to prove in reasonable time, but each requires its own setup and proving, and these E2E goals generate so many sub-properties that resource competition with other forms of verification becomes a problem.

Their next refinement was to apply the ML capabilities (RMA) available with VC Formal, both within the tools and in learning between tool runs, to accelerate runs and reduce resource requirements. They do this initially in interactive convergence and subsequently in regression runs. In both cases the advantages are significant: order-of-magnitude improvements in net run-times. Clearly worthwhile.

Proof Using Symbolic Variables

Microsoft talked about validating a highly configurable interrupt controller IP more efficiently. Their approach was based on connectivity checks for each configuration; they found that in a previous rev this expanded to tens of thousands of assertions and took 2 days to completely validate. In a newer and naturally more complex rev of the IP this grew to 2.5M assertions, the complete proof wouldn’t converge, and initially they were forced to reduce the scope of proving to a sample set, not exciting when the goal had been to demonstrate complete proofs.

Then they got clever, looking at the symmetries in the problem and using symbolic variables for key values in the design, each constrained to lie within its allowable range. This isn’t entry-level formal (you have to think about what you’re doing), but it is very powerful. Proof engines will prove over an assumption that such variables can take any allowed value within their constrained ranges, so the proof is complete but can operate on an effectively smaller problem. That allows for much faster run-times. The large example (which wouldn’t complete before) now ran in 24 hours. Out of curiosity they re-ran the smaller example that previously took 2 days; it now ran in 15 minutes. As everywhere in verification, clever construction of a testbench can make a huge difference in runtimes and in coverage.
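
The symmetry idea can be shown in plain Python (the real flow uses symbolic variables inside VC Formal, not Python). For a hypothetical interrupt router that maps source s to target (s + offset) mod N, every configuration is just a rotation of the offset-0 configuration, so one representative proof covers them all:

```python
N = 8  # number of interrupt sources/targets in this toy design

def route(source, offset, n=N):
    """Toy routing function: target = (source + offset) mod n."""
    return (source + offset) % n

def routes_are_a_permutation(offset):
    """The connectivity property for one configuration:
    every source reaches a distinct target."""
    return {route(s, offset) for s in range(N)} == set(range(N))

# Brute force: check every configuration separately (N proofs)
brute_force = all(routes_are_a_permutation(off) for off in range(N))

# Symmetry argument: route(s, off) is a rotation of route(s, 0),
# so proving the property for one representative offset suffices
one_representative = routes_are_a_permutation(0)

print(brute_force, one_representative)  # True True
```

In the Microsoft case the payoff of this kind of reduction was dramatic: one 24-hour proof in place of 2.5M assertions that would not converge at all.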

Sequential equivalence checking

This (SEQ) is the standard way to verify clock gating; however, the Samsung speaker talked about a number of applications beyond that scope: validating RTL de-featuring (where you turn off ifdef-ed functionality), sequential optimizations (e.g. moving logic across flop boundaries to improve timing), shortening pipeline stages, and power optimizations – all verification tasks you can’t pull off using conventional equivalence checking. Given the extended nature of such checks, they can be a little more involved than conventional checking. He talked about multiple capabilities in VC Formal they are able to use to aid convergence of proofs: orchestration, CEGAR, memory and operator abstraction, and specialized engines. Overall this enabled them to find 40+ RTL bugs in 5 months. He added that more than 40% of the bugs were found by new hires and interns, highlighting that this stuff is not only for advanced users.

Datapath Validation

Lastly, Synopsys has now folded its HECTOR datapath checker into the VC Formal Datapath Validation (DPV) app. This employs transaction-level equivalence to validate a C-level model against the RTL implementation. Datapath elements were for a long time one of those areas where you couldn’t use formal, so this is an important advance. Marvell talked about using this capability to verify a number of floating-point arithmetic function blocks. In standard approaches, whether simulation or “conventional” formal, the number of possibilities that must be covered grows exponentially and is effectively unreachable.

The Datapath Validation app works with the Berkeley SoftFloat models, widely recognized as a solid reference for floating-point arithmetic. This team used those models in DPV equivalence checking against their RTL implementations and found a number of bugs, some in the RTL and some in the C++ models; they subsequently found no bugs in simulation and emulation. This suggests to me that this type of verification is going to see a lot of growth.

 

Interested users can request the material from the VC Formal SIG through their Synopsys contacts. You can check out VC Formal HERE.


Free webinar – Accelerating data processing with FPGA fabrics and NoCs

by Tom Simon on 10-14-2019 at 10:00 am

FPGAs have always been a great way to add performance to a system. They are capable of parallel processing and have the added bonus of reprogrammability. Achronix has helped boost their utility by offering embedded FPGA fabric for integration into SoCs, which boosts data rates through these systems by eliminating data movement through IOs and off-chip connections.

Achronix is now smashing down the next barrier to SoC performance with the addition of a Network on Chip (NoC) that works in conjunction with the FPGA fabric and all the other elements of the SoC. To help explain how this works, Achronix is offering a free webinar on how data processing algorithms can be accelerated by combining embedded FPGAs and NoCs.

The webinar presenter will be Kent Orthner, Senior Director of Systems at Achronix. He has had a long and varied career in both FPGA and NoC technologies, and at Achronix he is a key contributor to leading-edge FPGA architecture and SoC integration. With his level of expertise, this promises to be extremely informative.

NoCs offer many advantages for FPGA-based SoC design. Using Achronix’s novel approach, NoC access points can be placed throughout the FPGA fabric to facilitate high-speed data transfers within the SoC and to the outside world. Off-chip memory can be accessed efficiently, and PCIe ports can be utilized in the same way. The NoC pipes, offering 512 Gbps, are located as needed in the FPGA processing array. Additionally, there are specialized data transfer modes for 400G Ethernet ports.

Combined SoCs that utilize Achronix NoC and embedded FPGA processing arrays should offer formidable performance. This webinar looks like it could be useful for engineering management, SoC architects and system designers. The webinar, “Accelerate Data Processing Algorithms using FPGAs with 2D Network-on-Chip”, will be offered on October 24th at 10AM Pacific Time.

More information and the registration page can be found here. I am a frequent attendee of webinars because they offer a painless and quick way to keep up on the latest trends in semiconductor technology. I am definitely looking forward to this one. Achronix has consistently been an innovator and looks to be continuing that trend.


Response to IP’s Growing Impact On Yield And Reliability

by Daniel Nenni on 10-14-2019 at 6:00 am

One of the reasons I founded SemiWiki nine years ago was the lack of EDA, IP and Foundry content in the media. The problem is that unless you work in the industry it is very difficult to write about it in competent technical detail. Most media outlets only know what vendors tell them which is how the semiconductor industry worked before social media (blogging) came into power.

This is an example, but certainly not a bad one, nothing scandalous here, definitely not DeepChip worthy. This is an email exchange regarding IP quality and the Fractal Crossfire product. I have known the Crossfire co-founders for 20+ years, they are a SemiWiki sponsor, and I help them with relationships in Taiwan, so I know this to be true. The majority of the top semiconductor companies, IP companies, and foundries use Crossfire collaboratively, so this is worth a look, absolutely:

date: Sep 19, 2019, 12:00 AM
subject: Re: IP quality article in Semiconductor Engineering

Recently SemiEngineering published an excellent article discussing the need
for IP quality management because of its increasing importance to realize
design-schedules and failure-free silicon.

Various executives from IP and EDA companies provide their views, allowing us to identify the root causes. It’s no surprise that this results in another instance of “round up the usual suspects”: increasing design complexity, enabled by advanced manufacturing technology that demands more detailed characterization and management of manufacturing variation. Add to that the need for increasing design reliability for the automotive sector (you really care more about the controllers in your self-driving vehicle than about camera management in your cell phone), and it’s obvious that design and IP quality should be addressed from day 1 and throughout the entire design process when starting a new SoC project.

One of the solutions called for is “IP management systems with an eye on quality.” As an addendum to the article, we’d like to point here to the solutions provided by Fractal Technologies. Their Crossfire IP qualification tool and Transport format for IP requirements allow for a clean handshake between IP designers and their customers. In the Transport format, a customer can specify the IP integrity requirements it expects from its IP suppliers. Fractal customers use Crossfire for incoming inspection on the IP releases shipped to them, using these requirements, and only introduce new versions into their design flow if all requirements are met.

A growing trend is to use the Transport IP requirements as a standard to be met by IP suppliers. The SoC design team is thus guaranteed the IP quality they need, as their suppliers now run Crossfire on the designs before shipment and attach the validation report as proof.

With the Crossfire IP qualification tool, Fractal is able to take the IP integrity verification burden away from the design team, thus freeing up resources to verify the actual functionality of the design rather than the correctness of its sub-components.

Fractal on SemiWiki

Fractal Company Page

ABOUT FRACTAL
Fractal Technologies is a privately held company with offices in San Jose, California and Eindhoven, the Netherlands. The company was founded by a small group of highly recognized EDA professionals. Fractal Technologies is dedicated to providing high-quality solutions and support that enable its customers to validate the quality of internal and external IPs and libraries. Thanks to its validation solutions, Fractal Technologies maximizes value for its customers at the sign-off stage, for incoming inspection, or on a daily basis within the design flow. Fractal Technologies’ goal is to become the de facto IP and library validation solutions provider of reference for the semiconductor industry, while staying independent to keep its intrinsic value by delivering comprehensive, easy-to-use, and flexible products.

ABOUT CROSSFIRE
Crossfire checks consistency and validates all data formats used in designs and subsequently improves the Quality of Standard Cell Libraries, IO libraries and general-purpose IP blocks (Digital, Mixed Signal, Analog and Memories). It reports mismatches or modeling errors for Libraries and IP that can seriously delay an IC design project.

Library and IP integrity checking has become a mandatory step for a “state of the art” deep submicron design due to the following challenges:

  • The sheer number of different views
  • The complexity of the views (ECSM, CCS T/N/P)
  • The loss of valuable design time
  • Time to market

Crossfire helps CAD teams and IC designers achieve high quality design data in a short time. Crossfire assures that the information represented across the various views is consistent and does not contain anomalies.
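Crossfire itself spans many formats and far deeper checks; purely as a minimal sketch of the "consistency across views" idea, here is a hypothetical cross-check of cell names between a Liberty-style view and a LEF-style view (the snippets, patterns, and helper names are all invented for illustration):

```python
import re

# Hypothetical fragments of two library views describing the same cells
liberty_view = """
cell (AND2_X1) { }
cell (INV_X1)  { }
cell (NAND2_X1) { }
"""

lef_view = """
MACRO AND2_X1
END AND2_X1
MACRO INV_X1
END INV_X1
"""

def cells_in_liberty(text):
    # Collect cell names from Liberty-style "cell (NAME)" groups
    return set(re.findall(r"cell\s*\(\s*(\w+)\s*\)", text))

def cells_in_lef(text):
    # Collect macro names from LEF-style "MACRO NAME" lines
    return set(re.findall(r"^MACRO\s+(\w+)", text, re.MULTILINE))

def view_mismatches(lib, lef):
    """Cells present in one view but missing from the other."""
    a, b = cells_in_liberty(lib), cells_in_lef(lef)
    return {"missing_in_lef": a - b, "missing_in_liberty": b - a}

report = view_mismatches(liberty_view, lef_view)
# report flags NAND2_X1 as present in Liberty but absent from LEF
```

A production flow would extend the same set-comparison idea to pins, timing arcs, and physical attributes across every view pair.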


Comparing Applied Materials with Lam Research

by Robert Castellano on 10-13-2019 at 8:00 am

Lam Research (NASDAQ:LRCX) will announce its quarterly earnings on October 23, 2019, and Applied Materials (NASDAQ:AMAT) the following month on November 14, 2019. Both companies make equipment used to manufacture semiconductor devices. While private and institutional investors often own both individual stocks, this article presents a comparative analysis of both companies.

Table 1 shows that both companies have comparable profitability, with LRCX reporting better net margins. Since these figures are a percentage of revenue, the focus of this article is on revenues, which are based on the technological prowess of the company. This, in turn, is a function of management's ability to devote R&D to making “best-of-breed” equipment that stands up to the demands of semiconductor manufacturers as they move to next technology nodes.

Applied Materials competes in six major WFE (wafer front end) equipment sectors, while Lam Research competes in three. I’ll discuss these later. Of these sectors, the two companies compete head-to-head in only two – deposition and etch. According to The Information Network’s report “Global Semiconductor Equipment: Markets, Market Shares, Market Forecasts,” the deposition sector comprises several subsectors, including epitaxy, CVD, PVD, and ECD. To complicate matters, the CVD subsector is further divided into PECVD, LPCVD, APCVD, and ALD.

Deposition Sector

Although AMAT and LRCX don’t compete directly in all subsectors, Chart 1 shows market shares for the overall deposition sector from 2007 to 2018, with data obtained from The Information Network’s above-mentioned report. Chart 1 shows that AMAT’s share of the deposition sector decreased from 48% in 2007 to 38% in 2018. The blue trendline clearly shows market share erosion.

Conversely, LRCX’s share has been increasing since 2011, when the company acquired deposition equipment supplier Novellus.

A critical issue with the deposition sector is that it is the largest sector of the WFE market, representing 22% of revenues in 2018. YoY growth of the sector was 7%. The bottom line is that AMAT is losing market share in the largest sector of the market, while LRCX is gaining.

Chart 1

Etch Sector

AMAT and LRCX compete in the etch sector. Chart 2 shows a similar situation to the deposition sector. AMAT’s share dropped from 22% in 2007 to 18% in 2018, while LRCX’s share increased from 42% in 2007 to 47% in 2018. The trendlines clearly show the divergence.

A critical issue with the etch sector is that it is the second-largest sector of the WFE market, representing 21% of revenues in 2018. YoY growth of the sector was 13%. The bottom line is that AMAT is losing market share in the second-largest sector of the market, while LRCX is gaining.

Chart 2

Other Sectors

Table 2 shows AMAT’s share of its SAM (served available market) between 2007 and 2018, excluding the deposition and etch sectors covered in Charts 1 and 2. There are three takeaways from Table 2:

  • AMAT’s share of the CMP sector is 70% and growing, but it essentially competes with only one other company, Japan’s Ebara. The CMP sector represented only 3% of the WFE market in 2018. YoY this sector dropped 0.1%.
  • AMAT’s share of the Implant/Doping sector was 67% in 2018, but it has been decreasing since 2012, when the company acquired implant equipment company VSEA, and along with it, its current CEO, Gary Dickerson. The Implant/Doping sector represented only 3% of the WFE market in 2018, even as the sector’s revenues grew 14% YoY.
  • AMAT’s shares of the RTP (3% of WFE) and Process Control (11% of WFE) sectors decreased between 2016 and 2018.

Table 3 shows LRCX’s share of the cleaning sector, which has been growing consistently from 8% in 2010 to 15% in 2018. This sector represented 7% of the WFE market.

Total WFE Market

Chart 3 shows AMAT’s and LRCX’s revenues as a share of the overall WFE market. AMAT’s share has grown from 17.9% in 2007 to 18.4% in 2018. If we count only the period since the VSEA acquisition debacle, share has grown from 15.1% in 2011 to 18.4% in 2018.

LRCX’s share has more than doubled, growing from 6.0% in 2007 to 15.3% in 2018. If we include only share since the Novellus acquisition, LRCX’s share has grown from 8.9% in 2012 to 15.3% in 2018.
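Those share moves imply very different growth rates for the two companies. A quick sketch, using only the percentages cited in this article, of the growth multiple and implied compound annual rate:

```python
def growth(start_share, end_share, years):
    """Growth multiple and compound annual growth rate of a market-share series."""
    multiple = end_share / start_share
    cagr = multiple ** (1 / years) - 1
    return multiple, cagr

# LRCX WFE share: 6.0% (2007) -> 15.3% (2018), per Chart 3
lrcx_mult, lrcx_cagr = growth(6.0, 15.3, 2018 - 2007)   # ~2.55x, ~8.9%/yr

# AMAT WFE share: 17.9% (2007) -> 18.4% (2018)
amat_mult, amat_cagr = growth(17.9, 18.4, 2018 - 2007)  # ~1.03x, well under 1%/yr
```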

Chart 3