
A Brief History of Perforce
by Daniel Payne on 01-28-2021 at 10:00 am


In July 2020 Perforce acquired Methodics, a provider of IP Lifecycle Management (IPLM) tools, as Daniel Nenni blogged at the time. A lot has happened, though, since Perforce was founded in 1995. Christopher Seiwald, drawing on his background as a software developer, started the company in his Alameda basement and focused it on software configuration management (SCM), naming the first product Perforce.

Christopher Seiwald, founding CEO

The Perforce Helix product was later renamed Helix Core, a software tool used for version control on large projects.

Helix Core workflow

Seiwald wrote Seven Pillars of Pretty Code in 2003, and its principles remain just as relevant today for software developers who want their projects to be effective and understood by other programmers.

O’Reilly published Beautiful Code in 2008, in which Seiwald and Laura Wingerd wrote about software development practices, building on the earlier principles of Seven Pillars of Pretty Code. Wingerd joined the company in 1997 and also authored Practical Perforce in 2005, still available on Amazon.

Practical Perforce, 2005

Summit Partners acquired Perforce in 2016, and founder Seiwald handed the reins to Janet Dryer as the new CEO. That’s also the year the headquarters moved from California to Minneapolis, with some 200 employees reported.

Janet Dryer, CEO, 2016

Acquisitions began under Dryer’s leadership: Hansoft was acquired for its Agile planning tool in 2017, quickly followed by Deveo for repository management services.

The next CEO change came in 2018, when Mark Ties moved up from the COO/CFO role; since he joined Perforce, the company has acquired eight companies and almost doubled sales. Private equity firm Clearlake Capital became the new owner in January 2018, and Perfecto was acquired in October 2018 for its mobile and web automated testing.

Mark Ties, CEO, 2018 – present

Rogue Wave Software was purchased in 2019, adding development tools for the growing HPC segment.

In 2020 both Methodics and TestCraft Technologies (web application testing) were acquired.

Still a private company in 2021 with over 15,000 customers, Perforce now lists 510 employees on LinkedIn, quite rapid growth over the past five years.

Industries Served

In the past 26 years the company has gradually expanded its customer base into the following industries:

  • Aerospace & Defense
  • Automotive
  • Embedded Systems
  • Energy & Utilities
  • Finance
  • Game Development
  • Government
  • Life Sciences
  • Semiconductor
  • Software
  • Virtual Production

The Methodics products serve the Semiconductor industry segment today, with the potential to grow into more industry segments over time. Here’s the list of products and where Methodics fits into the mix.

  • Version Control System
    • Helix Core
    • Methodics
  • Enterprise Agile Planning
    • Hansoft
  • Dev Collaboration
    • Helix TeamHub
    • Helix Swarm
    • Helix4Git
  • Development Lifecycle Management
    • Helix ALM
    • Surround SCM
  • Static Analysis
    • Helix QAC
    • Klocwork
  • Development Tools & Libraries
    • HostAccess
    • HydraExpress
    • PV-WAVE
    • SourcePro
    • Stingray
    • Visualization

Perforce Summary

From the humble beginnings of a lone programmer in Alameda, Perforce now has offices in Minneapolis, Ohio, Alameda, Colorado, Canada, the UK, Australia, Sweden, India, and Estonia. I know from talking with Michael Munsey of Methodics that the future within the company looks exciting as it serves the semiconductor and other markets with a growing family of products aimed at software developers and electronic systems.

I’ll be attending their Virtual DevOps Summit for Embedded Software on February 4th, so why not attend and learn more?

About Perforce

Perforce powers innovation at unrivaled scale. With a portfolio of scalable DevOps solutions, we help modern enterprises overcome complex product development challenges by improving productivity, visibility, and security throughout the product lifecycle. Our portfolio includes solutions for Agile planning & ALM, API management, automated mobile & web testing, embeddable analytics, open source support, repository management, static & dynamic code analysis, version control, and more. With over 15,000 customers, Perforce is trusted by the world’s leading brands to drive their business critical technology development. For more information, visit www.perforce.com.

Also Read:

Conference: Embedded DevOps

Third Generation of IP Lifecycle Management Launched

Perforce Software Acquires Methodics!



Probing UPF Dynamic Objects
by Tom Simon on 01-28-2021 at 6:00 am


UPF was created to go beyond what HDL can do for managing on-chip power. HDLs are agnostic when it comes to supply and ground connections, power domains, level shifters, retention, and the other power management elements of SoCs. UPF fills the breach, allowing designers to specify in detail which parts of the design are connected to which supply and ground lines, and to implement the wide range of additions needed to make on-chip power management work. Thus, HDL and UPF have been designed to work together to allow a complete definition of the design, and implementation and verification tools have evolved to support them together. Yet there has been a hole in probing UPF dynamic objects during simulation.

A white paper from Siemens EDA, formerly Mentor, describes a methodology for solving this problem. The paper, written by Progyna Khondkar and titled “Probing UPF Dynamic Objects: Methodologies to Build Your Custom Low-Power Verification,” was presented at DVCon Europe 2020. The author found that monitoring the state of the various power state transitions was difficult, and without that visibility it is hard to create verification environments that work effectively.

The paper provides an overview of the basic elements of a UPF implementation: a definition of UPF itself, followed by descriptions of the UPF Information Model (UPFIM), the processing stages, and UPF’s bind checker command. There are two APIs for accessing the UPFIM: Tcl and the native SystemVerilog HDL API. Simulation-related controls are enabled during phases 3 to 5: compilation of the HDL code, elaboration and optimization, and execution of the simulation. Because the UPFIM database is created at the end of the optimization step, there are limitations on accessing it for custom verification productivity.

The paper presents an approach that performs UPF processing at the design elaboration and optimization steps so that the necessary data is available through the APIs. The query functions needed for dynamic query are upf_query_object_properties, upf_query_object_pathname, upf_query_object_type, and upf_object_in_class. The proposed use model relies on Tcl API queries to the UPFIM database, used together with a bind checker whose interface uses the corresponding native SystemVerilog HDL representation types.


The author provides code snippets for the SystemVerilog assertion checker, followed by an example of the UPF bind and query functions for a retention checker, and then the transcript of the results with the output of the assertion checks. The simulator used in the example is the Siemens EDA Questa Power-Aware simulator. The author summarizes the new methodology and shows that it provides an effective way to continuously probe UPF dynamic objects.

The Questa Power-Aware simulator has outstanding IEEE 1801 UPF standard support, providing processing capabilities such as: architectural analysis, the latest UPF 3.1 simulation semantics with built-in dynamic PA checkers, extensive reporting for insight into the behavior of the power management system, and advanced power-intent debug. Questa also provides users with automated PA coverage and test plan generation driven from a UPF file.

UPF has already proven itself effective for capturing and implementing complex power management regimes. Many of today’s advanced products would not be feasible to implement without it. Of course, UPF has a long history and has gone through many revisions – each of which has allowed it to become more useful and comprehensive. This paper shows an interesting way to expand the verification capabilities of UPF in simulation flows. The full paper with references can be found here for download.

Also Read:

Calibre DFM Adds Bidirectional DEF Integration

Automotive SoCs Need Reset Domain Crossing Checks

Siemens EDA is Applying Machine Learning to Back-End Wafer Processing Simulation



Register Automation for a DDR PHY Design
by Daniel Nenni on 01-27-2021 at 10:00 am


Several months ago, I interviewed Anupam Bakshi, the CEO and founder of Agnisys. I wanted to learn more about the company, so I listened to a webinar that covered their latest products and how they fit together into an automated flow. I posted my thoughts and then I became curious about their customers, so I asked Anupam to arrange an interview. Following are my notes from a nice talk with Ricky Lau, CTO and one of the founders at The Six Semiconductor.

Who is The Six Semiconductor and what does the company name mean?
We’re an analog mixed-signal integrated circuit IP startup. Our initial focus is on physical (PHY) layer IP for DDR memory applications. Our headquarters are in Toronto, which the rapper Drake nicknamed “The 6” several years ago. A lot of people use the term now, so we chose a name that reflects our location.

What is your typical IP development flow?
We provide optimized circuit design with a full custom design flow to achieve best performance in a minimal footprint. We’re experts in schematic-layout co-design optimization, which is rather a lost art in many modern design flows. This enables us to customize the IP quickly and efficiently to meet the needs of our customers. On the digital side, we follow a standard RTL-based design and verification flow, and that’s my main responsibility.

Are your designs “big A/little D” or “little A/big D”?
Our IP is split about equally between analog and digital, so I guess you could say that we are “medium A/medium D” for the most part.

What are your biggest digital design and verification challenges?
Since our designs must interface with memory devices, we use digital techniques to compensate for non-ideal effects in the system, such as board skews and voltage/temperature variations. Our logic calibrates the system so that the DDR interface can communicate with the memory chips reliably and at the highest frequency possible. Verifying this functionality in simulation is challenging because we have to model the sensing and adjusting circuits and the conditions in the system.

What role do control and status registers (CSRs) play in your designs?
Registers in the digital portion directly control much of the circuitry in the analog portion. For example, timing adjustments are calculated in the digital logic based on the sensor inputs, and the results are written into registers that feed the analog circuits. The calculation algorithms have tuning “knobs” that can be tweaked, and these are also controlled by registers that can be programmed by end-user software. Registers make our IP flexible and customizable, able to be used for multiple target processes with no changes.

Why did you consider a register automation solution?
Even before our first project, we knew that we needed a register-generation flow. Our design has enough registers that we wanted to be able to generate the RTL from a specification rather than coding it all by hand.

How did you end up selecting Agnisys for a solution?
We did consider developing our own register flow, but we are a small company and it didn’t make sense to spend our precious engineering resources unless we had to. I did some investigation and evaluations of commercial solutions, and ended up choosing IDesignSpec (IDS) from Agnisys. We really liked its ability to generate register verification models and documentation in addition to the RTL design.

How do you use IDS in your development process?
We define our registers and fields in spreadsheets and use the standard comma-separated-value (CSV) format to communicate our specification to IDS. Then IDS automatically generates the register RTL, which we verify in simulation along with the rest of our logic. We have a testbench based on the Universal Verification Methodology (UVM), and we include the UVM register models that IDS also automatically generates. Simulation verifies that our register specification is correct and that the rest of our design properly interfaces with the registers. Finally, we use the Word file produced by IDS as the official register documentation provided to our end users.
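The interview doesn’t show the actual IDS flow, but the idea of generating register RTL from a spreadsheet is easy to illustrate. The sketch below is a hypothetical, heavily simplified stand-in: the CSV columns and the generated SystemVerilog declarations are invented for illustration and are not the real IDS input or output formats.

```python
import csv
import io

# Hypothetical CSV register spec, one row per field (NOT the actual IDS format).
SPEC = """register,field,bits,access,reset
CTRL,enable,0,RW,0
CTRL,mode,2:1,RW,0
STATUS,locked,0,RO,0
"""

def generate_regs(csv_text):
    """Emit a trivial SystemVerilog-style declaration list from the CSV spec."""
    regs = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        regs.setdefault(row["register"], []).append(row)
    lines = []
    for reg, fields in regs.items():
        lines.append(f"// {reg} register")
        for f in fields:
            # Multi-bit fields carry a "hi:lo" range; single bits have none.
            rng = f"[{f['bits']}] " if ":" in f["bits"] else ""
            lines.append(f"logic {rng}{reg.lower()}_{f['field']};  // {f['access']}, reset={f['reset']}")
    return "\n".join(lines)

print(generate_regs(SPEC))
```

The point of the real tool is that the RTL, the UVM register model, and the documentation all come from this single spec, so a field change only touches the spreadsheet.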

Can you quantify the value of IDS in your process?
We have used IDS on every project, so I really can’t compare time and resources saved by the automated flow versus a manual process. We estimated that developing our own register flow would have taken at least six engineer-months, plus ongoing maintenance and support. Add to that the time saved by not having to write RTL, UVM models, and documentation, and it’s clear that IDS is a big win for us.

Do you run IDS multiple times on a project?
Yes we do, and that’s a really important point. Our register specification changes constantly during most of our IP project schedule, and we simply re-run IDS to propagate those changes and re-generate the output files. Without IDS, every time that a register or field changes, we would have to hand-edit the RTL, the UVM models, and the documentation. That would take a lot of effort, run the risk of typos and coding errors, and make it hard to keep all the files in sync. I think the biggest value of IDS and register automation is this repeated usage. While I can’t give a precise number, clearly it saves many engineer-weeks of effort across the duration of a project.

How has your experience working with Agnisys been?
Overall, I’d say that I am happy. Just like any piece of software, we have found some bugs and requested some new features in IDS. Agnisys always gets back to us within a day or two, and they have a smooth process to ship us a “hot fix” so we don’t have to wait until the next general release to address our issues.

What’s in your future?
We have new projects underway and will be using IDS on all of them. As our design complexity grows, we’re looking into some of its more advanced features so I expect that we will continue to work closely with the Agnisys team.

Thank you for your time!
You’re welcome, and thanks to you as well.

Also read:

Automatic Generation of SoC Verification Testbench and Tests

Embedded Systems Development Flow

CEO Interview: Anupam Bakshi of Agnisys



Change Management for Functional Safety
by Bernard Murphy on 01-27-2021 at 6:00 am


By now we’re pretty familiar with the requirements ISO 26262 places on development for automotive safety: the processes, procedures, and metrics you must apply to meet the various automotive safety integrity levels (ASILs). You need to train organizations; in fact, you should establish a safety culture across the whole company or line of business to do it right. These days, following ISO 26262 is as much about following the spirit of the standard as the letter. But what do you do about change management for functional safety?

Design under ISO 26262

You need well-established quality management systems such as the Capability Maturity Model Integration (CMMI). These aren’t about software tools, though tools may play a supporting role; they’re about the whole product development process. Then there’s what you do in the product design to isolate areas that may be susceptible to single-point failures, and what you’re going to do to detect and mitigate such failures. You’ll run FMEDA analysis to make sure your safety mitigation techniques will actually deliver, and you’ll document the whole thing in a safety manual and development interface agreement to ensure integrators use your product within agreed bounds.

A process to make changes

Phew. You release and ship the product with ISO 26262 requirements all tied up in a bow. Time to celebrate, right? Well – no. Suppose for the sake of argument that what you released is an IP, perhaps targeted for ASIL-D-compliant systems, the highest level of safety compliance. In the normal course of events after release, customers will report bugs and enhancement requests. Things you need to change or extend in your product. What does ISO 26262 have to say about managing these changes?

Any changes in a well-defined quality management system require use of configuration management. In section 8, the ISO standard is quite succinct about what must be achieved in such a system:

  • Ensure that the work products, and the principles and general conditions of their creation, can be uniquely identified and reproduced in a controlled manner at any time.
  • Ensure that the relations and differences between earlier and current versions can be traced.

Why so brief? Because the automotive industry already has a well-established standard for quality management systems – IATF 16949. No need to reinvent that wheel and indeed there are already linkages between the two standards.

Synopsys application of ISO 26262 and IATF 16949

Synopsys has authored a white paper on how they apply these standards to automotive-grade DesignWare IP development. Under IATF 16949, this starts with an impact analysis for each requested change, assessing not only which product features will be affected but also which stakeholders (I assume this applies through the supply chain). The analysis also looks at which previously made assumptions may need to change and how those changes can ripple through the process.

The analysis then drills deeper to quantify the impact of the change, along with root-cause analysis of what led to the need for the change, dependency considerations, and any impact on assumptions of use (AoU). Building on these considerations, you create a project plan identifying responsibilities for everyone involved in implementation, along with bi-directional traceability requirements to track the changes against the original objectives.

Once impact analysis is completed, then implementation, verification and validation of the plan can start. Again with a lot of process and checkpoint requirements. And finally, there is a confirmation step, per change request tying back each implementation, verification and validation phase to the original request and impact analysis. At which point you can accept, reject or delay the change.

Double phew! We shouldn’t be surprised that this level of effort comes with tracking post-release change requests to an ASIL-D product (for example). Nice job by Synopsys on documenting the detail. You can read the white paper HERE.

Also Read:

What Might the “1nm Node” Look Like?

EDA Tool Support for GAA Process Designs

Synopsys Enhances Chips and Systems with New Silicon Lifecycle Management Platform



System-level Electromagnetic Coupling Analysis is now possible, and necessary
by Tom Dillinger on 01-26-2021 at 10:00 am


With the increasing density of electronics in product enclosures, combined with a broad range of operating frequencies, designers must be cognizant of the issues associated with the radiation and coupling of electromagnetic energy.  The interference between different elements of the design may result in coupling noise-induced failures and/or reduced product reliability due to electrical overstress.

While traditional rules-of-thumb have been very successful in the design of high-speed signals on printed circuit boards – e.g., positioning of ground planes, differential pair impedance matching, route shielding – the complexity of current designs necessitates a much more comprehensive electromagnetic analysis.  It is necessary to incorporate detailed electrical models for passive components, connectors, and (flex) cables, in addition to the (motherboard, daughter, and mezzanine) PCBs, then simulate the electromagnetic response of the system when excited by signal energy of the appropriate bandwidth.

Fortunately, there have been numerous advances over the years in the capabilities to build and simulate full-wave electromagnetic system models.  I recently had the opportunity to review some of these advances with Matt Commens, Principal Product Manager at Ansys, relating to the HFSS toolset.

Introduction

Full-wave computational electromagnetic simulation tools for electronic systems, such as HFSS, attempt to solve Maxwell’s equations for a general 3D environment.  The system is placed in a box that envelops the domain for electromagnetic analysis.  This volume and the electronics within are discretized into a suitable “mesh”.  A large number of (tetrahedral) 3D mesh cells are created, with a denser mesh associated with the detailed, conformal geometries of individual components.

The electric and magnetic fields at the vertices (and the corresponding electric currents across the surfaces) of each mesh cell are represented by a summation of “basis” functions to approximate the solution to the three-dimensional (differential form of) Maxwell’s  equations, at a given frequency of excitation.

A large, but typically very sparse, matrix is generated for the discretized mesh.  The excitation and boundary conditions are specified, and the coefficients of all the basis functions are then solved, providing an excellent approximation to the full system electromagnetic behavior.  Only one matrix solve is needed for all excitations in the system.

Note that this is a fully-coupled electromagnetic analysis, incorporating the material properties and 3D geometry through the discretized volume.
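The “only one matrix solve for all excitations” point follows from how direct solvers work: the expensive step is factoring the matrix, after which each excitation costs only a cheap pair of triangular solves. Here is a toy dense-matrix sketch of that idea (real field solvers use sparse, pivoted factorizations at vastly larger scale):

```python
def lu_decompose(A):
    """Plain LU factorization, no pivoting (fine for this well-conditioned demo)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution (Ly = b), then back substitution (Ux = y)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
L, U = lu_decompose(A)            # the expensive step, done once
excitations = [[1.0, 0.0, 0.0],   # each excitation is just another right-hand side
               [0.0, 0.0, 1.0]]
solutions = [lu_solve(L, U, b) for b in excitations]
```

Reusing one factorization across every port excitation is what makes multi-port S-parameter extraction tractable.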

Why is electromagnetic coupling important?  Consider the simple example illustrated above: three microstrip lines on a board, all the same length but with varying serpentine properties.  Due to the electromagnetic self-coupling between different segments of each line, the frequency response (e.g., the insertion loss) of each varies significantly; a discretization mesh of the lines with meandering segments is needed to accurately calculate the behavior.

Now consider the example below, where the detailed field distribution is infinitely more complex.  Matt provided this electronic system as a representative example of the types of models for which designers are seeking to analyze the electromagnetic behavior.

Matt chuckled, “When I first started working with HFSS over 20 years ago, we were solving systems with maybe 10K to 40K matrix unknowns.  Now, we are routinely solving models with more than 100M matrix elements.  The ongoing advances in electromagnetic analysis have dramatically expanded the types of designs that are able to be simulated.”  Matt elaborated on some of those advances.

Computational Electromagnetics

Several algorithmic enhancements have been incorporated into HFSS, to enable the use of HPC resources.

  • matrix partitioning and solving across distributed systems

Unique domain decomposition algorithms partition the system-level model (without adding simplifying assumptions at domain interfaces).

  • utilization of cloud computing resources, for both the mesh generation and matrix solver
  • efficient frequency sweep analysis, across CPU cores and distributed nodes

The broadband frequency response uses an interpolating sweep; additional sampling points are selected in ranges where the calculated S-parameter response is changing rapidly.

  • sensitivity analysis to variations in model parameters (“analytic derivatives”)

This last feature is worth special mention.  Matt indicated, “HFSS supports virtually disturbing the mesh for variation analysis.  Designers can identify a set of parameters in the system model, and readily see how the electromagnetic analysis results change with manufacturing variations, for a small overhead in simulation time, far more efficient than running full simulations on different parameter samples and unique meshes with small dimensional changes.”  This feature provides great insights into where designers could focus on cost versus manufacturing tolerance tradeoffs.

Ansys has prepared an informative demo of how designers can quickly visualize the response to parameter sensitivity – link.
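The interpolating sweep described above can be sketched as an adaptive sampling loop: evaluate the response at a few coarse points, then add points only where a simple interpolant misses. The code below is a toy illustration with an invented stand-in resonance, not the actual HFSS rational-fit algorithm:

```python
import math

def response(f):
    """Stand-in for a costly full-wave solve at frequency f: a sharp resonance near f = 5."""
    return 1.0 / math.sqrt((1.0 - (f / 5.0) ** 2) ** 2 + (0.05 * f / 5.0) ** 2)

def adaptive_sweep(f_lo, f_hi, tol=0.05, max_points=200):
    """Refine intervals where the midpoint deviates from a linear interpolation."""
    pts = {f_lo: response(f_lo), f_hi: response(f_hi)}
    stack = [(f_lo, f_hi)]
    while stack and len(pts) < max_points:
        a, b = stack.pop()
        mid = 0.5 * (a + b)
        actual = response(mid)
        linear = 0.5 * (pts[a] + pts[b])  # linear-interpolation estimate at the midpoint
        pts[mid] = actual
        if abs(actual - linear) > tol * max(abs(actual), 1e-12):
            stack.append((a, mid))        # poor fit: refine both halves
            stack.append((mid, b))
    return sorted(pts)

freqs = adaptive_sweep(1.0, 10.0)
```

The sample points end up clustered around the resonance, where the response changes rapidly, while the smooth tails are covered by only a handful of solves.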

Algorithmic Enhancements for Mesh Generation

Matt identified three key HFSS enhancements of late related to mesh generation.

Adaptive Meshing

The introductory section above described the importance of the 3D mesh to the resulting accuracy.  An initial mesh is solved for the fields, and a calculation of the electric field gradient indicates where local mesh refinements are appropriate.  (The basis functions for representing the local fields could also be updated.)  A new mesh is solved, and the process iterates until successive passes vary by less than the convergence criteria.

HFSS recently extended this capability to adapt the mesh each iteration using multiple frequency solutions, over a user-specified range, to enhance the results accuracy when a broad range of spectral energy is present.
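The adaptive meshing loop described above (solve, estimate the gradient, refine locally, repeat until converged) can be illustrated in one dimension. The field function and threshold below are invented stand-ins; HFSS works on 3D tetrahedral meshes with far more sophisticated error indicators.

```python
def field(x):
    """Stand-in for the solved field: smooth except for a sharp transition near x = 0.3."""
    return 1.0 / (1.0 + ((x - 0.3) / 0.02) ** 2)

def refine(mesh, threshold=0.2):
    """One adaptive pass: split every cell whose field change across it is large."""
    new_mesh = []
    for a, b in zip(mesh, mesh[1:]):
        new_mesh.append(a)
        if abs(field(b) - field(a)) > threshold:  # crude local gradient indicator
            new_mesh.append(0.5 * (a + b))        # refine this cell only
    new_mesh.append(mesh[-1])
    return new_mesh

mesh = [i / 10 for i in range(11)]  # coarse, uniform initial mesh
for _ in range(12):
    refined = refine(mesh)
    if len(refined) == len(mesh):   # no cell needed splitting: converged
        break
    mesh = refined
```

After a few passes the mesh is dense only around the sharp field transition, which is exactly the economy adaptive refinement buys in the 3D case.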

3D Components

Traditionally, it has been difficult for designers to build a comprehensive (“end-to-end”) model of even a single long-reach, high-speed signaling interface.  The PCB trace S-parameter model generation from the stack-up was relatively straightforward, but obtaining a model for connectors and cables from the vendor was typically difficult.

Ansys realized that enabling link simulation, and ultimately, system-level analysis required a novel method, and developed the “3D Components” methodology:

  • vendors have the tools to generate an encrypted model for release (without applying specific excitations and boundary conditions)
  • these “intrinsic” models are simulation-ready

HFSS has full access to the model, but the vendor is able to protect their proprietary IP.

  • model re-use is readily supported, through user-defined parameter values (see the figure below)

HFSS Mesh Fusion

Of the steps in the electromagnetic analysis of an electronic system:

  • materials specification
  • definition of boundary conditions and excitations
  • identifying the frequency range of interest
  • mesh generation
  • matrix solve/simulation, across the range of frequencies
  • results post-processing

the key to the final accuracy of the results is mesh generation.  Matt stated, “The optimum meshing approach differs for IC packages, connectors, PCBs, and the chassis – yet, there are coupled fields throughout.  It is crucial to locally use the appropriate mesh technology.”

The combination of adaptive mesh refinement and 3D Component models has enabled Ansys to focus on using the specific meshing technique best suited to the MCAD geometry throughout the system.  The latest Ansys HFSS release incorporates this mesh fusion feature.  Although Matt and I didn’t get a chance to discuss mesh fusion in great technical detail during our call, he indicated there is an upcoming webinar that will go into more specifics – definitely worth checking out.  (Webinar registration link)

Here are the mesh and electromagnetic simulation results from the complex example shown above.

Summary

The traditional method for electromagnetic analysis in electronic systems focused on PCB designs and high-speed signaling.  The board stack-up and materials properties were defined, and the signal traces were simulated.  S-parameter response models for signal loss and (near-end/far-end) crosstalk from adjacent traces were generated, and incorporated into subsequent circuit simulations to measure the overall transmit/receive signal fidelity.  However, the complexity of current electronic systems necessitates a more comprehensive approach to electromagnetic coupling simulation, as compared to concatenating individual S-parameter models.  Systems will be integrating a broad range of signal frequencies from audio to mmWave, with advanced packaging present in aggressive volume enclosures.

The HFSS team at Ansys has focused on numerous technical advances – both computationally and in the critical area of mesh generation – to enable this analysis.  Designers can now evaluate and optimize models of a scope that was once unachievable, with manageable computational resources.

For more info on these Ansys HFSS features, please follow these links (and don’t forget to sign up for the Mesh Fusion webinar):

Broadband Adaptive Meshing – link

Ansys cloud HPC resources – link

3D Components – link1, link2

Mesh Fusion webinar – link

-chipguy

Also Read

HFSS – A History of Electromagnetic Simulation Innovation

HFSS Performance for “Almost Free”

The History and Significance of Power Optimization, According to Jim Hogan



Calibre DFM Adds Bidirectional DEF Integration
by Tom Simon on 01-26-2021 at 6:00 am

Siemens EDA DFM flow

GDS and LEF/DEF each came about to support data exchange in different types of design flows, custom layout and place & route respectively. GDS (or stream format) was first created in the late 1970s to support the first generation of custom IC layout tools, such as Calma’s GDSII system. Of course, the GDS format has been updated over the years to support the capabilities of newer IC design tools. LEF/DEF (library exchange format & design exchange format) came along later to support the larger but somewhat simpler data found in P&R flows. Yet there has always been a dichotomy of support for these formats among various design tools. DFM has tended to fit in between custom layout and P&R, with the general assumption that reading LEF/DEF and writing GDS made sense because at or near tape out the flow would move to GDS. However, this has always been problematic because P&R systems have grown more powerful and designers want to see the final layout in them as opposed to custom tools at the end of the flow.

DFM (design for manufacturing) tools, such as Siemens EDA’s Calibre YieldEnhancer, provided a path to DEF so DFM changes could be brought into P&R tools. But the flow was cumbersome and often required steps not in foundry approved rule decks. Siemens EDA now has a white paper that discusses the addition of a fully bidirectional DEF flow for their DFM tools. The paper, titled “Optimizing the Integration of DFM and P&R” by Armen Asatryan and James Paris describes the limitations of the previous extra conversion step to get to DEF and then talks about their new direct write to DEF.

Siemens EDA DFM integration with P&R

The previous DEF conversion process involved a utility called fdiBA which ran as a separate operation. It involved extra steps and needed inputs for attaching net names to geometry. It had limitations on geometry shapes and did not deal well with multiple orientations of vias. With larger design sizes it tended to face capacity limits and long runtimes.

The new direct write DEF feature in Calibre can create a DEF format database without the need for an intermediate GDS (or OASIS) database conversion. To take advantage of this, only one rule deck option needs to be added – “DFM RDB DEF”. Along with the obvious simplicity of the flow, there are a number of added benefits. Direct write to DEF supports all-angle metal shapes, including metal extension end-caps and via caps. It recognizes multiple orientations of via instances. To reduce size, it performs via array detection. Direct write DEF also provides automated via repair.

One area that will make a big difference is the treatment of fill. Compressed fill is handled properly with direct write to DEF so Calibre YieldEnhancer SmartFill can efficiently transfer fill to P&R tools.

Having been in this industry as long as I have, I am often amazed at the longevity of the GDS format. I actually knew Sheila Brady, the woman who created the GDS format. Even back in the 80s she would comment on how it had taken on a life of its own, when it was initially intended as just a quick and reliable backup for GDSII databases. Yet here we are with complicated flows that rely on legacy formats such as this. Rationalizing the movement of P&R data between essential tools in the flow with DEF is vital as design size and complexity increase. The full white paper is available for download here.

Also Read:

Automotive SoCs Need Reset Domain Crossing Checks

Siemens EDA is Applying Machine Learning to Back-End Wafer Processing Simulation

CDC, Low Power Verification. Mentor and Cypress Perspective


HCL Provides an On-Ramp to the Amazon Elastic Compute Cloud for HCL Compass

HCL Provides an On-Ramp to the Amazon Elastic Compute Cloud for HCL Compass
by Mike Gianfagna on 01-25-2021 at 10:00 am

HCL Provides an On Ramp to the Amazon Elastic Compute Cloud for HCL Compass

Last August I detailed a webinar about HCL Compass, a tool that provides low-code/no-code change management capability for enterprise scaling, process customization and control to accelerate project delivery and increase developer productivity. There is a lot of activity these days to migrate various enterprise applications to the cloud for better scalability, access and performance. I know from first-hand experience that, when it comes to cloud migration, the devil is in the details. So, a comprehensive guide that explains how to do this is quite valuable. If you’re considering such a move, read on and you’ll discover that HCL provides an on-ramp to the Amazon Elastic Compute Cloud for HCL Compass.

First, a bit about Amazon Elastic Compute Cloud (EC2). Provided by Amazon Web Services (AWS), EC2 is a web service that provides resizable computing capacity—literally, servers in Amazon’s data centers—that you use to build and host your software systems. Features of EC2 include:

  • Increase or decrease capacity within minutes, not hours or days
  • Service level commitment of 99.99% availability for each Amazon EC2 region. Each region consists of at least 3 availability zones
  • The AWS Region/AZ model has been recognized by Gartner as the recommended approach for running enterprise applications that require high availability

EC2 has quite a global footprint, with nearly 400 instance types for virtually every business need across 24 regions and 77 availability zones. EC2 offers a choice of Intel, AMD, and Arm-based processors, and AWS is the only major cloud provider that supports macOS. Also, both EC2 instances and the HCL Compass web server support Windows and Linux OS versions. So, a complete HCL Compass installation using EC2 instances is possible in AWS.
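As a sketch of what provisioning such an instance might look like, the AWS CLI can launch a Linux host for the Compass web server. The AMI ID, instance type, key-pair name, and tag below are illustrative placeholders, not values from the whitepaper:

```shell
# Hypothetical example: launch one Linux EC2 instance to host the
# HCL Compass web server (IDs and names are placeholders).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large \
  --count 1 \
  --key-name compass-keypair \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=compass-web}]'
```

Sizing, networking, and security-group setup would follow the guidance in the HCL whitepaper for your deployment.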

Virtual resources such as those provided by EC2 remove the capital expense of procuring and maintaining equipment, as well as the expense of maintaining an on-premises data center (cooling, physical security, janitorial services, and so on). In the AWS Cloud, EC2 instances act as the servers in Amazon’s data centers on which HCL Compass is built and hosted.

The whitepaper provided by HCL is quite comprehensive, covering general guidance for cloud installation and for migration from on-premises ClearQuest or HCL Compass to HCL Compass on AWS. Note that HCL Compass is the next generation of IBM Rational ClearQuest. The intended audience for the whitepaper is administrators of HCL Compass and administrators of ClearQuest who intend to migrate and deploy to the cloud. The document includes a strategy for a fresh installation of HCL Compass and for migration from ClearQuest to HCL Compass in AWS. HCL Compass 2.0.0 supported platforms include 64-bit Windows and Linux as follows:

  • Windows: Windows 10 Enterprise (x86_64) all updates, Windows Server 2016, Windows Server 2019
  • Linux: RHEL 7.4+ (x86_64), RHEL 8.0 (x86_64)

Database support includes:

  • Microsoft SQL Server: 2017
  • Oracle: 12cR2, 18c, 19c
  • DB2:5

Browser support includes:

  • Google Chrome: 37 and future versions, releases and fix packs
  • Microsoft Edge: 20 and future versions, releases and fix packs
  • Microsoft Internet Explorer: 11 and future fix packs
  • Mozilla Firefox: 54 and future fix packs
  • Mozilla Firefox ESR: 38 and future versions, releases and fix packs

All other required software is detailed as well. The document covers a lot of other topics, including:

  • Administration
  • Performance
  • Port setup
  • Load balancing
  • SSL enablement
  • SSO external server
  • LDAP authentication server
  • Multi-site and email relay considerations
  • License server configuration
  • Database migration
  • Sample usage scenarios

Pretty much everything you’ll need to know. Additional documentation on virtualization is also provided. You can get your copy of the white paper, entitled HCL Compass in AWS here. You can learn more about HCL Compass here. If you’re considering a move to the cloud, you’ll be happy to know that HCL provides an on-ramp to the Amazon Elastic Compute Cloud for HCL Compass.


New Intel CEO Commits to Remaining an IDM

New Intel CEO Commits to Remaining an IDM
by Robert Maire on 01-25-2021 at 6:00 am

Pat Gelsinger Intel CEO

-Intel good results had a little extra help to be great
-New CEO commits to remaining an IDM versus fabless
-Claims of strong progress on 7NM fuel optimism inside
-Outsourcing to TSMC will not go away but will increase

A good quarter but with some silicon enhancements from ICAP

Intel reported revenues of $20B and EPS of $1.52, well above consensus expectations of $17.5B in revenues and $1.10 in EPS.

But if you look below the surface there was a bit of “artificial enhancement” from ICAP (Intel Capital). $1.692B of the $7.488B pre-tax income (almost 23%) was from ICAP and not operations. This compares to $212M in total ICAP benefit for the prior 9 months and $617M in ICAP benefit in the year-ago quarter. Backing out this unusually large one-time gain would still leave a good beat, but not the large beat that was surprisingly printed prior to market close Thursday.

Guidance was also a beat, with a Q1 guide of $17.5B (down 12% YoY) and EPS of $1.10 (down 24% YoY) versus analyst expectations of $16.4B and EPS of $0.87.

More importantly Intel staying “Inside”

Intel’s new CEO, Pat Gelsinger, was on the call, along with the chairman of the board. Pat made it clear that he thought that 7NM was on the road to recovery and that Intel will continue to produce the majority of its chips inside as an IDM.

This is perfectly in line with what we had predicted in our last two notes on Intel. We obviously also have a strong bias to see Intel remain a true IDM and “own its own fabs”. We had also predicted that the final decisions would not be made until Pat was there for a while, which they also said on the call as they did not give full year guidance.

It is also clear that, as we had suggested, Intel has no choice but to continue to outsource to TSMC, and in fact will increase the outsourcing: even if they fix 7NM, they still will not be able to ramp capacity fast enough.

Pat even referred to the same view we have about Intel being somewhat of a national treasure and key to the US’s technology infrastructure.

Get ready for numbers to look ugly and get sandbagged

Although Intel did not give full-year guidance, we would hold onto our seats. As we have suggested, the dual costs of increased outsourcing to TSMC plus increased spend on internal efforts to regain Moore’s Law pace will be high and will pressure margins in the short term, which we would view as at least the next two years and maybe more.

If Pat is smart he will sandbag strongly and lower expectations to numbers that can be beaten easily and overestimate the costs of fixing 7NM and ramping capacity while still paying TSMC to make chips. He doesn’t want to put out numbers he will miss in his first couple of quarters on the job.

We would hope that this would include a large jump in Capex and R&D to give the engineers and manufacturing the latitude they need.

It may not be the $28B or $30B of TSMC and Samsung but it should be generous.

So what was the problem with 7NM?

While Intel did not explicitly say what was wrong, they did say that the fix required them to re-design a significant number of steps in the process flow to fix the problem. This clearly means that it was not one step or tool or material or even a design.

Intel obviously drove down a dead end from which there was no escape. It meant that they had to back up quite a bit and start over which obviously accounted for the extra time.

Backing up that far means literally going back to the drawing board (EDA tools) and re-laying out all the designs and layers. It means a new set of masks and new and additional tools. It can get ugly and out of control quickly.

As with plane crashes, it’s never just one thing that went wrong or that has to be fixed.

Bringing people out of retirement

We had also mentioned that we thought Intel had lost some of its brain trust through RIFs, retirements, etc. We are happy to see that Pat is already bringing back Intel’s prior stars to help bail them out. Maybe they should buy out Jim Keller’s AI chip startup to get him back to Intel. Intel did mention AI a significant number of times on the call; maybe there’s a coded message there.

The Stocks
Intel jumped before the close on Thursday as numbers were somehow released while the market was still open. Intel was up 6.5% during market hours largely due to the leak and down 1.5% in after hours.

We don’t know how many analysts or investors dug deeper into the numbers to find that they appeared better due to the Billion dollar plus ICAP benefit.

As we suggested in our last note, we think there is a potential opportunity to make some money on a near-term pop in Intel’s share price due to the double whammy of a new CEO and an unusual beat. We would likely want to be out of the stock prior to Pat’s resetting of numbers and projections for the full year, which will likely show higher expenses.

We view this as neutral to positive for AMD, as the underlying strength of demand is a good thing, and not being in the same TSMC lifeboat with Intel gives AMD some room.

Some investors took it as a negative for TSMC which we disagree with. TSMC has more than enough demand to deal with and heaping even more demand would likely strain them in the short run and piss off even more customers on the lower end who won’t get serviced or be pushed out by the big boys. TSMC has way more than enough potential business and profits and doesn’t need to drown in demand.

Equipment companies should see this as a positive, as we already suggested, since Intel will likely not only continue to spend but spend more to catch up and ramp up. So it should be positive across the board. Perhaps ASML may get a bigger slice of the benefit, as Intel will have to ramp EUV much as TSMC already has and Samsung is trying to do. Perhaps it would make up for the recent slowing of a couple of EUV customers.

Long Live “Intel Inside!”

Also Read:

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory

2020 was a Mess for Intel

SMIC Blacklist puts ASML in Jam


Playing Pandemic Roulette in Cars

Playing Pandemic Roulette in Cars
by Roger C. Lanctot on 01-24-2021 at 10:00 am

Playing Pandemic Roulette in Cars

A study published in Science Advances presents computer simulations of the movement of virus-laden airborne droplets in cars. The objective of the study appears to be to assess the degree of driver-to-passenger and passenger-to-driver exposure in different open-or-closed window configurations, in the context of modeled airflow around and within a four-windowed car.

“Airflows inside passenger cars and implications for airborne disease transmission” – Science Advances

The study does not recommend any particular window open or closed configuration. This would be difficult as a configuration that might be advantageous to the passenger may not be optimal for the driver. The ideal configuration, not surprisingly, is to have all windows open. In fact, the more windows that are open, the lower the exposure of either driver or passenger.

The study is important for shining a light on the question of driver-to-passenger (and vice versa) exposure in the context of the unique air pressure gradients surrounding the car, the counterclockwise movement of air within the car, and the prevailing front-to-back movement of air when air conditioning or heating is in use. The study also looks at the number of air changes per hour (ACH) for different open-closed window configurations.

What the study does NOT consider is the impact of introducing an in-vehicle partition. The study also does not take into account multiple passengers. What the study really highlights, though, is the reality that no configuration is a guarantee of safety for either passenger or driver. It also fails to determine whether the driver or the passenger is at greater risk – although the driver is presumably at greater risk given the number of potential exposure opportunities during a working day.

The study also highlights the lack of research into COVID-19 infections among taxi, limousine, and ride-hailing drivers – and frequent users/passengers. In fact, a broad study of transmission on buses, subways, and taxis – including drivers/operators and passengers – is overdue.

It would seem that the only real takeaway from this study is that there is no safe window open-or-closed configuration when sharing a vehicle with a stranger or family member. The lack of any testing of partition-equipped cars is unfortunate.

Too many ride-hailing companies have adopted Centers for Disease Control and Prevention guidelines such as mask wearing and sanitation as sufficient. The CDC guidelines for taxi, limousine, and ride-hail operators can be found here:

https://www.cdc.gov/coronavirus/2019-ncov/community/organizations/rideshare-drivers-for-hire.html

The CDC guidelines have a single reference to vehicle partitions: “If you work for a company that offers a large fleet of vehicles, ask company management for a car/taxi (when applicable) with a partition between driver and passengers, if available.” The agency makes no public policy recommendation – such as requiring partitions.

Some ride-hailing companies – like Alto, Bolt, and Didi Chuxing – get it. These operators have required and added partitions to their vehicles.

If partitions are deemed effective at restaurants, grocery stores, and gyms, and in schools and on buses, it stands to reason they will be effective in taxis and ride-hail vehicles. Somehow this message has not penetrated senior management levels at Uber – now obsessed with shedding assets and shoring up profitability. Lyft makes partitions available to its drivers, but does not require them.

The study discussed here was conducted by Dr. Varghese Mathai, a physicist at the University of Massachusetts, Amherst, and three colleagues at Brown University — Asimanshu Das, Jeffrey Bailey and Kenneth Breuer. The study has spurred consumer advocates to recommend that the windows opposite the driver and the passenger each be rolled down for the optimal and safest airflow in the vehicle.

Still, it seems that the only really safe means of offering shared vehicular transportation is with a partition installed in the car. Maybe a follow-up study can take a look – and maybe next time the scientists will keep public policy in mind. Passengers and drivers shouldn’t have to calculate their odds of getting a COVID-19 infection based on how many windows are rolled up or down in their vehicle.


ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory
by Robert Maire on 01-24-2021 at 6:00 am

ASML SMIC TSMC EUV DUV

ASML has good quarter driven by DUV & Logic (@72%)
– SMIC & other major customer slow EUV plans
– Logic (read that as TSMC) remains key demand led driver
– We are happy memory remains muted given cyclical potential

A very solid quarter with a continued road to growth
The quarter came in at €4.254B in revenue and €1.351B in net income on gross margin of 52%. The company reported full-year sales of €14B and €3.6B in income. EUV fell from last quarter’s 66% of sales to 36% of sales. The company increased the dividend by another 15%, showing a strong intention to return excess cash to shareholders.

Bookings were even more biased towards logic with 78% of bookings being logic driven. Six EUV systems were booked.

The outlook is for sales of €4B ± €100M, which is somewhat flattish.

EUV demand variations cause slowdown ripple through supply chain
Making EUV scanners is a very complex business requiring an impressive global supply chain of technologically complex parts.

ASML has done a very good job of helping out or acquiring and supporting key supply chain manufacturers, starting with Cymer a long time ago.

Even with all the effort put into the supply chain there is still a limit of how fast production of these key components can be ramped up or down given the complexity which means long lead times.

However, customer demand seems to vary a lot more than the flexibility of the supply chain allows, as we are now seeing. SMIC was an unpredictable issue. We would expect some natural digestion periods and annual cyclical behavior in order patterns as EUV continues its rollout.

We might compare this problem to the whiskey industry: while global demand may increase or decrease in the short term due to global economies or politics, the long-term supply of 15-year-old single malt scotch really can’t be changed that quickly.

DUV was a very good fallback
Falling back on DUV tools wasn’t all that bad, as evidenced by the 52% gross margins, so it’s not like ASML is suffering due to the variability of EUV orders and shipments.

We think it also shows the huge strength in the overall semiconductor market not just at the bleeding edge.

There have been numerous recent articles about shortages of chips for the auto industry. We would point out that most cars we know of don’t use a lot of 5NM or 7NM chips made with EUV tools, but instead contain hundreds or thousands of chips made in relatively ancient 200mm and 150mm fabs and toolsets.

DUV will be around for a very long time, especially given the memory market and trailing-edge demand.

Execution remains solid
We think management has done a good job of managing the many issues and complexities, especially in light of Covid. Shifting gears from EUV to DUV and dealing with customer demand changes have been handled with relatively transparent impact on overall sales and earnings.

This is not easy given the complex global nature of the tool supply chain and organization.

Financial execution remains solid, and the company continues its transition toward returning more and more cash to shareholders as it throws off more cash from operations.

The Stocks
We think we will see a somewhat muted response in the stock. While the results were good, the outlook is flat, and the air pocket in EUV sales may concern investors more.

Logic at 72% is not a problem and likely more of a positive given the variability of memory which can be scary.

All in all a good quarter but some complications.

The impact on collateral stocks is also likely somewhat neutral to slightly negative as we didn’t have overly positive results nor definitive recovery coming from the memory area.

Given that logic is so strong we would expect to see better results from KLAC than LRCX as they report their quarters given their respective concentrations.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

2020 was a Mess for Intel

SMIC Blacklist puts ASML in Jam

Noose tightens on SMIC- Dead Fab Walking?