
NetApp Simplifies Cloud Bursting EDA Workloads
by Daniel Nenni on 05-19-2021 at 6:00 am


Why burst EDA workloads to the cloud
Time-to-market challenges are nothing new to those of us who have worked in the semiconductor industry. Each process node brings new opportunities along with increasingly complex design challenges. The 7nm, 5nm and 3nm process nodes have introduced scale, growth, and data challenges at a level previously unheard of, particularly for backend design processes.

Design teams are looking to hyperscale cloud providers like AWS, Azure and Google Cloud to provide the on-demand scale and elasticity required to meet these time-to-market challenges. An ever-increasing number of semiconductor companies have either evaluated or are regularly using the cloud for burst capacity. Runtime analytics and Spot pricing have lowered the cost barriers of the cloud by enabling jobs to run on the lowest-cost servers for the required job and for the right amount of time. This has nearly removed the cost barriers to increased use of the cloud – particularly for burst or peak periods of the design process.

Increased use of AI and GPU-enabled EDA tools is driving a need to include more and more GPU-enabled servers in the flow. The cloud enables design teams to quickly spin up the right mix of server types to match the workloads based on feature requirements and cost.

Bursting to the cloud might seem like the right solution, but data mobility and data transfer to and from the cloud make bursting outside of traditional on-prem data centers challenging due to the size and gravity of the data.

Example workloads burst to the cloud
Front-end verification jobs are often the first jobs companies attempt to burst to the cloud. The ever-increasing number of simulation, Lint, CDC, DFT and power analysis runs at the block, sub-system and full-chip level can total 20k-50k jobs in a nightly run. The more jobs that can be run in parallel, the faster the jobs will finish, and the faster issues can be detected and resolved.

The server requirements of these jobs can vary widely, from very small IP-level jobs that take just a few minutes to full-chip runs which require large core-count, high-memory servers. This range of jobs is ideal for the cloud, where the wide range of server types and sizes can match the requirements of each job. Front-end jobs tend to be tolerant of job failure, preemption, and restart, which makes them ideal for running on lower-cost cloud Spot instances.

From the AWS documentation: “You can launch Spot Instances on spare EC2 capacity for steep discounts in exchange for returning them when Amazon EC2 needs the capacity back. When Amazon EC2 reclaims a Spot Instance, we call this event a Spot Instance interruption. You can specify that Amazon EC2 will stop, hibernate, or terminate interrupted Spot Instances (terminate is the default behavior).”
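As a rough illustration of how a burst flow might request Spot capacity, here is a minimal sketch using boto3, the AWS SDK for Python. The AMI ID, instance type, and counts are placeholders, not a recommended configuration; a real EDA burst flow would drive this from the grid scheduler.

```python
# A minimal sketch of launching Spot capacity with boto3.
# The AMI ID and instance parameters below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical compute-farm image
    InstanceType="c5.4xlarge",         # pick per-job core/memory requirements
    MinCount=1,
    MaxCount=100,                      # scale out for a nightly regression
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # Terminate is the default interruption behavior noted above.
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print([i["InstanceId"] for i in response["Instances"]])
```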

New AI-driven workflows like DSO.ai lend themselves to burst-to-cloud use models. Instead of doing a single run, analyzing results, tweaking parameters, and re-running, these new workflows kick off 30-40 runs in parallel, all with different optimization parameter settings; AI then analyzes which of the runs had the best outcomes and uses those results to seed the next 30-40 runs.

Designs that have 20 or more block/subsystem/full-chip runs will then require 30-40 runs per analysis, for a 30-40x increase in the number of jobs and data required to run the analysis. By trading increased compute demand, supplied by additional cloud compute capacity, teams can quickly zero in on improved PPA results within a fixed schedule. Fabs are also getting in on the burst-to-cloud use model. OPC (optical proximity correction), RET (reticle enhancement technology) and MDP (mask data prep) jobs are some of the most compute-intensive and time-to-market-sensitive jobs in the chip design cycle. Cloud scale and availability enable fast turn-around and speed chip production.

Challenges with Bursting to Cloud
Setting up an automated burst-to-cloud use model is the first challenge. Cloud providers have EDA reference architectures for setting up license servers and grid engines (LSF, Grid Engine, and Slurm) and providing automation for provisioning the compute, network, and storage infrastructure. The FlexLM-based license setup is typically unchanged from on-prem, or can even use the same on-prem license server. The biggest challenge is often figuring out which data needs to be replicated (copied) to the cloud to run the workflows.

Most design flows point to a myriad of different design files scattered across many different volumes of data: tools, libraries, 3rd-party IP, CAD flow scripts, revision control (RCS) files (from P4, ICManage, etc.) and even some files in users’ directories. The first challenge is figuring out WHAT files (or volumes) of data are needed for the flow which is being burst to the cloud. The second obvious issue is HOW you transfer those files to the cloud.

Sadly, there is no simple solution to the WHAT files question. The obvious RCS, tools and library files are typically easy to identify. The others might require trial and error: run the job in the cloud, copy the missing files, and repeat. This can be very challenging and time-consuming, particularly if you must copy lots of data.

The other issue is the size of the data. The easy thing to do is to copy the entire /mnt/tools/ directory to the cloud. But do you really need every tool and every version of each tool, or just the latest or the specific versions your flow requires? Simply copying ALL files will result in long data transfer times and increased storage costs.

Then there is the question of HOW to copy the files to the cloud: rsync, ssh/scp, gtar/FTP, or even reinstalling the tools in the cloud. All these methods are tried and true, but then how do you keep the cloud data in sync with the on-prem data? Tool versions and new libraries are always being updated. How do you ensure the environment you set up and got working in the cloud will work tomorrow, after someone commits a change that points to new tool or library versions? Keeping data in sync between on-prem and cloud can become a maintenance headache.
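As one concrete option from the list above, here is a minimal sketch that wraps rsync over ssh in Python, so repeat syncs move only changed files. The paths and cloud host name are hypothetical.

```python
# Incremental sync of one tool version (not all of /mnt/tools) to the cloud.
# Paths and host name below are placeholders for illustration.
import subprocess

SOURCE = "/mnt/tools/synthesis/2021.03/"
DEST = "cloud-bastion:/mnt/tools/synthesis/2021.03/"

subprocess.run(
    [
        "rsync",
        "-az",          # archive mode (permissions, symlinks) plus compression
        "--delete",     # drop files removed on-prem so the cloud copy stays in sync
        "--partial",    # resume interrupted transfers of large files
        SOURCE,
        DEST,
    ],
    check=True,
)
```

Run nightly, this keeps the cloud copy current, but it still only covers the volumes you already identified, which is exactly the maintenance burden described above.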

The following diagram shows the various directories (or volumes of data) which need to be exported to the cloud. Design flows will typically use just one version of tools and libraries in a given flow but will point to tool and library installations which contain many different versions. By definition, burst to cloud is a short-term activity – it needs to be available at a moment’s notice, but for cost control, it needs to be terminated when no longer needed.

NetApp makes data mobile and on-demand
NetApp’s ONTAP storage operating system has been the tried-and-true solution for 20 years of semiconductor innovation on-prem. Cloud Volumes ONTAP (CVO) is the same tried-and-true, feature-rich storage operating system your IT teams have relied on for years, and it runs in all three clouds. CVO has been available in AWS, Azure and Google Cloud since before 2017 and has made migrating or bursting to the cloud as easy as running on-prem.

ONTAP’s FlexCache is a data replication technology that enables fast and secure replication of data into the cloud. FlexCache is ideal for replicating tool and library data to the cloud. Once a FlexCache volume is provisioned in the cloud and set to cache a pre-existing on-prem volume, almost instantly all the on-prem files are visible in the cached volume in the cloud. Even though the files appear to be in the cloud, it is not until a file is read that its data is actually transferred to the cache. The second read from the cache is instant, since the file is already cached. This means that when a job is run in the cloud, the flow will find all the files it needs – even if the on-prem data was recently changed. With ONTAP 9.8 the cache can be pre-warmed via a script, but it is more common to just run one job first to warm the cache, so when the large job set runs, the files are already pre-populated in the cache.
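As a concrete illustration of the read-triggered population just described, a pre-warming script can be as simple as walking the cloud-side mount and reading every file once. This sketch assumes a hypothetical NFS mount point; ONTAP 9.8’s own pre-warm mechanism would be the supported route.

```python
# Warm a FlexCache volume by reading every file on the cloud-side NFS mount.
# The mount point is hypothetical; reading a file pulls it into the cache.
import os

CACHE_MOUNT = "/mnt/cloud-cache/tools"

def warm(path):
    for root, _dirs, files in os.walk(path):
        for name in files:
            with open(os.path.join(root, name), "rb") as f:
                while f.read(1 << 20):   # read in 1 MiB chunks; discard the data
                    pass

warm(CACHE_MOUNT)
```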

Cached tool and library volumes are typically read-heavy, with writes only occurring when new tools or libraries are installed. FlexCache makes the distribution of new tools and libraries easy – requiring no additional automation or synchronization. The CAD teams only need to install tools in the source volume, and those new files are instantly visible and available on the FlexCache volumes. ONTAP supports a fan-out of up to 100 FlexCache volumes from a single source volume, which means a single tool volume can be replicated to many remote datacenters and cloud regions.

FlexCache makes replicating on-prem environments into the cloud fast, easy and storage-efficient, since only the files that are needed get copied to the cloud. FlexCache volumes are also a great way to DNS-load-balance tool and library mounts across large server farm installations. Instead of having 10k cores all reading from a single set of tool and library mounts, multiple FlexCache replicas can be created to spread out NFS mounts and improve read access performance.

FlexCache volumes can also be used in reverse. Instead of replicating data to the cloud, a FlexCache volume on-prem can point to a volume in the cloud. Reverse caching enables designers to view and debug data on-prem without having to log into the cloud.

Summary/Conclusion
Hybrid cloud use models have matured and are ready for mainstream semiconductor development. Rapidly spinning up cloud environments to enable “peak sharing” or “bursting to cloud” has proven to meet aggressive project schedules.

NetApp’s ONTAP storage operating system makes connecting on-prem data to the cloud easy. It eliminates manual methods of managing multiple copies of data and accelerates data access. It can dramatically reduce the storage footprint via sparse volumes, ensuring storage needs are a fraction of the original dataset. Data connections between on-prem and cloud are secured with encryption both at rest and in flight.

If you would like to learn more, contact your local NetApp sales or support team.

Also Read:

NetApp Enables Secure B2B Data Sharing for the Semiconductor Industry

NetApp’s FlexGroup Volumes – A Game Changer for EDA Workflows

Concurrency and Collaboration – Keeping a Dispersed Design Team in Sync with NetApp


Extending Moore’s Law with 3D Heterogeneous Materials Integration
by Tom Dillinger on 05-18-2021 at 10:00 am


A great deal has been written of late about the demise of Moore’s Law.  The increase in field-effect transistor density with successive process nodes has slowed from the 2X every 2 1/2 years pace of earlier generations.  The economic nature of Moore’s comments 50 years ago has also been scrutinized – the reduction in cost per transistor has also abated.

The traditional technology scaling model has become significantly more complex, due to the requirements for: new lithography systems and resists; alternative deposition and etch equipment; the introduction of new interconnect and dielectric materials; and the increasing reliance on new design-technology co-optimization (DTCO) integration methods.

Parenthetically, the emergence of various 2.5D and 3D multi-die packaging offerings has led to the use of the term “More than Moore” integration.  The potential diversity of die functionality and process selection in these packages offers additional tradeoffs in realizing effective density and cost, the foundations of Moore’s Law.

Despite all of the commentary on Moore’s Law, there remains a tremendous R&D investment on new devices that will continue to offer improved performance, power, and area.  At the recent Advanced Semiconductor Manufacturing Conference (ASMC), sponsored by SEMI, a highlight was the keynote presentation by Gary Patton, CVP & GM, Design Enablement, at Intel, who presented an overview of these R&D efforts.  His “Continuing Moore’s Law” talk offered an optimistic view on future technology features.

Gary covered the transition to gate-all-around (GAA) devices, expected to be the immediate successor to FinFETs.  (With the re-introduction of devices where the individual transistor width is again a design parameter, the transistors/mm² density measure will likely need a re-interpretation.)

There are numerous research initiatives underway as a potential long-term transition beyond CMOS – e.g., (arrays of) 2D semiconductor materials, such as MoS2, WS2, and WSe2.

Of particular note in Gary’s talk was the description of an area of process technology development that perhaps does not receive due consideration – the 3D monolithic integration of heterogeneous semiconductor materials, used for fabrication of optimized nFET and pFET devices.  This approach provides continued device scaling, integration of mature process fabrication techniques, and builds upon existing (CMOS-based) circuit design experience.

Before elaborating on some of the monolithic 3D possibilities, a description of the bonding of heterogeneous materials would be insightful.

Oxide Bonding and Donor Wafer Cleaving

The goal of monolithic 3D integration is to provide multiple, stacked semiconducting materials for device fabrication.  A subset of transistors is fabricated in the host wafer.  Subsequently, a donor wafer (of a different semiconductor composition) is bonded to the host, and cleaved to provide a thin material layer on top of the host for subsequent device processing.  The figures below illustrate the wafer process flow.

The full-thickness host wafer provides the mechanical support;  the thin donor layer does not add significantly to the overall thickness, enabling the use of existing process equipment and fabrication flows.  (As will be discussed shortly, there are restrictions on the thermal budget for processing the donor layer devices, so as not to adversely impact the existing host device characteristics.)

Briefly, the sequence of steps for preparation of the 3D monolithic stack is:

  • devices are fabricated on the host (300mm) wafer
  • the host wafer receives a deposition of a thin dielectric layer (e.g., chemical vapor deposition of SiN and SiO2)
  • the host wafer surface is polished (e.g., using chemical-mechanical polishing)
  • a (300mm) donor wafer is subjected to an implant of H+ (protons), using an optimized implant energy and dose
  • the donor and host wafers are bonded

Prior to bonding the host and donor wafers, specific wafer surface cleaning chemistries are employed.  It is necessary that the two wafer surfaces are hydrophilic, “atomically smooth”, and have a high density of chemical bonding sites (to preclude micro-voids forming at the interface).

In a special aligner (with dual wafer chucks), the host and donor wafers are loaded facing each other, aligned, and brought in contact.  After the initial wafer-to-wafer interface bonding has stabilized, the donor chuck is released.

Then, a thermal annealing step is applied to the composite.   This anneal performs two critical functions:  it strengthens the bonded interface, and it allows the implanted hydrogen to diffuse in the semiconductor crystal, and nucleate to form H2.  

A very thin H2 layer forms in the donor wafer, at a depth equivalent to the point of highest crystalline dislocation after the H+ implant.  This H2 layer introduces a structurally weak interface within the donor wafer crystal.

  • the donor wafer is cleaved at the internal H2 interface

A combination of mechanical edge force and/or thermal cycling results in fracturing of the donor wafer at the H2 layer depth.

  • the resulting monolithic wafer with the stacked sequence of semiconductor layers is annealed (to reduce residual implant damage), and polished

As illustrated above, the fracturing step may result in a rough surface topography, which needs to be polished before subsequent device fabrication, and layer-to-layer contact formation.

This technique for oxide bonding and donor layer transfer has been used in production for silicon-on-insulator (SOI) wafer preparation for many years.  (A deeper understanding of the mechanics behind H+ diffusion, H2 layer formation, and the structural impact on the donor wafer crystal during the nucleation annealing step remains an active area of research.)

Gary’s presentation highlighted two areas where the Intel Research division is adapting this layer transfer technique to 3D monolithic integration, to further extend Moore’s Law.

nFET in Si, pFET in Ge

One of the issues faced in advanced process development is the relatively weak hole mobility in Si, especially at higher hole free-carrier density and electric field.  Current process technologies incorporate compressive mechanical stress in the pFET device channel to improve the hole mobility.  More recent advances strive to utilize a stoichiometric combination of Si and Ge directly in the pFET device channel – i.e., Si(x)Ge(1-x) – to leverage the higher hole mobility in Ge.

The team at Intel Research has been pursuing 3D monolithic integration using a Ge donor layer bonded on top of the Si host wafer, as depicted below. [1]

In this case, a FinFET device structure was fabricated on the host wafer for the nFETs, while a GAA topology was used for the pFETs in the Ge donor layer.  As mentioned above, the process flow and materials selection for the nFET high-K, metal gate, source/drain doped epitaxy, and contact metal is chosen to be compatible with the subsequent thermal processing of the Ge donor layer and pFET fabrication (e.g., <600C).

After the fabrication of the GAA pFET source/drain epi, device oxide and metal gate (using a replacement gate process), and source/drain contacts, vias are formed between the two transistor layers.

Also illustrated above is an example profile of the Ge donor layer thickness across a 300mm wafer, showing excellent uniformity of the monolithic layer transfer process (<3nm variation across the entire wafer).

The figures below depict the final 3D cross-section, the (short-channel) Si nFET and Ge pFET characteristics, and the Vout versus Vin transfer characteristics of a 3D monolithic inverter logic gate (down to VCC = 0.5V).  The Ion versus Ioff curve for the Ge pFET illustrates the improved characteristics over strained Si devices.

The use of a Ge layer stacked vertically on top of a Si layer for heterogeneous integration offers a unique opportunity for CMOS logic implementations, helping to extend Moore’s Law.

Si donor wafer on GaN host

The previous section described an approach to realize improved hole mobility in Ge pFETs.  Another area where advanced process development issues have arisen is the need for high-efficiency RF-class devices, integrated with conventional CMOS logic.  The demand for 5G (and beyond) applications requires optimum device cutoff frequency (Ft) and maximum oscillation frequency (Fmax) response, for mmWave power amplifiers, with corresponding low noise characteristics for low-noise amplifiers, and with fast switching speed for RF switches.  The excellent Ioff and low Ron of the enhancement-mode GaN device is attractive for high-efficiency integrated voltage regulator designs, as well.

Gary highlighted the work done by the Intel Research team to develop monolithic heterogeneous integration of GaN devices with conventional Si CMOS circuitry. [2]

The figures below illustrate the fabrication of a variety of GaN components, fabricated in an epitaxial layer on the host wafer (a Si substrate) – e.g., enhancement-mode and depletion-mode nFETs, Schottky gate FETs, and Schottky diodes (without the high-k gate oxide dielectric).  A cross-section of the final structure is also shown.

In this case, the donor wafer is Si, used for fabricating nFET and pFET devices, as would be used for analog functions, digital signal processing, and logic/memory.  (P-channel GaN devices are extremely challenging to fabricate.)

Whereas the circuit-level CMOS integration of the previous Si nFET and Ge pFET monolithic approach necessitates consistent (and aggressive) design rules, the distinct applications for the (RF) GaN devices and (CMOS) Si devices decouple the two technologies.  The GaN devices can be much different in dimension from the Si FinFETs – e.g., W > 10um for very low Ron, or much longer channel lengths supporting high-voltage applications.

As with the host Si nFETs fabricated prior to bonding the donor Ge pFET layer, the GaN devices must tolerate the thermal budget of the subsequent donor Si layer transfer and nFET/pFET device fabrication.

Representative Ids versus Vg curves for the (long-channel) GaN enhancement-mode and depletion-mode nFET devices are shown below, along with the Si nFET and Si pFET device characteristics fabricated in the donor layer.

Summary

The next evolution in Moore’s Law from FinFET devices will be GAA topologies.  The opportunity to continue Moore’s Law may indeed be facilitated by 3D monolithic integration, extending the bonded layer transfer technology used for SOI wafer fabrication to a wider variety of semiconducting materials, such as Ge and GaN.  This will help alleviate the risks associated with the introduction of “beyond CMOS” materials processing.

It will be extremely interesting to track the progress and innovations in vertical stacking of devices of various types, for applications ranging from high-performance computation to high-frequency RF signal processing.

Epilogue

A passing comment at the ASMC from a member of the academic community caught my attention.   He said, “I’m seeing a diminished interest among students in pursuing microelectronics as an area of study.  They hear that ‘Moore’s Law is dead’, and conclude the field has stagnated.” 

Frankly, I cannot recall a time when there have been more opportunities for major advances in device research, processing technology, and circuit/systems applications development than at present.  If you are a student reading this article, please realize that there are many exciting careers ahead in extending Moore’s Law.

-chipguy

References

[1]  Rachmady, W., et al., “300mm Heterogeneous 3D Integration of Record Performance Layer Transfer Germanium PMOS with Silicon NMOS for Low Power High Performance Logic Applications”, IEDM, 2019, p. 29.7.1 – 29.7.4.

[2]  Then, Han Wui, et al., “GaN and Si Transistors on 300mm Si(111) enabled by 3D Monolithic Heterogeneous Integration”, 2020 VLSI Symposium, paper THL.2.


Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads
by Kalar Rajendiran on 05-17-2021 at 10:00 am


During the week of April 19th, the Linley Group held its Spring Processor Conference 2021. The Linley Group has a reputation for convening excellent conferences, and this year’s spring conference was no exception. There were a number of very informative talks from various companies updating the audience on the latest research and development work happening in the industry. The presentations were categorized under eight subject areas: Edge AI, Embedded SoC Design, Scaling AI Training, AI SoC Design, Network Infrastructure for AI and 5G, Edge AI Software, Signal Processing, and Efficient AI Inference.

Artificial Intelligence (AI) as a technology has garnered a lot of attention and investment over recent years. The conference certainly reflected that in the number of subject areas relating to AI. Within the broader category of AI, Edge AI had an outsized share of presentations, and justifiably so. Edge computing is seeing rapid growth driven by IoT, 5G and other low-latency applications.

One of the presentations within the Edge AI category was titled “Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads.” The talk was given by Chris Lattner, President, Engineering and Product at SiFive, Inc. Chris made a strong case for why SiFive’s solution, based on RISC-V vector extensions, is a great fit for AI-driven applications. The following is my take.

Market Requirements:

As fast as the market for edge computing is growing, the performance and power requirements of these applications are also getting more and more demanding. Many of these applications are AI-driven and fall into the category of machine learning (ML) workloads. AI adoption is pushing processing requirements more toward data manipulation than general-purpose computing. Deep learning underlies ML models and involves processing large arrays of data. With ML models evolving fast, an ideal solution is one that optimizes for performance, power, ease of incorporating emerging ML models, and the scope of the resulting hardware and/or software changes.

RISC-V Vector Advantage:

The original motivation behind the initiative that gave us the RISC-V architecture was experimentation: developing chip designs that yield better performance in the face of the expected slowdown of Moore’s law. RISC-V is built upon the idea of being able to tailor-make chips, choosing which instruction set extensions to use. Vector extensions allow processing of vectors of any length on hardware which processes vectors of fixed length. Vector processing enables existing software to run without a recompile when hardware is upgraded in the form of more ALUs and other functional units. Significant progress has been made in terms of the established hardware base and the supporting ecosystem, such as compiler technologies.
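To illustrate the vector-length-agnostic idea in language-neutral terms, here is a minimal Python sketch of the strip-mining loop that RISC-V V’s vsetvli instruction implements in hardware. The fixed hw_vlen variable stands in for whatever vector length a given implementation provides; it is purely illustrative.

```python
# Vector-length-agnostic loop sketch: the code never hard-codes the hardware
# vector width, so the same logic scales to narrower or wider vector units.
def saxpy(a, x, y, hw_vlen=8):
    """y = a*x + y over vectors of any length, strip-mined by hw_vlen."""
    n = len(x)
    i = 0
    while i < n:
        vl = min(hw_vlen, n - i)        # like vsetvli: elements this iteration
        for j in range(i, i + vl):      # one vector operation's worth of work
            y[j] = a * x[j] + y[j]
        i += vl
    return y

print(saxpy(2.0, [1, 2, 3, 4, 5], [10, 10, 10, 10, 10]))  # [12, 14, 16, 18, 20]
```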

RISC-V can be optimized for a particular domain or application through custom extensions. And as an open-standard instruction set architecture, RISC-V gives users a lot of flexibility in choosing a supplier for their chip design needs.

SiFive’s Offering:

SiFive has enhanced the RISC-V vector advantage by adding new vector extensions that accelerate execution of many different neural network models. Refer to Figure 1 for an example of the speedup that can be gained using SiFive’s add-on extensions compared to just the base vector extensions of RISC-V. Its Intelligence X280 is a multi-core-capable RISC-V vector solution (hardware and software) that makes it easy for customers to implement optimized Edge AI applications. The solution can also be used to implement data center applications.

Figure 1:

 

SiFive Advantage:

  • SiFive’s Intelligence X280 solution fully supports TensorFlow and TensorFlow Lite open-source platforms for machine learning (Refer to Figure 2)
  • SiFive provides an easy way to migrate customers’ existing code based on other architectures to the RISC-V vector architecture. For example, SiFive can translate ARM Neon code to RISC-V V assembly code
  • SiFive allows its customers to explore adding custom extensions to their RISC-V implementations
  • SiFive through its OpenFive business unit extends custom chip implementation services to address domain-specific silicon needs

 

Figure 2:

 

Summary:

In a nutshell, SiFive customers can easily and rapidly implement their applications, whether those applications involve Edge AI workloads or traditional data center workloads. If you are interested in accelerating the performance of your ML workloads, I recommend you register and listen to Chris’ entire talk and then discuss with SiFive ways to leverage their different offerings for developing your products.

Also Read:

Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets

Enabling Edge AI Vision with RISC-V and a Silicon Platform

WEBINAR: Differentiated Edge AI with OpenFive and CEVA


Webinar: Challenges in creating large High Performance Compute SoCs in advanced geometries
by Daniel Nenni on 05-17-2021 at 6:00 am


When we think about Compute and AI SoCs, we often focus on the huge numbers of calculations being carried out every second, and the ingenious IPs that are able to reach such high levels of performance. However, there is also a significant challenge in keeping the vast quantities of data flowing around the chip, which is solved by using a Network on Chip (NoC). In this webinar, we will be discussing some of the challenges involved in developing such NoCs, and what we can do to overcome them.

REGISTER HERE

NoCs are very complex IPs which touch almost every part of an SoC. They are intrinsically linked to the chip’s floorplan, architecture, functional requirements, startup, security, safety and many other aspects. The functional correctness and performance capabilities of a NoC can also be time-consuming and difficult to verify. High-performance NoCs can also take up significant area on a chip.

All of this means that there can be a high likelihood that the NoC will suffer change through the life of the project, and this change can ultimately disrupt the floorplan, and therefore have a significant impact on the whole chip.

To try and reduce the probability of this disruption happening, we use various tools to allow us to carry out performance exploration and verification early in the process. By securing the requirements early on, and being able to quickly verify NoC spins meet those requirements, we can also stabilize the floorplan, and reduce unnecessary churn in the design.

REGISTER HERE

Webinar abstract: The challenges in creating AI and High Performance Compute chipsets are not only limited to those around developing IPs that can carry out large numbers of calculations per second. To allow these number-crunching IPs to do their calculations also requires increasingly large volumes of data to be moved around the SoC at high speed. Sondrel explains how this can be done with a customized Network on Chip (NoC) solution.

What you’ll learn: The challenges and solutions for developing a Network on Chip as part of a large, complex SoC.

Who should attend: People working on or commissioning large SoCs.

Presenter: Ben Fletcher is Director of Engineering at Sondrel and is involved in all aspects of SoC development from initial customer engagement through to bring-up and validation. He has over 20 years of experience primarily in ASIC and SoC development within the consumer electronics market, specializing in architecture of Audio, Video and AI chipsets.

About Sondrel™
Founded in 2002, Sondrel is the trusted partner of choice for handling every stage of an IC’s creation. Its award-winning, define and design ASIC consulting capability is fully complemented by its turnkey services to transform designs into tested, volume-packaged silicon chips. This single point of contact for the entire supply chain process ensures low risk and faster times to market. Headquartered in the UK, Sondrel supports customers around the world via its offices in China, India, France, Morocco and North America. For more information, visit www.sondrel.com

Also Read:

Sondrel Explains One of the Secrets of Its Success – NoC Design

SoC Application Usecase Capture For System Architecture Exploration

Sondrel explains the 10 steps to model and design a complex SoC


The Quest for Bugs: Bugs of Power
by Bryan Dickman on 05-16-2021 at 10:00 am


Shooting beyond the hill…

In former times (think WW1 before GPS and satellites), an artillery battery trying to shell targets out of sight behind a hill would have to rely on an approximate grid reference and a couple of soldiers on top of the hill (who could see the target) to tell them where the shots were landing. These range finders would semaphore back to the battery telling them to adjust range and direction until they were on target. A process of successive refinement, which shares much in common with the search for Power Bugs!

Trying to identify where to find power bugs suffers from similar limitations; very often the target area may not be known at all, or at best only approximated. Surveys show that over 80% of designs are now actively managing power in some way. Missing your power targets can be catastrophic, especially if this is only realized when you get silicon back. Power has never been more important.

Power Matters! Applications drive real power needs…

A product-level power requirement example might be that the user should be able to watch Netflix for 15 hours on a full charge. This determines the chip-level power targets, which in turn determine the sub-system and block-level power targets. To meet this “15 hours of Netflix” requirement, you will need to be able to perform power analysis from the chip level downwards and validate that power targets are being met under the conditions of this real workload. For complex ASIC devices, you really need to run the full system software in order to see what is going on power-wise under the target operating conditions.
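To see how such a product requirement cascades into a chip-level budget, consider a back-of-the-envelope calculation; every number below is purely illustrative, not taken from any real product.

```python
# Hypothetical numbers for illustration only.
battery_wh = 15.4        # e.g., a 4,000 mAh pack at 3.85 V nominal
target_hours = 15        # the "15 hours of Netflix" requirement
system_budget_w = battery_wh / target_hours     # ~1.03 W for the whole system

soc_share = 0.5          # assume the SoC is allotted half the system budget
soc_budget_mw = system_budget_w * soc_share * 1e3
print(f"SoC average power budget: {soc_budget_mw:.0f} mW")  # ~513 mW
```

That single system-level number then gets subdivided across sub-systems and blocks, which is why the validation has to start at the chip level and work downwards.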

Short, directed test sequences cannot accurately predict how the device will behave under more complex conditions.

Hit the target – Why use emulation for power analysis?

When it comes to finding power bugs, the first step is to find the right platform. All SOC/ASIC product developments are a combination of software development and hardware development. Power management capabilities are provided by the hardware and controlled by the software, so both must be validated, and the power bugs can be on both sides.

Hardware emulation systems are effective and performant platforms for: software development in advance of available hardware reference boards; system-level validation, demonstrating that the hardware with the target firmware/software delivers the required capabilities; system-level verification, bug searching/hunting from a system context; and system-level power verification (achieved through power analysis capabilities). We will talk about the role of emulation not only for the well-known generation of design activity but also from the perspective of an integrated power analysis flow.

Power analysis using emulators – with multi-MHz performance, support for checkpoint-restore, system-level debug, and fast power analysis turnaround times – is the only way to achieve this. Running the full system or substantial sub-system software enables designers to perform:

fast “silicon-approximate” power consumption analysis on real-world system workloads

So, what are Power Bugs?

We consider power bugs in the context of any error (hardware or software) in the product, whether it results in an observable functional error (a more traditional bug, perhaps) or a failure to meet power consumption (or power drain) objectives and targets. Either way, the result is the same: an error or omission in either the hardware (RTL, power intent (UPF) or implementation) or the power management software (firmware/device drivers), which must be fixed. The fix must then be re-validated.

We are treating both functional power bugs and power consumption problems as “Power Bugs”. Power Bugs really fall into two categories…

Power Management causes a functional failure

This category of power bugs is related to function, but functionality that is associated with power management and is described by both RTL and power intent code. Examples could be an incorrect behavior in power controller logic (which may contain complex finite-state machines and control logic), a functional error in clock-gating logic, an error in power domain logic (or associated isolation or retention logic), or an error in Dynamic Voltage and Frequency Scaling (DVFS) control logic.

A missed critical low-power bug could result in functional failure of your first silicon – the worst scenario.

That’s a very expensive mistake, but it has happened!

Too much power is consumed

For those Power Bugs that do not present as an observable functional error, you really have to adopt a power analysis approach to Power Bug hunting. We refer to these bugs as “power consumption bugs”. They are errors in the measured power drain relative to the expected or estimated power, possibly arising from errors, omissions or missed opportunities in either the RTL or the power intent (UPF).

Serious power consumption bugs can render the final product less-competitive, or non-viable in the worst case.

Imagine that you have implemented a range of low-power capabilities and tested them all thoroughly, all the software is working, but the first silicon is measured to be consuming 30% more power than expected. That has also happened!

Classes of Power Bugs

Given the two categories described above, we can further enumerate Power Bugs into the following bug classes:

Modern ASIC power verification is many-layered. Low-power architectures must be considered and evaluated right at the start of the specification and high-level design processes. It cannot be left as an implementation step, as that will be far too late! Many traditional verification workflows have been enhanced to be “power-aware”, and some new ones have been created to statically analyze power intent. It can start long before RTL is written, using power-aware virtual prototyping. With a VP, you have a capability to dynamically explore different low-power architectures at the system level whilst running development power management software. While accurate power analysis is not possible, relative estimates of power consumption are. It enables designers to explore the hardware/software split, develop and debug early power management software, and use high-level power intent to model power architectures.

Power aware verification…virtually

Early power estimates can feed-forward to the RTL development workflows.

As early RTL is developed and the power intent is refined, you need workflows that will enable early RTL power estimation, so that you can keep track of power consumption throughout development and use power analysis to refine microarchitecture design choices. There are power-aware static analysis workflows and power-aware simulation workflows that support this phase of development. As we know, simulation testbench environments are well suited to short directed and constrained-random test vectors, with the benefit of coverage and assertions, and with a gold-standard debug environment; running the real software is not generally feasible. However, there will be a class of Power Bug that can only be found when you are running the real software in a system environment that emulates realistic I/O traffic, with realistic power transition sequences under the control of the actual (or close to actual) system power management software. We will refer to this as “software-driven power verification”.

Why Software-Driven Power Verification?

ASIC pre-silicon validation demands that at some point you validate the full system hardware model with the target firmware and software. Emulation offers fast initial compile and bring-up, fast turnaround time as the RTL changes, advanced RTL debug capabilities, and up to a few MHz of runtime performance. This is enough to compile your RTL, boot your OS, run applications and perform RTL debug well within the working day, opening up the ability to find power bugs that are only observed when running system software test workloads.

Source – Acuerdo and Valytic Consulting

Software test workload #4 above clearly shows some unexpected power consumption that might arise from two different power bugs. The root cause could be hardware or power management software; debug will determine.

Scale-up and Speed-up

Emulators can model large design sizes which enables you to,

“scale-up” verification to the full system.

In addition, modern emulators provide power analysis workflows that allow you to extend your power analysis from the constraints of short simulation sequences, to the billions of cycles consumed when running the actual software enabling the user to,

“speed-up” detailed power analysis over large samples of system software.

Users generally need the following capabilities and outputs that emulation power analysis offers.

  1. Visualization of power activity across billions of cycles. Accuracy is less critical, but relative activity should accurately guide the user to identify power-critical time-windows of interest.

Source – Acuerdo and Valytic Consulting

  2. Ability to measure the average power for power-critical time-windows (see the sketch after this list). Emulator power analysis workflows do this by generating standard Switching Activity Interchange Format (SAIF) data which can be processed by power sign-off tools to compute average power data.
  3. Calculation of cycle-by-cycle power waveforms for power-critical time-windows, in order to debug Power Bugs and identify shorter power sign-off windows.
  4. Calculation of peak power. Emulator power analysis workflows do this by generating waveform files (e.g., VCD, FSDB) that can be processed by power sign-off tools to compute peak power.

Source – Acuerdo and Valytic Consulting

  5. Ability to increase the fidelity of the analysis by performing the power analysis using RTL, post-synthesis netlists, and post place-and-route netlists. Post place-and-route analysis should be able to consume real timing data (SDF), real parasitic data (SPEF) and technology library data (.lib) to achieve full-accuracy power sign-off.
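To make item 2 concrete, below is a minimal sketch of the windowed average-power idea: given a per-cycle (or per-sample) power trace, compute the average per window and flag candidates for deeper sign-off analysis. This is not a vendor tool flow; the trace file, its format, and the 250 mW budget are all hypothetical.

```python
# Windowed average power over a hypothetical "cycle,power_mw" CSV trace.
import csv

WINDOW = 1_000_000  # cycles per analysis window (illustrative)

def window_averages(trace_path):
    """Yield (window_index, average_mw) over the trace."""
    total, count, idx = 0.0, 0, 0
    with open(trace_path) as f:
        for row in csv.reader(f):
            total += float(row[1])
            count += 1
            if count == WINDOW:
                yield idx, total / count
                idx, total, count = idx + 1, 0.0, 0
    if count:
        yield idx, total / count

for idx, avg in window_averages("power_trace.csv"):
    if avg > 250.0:  # flag windows above a hypothetical 250 mW budget
        print(f"window {idx}: {avg:.1f} mW - candidate power-critical region")
```

A production flow does the same slicing at terabyte scale on a compute grid, which is why turnaround time matters, as discussed next.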

Turnaround time is critical

What matters here is the end-to-end turnaround time from RTL compile to running the payload, and then extracting, processing and analyzing the power data. Fast turnaround times make many iterative power analyses feasible. Emulation power analysis can generate terabyte volumes of data per run. This data has to be processed using scalable compute grids to slice and process the data in order to generate the average power and the cycle power curves.

This data-processing part of the workflow is time-critical and requires scalable compute.

Booting the operating system could account for the first 30 billion cycles, and that might not be the region of most interest system activity-wise. You might then be looking at billions of further cycles to analyze interesting power-critical windows where more functionality is active.

Emulation checkpoint and restore capability allow you to jump to a power-critical timepoint of interest for targeted power analysis.

Developers need to be able to cycle through multiple turns of the power verification workflow per day. The hardware developers may need to re-verify a change to the RTL or the power intent. The software team may need to re-verify a patch to the power management code and get the result back within the same day. Additionally, it is highly likely that there are multiple power-critical windows that need to be analyzed when performing power analysis over multi-billion-cycle windows.

Software-driven power verification using emulation is the only way to achieve this pre-silicon, by running the full system (or substantial sub-system) software.

Beyond using emulation, the only way to get closer to real systems is with actual silicon, by which time it is too late to make power design choices and to find power related bugs in the hardware.

Finally, some things to remember…

Plan to Succeed

As with any other verification challenge, you need to start with a test plan. What strategies are you going to apply to power validation and hunting for Power Bugs?

Don’t leave power verification to chance; brainstorm, review and refine your test plan just as you would for any other class of verification.

Power verification should be a chapter in an overall verification test plan, together with a decision on the tools and methodologies you are going to apply to the problem. There should be power targets or objectives that will need to be validated, and scenario planning of the set of sequences and power modes that need to be exercised.

Power regressions

Performance and power are in a constant trade-off. Increasing performance often implies adding more logic, replicating structures, and controlling fan-out with additional buffers to reduce logic depth between flops. Hence there is a need to run power analysis regressions to keep checking that iterative refinements of the design do not unintentionally degrade power and introduce power bugs.

Power data analytics

As with all other aspects of verification, power analysis regressions will generate power datasets that need to be stored, maintained, visualized and explored. Look at the trends for max power and average power results, both at the top-level and hierarchically, to track progress over time as the RTL code is developed and refined.

In order to improve, you have to measure.

The power metrics are just another dataset that you will need to track alongside all other design and verification metrics. Ideally, your data analytics platform will support the cross correlation of power metrics with other key metrics such as performance and area, bug rates, and code churn rates. When you look at all of these measurements in the round, great insights are possible.

Read the full whitepaper “Power Bug Quest” for a more detailed analysis of finding Power Bugs using software-driven power analysis.


What is a Data Lake?
by Ahmed Banafa on 05-16-2021 at 6:00 am


A “data lake” is a massive, easily accessible data repository for storing “big data”. Unlike traditional data warehouses, which are optimized for data analysis by storing only some attributes and dropping data below the level of aggregation, a data lake is designed to retain all attributes, especially when you do not yet know what the scope of the data or its use will be.

Data Lake vs. Data Warehouse

Data warehouses are large storage locations for data that you accumulate from a wide range of sources. For decades, the foundation for business intelligence and data discovery/storage rested on data warehouses. Their specific, static structures dictate what data analysis you could perform. Data warehouses are popular with mid- and large-size businesses as a way of sharing data and content across the team- or department-siloed databases. Data warehouses help organizations become more efficient. Organizations that use data warehouses often do so to guide management decisions—all those “data-driven” decisions you always hear about.

A data lake holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Each data element in a lake is assigned a unique identifier and tagged with a set of extended metadata tags. When a business question arises, the data lake can be queried for relevant data, and that smaller set of data can then be analyzed to help answer the question.
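A toy sketch of that pattern, with an in-memory catalog standing in for a real object store and catalog service, might look like the following; the paths and tags are invented for illustration.

```python
# Every object keeps its raw form, gets a unique identifier, and carries
# extended metadata tags that can be queried later (schema-on-read).
import uuid

catalog = {}  # object_id -> {"location": raw-data pointer, "tags": {...}}

def ingest(raw_path, **tags):
    """Register a raw file in its native format; no schema is imposed."""
    object_id = str(uuid.uuid4())
    catalog[object_id] = {"location": raw_path, "tags": tags}
    return object_id

def query(**wanted):
    """Return objects whose metadata tags match every requested key/value."""
    return [oid for oid, entry in catalog.items()
            if all(entry["tags"].get(k) == v for k, v in wanted.items())]

ingest("s3://lake/raw/clickstream-2021-05-01.json",
       source="web", kind="clickstream", date="2021-05-01")
ingest("s3://lake/raw/support-calls.wav", source="phone", kind="audio")

# Only when a business question arises do we select the relevant slice.
print(query(kind="clickstream"))
```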

Now that data storage and technology are cheap, information is vast, and newer database technologies don’t require an agreed-upon schema up front, discovery analytics is finally possible. With data lakes, companies employ data scientists who are capable of making sense of untamed data as they trek through it. They can find correlations and insights within the data as they get to know it.

 

 

Five key components of a data lake architecture:

1. Data Ingestion: A highly scalable ingestion-layer system that extracts data from various sources, such as websites, mobile apps, social media, IoT devices, and existing Data Management systems, is required. It should be flexible enough to run in batch, one-time, or real-time modes, and it should support all types of data along with new data sources.

2. Data Storage: A highly scalable data storage system should be able to store and process raw data and support encryption and compression while remaining cost-effective.

3. Data Security: Regardless of the type of data processed, data lakes should be highly secure, through the use of multi-factor authentication, authorization, role-based access, data protection, etc.

4. Data Analytics: After data is ingested, it should be quickly and efficiently analyzed using data analytics and machine learning tools to derive valuable insights and move vetted data into a data warehouse.

5. Data Governance: The entire process of data ingestion, preparation, cataloging, integration, and query acceleration should be streamlined to produce enterprise-level Data Quality. It is also important to track changes to key data elements for a data audit.

Like big data, the term data lake is sometimes disparaged as being simply a marketing label for a product that supports it. However, the term is being accepted as a way to describe any large data pool in which the schema and data requirements are not defined until the data is queried.

The data lake promises to speed the delivery of information and insights to the business community without the hassles imposed by IT-centric data warehousing processes.

Data Lake Advantages

  • Data Lake gives business users immediate access to all data
  • Data in the lake is not limited to relational or transactional data
  • With a data lake, you never need to move the data
  • Data Lake empowers business users, liberating them from the bonds of IT domination
  • Data Lake speeds delivery by enabling business units to stand up applications quickly
  • Helps fully with productionizing and advanced analytics
  • Offers cost-effective scalability and flexibility
  • Offers value from unlimited data types
  • Reduces long-term cost of ownership
  • Allows economic storage of files
  • Quickly adaptable to changes
  • The main advantage of a data lake is the centralization of different content sources
  • Users from various departments, who may be scattered around the globe, can have flexible access to the data

Data Lake Disadvantages

  • Unknown area of Data Processing
  • Data governance
  • Dealing with Chaos
  • Privacy issues
  • Complexity of Legacy Data
  • Metadata Lifecycle Management
  • Desolate Data Islands
  • The Issue of Integration
  • Unstructured Data may lead to Ungoverned and Unusable Data, Disparate and Complex Tools
  • Increases storage and compute costs
  • There is no way to get insights from others who have worked with the data because there is no account of the lineage of findings by previous analysts
  • The biggest risk of data lakes is security and access control. Data can be placed into a lake without any oversight, even though some of it may have privacy and regulatory requirements

The Future

There are many organizations that are making this approach a reality; the internal infrastructures developed at Google, Amazon, and Facebook provide their developers with the advantages and agility of the data lake dream. For each of these companies, the data lake created a value chain through which new types of business value emerged:

  • Using data lakes for web data increased the speed and quality of web search
  • Using data lakes for clickstream data supported more effective methods of web advertising
  • Using data lakes for cross-channel analysis of customer interactions and behaviors provided a more complete view of the customer
  • Data lakes can give retailers profitable insights from raw data, such as log files, streaming audio and video, text files, and social media content, among other sources, to quickly identify real-time consumer behavior and convert actions into sales. Such 360-degree profile views allow stores to better interact with customers and push on-the-spot, customized offers to retain business or acquire new sales.
  • Data lakes can help companies improve their R&D performance by allowing researchers to make more informed decisions regarding the wealth of highly complex data assets that feed advanced predictive and prescriptive analytics.
  • Companies can use data lakes to centralize disparate data generated from a variety of sources and run analytics and ML algorithms to be the first to identify business opportunities. For instance, a biotechnology company can implement a data lake that receives manufacturing data, research data, customer support data, and public data sets and provide real-time visibility into the research process for various user communities via different user interfaces.

Regardless of where you are now, take some time to look to the future. We’re on a journey towards connecting enterprise data together. As business increasingly becomes purely digital, access to data will become a critical priority, as will speed of development and deployment. The data lake is a dream that can match those demands. The global data lake market was valued at $7.9 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 20.6 percent to reach $20.1 billion by 2024.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: Prof. Banafa website



Podcast EP20: How to Secure Any Chip. Any Time. Any Place
by Daniel Nenni on 05-14-2021 at 10:00 am

Dan is joined by Pim Tuyls, founder and CEO of Intrinsic ID. Pim provides background on what a physically unclonable function (PUF) is and how Intrinsic ID developed the technology around SRAMs that are found on virtually all chips. Pim discusses the multiple applications for SRAM PUFs and how they are implemented. He concludes with a view of the future in this area.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


COO Interview: Michiel Ligthart of Verific
by Daniel Nenni on 05-14-2021 at 6:00 am


Today, SemiWiki profiles Verific Design Automation, perhaps the most popular company at DAC (when it’s an in-person event) because of its giveaway – a 10” stuffed giraffe for anyone who walks up to its booth and listens to its story.

But Verific is also a popular EDA company for more reasons than its tradeshow giveaway. If you’re using any type of FPGA implementation or EDA verification tool, Verific is probably inside. That’s because it is the leading provider of SystemVerilog, Verilog, VHDL and UPF parsers and elaborators used by startups, emerging companies and some of the best-known semiconductor companies worldwide. Parser platforms from Verific eliminate costly internal development of front-end EDA software, accelerating time to market with vastly improved quality. Need proof? Verific’s licensees have shipped more than 80,000 copies of its software since it was founded in 1999.

The image of a giraffe is used as a mascot and recurring theme. On its website, one cheeky giraffe leans her head over the logo, a nod to Verific’s stature in the industry. Several of its employees tower over many of us, including Michiel Ligthart, Verific’s president and COO, who stands at about 6’4” and whom I recently interviewed about the company and its success. Before we began, Michiel wanted to emphasize that although this segment is called “the CEO interview,” he is not Verific’s CEO. Verific’s founder and CTO Rob Dekker (also 6’4”, by the way) is the CEO of record.

Once you finish reading my interview with Michiel, you will understand why Verific has the well-earned reputation of being, “Head and shoulders above the rest.”

What brought you to semiconductors?
I was studying Electrical Engineering at Delft University of Technology in the Netherlands. I was interested in digital design and took a new course based on “Introduction to VLSI Systems” by Carver Mead and Lynn Conway. That’s how I learned about chip design in general. Later on, I got an internship at Philips Research in the fields of semiconductor test and EDA, and afterward went to Signetics Research Labs in California. Early on, I was involved with research activities in logic synthesis, including a stint as a visiting scholar at the Center for Integrated Systems at Stanford University.

What is the Verific backstory?
Rob founded Verific in 1999 after working for Exemplar Logic, which was acquired by Mentor Graphics (now Siemens EDA) in 1995. Rob liked the startup experience and, looking for a change of pace, wanted to start a company of his own. He had an idea about developing verification software but no specific application in mind. While working through his idea, he knew that whatever he did, he would need a Verilog parser, and he began building one. Several EDA companies asked to license the parser, and that became the business. Here I need to give a shout out to Real Intent, one of those early adopters and still a licensee twenty years later!

Lawrence Neukom was Rob’s first hire, a year or so after Verific’s founding. I recommended Lawrence, whom I knew from Theseus Logic, an asynchronous logic startup and also an early Verific customer. I joined a few years later.

Verific celebrates its 22nd birthday this year. What is the secret to your success and core strengths?
We credit our longevity and our great success to our corporate culture. We believe we have a primary responsibility to our customers, secondly to our employees, and thirdly to the environment in which we operate at large, not just the semiconductor industry. If we fulfill all these responsibilities, we automatically fulfill our commitment to shareholders.

We emphasize the quality of our product, and as our many satisfied customers prove, we have succeeded. Our attention to customer needs and requirements through first-class support is an important part of that success. Customers must feel we are part of their R&D team.

What customer challenges are you addressing?
We have an interesting challenge. We implement IEEE standards. EDA companies that do not use our parsers may at times have a slightly different implementation or interpretation of a standard in their simulators or synthesis tools. We could claim that we got it right and everybody else has to change, but that would not be very helpful to the end-users. Instead, we try to match the other tool’s behavior so the end-user is not dead in the water. The challenge is that we only find this out by trial and error: our customers find these mismatches in the field, report them to us, and we try to be compliant with both the IEEE standard and the other EDA tool.

I recently reported on the DARPA toolbox. How will Verific’s relationship with DARPA help the industry?
We have always had an academic program, though admittedly an ad hoc one, where we provide linkable library access. Over the years, several interesting university projects have used Verific. DARPA is helping by putting a formal process in place that gives U.S. academic projects access to EDA software. We already have our first engagement through the DARPA toolbox, with the University of Utah.

Our agreement with DARPA is not part of the open-source movement.

What other trends are you seeing?
We see a really big EDA push out of China as it builds up its design infrastructure. In the last 12 months, we closed four new licenses there. A host of semiconductor companies are getting funding here in the U.S., which bodes well for EDA companies as well.

Another big growth area for us is INVIO, our Python-based, higher-level API set for SystemVerilog, VHDL, and UPF flows, built on top of our traditional Verific platform. INVIO is especially useful for companies that are developing their own design methodologies, not necessarily supported by off-the-shelf tools. AI chip design groups with strict power, test and scalability requirements are able to build their own methodologies and flows. Custom processors for large compute farms are another good example.
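As context for how such a methodology layer gets used, here is a purely hypothetical sketch in Python; none of these names come from INVIO, whose actual API is not shown here. It walks a toy parsed-design data structure to enforce one custom rule:

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    cell: str      # library cell or module name
    name: str      # instance name

@dataclass
class Module:
    name: str
    instances: list = field(default_factory=list)

def flops_without_reset(design, flop_cells, resettable_cells):
    """A custom methodology rule: flag flip-flop instances whose cell type
    has no reset, the kind of check an AI-chip group with strict power and
    test requirements might enforce."""
    offenders = []
    for module in design.values():
        for inst in module.instances:
            if inst.cell in flop_cells and inst.cell not in resettable_cells:
                offenders.append(f"{module.name}/{inst.name}")
    return offenders

# Toy design: one flop with a reset pin (DFFR), one without (DFF).
top = Module("top", [Instance("DFF", "u0"), Instance("DFFR", "u1")])
print(flops_without_reset({"top": top}, {"DFF", "DFFR"}, {"DFFR"}))  # ['top/u0']
```

The appeal of this style is that the parser and elaborator supply a reliable design model, so teams write only the rule itself rather than front-end language infrastructure.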

How has the pandemic affected Verific and its customers?
Not at all. When the pandemic struck, we closed our offices and took our laptops home. We already had VPN networks in place and had been Zoom users for several years. We secured some additional cloud backup and didn’t miss a beat. Our customers didn’t seem to suffer any significant negative effects either.

However, not everyone was that fortunate. Restaurants and coffee shops that we normally patronize around our offices in Alameda, Calif., and Kolkata, India, were very much affected, along with many other people in the service industries. As a company where no paycheck was ever in jeopardy, we decided to make significant contributions to food banks in those two communities.

As an entrepreneur, what advice would you give someone founding a startup or thinking about starting one?
If you have a good idea, go ahead and try it. If you do, always listen to your customers. That said, also do not always listen to your customers. You will know when to listen and when to do it your way.

Also Read:

CEO Interview: Srinath Anantharaman of Cliosoft

CEO Interview: Rich Weber of Semifore, Inc.

CEO Interview: Dr. Rick Shen of eMemory


Your IP Portfolio is Probably Leaking. What Can You Do About It?

Your IP Portfolio is Probably Leaking. What Can You Do About It?
by Mike Gianfagna on 05-13-2021 at 2:00 pm

Your IP Portfolio is Probably Leaking What Can You Do About It

This topic is inspired by a presentation given at last year’s DAC by Methodics, now part of Perforce. The issues raised in the original presentation are still quite relevant in the current business climate. IP leakage is something everyone should consider as part of their normal business operations. Your design IP really IS the essence of your business, and you should protect it. Let’s assume for the sake of argument that your IP portfolio is probably leaking. What can you do about it?

We live in a highly connected and distributed world which fosters lots of cracks and holes that can create leaks. Now that Methodics is part of Perforce, the focus on this problem has become broader and more comprehensive. Check out a podcast Dan and I recently did that explains the story behind combining Methodics and Perforce if you’d like some more context.

Let’s begin with IP export rules. Export Administration Regulations (EAR) cover commercial and government items and International Traffic in Arms Regulations (ITAR) cover export of defense items. These rules govern IP export and are much broader than most people realize. To set the stage, IP is considered exported if:

  • It is shipped internationally into a restricted geography
  • It is ‘conveyed’ to a foreign national of a restricted geography even within the US
  • It is released from one foreign (potentially unrestricted) geography into a restricted geography

You should start seeing lots of shades of gray at this point. Let’s look at some definitions of IP leakage:

  • IP can leak if it is “exported” as defined by EAR without a valid export license
  • IP can leak if it is covered by ITAR and is “disclosed” to an unauthorized person
  • Beyond EAR and ITAR, IP can leak if it is considered confidential or “trade secret” within the organization, and is exposed beyond employees who need to know

The DAC presentation from Methodics had an informative graphic that portrayed what could happen during a routine trip (assuming we ever travel again). The scenario described is quite familiar:

Travel Abroad Scenario

“Accidental export” can result in heavy fines and significant business disruption. According to the Bureau of Industry and Security, export control fines since 2000 total $2.2B. Based on publicly available information, semiconductor companies such as Intel, Dell/EMC, Lattice, Infineon and Maxim have been impacted. This can happen to you and your company.

What can be done to avoid all this? As you might imagine, Perforce and Methodics have some excellent ideas. To start with, IP access control needs to be consolidated: no more multiple, disjoint systems across the enterprise, since there is simply no way of ensuring a consistent approach in that situation. Some of the key points offered in the Methodics DAC presentation include the following (a minimal sketch follows the list):

  • Consolidate Access Control with IP-centric processes
  • IP R/W permissions drive access control to all systems
  • Lightweight directory access protocol (LDAP) / Active Directory (AD) group membership is managed in one place and automatically propagates across the enterprise
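
As a minimal sketch of the idea, assuming illustrative names rather than the Methodics IPLM API, directory group membership can be the single input to every read/write decision:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    groups: set          # LDAP/AD group memberships: the single source of truth

@dataclass
class IPRecord:
    name: str
    read_groups: set     # groups granted read access
    write_groups: set    # groups granted write access

def can_read(user, ip):
    return bool(user.groups & ip.read_groups)

def can_write(user, ip):
    return bool(user.groups & ip.write_groups)

# Every downstream system (data management, build, release) answers access
# questions through the same records, so removing a user from an LDAP/AD
# group revokes access everywhere at once.
alice = User("alice", {"serdes-designers"})
serdes = IPRecord("serdes-phy",
                  read_groups={"serdes-designers", "serdes-leads"},
                  write_groups={"serdes-leads"})
print(can_read(alice, serdes), can_write(alice, serdes))   # True False
```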

Graphically, it looks like this:

Consolidated IP Access

The concept of geofencing is useful here as well. Geofencing restricts IP availability in certain geographies regardless of access. An IP Lifecycle Management (IPLM) system can enable geofencing with the following capabilities:

  • Allow IPs to carry ‘include’ and ‘exclude’ lists of geographies
  • Restrict IPs, regardless of access, based on these lists
  • IP data cannot reside in user workspaces or caches in these restricted geographies
  • IP meta-data cannot be visible or extracted in these geographies

IPLM systems should also preserve IP hierarchies, preventing IPs from being flattened into a single object or code base. If hierarchies are flattened, insidious embedded IPs can be exported without the user’s knowledge. Preserving hierarchy also allows IPs and their versions to be tracked even when used as part of another hierarchy. Note that the rest of the hierarchy may be “safe,” yet key IPs still need to be restricted. A centralized approach is the way to accomplish this; the sketch below illustrates both the geofencing lists and the hierarchy-aware check.
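
Here is a minimal sketch of both mechanisms together, with hypothetical names rather than any real IPLM API: each IP carries include/exclude geography lists, and the check walks the preserved hierarchy:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IP:
    name: str
    include_geos: Optional[set] = None      # None means available everywhere
    exclude_geos: set = field(default_factory=set)
    children: list = field(default_factory=list)

def allowed_in(ip, geo):
    if geo in ip.exclude_geos:
        return False
    return ip.include_geos is None or geo in ip.include_geos

def restricted_ips(ip, geo):
    """Walk the preserved hierarchy and collect every IP, embedded or not,
    that must not land in a workspace or cache in this geography."""
    blocked = []
    if not allowed_in(ip, geo):
        blocked.append(ip.name)
    for child in ip.children:
        blocked.extend(restricted_ips(child, geo))
    return blocked

# The top level may be "safe", but a restricted embedded IP is still caught
# because the hierarchy was never flattened away.
crypto = IP("crypto-core", exclude_geos={"restricted-geo"})
soc = IP("soc-top", children=[IP("cpu-subsystem"), crypto])
print(restricted_ips(soc, "restricted-geo"))   # ['crypto-core']
```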

Stopping IP leakage requires a comprehensive system, and it cannot be solved by any single action. The overall goal is to know who used which version of which IP in what context at all times. You can learn more about how Methodics can help you manage your IP throughout your development lifecycle in their upcoming webinar, IP Security: Protecting Your Most Important IP Assets With Methodics IPLM. Your IP portfolio is probably leaking, and now you know what you can do about it.

Also Read:

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud

Single HW/SW Bill of Material (BoM) Benefits System Development

A Brief History of Perforce


Developing Verification Flows for Silicon Photonics

Developing Verification Flows for Silicon Photonics
by Tom Simon on 05-13-2021 at 10:00 am

Silicon Photonics DRC

Silicon photonics is getting a lot of interest because it can be used in many applications to improve bandwidth, reduce power and provide novel functionality. It is especially interesting because it offers the ability to combine electronic and optical elements on the same die. Though it is fabricated with familiar silicon processes and techniques, it is a very different animal when it comes to design automation and verification. In a recent white paper, Siemens EDA argues that if methods similar to those used for Electronic Integrated Circuits (EICs) can be applied to the design and verification of Photonic Integrated Circuits (PICs), there could be explosive growth in their development and use.

The white paper, titled “Advancing silicon photonics physical verification through innovation,” describes how the development of process design kits and EDA tools enabled the exciting growth we have seen in semiconductors over the last few decades. With silicon photonics, it makes sense to follow a parallel path to expand its use. The authors make the point that we are definitely in the early stages of developing complete flows for PICs. Important first steps have been taken, despite the challenges: PICs break some of the foundational assumptions about EICs that made software tool development tractable.

The authors focus on PIC verification to make their point. While equivalents to LVS, DRC and DFM are needed, the existing EIC tools cannot be directly applied without specific modifications and adaptations. There are several examples of this. Because CMP is still used, there is a need for fill. Yet if traditional square fill shapes are used, they can affect waveguides, so PICs require circular fill shapes to avoid these issues. Fortunately, Calibre YieldEnhancer has features in its SmartFill tool that effectively deal with these new requirements around waveguides. This is just the tip of the iceberg; a toy sketch of the keep-out idea follows.
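
As a toy illustration (this is not Calibre SmartFill; all names and numbers are invented), circular fill can be placed on a grid while keeping every circle’s edge a set clearance away from a waveguide centerline:

```python
import math

def dist_to_segment(p, a, b):
    """Distance from point p to the segment a-b (all (x, y) tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def circular_fill(width, height, pitch, radius, centerline, clearance):
    """Centers for circular fill shapes on a grid, keeping each circle's
    edge at least `clearance` away from the waveguide centerline."""
    centers = []
    y = pitch / 2
    while y < height:
        x = pitch / 2
        while x < width:
            d = min(dist_to_segment((x, y), centerline[i], centerline[i + 1])
                    for i in range(len(centerline) - 1))
            if d - radius >= clearance:
                centers.append((x, y))
            x += pitch
        y += pitch
    return centers

wg = [(0.0, 10.0), (40.0, 10.0)]   # a straight waveguide centerline (um)
print(len(circular_fill(40, 20, 2, 0.5, wg, 3.0)))   # 120 circles survive
```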

Silicon Photonics DRC

The Siemens EDA white paper has a section that discusses how DRC needs to work for PICs. PIC-based devices tend to be curvilinear, and the piecewise-linear approximations used by traditional DRC tools lead to inaccuracies. The Calibre nmPlatform supports equation-based DRC, which is used to apply complex checks that make allowances to eliminate false errors. The one catch is that this requires modifying the foundry rule deck. To avoid this, Siemens introduced a methodology that hands off traditional DRC violations to an Auto-Waiver processing step that differentiates true errors from false errors. The result is that designers can get accurate results within a single nmDRC run.
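
To see why linearization creates false errors, consider a toy example with invented numbers, not Calibre syntax: two concentric circular waveguide edges with a true spacing of 0.6 µm must satisfy a 0.5 µm minimum-spacing rule.

```python
import math

R1, R2 = 10.0, 10.6     # radii of the two facing waveguide edges (um)
MIN_SPACE = 0.5         # minimum spacing rule (um)

# Equation-based view: the spacing between concentric circles is exact.
true_space = R2 - R1
print(true_space >= MIN_SPACE)     # True: no real violation

# Piecewise-linear view: approximate the outer edge with an inscribed
# 16-gon. Its vertices sit on the circle, but each chord sags inward by
# the sagitta, so the measured edge-to-edge spacing shrinks.
N = 16
sagitta = R2 * (1 - math.cos(math.pi / N))
approx_space = (R2 - sagitta) - R1
print(approx_space >= MIN_SPACE)   # False: a false violation from linearization
```

A finer polygon reduces the error but inflates the database; an exact, equation-based check avoids the tradeoff entirely.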

Another important area is circuit verification. Siemens notes that PIC components are not natively recognized: each photonic component is essentially a custom device, which prevents automatic device recognition and parameter extraction. Waveguides are nominally analogous to the wires found in EICs, but their behavior and importance are completely different. The white paper provides several examples of these departures in device interpretation for PICs. Further compounding verification difficulties, there are often no corresponding schematics for PICs.

Siemens has explored interesting ways to validate PIC devices. One approach re-renders the device being verified and compares it to the on-chip geometry, which requires advanced pattern recognition algorithms. This area is clearly still under development, and we can expect to see more innovation.

The Siemens white paper offers a valuable discussion of this new area of design automation. As PIC automation matures, the use of photonics in ICs will grow, bringing its benefits to larger markets. The two flows, EIC and PIC, will remain distinct but may someday reach parity in ease of design. The white paper offers an interesting glimpse into what is available today and what lies ahead.

Also Read:

Formal Verification Approach Continues to Grow

Transistor-Level Static Checking for Better Performance and Reliability

Embedded Analytics Becoming Essential