
Webinar: Challenges in creating large High Performance Compute SoCs in advanced geometries

by Daniel Nenni on 05-17-2021 at 6:00 am


When we think about Compute and AI SoCs, we often focus on the huge numbers of calculations being carried out every second, and the ingenious IPs that are able to reach such high levels of performance. However, there is also a significant challenge in keeping the vast quantities of data flowing around the chip, which is solved by using a Network on Chip (NoC). In this webinar, we will be discussing some of the challenges involved in developing such NoCs, and what we can do to overcome them.

REGISTER HERE

NoCs are very complex IPs that touch almost every part of an SoC. They are intrinsically linked to the chip’s floorplan, architecture, functional requirements, startup, security, safety and many other aspects. The functional correctness and performance capabilities of a NoC can be time-consuming and difficult to verify, and high-performance NoCs can take up significant area on a chip.

All of this means there is a high likelihood that the NoC will change through the life of the project, and this change can ultimately disrupt the floorplan and therefore have a significant impact on the whole chip.

To reduce the probability of this disruption, we use various tools that allow us to carry out performance exploration and verification early in the process. By securing the requirements early on, and being able to quickly verify that NoC spins meet those requirements, we can stabilize the floorplan and reduce unnecessary churn in the design.


Webinar abstract: The challenges in creating AI and High Performance Compute chipsets are not only limited to those around developing IPs that can carry out large numbers of calculations per second. To allow these number-crunching IPs to do their calculations also requires increasingly large volumes of data to be moved around the SoC at high speed. Sondrel explains how this can be done with a customized Network on Chip (NoC) solution.

What you’ll learn: The challenges and solutions for developing a Network on Chip as part of a large complex SoC. Who should attend: People working on or commissioning large SoCs

Presenter: Ben Fletcher is Director of Engineering at Sondrel and is involved in all aspects of SoC development from initial customer engagement through to bring-up and validation. He has over 20 years of experience primarily in ASIC and SoC development within the consumer electronics market, specializing in architecture of Audio, Video and AI chipsets.

About Sondrel™
Founded in 2002, Sondrel is the trusted partner of choice for handling every stage of an IC’s creation. Its award-winning define-and-design ASIC consulting capability is fully complemented by its turnkey services to transform designs into tested, volume-packaged silicon chips. This single point of contact for the entire supply chain process ensures low risk and faster time to market. Headquartered in the UK, Sondrel supports customers around the world via its offices in China, India, France, Morocco and North America. For more information, visit www.sondrel.com

Also Read:

Sondrel Explains One of the Secrets of Its Success – NoC Design

SoC Application Usecase Capture For System Architecture Exploration

Sondrel explains the 10 steps to model and design a complex SoC


The Quest for Bugs: Bugs of Power

by Bryan Dickman on 05-16-2021 at 10:00 am


Shooting beyond the hill…

In former times (think WW1 before GPS and satellites), an artillery battery trying to shell targets out of sight behind a hill would have to rely on an approximate grid reference and a couple of soldiers on top of the hill (who could see the target) to tell them where the shots were landing. These range finders would semaphore back to the battery telling them to adjust range and direction until they were on target. A process of successive refinement, which shares much in common with the search for Power Bugs!

Trying to identify where to find power bugs suffers from similar limitations: very often the target area may not be known at all, or at best only approximated. Surveys show that over 80% of designs are now actively managing power in some way. Missing your power targets can be catastrophic, especially if this is only realized when you get silicon back. Power has never been more important.

Power Matters! Applications drive real power needs…

A product-level power requirement example might be that the user should be able to watch Netflix for 15 hours on a full charge. This determines the chip-level power targets, which in turn determines the sub-system and block-level power targets. To meet this “15 hours of Netflix” requirement, you will need to be able to perform power analysis from the chip-level downwards and validate that power targets are being met under the conditions of this real workload. For complex ASIC devices, you really need to run the full system software in order to see what is going on power-wise under the target operating conditions.
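As a back-of-the-envelope illustration of how a product-level requirement like this cascades into a chip-level budget, here is a minimal sketch. The battery capacity, voltage, and the share of power consumed outside the SoC are invented for illustration, not figures from the article:

```python
# Sketch: deriving a chip-level power budget from a product-level
# battery-life requirement. All battery/system numbers are assumptions.

def chip_power_budget_mw(battery_mah, battery_v, hours, non_chip_share):
    """Return the mW budget left for the SoC after display, radios, etc."""
    energy_mwh = battery_mah * battery_v          # total energy in mWh
    avg_power_mw = energy_mwh / hours             # average system power draw
    return avg_power_mw * (1.0 - non_chip_share)  # slice left for the SoC

# e.g. an assumed 4000 mAh, 3.7 V battery, 15 hours of playback,
# with 60% of the power going to display, radios, and audio:
budget = chip_power_budget_mw(4000, 3.7, 15, 0.60)
```

With these assumed numbers the SoC is left with roughly 400 mW of average power to deliver 15 hours of playback, the kind of target that then gets decomposed down to sub-system and block level.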

Short, directed test sequences cannot predict accurately how the device will behave under more complex conditions.

Hit the target – Why use emulation for power analysis?

When it comes to finding power bugs, the first step is to find the right platform. All SOC/ASIC product developments are a combination of software development and hardware development. Power management capabilities are provided by the hardware and controlled by the software, so both must be validated, and the power bugs can be on both sides.

Hardware Emulation systems are effective and performant platforms for:

  • software development, in advance of available hardware reference boards
  • system-level validation, demonstrating that the hardware with the target firmware/software delivers the required capabilities
  • system-level verification, bug searching/hunting from a system context
  • system-level power verification, achieved through power analysis capabilities

We will talk about the role of emulation not only for the well-known generation of design activity but also from the perspective of an integrated power analysis flow.

Power analysis using emulators, with multi-MHz performance, support for checkpoint-restore, system-level debug, and fast power analysis turnaround times, is the only way to achieve this. By running the full system or substantial sub-system software, designers can perform fast, “silicon-approximate” power consumption analysis on real-world system workloads.

So, what are Power Bugs?

We consider power bugs in the context of any error (hardware or software) in the product, whether it results in an observable functional error (a more traditional bug, perhaps) or a failure to meet power consumption (or power drain) objectives and targets. Either way, the result is the same: an error or omission in either the hardware (the RTL, power intent (UPF) or implementation) or the power management software (firmware/device drivers), which must be fixed. The fix must then be re-validated.

We are treating both functional power bugs and power consumption problems as “Power Bugs”. Power Bugs really fall into two categories…

Power Management causes a functional failure

This category of power bugs is related to function, but functionality that is associated with power management and is described by both RTL and power intent code. Examples include incorrect behavior in power controller logic (which may contain complex finite-state machines and control logic), a functional error in clock-gating logic, an error in power domain logic (or associated isolation or retention logic), or an error in Dynamic Voltage and Frequency Scaling (DVFS) control logic.

A missed critical low-power bug could result in functional failure of your first silicon: the worst scenario.

That’s a very expensive mistake, but it has happened!

Too much power is consumed

For those Power Bugs that do not present as an observable functional error, you really have to adopt a power analysis approach to Power Bug hunting. We refer to these bugs as “power consumption bugs”. They are errors in the measured power drain in relation to the expected or estimated power, possibly arising from errors, omissions or missed opportunities in either the RTL, or power intent (UPF).

Serious power consumption bugs can render the final product less-competitive, or non-viable in the worst case.

Imagine that you have implemented a range of low-power capabilities and tested them all thoroughly, all the software is working, but the first silicon is measured to be consuming 30% more power than expected. That has also happened!

Classes of Power Bugs

Given the two categories described above, Power Bugs can be further enumerated into finer-grained bug classes.

Modern ASIC power verification is many-layered. Low-power architectures must be considered and evaluated right at the start of the specification and high-level design processes. It cannot be left as an implementation concern, as that will be far too late! Many traditional verification workflows have been enhanced to be “power-aware”, and some new ones have been created to statically analyze power intent.

Power verification can start long before RTL is written, using power-aware virtual prototyping. With a VP, you have the capability to dynamically explore different low-power architectures at the system level whilst running development power management software. While accurate power analysis is not possible, relative estimates of power consumption are. It enables designers to explore the hardware/software split, develop and debug early power management software, and use high-level power intent to model power architectures.

Power aware verification…virtually

Early power estimates can feed-forward to the RTL development workflows.

As early RTL is developed and the power intent is refined, you need workflows that enable early RTL power estimation, so that you can keep track of power consumption throughout development and use power analysis to refine microarchitecture design choices. There are power-aware static analysis workflows and power-aware simulation workflows that support this phase of development. As we know, simulation testbench environments are well suited to short directed and constrained-random test vectors, with the benefit of coverage and assertions, and with a gold-standard debug environment; running the real software is not generally feasible. However, there will be a class of Power Bug that can only be found when you are running the real software in a system environment that emulates realistic I/O traffic, with realistic power transition sequences under the control of the actual (or close to actual) system power management software. We will refer to this as “software-driven power verification”.

Why Software-Driven Power Verification?

ASIC pre-silicon validation demands that at some point you need to validate the full system hardware model with the target firmware and software. Emulation offers fast initial compile and bring-up, fast turnaround time as the RTL changes, advanced RTL debug capabilities and delivers up to a few MHz levels of runtime performance. This is enough to be able to compile your RTL, boot your OS, run applications and perform RTL debug well within the working day, opening up the ability to find power bugs that are only observed when running system software testing workloads.

Source – Acuerdo and Valytic Consulting

Software test workload #4 above clearly shows some unexpected power consumption that might arise from two different power bugs. The root causes could be in the hardware or in the power management software; debug will determine which.

Scale-up and Speed-up

Emulators can model large design sizes which enables you to,

“scale-up” verification to the full system.

In addition, modern emulators provide power analysis workflows that allow you to extend your power analysis from the constraints of short simulation sequences to the billions of cycles consumed when running the actual software, enabling the user to,

“speed-up” detailed power analysis over large samples of system software.

Users generally need the following capabilities and outputs that emulation power analysis offers.

  1. Visualization of power activity across billions of cycles. Accuracy is less critical but relative activity should accurately guide the user to identify power-critical time-windows of interest.

Source – Acuerdo and Valytic Consulting

  2. Ability to measure the average power for power-critical time-windows. Emulator power analysis workflows do this by generating standard Switching Activity Interchange Format (SAIF) data, which can be processed by power sign-off tools to compute average power data.
  3. Calculation of cycle-by-cycle power waveforms for power-critical time-windows, in order to debug Power Bugs and identify shorter power sign-off windows.
  4. Calculation of peak power. Emulator power analysis workflows do this by generating waveform files (e.g., VCD, FSDB) that can be processed by power sign-off tools to compute peak power.

Source – Acuerdo and Valytic Consulting

  5. Ability to increase the fidelity of the analysis by performing the power analysis using RTL, post-synthesis netlists and post place-and-route netlists. Post place-and-route analysis should be able to consume real timing data (SDF), real net parasitics data (SPEF) and technology library data (.lib), and achieve full-accuracy power sign-off.
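The SAIF-based average-power step described above can be pictured with a toy calculation. Real flows hand SAIF activity data to power sign-off tools with full library and parasitic data; the sketch below just applies the textbook dynamic-power formula P = ½·a·C·V²·f per net, with an invented net list:

```python
# Toy sketch of average dynamic power from switching-activity counts.
# Real emulation flows emit SAIF consumed by sign-off tools; the net
# list, capacitances, and operating point here are all invented.

def avg_dynamic_power_w(nets, vdd, freq_hz, window_cycles):
    """nets: list of (toggle_count, capacitance_farads) per net."""
    total = 0.0
    for toggles, cap in nets:
        activity = toggles / window_cycles        # toggles per clock cycle
        total += 0.5 * activity * cap * vdd**2 * freq_hz
    return total

nets = [(5_000, 2e-15), (120_000, 5e-15)]         # (toggles, C) over window
p = avg_dynamic_power_w(nets, vdd=0.8, freq_hz=1e9, window_cycles=1_000_000)
```

The same per-net bookkeeping, scaled to millions of nets and billions of cycles, is why the data-processing side of emulation power analysis needs scalable compute.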

Turnaround time is critical

What matters here is the end-to-end turnaround time from RTL compile to running the payload, and then extracting, processing and analyzing the power data. Fast turnaround times make many iterative power analyses feasible. Emulation power analysis can generate terabytes of data per run, which must be sliced and processed on scalable compute grids to generate the average power and cycle power curves.

This data-processing part of the workflow is time-critical and requires scalable compute.

Booting the operating system could account for the first 30 billion cycles, and that might not be the region of most interest system activity-wise. You might then be looking at billions of further cycles to analyze interesting power-critical windows where more functionality is active.

Emulation checkpoint and restore capability allow you to jump to a power-critical timepoint of interest for targeted power analysis.

Developers need to be able to cycle through multiple turns of the power verification workflow per day. The hardware developers may need to re-verify a change to the RTL or the power intent. The software team may need to re-verify a patch to the power management code and get the result back within the same day. Additionally, it is highly likely that there are multiple power-critical windows that need to be analyzed when performing power analysis over multi-billion cycle windows.

Software-driven power verification using emulation is the only way to achieve this pre-silicon, by running the full system (or substantial sub-system) software.

Beyond using emulation, the only way to get closer to real systems is with actual silicon, by which time it is too late to make power design choices and to find power related bugs in the hardware.

Finally, some things to remember…

Plan to Succeed

As with any other verification challenge, you need to start with a test plan. What strategies are you going to apply to power validation and hunting for Power Bugs?

Don’t leave power verification to chance; brainstorm, review and refine your test plan just as you would for any other class of verification

Power verification should be a chapter in an overall verification test plan, together with a decision on the tools and methodologies you are going to apply to the problem. There should be power targets or objectives that will need to be validated, and scenario planning of the set of sequences and power modes that need to be exercised.

Power regressions

Performance and power are in a constant trade-off. Increasing performance often implies adding more logic, replicating structures, and controlling fan-out with additional buffers to reduce logic depth between flops. Hence there is a need to perform power analysis regressions, to keep checking that iterative refinements of the design do not introduce power bugs by unintentionally impacting power.

Power data analytics

As with all other aspects of verification, power analysis regressions will generate power datasets that need to be stored, maintained, visualized and explored. Look at the trends for max power and average power results, both at the top-level and hierarchically, to track progress over time as the RTL code is developed and refined.

In order to improve, you have to measure.

The power metrics are just another dataset that you will need to track alongside all other design and verification metrics. Ideally, your data analytics platform will support the cross correlation of power metrics with other key metrics such as performance and area, bug rates, and code churn rates. When you look at all of these measurements in the round, great insights are possible.
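As a hedged illustration of that cross-correlation idea (the metric names and weekly numbers below are invented), even a simple Pearson correlation over regression results can flag, say, that weeks with heavy RTL churn coincide with jumps in max power:

```python
# Sketch: cross-correlating per-regression power results with another
# tracked metric (here, code churn). All data points are invented.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# weekly max-power regression results (mW) vs. lines of RTL changed
max_power_mw = [410, 415, 432, 430, 455, 470]
rtl_churn    = [120, 150, 900, 300, 1400, 1800]
r = pearson(max_power_mw, rtl_churn)   # strongly positive in this toy data
```

A real analytics platform would do this across many metrics and hierarchy levels, but the principle is the same: measure, store, and correlate.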

Read the full whitepaper “Power Bug Quest” for a more detailed analysis of finding Power Bugs using software-driven power analysis.


What is a Data Lake?

by Ahmed Banafa on 05-16-2021 at 6:00 am


A “Data Lake” is a massive, easily accessible data repository for storing “big data”. Unlike traditional data warehouses, which are optimized for data analysis by storing only some attributes and dropping data below the level of aggregation, a data lake is designed to retain all attributes, especially when you do not yet know what the scope of the data or its use will be.

Data Lake vs. Data Warehouse

Data warehouses are large storage locations for data that you accumulate from a wide range of sources. For decades, the foundation for business intelligence and data discovery/storage rested on data warehouses. Their specific, static structures dictate what data analysis you could perform. Data warehouses are popular with mid- and large-size businesses as a way of sharing data and content across the team- or department-siloed databases. Data warehouses help organizations become more efficient. Organizations that use data warehouses often do so to guide management decisions—all those “data-driven” decisions you always hear about.

A data lake holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Each data element in a lake is assigned a unique identifier and tagged with a set of extended metadata tags. When a business question arises, the data lake can be queried for relevant data, and that smaller set of data can then be analyzed to help answer the question.
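A minimal sketch of that flat, tagged storage model follows; the identifiers, tags, and records are all invented for illustration, and a real lake would use an object store and a catalog service rather than an in-memory dict:

```python
# Toy model of a data lake's flat architecture: every element gets a
# unique identifier plus extended metadata tags, and queries filter on
# the metadata to pull back a small, relevant subset.

import uuid

lake = {}  # flat store: unique id -> {"data": raw payload, "tags": {...}}

def ingest(raw, **tags):
    """Store raw data in its native form, tagged with metadata."""
    uid = str(uuid.uuid4())
    lake[uid] = {"data": raw, "tags": tags}
    return uid

def query(**wanted):
    """Return ids of elements whose metadata matches every wanted tag."""
    return [uid for uid, item in lake.items()
            if all(item["tags"].get(k) == v for k, v in wanted.items())]

ingest(b'{"clicks": 42}', source="webapp", kind="clickstream")
ingest(b"\x00\x01", source="iot", kind="sensor")
hits = query(source="webapp")   # a business question arrives later
```

The key point the sketch captures is schema-on-read: nothing about the payload's structure is decided at ingest time, only at query time.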

Now that data storage technology is cheap, information is vast, and newer database technologies don’t require an agreed-upon schema up front, discovery analytics is finally possible. With data lakes, companies employ data scientists who are capable of making sense of untamed data as they trek through it. They can find correlations and insights within the data as they get to know it.


Five key components of a data lake architecture:

1. Data Ingestion: A highly scalable ingestion-layer system that extracts data from various sources, such as websites, mobile apps, social media, IoT devices, and existing Data Management systems, is required. It should be flexible enough to run in batch, one-time, or real-time modes, and it should support all types of data along with new data sources.

2. Data Storage: A highly scalable data storage system should be able to store and process raw data and support encryption and compression while remaining cost-effective.

3. Data Security: Regardless of the type of data processed, data lakes should be highly secure, through the use of multi-factor authentication, authorization, role-based access, data protection, etc.

4. Data Analytics: After data is ingested, it should be quickly and efficiently analyzed using data analytics and machine learning tools to derive valuable insights and move vetted data into a data warehouse.

5. Data Governance: The entire process of data ingestion, preparation, cataloging, integration, and query acceleration should be streamlined to produce enterprise-level Data Quality. It is also important to track changes to key data elements for a data audit.
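As a rough sketch of the Data Ingestion component, one ingestion layer can feed the same raw zone from both batch and record-at-a-time (streaming) producers; everything below, including the source names, is invented for illustration:

```python
# Toy ingestion layer: one raw zone, two ingestion modes. In practice the
# raw zone is an object store and the stream source is e.g. a Kafka or
# IoT consumer; here a plain list and an iterator stand in for both.

raw_zone = []  # stand-in for the lake's raw storage zone

def ingest_batch(records, source):
    """One-time/batch mode: land a whole extract at once."""
    for rec in records:
        raw_zone.append({"source": source, "payload": rec})
    return len(records)

def ingest_stream(record_iter, source):
    """Real-time mode: land records one at a time as they arrive."""
    count = 0
    for rec in record_iter:
        raw_zone.append({"source": source, "payload": rec})
        count += 1
    return count

n1 = ingest_batch([{"t": 1}, {"t": 2}], source="legacy_db")
n2 = ingest_stream(iter([{"t": 3}]), source="iot")
```

Both paths deliberately write the same raw record shape, so downstream analytics and governance do not care how the data arrived.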

Like big data, the term data lake is sometimes disparaged as being simply a marketing label for a product that supports it. However, the term is being accepted as a way to describe any large data pool in which the schema and data requirements are not defined until the data is queried.

The data lake promises to speed the delivery of information and insights to the business community without the hassles imposed by IT-centric data warehousing processes.

Data Lake Advantages

  • A data lake gives business users immediate access to all data
  • Data in the lake is not limited to relational or transactional data
  • With a data lake, you never need to move the data
  • A data lake empowers business users, liberating them from the bonds of IT domination
  • A data lake speeds delivery by enabling business units to stand up applications quickly
  • Fully supports productionizing and advanced analytics
  • Offers cost-effective scalability and flexibility
  • Offers value from unlimited data types
  • Reduces long-term cost of ownership
  • Allows economic storage of files
  • Quickly adaptable to changes
  • The main advantage of a data lake is the centralization of different content sources
  • Users from various departments, who may be scattered around the globe, can have flexible access to the data

Data Lake Disadvantages

  • Unknown areas of Data Processing
  • Data Governance
  • Dealing with Chaos
  • Privacy issues
  • Complexity of Legacy Data
  • Metadata Lifecycle Management
  • Desolate Data Islands
  • The Issue of Integration
  • Unstructured data may lead to ungoverned and unusable data, and to disparate and complex tools
  • Increased storage and compute costs
  • There is no way to get insights from others who have worked with the data, because there is no account of the lineage of findings by previous analysts
  • The biggest risk of data lakes is security and access control: data can be placed into a lake without any oversight, even though some of it may have privacy and regulatory requirements

The Future

Many organizations are already making this approach a reality: the internal infrastructures developed at Google, Amazon, and Facebook provide their developers with the advantages and agility of the data lake dream. For each of these companies, the data lake created a value chain through which new types of business value emerged:

  • Using data lakes for web data increased the speed and quality of web search
  • Using data lakes for clickstream data supported more effective methods of web advertising
  • Using data lakes for cross-channel analysis of customer interactions and behaviors provided a more complete view of the customer
  • Data lakes can give retailers profitable insights from raw data, such as log files, streaming audio and video, text files, and social media content, among other sources, to quickly identify real-time consumer behavior and convert actions into sales. Such 360-degree profile views allow stores to better interact with customers and push on-the-spot, customized offers to retain business or acquire new sales.
  • Data lakes can help companies improve their R&D performance by allowing researchers to make more informed decisions regarding the wealth of highly complex data assets that feed advanced predictive and prescriptive analytics.
  • Companies can use data lakes to centralize disparate data generated from a variety of sources and run analytics and ML algorithms to be the first to identify business opportunities. For instance, a biotechnology company can implement a data lake that receives manufacturing data, research data, customer support data, and public data sets and provide real-time visibility into the research process for various user communities via different user interfaces.

Regardless of where you are now, take some time to look to the future. We’re on a journey towards connecting enterprise data together. As business increasingly becomes purely digital, access to data will become a critical priority, as will speed of development and deployment. The data lake is a dream that can match those demands. The global data lake market was valued at $7.9 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 20.6 percent to reach $20.1 billion by 2024.
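The market-size arithmetic can be sanity-checked directly: compounding $7.9 billion at 20.6 percent per year over the five years from 2019 to 2024 lands, within rounding of the source figures, on the cited $20.1 billion:

```python
# Quick check of the quoted market figures: $7.9B in 2019 at a 20.6% CAGR
# over the 5 years to 2024.
value_2024 = 7.9 * (1 + 0.206) ** 5   # comes out near $20.15B, consistent
                                      # with the ~$20.1B figure cited
```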

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: Prof. Banafa website



Podcast EP20: How to Secure Any Chip. Any Time. Any Place

by Daniel Nenni on 05-14-2021 at 10:00 am

Dan is joined by Pim Tuyls, founder and CEO of Intrinsic ID. Pim provides background on what a physically unclonable function (PUF) is and how Intrinsic ID developed the technology around SRAMs that are found on virtually all chips. Pim discusses the multiple applications for SRAM PUFs and how they are implemented. He concludes with a view of the future in this area.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


COO Interview: Michiel Ligthart of Verific

by Daniel Nenni on 05-14-2021 at 6:00 am


Today, SemiWiki profiles Verific Design Automation, perhaps the most popular company at DAC (when it’s an in-person event) because of its giveaway: a 10” stuffed giraffe for anyone who walks up to its booth and listens to its story.

But Verific is a popular EDA company for more reasons than its tradeshow giveaway. If you’re using any type of FPGA implementation or EDA verification tool, Verific is probably inside. That’s because it is the leading provider of SystemVerilog, Verilog, VHDL and UPF parsers and elaborators used by startups, emerging companies and some of the best-known semiconductor companies worldwide. Parser platforms from Verific eliminate costly internal development of front-end EDA software, accelerating time to market with vastly improved quality. Need proof? Verific’s licensees have shipped more than 80,000 copies of its software since it was founded in 1999.

The image of a giraffe is used as a mascot and recurring theme. On its website, one cheeky giraffe leans her head over the logo, a nod to Verific’s stature in the industry. Several of its employees tower over many of us, including Michiel Ligthart, Verific’s president and COO, who stands at about 6’4” and whom I recently interviewed about the company and its success. Before we began, Michiel wanted to emphasize that although this segment is called “the CEO interview”, he is not Verific’s CEO. Verific’s founder and CTO Rob Dekker (also 6’4”, by the way) is the CEO of record.

Once you finish reading my interview with Michiel, you will understand why Verific has the well-earned reputation of being, “Head and shoulders above the rest.”

What brought you to semiconductors?
I was studying Electrical Engineering at Delft University of Technology in the Netherlands. I was interested in digital design and took a new course based on “Introduction to VLSI Systems” by Carver Mead and Lynn Conway. That’s how I learned about chip design in general. Later on, I got an internship at Philips Research in the fields of semiconductor test and EDA, and afterward went to Signetics Research Labs in California. Early on, I was involved with research activities in logic synthesis, including a stint as a visiting scholar at the Center for Integrated Systems at Stanford University.

What is the Verific backstory?
Rob founded Verific in 1999 after working for Exemplar Logic, which was acquired by Mentor Graphics (now Siemens EDA) in 1995. Rob, looking for a change of pace, liked the startup experience and wanted to start one of his own. He had an idea about developing verification software but had no specific application in mind. While working through his idea, he knew that whatever he did, he would need a Verilog parser, and he began building one. Several EDA companies asked to license the parser, and so that became the business. And here I need to give a shout out to Real Intent, one of those early adopters and still a licensee twenty years later!

Lawrence Neukom was Rob’s first hire, a year or so after Verific’s founding. I recommended Lawrence, whom I knew from Theseus Logic, an asynchronous logic startup and also an early Verific customer. I joined a few years later.

Verific celebrates its 22nd birthday this year. What is the secret to your success and core strengths?
We credit our longevity and our great success to our corporate culture. We believe we have a primary responsibility to our customers, secondly to our employees, and thirdly to the environment in which we operate at large, not just the semiconductor industry. If we fulfill all these responsibilities, we automatically fulfill our commitment to shareholders.

We emphasize the quality of our product, and, as proven by our many satisfied customers, we have succeeded. Our attention to customer needs and requirements, by providing first-class support, is an important part of that success. Customers must feel we are part of their R&D team.

What customer challenges are you addressing?
We have an interesting challenge. We implement IEEE standards. EDA companies who do not use our parsers may at times have a slightly different implementation or interpretation of a standard in their simulators or synthesis tools. Now, we could claim that we got it right and everybody else has to change, but that would not be very helpful to the end-users. Instead, we try to match the other tool’s behavior so the end-user is not dead in the water. The challenge is that we only find this out by trial and error. Our customers find these mismatches in the field, report them to us, and we try to be compliant with both the IEEE standard and the other EDA tool.

I recently reported on the DARPA toolbox. How will Verific’s relationship with DARPA help the industry?
We always had an academic program, though admittedly ad hoc, where we provide linkable library access. Over the years, several interesting university projects have used Verific. DARPA is helping by streamlining an actual process, giving U.S. academic projects access to EDA software. We already have our first engagement through the DARPA toolbox with the University of Utah.

Our agreement with DARPA is not part of the open-source movement.

What other trends are you seeing?
We see a really big EDA push out of China as it builds up its EDA infrastructure. In the last 12 months, we closed four new licenses there. A host of semiconductor companies are getting funding here in the U.S., which bodes well for EDA companies as well.

Another big growth area for us is in the semiconductor space with INVIO, our Python-based higher-level API set for SystemVerilog, VHDL, and UPF flows, built on top of our traditional Verific platform. INVIO is especially useful for companies that are developing their own design methodologies, not necessarily supported by off-the-shelf tools. AI chip design groups with strict power, test and scalability requirements are able to build their own methodologies and flows. Custom processors for large compute farms are another good example.

How has the pandemic affected Verific and its customers?
Not at all. When the pandemic struck, we closed our offices and took our laptops home. We already had VPN networks in place and had been Zoom users for several years. We secured some additional cloud backup and didn't miss a beat. Our customers didn't seem to suffer any significantly negative effects either.

However, not everyone was that fortunate. Restaurants and coffee shops that we normally patronize around our offices in Alameda, Calif., and Kolkata, India, were very much affected, along with many other people in the service industries. As a company where no paycheck was ever in jeopardy, we decided to make significant contributions to food banks in our local communities of Alameda and Kolkata.

As an entrepreneur, what advice would you give someone founding a startup or thinking about starting one?
If you have a good idea, go ahead and try it. If you do, always listen to your customers. That said, also do not always listen to your customers. You will know when to listen and when to do it your way.

Also Read:

CEO Interview: Srinath Anantharaman of Cliosoft

CEO Interview: Rich Weber of Semifore, Inc.

CEO Interview: Dr. Rick Shen of eMemory


Your IP Portfolio is Probably Leaking. What Can You Do About It?

Your IP Portfolio is Probably Leaking. What Can You Do About It?
by Mike Gianfagna on 05-13-2021 at 2:00 pm

Your IP Portfolio is Probably Leaking What Can You Do About It

This topic is inspired by a presentation at last year’s DAC presented by Methodics, now part of Perforce. The issues raised by the original presentation are still quite relevant in the current business climate. IP leakage is something everyone should consider as part of their normal business operations. Your design IP really IS the essence of your business and you should protect it. Let’s assume for the sake of argument that your IP portfolio is probably leaking. What can you do about it?

We live in a highly connected and distributed world which fosters lots of cracks and holes that can create leaks. Now that Methodics is part of Perforce, the focus on this problem has become broader and more comprehensive. Check out a podcast Dan and I recently did that explains the story behind combining Methodics and Perforce if you’d like some more context.

Let’s begin with IP export rules. Export Administration Regulations (EAR) cover commercial and government items and International Traffic in Arms Regulations (ITAR) cover export of defense items. These rules govern IP export and are much broader than most people realize. To set the stage, IP is considered exported if:

  • It is shipped internationally into a restricted geography
  • It is ‘conveyed’ to a foreign national of a restricted geography even within the US
  • It is released from one foreign (potentially unrestricted) geography into a restricted geography

You should start seeing lots of shades of gray at this point. Let’s look at some definitions of IP leakage:

  • IP can leak if it is “exported” as defined by EAR without a valid export license
  • IP can leak if it is covered by ITAR and is “disclosed” to an unauthorized person
  • Beyond EAR and ITAR, IP can leak if it is considered confidential or “trade secret” within the organization, and is exposed beyond employees who need to know

The DAC presentation from Methodics had an informative graphic that portrayed what could happen during a routine trip (assuming we ever travel again). The scenario described is quite familiar:

Travel Abroad Scenario

“Accidental export” can result in heavy fines and significant business disruption. According to the Bureau of Industry and Security, export control fines since 2000 total $2.2B. Based on publicly available information, semiconductor companies such as Intel, Dell/EMC, Lattice, Infineon and Maxim have been impacted. This can happen to you and your company.

What can be done to avoid all this? As you might imagine, Perforce and Methodics have some excellent ideas. To start with, IP access control needs to be consolidated. No more multiple, disjoint systems across the enterprise; there is simply no way of ensuring a consistent approach with that situation. Some of the key points offered in the Methodics DAC presentation include:

  • Consolidate Access Control with IP-centric processes
  • IP R/W permissions drive access control to all systems
  • Managing lightweight directory access protocol (LDAP) / active directory (AD) group membership automatically propagates across the enterprise

Graphically, it looks like this:

Consolidated IP Access

The concept of geofencing is useful here as well. Geofencing restricts IP availability in certain geographies regardless of access. An IP Lifecycle Management (IPLM) system can enable geofencing with the following capabilities:

  • Allow IPs to carry ‘include’ and ‘exclude’ lists of geographies
  • Restrict IPs, regardless of access, based on these lists
  • IP data cannot reside in user workspaces or caches in these restricted geographies
  • IP meta-data cannot be visible or extracted in these geographies

IPLM systems should also preserve IP hierarchies, preventing IPs from being flattened into a single object or code base. If this occurs, insidious embedded IPs can be exported without the user’s knowledge. This approach also allows IPs and their versions to be tracked even when used as part of another hierarchy. Note that the rest of the hierarchy may be “safe”, but key IPs need to be restricted. A centralized approach is the way to accomplish this.
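As an illustration of how the include/exclude geofencing rules and the hierarchy-aware restriction described above could fit together, here is a short Python sketch. Note that the class and function names here are invented for illustration only; they are not the API of Methodics IPLM or any other real product:

```python
# Illustrative sketch only: a toy model of geofenced, hierarchical IP
# access checks. All names are hypothetical, not a real IPLM API.
from dataclasses import dataclass, field

@dataclass
class IPBlock:
    name: str
    include: set = field(default_factory=set)   # geographies where use is allowed (empty = all)
    exclude: set = field(default_factory=set)   # geographies where use is blocked
    children: list = field(default_factory=list)  # preserved IP hierarchy, never flattened

def is_allowed(ip: IPBlock, geo: str) -> bool:
    """An IP is usable in `geo` only if it AND every embedded IP permit it.

    Because the hierarchy is preserved, a restricted IP buried deep in an
    otherwise 'safe' design still blocks export of the whole assembly.
    """
    if geo in ip.exclude:
        return False
    if ip.include and geo not in ip.include:
        return False
    return all(is_allowed(child, geo) for child in ip.children)

# A 'safe' SoC that embeds one export-restricted block ("XX" is a
# placeholder for a restricted geography).
soc = IPBlock("soc_top", children=[
    IPBlock("ddr_phy"),
    IPBlock("crypto_core", exclude={"XX"}),
])
```

The recursive check is the point: flattening the hierarchy into a single code base would erase the `crypto_core` restriction, which is exactly the insidious embedded-IP export the article warns about.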

Stopping IP leakage requires a comprehensive system, and it cannot be solved by any single action. The overall goal is to know who used which version of which IP in what context at all times. You can learn more about how Methodics can help you manage your IP throughout your development lifecycle in their upcoming webinar, IP Security: Protecting Your Most Important IP Assets With Methodics IPLM. Your IP portfolio is probably leaking, and now you know what you can do about it.

Also Read:

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud

Single HW/SW Bill of Material (BoM) Benefits System Development

A Brief History of Perforce


Developing Verification Flows for Silicon Photonics

Developing Verification Flows for Silicon Photonics
by Tom Simon on 05-13-2021 at 10:00 am

Silicon Photonics DRC

Silicon photonics is getting a lot of interest because it can be used in many applications to improve bandwidth, reduce power and provide novel new functionality. It is especially interesting because it offers the ability to combine electronics and optical elements on the same die. Though it is fabricated with familiar silicon processes and techniques, it is a very different animal when it comes to design automation and verification. In a recent white paper, Siemens EDA opines that if methods similar to those used for Electronic Integrated Circuits (EICs) can be applied to the design and verification of Photonic Integrated Circuits (PICs), there could be explosive growth in their development and use.

The white paper titled “Advancing silicon photonics physical verification through innovation” cites how the development of process design kits and EDA tools enabled the exciting growth we have seen in semiconductors in the last few decades. With silicon photonics it makes sense to follow a parallel path to expand their use. The authors make the point that we are definitely in the early stages of developing complete flows for PICs. Important first steps have been taken, despite the challenges: PICs break some of the foundational assumptions about EICs that enabled software tool development in the first place.

The authors focus on PIC verification to make their point. While equivalents to LVS, DRC and DFM are needed, the existing EIC tools cannot be directly applied without specific modifications and adaptations. There are several examples of this. Because CMP is still used there is a need for fill. Yet if traditional square fill shapes are used, they can affect waveguides. PICs require circular fill shapes to avoid these issues. Fortunately, Calibre YieldEnhancer has features in its SmartFill Tool that effectively deal with these new requirements around waveguides. This is just the tip of the iceberg.

Silicon Photonics DRC

The Siemens EDA white paper has a section that discusses how DRC needs to work in PICs. PIC based devices tend to be curvilinear. Piecewise linear approximations used by traditional DRCs lead to inaccuracies. The Calibre nmPlatform supports equation-based DRC that is used to apply complex checks that make allowances to eliminate false errors. The one catch is that this requires modifying the foundry rule deck. To avoid this Siemens introduced a methodology that hands off traditional DRC violations to an Auto-Waiver processing step that differentiates true errors from false errors. The result is that designers can get accurate results within a single nmDRC run.

Another important area is circuit verification. Siemens notes that PIC components are not natively recognized. Each photonic component is essentially a custom device, which prevents recognition and the use of device parameter extraction. Indeed, waveguides are analogous to wires found in EICs, but their behavior and importance are completely dissimilar. The white paper provides several examples of the departures in device interpretation in PICs. Further compounding verification difficulties is that there are often no corresponding schematics for PICs.

Siemens has explored interesting ways to validate PIC devices. They mention one approach where the device being verified is re-rendered and then compared to the on-chip geometry. This requires the use of advanced pattern recognition algorithms. This is clearly an area that is under development, yet where we can expect to see more innovations.

The Siemens white paper offers a valuable discussion of this new area of design automation. As increasing PIC automation is developed, the use of photonics in ICs will grow, bringing their benefits to larger markets. The two flows, EIC and PIC will remain distinct, but may someday reach parity in terms of ease of design. The Siemens white paper offers an interesting glimpse into what is available today and what lies ahead.

Also Read:

Formal Verification Approach Continues to Grow

Transistor-Level Static Checking for Better Performance and Reliability

Embedded Analytics Becoming Essential


Arm Announces Neoverse Update, Immediately Following V9

Arm Announces Neoverse Update, Immediately Following V9
by Bernard Murphy on 05-13-2021 at 6:00 am

Arm Neoverse Update

Among marketing principles, “Stay Visible” must rank as one of the highest, meaning that if you don’t have something new to announce on a regular basis, you disappear. Most important, among the people you hope to influence, you cease to exist. This is as true for small ventures as large, though small ventures struggle to understand or prioritize the importance of visibility. Major tech operations (such as Arm and NVIDIA) don’t make this mistake. They’ll have one or more annual big announcements, followed by regular progress updates through the rest of the year. This Arm Neoverse update makes sure they stay highly visible as innovators and thought leaders.

2021 updates

I wrote recently about the Arm v9 announcement, perhaps not a blockbuster but lots of new goodies. They have quickly followed with more of a blockbuster update on their Neoverse series, the product family that aims to address everything infrastructure-related, from cloud servers to communication backbones to the edge. Last year, Chris Bergey, Sr. VP and GM of Infrastructure, detailed the product plan: V-series for maximum performance, N-series for scale-out performance and E-series for maximum throughput. This year he announced V-series and N-series updates along with a new and improved CMN (coherent mesh network).

V1 is the first introduction in the V-series and offers a 50% performance uplift (over plan I assume). It also supports 2x256b SVE (scalable vector extensions) and bfloat16. Per Google, bfloat16 is ideal in cloud applications, especially in TPUs. Take from that what you will.
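As an aside on the format itself (my addition, not part of Arm's announcement): bfloat16 keeps float32's 8-bit exponent, and therefore its full dynamic range, while truncating the mantissa to 7 bits, which is why it suits machine-learning workloads that tolerate low precision but not overflow. A minimal truncating conversion in Python (real hardware typically rounds rather than truncates):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    # bfloat16 is simply the top 16 bits of the float32 bit pattern:
    # 1 sign bit, 8 exponent bits, 7 mantissa bits.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    # Re-expand by zero-filling the 16 dropped mantissa bits.
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# Powers of two and small integers survive exactly; most values lose
# precision (e.g. 3.14159 comes back as 3.140625), but the exponent
# range is identical to float32.
roundtrip = bf16_bits_to_f32(f32_to_bf16_bits(3.14159))
```

The trade-off is visible in the round trip: about two decimal digits of precision, but no risk of values overflowing that a float32 could hold.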

N2 is the second release in the N-series, with a 40% performance uplift, also with SVE but here 2x128b, and again bfloat16.

CMN only rated one slide in the briefing, but I know it will be integral to server architectures, and indeed to any regular arrayed structure, though multi-core server chips are the most obvious application. The new CMN-700 supports more cores, caches, crosspoint nodes, memory ports and CCIX ports per die (they also support CXL).

Hyperscaler and supercomputing growth

Some big announcements here, starting with another leading hyperscaler adoption from Tencent (they’ve already announced support for cloud gaming). And Neoverse is coming soon to Oracle Cloud, delivered there by Ampere Altra processors. Good to see that Ampere is finding serious traction; the hyperscaler oligopoly needs external competition.

AWS Graviton2 is already available as an EC2 instance. AnandTech ran an analysis last year comparing it to the latest (and then not yet released) Intel and AMD processors. The comparison is clouded in the mysteries of how AWS rate their EC2 instances, so it is difficult to draw black and white conclusions. But it’s telling that the Arm-based server is now being compared directly with top-of-the-line servers. And AWS shows that growth in EC2 instances is now dominated by Graviton instances.

Alibaba have tested their own Arm-based cloud instances, announcing a significant performance boost in a SPECjbb benchmark and a 50% jump for their DragonWell OpenJDK (Open Java Development Kit) on N2. I think I see a theme here: if you’re big in cloud, you’re building (or buying) Arm-based instances.

Also the Ministry of Electronics and Information Technology (MeitY) in India is driving an exascale project under their center for development of advanced computing (C-DAC). This will leverage French SiPearl Rhea for servers (72 V1 cores, HBM2 and DDR5 memory, in TSMC 6nm). And South Korean ETRI K-AB21 (based on Arm Zeus, an earlier name for one of the Neoverse cores) for high performance and low power inference.

5G growth

Marvell has launched their OCTEON family addressing 5G RAN, with applications in remote radio units, distributed units and central units, as well as SmartNIC cards. All build on N2 cores.

At the edge, Arm has been collaborating with Vodafone on uCPE (universal customer premises equipment) which is – and I quote – “a general-purpose platform that integrates compute, storage and networking on a commodity, off-the-shelf server. This allows it to provide network services (such as SD-WAN, firewall, etc.) as virtual functions to any site on a network. uCPE is the equivalent of a ‘Cloud for network services’, but at the customer site.” This reduces total cost of ownership for the customer and reduces their carbon footprint.

Lots of good progress. You can read the release HERE.



Formal Verification Approach Continues to Grow

Formal Verification Approach Continues to Grow
by Daniel Payne on 05-12-2021 at 10:00 am

formal history min

After a few decades of watching formal verification techniques being applied to SoC designs, it certainly continues to be a growth market for EDA vendors. In the first decades, 1970-1990, the earliest formal tools emerged at technical conferences, typically written by university students earning their Ph.D.s, and users had to be at the Ph.D. level to understand how best to use the limited tools and interpret the results for theorem proving. In the next twenty years, 1990-2010, we saw formal property checking emerge, though you still had to manually write the properties and have domain experience in formal technology. Thankfully, from 2010 to the present, the formal tool user is a verification engineer using a variety of automated formal apps. Now that’s progress.

Siemens/Infineon had their own internally developed formal apps and decided to spin out a separate company in 2005 called OneSpin, which has been a strong #3 player in the formal market for a number of years. Just last month Siemens EDA announced the acquisition of OneSpin, adding to what the Questa product family has been offering, so the combination now places Siemens EDA as the #2 vendor in formal apps. It’s kind of rare to find a successful EDA spin-out that was later acquired.

I’ve been following OneSpin ever since I passed by their booth at DAC one year and McKenzie Ross literally pulled me into their booth for an update, maybe it was the DAC badge that said Press or Blogger. Brett Cline is another OneSpin person that I’ve followed ever since his days at Summit Design in 1998. You can follow OneSpin on Twitter, where they are quite active and relevant.

With formal you hear about Assertion Based Verification (ABV) and apps for specific tasks like Sequential Logic Equivalence Checking (SLEC), Clock Domain Crossing (CDC), etc. Mentor acquired formal vendor 0-In back in 2004, and continued to grow the product family over the years. Now Siemens EDA has over a dozen formal apps to choose from, more choice is always better.

Verification engineers have been quick to adopt new tools and methodologies in order to get their job done on increasingly large designs with huge state spaces, where traditional functional simulation techniques alone are not sufficient to reach verification goals. Adoption of formal methods and apps has enabled verification engineers to complete their tasks more quickly.

Acquisition Benefits

Two combined engineering teams instead of two competing ones makes sense for growing the formal tools market. With bigger scale come benefits: better ideas, and knowledge of what has already been tried.

The technology combination of OneSpin and Questa Formal will offer users even more ways to automate their verification, and I’ll be interested to learn what the product roadmap looks like.

Customers of both Questa Formal and OneSpin should be happy, knowing that their vendor is investing even more resources in this critical area.

Growing a point tool company into a successful business, then getting acquired by a larger EDA company is a good sign, because it means that the formal product segment is healthy, so expect to see continued news of formal tools being adopted across the hot industries, like: 5G, AI, automotive, IoT, HPC and mission critical fields of aerospace and defense.

Summary

There’s always the classic decision of growing your own product line through internal software development, or acquiring a smaller and successful company. I’d say that Siemens EDA made another savvy decision to acquire OneSpin in order to accelerate into the formal market. My favorite list of past acquisitions shows that Siemens EDA has the history of making these deals work out: Berkeley DA, Tanner EDA, 0-In, Model Technology.

Related Blogs


3rd Party Semiconductor Intellectual Property Market Update

3rd Party Semiconductor Intellectual Property Market Update
by Richard Wawrzyniak on 05-12-2021 at 6:00 am

IP Market

The 3rd Party Semiconductor Intellectual Property (IP) market has seen great innovation in the products it offers to System-on-a-Chip (SoC) designers over the last ten years. If any market segment in the semiconductor industry typifies the intense evolutionary pressures that the entire electronics market has undergone, it is the 3rd Party IP market.

Most of these evolutionary forces are driven by the need to integrate more functionality in fewer devices at the system level. The primary method to accomplish this is using 3rd Party IP. The IP market has evolved to supply the solutions SoC designers require to craft their silicon products in response to ever-changing market requirements.

Rather than looking at the 3rd Party IP market as a monolithic segment, tracking revenues by year, Semico Research Corp. has analyzed the IP market by functional category and then sub-divided these categories into revenues by quarter. This analysis is supplemented by additional data on design starts, IP costs and SoC unit shipments. Semico has arranged the IP market into the following IP types:

Memory, CPU Core, DSP Core, Graphics, Analog, Interface, Logic / Embedded Analytics, Chip Enhancement, Interconnect, Security, Audio, and eFPGA

IP Market on an IP Category Basis

  • The CPU market is the largest IP category, and it will remain so for the foreseeable future, accounting for 34.4% of total market revenues by 2025.
    • A new CPU architecture has been introduced by ARM that is focused on processing data around safety standards and implementing those protocols in a system. This continues the theme of developing CPU architectures that are not general purpose but are specific in nature such as processors for vision, signal processing, graphics processing and video processing.
    • With the rise of Artificial Intelligence (AI) applications through the deployment of Convolutional Neural Networks (CNNs), the market is seeing the creation of CPU IP architectures created specifically with AI functions in mind.
    • The introduction of ‘free’ Instruction Set Architectures (ISAs) such as RISC-V and open-source versions of the MIPS architecture is expected to continue to invigorate the discovery and exploration phase the market is now enjoying, driven by AI applications and other silicon solutions.
  • Memory IP was the 2nd largest category in 2020 driven by the need for increased resources on SoCs to handle the large number of CPU cores in silicon solutions. By 2025, the Memory IP market will account for 11.4% of total market revenues.
    • New embeddable memory architectures as licensable IP entered the market two years ago. MRAM is being supported by TSMC, GLOBALFOUNDRIES, SMIC and UMC. Other non-volatile memory types such as STTRAM, ReRAM, CBRAM and FeRAM are entering the market. In addition, a new memory type based around carbon nanotubes (CNTs) is also starting to gain some traction.
    • Presumably, Intel’s reentry into the silicon foundry market will include support for MRAM and many other new memory architectures.
  • Interface was the 3rd largest IP category in 2020, with an 11.0% share of the market driven by the need for faster interfaces to move the enormous amount of data created through AI. However, by 2025, the Interface category’s share of the market will fall to 9.6%, having been overtaken by GPU IP.
  • Graphics was the 4th largest IP category in 2020 driven by the increasing need for SoCs to incorporate embedded vision functions for AI and to process data locally. The Graphics market will account for 10.5% of total IP revenues by 2025, overtaking the Interface category.
  • A licensable programmable logic fabric has entered the market from several different companies and could generate significant revenues in the SoC market as adoption increases and the technology matures.
    • While starting from a small base, eFPGA will have the highest growth rate with a CAGR of 68.4% through 2025.
  • The market for Security is mid-sized today but will grow in importance and necessity as portable wireless devices permeate our society and require increased protection from viruses and denial-of-service attacks. This will be especially true to secure data in the IoT market. This category has one of the highest growth rates after eFPGA.
    • The industry discussion as to the need for Security IP has ended in the affirmative. Now the discussion has shifted to how much Security IP in the silicon is enough.
  • From 2020 to 2025, Licensing revenues will increase at a faster rate than Royalty revenues due to the new wave of architectural refreshes and exploration initiated by designers looking to incorporate AI functionality in their silicon solutions. This signals continued market growth over the forecast period of 2021 – 2025.
    • This is a change from the recent past when royalty revenue growth outstripped that of licensing revenues.
    • Once SoCs using these new architectures start shipping in volume, Semico anticipates that royalties will once again be somewhat larger than licensing revenues.

By all measures, the IP market today is not a mature market, nor is it nearing maturity. This is evident simply by looking at its dynamic nature and the level of innovation IP users are asking for and the IP vendors are delivering. In a mature market this would not be possible since lower growth levels would not allow for adequate resources to achieve the innovation this market is delivering. Semico believes as the end markets that IP serves evolve, so too will the IP market evolve.

Taken from the report, Figure 175 shows the total IP market by product category by quarter. Semico forecasts that the IP market will exceed $2.0B in revenue per quarter by 2Q22 and go on to reach $2.7B by 4Q25, a CAGR of 8.9% over the forecast period.

Figure 175: Total IP Market Revenues Actual and Forecast by Quarter, 1Q06 – 4Q25

*Forecast  Source: Semico Research Corp.

The lid has been removed from the innovation box and we all will benefit with better, more integrated and higher-performance products!

Semico has recently written a report covering this topic:  Licensing, Royalty and Service Revenues for 3rd Party IP: 2021 Market Analysis and Forecast (SC105-21) April 2021

Here is a link to the Table of Contents on our website:

https://semico.com/content/licensing-royalty-and-service-revenues-3rd-party-ip-2021-market-analysis-and-forecast