
A Review of TSMC’s OIP Ecosystem

by Daniel Nenni on 10-06-2019 at 10:00 am

Each year, TSMC conducts two events – the Technology Symposium in the spring and the Open Innovation Platform (OIP)® Ecosystem Forum in the fall.  Yet, what is the OIP ecosystem?  What does it encompass?  And how does the program differentiate TSMC from other foundries?  At the recent OIP Forum in Santa Clara, Suk Lee, Senior Director, TSMC Design Infrastructure Management Division, presented a review of OIP, highlighting the breadth of technology and support available for TSMC process nodes.  Additionally, and very significantly, he described the extensive engineering investment made to develop and qualify new process design kits (PDKs) and IP offerings.  Here are some of the highlights of Suk’s presentation.

Figure 1.  OIP partner overview  (Source:  TSMC)

OIP entails the collaboration between TSMC and numerous partners, spanning a range of technical facets of silicon foundry support:

  • EDA tool providers
  • IP developers
  • Design Center Alliance (DCA) providers, offering services ranging from system-level front-end design to back-end physical/test implementation
  • Value Chain Aggregators (VCA), additional service providers commonly offering a different set of capabilities, including assembly/test and supply chain management support

and, the newest aspect of OIP partnership, announced last year with the introduction of the OIP Virtual Design Environment (OIP VDE), which lowers the barriers to cloud adoption for customers of all sizes:

  • Cloud service providers

To perhaps better distinguish between the DCA and VCA partners, Suk overlaid the previous chart on top of the Design Enablement and Process Development groups at TSMC – see the figure below.

 Figure 2.  OIP partner interactions with TSMC R&D  (Source:  TSMC)

Yet, these OIP designations are much more than companies enrolling with TSMC.  Suk emphasized the qualification requirements and ongoing monitoring to which each OIP partner is subjected, as illustrated below.

Figure 3.  Examples of OIP partner qualification requirements  (Source:  TSMC)

From TSMC’s own derivative of ISO 9000 quality management and assurance (aka TSMC9000), to qualification of process interconnect technology definitions, to ongoing certification of OIP service providers, there is a major emphasis on ensuring foundry customer success.

Perhaps the best illustration of OIP collaboration is the activity pursued with EDA partners during new process development.  This activity ensures new tool features and full methodology reference flow capabilities are qualified concurrently with initial process availability for IP developers and early end customer adopters.  Suk provided examples where new EDA tool attributes were defined, developed, and integrated into reference flows, driven by both process innovations (e.g., EUV lithography) and design reliability and manufacturability (e.g., via pillars, statistical analysis).  The figures below illustrate how the digital and custom design flows from EDA OIP partners were enhanced in support of these advanced process requirements.

Figure 4.  New EDA tool features addressing process requirements – digital flows  (Source:  TSMC)

Figure 5.  New EDA tool features addressing process requirements – custom implementation flows  (Source:  TSMC)

Engineers are skeptical by nature, seeking a silicon-proven demonstration of models, EDA tool features, and reference flows.  Suk highlighted the specific collaboration with ARM – for nearly a decade, TSMC and ARM have used leading ARM core IP as a testchip vehicle.  Full EDA (and IP) qualification reports are available at TSMC’s customer portal. On the day of OIP Forum, the two companies also announced the latest result of their collaboration, an industry-first 7nm silicon-proven chiplet system based on multiple Arm® cores and leveraging TSMC’s Chip-on-Wafer-on-Substrate (CoWoS®) advanced packaging solution.

 

Figure 6.  Screenshot of example EDA qualification reports on the TSMC portal   (Source:  TSMC)

With all the news about new process node announcements, it is easy to overlook the underlying activities and resources required to synchronize process availability with the companies supporting the related EDA, IP, and service provider ecosystem.  Suk’s presentation reminded the OIP Ecosystem Forum audience of the tremendous investment made as part of the effort to sustain Moore’s Law (for at least another node or two).  The focus that TSMC has applied to design enablement has truly been a differentiating characteristic of its corporate philosophy.


SiFive Continues to Foster RISC-V in the Middle East With Tech Symposiums

by Swamy Irrinki on 10-05-2019 at 8:00 am

Workshops Coming to Istanbul, Amman, Cairo and Abu Dhabi

SiFive is continuing its tour through the Middle East with highly educational RISC-V Tech Symposiums and Workshops in the key locations of Istanbul, Amman and Cairo. These cities are some of the most technologically advanced in the region, and we are eager to collaborate with our co-hosts and partners to foster the growth of the RISC-V ecosystem. Attendance at these workshops is free, but registration is required. Here is a glimpse of what’s happening in each city. You can also visit https://sifivetechsymposium.com to learn more about these workshops and the SiFive Tech Symposiums being held throughout the world.

Istanbul – Tuesday, October 8

We are pleased to have Turkey’s TÜBİTAK BİLGEM as our co-host. This RISC-V Tech Symposium and Workshop will feature a keynote presentation by Shafy Eltoukhy, senior vice president of operations and general manager of the Silicon Business Unit at SiFive. In addition, Tufan Karalar, associate professor at Istanbul Technical University, will also be a featured speaker. Other speakers include Soheila Lighvani, director of IP strategic alliances at SiFive, and many more. Rajesh Varadhrajan, director of IP engineering at SiFive, will lead the hands-on workshop portion of the event, where attendees will have the unique opportunity to configure their own RISC-V core and bring it up on an FPGA. To view the full agenda and to register to attend, please visit https://sifivetechsymposium.com/agenda-istanbul/

Amman – Thursday, October 10

With the University of Jordan and IEEE Jordan Section as our co-hosts, this will be a powerful event. There will be keynote presentations by Shafy Eltoukhy, senior vice president of operations and general manager of the Silicon Business Unit at SiFive, and by Jamil AlKhatib, founder of IBTECAR. Ramzi Saifan, chairman of the Computer Engineering Department at the University of Jordan, will also be a featured speaker. In addition to these and other presentations, there will be a hands-on workshop where attendees will have the unique opportunity to configure their own RISC-V core and bring it up on an FPGA. To view the full agenda and to register to attend, please visit https://sifivetechsymposium.com/agenda-amman/

Cairo – Saturday, October 12

Our co-hosts will be Mentor Graphics, the Egyptian Information, Telecommunications, Electronics, and Software Alliance (EiTESAL), and The American University in Cairo. This event will be highly educational and powerful. There will be a keynote presentation by Hazem El Tahawy, managing director of the MENA region for Mentor Graphics, about the impact of AI on semiconductors and EDA. There will also be a keynote presentation by Mohamed Shedeed, managing director of EiTESAL, about the first incubator in the Middle East specializing in IoT-based hardware. A third keynote presentation will be given by Shafy Eltoukhy, senior vice president of operations and general manager of the Silicon Business Unit at SiFive, about the design revolution that is taking place in the semiconductor industry. There will also be a presentation by Mohamed Kassem, CTO of Efabless, about a RISC-V based microcontroller. In addition to these and many other presentations by industry veterans, ecosystem partners and academic luminaries, there will be a hands-on workshop where attendees will have the unique opportunity to configure their own RISC-V core and bring it up on an FPGA. To view the full agenda and to register to attend, please visit https://sifivetechsymposium.com/agenda-cairo/

Abu Dhabi – Tuesday, November 5

Our co-host for this workshop is the Khalifa University of Science and Technology, a leading independent research university that is catalyzing the growth of Abu Dhabi and the UAE’s rapidly developing knowledge economy. There will be a keynote presentation by Naveed Sherwani, CEO of SiFive. In addition to this and many other presentations by industry veterans, ecosystem partners and academic luminaries, there will be a hands-on workshop where attendees will have the unique opportunity to configure their own RISC-V core and bring it up on an FPGA. To view the full agenda and to register to attend, please visit https://sifivetechsymposium.com/agenda-abu-dhabi/

We look forward to seeing you!


Workflow Automation Applied to IP Lifecycle Management

by Daniel Payne on 10-04-2019 at 10:00 am


I often blog about a specific EDA tool or an IP block, but the way that SoC design teams approach their designs and then use tools and IP can either be a manual, ad-hoc process or part of something well-documented that follows a design methodology. Back in the 1980s, while at Intel, our team first created a design methodology document to communicate to all team members how we were going to build a graphics chip for the emerging IBM-based PC market. But this was a paper document with no automation or enforcement, so our implementation was fraught with inconsistencies and too much manual effort.


Today there are products for workflow automation like Flowable, a Business Process Model and Notation (BPMN) platform. Think of a workflow as the steps in the process of designing, testing or deploying a product. A workflow for IP Lifecycle Management enables an IP designer to create IP that follows best practices, quality standards and company policies.

Two specific examples of workflows in this context include:

  • IP development and release – IP has to meet a requirements check against specifications.
  • IP maturity classification – IP passes through multiple steps towards production readiness.

Some products need to meet Functional Safety (FuSa) compliance, so they require traceability across the entire IP design flow – the tools used, design process certification, and verification that specifications are met at each stage. Using workflow automation is a big benefit in meeting FuSa compliance.

Methodics provides a tool for IP Lifecycle Management named Percipient, and it has features that workflows can use to automate projects:

  • Server hooks – a script attached to an IP that runs an action after IP creation or changes are made, checking the quality.
  • Client hooks – check an IP for quality within the user workspace, prior to allowing it into the Percipient server.
  • Events platform – receive real-time updates for any change to IP data.
  • Public API – used with the Events platform to work on data objects.
  • Custom properties and attributes – a way to extend the data model for any IP.
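To make the hook idea concrete, here is a minimal sketch of what a client-side quality check might look like before an IP is allowed into the server. The function name, required-file list and label convention are all illustrative placeholders, not the actual Percipient hook API:

```python
import re

# Hypothetical pre-release quality hook (illustrative only): verify that a
# candidate IP release has its required files and a well-formed version label.
REQUIRED_FILES = {"README.md", "rtl/top.v", "docs/datasheet.pdf"}
LABEL_RE = re.compile(r"^v\d+\.\d+\.\d+$")   # e.g. v1.4.0

def check_ip_release(workspace_files, label):
    """Return a list of quality violations; an empty list means the release passes."""
    errors = []
    missing = REQUIRED_FILES - set(workspace_files)
    if missing:
        errors.append(f"missing required files: {sorted(missing)}")
    if not LABEL_RE.match(label):
        errors.append(f"label {label!r} does not follow semantic versioning")
    return errors
```

A real client hook would run a check like this automatically in the user workspace and block the release from reaching the server while the list of violations is non-empty; a server hook would run the equivalent check after creation or changes.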

So the Percipient tool has these features that allow for workflow automation; for more complex workflow automation, you should use a workflow engine like the open-source tool Flowable.

As an example, consider a workflow for the planning Bill of Materials (BoM), where you want to plan out each IP design block and its design schedule, creating a planning manifest for the whole project.

Using the Flowable Workflow Engine you can model this workflow, saved in the standard BPMN 2.0 format:

Main workflow
Component schedule

For this project IP we use multiple resource IPs, and the project schedules are created in Microsoft Project. Let’s look into each workflow step:

  • Start Event – the circle with a triangle icon; creates a new IP with a project label, sending a request to the Flowable REST API.
  • New IP Notification – a user is notified about a new IP.
  • Planning IP Designation – notification sent to planning IP manager for review and classification.
  • Retrieve Resource IP schedules – the top IP has dependent IPs, which use the sub-process workflow in the second diagram, component schedule.
  • Generate IP Schedule Manifest – top-level IP has a manifest of all resource IP schedules.
  • IP BoM Approval – a user approves the planning BoM, while a project manager reviews the complete Bill of Materials.

The Percipient tool integrates with the Flowable modeling tool by defining a new IP attribute in Percipient, and setting a JSON object for that property.

A workflow instance is made for a workflow model, linking the instance to a specific IP version. Version identifiers are used and passed as variables using the Percipient REST API. Attributes and properties can also be set or updated after a specific workflow operation.
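To illustrate, linking an IP version to a workflow instance boils down to a small JSON body posted to Flowable's REST API. This is a hedged sketch: the process key `planningBom`, the variable name `ipVersion` and the version string are my own placeholders, not names from the white paper:

```python
import json

def start_workflow_payload(process_key, ip_version):
    """Build the JSON body for POST .../runtime/process-instances,
    passing the IP version identifier as a Flowable process variable."""
    return {
        "processDefinitionKey": process_key,
        "variables": [
            {"name": "ipVersion", "type": "string", "value": ip_version},
        ],
    }

# Serialize for the HTTP request; the exact URL prefix depends on how
# Flowable is deployed, so it is omitted here.
body = json.dumps(start_workflow_payload("planningBom", "tuner@3"))
```

Once the instance starts, Flowable exposes `ipVersion` to every subsequent task in the workflow, which is how attributes and properties can be set or updated after a specific workflow operation.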

If a workflow requires human interaction then in Flowable you use a Forms application to design and process forms. Another way to get user input is through IP attributes with the Percipient REST API.

Summary

The combination of Percipient for IP Lifecycle Management and Flowable for workflow automation makes the IP and SoC design process more consistent and helps it comply with FuSa and security standards. This blog has introduced the concepts of workflow automation, events platforms, REST APIs, hooks and the Flowable tool for workflow modeling.

Why approach your design methodology in an ad-hoc method, when you can define and enforce how the project should follow best practices?

Vadim Iofis, VP of Engineering at Methodics, has written a 10-page white paper which you can read now after a brief registration process.



The GF Pivot, Specialization Defined

by Randy Smith on 10-04-2019 at 6:00 am

On August 27, 2018, GLOBALFOUNDRIES (GF) announced that they were no longer going to compete in the race to the next smaller semiconductor node, at that time, the 7nm node. While surprising to some, on further analysis this move made sense. TSMC had announced its plan to invest around $25B in the 5nm technology node. GF revenue is about $6B annually. Easy decision. Winning in semiconductor manufacturing is like most businesses – you must deliver value. Since GF wasn’t going to deliver the most advanced nodes how was it going to stand apart as a supplier? The answer at that time was stated as “Differentiated Offerings.” At GF’s recently held Global Technology Conference 2019 in Santa Clara, CA, we got a chance to see how that pivot was working. From what I heard, it appears to be going quite well.

After the keynote speeches, three GF senior vice presidents (SVPs) spoke about their teams’ efforts to deliver differentiated solutions to their markets. First up was Bami Bastani, SVP and GM, Mobile and Wireless Infrastructure SBU. A recurring theme was seen right away. In the keynotes we learned GF’s definition of the Innovation Equation: (platforms) x (application features) x (IP) = (specialized application solutions). So, to succeed in any market GF was going to specialize in, it would need to deliver advanced solutions for each of these areas for the applications in those markets.

Bami’s area is mobile and wireless, where low power is the dominant concern. Here, GF has a terrific platform with 22FDX®, a very low power Fully-Depleted Silicon-On-Insulator (FD-SOI) process with the industry’s only body-bias ecosystem. This process supports good speeds with excellent results for lower power and lower leakage. GF’s 22FDX also supports eMRAM-F, a low power embedded memory. This type of low power on-chip memory will be critical for AI/ML at the Edge as it provides a nice power/performance/density combination for IoT devices. If you need higher performance than can be achieved in 22FDX while still maintaining lower power consumption, then there is GF’s 12LP+. Here there are ongoing advanced techniques including 2.5D/3D packaging and an AI reference package. Bami also brought up GF AutoPro™, which brings features in support of ADAS, infotainment, powertrain, and other automotive subsystems.

You can’t discuss GF processes without talking about its work in wireless. Building on that expertise is the 22FDX® 77GHz radar transceiver. Radar usage will be growing in the automotive area, but we also cannot overlook the impact of 5G on automotive and many other application areas. While GF may not plan on building the big processor chip on the latest node, it looks like GF will provide very competitive solutions for much of the remaining electronics in 5G handsets. GF will be supporting the likely processes for wireless charging, NFC, display interfaces, and more. Of course, they will also be supporting the processes that will dominate in the base station radio and digital (baseband) components. There is clear differentiation provided by GF for these markets.

If you want to economically provide leveraged differentiation, you look for things you can do that will provide value to several different application areas. Gregg Bartlett, GF’s SVP, Engineering, Technology, and Quality, was up next to discuss these types of efforts at GF. As Gregg pointed out, we are moving into an era of high specialization. Generalized solutions are inefficient, both in speed and in power. This shift has led to many advances in application-specific processors. What GF is showing is that it also leads to specialized technologies – different processes for different applications. They can now show this across many different markets. The slide below is a small but interesting subset of the markets served. The number of unique processes supported by GF is very large.

The final speaker of the morning was Mike Cadigan, SVP, Customer Design Enablement. Mike first went through the various partnership models GF uses to support its customers. These relationships include infrastructure areas such as design technology co-optimization (DTCO), process design kits (PDKs), reference designs, IP/EDA ecosystem support, tape-out services, and post-fab services. These are required services to enable customer success. GF has hundreds of participating partners to enable customer designs. However, GF goes further, such as its 22FDX® Body Bias solutions and use models developed with Dolphin Integration. GF has many companies supporting IoT and wearables design on its platforms. There are also collections of partners supporting automotive designs. Quite noticeable were the prior and new solutions attributed to ARM. More on that in an upcoming blog.

It looks like the transformation of GF is going quite well, and the fruits of the transition are quite visible.  If you are looking for a semiconductor partner for anything RF, wireless, low power, automotive, or wearables, you should check out what GF has to offer now.


Synopsys and Infineon prepare for expanding AI use in automotive applications

by Tom Simon on 10-03-2019 at 10:38 am

We all know that cars are using processors for many tasks, but it is easy to fail to comprehend just how many there are in a typical modern car. Browsing through the Infineon AURIX automotive processor application guide, you can start to see just how pervasive processors are. The AURIX processors are specifically designed for automotive and industrial applications. The subsystems they are found in include:

      • doors, alarm, windows, locks, seats and mirrors
      • transmission, traction control and braking
      • airbags and safety
      • lights and blinkers
      • fuel injection, emission control and engine monitoring
      • battery management and charging for conventional and EV systems
      • infotainment
      • driver assistance and autonomous driving
      • navigation and communication

Each one of these systems requires one or more processors. Interestingly, if asked where AI might fit in, most people would automatically choose driver assistance and autonomous driving. However, with the increasing power and utility of AI systems, there are many new applications in cars for AI-based processing. Systems like engine control or traction management typically need to process huge amounts of data and perform large numbers of computations to do their jobs. Applying AI to these and other systems can dramatically improve their efficiency and even reduce the number of sensors needed. Some of these sensors are expensive, difficult to service and can be prone to failure.

 

Infineon understands the value of adding AI capabilities to their proven AURIX family of processors. They have chosen to work with Synopsys to develop a Parallel Processor Unit (PPU), which integrates Synopsys’ ARC® EV processor IP, for  the AURIX processor line. The addition of the PPU to the AURIX processors will greatly enhance real-time data processing capabilities. Since both Infineon and Synopsys are well versed in ISO 26262 and other automotive safety standards, end users can rest assured that safety and reliability will be paramount.

The PPU will support a wide range of AI algorithms, such as Recurrent Neural Network (RNN), Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Radial Basis Function (RBF). AI use can be expanded to applications such as intrusion detection and system monitoring. If anything, more and new uses for AI will be developed in the automotive space. During the ARC Processor Summit in September Infineon gave a presentation on “System Modelling for Real-Time Automotive Applications using Deep Learning and Complex Data Processing.”

By integrating the ARC EV processors in their PPU, Infineon AURIX customers will have immediate access to the comprehensive Synopsys MetaWare EV Development Toolkit for Safety, which should speed up software development for these applications.

Today we look back at cars that were developed before microprocessors and wonder at how they even worked. In the future, we’ll look back at cars built without AI in their systems and wonder how difficult it must have been to build them. This is part of a larger shift in computing in general, but the automotive segment is the tip of the arrow for applying many of these new technologies. Yet, while being on the leading edge, it is also an area where there can be no compromise on safety and reliability. This makes the automotive market a fascinating crucible for creating the systems and software that will be pervasive in the future. More information about the new AURIX PPUs powered by Synopsys DesignWare ARC EV Processor IP can be found on the Synopsys website.

 


AI Hardware Summit, Report #3: Enabling On-Device Intelligence

by Randy Smith on 10-03-2019 at 6:00 am

This is the third and final blog I have written about the recent AI Hardware Summit held at the Computer History Museum in Mountain View, CA. Day 1 of the conference was more about solutions in the data center, whereas Day 2 was primarily around solutions at the Edge. This presentation from Day 2 was given by Dr. Thomas Anderson, Head, Machine Learning and AI, Design Group at Synopsys. Thomas started his presentation with an analysis of the types of AI/ML applications that are particularly difficult to implement today and how Synopsys is helping designers solve this challenge. The journey this presentation went through then got increasingly interesting.

When we look at some of the current and near-future AI/ML challenges, we see huge scaling issues. Thomas pointed to the massive numbers shown in the diagram above. Granted, these are challenges for the data centers, but there are similar problems at the Edge as well. Next, Thomas mentioned some recent breakthrough advances in AI. He first pointed to Natural Language Processing, a problem typically addressed with supervised learning, as an application whose model must run (on TPU2) with a 100ms response time. As another example, Generative Adversarial Networks use two neural nets – one generates images and the other analyzes images – to learn how to detect fake images. AI breakthroughs are coming at a furious pace.

By now we have all seen the progression of AI solutions in the Cloud migrate from CPUs, to GPUs, and now to FPGAs and application-specific processors. Synopsys EDA tools have played a large role in building these solutions. Putting together these advanced technologies for computing at the Edge is just as difficult. Synopsys supports these efforts in many ways through software, tools, IP, and other advanced technologies. One example of this is the Synopsys DesignWare EV6x and EV7x Vision Processors. In a 16nm process, the EV6x convolutional neural network (CNN) engine delivers a power efficiency of up to 2,000 GMACs/sec/W, and the EV7x delivers an industry-leading 35 TOPS.  Both processors support a comprehensive software programming environment based on existing and emerging embedded vision and neural network standards.  Using these software and hardware building blocks can save a tremendous amount of time when building your AI chip. Read more about the EV Processor family here.

The presentation by Thomas went on to discuss many other tools and the IP Synopsys has available for these types of designs. I won’t go into detail here, but Platform Architect Ultra looked like a very useful tool for architectural exploration, a topic that came up repeatedly at the summit.

Rather than go into the other tools that were discussed in the presentation, I want to take a quick look at where the conversation went next – using AI in the chip design process itself. Two things we know AI can do well are searching a large solution space and training itself. So, the question is, “Can we train machines to build ICs?”

The AI program, AlphaGo Zero, went from zero to world champion in 40 days! It did this by teaching itself. By using a technique called reinforcement learning, the program played against itself to learn how to play better. It learned very quickly. This type of learning is quite interesting since it doesn’t rely on human knowledge or experience. Applied to chip design, we might find AI solutions that are completely different from previous human-designed chips. However, chip design is a lot more difficult than Go. According to Thomas, the estimated number of states in Go is 10^360, while the number of states in a modern placement solution likely exceeds 10^90,000.
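As a back-of-the-envelope check on that placement figure (my own sketch, not from the presentation): even the toy sub-problem of assigning 25,000 cells to 25,000 distinct legal sites has 25,000! candidate states, and log10(25,000!) works out to roughly 99,000 – a state space on the order of 10^99,000, consistent with a count exceeding 10^90,000 and utterly dwarfing Go:

```python
import math

def log10_factorial(n):
    """log10(n!) computed via the log-gamma function (avoids huge integers)."""
    return math.lgamma(n + 1) / math.log(10)

go_states_exp = 360                       # Go: ~10^360 states, as cited
placement_exp = log10_factorial(25_000)   # 25,000! orderings of 25,000 cells
print(f"placement ~10^{placement_exp:,.0f} vs Go ~10^{go_states_exp}")
```

The 25,000-cell figure is an illustrative assumption; real placement instances are far larger still, which is why the talk framed placement as a natural first target for reinforcement learning rather than whole-chip design.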

So, if the design space is so huge, how can we use AI to design chips in the near future? It seems likely by using a combination of reinforcement learning along with Neural Acceleration Search and, for now, focusing on one functional design area at a time, likely physical design problems such as placement. This technique, Neural Acceleration Search, announced by MIT earlier this year, provides a way to speed up learning by about 200x. While it is unlikely that AI techniques will be designing entire chips from a functional specification in the next decade, we may see tremendous advances by applying AI to several chip design tasks. It is good to know that Synopsys is researching these new technology advancements from non-EDA areas to apply them to difficult EDA problems.


Debugging SoCs at the RTL, Gate and SPICE Netlist Levels

by Daniel Payne on 10-02-2019 at 10:00 am


Debugging an IC is never much fun because of all the file formats used, the levels of hierarchy and just the sheer design size, so when an EDA tool comes around that allows me to get my debugging done quicker, I take notice and give it a look. I was already familiar with debugging SPICE netlists using a tool called SpiceVision Pro, but hadn’t spent much time learning about gate- and RTL-level debugging automation tools. Perfect timing, because EDA Direct recently organized a webinar on this very topic, so I signed up and will share what I learned.

Our webinar presenter was an AE named Sujit Roy from EDA Direct, and he has plenty of hands-on experience debugging designs at the RTL, Gate and SPICE netlist levels. The four products from Concept Engineering that were discussed and shown included:

  • RTLvision Pro – RTL debugging (Verilog, SystemVerilog, VHDL)
  • GateVision Pro – gate-level netlist debugging: Verilog, EDIF 2.0.0, LEF/DEF, Liberty, VCD, SDF
  • SpiceVision Pro – SPICE netlist debugging: HSPICE, Spectre, CDL, Calibre, Eldo, SPICE, SPEF/DSPF
  • StarVision Pro – All three of the previous tools, combined

By debugging, I mean the act of reading in netlist files, then graphically viewing, filtering and examining portions of a design throughout its hierarchy to understand the connectivity. Sujit first demonstrated RTLvision Pro by reading a digital design from OpenCores, then traversing the hierarchy using a tree widget in the left pane, while showing the auto-generated schematic in the right pane:

Sure, using a text editor you can view an RTL design, but you won’t understand the connectivity or be able to trace a signal path as quickly as a graphical tool. In the above diagram a net called RXD was selected and highlighted, in order to understand which block drives RXD, and which blocks read RXD. The tool allowed us to create a cone of logic starting from RXD and going any direction we wanted to explore.
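To illustrate what a logic cone is – this is a toy sketch of the general idea, not Concept Engineering's actual data model or API – here is a small gate-level netlist and a backward traversal that collects every gate in the fan-in cone of a chosen net:

```python
from collections import deque

# Toy netlist: each gate has a list of input nets and one output net.
GATES = {
    "U1": {"inputs": ["clk", "d"], "output": "q"},
    "U2": {"inputs": ["q", "en"], "output": "n1"},
    "U3": {"inputs": ["n1", "rst"], "output": "RXD"},
}

def fanin_cone(net, gates):
    """Return the set of gates in the fan-in cone of `net` (backward BFS)."""
    driver_of = {g["output"]: name for name, g in gates.items()}
    cone, queue = set(), deque([net])
    while queue:
        n = queue.popleft()
        gate = driver_of.get(n)      # primary inputs have no driver
        if gate and gate not in cone:
            cone.add(gate)
            queue.extend(gates[gate]["inputs"])
    return cone

print(sorted(fanin_cone("RXD", GATES)))  # -> ['U1', 'U2', 'U3']
```

Following nets forward instead of backward would give the fan-out cone – everything RXD drives – which is the other direction the tool lets you explore from a selected net.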

Logic Cone

RXD was driven by a FF cell, so we looked inside the FF cell and could view the source code in another pane. The Clock signal to the FF was selected and we could view all of the cells that used Clock, while hiding other signals to improve clarity.

Source Code

The drop-down menus had plenty of useful commands, and you can even run any of the 100 or so pre-built Tcl scripts. CAD groups can extend or even modify these Tcl scripts to automate design-specific debugging. SPICE netlists can be loaded, viewed and traversed:

SPICE Netlist

During the demo we saw a mixed-signal design that had an Analog block called Parity, along with a digital block called CPU:

Mixed-Signal Design

Even netlists that have extracted parasitics from a SPEF file can be quickly loaded, viewed and followed:

Parasitic Netlist

We saw even more cool, time-saving features of the StarVision tool in action on real designs, but I’ve covered the high-level features.

Summary

Most design engineers use simulators and some formal tools, so adding a tool like StarVision Pro will complement your debug flow and make your debug process go much quicker, because now you can really see where all of the signals go in a design; how all of the cells, blocks and modules are connected in a hierarchy; and what comes before and after any signal in a design. Design capacity is more than adequate, with customer designs of 20B gates being run.

You can start out using StarVision Pro in GUI mode first, then start to run batch scripts for anything repetitive.

To view the recorded webinar, follow this link, or contact EDA Direct to schedule a demo of StarVision Pro to learn more.



Acceleration in a Heterogeneous Compute Environment

by Bernard Murphy on 10-02-2019 at 5:00 am


Heterogeneous compute isn’t a new concept. We’ve had it in phones and datacenters for quite a while – CPUs complemented by GPUs, DSPs and perhaps other specialized processors. But each of these compute engines has a very specific role, each driven by its own software (or training, in the case of AI accelerators). You write software for the CPU, you write different software for the GPU and so on. This makes sense, but it’s not general-purpose acceleration for a unified code set. Could there be an equivalent in heterogeneous compute to the multi-threading we use every day in multi-core compute?

Of course we need to think outside the box on how this might work; you can’t just drop general code on a mixed architecture and expect acceleration, any more than you can drop general code on a multi-core system with similar expectations. But given sufficient imagination, it appears the answer is yes. I recently came across a company called CacheQ which claims to provide a compelling answer in this space.

The company was founded just last year and is headed by a couple of senior FPGA guys. Clay Johnson (CEO) was VP of a Xilinx BU for a long time before going on to lead security ventures, and Dave Bennett (CTO) has a similar background, leading software dev at Xilinx for many years before again joining Clay in the security biz and now in CacheQ. Funding comes from Social Capital (amount not disclosed).

Given their background, it’s not surprising they turned first to FPGAs as a resource for acceleration. FPGAs are becoming more common as resources in datacenters (just look at Microsoft Azure) and in a lot of edge applications, presumably for flexibility and easy field updates, and as an appealing option for relatively low-volume applications. They are also starting to appear as embedded IP inside SoCs.

Back to the goal. CacheQ’s objective is to let software developers start with C-code (or object code) and to be able to significantly accelerate that code by leveraging a combination of CPU and FPGA resources while speeding and simplifying implementation and partitioning between processor and FPGA. At this point I started to wonder if this was some kind of high-level synthesis (HLS) play. It isn’t (the object code option is perhaps a hint). They position their product as an ultravisor (think amped-up hypervisor). They build a virtual machine around an application which then goes through an optimization and partitioning phase, then into code generation and mapping across any of several possible targets: x86 servers or desktops, FPGA accelerators, embedded Arm devices or heterogeneous SoCs.

Still, doesn’t mapping onto FPGAs require going through RTL with all its concomitant challenges? Here the company provides some detail, though they are understandably cagey about providing too much. So here is what I can tell you. Part of what they’re doing is unrolling complex loops in the code and mapping these, with pipelining, into the FPGA. They also automatically create the stack to manage CPU to FPGA communication and they manage memory allocation transparently across these domains.
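To illustrate the general idea of the transformation (this is a generic sketch, not CacheQ's proprietary implementation), consider a simple accumulation loop. Unrolling it into independent partial sums removes the serial dependence on one accumulator, exposing parallelism that a mapping tool can then pipeline across FPGA resources:

```c
#include <stddef.h>

/* Straightforward dot product: every iteration depends on the single
   accumulator, so the loop is inherently serial. */
float dot_serial(const float *a, const float *b, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}

/* Unrolled 4x with independent accumulators: the four partial sums have
   no dependence on each other, so the four multiply-adds per iteration
   can be evaluated in parallel and pipelined. */
float dot_unrolled(const float *a, const float *b, size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)           /* remainder iterations */
        s0 += a[i] * b[i];
    return (s0 + s1) + (s2 + s3);
}
```

On a CPU this buys instruction-level parallelism; on an FPGA the same restructuring lets each partial sum get its own multiplier and the whole loop body become a pipeline stage, which is where the large speedups come from.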

The “aha” here is that they’re providing a way for software developers to get acceleration, not a way for hardware developers to build a design. This is a quite different intent from HLS and a goal that many have been chasing for a while. They don’t have to map everything to the FPGA, they just have to provide significant net speedup in critical pieces of code. They show some impressive numbers for key functions on their website.

I asked about active applications today. Clay mentioned use in weather simulation, industrial and government applications. I also asked about support for other potential accelerators (GPU, DSP, …). He said that these are in long-term planning; each can offer acceleration in its own way, I would guess for big matrix operations as an example.

This looks like an interesting approach to the long-standing problem of making FPGAs (and ultimately other platforms) more accessible to the general-purpose programmer. Worth a closer look. The website is HERE.


Webinar: OCV and Timing Closure Sign-off by Silvaco on Oct 10 at 10AM

Webinar: OCV and Timing Closure Sign-off by Silvaco on Oct 10 at 10AM
by Daniel Nenni on 10-01-2019 at 10:00 am

The old adage that the one constant you can always count on is change could easily be reworded for semiconductor design: the one constant you can count on is variation. This is doubly true. Not only is variation, in all its forms, a constant factor in design, but the methods of analyzing and dealing with it are continuously changing as well. For both of these reasons it is necessary to stay current with the latest developments in the role of variation in timing closure.

Designers are often faced with a tradeoff between less pessimistic results and extraordinarily long runtimes and large data sets. There is a long history of innovation that has brought us to what are considered the best approaches for massive designs. The foundation of the entire process relies on library characterization to produce models that support effective chip level analysis. Advanced nodes have further complicated the process.

Fortunately, Silvaco is offering a free webinar that will provide a useful update on both the history and state of the art for how on-chip variation can affect the sign-off timing flow. Silvaco has longstanding expertise at the process and cell level, as well as flows for chip level timing verification.

The October 10th webinar will be presented by Bernardo Culau, Director of Library Characterization at Silvaco. He has in-depth experience developing tools for library characterization. The topics that will be covered include:

    • What variation means in the context of library characterization
    • What on-chip variation is and its different causes
      • Inter-chip variation
      • Intra-chip variation
    • Review of different industry approaches to account for on-chip variation
      • OCV, AOCV, LVF
      • Their advantages and limitations
    • Current industry standards for variation-aware libraries
    • The improvements needed to handle leading-edge technology nodes
    • The characterization challenges involved in creating variation-aware libraries
    • Silvaco solutions for variation-aware library characterization
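To give a feel for the OCV-versus-AOCV distinction in the agenda above, here is a deliberately simplified sketch (the derate numbers are invented for illustration, not from any real library). Flat OCV applies one worst-case derate to every stage, so pessimism grows linearly with logic depth; AOCV-style tables taper the derate with depth because independent per-stage variations partially cancel along a long path:

```c
/* Flat OCV: every stage of the path gets the same worst-case derate,
   so pessimism accumulates linearly with logic depth. */
double path_delay_flat_ocv(const double *stage_delays, int n, double derate) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += stage_delays[i] * derate;
    return total;
}

/* AOCV-style: the derate shrinks with path depth.  The depth-indexed
   derate table passed in here would come from characterization; the
   values used below are purely illustrative. */
double path_delay_aocv(const double *stage_delays, int n,
                       const double *derate_by_depth) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += stage_delays[i] * derate_by_depth[i];
    return total;
}
```

For eight 100 ps stages, a flat 1.10 derate gives 880 ps of derated delay, while a table tapering from 1.10 down to 1.03 gives 847 ps; at sign-off, recovering that kind of pessimism can be the difference between closing timing and another ECO loop. LVF goes further still, characterizing delay distributions per arc rather than scalar derates.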

The free webinar will be offered at 10AM on October 10th. This seems like a good opportunity to stay current with the latest trends in a critical area of chip design. It also looks like it might provide a glimpse of what is ahead in this area. Should be interesting.

About Silvaco, Inc.
Silvaco Inc. is a leading provider of EDA tools and semiconductor IP used for process and device development and for advanced semiconductor, power IC, display and memory design. For over 30 years, Silvaco has enabled its customers to develop next-generation semiconductor products in the shortest time with reduced cost. We are a technology company outpacing the EDA industry by delivering innovative smart silicon solutions to meet the world’s ever-growing demand for mobile intelligent computing. The company is headquartered in Santa Clara, California and has a global presence with offices located in North America, Europe, Japan and Asia.


Webinar: Finding Your Way Through Formal Verification

Webinar: Finding Your Way Through Formal Verification
by Bernard Murphy on 10-01-2019 at 6:00 am

Finding your way through formal book

Formal verification has always appeared daunting to me, and I suspect to many other people as well. Logic simulation feels like a “roll your sleeves up and get the job done” kind of verification, easily understood, accessible to everyone, little specialized training required. Formal methods for many years remained the domain of academics and one-time academics performing an essentially black-box service to solve really hard problems unreachable in simulation. That’s changed, because those hard problems are becoming much more common and because verification tool providers have made it much easier to attack many cases without needing a PhD or induction into the formal priesthood. Better yet, there are excellent books to introduce novices to the domain and walk them through their first steps in using those tools.

REGISTER HERE to watch the recorded webinar.

One thing we felt was missing was a higher-level introduction, for a verification engineer, manager or director who is curious about formal but not yet ready to commit. They don’t want a tutorial; they want to know first “is this right for my organization?” “Is it really going to improve our verification quality or throughput?” “What are we going to have to do differently?” We wrote “Finding Your Way Through Formal Verification” for them. “We” here is myself (Bernard Murphy), Manish Pandey and Sean Safarpour. The book was published by SemiWiki and is available for download in the handouts section of this webinar.

Speakers:

Bernard Murphy – SemiWiki

Bernard Murphy is a freelance blogger and author, content marketing/messaging advisor for several companies and serves on the board of Mother Lode Wildlife Care in the California Gold Country. In a previous life he held down a real job as CTO at Atrenta. Earlier still, he held technical contributor, management, sales and marketing roles variously at Cadence, National Semiconductor, Fairchild and Harris Semiconductor. In his re-invention as a writer, Bernard has published well over 400 blogs between SemiWiki and EETimes. He received his BA in Physics and D. Phil in Nuclear Physics from the University of Oxford.

Manish Pandey – Synopsys

Manish Pandey is a Fellow at Synopsys, and an Adjunct Professor at Carnegie Mellon University. He completed his PhD in Computer Science from Carnegie Mellon University and a B. Tech. in Computer Science from the Indian Institute of Technology Kharagpur. He currently leads the R&D teams for formal and static technologies, and machine learning at Synopsys. He previously led the development of several static and formal verification technologies at Verplex and Cadence which are in widespread use in the industry. Manish has been the recipient of the IEEE Transactions on CAD Outstanding Young Author award and holds over two dozen patents and refereed publications.

Sean Safarpour – Synopsys

Sean Safarpour is the Director of Application Engineering at Synopsys, where his team of specialists support the development and deployment of products such as VC Formal, Hector and Assertion IPs. He works closely with customers and R&D to solve their current verification challenges as well as to define and realize the next generation of formal applications. Prior to Synopsys, Sean was Director of R&D at Atrenta focused on new technology, and VP of Engineering and CTO at Vennsa Technologies, a start-up focused on automated root-cause analysis using formal techniques. Sean received his PhD from the University of Toronto where he completed his thesis entitled “Formal Methods in Automated Design Debugging”.