
Even More Integration and Automation for ARM-based Designs

by Daniel Payne on 06-03-2015 at 8:00 am

The attraction of an IP-based design methodology is that you can assemble an SoC from ready-made IP blocks, saving valuable engineering development and verification time while reducing the risk of developing something from scratch and hoping it meets industry-standard specs. ARM is well known for supplying processor IP, interconnect IP and even debug IP, but how do you quickly and correctly connect all of that IP together, along with 3rd party IP? The clever engineers at ARM have just come to our rescue by announcing three configuration and integration tools:

  • CoreSight Creator
  • CoreLink Creator
  • Socrates DE

Here’s the high-level view of where each of these three tools is used to enable you to quickly configure and integrate IP:

As a methodology you would start with the Socrates DE tool, browse the IP catalog for the processor or subsystem you want, and configure that IP, creating an actual Bill of Materials (BOM) for your new system. Double-clicking on the CoreLink block takes you to a GUI called CoreLink Creator, where you configure the micro-architecture and even generate RTL code that includes testbenches. Your AMBA interconnect is now correctly constructed and ready to use as part of your RTL-based design and verification flow. CoreSight Creator will generate your debug and trace IP blocks.

With these new levels of automation in Socrates DE you can expect to save months of effort versus manual integration because the configuration is correct-by-construction, lowering the amount of verification you would normally require.

Related – New Suite of ARM IP for Mobile

ARM has been using this more automated approach internally as they develop IP, and now ARM partners can start to enjoy the benefits as well. You will see ARM adding more IP to their catalog in Socrates DE with each new quarterly release cycle, and ARM partners will be adding their IP too. There are at least 50 system IP tooling partners now, giving you a lot of IP choices. Within the next few months you’ll find all of the ARM IP included in the IP catalog of Socrates DE.

When ARM acquired Duolog last year, I was pleased to learn that the Socrates technology would be further developed to automate IP configuration and integration for ARM and 3rd party IP. The ARM systems and software group really is bringing it all together for the benefit of the SoC community.

Related – EDA Mergers and Acquisitions Wiki

If you're attending DAC next week in San Francisco, visit ARM at booth 2414, or see all of the community partners at booth 2428.


Making Things Visible for 25 Years

by Paul McLellan on 06-03-2015 at 7:00 am

This year is most notably the 50th anniversary of Moore’s Law. It is also the 25th anniversary of Concept Engineering, founded in 1990 in Freiburg, Germany. They started by providing automatic schematic generation from netlists, selling primarily to other EDA companies and to internal development groups in semiconductor and system companies. As synthesis became the dominant methodology for digital design in the 1990s, it became necessary to visualize the output of synthesis. The challenge was to make a schematic from a netlist in a way that was understandable by the designer; just randomly throwing the gates on the screen and hooking them up with wires wouldn’t work. Concept became the standard for doing this, and pretty much every EDA company (except Synopsys, which had created its own viewer earlier) standardized on Concept’s viewer. When I was VP engineering at Ambit in the late 1990s, we used it too. I was by no means alone: they have over 40 OEM customers in the EDA and semiconductor markets (including FPGA).

With the growth of IP-based design from the 2000s to today, the need to take netlists of various kinds and visualize them became even more important. IP from 3rd parties, and even from other groups inside the same company, needed to be understood by the designers building the SoCs, so that they could use the IP correctly and, often, remove functionality from the IP that was not required on that design. So Concept started to sell tools for visualizing netlists, RTL, transistor-level netlists and systems. Over 250 chip design companies have licensed Concept’s VISION line of customizable products to debug digital, analog and mixed-signal designs.

As the company’s tag-line says “We make things visible.”

It is impossible to argue with Gerhard Angst, Concept’s CEO, when he says:
A quarter of a century is a significant milestone for any technology company. We could not have reached this milestone without the continued support and trust of our customers, and the passion and commitment of our staff.


But they are not just blowing out candles on their birthday cake; they have a complete new release, version 6, of their product line. It will be on show next week at DAC in booths 2208 and 2210. You can see the latest releases of StarVision PRO, RTLvision PRO, GateVision PRO and SpiceVision PRO.

What’s new in version 6? Some notable enhancements are:

  • Improved netlist pruning: In addition to Verilog and SPICE netlist export and pruning, StarVision PRO now also allows netlist pruning for the most common post-layout formats, DSPF and SPEF.
  • Advanced post-layout debugging: Improved visualization and debugging of parasitic networks.
  • API improvements: Improvements in the database API and GUI API allow even more sophisticated code to be developed and executed by the tool.
  • Advanced batch processing: Enhanced batch processing capabilities allow more efficient processing of user-defined analysis and debugging tasks.
  • Unified File Open Dialog: Makes it easier to load complex mixed-language SoC designs and libraries.
  • Improved visual debugging capabilities such as: Smart connectivity lens view, improved schematic navigation history, and on-the-fly hierarchy exploration with built-in fold and un-fold controls.

As an example of how other EDA companies use Concept’s technology as a foundation, earlier this week Aldec and Concept Engineering announced that Aldec’s ALINT-PRO-CDC clock-domain crossing verification tool is using Concept’s Nlview schematic visualization engine. This allows the tool to combine Aldec’s advanced analysis with Concept’s easy-to-read schematic diagrams to create an advanced debugging cockpit for tracking down and fixing clock-domain crossing problems.

The press releases are:

  • 25th anniversary here
  • Version 6 of the product line here
  • Aldec’s clock domain crossing solution here

A Robust Lint Methodology Ensures Faster Design Closure

by Pawan Fangaria on 06-03-2015 at 4:00 am

With the increase in SoC designs’ sizes and complexities, the verification continuum has grown to the extent that strategies for design convergence need to be applied from the very beginning of the design flow. Often designers are stuck with never-ending iterations between RTL, gate and transistor levels at different stages of design. In this light, full-chip analysis and verification completion for a large SoC may look like a distant dream. A significant number of iterations can be eliminated by identifying and fixing bugs at the source, i.e. the RTL.

One of the most effective ways to fix issues in the RTL is by running lint checks on the RTL code. But imagine the RTL code for a design with several hundred million gates; not only can the tool’s capacity and performance become prohibitive, but the huge number of violations can also become unmanageable. So, what are the alternatives? Well, if we could use lint checks in a smarter way to cover the complete design in reasonable time and effort, that would be a great way to make the design robust at RTL, for better convergence throughout the downstream design flow.
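
To give a flavor of what a lint rule actually does, here is a toy sketch in Python of one classic check: flagging blocking assignments inside clocked always blocks, a common source of simulation/synthesis mismatch. This is purely illustrative; commercial linters parse and elaborate the HDL rather than pattern-matching text, and this rule is my own simplification.

```python
import re

# Toy lint rule (illustrative only): flag blocking assignments ("=")
# inside clocked always blocks. Real linters elaborate the HDL; they
# do not pattern-match source text like this.
CLOCKED_BLOCK = re.compile(r"always\s*@\s*\(\s*posedge[^)]*\)\s*begin(.*?)\bend\b",
                           re.DOTALL)
BLOCKING = re.compile(r"\s*\w+(\[[^\]]*\])?\s*=[^=]")

def lint(rtl: str) -> list[str]:
    violations = []
    for block in CLOCKED_BLOCK.finditer(rtl):
        for line in block.group(1).splitlines():
            if BLOCKING.match(line):
                violations.append(line.strip())
    return violations

rtl = """
always @(posedge clk) begin
    q <= d;             // fine: non-blocking assignment
    count = count + 1;  // flagged: blocking assignment in a clocked block
end
"""
print(lint(rtl))
```

Even this crude version shows why violation counts explode at full-chip scale: every instance of the pattern in hundreds of millions of gates of RTL becomes a message someone has to triage.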

For complete analysis coverage, a flat design investigation is required, which necessitates longer runtimes and higher memory consumption. Also, when lint is run at the top level, block-level waivers are required to defer violations in already-verified sub-blocks that lack sufficient or consistent constraints. It can be difficult and time-consuming to resolve inconsistencies between block- and chip-level lint rules.

A flat design analysis can also be carried out with IP or blocks treated as black boxes, thus focusing only on chip-level modules and glue logic. This approach can improve analysis runtime and reduce memory consumption and the violation-management burden. However, a major drawback of this approach is reduced analysis coverage and poor QoR. Inter-block issues, and several other issues such as improper use of clock and set/reset signals generated by an IP module, remain undetected and uncovered.

Atrenta has come up with a novel approach that can provide complete analysis coverage for an SoC with shorter runtime and lower memory consumption, and without the need for any waivers at the block level. They use smart “Abstract Models” for IP blocks, a concept pioneered by Atrenta for full-chip analysis. How does the methodology with abstract models work? Let’s see an example –


In the above pictures there are abstract model views with a couple of typical input and output ports. An abstract model contains important information about the block’s interfaces, such as its port types, their directions, and the signals connected to them. This information is used in inter-block lint checks such as combinational loops fanning across multiple blocks, un-driven input terminals, and so on. The abstract model also allows constant propagation, which helps in detecting structural issues. The comprehensiveness of the interface-level information in the abstract models ensures the completeness of analysis coverage at the SoC level.

The “Abstract Model” based SoC lint analysis is done hierarchically in two steps –

In the first step, the block level constraints and assumptions are verified within the context of the SoC. This step ensures that the abstract models are in sync with the SoC analysis environment and requirements. Any inconsistencies and mismatches between block and chip level analysis are identified at this stage. In the second step, the final SoC analysis is done by using these verified abstract views for lower level blocks or IPs. No waiver is required at the SoC level. Since the lower level blocks are fully verified, the violations occur only at the chip level and can be easily managed.
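
To make the two-step flow concrete, here is a minimal Python sketch of hierarchical checking against block abstracts. All the names and the net format are invented for illustration; Atrenta's actual abstract models carry far richer interface information (port types, constant propagation, and so on).

```python
from dataclasses import dataclass, field

# Illustrative sketch of hierarchical lint with abstract models.
# The data model is invented; real abstracts carry much more detail.
@dataclass
class AbstractModel:
    name: str
    inputs: set[str]
    outputs: set[str]
    assumptions: dict[str, str] = field(default_factory=dict)  # port -> expected signal kind

def check_soc(blocks, nets):
    """nets maps a net name to (drivers, loads), where loads are
    (block, port) pairs; returns chip-level violations only."""
    violations = []
    driven_inputs = set()
    for net, (drivers, loads) in nets.items():
        # Step 1: validate each block's interface assumptions in SoC context
        for blk, port in loads:
            if blk.assumptions.get(port) == "clock" and not net.startswith("clk"):
                violations.append(f"{blk.name}.{port}: expects a clock, got '{net}'")
            driven_inputs.add((blk.name, port))
        if not drivers:
            violations.append(f"net '{net}' has loads but no driver")
    # Step 2: chip-level analysis using the verified abstracts,
    # e.g. flag undriven block inputs; block internals are not re-checked
    for blk in blocks:
        for port in blk.inputs:
            if (blk.name, port) not in driven_inputs:
                violations.append(f"{blk.name}.{port}: undriven input")
    return violations

cpu = AbstractModel("cpu", {"clk", "irq"}, {"req"}, {"clk": "clock"})
dma = AbstractModel("dma", {"clk"}, {"ack"}, {"clk": "clock"})
nets = {
    "clk_main": ([("pll", "out")], [(cpu, "clk"), (dma, "clk")]),
    "bus_req":  ([(cpu, "req")], []),
}
print(check_soc([cpu, dma], nets))
```

Because the block internals are represented only by their interfaces, the violations that remain are exactly the chip-level ones, which matches the article's point that no SoC-level waivers are needed.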

Atrenta’s customers have verified many SoCs with this approach. Larger SoCs, on the order of 200 to 350 million gates, show an improvement of ~10x in runtime and a reduction of ~5x in memory consumption with this hierarchical approach compared to the flat analysis approach. The hierarchical approach also shows better inter-block coverage than the IP black-box approach. At the same time, the violations are meaningful and easily manageable.

By using this lint methodology effectively at the RTL level, designers can quickly identify and remove potential issues related to design initialization, bus integrity, unreachable or unknown states, and FIFO underflow or overflow, and then sign off the RTL for synthesis and implementation. Lint-clean RTL makes way for faster convergence of the SoC through the downstream implementation and verification flow.

Atrenta is unveiling this new lint methodology for SoC signoff at the 52nd DAC. Visit their booth #1732 to learn more.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Why is Intel going inside Altera for Servers?

by Eric Esteve on 06-02-2015 at 12:30 pm

If you expect ever more power to be consumed in datacenters, you should be happy to hear that Intel will buy FPGA challenger Altera! In 2013 the power consumed by server and storage ICs, plus the electricity consumed cooling these high-performance chips, reached 91 BILLION kWh (the equivalent of 34 power plants of 500 MW each, or a $9.1 billion electricity bill). Could we see this power consumption stabilize or even decrease in the near future, thanks to Moore’s law or anything else? No way! First, because the amount of data exchanged (and stored) in the cloud is growing by 60% per year. This is the natural evolution linked with the smartphone explosion and with the evolution of our common behavior: we want to capture images and sound (and store them), share them through the cloud, and watch TV, series or movies on the move, and so on.
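
Those headline figures are self-consistent, as a quick back-of-the-envelope check shows. The ~10 cents/kWh electricity price and ~60% plant capacity factor below are my assumptions, not numbers from the article, but they are what make the $9.1B bill and the 34-plant equivalence line up:

```python
# Back-of-the-envelope check of the figures quoted above.
# Price and capacity factor are assumptions, not from the article.
energy_kwh = 91e9                  # annual server/storage-related consumption
price_per_kwh = 0.10               # USD, implied by the $9.1B bill
plant_mw = 500
capacity_factor = 0.61             # fraction of nameplate output actually delivered

bill = energy_kwh * price_per_kwh
plant_kwh = plant_mw * 1e3 * 8760 * capacity_factor  # one plant's annual output
plants = energy_kwh / plant_kwh
print(f"${bill/1e9:.1f}B bill, ~{plants:.0f} plants of {plant_mw} MW")
```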

Why link the Intel/Altera deal with power consumption increases in the datacenters? As a matter of fact, Intel has the lion’s share in datacenter servers, based on the x86 architecture. We know that this CISC architecture was initially designed for performance, at a time (the 1990s) when the need for compute power in PCs and servers was crucial. In the datacenter, the need for ever higher compute power is asserted along with the need for a certain level of flexibility, in order to quickly adapt an installed system to protocol evolution or new features. On top of this need for flexibility, it has been shown that the x86 architecture is not well tailored to run search engine algorithms. In the x86 case, the only option is to write software. Designers have tried to improve efficiency by using GPUs – better than x86, but not optimal. A team at Microsoft used FPGAs instead and reported a 95% improvement compared with x86. Not surprising, as FPGAs offer much better flexibility than x86.

What was surprising was the way Wall Street reacted to this news. Some people who clearly didn’t understand anything about high tech, in particular the difference between software design, FPGA development and ASIC technology, thought that FPGAs were the panacea for search engine algorithm development in datacenters. Not only did they think it, they wrote it (search for: “What Intel’s Buyout Of Altera Means For The FPGA Industry”, a superb example of writing about a topic that the author absolutely doesn’t understand). And Wall Street decided that Intel should buy Altera to create a synergy, shipping $2,000 FPGAs consuming 50 to 100 W along with their $500 server chips!

The problem is simple: the same algorithm running on a $2,000 FPGA (consuming several dozen watts) will run, probably faster, on a $20 ASIC consuming 5 to 10 watts! I agree that Intel will be happier to sell a $2,000 part than a $20 ASIC, but is that enough to build a strategy? By the way, if you don’t trust me, just think of the secure networking chips designed by Broadcom (Netlogic), able to screen networking frames on the fly and detect viruses, the virus database being updated daily (thanks to flash memory).
So, is Intel buying Altera a good deal? Altera is part of the top 20 semiconductor vendors, selling certain new products with 80% gross margin and enjoying a strong customer base in networking, industrial, automotive, consumer and more, so Intel will most probably benefit from this investment (when the interest rate is close to zero, almost any acquisition is more valuable than leaving the money in the bank!). Should Intel/Altera develop a synergistic solution for the datacenter? Not only do I not think so… I hope not, at least for the planet!

From Eric Esteve from IPNEST


The State of Desktops, Notebooks and Tablets

by Daniel Payne on 06-02-2015 at 10:00 am

The personal computing market started out back in the late 1970s, with IBM being a relative latecomer in 1981, and over the decades we’ve seen unit volumes steadily increase each year, driving demand for semiconductors of all types. IC Insights is a research company that follows the personal computer market, and they define this market as having several categories:

  • Standard PCs (desktops and notebooks)
  • Tablets (e.g. iPad)
  • Internet/Cloud-computing systems (e.g. Chromebook)

I’ve been a notebook user for the past 15 years, so it’s been a long time since I owned a desktop, although I have three sons that use custom-built desktops for gaming with high-end graphics cards. Our household also has four notebooks, two iPads, a Nexus 7 tablet, and a Chinese-brand tablet. We own no cloud-based systems.

Observations from the newest IC Insights report include:

  • Tablets took off in 2010 with the iPad
  • Tablet sales became greater than notebooks in 2013
  • Tablet growth slowed suddenly in 2014
  • Large-screen phones like my Samsung Note 4 slowed tablet growth even more in 2015
  • The overall PC market will decline in 2015
  • Desktops and notebooks peaked in 2012 at 345M units

I agree: the 5.7″ display on my smartphone makes it easy to carry everywhere, all day, decreasing my need to use a tablet or notebook to review updates on LinkedIn, Twitter or Strava.

IC Insights now predicts a CAGR of just 2.1% for PC unit shipments from 2013 to 2018, with total PC shipments reaching 578 million in 2018. Tablets are expected to make up a healthy 45% of total systems sold in 2018, down from the previous forecast of 57%. It’s startling for me to read how quickly our world economy responds to technology trends and to witness the first decline in total PC shipments this year.
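
The forecast numbers can be sanity-checked with the compound-growth formula; the 2013 base computed below is derived, not taken from the report:

```python
# Sanity check of the forecast: 578M units in 2018 at a 2.1% CAGR
# from 2013 implies a 2013 base of roughly 521M units.
shipments_2018 = 578e6
cagr = 0.021
years = 2018 - 2013

base_2013 = shipments_2018 / (1 + cagr) ** years
tablets_2018 = shipments_2018 * 0.45   # forecast tablet share of total systems
print(f"implied 2013 base: {base_2013/1e6:.0f}M units")
print(f"tablets in 2018:   {tablets_2018/1e6:.0f}M units")
```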

I’ve played with the Internet-centric Chromebooks from Google and Samsung while shopping at my local BestBuy store, but have never been impressed enough to actually buy one, especially in light of how they compare against a more fully featured laptop at a slightly higher price point. When my youngest daughter started college we thought about buying a Chromebook, but instead opted for a Samsung laptop priced at just $75 more.

For the full report details visit IC Insights and request their IC Market Drivers 2015 report, priced at $3,390 to an individual or $6,490 for a multi-user corporate license.


Ultra-Low Power Non-Volatile Memory Solutions for the Smart Connected Universe

by Tom Simon on 06-01-2015 at 6:00 pm

DAC is a great place to gather information about products and technologies. However, it can be difficult to chase down the information you need, because you may have to cover a lot of ground to reach the people with the right knowledge. Fortunately there are a few spots where you can learn about a number of products in one place. A really good example is the Open Innovation Platform Theatre hosted by TSMC. Throughout this year’s DAC they will have speakers from their ecosystem partners giving short presentations on important topics.

One such presentation will be given by Sidense CTO Wlodek Kurjanowicz on the topic of Ultra-Low Power Non-Volatile Memory Solutions for the Smart Connected Universe. Sidense sees the combined market segments of mobile, IoT, medical, automotive and the cloud infrastructure needed to support them as key areas for product development. Low power requirements are prevalent within this category. Durable non-volatile memory is needed for many purposes, including security codes, calibration trim information storage, device IDs, secure boot code storage, etc. The ability to use standard CMOS processes and to minimize power consumption are important success factors.

Sidense will also be presenting at the Chip Estimate booth on the more general topic of Memory Requirements in the Smart Connected Universe. As is usually the case I’m sure there will be a video of this presentation produced by Chip Estimate for viewing later. There are already videos concerning Sidense available at Chip Estimate. A lot of useful information about Sidense and their offerings can be found on that page, including recent additions to their process availability matrix.

However seeing the presentation in person and having the opportunity to speak directly with their technical experts is invaluable. Webinars and video conference calls are convenient and useful, but meeting people face to face can never really be replaced. Hopefully you can leave DAC with your questions answered and much more confident in a vendor’s ability to deliver critical elements for your projects.

In the case of Sidense, it is their IP that gets incorporated into the finished product design. Understanding their foundry qualification process, design methodology, and interface and programming options by having them explained first hand would be hard to pass up if you are looking for a non-volatile memory solution.

The TSMC OIP presentation will be offered on Monday 6/8 at 11:30 AM, Tuesday 6/9 at 3:15PM and Wednesday 6/10 at 2:00PM in booth #1933. The Chip Estimate presentation will be offered on Tuesday 6/9 at 2:30PM and Wednesday 6/10 at 1:30PM at booth #2433. For info on both presentations there is a link here.


Aldec packs 6 UltraScale parts on HES-7

by Don Dingee on 06-01-2015 at 12:00 pm

A few months ago, when the Xilinx UltraScale VU440 FPGA began shipping, one of the immediate claims was a quad-FPGA-based prototyping board touted as “Godzilla’s Butcher on Steroids”. That was a refreshing and creative PR approach, frankly. I’m always careful with less creative terms like “world’s biggest” or “world’s fastest”, because they can overstate a snapshot – such a claim can easily dissipate tomorrow. I prefer a term like “industry first” since it recognizes that.

If the news of four UltraScale parts on a board was big back then, Aldec’s announcement of six UltraScale VU440 parts on a single board is bigger. This is a major upgrade to the Aldec HES-7 12000 platform, supplanting the Xilinx Virtex-7 devices on it. In the previous configuration, four boards each with six Virtex-7 parts offered up to 288M gates.


This new UltraScale frontier for HES-7, coming in 3Q15, puts together four boards each with six UltraScale VU440s for a capacity of up to 633M gates. I do expect others to try to match the sheer capacity of this FPGA-based prototype offering soon, but for now, Aldec leads with an industry first.

Matching the rest of the Aldec HES-7 offering will be more challenging for its competitors. Aldec’s high speed backplane supports the interconnect needed to keep UltraScale parts running at their potential. Their HES-DVM automated partitioning capability leverages SCE-MI to help connect the physical FPGA hardware to software simulation features for more complete verification. Aldec offers a range of off-the-shelf daughtercards for HES-7, including a Xilinx Zynq-based board, along with support for FMC modules. They also offer custom design services for daughtercards, aiding in incorporating exact copies of hardware, crucial in safety-critical and DO-254 validation.

UltraScale parts introduce another aspect where Aldec has an industry leading solution: clock domain crossings (CDCs). The clock resources in the UltraScale architecture have been completely redesigned. The good news is that design flexibility and ease of synthesis closure have been greatly increased. At the same time, the odds of CDCs occurring have also increased. Without mitigation, CDCs can cause unpredictable effects such as metastability and data incoherence. Aldec ALINT-PRO-CDC is geared to comprehensively find CDCs, examine synchronizer constructs, and flag issues for designers. This tool is handy for both FPGA designers and SoC teams using FPGA-based prototyping, since CDCs play no favorites – especially when doing manual partitioning.
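
Structurally, a CDC is just a flop whose data fanin originates in a different clock domain, which a tool can find by walking the netlist. Here is a minimal Python sketch with an invented netlist format; a real tool like ALINT-PRO-CDC additionally recognizes synchronizer structures and checks them for correctness:

```python
# Minimal structural CDC detection sketch. The netlist format is invented:
# each flop records its clock and the flops driving its data input.
flops = {
    "tx_data": {"clk": "clk_a", "fanin": []},
    "sync1":   {"clk": "clk_b", "fanin": ["tx_data"]},  # domain crossing
    "sync2":   {"clk": "clk_b", "fanin": ["sync1"]},    # same domain, fine
}

def find_crossings(flops):
    crossings = []
    for name, f in flops.items():
        for src in f["fanin"]:
            if flops[src]["clk"] != f["clk"]:
                crossings.append((src, name, flops[src]["clk"], f["clk"]))
    return crossings

print(find_crossings(flops))
```

The hard part in practice is not finding crossings like the `tx_data` to `sync1` path above, but deciding which ones are already safely synchronized, which is where synchronizer-recognition logic comes in.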

In other words, Aldec is not just gluing constantly bigger parts on boards and calling it a day. They are assembling a complete set of capability for FPGA-based prototyping, from design to simulation to debug to verification to compliance, enabling more FPGA and SoC designers to get more done quickly and reliably. There is more of this story in the press release:

Aldec HES-7 with Xilinx Virtex UltraScale Devices Enables True FPGA-based Verification

For those attending DAC 52 in fabulous San Francisco, with exhibits starting June 8, Aldec will be offering eight technical sessions on a range of topics. Session 1 focuses on FPGA-based prototyping and the scalability UltraScale devices bring to HES-7 Ultra. Registration for these sessions is free to DAC exhibit attendees but first-come, first-served with limited seating; simply follow the online form linked above to reserve a spot.


The Trojan Horse Was Free Too

by Paul McLellan on 06-01-2015 at 7:00 am

Timeo Danaos et dona ferentes. I fear the Greeks especially when bearing gifts. In Virgil’s Aeneid these words are spoken by the Trojan priest Laocoön warning about the wooden horse that the Greeks have offered Troy. But to no avail, Laocoön is slain by serpents and the Trojans bring the horse inside the walls of Troy. Since the horse was full of Greek soldiers this turned out to be, shall we say, sub-optimal.

The free eFuse cell from foundries can be sub-optimal too. Is it just a gift, or a Trojan horse? While it would be going too far to fear the foundries when they are bearing gifts of free IP, in the case of eFuses those gifts do come with their own set of problems hidden inside. This is especially so as we get down to small process geometries. At 16nm and below the eFuse cell is getting so large that it threatens to dominate the chip in designs that require large NVMs. A much more practical choice is an antifuse approach. With lower power, higher speed, and 1/300th of the area, what’s not to like?

When you start to take security into account, and many NVMs are used for holding serial numbers, encryption keys and the like, then antifuse becomes even more attractive. eFuse bits can be read out by looking at the bit cells to see if the fuse is blown or not. Antifuse stands up to even the most vigorous and destructive attempts to read out the programmed value, not just to commercial standards but military too.

One more big advantage is that the antifuse-based NVMs do not require special power supply voltages and so can be programmed after packaging, whereas eFuse-based memories normally have to be programmed on the wafer before they are packaged. Programming after packaging can make supply chain management a lot easier since there are many applications where the code to be programmed is not known until late in the manufacturing cycle.

If Orange is the New Black then Antifuse is the new Fuse. Kilopass will be on three booths at DAC next week showing various aspects of their antifuse NVM technology and why it is the NVM bitcell for the future.

They will present “Antifuse Memory: The New NVM Foundation IP” in the ChipEstimate booth (#2433) on:

  • Monday, June 8, at 1:30 p.m.
  • Tuesday, June 9, at 11:30 a.m.
  • Wednesday, June 10, at 2:30 p.m.

They will demonstrate in the ICScape booth (#1602) how ICScape’s design tools have contributed to the development of Kilopass’ wide range of NVM IP: ultra-low-power and high-performance operation, fast access speed, megabits of capacity, and more than 10 years of data retention.

In the TSMC Booth (#1933), Kilopass will showcase its NVM IP’s availability on all TSMC process nodes from 180nm to 20nm and offer a look at how antifuse technology is the future NVM foundation IP, replacing eFuse below 16nm. Their sessions are scheduled for:

  • Monday, June 8, at 2:15 p.m.
  • Tuesday, June 9, at 4:45 p.m.
  • Wednesday, June 10, at 10:15 a.m.

Will those IO pad rings pass foundry muster?

by Beth Martin on 05-31-2015 at 10:00 pm

I was talking recently to Dina Medhat, a senior technical marketing engineer at Mentor, about, of all things, IO rings. It had not occurred to me that verifying your IO rings’ compliance with foundry rules presents new challenges.

IO ring checking isn’t new, nor is it unique to advanced IC process nodes. However, the same forces of complexity and physics are in play in all aspects of IC design, requiring careful consideration when planning IO pad rings. Medhat says there is a distinct need for a robust, automated flow to do IO ring checking. She told me about some of these challenges and what she’s been doing to create an automated LEF/DEF-based IO ring checking flow that is flexible and can target different foundries.

Consider that designs typically include IP from multiple vendors, and each vendor has its own set of rules. One important goal of pad cell placement is good electrostatic discharge (ESD) protection when co-locating dissimilar types of cells, such as digital logic, analog cells, processor cores, IO power pads, IO ground pads, termination cells, and so on. Evaluating the design against the many different IP rules, and more especially the rule interactions, depends on automated checking. The remaining question for design teams is how to set up such a flow.

First, says Medhat, let’s look at what the foundries provide—a design rule manual (DRM) with guidelines for pad cell placement that guarantee the required ESD protection when using a given library. Designs must follow these rules when digital, analog, core input, and output (IO) power and ground pads are placed in an IO ring. Common rules that you see in DRMs include:

  • Cell types that can be used in an IO ring
  • Minimum number of a specified power cell per IO ring section and given power domain
  • Maximum spacing between two power cells for a given power pair in a power domain
  • Maximum distance from the IO ring section termination to every power cell
  • Maximum distance from IO to closest power cells
  • Maximum continuous IO ring section of filler without any interruption (breaker or dummy ESD cells)
  • Cells that must be present at least once per corresponding power domain section
  • Constraints for multi-row implementations
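
Each of these rules reduces to a simple geometric predicate once cell placements are extracted from the DEF. As an illustration, here is a toy Python version of the maximum-power-cell-spacing rule; the cell list, coordinates and limit are all invented, and a production flow such as Calibre PERC works from real LEF/DEF data while also handling corners, multiple power domains and breaker cells:

```python
# Toy version of one DRM rule: maximum spacing between two power cells
# in a pad ring. All names and numbers here are invented for illustration.
MAX_POWER_SPACING_UM = 400.0

ring = [  # (cell_type, position along the unrolled ring edge, in um)
    ("PWR",   0.0),
    ("IO",  120.0),
    ("IO",  250.0),
    ("PWR", 380.0),
    ("IO",  600.0),
    ("PWR", 810.0),   # 430 um from the previous PWR cell -> violation
]

def check_power_spacing(ring, limit):
    power_x = [x for kind, x in ring if kind == "PWR"]
    # Report each adjacent pair of power cells spaced farther apart than allowed
    return [(a, b) for a, b in zip(power_x, power_x[1:]) if b - a > limit]

print(check_power_spacing(ring, MAX_POWER_SPACING_UM))
```

The engineering work in a real flow is less in predicates like this one and more in parameterizing them per foundry, per node and per IP vendor, which is exactly the constraints-interface problem discussed below.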

The obvious question is “How can I make sure that my design is safe and that I applied all these rules correctly?” A more subtle question is “Are the rule constraints for these cells the same for all IPs and all foundries, or are they cell- and foundry-specific? If they are different, how do I handle this complexity in an automated flow?”

Of course, constraints are different from one foundry to another, as well as from one technology node to another, and one IP supplier to another. Complying with all of these rules is extremely important, but it’s not easy, and it’s not something you want to do manually. Designers need an automated solution to ensure compliance and improve ESD protection in their designs, but it must also be flexible enough to handle all the details and variations without overwhelming the design team with rule coding, says Medhat.

As with most IC design technologies these days, the ‘ecosystem’ code word applies. The EDA vendors must work with their customers to establish an automated framework to verify compliance with the foundry’s IO placement rules. Medhat has recently spent a fair amount of time demonstrating the practicality of this approach using Calibre PERC on real customer designs.

“The input are LEF/DEF files, and all the common rules are already coded in our IO Ring Checker framework,” says Medhat. “Users define their unique constraints using the constraints interface (input form), which is part of the framework, then point to their LEF/DEF database. Executing the IO Ring Checker framework generates two outputs: a violations text report and a violations database, both of which can be loaded into a results viewer like Calibre RVE to debug violations graphically.”

The IO Ring Checker is pretty new, which is why Medhat is presenting results at the DAC Work-In-Progress session on Wednesday evening, June 10, from 6:00-7:00 pm. Look for “LEF/DEF IO Ring Check Automation” (86.65).

Are any of you incorporating automated techniques for IO ring checking? If so, what techniques are you using, and what are your challenges, results, and best practices?


NVIDIA and Qualcomm Talk about High Level Synthesis, Samsung on Low Power for Mobile

by Daniel Payne on 05-31-2015 at 4:00 pm

Since 1978 I’ve seen many trends in the semiconductor design world: transistor-level IC design, gate-level design, RTL coding, High Level Synthesis (HLS) and IP re-use. We’ve witnessed growth in design productivity, enabling chips from just thousands of transistors all the way up to billions of transistors, by using newer design paradigms and more advanced process nodes, like 14 nm FinFET in production now. One EDA company focused on HLS and low power design has an interesting story to tell at the upcoming DAC conference and exhibit in June, having invited NVIDIA, Qualcomm and Samsung to talk about their hands-on experiences. Calypto is the company, and I’ve just chatted with Mark Milligan to get a preview of what’s to come at DAC.

Related – Verifying the RTL Coming out of a High-Level Synthesis Tool

FinFET transistors have been all the rage ever since Intel started talking about tri-gate a few years back, and since then we’ve seen foundries like TSMC, Samsung and GLOBALFOUNDRIES all provide FinFET processes to designers. Leakage power is reduced in FinFET technologies, but on the flip-side there’s an increase in dynamic power that needs to be dealt with during design. On the EDA methodology side you can consider using a tool like PowerPro from Calypto to:

  • Support multiple use case scenarios when creating low-power RTL
  • Explore RTL alternatives for low power
  • Apply guided or automated optimization using formal equivalency proofs
  • Get quick, early RTL power analysis at both block and chip levels
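The dynamic power concern above follows directly from the standard switching-power formula, P = α·C·V²·f: leakage drops on FinFET, but activity (α), capacitance and frequency still drive dynamic power, which is exactly what RTL techniques like clock gating attack by lowering the activity factor. The numbers below are purely illustrative, not from any real FinFET process or from PowerPro.

```python
# Back-of-the-envelope dynamic power: P = alpha * C * V^2 * f.
# All values below are illustrative, not from any real process library.

def dynamic_power_w(activity, cap_farads, vdd_volts, freq_hz):
    """Switching power of a node: activity factor x C x V^2 x f."""
    return activity * cap_farads * vdd_volts ** 2 * freq_hz

base  = dynamic_power_w(0.20, 1e-15, 0.8, 2e9)  # a 1 fF node at 2 GHz
gated = dynamic_power_w(0.05, 1e-15, 0.8, 2e9)  # same node, clock-gated
print(f"{(1 - gated / base) * 100:.0f}% dynamic power saved")
```

Cutting the activity factor from 0.20 to 0.05 on this one node saves 75% of its switching power, which is why activity-reducing RTL optimization is the main lever once leakage is under control.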

Samsung is my favorite smartphone company, and I love the long battery life and large 5.7″ display on my Galaxy Note 4. You’ll want to hear how Samsung engineers used a power optimization flow that included formal verification with SLEC Pro, plus lint, CDC and autocheck compliance.

Register online for: Samsung: RTL Design Flow with Dynamic Power Optimization for Mobile SoCs


Qualcomm has been using both high-level synthesis and high-level verification (HLV) on their image processing IP. Engineers are now using a standardized HLS/HLV design and verification flow. Chips used for smartphone applications have been successfully designed with this new methodology.

Register online for: Qualcomm – Designing ASIC IP at Higher Level of Abstraction


NVIDIA is a well-known leader in all things related to graphics chips. They first evaluated, then adopted HLS and HLV for their Tegra mobile processors. Come and find out how they moved from a traditional RTL flow to an HLS/HLV flow to accelerate both the design and verification processes.

Register online for: NVIDIA – High Level Synthesis

There are also a couple of tutorials that you can sign up for at DAC that can answer your detailed questions about what the learning curve is like when using C and SystemC as design languages:

Related – Shorten the Learning Curve for High Level Synthesis

The second tutorial, on building the iDCT (Inverse Discrete Cosine Transform), will walk you through algorithm coding practices, optimization steps, and how to achieve the best QoR (Quality of Results). You should attend this tutorial if you’ve never used an HLS approach before, and be sure to ask lots of questions to get clarification.

Summary
RTL coding had its place in history, and now is the time to consider moving up to HLS and HLV to accelerate both design and verification of your next SoC. DAC is an incredible place to learn about these technologies, and find out how they would fit into your flows.

Related – HLS, Major Improvement through Generations