
ARM TechCon 2018 is Upon Us!
by Daniel Nenni on 10-12-2018 at 12:00 pm

ARM TechCon is without a doubt one of the most influential conferences in the semiconductor ecosystem. This year ARM TechCon has moved from the Santa Clara Convention Center to the much larger convention center in San Jose. Last year the conference seemed to be bursting at the seams, so this move makes complete sense. A little less convenient, but hopefully there will be more room for people and exhibitors to network, absolutely.

The highlight of this year’s conference is me signing Prototypical books in the S2C booth #933. After spending the last three years researching the emulation and FPGA prototyping market, I would be happy to share with you what I have learned and where I see the industry going from here. The books usually go fast, so try to stop by on Wednesday or early Thursday.

Not that you would need further justification to attend ARM TechCon, but if you do, here are the top points from the conference justification toolkit:

Comprehensive Education
For three days, you’ll choose from 60+ hours and seven tracks of embedded systems learning tailored for developers, engineers and executives.

Hands-On Training
It’s one thing to study from afar, but quite another to put your hands to work. Take advantage of practical training opportunities over two full event days.

Exclusive Networking
With more than 4,000 attendees and 100 suppliers, the potential for finding solutions to your challenges and making lasting relationships is huge.

Valuable Resources
You’ll get exclusive access to speaker presentation decks, excellent tools for your post-conference presentation.

Industry Expertise
Arm TechCon is developed entirely by engineers for engineers thanks to the Technical Program Committee, an elite group of practicing engineers.

You can see the full agenda here with keynotes and such. I will definitely be at the opening keynotes. Simon Segars’ keynote on the Fifth Wave of Computing is a must-see, and I also want to see the SoftBank COO’s keynote on Collaborating to Deliver a Trillion Device World.

Other than that, I will be walking the floor and signing books at booth #933 (S2C Inc.) with my good friend Steve Walters. Steve is also an emulation and FPGA prototyping professional. He spent 10 years at Quickturn before it was acquired by Cadence. Steve and I worked together at Virage Logic, so between us we know it all. Stop on by and have a chat with us.

One of the hot topics again this year is hardware security and if that is your interest here is a great place to start:

Tortuga Logic to Demo Latest Hardware Security Solutions at ARM TechCon 2018
Tortuga Logic, a cybersecurity company that identifies vulnerabilities in chip designs, invites press attendees to visit ARM TechCon 2018, booth #1032, for a software demo on Oct. 17-18.

Tortuga Logic is solving the problem of rampant chip security flaws, such as Meltdown and Spectre, with its cutting-edge system-level security solutions.

Who: Tortuga Logic is a cybersecurity company located in San Jose, Calif. Jonathan Valamehr (COO), Andrew Dauman (VP of Engineering) and Juan Chapa (VP of Sales) will be at booth #1032 to showcase solutions that identify hardware security vulnerabilities in chip designs.

Why: Tortuga Logic will demonstrate the capabilities of its flagship product Unison, a software platform that analyzes the security of a chip design alongside the normal process of ensuring the chip is functionally correct. The company will also announce the recent addition of Juan Chapa as its newly appointed VP of Sales.

When: Wednesday (10/17), 11:30 a.m. – 6:30 p.m.; Thursday (10/18), 11:30 a.m. – 6:00 p.m.

About Tortuga Logic:
Tortuga Logic is a hardware security company offering a suite of security verification platforms to reduce the effort spent identifying security vulnerabilities in modern semiconductor designs. Founded by experts in hardware security, Tortuga Logic’s patented technology augments the industry standard verification tools to enable a secure development lifecycle (SDL) for modern semiconductor designs. For more information, please visit http://www.tortugalogic.com/


EDA Cost and Pricing
by Daniel Nenni on 10-12-2018 at 7:00 am

This is the nineteenth in the series of “20 Questions with Wally Rhines”

When I moved from the semiconductor industry to Mentor, I expected most of my technology and business experience to apply similarly in EDA software. To some extent, that was correct. But there was a fundamental difference that required a change in thinking. Product inventory, especially for semiconductors, must be minimized because it has both real and accounting value. We used to say that semiconductor finished goods are like fish; if you keep them too long, they begin to smell. Software inventory doesn’t even exist. When the order is placed, the actual copy of the software is quickly generated and shipped.


EDA customers are aware of this “software inventory” phenomenon. There is no deadline for purchases that takes into account the lead time for manufacturing, as there is when ordering semiconductor components; an EDA order placed near the end of a quarter can be filled within that quarter. Negotiations for large EDA software purchase commitments tend to drag on until near the end of the quarter when customers suspect they will get the best deal. To counter that, EDA companies provide incentives, or other approaches, to minimize the last minute pressure.

If negotiated terms are not satisfactory, why don’t the EDA companies just let the orders slide into the next quarter? Sometimes they do. But, unlike the semiconductor industry (or any other manufacturing industry), EDA software is between 90% and 100% gross margin. Profit is therefore asymmetric. In a semiconductor business, a dollar of cost reduction improves profit by one dollar while a dollar of incremental revenue typically increases profit by 35 to 55 cents, so cost reduction has a larger impact on profitability than revenue growth. In the EDA industry (or most software businesses), a dollar of cost reduction has nearly the same profit impact as a dollar of incremental revenue. The conclusion: working on incremental revenue growth is just as productive as, and a lot more pleasant than, working on cost reduction. In addition, a 5% miss in revenue produces about a 25% miss in profit (if the company’s operating profit is normally 20%), so it’s very disturbing to shareholders when an EDA company misses its revenue forecast because the accompanying profit miss will be large.
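
To make that sensitivity concrete, here is a minimal back-of-the-envelope sketch in Python using the same illustrative numbers (20% operating profit, 5% revenue miss) and assuming costs stay essentially fixed when revenue slips:

```python
# Back-of-the-envelope sketch (illustrative numbers) of why a revenue miss
# hits a high-gross-margin software company's profit so hard.
revenue = 100.0          # planned revenue, arbitrary units
operating_margin = 0.20  # "normal" operating profit of 20%
costs = revenue * (1 - operating_margin)   # costs are largely fixed in software

planned_profit = revenue - costs           # 20.0
missed_revenue = revenue * 0.95            # a 5% revenue miss
missed_profit = missed_revenue - costs     # 15.0, since costs don't move

profit_miss = (planned_profit - missed_profit) / planned_profit
print(f"Profit miss: {profit_miss:.0%}")   # ~25%
```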

During the last decade or so, another profit leverage phenomenon has been important. With interest rates at very low levels, acquisition of another profitable company using cash is very accretive to earnings, i.e. cash that is sitting on the balance sheet collecting very little interest becomes a profit-generating asset that increases overall earnings for the acquiring company. A variety of industry analysts and investors have even taken the position that a company that doesn’t fully utilize its borrowing power is under-utilizing a corporate asset. That ignores the risk element associated with borrowing, but it has encouraged lots of acquisitions, especially in high technology. Although there has been some consolidation in the semiconductor industry, the primary change has been a higher degree of specialization. Companies like TI that once produced nearly every type of semiconductor and very little profit now produce primarily analog and power devices and consistently generate among the highest profits in the semiconductor industry. Similarly, NXP moved from a broad mix of products to specialization in automotive and security components.

For EDA, there seems to be something stable about oligopolies. In the 1970s, Computervision, Calma and Applicon had the largest combined share of the computer-aided design industry. In the 1980s, it was Daisy, Mentor and Valid. In the last three decades, it’s been Mentor, Cadence and Synopsys. The “Big Three” of EDA in recent years have had a combined market share between 70% and 85%, and most of that time it was 75% plus or minus five percent. One reason that may have led to this phenomenon is that EDA tools are very sticky; there is typically a de facto industry standard for each specialization, like logic synthesis, physical verification, design for test, etc. It’s very hard to spend enough R&D and marketing money to displace the number one provider of each of these types of software, so changes occur mostly when there are technical discontinuities that absolutely force adoption of new methodologies and design tools. I suspect that most EDA companies make most of their profit from the software products where they are number one in a design segment (GSEDA analyzed nearly seventy such segments for the industry that are over $1 million per year in revenue).

Finally, I find it interesting to look at the EDA cost from the perspective of the companies purchasing the software. Frequently, I hear the complaint that EDA companies never lower the price of their software while semiconductor companies have to reduce the cost per transistor of their chips by more than 30% per year. To analyze this complaint, we used published data to look at the “learning curves” for semiconductors and EDA software (see Figure One).

A semiconductor learning curve is a plot of the log of cost (or price) per transistor on the vertical axis and the log of the cumulative number of transistors ever produced on the horizontal axis. For free markets like semiconductors, the result is a straight line, presumably forever. Semiconductor industry analysts publish data on the number of transistors produced each year as well as the revenue of the industry. The Electronic System Design Alliance publishes the revenue of the EDA industry. When you plot the semiconductor industry revenue per transistor and the EDA industry revenue per transistor on a learning curve, as in Figure One, the straight lines are exactly parallel. That means that the EDA software cost per transistor is decreasing at the same rate as the semiconductor revenue per transistor. Is that a surprise? It shouldn’t be. Just as the semiconductor industry has spent 14% of its revenue on R&D for more than the last thirty years, it has spent 2% of its revenue on EDA software for nearly twenty-five years. If EDA companies failed to reduce their price per transistor at the same rate that the semiconductor industry must reduce its cost per transistor, then EDA would become a larger percentage of revenue for the semiconductor companies, exceeding the 2% average and forcing reduction of other semiconductor input costs.
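
As a rough illustration of why fixed spending ratios produce parallel learning curves, here is a small Python sketch with made-up numbers (not the published data behind Figure One): if EDA spending is held at roughly 2% of semiconductor revenue, the two log-log curves necessarily share the same slope.

```python
import numpy as np

# Illustrative sketch only: hypothetical cumulative transistor counts and a
# hypothetical learning-curve exponent, not the published industry data.
years = np.arange(1995, 2019)
cum_transistors = 1e18 * 1.6 ** (years - 1995)               # assumed cumulative units
semi_rev_per_tr = 1e-4 * (cum_transistors / 1e18) ** -0.35   # assumed learning curve
eda_rev_per_tr = 0.02 * semi_rev_per_tr                      # EDA held at ~2% of semi revenue

# On log-log axes both curves have the same slope; only the intercept differs,
# which is exactly what "parallel lines" on the learning-curve plot means.
slope_semi = np.polyfit(np.log(cum_transistors), np.log(semi_rev_per_tr), 1)[0]
slope_eda = np.polyfit(np.log(cum_transistors), np.log(eda_rev_per_tr), 1)[0]
print(round(slope_semi, 3), round(slope_eda, 3))  # identical slopes -> parallel lines
```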

It turns out that the total semiconductor ecosystem behaves similarly, reducing costs to provide better products and a six order of magnitude decrease in the cost per transistor over the last thirty-five years.

The 20 Questions with Wally Rhines Series


How to Increase Energy Efficiency for IoT SoC?
by Eric Esteve on 10-11-2018 at 12:00 pm

If you have read the white paper recently released by Dolphin, “New Power Management IP Solution from Dolphin Integration can dramatically increase SoC Energy Efficiency”, you should already know about the theory. This is a good basis to go further and discover some real-life examples, like a Bluetooth Low Energy (BLE) chip in GF 22FDX. Because 22nm is an advanced node compared with 90nm or 65nm, for example, a circuit targeting 22nm benefits from lower dynamic power (at the same frequency), but its leakage power will dramatically increase.

In fact, leakage power increases exponentially with process shrink, as clearly shown in the second picture below. If you want your SoC to get the maximum benefit from this advanced node for an IoT application, you will have to seriously deal with the power consumption associated with leakage.

The webinar will address two aspects of power efficiency as described in this picture from STMicroelectronics (“© STMicroelectronics. Used with permission.”): silicon technology, by targeting FDSOI, and system power management.


Just a remark about the picture below, describing the evolution of power consumption (active and leakage) with respect to the technology node, from 90nm to 22nm. If you read too quickly, you may think that the dynamic power also increases with process shrink. This is true in watts per cm², but don’t forget the shrink! Your SoC area in 22nm is far smaller than in 65nm or 90nm, and the result of the equation Area × Power/cm² is clearly better when you integrate the same number of gates in 22nm as in 90nm. If you want to increase the battery lifetime of your IoT system, your main concern is to reduce the leakage power by using the proper techniques, described in this webinar.


Dolphin presents data for the typical current consumption of a 2.4 GHz BLE RF chip when active (Rx/Tx/processing/standby) and in sleep/deep-sleep mode, and calculates the duty cycle for a specific application: 99.966% of the time is spent asleep. This means that applying power management techniques to reduce the leakage power will be effective 99.966% of the time, so the designer should do it. This is true for this BLE RF IC example, as well as for IoT sensors, as they typically spend most of their time in sleep.
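
To see how dominant the sleep state is, here is a minimal sketch with hypothetical currents and timings (not Dolphin’s measured data) that reproduces a ~99.966% duty cycle and estimates average current and coin-cell lifetime:

```python
# Minimal sketch with assumed numbers: a BLE sensor that wakes briefly every 10 s.
t_active_s = 3.4e-3          # assumed time in Rx/Tx/processing per 10 s interval
t_sleep_s = 10.0 - t_active_s
i_active_ma = 6.0            # assumed average active current (mA)
i_sleep_ua = 1.5             # assumed deep-sleep current (uA), power-gated design

sleep_fraction = t_sleep_s / 10.0     # ~99.966% of the time is spent asleep
i_avg_ma = (i_active_ma * t_active_s + i_sleep_ua * 1e-3 * t_sleep_s) / 10.0

battery_mah = 220.0          # typical CR2032 coin cell capacity
lifetime_years = battery_mah / i_avg_ma / (24 * 365)
print(f"sleep fraction {sleep_fraction:.3%}, avg {i_avg_ma * 1000:.1f} uA, "
      f"~{lifetime_years:.1f} years on a coin cell")
```

With these assumed numbers the sleep-mode term contributes a large share of the average current, which is why cutting leakage in sleep pays off so directly in battery life.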

Improving SoC energy efficiency is clearly a must for IoT systems and any battery-powered devices. The first and obvious reason is to extend the system lifetime (without battery replacement), not by weeks, but really by years. In many cases, the system will be in a place where you just can’t easily replace the battery (think about surveillance cameras).

Other reasons push for maximum energy efficiency, like cost, heat dissipation or scaling. A smaller battery should allow packaging the system in a cheaper box. Reducing heat dissipation can be important when you want your product to fit a specific form factor. Or you can decide to keep the same battery life but integrate more features in your system…

In this webinar you will learn the various solutions for building an energy-efficient SoC. The designer should start at the architecture level and define a power-friendly architecture. This is the first action to take: SoC architecture optimization provides the highest impact, as 70% of power reduction comes from decisions taken at the architecture level, with the remaining 30% roughly equally split between synthesis, RTL design and place-and-route.

At the architecture level, you first define clock domains (to later apply clock gating), frequency domains and frequency stepping. Then you define multi-supply voltage, global power gating and later coarse-grain power gating. Power gating is a commonly used circuit technique to remove leakage by turning off the supply voltage of unused circuits. You can also apply I/O power gating, which can reduce I/O power consumption by an order of magnitude in advanced technologies where leakage power dominates.

As today’s SoCs require advanced power management capabilities like dynamic voltage and frequency scaling (DVFS), you will learn about these techniques and how to implement them to also minimize active power consumption.
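
As a quick illustration of the leverage DVFS offers, the sketch below uses the standard dynamic-power relation (switching power proportional to C·V²·f) with purely illustrative operating points, not figures from the webinar:

```python
# Rough sketch of why DVFS helps: dynamic power scales as C * V^2 * f, so lowering
# both voltage and frequency when full performance isn't needed compounds the savings.
def dynamic_power(c_eff_f, vdd, freq_hz):
    """Switching power = effective switched capacitance * Vdd^2 * frequency."""
    return c_eff_f * vdd ** 2 * freq_hz

nominal = dynamic_power(c_eff_f=1e-9, vdd=0.8, freq_hz=200e6)  # assumed full-speed point
scaled = dynamic_power(c_eff_f=1e-9, vdd=0.6, freq_hz=100e6)   # assumed lower DVFS point

print(f"{scaled / nominal:.0%} of nominal dynamic power")      # ~28% at half the frequency
```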

The emerging connected systems are most frequently battery-powered and the goal is to design a system able to survive with, for example, a coin cell battery, not for days or even months, but for years. If you dig, you realize that numerous IoT (and communicating) applications, such as BLE, Zigbee…, have an activity rate (duty cycle) such that the power consumption in sleep mode dominates the overall current drawn by the SoC. To reach this target, the designer needs to carefully consider the power network architecture, clock and frequency domains at design start and wisely implement power management IP in the SoC.

Dolphin will hold a one-hour live webinar, “How to increase battery lifetime of IoT applications when designing an energy-efficient SoC in advanced nodes?”, on October 16 (for Europe and Asia) at 10:00 AM CEST, or on October 25 (for the Americas) at 9:00 AM PDT. This webinar targets SoC designers who want to learn about the various power management techniques.

You have to register prior to attending, which you can do here.

By Eric Esteve from IPnest


One Less Reason to Delay that Venture
by Bernard Murphy on 10-11-2018 at 7:00 am

Many of us dream about the wonderful widget we could build that would revolutionize our homes, parking, health, gaming, factories or whatever domain gets our creative juices surging, but how many of us take it to the next step? Even when you’re ready to live on your savings, prototypes can be expensive and royalties add to the pain. Fortunately, ARM have extended their DesignStart program to FPGAs, in collaboration with Xilinx, eliminating cost and royalties for M0 and M3 cores. OK, I’m sure these are folded somehow into device pricing but that’s invisible to you, and your savings will last longer than they would if you had to pony up ASIC NREs and license fees or even a conventional FPGA plus license fees.

ARM have been running the DesignStart program since 2010 with a lot of success – over 3,000 downloads and 300 new licensees in the last year alone. Free stuff really does attract a following. The appeal on an FPGA platform is obvious to most of us. At around $70 for a Spartan 6 bread-boardable module, you wouldn’t have a hard time defending that this could easily be absorbed in the family budget. Knowing that you can now get a Cortex M1 (the FPGA-optimized version of the M0) or an M3 – both as soft IP – makes this even more appealing. The most expensive part of your initial effort may be the Keil MDK, though ARM do offer a 90-day eval on that software (maybe you can develop your software really quickly 🙁).

Of course a CPU plus your custom logic may not be enough to meet your needs. Maybe you need some serious multi-processing capability and perhaps wireless capability, so now you’re looking at a Zynq or even a Zynq UltraScale device. These definitely cost more than $70, but still a lot less than an ASIC NRE. Sorry, the DesignStart program doesn’t extend to A-class or R-class cores, but it does extend to the M1 and M3 cores on those platforms if you want heterogeneous processing. So you can prototype your ultra-low-power product with high performance and wireless connectivity when you need it.

The motivation behind this introduction seems fairly obvious. For ARM this is another way to further push ubiquity. Why even think about RISC-V if you can start with a well-known and widely used core, backed by a massive ecosystem, at essentially no added cost and no additional effort? This should further extend the ARM footprint and network effect in a much more elegant and appealing manner than an earlier and rather embarrassing counter to RISC-V. Meanwhile Xilinx gets to sell more devices with this attractive profile; what’s to lose?

ARM also offers very reasonably priced Arty development boards (based on Artix devices) for makers and hobbyists. These will get you up and running with drag and drop programming, debug support and lots of other goodies. So you really can get all the basics you need to build that first prototype at minimal cost (apart from the Keil MDK).

Of course if you want to get beyond an initial prototype, you are going to have to buy more FPGAs and maybe upgrade to those higher-end FPGAs (but you can carry across the software you developed to those bigger devices). So you may eventually still need a family loan, or Kickstarter funding or a second mortgage. But no-one ever said that getting rich was easy…


Detail-Route-Centric Physical Implementation for 7nm
by Alex Tan on 10-10-2018 at 12:00 pm

For many years TSMC has provided IC design implementation guidance from the process and manufacturing standpoints. The last time the TSMC Reference Flow incremented, it was version 12.0 back in 2011. Since then, the increased design, process and packaging related complexities of the advanced nodes have demanded more focused efforts, which have translated into an incremental set of directives such as DPT (Dual Patterning Technology), advanced packaging CoWoS with HBM2, reliability analysis, EUV, etc.

Advanced Nodes and Physical Design
Finer geometries in advanced nodes have introduced timing degradation from wire and via resistance. Reference Flow 12.0 recommends an enhanced routing methodology (such as via count minimization, routing layer segregation and wire widening) to mitigate the impact of wire and via resistance. Concurrently, there seems to be an increase in EDA efforts to both tighten and secure the timing optimization attained early (during synthesis/placement) against that of the post-route stage. The heuristic nature of, and boundaries created by, feed-forward point tools have partly contributed to the loss of predictability (figure 1a) and introduced a design capability gap, according to Prof. Andrew Kahng from UCSD, as seen in figure 1b.

While many block-level place and route tools have made a shift-left move in order to account for numerous physical effects during placement, ranging from mainstream congestion awareness to SI, IR and DRC awareness, the complexity of advanced-node DRC rules is making the effort of producing a clean and optimal route more painful.

Avatar’s original products
Aprisa and Apogee are two products from Avatar Integrated Systems, a leading provider of physical design implementation solutions. Aprisa is a complete place-and-route (P&R) engine, including placement, clock tree synthesis, optimization, global routing and detailed routing. Its core technology revolves around its hierarchical database and common “analysis engines,” such as RC extraction, a DRC engine, and a fast timer to solve complex timing issues associated with OCV, SI and MCMM analysis. Aprisa uses multi-threading and distributed processing technology to further speed up the process. The other product, Apogee, is a full-featured, top-level physical implementation tool that includes prototyping, floorplanning, and chip assembly, integrated with the unified hierarchical database. Its unique in-hierarchy-optimization (iHO) technology is intended to close top-level timing during chip assembly through simultaneous optimization of the design top and block levels.

Why a refresh is needed

The performance impact of both wire and via resistance is more pronounced in sub-16nm process nodes. This can be seen in much narrower and longer routes, due to wire width shrinking faster than standard cell dimensions. Complex design rules are not helping redundant via insertion either. As a result, the transition waveform shape is affected for both short and medium length routes. Furthermore, increased cross-coupling capacitance and net resistance simultaneously induce timing impact through larger crosstalk effects. In the end, wire delay takes an increasing percentage of cycle time. By the 7nm process node, for nets of significant length, wire delay is more than half of total stage delay and critical paths are much harder to close (as shown in figures 2a, 2b).

The conventional placement-centric place and route architectures with separate sequential flows are no longer adequate to address these interconnect-related effects, as they cause significant pre-route versus post-route timing correlation issues, excessive design iterations, and suboptimal QoR.

Aprisa re-engineered for 7nm and beyond
During the TSMC 2018 Open Innovation Platform forum in Santa Clara, Avatar announced the availability of a new architecture for its Aprisa and Apogee solutions. With its breakthrough detailed-route-centric architecture addressing advanced-node challenges, the new place-and-route release provides more than 2X faster design closure times with better QoR than conventional counterparts.

As one of Avatar’s customers, Mellanox provides end-to-end Ethernet and InfiniBand intelligent interconnect solutions for servers, storage, and hyper-converged infrastructure. Their SoC designs have both unique characteristics and challenges, involving a vast I/O interconnect fabric and utilizing advanced process nodes to reduce switching latency.

“Advanced place-and-route technology is important for our silicon design activities as we move to more advanced processes,” said Freddy Gabbay, vice president of chip design at Mellanox Technologies. “The detailed-route-centric technology introduced with the new release of Aprisa consistently delivered better quality-of-results and predictable DRC and more than two times faster design time.”

Another customer, eSilicon, is a leading provider of semiconductor design and manufacturing solutions. eSilicon guides customers through a silicon-proven design flow, from concept to volume production. Its solutions target optimal PPA for ASIC/system-on-chip (SoC) designs, custom IP and manufacturing.

“eSilicon has used Aprisa on several very large and complex FinFET chips across several process nodes, including 16nm and 14nm,” said Sid Allman, vice president, design engineering at eSilicon. “We have successfully used Aprisa at both the block and top level with very good results. We expect to apply the new release to our advanced 7nm work as well.”

Avatar re-architected Aprisa and Apogee using a three-pronged approach, as illustrated in figure 3:

Unified Data Model (UDM) is the single database architecture for placement, optimization, routing, and analysis. All Aprisa engines utilize the same data models, objects, and attributes in real time.

Common Service Engine (CSE) enables analysis engines and optimization engines to work in concert. Any implementation engine can make dynamic real-time calls to analysis engines at will. Optimizations are made with accurate data the first time. Extraction and timing data gets updated dynamically and seamlessly.

Route Service Engine (RSE) provides proper routing information on a per-net basis to any engine within the system that needs it. The RSE manages the route topology during all phases of optimization and reports to the calling optimization engine the net routing topology, such as metal layers used, RC parasitics and crosstalk delta delay.

Only by predicting and guiding the route topology early in the design can optimization be performed effectively and efficiently, as reflected in the comparison of results with and without the route-centric version in figure 4.

The new release also includes full 7nm support; IR-aware/hotspot avoidance; auto EM violation avoidance and fixing; native support of path-based analysis; design pessimism reduction and up to 20% power reduction.

“We are committed to developing new breakthrough technologies to address the most challenging designs in the industry,” said Dr. Ping-San Tzeng, Chief Technology Officer at Avatar Integrated Systems. “This breakthrough architecture to our flagship products provides leading design teams with much faster design closure while improving the quality of results at 16nm and below.”

To recap, it is imperative to have timing accuracy and predictability throughout placement and routing to ensure timing closure convergence. The detailed-route-centric architecture in the new Aprisa/Apogee, coupled with the unified data model and integrated optimization/analysis engines, provides consistent and up-to-date optimization data throughout the flow, which helps deliver improved quality-of-results, reduces iterations and speeds design convergence more than 2X faster than the competition.

Avatar will be highlighting Aprisa and Apogee’s new architecture at Arm TechCon, October 16 – 18, 2018, at the San Jose Convention Center in booth #827.


Crossfire Baseline Checks for Clean IP Part II
by Daniel Nenni on 10-10-2018 at 7:00 am

In our previous article bearing the same title, we discussed the recommended baseline checks covering cell and pin presence, back-end, and some front-end checks related to functional equivalency. In this article, we’ll cover the extensive list of characterization checks, which include timing arcs, NLDM, CCS, ECSM/EM, and NLPM.


Timing Arc Checks
The recommended timing arc checks should include checking equivalence of WHEN and SDF conditions in a given liberty file, condition consistencies across different timing corners, and Liberty vs. Verilog/VHDL arc consistencies. This is essential in order to ensure accurate digital simulations and timing analysis.

NLDM Characterization Checks
NLDM-related characterization QA should include consistency checks between delay and transition tables, ascending capacitance and transition index values, the correct number of indices, and range checks for delay, transition, setup/hold, and minimum period. Ensuring that cell rise and fall delay values don’t vary too much can be a valuable check for clocks, as well as for other ports that may require balanced delay.

It may also be prudent to ensure delays increase with increasing output capacitance, increasing input transition times, and decreasing supply voltage. At the same time, checking that both transition and capacitance values don’t fall outside the range of the defined maximum transition and capacitance is also a necessity. This will ensure that no extrapolation is needed when characterized data is used in a design flow environment. In terms of transition trip points, one must ensure that they are symmetrical and a given percentage outside of the delay trip points. Other accuracy checks should include checking for non-changing or zero delay or transition values in a given table.
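
As an illustration of how such a monotonicity check might look in practice, here is a hypothetical Python sketch that validates one delay table along both its load and transition axes (parsing a real Liberty file is omitted; the table and function names are made up for this example):

```python
# Hypothetical sketch of one NLDM sanity check: a cell-delay table should be
# monotonically non-decreasing with both input transition and output load.
from typing import List

def is_monotonic_delay_table(table: List[List[float]]) -> bool:
    """table[i][j] = delay at input-transition index i, output-load index j."""
    for row in table:                       # increasing output load -> delay must not drop
        if any(b < a for a, b in zip(row, row[1:])):
            return False
    for col in zip(*table):                 # increasing input transition -> delay must not drop
        if any(b < a for a, b in zip(col, col[1:])):
            return False
    return True

# Example: a well-behaved 3x3 rise-delay table (ns)
good = [[0.05, 0.08, 0.14],
        [0.06, 0.09, 0.15],
        [0.09, 0.12, 0.18]]
print(is_monotonic_delay_table(good))       # True
```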

When comparing two or more PVT corners, large delay deviations should be closely monitored, exact values should not repeat, pin properties and parameters should be consistent, and mode definitions should match. Capacitance and transition properties should be consistently defined for all pins. More importantly, ensuring the same tables and arcs are defined across all given corners will provide a more stable and error-free timing analysis down the line.

Constraint values and related timing information such as setup and hold tables should be defined in matching pairs. Each matching pair of setup and hold tables should have equal indices, and the sum of setup and hold values should be greater than zero. For clock-related pins, ensuring pulse width definitions is also necessary.

Additional consistency checks should flag cases where duplicated or illegally defined parameters are used and ensure user-defined parameters are correct. Temperature, voltage, and process corner parameters should be consistent with the library and file name. On top of this, units must be consistent and defined as expected. Pin-related checks should guarantee the presence of arcs, the use of required tables and the omission of obsolete ones. An important, yet often overlooked, check should ensure that related pin terminals are not defined as outputs.

For standard cell libraries, cell-to-cell trends with respect to increasing output drive should be closely monitored. They include area, cell footprint, pin attributes, arc consistency, delay, and power monotonicity. Also, ensuring consistency among the attributes pertaining to power switch cells and their pins will guarantee correct usage of specific cells.

On-chip variation related timing checks should include table presence, monotonicity, and guarantee that all files are paired correctly (when comparing NLDM to AOCV/POCV files).

CCS Characterization Checks

CCS power characterization can also benefit from many of the above checks, along with ensuring that all given templates follow the Liberty specification guidelines. The nominal voltage must match the operating condition voltage. This is essential in guaranteeing correct data for a given voltage corner. The dynamic current group must be present for all cells; the current must be positive for power pins and negative for ground pins. Additionally, the reference time must be greater than zero since it’s related to physical circuit behavior. The same checks also apply to leakage current. In the absence of gate leakage values, current conservation must hold within the same leakage current group. If all power and ground pins are specified with leakage currents, the sum of all currents should be zero. Finally, when dealing with intrinsic resistance, total capacitance, or intrinsic capacitance, values should not be negative or zero.
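
As an example of the current-conservation rule just described, here is a hypothetical sketch (the function and pin names are illustrative, not from any tool) that checks whether the leakage currents specified on a cell’s power and ground pins sum to zero within a small tolerance:

```python
# Hypothetical sketch of the leakage current-conservation check: with no gate leakage
# modeled, currents on all power and ground pins of a cell should sum to zero.
def leakage_currents_conserved(pin_currents_ua: dict, tol_ua: float = 1e-3) -> bool:
    """pin_currents_ua: e.g. {'VDD': +0.42, 'VSS': -0.42} in microamps."""
    return abs(sum(pin_currents_ua.values())) <= tol_ua

print(leakage_currents_conserved({"VDD": 0.42, "VSS": -0.42}))   # True
print(leakage_currents_conserved({"VDD": 0.42, "VSS": -0.35}))   # False -> flag for review
```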

ECSM/EM Characterization Checks
Effective Current Source Model (ECSM) and Electro-Migration (EM) related checks are in line with those specified for CCS. Besides ensuring that all tables are consistent across all corners, current values must also be checked to ensure monotonicity across given capacitive loads. Checking for the presence of a given combination of average, peak, and RMS current types may be a design-specific requirement that would need to be qualified as well.

NLPM Characterization Checks
Last, but not least, power related characterization checks should include the standard and expected trends related to capacitance, transition times, and voltage. Power is expected to increase when load capacitance or transition times increase. At the same time, it is expected to decrease when supply voltages decrease. In terms of leakage power, one might want to ensure that values fall within an expected range and check whether pins are correctly defined for a given condition, whether they are required or missing.

Conclusion
Although we have already mentioned this in the previous article, it is important to repeat: IP qualification is an essential part of any IC design flow. A correct-by-construction approach is needed since fixing a few bugs close to tapeout is a recipe for disaster. Given that, IP designers need a dedicated QA partner who keeps the QA requirements of the latest process nodes up-to-date. In-house QA expertise becomes more productive when integrated with Fractal’s Crossfire validation tool; all framework, parsing, reporting, and performance optimization is handled by the software. On top of that, with a given list of recommended baseline checks, we ensure that all customers use the same minimum standard of IP validation for all designs.

Also Read: Crossfire Baseline Checks for Clean IP


Should Companies be Allowed to Hack Back after a Cyberattack?
by Matthew Rosenquist on 10-09-2018 at 12:00 pm

Potential for Hack-Back Legislation. Government officials and experts are weighing in on the concept of ‘hacking back’, the practice of potentially allowing U.S. companies to track down cyber attackers and retaliate.

The former head of the CIA and NSA, General Michael Hayden, outlined his thoughts to the Fifth Domain on the hack-back issue currently being debated by Congress. He is cautious but has expressed an openness to allowing some level of retaliation by private organizations.

General Hayden is very sharp and brings unprecedented national intelligence experience to the table, but I must disagree with his position on the risks of enabling businesses to ‘hack back’.

I have had the pleasure of an in-depth 1:1 discussion with him regarding the long-term nation-state threats to the digital domain and have always been impressed with his insights. However, this is a different beast altogether.

Allowing U.S. companies latitude to hack back against cyber attackers is very dangerous. I believe he is underestimating the unpredictable nature of business management when they find themselves under attack. Unlike U.S. government agencies, which firmly align themselves to explicit guidance from the Executive branch, the guard-rails for businesses are highly variable and can be erratic. Decisions can be made quickly, driven by heated emotion.

The average American business does not understand the principles of active defense or proportional damage, nor does it have the insight to establish and operate within specific rules of engagement. They certainly don’t have the capacity to determine proper attribution, gather necessary adversarial intelligence, or even understand the potential collateral damage of the weapons they may use.

Instead, we can expect rash and likely volatile responses that lash out at perceived attackers. Unfortunately, cyber adversaries will quickly seize on this behavior and make their attacks appear as if they are coming from someone else. It will become a new sport for miscreants, anarchists, social radicals, and nation states to manipulate their targets into hacking-back innocent parties. As the meme goes, “On the Internet, nobody knows you’re a dog”.

Hack Back Consequences
What happens when threat actors impersonate hospitals, critical infrastructure, or other sensitive organizations when they attack? The hack-back response may cause unthinkable and unnecessary damage.

Congress is also weighing the idea. Senator Sheldon Whitehouse recently indicated he is considering a proposal to allow companies to “hack back” at digital attackers.

Weaponizing Businesses
I think the whole “hack back” movement is entirely misguided.

Many compare it to ‘stand your ground’ situations as they try to convince others to join in public support. But such verbal imagery is just not applicable. A better analogy: if someone breaks into your house, you would have the right to break into their home, or the home of whomever you think did it (because you really won’t know). Most would agree it is not a good idea when framed that way.

Now consider whom you will be empowering to make such decisions. Businesses that were not able or responsible enough to manage the defense of their environment in the first place will be given authority to attack back. Yet it is unlikely they will truly understand where the actual attack originated. They will be acting out of rage and fear, with weapons whose potential collateral and cascading damage they cannot gauge.

Every time I have heard an executive wanting to be able to ‘hack back’, it was someone who was not savvy in the nuances of cybersecurity and lacked the understanding of how incredibly easy it is to make an innocent 3rd party look like they are the ones conducting an attack. When I brought up the fact that it is easy to make it appear like someone else was behind the strike, such as a competitor, government agency, or hospital, the tone radically changed. Attribution for cyberattacks can take experts months or even years. Businesses have neither the expertise nor the patience to wait when they want to enact revenge.

Simple Misdirection
If allowed, hacking back will become a new sport for miscreants, anarchists, social radicals, and nation states to manipulate their adversaries into making such blunders, or into being hacked back by others who were fooled into thinking they were the source.

Allowing companies to Hack Back will not deter cyberattacks, rather it will become the new weapon for threats to wield against their victims.

Interested in more insights, rants, industry news and experiences? Follow me on Medium, Steemit and LinkedIn for insights and what is going on in cybersecurity.


Top 10 Highlights from the TSMC Open Innovation Platform Ecosystem Forum
by Tom Dillinger on 10-09-2018 at 7:00 am

Each year, TSMC hosts two major events for customers – the Technology Symposium in the spring, and the Open Innovation Platform Ecosystem Forum in the fall. The Technology Symposium provides updates from TSMC on:
Continue reading “Top 10 Highlights from the TSMC Open Innovation Platform Ecosystem Forum”


Closing Coverage in HLS
by Alex Tan on 10-08-2018 at 12:00 pm

Coverage is a common metric with many manifestations. During the ‘90s, both fault and test coverage were mainstream DFT (Design For Testability) terms used to indicate the percentage of a design being observable or tested. Their pervasive use then spilled over into other design segments such as code coverage, formal coverage, STA timing analysis coverage, etc.

Motivation for having code coverage
While the term coverage may provide the management team with a sense of how much testing was done on a targeted code base or design, to the code developers or system designers it offers an additional measure of how stable their code is before it is fully realized or used in production. In the software development domain, code coverage measures the percentage of source code exercised during test by a collection of test suites; the underlying axiom here is that a high percentage indicates a lower chance of undetected bugs.

The primary code coverage criteria include function coverage (measuring how often a program function is called); statement coverage or line coverage (measuring the number of statements that are executed); branch coverage (measuring the number of branches executed, such as if-else constructs); and condition coverage or predicate coverage (measuring whether each Boolean sub-expression has been evaluated to both true and false).
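
As a small illustration (not from the article) of how these criteria differ in strength, the Python snippet below reaches 100% statement and branch coverage with two tests, yet still leaves condition coverage incomplete:

```python
# Illustrative example: statement and branch coverage can look complete while
# condition (predicate) coverage still hides an untested combination.
def needs_power_gating(is_idle: bool, leakage_dominated: bool) -> bool:
    gate = False
    if is_idle and leakage_dominated:
        gate = True
    return gate

# Two tests execute every statement and take both branches of the 'if'...
assert needs_power_gating(True, True) is True
assert needs_power_gating(False, False) is False
# ...yet condition coverage is still incomplete: 'leakage_dominated' was never evaluated
# as False (short-circuiting skipped it in the second test), so a bug such as writing
# 'or' instead of 'and' would slip past those two tests. The missing test:
assert needs_power_gating(True, False) is False
```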

Although it is hard to achieve full coverage, and the relationship between an attained code coverage level and the probability of a program being bug-free is non-linear, code coverage data offers many benefits. These include improving the completeness and robustness of the existing test suites (such as guiding generation of missing test cases, minimizing a test suite for runtime reduction, guiding fuzz testing, etc.) and enhancing the regression sensitivity of a test suite.


SoC design, HLS and Catapult

Emerging applications such as 5G, automotive and IoT have introduced more embedded sensors, specialized processors and communication blocks such as visual processing, AI neural networks and high-speed communication units. In order to speed up development and lower the inception cost of such products, more system designers are migrating their code abstraction from RTL to high-level synthesis (HLS) as shown in figure 1. The overall development time can be cut in half with high-level synthesis. While traditional software coverage tools can be run on the C++ source code, the generated results are inaccurate and may mislead designers into believing they have good coverage, as discussed in more detail in the next paragraph.

Mentor’s Catapult® HLS Platform provides a complete C++/SystemC verification solution that interfaces with Questa® for RTL verification as shown in figure 2. The platform consists of a design checker (Catapult Design Checker or CDesign Checker), a coverage tool (Catapult Code Coverage or CCOV), a high-level synthesis (Catapult HLS) and a formal tool (Sequential Logic Equivalence Check).

To gain better clarity on how these platform sub-components interact, we should track the code as it is being processed. Initially, the code (containing assert and cover statements) is applied as input to CDesign Checker for linting and formal analysis to uncover any coding or language related bugs. A subsequent CCOV run is done on the checked code to provide hardware-aware coverage analysis. Once the code is cleanly analyzed, HLS can be performed to produce power-optimized and verification-ready RTL. HLS propagates assertions and cover points to the output RTL and generates the infrastructure using SCVerify for RTL simulation in Questa, allowing the reuse of the original C++ testbench to verify the RTL. As a final step, SLEC-HLS formally verifies that the C++ exactly matches the generated RTL code.

Catapult vs traditional Code Coverage
Code coverage analysis in CCOV has been enhanced to be more hardware-context aware. The intent is to enable designers to achieve the same coverage level for both the high-level source code and the post-HLS RTL. The four main coverage techniques used to analyze the C++ code are statement, branch, focused expression, and toggle coverage. At first glance they look similar to those of a generic coverage tool, but in actuality they are not the same, as captured in the (left) table of figure 3. The right table shows differences between CCOV and traditional code coverage tools such as GCOV (Gnu Coverage).

Furthermore, mainstream software coverage tools are not hardware-context-aware in nature as highlighted below:

Using Catapult Code Coverage
A complete HLS-to-RTL coverage flow starts with stimulus preparation for the HLS model in either C++ or SystemC. It is followed by running CCOV to assess whether targeted coverage is met; otherwise more tests get added. Optionally, the designer can exclude areas of the design from coverage analysis. CCOV captures all the coverage metrics into the UCDB, and HLS is then run to produce power-optimized RTL.

The generated coverage results, captured in the Questa Unified Coverage DataBase (UCDB), can later be used within the context of the proven Questa verification management tools as shown in figure 5. This CCOV integration with the UCDB provides assurance of a complete and accurate RTL coverage analysis.

If the Questa simulator is used with code coverage turned on, all coverage metrics are added into the UCDB. For cases requiring further verification, such as an unreachable code segment, the designer can use Questa CoverCheck to help formally prove unreachability and apply the necessary directives to the simulator (or in the UCDB) for its exclusion. Previously generated SCVerify settings and makefiles can be used to simulate the RTL. The process is iterated with more tests until the RTL code coverage target is reached.

In conclusion, migrating to HLS is a proven cost-saver and significantly shortens RTL development time. Mentor’s Catapult hardware-aware code coverage is a key metric in the HLS flow and bridges coverage of higher abstraction code (C++/SystemC) with RTL, enabling fast convergence to verification sign-off.

For more details on Catapult HLS please check HERE.


TSMC and Synopsys are in the Cloud!
by Daniel Nenni on 10-08-2018 at 7:00 am

EDA has been flirting with the cloud unsuccessfully for many years now, and it really comes down to a familiar question: who can afford to spend billions of dollars on data center security? It is similar to the question that started the fabless transformation: who can afford to spend billions of dollars on semiconductor manufacturing technology?

TSMC has partnered with cloud vendors Microsoft and Amazon to bring EDA into the 21st century. I have said it before, if anybody could do it TSMC could, which makes TSMC all the more sticky as a pure-play foundry. What other foundry has the ecosystem and trust of the semiconductor industry to do this?

The one issue that is still in progress is the software business model. From what I am told, EDA software licensing has not yet changed to a pay-per-use cloud model. It really is uncharted territory, so let’s look at how we got EDA licensing to where it is today.

We started with perpetual licenses that were locked to a specific machine (not good for EDA). Next was WAN licensing that would let a perpetual license float around using a license server (good), followed by the flexible access model (FAM), which was a three-year all-you-can-eat approach offered by a specific vendor (horribly not good). The software subscription licensing that we use today came next, where you lease a software license for three years (very good). One company added a remix clause that allowed customers to change the license counts from one product to another (not good). The EDA company that I previously worked for added weekly tokens that can be used for peak simulation/verification times (very good). The token model worked quite well; it added much more total revenue than previously thought and gave chip designers more time simulating and verifying. I feel that pay-per-use cloud pricing would have a similar result: additional revenue above and beyond annual EDA budgets and better chips, absolutely.

The other thing that I want to point out is how important your relationship with the foundry is. I have made a career of it, helping emerging EDA and IP companies work with the foundries, creating revenue streams inside the foundry and outside with the top foundry customers. It is interesting to note that Cadence and Synopsys are the two EDA partners TSMC chose to start with. I’m sure the others will follow, but take note: Synopsys, the number one EDA and IP company, does not offer their own cloud; they are all-in with TSMC.

One of the keynotes at the TSMC OIP conference last week was from Kushagra Vaid, GM and Distinguished Engineer at Microsoft Azure (cloud). Before joining Microsoft in 2007 he spent 11+ years designing microprocessors at Intel. It is always nice to talk semiconductor design with someone who actually designed semiconductors. I spoke with Kushagra and Suk Lee after lunch and am convinced that, after numerous failed attempts, EDA is finally in the cloud and will stay there, in my opinion.

“Microsoft Azure is pleased to be a TSMC premier partner in the OIP Cloud Alliance, and we’re honored to receive a 2018 partner of the year award from TSMC for our joint development of the VDE cloud solution,” said Kushagra Vaid, GM and Distinguished Engineer, Azure Hardware Infrastructure, Microsoft Corp. “Our collaboration with TSMC will help usher in modern silicon development that leverages the capabilities of the Azure cloud platform.”

“Synopsys has been a TSMC OIP Alliance member for EDA flows and IP for 11 years, and we have expanded our partnership with TSMC to enable IC design in the cloud,” said Deirdre Hanford, co-general manager, Synopsys Design Group. “We have collaborated with Amazon Web Services and Microsoft Azure to provide a secure and streamlined flow for TSMC VDE. The Synopsys Cloud Solution has passed the rigorous TSMC security and performance audits and is ready for our mutual customers to design in the cloud with TSMC collateral using Synopsys tools and IP.”

Synopsys Announces Availability of TSMC-certified IC Design Environment in the Cloud

TSMC Recognizes Synopsys with Four Partner Awards at the Open Innovation Platform Forum

Synopsys Design Platform Enabled for TSMC’s Multi-die 3D-IC Advanced Packaging Technologies

Synopsys and TSMC Collaborate to Develop Portfolio of DesignWare IP for TSMC N7+ FinFET Process

Synopsys Digital and Custom Design Platforms Certified on TSMC 5-nm EUV-based Process Technology

Synopsys Delivers Automotive-Grade IP in TSMC 7-nm Process for ADAS Designs