Assertion-based Formal Verification
by Paul McLellan on 08-05-2011 at 5:34 pm

Formal verification has grown in importance as designs have grown: it is theoretically impossible to get complete coverage from simulation, and practically impossible to simulate enough to even get close.

There are a number of solvers for what is called satisfiability (SAT), but these work in a rather rarefied theoretical environment far removed from the way designers work. So a modeling layer is needed to connect properties in the designer's world to the types of equations that the solvers can prove. Some properties require additional logic to be added to the design, for example to convert a temporal property into one that a SAT engine can prove.

The modeling layer takes in the design description, the property/properties to be verified, the initial state of the design and any constraints. It then transforms these into the formal equations required by the SAT solver. The solver attempts to find a “witness” for each property. A witness is a sequence of input vectors that make the property true while satisfying all the constraints.

The SAT solver produces one of three outcomes:

  • Pass: a witness was found
  • Fail: the solver can prove that no witness can exist
  • Undecided: it could neither find a witness nor prove that one is impossible
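
To make the idea of a witness concrete, here is a minimal, purely illustrative sketch in Python. It does a bounded brute-force search over input sequences for a toy two-bit counter; the design, property, and constraint are all hypothetical, and a real SAT-based engine works very differently (it reasons symbolically rather than enumerating vectors).

```python
from itertools import product

# Toy "design": a 2-bit counter that resets to 0 when rst=1 and
# increments (mod 4) when en=1.
def step(state, inp):
    rst, en = inp
    if rst:
        return 0
    return (state + 1) % 4 if en else state

# Property we want a witness for: the counter reaches the value 3.
def prop_holds(state):
    return state == 3

# Environment constraint: reset and enable are never asserted together.
def constraint(inp):
    rst, en = inp
    return not (rst and en)

def find_witness(init_state=0, max_depth=5):
    """Bounded brute-force search: return an input sequence (a witness) that
    makes the property true while satisfying the constraint on every cycle,
    or None if no such sequence exists within max_depth cycles."""
    all_inputs = list(product([0, 1], repeat=2))      # (rst, en) pairs
    for depth in range(1, max_depth + 1):
        for seq in product(all_inputs, repeat=depth):
            if not all(constraint(i) for i in seq):
                continue                              # constraint violated: skip
            state = init_state
            for i in seq:
                state = step(state, i)
            if prop_holds(state):
                return seq                            # "Pass": witness found
    return None                                       # bound exhausted

print("Witness (rst, en) sequence:", find_witness())
```

In this toy bounded search, finding a sequence corresponds to the Pass outcome, while exhausting the bound without finding one is only Undecided; a real engine can additionally prove that no witness exists at any depth, which is what a Fail result requires.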

As an aside, formal verification products are quite interesting to sell. Typically, to evaluate them, the customer will have an application engineer run an old design through the tool, one that is already in production. It is interesting when the design promptly fails and a sequence is found that causes the design to do something it shouldn't. Of course, you don't tell the customer all the problems; they need to buy the tool to find that out.

Atrenta's white papers on formal verification, which go into a lot more detail, are available here.


    Chip-Package-System Webinar
    by Paul McLellan on 08-05-2011 at 5:14 pm

    The webinar on CPS (chip-package-system) is on Tuesday 9th August at 11am Pacific time. It will be conducted by Christopher Ortiz, Principal Application Engineer at Apache Design Solutions. Dr. Ortiz has been with Apache since 2007, supporting the Sentinel product line. Prior to Apache he worked at Agere / LSI, where he investigated on-chip signal and power integrity challenges for advanced SoC designs. He received his Ph.D. in physics from the University of Notre Dame.

    Apache's Sentinel™ is a complete Chip-Package-System co-design/co-analysis solution addressing system-level power integrity, SSO, thermal, and EMI challenges. It combines the chip's core switching power delivery network, I/O sub-system, and IC package/PCB modeling and analysis in a single environment for accurate CPS convergence, from early-stage prototyping to sign-off.

    To register for the webinar go here.

    More details on the whole series of webinars here.


    IC Power Dissipation in…the New York Times!
    by Paul McLellan on 08-05-2011 at 4:37 pm

    Generally, if you want to read about power dissipation in SoCs and its potential impact on how much computing power we can cram onto a given piece of silicon, EE Times is a good place to look. But last weekend there was a full-length article in, of all places, a different Times, the New York Times, entitled Progress Hits Snag: Tiny Chips Use Outsize Power.

    The article riffs off a paper at ISCA (link is PDF) titled Dark Silicon and the End of Multicore Scaling. It basically looks at what percentage of transistors will need to be powered down at any one time. As early as next year, these advanced chips will need 21 percent of their transistors to go dark at any one time, according to the researchers who wrote the paper.

    This is a big challenge, one that I mused about rather less scientifically just a few weeks ago here on SemiWiki. The challenge with multicore is to work out which parts can be powered down. You can’t power down a whole core (at least not all the time, or why bother to have it on the chip) which means a more subtle approach is required.

    The article in the New York Times ends on an optimistic note with a quote from Dave Patterson of Berkeley:
    “It’s one of those ‘If we don’t innovate, we’re all going to die’ papers,” Dr. Patterson said in an e-mail. “I’m pretty sure it means we need to innovate, since we don’t want to die!”

    This reminds me of a quote from Maurice Wilkes, one of the pioneers of digital computing, who was the head of the University of Cambridge Computer Laboratory when I was an undergraduate. When someone argued that computers couldn’t get much faster due to speed of light considerations (this was in the days of room-sized computers) Professor Wilkes retorted that “I think it just means computers are going to get a lot smaller.” Which, of course, they did.


    Apple Strength Will Compel ARM to Trim its Sails
    by Ed McKernan on 08-03-2011 at 7:00 pm

    ARM's move into the broad Tablet and PC space is based on lining up as many partners as possible to attack Intel from multiple angles. It's a strategy not so different from the one Intel employed in the early PC days. However, the strategy is unraveling as Apple and Samsung have reached market share domination without ARM's merchant partners. The end game is still playing out as partnerships and alliances continue to form. The long-term impact on ARM will be slowing revenue and earnings growth if things play out along the lines that I expect.

    But first, as always, an analogy that I think is relevant. 100 years ago, Henry Ford made two critical decisions that allowed him to dominate the auto market in its first 15 years. The first was the assembly line. The second, which I think is more apropos here, was the decision to go vertical in the manufacturing supply chain, building everything in house.

    Back in the 1970s I took a factory tour of the Ford Mustang assembly line and the steel rolling mill that is part of the River Rouge complex – the largest in the world when it was completed. I was more interested in the Mustang tour than the steel tour. However, you got a good sense of what Ford could do and how he could sell cars for under $500, which in turn led to industry dominance until he neglected to refresh the Model T in the late 1920s.

    ARM has a history of working with many chip companies to extend the architecture into every crevice. By having multiple suppliers it is able to keep the upper hand in the pricing of its licenses and royalties. For the PC, tablet and phone markets it has recruited a who's who of semiconductor companies: Broadcom, Nvidia, Marvell, Qualcomm, TI, Samsung, ST-Ericsson, Freescale and others that I am sure I missed. Kind of like the 1927 Yankees' Murderers' Row lineup, assembled to beat Intel.

    The business model in this space, though, is not going to hold up, for one reason: the Tablet and Phone markets are going vertical. Apple and Samsung have their own internal developments and will shy away from the merchant market. They now control over 50% of the phone market, and Apple has 90% of the tablet market. Apple and Samsung are going vertical along the whole supply chain with DRAM, flash, panels, processors, etc. Samsung is truly vertical while Apple is virtually vertical (partially paying ahead for capacity in exchange for lower component prices). Apple will especially need to squeeze pennies from every supplier as it faces the likelihood of a subsidized Samsung business model.

    If it hasn't happened yet, Apple and Samsung will turn the tables on ARM and demand royalty re-adjustments, creating most-favored-nation status for the two. ARM's financial projections will have to come down and its sky-high P/E will moderate. Given the market consolidation, I expect many of the above merchant suppliers to throw in the towel by the time of the 22nm generation.

    Now here's the interesting part. I believe Qualcomm and Nvidia will survive, and both will fight to win the Amazon, Nokia (and MSFT) and HTC business – consider this the rest of the non-PC market. Qualcomm has an advantage with its communications technology, but I am not certain that Snapdragon is meaningful to its overall future. Nvidia will get closer to Nokia and MSFT because MSFT needs a hardware platform that combines processor and graphics to undercut Intel long term. Nvidia is rumored to be the lead on the Amazon tablet coming out this fall – which should have fairly good volume.

    After Apple has marginalized or commoditized ARM it will make two more strategic decisions. The first is to enter Intel's fab – my guess is at the 14nm node. It could be with ARM, but much more likely a multi-core configuration with one x86 core and the other ARM. The reason is that Apple is going to go to a dual-O/S configuration on its Tablet and PC platforms in order to improve its customer experience and build on the walled gardens protecting its greater ecosystem.

    In internet mode – call it consumer iCloud mode – it will run iOS for lower power consumption and longer battery life. In business iCloud mode (running office apps) it will switch to x86 and run Mac OS so that the user gets full performance and app compatibility. Intel's ultrabook initiative is trying to replicate this – but it won't feel as slick as Apple's does, and MSFT may try to block the implementation since Intel's light mode will be based on the Linux-based MeeGo O/S.

    The final piece of Apple's strategy is to arrange a nice marriage between Qualcomm and Intel for the benefit of just Apple. Apple will demand that Intel run Qualcomm silicon in its fabs to get maximum performance at the lowest power and cost. The Qualcomm silicon will be used for iPhones and Tablets in some die-stacking package. Getting CPU and communications silicon cost down is key to Apple's long-term battle with Samsung. ARM wins in the Tablet and iPhone wars by hanging on Apple's coattails, but it is less of a win than if the merchant players were all in the game.

    Intel wins in the Tablet and iPhone business through expanded foundry business for Apple that includes building ARM chips for the iPhone and ARM + x86 combo CPUs for Tablets and Mac Air notebooks. Add on to that the Qualcomm foundry business and it is a significant revenue upside. Qualcomm may have the option of increasing its business with Intel's foundry – for the purpose of selling chips into Samsung, HTC, Nokia etc… As a long-time participant in the mobile PC business, I think it is the shifting alliances that will be the most fun to watch over the next two years.


    Apple makes 2/3 of profits of entire mobile industry
    by Paul McLellan on 08-02-2011 at 5:41 pm

    This is an amazing picture. Apple now makes 2/3 of all the profit in the entire mobile handset industry. And that is the entire handset industry, not just smartphones, where it has also blown past Nokia to become number one (although there are more Android handsets than iOS handsets, those handsets are spread across multiple manufacturers and the manufacturers make a lot less profit per handset).

    In 2007 Nokia made over half the profit. They also made about 1/3 of all the phones manufactured, around one million phones per day. Now they aren’t even on the chart (you need to be profitable to have a fraction of the total profits). Samsung and HTC make another quarter of the profit, with RIM bringing up the rear.

    Another interesting statistic: so far this year Microsoft has made less than 1% of its revenues from mobile. So those people who point out that it makes more from patent licenses to Android manufacturers than from its own mobile products are right. Of course it is hoping for a big increase when Nokia finally ships WP7 phones, but my own opinion is that Nokia is doomed by a mixture of Chinese competition at the low end and Apple/Android at the smartphone end. But I could be wrong: carriers are very political and don't want to be held hostage to Apple or Google. Microsoft has weakened to the point that carriers may now want to embrace it as a "safe" partner that can be rolled over if necessary.




    Has IP moved to Subsystem? Will IP-SoC 2011 bring answers?
    by Eric Esteve on 08-02-2011 at 11:21 am

    I have shared with you the most interesting things I heard at IP-SoC 2010 in two blogs: Part I was about the IP market forecast (apparently my optimistic view was quite different from the rather pessimistic vision shared by SC analysts), and Part II, named "System Level Mantra", was strongly influenced by Cadence's clever presentation – but that was before Cadence decided to drop "EDA360", at least according to Dan Nenni in "CDNS EDA360 is dead".

    Today it's time to talk about the future, as the next session of IP-SoC will be held in December in Grenoble, in the French Alps. As usual since 1998, the conference will be Design IP-centric, but the interesting question is where the IP industry stands on the spectrum that runs from a single IP function to a complete system. Nobody would allege that we have reached the upper end of the spectrum and claim that you can source a complete system from an IP vendor. The death of EDA360 is a clear illustration of this status. Maybe that is because the SC industry is not ready to source a complete IP system (what would be the added value of the fabless companies if/when that occurs?), and most certainly because the IP vendors are far from being able to deliver one (it would require a strong understanding of a specific application and market segment, the associated technical know-how, and, even more difficult to meet, adequate funding to support up-front development while accepting the risk of missing the target…). This is why an intermediate step may be to offer IP subsystems. According to D&R, who organize IP-SoC, the IP market is already there: "Over the year IPs have become Subsystems or Platforms and thus as a natural applicative extension IP-SoC will definitively include a strong Embedded Systems track addressing a continuous technical spectrum from IP to SoC to Embedded System." So IP-SoC 2011 will no longer be IP-centric only, but IP-subsystem-centric!

    It will be interesting to hear the different definitions of what exactly an IP subsystem is. If I offer a PCI Express controller with an AMBA AXI application interface, may I call it a subsystem? I don't think so! But should I add another IP function (like, for example, Snowbush offering PCI Express plus SATA) to call it a subsystem? Or should I consider the application first, and pick – or design – the different functions needed to support this specific application? Then, how do I market the CPU, the memories and probably other IP that belong to my competitors? The answer is far from trivial, and this will make the next IP-SoC conference worth attending! You probably should not expect to come back home with a 100% definite answer (if anybody knows the solution, he should start a company a.s.a.p.), but you will have the chance to share the experience of people who have explored different tracks, and learn from them. If you are one of them, then you should definitely submit a paper and share your experience of how to design or market IP subsystems! See the "Important dates" below:

    • Deadline for submission of paper summary: September 18, 2011
    • Notification of acceptance: October 15, 2011
    • Final version of the manuscript: November 6, 2011
    • Working conference: December 7-8, 2011

    If you are not yet involved in IP subsystems but in IP design/marketing, don't worry, as the "Areas of Interest" list is pretty long:

    IP Best practice
    • Business models
    • IP Exchange, reuse practice and design for reuse
    • IP standards & reuse
    • Collaborative IP based design

    Design
    • DFM and process variability in IP design
    • IP / SoC physical implementation
    • IP design and IP packaging for Integration
    • IP and system configurability
    • IP platform and Network on Chip

    Quality and verification
    • IP / SoC verification and prototyping
    • IP / SoC quality assurance

    Architecture and System
    • IP / SOC transaction level modeling
    • Multi-processor platforms
    • HW/SW integration
    • System-level analysis
    • System-level virtual prototyping
    • NoC-based Architecture

    Embedded Software
    • Software requirements (timeliness, reactivity)
    • Computational Models
    • Compilation and code generation

    Real-Time and Fault Tolerant Systems
    • Real-time or Embedded Computing Platforms
    • Real-Time resource management and Scheduling
    • Real-time Operating system
    • Support for QoS
    • Real-time system modeling and analysis
    • Energy-aware real-time systems

    If you just want to attend, register here and send me a note – it will be a pleasure to meet you there!

    By Eric Esteve from IPnest


    PathFinder webinar: Full-chip ESD Integrity and Macro-level Dynamic ESD
    by Paul McLellan on 08-01-2011 at 10:00 am

    The PathFinder webinar will be at 11am Pacific time on Thursday 4th August. It will be conducted by Karthik Srinivasan, Senior Applications Engineer at Apache Design Solutions. Mr. Srinivasan has over four years of experience in the EDA industry, focusing on die, system, and cross-domain analysis. His professional interests include power and signal integrity, reliability and low-power design. He holds an MSEE from the State University of New York at Buffalo.

    PathFinder is the industry's first comprehensive layout-based electrostatic discharge (ESD) integrity solution, providing integrated modeling, extraction, and simulation capabilities to enable automated and exhaustive analysis of the entire IC, highlighting areas of weakness that can be susceptible to ESD-induced failure. PathFinder also delivers innovative transistor-level dynamic ESD capabilities for validation of I/Os, analog, and mixed-signal designs.

    Register for the webinar here.


    MCU Performance Customers: The Cavalry is Coming Over The Hill
    by Ed McKernan on 07-31-2011 at 7:30 pm

    The under-the-radar, sleepy microcontroller market is about to undergo a rapid transformation over the next several years, with new entrants and the rise of 32 bit cores that will redefine the parameters for success. This will revive growth and result in new winners and losers. But lots of questions remain.

    My first job out of college in 1984 was programming an 8 bit 8051 for a telephone handset. It took months to finish a programming task in assembly language that I thought I could do in a 16 bit microcontroller in a week or so. I begged my boss to allow us to switch. He declined – we couldn’t afford tacking on a couple extra bucks per telephone. Translation: I was underpaid. I swore then that the 8051 would surely be gone in a couple years. Missed that prediction!

    It's still here more than 25 years later. The 8051, along with the other 8 bit controllers, is a $5B market, and I am now convinced they will never make it to the Smithsonian Museum of History.

    What is new is that 32 bit controllers have been on a tear the last two years. They’re finally taking off. This year 32 Bit MCUs should do roughly the same revenue as 8 bit. The magic ASP number for market liftoff is around $1 per chip – unbelievable.

    The tragedy of the earthquake and tsunami that struck Japan highlighted not only how fragile life is but also how fragile the world economy is. Renesas, the company most severely hit by the earthquake, has revenue of $9B, 40% of which comes from sales to automotive customers. That 40% dimmed the lights in Toyota, Honda and Nissan factories around the world. Think about it – a $9B company, single-sourced, levered into a $1T customer base. Imagine what the leverage of the entire $15B MCU market is and you see where I am going.

    JIT (just in time) manufacturing was exposed and shown to be – at the extreme – very risky. Auto companies will demand 6-9 months instead of 30-60 days of inventory stored around the world. Second sourcing, the curse of the semiconductor industry in the 1970s and 1980s, will be asked for but declined. What, then, are the alternatives that the automakers and others will pursue?

    Renesas is the big dog at 30% of the MCU market and they face the supreme challenge of winnowing down the extended number of architectures that resulted from two large mergers. The first was Hitachi and Mitsubishi. More recently NEC. In their efforts to support all legacy products, they risk losing the future. And there is no talk yet of adopting ARM at 32 bits. They appear to feel safe with the current customers but the high end market is pushing for more performance.

    ARM is the new love interest of many microcontroller vendors at 32 bits. The argument is that both very low power and high performance markets can be targeted with different cores. Then there is the common programming platform, which customers will appreciate. It can be compelling and seems to be working: Atmel, ST Micro, TI, Infineon and others are headed down this path. Microchip has licensed MIPS to attack the 32 bit market, and with its leadership in the 8 bit market it seems to be leveraging its loyal customer base for future growth.

    The second trend, which to me could be more impactful, is that Xilinx and Altera have announced plans to enter the market in the next year with families of FPGAs that include hard A9 processor cores running at up to 1 GHz with their associated caches, hard memory controllers, CAN and Gbit Ethernet controllers. All this with a sea of LUTs and hundreds of GPIOs. Ahh – yes, but it's an FPGA and probably will cost more than an ARM and a Leg (excuse the pun).

    This is where I think it gets interesting. Xilinx and Altera are focused on 28nm process technology. Much of the 32 bit MCU world is at 130nm or 90nm. By being at least 3 nodes ahead and using hard blocks for the CPU and peripherals, there is a chance that these parts will be smaller in die size than current MCUs and therefore sell at or below price parity. One caveat to this – there is no integrated flash for code store. I suspect they both will include a stacked-die arrangement in their product families. Perhaps – though this is an outside chance – Altera and Xilinx will try to be pin-compatible with other ARM MCU vendors.

    Another aspect to watch is how analog fits into this strategy. Many MCU vendors have seen tight integration of the processor and analog on a single die as the winning formula. However, integrating analog below 90nm is difficult and doesn’t offer Moore’s Law savings. I presume the FPGA vendors are focusing on performance and will partner with leading Analog guys like Analog Devices, Linear Tech for the platform solution.

    The auto and industrial markets are the most likely first targets. Automakers are begging for more performance and the 1GHz solutions from Altera and Xilinx are likely to be a leap ahead of Renesas. Plus I would suspect Altera and Xilinx architected their offerings based on input from the smaller set of large auto and industrial customers instead of the thousands of total worldwide MCU customers. Remember 40% of Renesas sales is automotive.

    For the end customer – more solutions to choose from. The ingredients are: a standard ARM architecture + two vendors at similar pricing + full temperature range (auto, industrial, consumer). For Xilinx and Altera it is an interesting new market to pursue. At $5B in size, the 32 bit MCU TAM is larger than their current combined revenue.

    A week ago I listened to the Altera earnings conference call. What was intriguing was that John Daane, the CEO, mentioned that a Japanese customer had come in requesting a one-time order of their high-end Stratix 4 FPGAs to replace an ASIC it couldn't source due to the tsunami. The revenue Altera would receive in the coming quarter would be over $15M – significant enough to tell Wall St. I started thinking: the customer had to have redesigned its PCB to support the Stratix 4. But in the future, a customer in a crunch may not have to redesign – just place an order.


    Smart Fill Replaces Dummy Fill Approach in a DFM Flow
    by Daniel Payne on 07-30-2011 at 7:11 pm

    I met with Jeff Wilson, Product Marketing Manager at Mentor in the Calibre product group, to learn more about Smart Fill versus Dummy Fill in DFM flows. Jeff works in the Wilsonville, Oregon office and we first met at Silicon Compilers back in the 1990s.

    Dummy Fill

    This diagram shows an IC layout layer on the left as originally designed, then on the right we see the same layout with extra square polygons added in order to fill in the blank space. Source: AMD

    IC layouts use multiple layers like metal, poly, diffusion, via, etc. to interconnect transistors. The fab engineers know that if each layer is manufactured within a certain density range, the yield will be acceptable. Dummy fill as shown above has worked OK for many nodes; however, at 65nm and smaller nodes, digital designs require a new approach in order to keep yields high.

    The dummy fill helps make each layer more planar, and so there are DFM rules that need to be followed.
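
    To make the density idea concrete, here is a toy Python sketch. The window size, minimum density and fill dimensions are made-up illustration values, and this is not how Calibre (or any production fill tool) is implemented – it simply measures the coverage of one window and drops dummy squares into empty space until a target density is met.

```python
from dataclasses import dataclass

# Toy density-driven fill sketch; all rule values below are hypothetical.
@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

WINDOW = 50.0        # density window edge length (um), hypothetical
MIN_DENSITY = 0.30   # minimum metal density per window, hypothetical
FILL_SIZE = 2.0      # edge length of a dummy fill square (um), hypothetical

def overlaps(a, b):
    """True if two rectangles overlap (touching edges do not count)."""
    return not (a.x + a.w <= b.x or b.x + b.w <= a.x or
                a.y + a.h <= b.y or b.y + b.h <= a.y)

def window_density(shapes, wx, wy):
    """Fraction of the window at (wx, wy) covered by non-overlapping shapes."""
    covered = 0.0
    for s in shapes:
        x0, y0 = max(s.x, wx), max(s.y, wy)
        x1 = min(s.x + s.w, wx + WINDOW)
        y1 = min(s.y + s.h, wy + WINDOW)
        if x1 > x0 and y1 > y0:
            covered += (x1 - x0) * (y1 - y0)
    return covered / (WINDOW * WINDOW)

def add_fill(shapes, wx, wy):
    """Add fill squares on a coarse grid until the window meets MIN_DENSITY."""
    fill = []
    step = FILL_SIZE * 2                       # leave spacing between fill shapes
    y = wy
    while y + FILL_SIZE <= wy + WINDOW:
        x = wx
        while x + FILL_SIZE <= wx + WINDOW:
            if window_density(shapes + fill, wx, wy) >= MIN_DENSITY:
                return fill                    # target met, stop filling
            candidate = Rect(x, y, FILL_SIZE, FILL_SIZE)
            # only drop fill into genuinely empty space
            if all(not overlaps(candidate, s) for s in shapes + fill):
                fill.append(candidate)
            x += step
        y += step
    return fill

# Example: one wide wire covers ~10% of a 50x50 window; fill tops it up to 30%.
design = [Rect(0.0, 20.0, 50.0, 5.0)]
fill = add_fill(design, 0.0, 0.0)
print(len(fill), "fill shapes added, window density is now",
      round(window_density(design + fill, 0.0, 0.0), 2))
```

    A real fill tool works over the whole chip, applies per-layer spacing and density rules, and, as the discussion below notes, also accounts for CMP behavior and critical nets.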

    Q: How popular is Calibre with the dummy fill approach?
    A: Calibre serves about 80% of the dummy fill market now.

    Q: Is fill only used on metal layers?
    A: No, actually all layers can benefit from fill techniques.

    Smart Fill

    Q: Why do we need to change from dummy fill?
    A: The DFM rules for digital and analog designs have become more complex, and the dummy fill approach just isn't adequate to meet them. With dummy fill you are going to have too many violations that require manual edits, which takes up precious time on your project.

    The percentage of total thickness variation has increased at each node, making CMP variation a critical issue requiring analysis. Source: ITRS

    Q: What is the new approach with Smart Fill?
    A: It’s DFM analysis concurrent during the fill process, so that the layout is more correct by construction.

    Q: Do I need manual edits to my layout after running Smart Fill?
    A: Our goal is to have zero edits after Smart Fill.

    Q: At what node do I have to consider using Smart Fill?
    A: Our experience with foundries and IDMs is that digital designs at 65nm and below, and analog designs at 250nm and below, will directly benefit from Smart Fill.

    Q: What other issues are there to be DFM compliant with fill?
    A: The size of the IC layout database needs to be reasonable and the run times kept short.


    Dummy Fill on left, SmartFill on right. Source: AMD

    Q: When I use the Calibre Smart Fill, do I need to learn to write new rules?
    A: No, our approach has you write fewer rules.

    Q: What kind of run time improvements could I see with Smart Fill?
    A: One customer reported that dummy fill ran in 22 hours while Smart Fill ran in 40 minutes.

    Q: What is the Mentor product name for Smart Fill?
    A: We call it SmartFill and it’s part of Calibre Yield Enhancer.

    Q: What other areas does Yield Enhancer automate?
    A: Litho, CMP, ECD, Stress and RTA.

    Q: What about my critical timing nets?
    A: SmartFill can read in a list of your critical nets and then avoid interfering with their performance by using spacing.

    Source: Mentor Graphics
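
    Continuing the toy sketch from above (again purely illustrative, with a hypothetical keep-out distance rather than Mentor's actual rules), avoiding critical nets can be modeled as filtering out any fill candidate that falls inside a spacing halo around those nets:

```python
# Reuses Rect and overlaps() from the density sketch above.
KEEP_OUT = 1.0  # extra spacing kept around critical nets (um), hypothetical

def outside_halo(candidate, net_shape, keep_out=KEEP_OUT):
    """True if the fill candidate stays outside the net's keep-out halo."""
    halo = Rect(net_shape.x - keep_out, net_shape.y - keep_out,
                net_shape.w + 2 * keep_out, net_shape.h + 2 * keep_out)
    return not overlaps(candidate, halo)

def filter_fill(candidates, critical_nets):
    """Keep only fill shapes that do not encroach on any critical net."""
    return [c for c in candidates
            if all(outside_halo(c, net) for net in critical_nets)]
```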

    Q: Who would use a tool like SmartFill?
    A: Foundries, IDMs and Fabless design companies that want a technology advantage.

    Q: What layout databases does SmartFill support?
    A: Milkyway (SNPS), OA (Cadence), LEF/DEF (Cadence), and OASIS.

    Q: How do you keep run times low?
    A: Through cell-based fill (a cell contains more than a single shape), which helps keep the file size and run times reasonable.

    Summary
    To keep yield levels acceptable, there are new DFM rules that affect how fill is created. The old approach of dummy fill has given way to Smart Fill, which uses concurrent analysis during fill to ensure that DFM rules are not violated.


    Totem webinar: Analog/Mixed-Signal Power Noise and Reliability
    by Paul McLellan on 07-30-2011 at 5:26 pm

    The Totem webinar will be at 11am on Tuesday 2nd August. This session will be conducted by Karan Sahni, Senior Applications Engineer at Apache Design Solutions. Karan has been with Apache since 2008, supporting the RedHawk, Totem and Sentinel product lines. He received his MS in Electrical Engineering from Syracuse University in New York.

    Totem is a full-chip, layout-based power and noise platform for analog/mixed-signal designs. Totem addresses the challenges associated with global coupling of power/ground noise, substrate noise, and package/PCB capacitive and inductive noise for memory components such as Flash and DRAM, high-speed I/Os such as HDMI and DDR, and analog circuits such as power management ICs. Integrated with existing analog design environments, Totem provides cross-probing of analysis results with industry-standard circuit design tools. It also enables designers to create a protected model representing the accurate power profile of their IP for mixed-signal design verification. Totem can be used from early-stage prototyping, to guide the power network and package design, to accurate chip sign-off.

    Register for the webinar here.