CEO Interview: David Dutton of Silvaco
by Daniel Nenni on 01-30-2017 at 7:00 am

Silvaco has undergone one of the most impressive transformations in EDA, so it was a pleasure to interview the man behind it. David Dutton's 30+ year career took him from Intel to Maxim and then to Mattson Technology, where he led the company's turnaround and finished as President, CEO, and board member. David joined Silvaco as CEO in September of 2014, and the rest is history in the making.

This is a picture of Silvaco’s corporate offices around the world. Their Taiwan office (beige building) is between the Hotel Royal where I stay and TSMC Fab 12, across the street from Starbucks. So yes, I pass by it quite frequently.

Give us a brief introduction to Silvaco, including an overview of your current operations and product offerings.
Silvaco is a leading provider of EDA tools, used for process and device development and for analog, digital, mixed-signal, power IC, and memory design. The portfolio spans TCAD, front-end and back-end design, circuit simulation, power integrity signoff, extracted-netlist reduction, and variation analysis, along with IP cores. Overall, Silvaco delivers a full TCAD-to-signoff flow for vertical markets including display, power electronics, optical devices, radiation and soft-error reliability, analog and HSIO design, library and memory design, advanced CMOS process development, and IP development.

The company is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, Japan, and Asia.

What do you look forward to in 2017 for Silvaco?
We are coming off one of our strongest years ever, with about 30% year-over-year growth in bookings. Silvaco has many things to look forward to this year. In 2016 we saw great new product adoption with the Victory 3D, Variation Manager, and Jivaro RC reduction tools, along with SmartSpice and Expert being used by customers down to 7nm. We established an IP division, which brings Silvaco into the fastest-growing area of the circuit design market. We welcomed new teams from three acquisitions – Infiniscale, edXact and IPExtreme. The Silvaco family sure has grown in 2016. The semiconductor industry continues to change, and Silvaco is using these changes to grow aggressively for the future of the company and our customers. In 2017 we will continue to invest in improving our products and making acquisitions that advance our position.

Which markets do you feel offer the most and best opportunities for your products over the next few years and why?
The key markets for Silvaco's products over the next few years are display, power, automotive, and advanced CMOS. On the display side, Silvaco's product suite is used for the design of both thin-film transistor (TFT) LCD displays and organic LED (OLED) displays. With the growing adoption of smartphones, flat-screen TVs, smartwatches, and more, this is an area of increasing importance. Almost all manufacturers of displays use the Silvaco suite for design, and most of these designs are in high-volume manufacturing. The display market demands specific capabilities from TCAD and EDA tools that are not mainstream, so Silvaco works closely with our display customers to understand their needs and has been investing in development specific to this growing segment. We view power IC devices as a key market because automotive, industrial, and medical systems are all increasing in IC content. In addition, Silvaco has a long history of leadership in the compound semiconductor device market, which puts us in a unique position to leverage that leadership in this growth area. The automotive market is critical not only for our display and power design tools; our IP division also has CAN FD controllers that are key to the development of the autonomous driving market. We have also launched the MIPI I3C family of IP for sensor connectivity and IoT applications.

Advanced CMOS is an area Silvaco is not known for, but we have made a significant investment over the last year to change this. Many of our design tools are involved at 16nm and below. Silvaco is helping customers down to 7nm with our Clever™ 3D RC extraction tool. Our Variation Manager, Viso, Invar, and Jivaro RC reduction tools help our customers increase performance at signoff. Our SmartSpice is used down to 7nm due to its accuracy and performance, and we are constantly working to improve the performance of our products.

There is a wave of consolidations going on in the semiconductor industry right now. How do all these Mergers and Acquisitions affect EDA providers like Silvaco?
It seems like not a day goes by without an announcement of yet another semiconductor merger. There is no doubt that our industry is changing. The current wave of semiconductor mergers and acquisitions seems to be driven by at least three forces. The first is that design and development costs at the advanced nodes are increasing due to complexity, which drives some companies to seek mergers to effectively pool their resources. The second is a shift from semiconductors being driven by mobility to the emergence of automotive and IoT drivers. For example, the Qualcomm and NXP merger came about because Qualcomm, a leader in mobile, recognized the shift and acquired NXP, a leader in automotive and IoT. A third force is that China is investing in semiconductor growth and making many acquisitions to accelerate it. We see Silvaco positioned to get stronger as this consolidation wave rolls through, and even our position in China will strengthen through this period. Some trends are playing into Silvaco's strengths, such as display growth, power devices for automotive, and analog mixed-signal for IoT. Silvaco has completed four acquisitions in the last 18 months to help accelerate our growth, and they are all contributing to our expansion. We also announced our agreement to merge with Global TCAD Solutions (GTS) at IEDM 2016. The transaction is expected to be completed soon.

How is Silvaco taking the initiative in growing in emerging economies?
We are more committed to our growth now than ever before. We recently announced the opening of our office in Shanghai, China. Due to this growth, we see the need for technical talent on a global scale. We cannot rely on just one region to supply all our engineers and scientists. Silvaco already has development activity in the US, UK, France, Austria, and Japan. We are also adding technical offices in Russia and India, since both regions have developed solid technical talent bases for the software industry. We host Silvaco symposiums in regions around the world to support the local design ecosystem. For IP, after a hugely successful REUSE 2016 in Mountain View, we are planning a REUSE 2017 show in Shanghai.

Also Read:

CEO Interview: Toshio Nakama of S2C

CTO Interview: Mohamed Kassem of efabless

IEDM 2016 – Marie Semeria LETI Interview


ISS Gary Patton Keynote: FD-SOI, FinFETS, and Beyond!
by Scotten Jones on 01-28-2017 at 12:00 pm

Two weeks ago the SEMI ISS Conference was held at Half Moon Bay in California. On the opening day of the conference, Gary Patton, CTO of GLOBALFOUNDRIES, gave the keynote address, and I had the chance to sit down with Gary for an interview the next day.


Continue reading “ISS Gary Patton Keynote: FD-SOI, FinFETS, and Beyond!”


SoC Integration using IP Lifecycle Management Methodology
by Daniel Payne on 01-27-2017 at 12:00 pm

Small EDA companies often focus on a single point tool and then gradually add new, complementary tools, building more of a sub-flow to help you get that next SoC project out on time. The most astute EDA companies often choose to partner with other like-minded companies to create tools that work together well, so that your CAD department doesn't have to cobble together a working solution. I was pleased to find two such EDA companies that have worked well together on SoC integration using an IP lifecycle management methodology: Methodics and Magillem.

There are four tenets to this particular EDA tool interface:

• Bring IP management to all lifecycle stakeholders through an integrated platform
• Optimize IP governance
• Connect IP design reuse and the IP governance process
• Manage defects and traceability so that IP modifications are propagated and IP quality improves

From the Methodics side, they offer IP lifecycle management so that SoC design companies have control over both the design and the integration of internal and external design elements: libraries, analog, digital, and stand-alone IP. You get traceability and easier reuse by coupling the IP creators with every IP consumer. Collaboration between designers is enabled by a centralized catalog, automated notifications, flexible permissions, and integrated analytics.

    Related blog – CEO Interview, Simon Butler of Methodics

Over on the Magillem side you find tools based on IP-XACT, the IP-reuse standard, which help solve the challenge of maintaining consistency between different representations of your system by using a single source of data for your specification, hardware design, embedded software, and even documentation.

    The Methodics tool is called ProjectIC (yellow), and here’s how it works with Magillem (red) at a conceptual level:

    Now that we’ve seen the big picture, let’s delve one layer lower and start to look at how these two tools create a workflow:

    Related blog – IC Design Management, Build or Buy?

    This workflow will benefit designers in several ways:

    • IP standardization through IP-XACT
    • Fast IP configuration
    • Intelligent IP integration
    • IP design rule checking
    • Hierarchical IP version & bug tracking
    • IP cataloging
    • Automated Magillem reassembly when a workspace is updated
    • Results annotated back to the IP version
    • Notifications automatically sent based on subscription model
• Takes advantage of the ProjectIC triggers/workflow engine
    • Plugs a major hole in the RTL assembly methodology

Engineers are always curious about how integrations work under the hood. The engines from ProjectIC and Magillem communicate with each other transparently to the end user: each workspace load and update action triggers an executable script that runs Magillem in the user's workspace.
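Neither vendor's hook API is spelled out here, so the following is only a hypothetical sketch of what such an update trigger could look like; the "magillem_assemble" command, directory layout, and WORKSPACE variable are illustrative assumptions, not the actual ProjectIC/Magillem interface.

# Hypothetical workspace-update hook (illustrative only; not the real
# ProjectIC trigger API or Magillem command line).
import os
import subprocess

def on_workspace_update(workspace_root: str) -> None:
    # Assumed layout: the top-level IP-XACT description lives in the workspace.
    ipxact_top = os.path.join(workspace_root, "design", "top.xml")
    # Re-run assembly so the IP-XACT view tracks the refreshed workspace contents.
    subprocess.run(["magillem_assemble", ipxact_top], check=True, cwd=workspace_root)

if __name__ == "__main__":
    on_workspace_update(os.environ.get("WORKSPACE", "."))

The point of the trigger is simply that reassembly happens on every workspace change, so the RTL view and the IP-XACT view can never silently drift apart.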

    Related blog – 5 Reasons Why Platform Based Design Can Help Your Next SoC

    Stepping up a level, here’s what a tool user sees when running the ProjectIC tool:

    So this integration is up and running, ready to help out today. The next version of the integration has three refinements:

    • IP-XACT attributes auto-populated on IPs in ProjectIC
    • Changing configurations will automatically trigger IP-XACT attribute refresh
    • Results of multiple workflows will be visible in a single pane

    Summary
They say that necessity is the mother of invention, so it's refreshing to see that two EDA vendors have taken the time to define, build, and test an integration between their tools that will directly help SoC projects in their quest to be bug-free and work the first time.


    Timing Closure Complexity Mounts at FinFET Nodes
    by Tom Simon on 01-27-2017 at 7:00 am

Timing closure is the perennial issue in digital IC design. While the specific problems that must be solved to achieve timing closure have changed continuously over the decades, it has always loomed large. The timing closure problem has grown more severe with 16/14nm FinFET SoCs due to greater distances between IPs, higher performance requirements, and lower drive voltages, and it will only get worse in 10nm and 7nm SoCs.

By today's standards, the complexity of early timing closure challenges seems quaint. Initially, on-chip delays were dominated by gate delays. Later, as process nodes shrank, wire delays became the main factor. Wire lengths grew longer, and wires became thinner and developed higher aspect ratios. The taller, thinner wires exhibited increased capacitive and coupling delays aggravated by resistive shielding.

    Still, designers were able to address these issues with logic changes, buffer insertion and clock tree optimization. For many years clock tree synthesis (CTS) was neglected by the major P&R vendors. Around 2006 Azuro shook up the CTS market, realizing big gains in performance, area and power reductions with their improved CTS. Cadence later acquired them and now we see attention to improving CTS from Synopsys as well. Big changes have occurred with concurrent logic and clock optimization.

But the problem of timing closure occurs not only inside P&R blocks but also between them. Within blocks it is often possible to avoid multi-cycle paths. However, connections between blocks at nodes like 28nm and below are not so easy to deal with. According to Arteris, with a clock running at 600MHz you can reasonably expect ~1.42ns of usable cycle time per clock cycle. Assuming a transport delay of 0.63 ns/mm, it is only possible to cover about 2.2mm before registers need to be inserted into a data line. And in most 28nm SoCs, many paths are longer than 2.2mm.
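That reachable-distance figure follows directly from the quoted numbers; here is a quick sanity check in Python (the 1.42 ns and 0.63 ns/mm values are from Arteris as cited above; nothing else is assumed):

# Usable cycle time divided by transport delay gives the longest wire run
# a signal can make before a pipeline register must be inserted.
usable_cycle_ns = 1.42        # usable time per 600 MHz cycle (quoted above)
transport_ns_per_mm = 0.63    # wire transport delay at 28nm (quoted above)

max_run_mm = usable_cycle_ns / transport_ns_per_mm
print(f"Longest unregistered run: {max_run_mm:.2f} mm")
# -> about 2.25 mm, consistent with the ~2.2 mm figure in the text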

The process of improving timing becomes multifaceted, with designers torn between a huge number of trade-offs. Low-threshold gates are faster but can wreak havoc with power budgets. Likewise, adding pipeline stages to the interconnect between major blocks must be weighed carefully because of power and resource limitations. Ideally, chip architects can look ahead and anticipate timing issues later in the flow as chip assembly takes place. However, this does not always work out as planned. The burden often falls to the backend place-and-route team to rectify unaddressed timing closure issues.

    Furthermore, when timing closure issues at the top level are identified late in the flow, they can necessitate iterations back to the front end team, causing massively expensive delays. The history of digital IC design is filled with innovations to deal with timing closure. Early placement and routing tools were the first tools used to address timing issues. They were quickly followed by floor planning tools. The new floor planning tools were very good at estimating IP block parameters, but not so good at optimizing the placement of the interconnect that exists between the IP blocks.

    The designs most prone to difficult timing issues are large SoCs at advanced nodes. Their complexity has grown explosively. For timing closure within blocks history has shown that linkages with the front end can help back end tools do their job better. The same is likely with connections between blocks.

    In fact, over the last couple of years we have seen increasingly sophisticated approaches to top level interconnect in SoCs. One example is the adoption of Network on Chip (NoC) for making connections between blocks more efficient, providing reduced area, and offering higher performance with lower power. Arteris, a leading provider of Network on Chip technology has recently hinted that NoC may be key in gaining further improvements to top level timing closure.

The largest SoCs, CPUs, and GPUs are scaling up dramatically, now exceeding 10-15 billion transistors. Timing closure in these designs is paramount. However, the scale of the problem has moved beyond the ability of any one part of the flow to provide a comprehensive solution. Front-to-back integration will be essential. I predict that 2017 will prove to be a pivotal year for solutions to timing closure in SoCs.


    The Nannification of Tesla
    by Roger C. Lanctot on 01-26-2017 at 12:00 pm

I can't tell you how many times I have sat down with executives of large companies and startups who have tried to get me excited about geo-fencing. Geo-fencing is a clever little technology that allows a device maker to restrict access to a device, service, or content when that system roams beyond a particular zone of acceptable use, based on time of day or location.
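In code, the core check is simple; here is a minimal sketch assuming a circular zone and an allowed time window (the coordinates, radius, and hours are made-up values, and real automotive systems use map polygons and road attributes rather than a circle):

# Minimal geo-fence check: inside an allowed zone AND inside an allowed window.
# All zone parameters below are illustrative assumptions.
import math
from datetime import time

def in_fence(lat, lon, now,
             center=(37.39, -122.03), radius_km=5.0,
             allowed=(time(6, 0), time(22, 0))):
    # Small-distance flat-earth approximation, adequate for a few km.
    dlat_km = (lat - center[0]) * 111.0
    dlon_km = (lon - center[1]) * 111.0 * math.cos(math.radians(center[0]))
    inside_zone = math.hypot(dlat_km, dlon_km) <= radius_km
    inside_window = allowed[0] <= now <= allowed[1]
    return inside_zone and inside_window

# A feature (or the vehicle itself) is enabled only when in_fence(...) is True.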


Geo-fencing is a powerful tool for security applications – as in preventing a car from operating outside of a time- or geography-delimited area – or enabling notifications when said vehicle is so misused. But my reaction always takes me back to my teen years and the finky-ness underlying the technology. The creators of geo-fencing always describe it as a godsend for tracking family members – which it might be – but for many, including me, it is an intrusion and an unwelcome one at that.

    It looks like Tesla Motors is getting into the geo-fencing game and this may well unwind some of the pizzazz many associate with the brand. The latest incarnation of Tesla’s vaunted autopilot now limits use to certain classes of roads and to posted speed limits and won’t drive faster than 45 miles per hour.

    There are several serious implications to these changes – and they are reflected in the responses of Tesla owners on public forums ranging from rage to minor vexation. The two biggest implications, though, are A) the use of map data integrated with autopilot and B) the impact of taking away functions via over-the-air software updates.

    The first implication is huge. Tesla is clearly using map data to determine acceptable areas for using autopilot – at least according to Model S owners who have taken recent deliveries. These owners report that autopilot is no longer available on secondary roads. To be honest I haven’t been able to confirm this phenomenon, but if it is true it reflects Tesla implementing geo-fencing to restrict access to the autopilot function.

    This is a first-time implementation in the automotive industry with very serious implications regarding the vehicle’s location awareness, privacy, security and the rights of the owner. The Tesla ownership experience has always been one that requires a surrender of some control – and this is not the first time that vehicle functions have been altered or removed – but the fact that vehicle performance characteristics are determined by location is something new for the automotive industry.

    Like most geo-fencing applications, the objective is to enhance safety and protect the driver, but the reaction among Tesla drivers in public forums suggests a growing level of frustration. The amazing and terrifying thing about the original autopilot was that it could be used anywhere, any time at any speed (or so it seemed). It seems that the autopilot party is coming to a close in the interest of safer driving.

Location awareness alone is a big deal, and Tesla's integration of map data with semi-automated driving in a production vehicle shows the company leading the way in automated driving development, even as its customers perceive it as subtracting capabilities. Tesla owners are most annoyed, though, at the function being limited to posted speed limits or speed limits + 5 MPH.

    Other car companies, such as Ford Motor Company, have found ways to integrate local speed restrictions with their cruise control functions – but no other car company has introduced defined geographic areas for the use of cruise control.

    Tesla autopilot users are predictably and justifiably complaining that driving at or below the speed limit is actually creating a more hazardous driving situation – backing up traffic and frustrating drivers who are trying to pass. Tesla has morphed from the auto industry’s bad boy to teacher’s pet. What’s next? Scanning the license plates of scofflaws and reporting them wirelessly and anonymously to local police?

    Not likely. But restricting the functionality of autopilot has instantly converted the feature from a liberating other-worldly experience into soul-crushing, predictable, and more or less run-of-the-mill adaptive cruise control (with passing). The challenge, for Tesla, will be to use this new level of map integration to enhance rather than restrict functionality.

The integration of the map may ultimately allow Tesla to deliver a fully automated driving experience or, in the short term, an enhanced ability to transfer between highways or from highways to secondary roads and back again in an automated manner. For now, though, Tesla must cope with customers crabbing about capabilities that have been downgraded.

The function downgrade enabled by an over-the-air software update will give some consumers – and car companies – pause to reconsider their automotive software update strategy. The reality is that no reconsideration need take place. Like smartphones, cars will need to be updated, and sometimes features or functions will be altered or removed. It's best we all get used to that as soon as possible. There is no future in making updates optional – especially when safety is at stake.

Bottom line: Tesla has upped the finky-ness of autopilot with map integration. It's worth noting, though, that for now Tesla has not made the full descent into sending drivers latte coupons when they drive near a Starbucks. While the rest of the auto industry is trying to turn artificial intelligence into a contextual marketing tool to distract drivers, Tesla once again demonstrates its laser-like focus on enhancing, refining, and advancing the driving experience. Tesla's technology leadership continues unabated.

For the full debate within the Tesla Motor Club:

    https://teslamotorsclub.com/tmc/threads/autopilot-speed-restrictions-what-do-you-think.82652/

*According to the reports of some new Model S owners, Autopilot will no longer work in urban settings such as New York City – CNET video: https://www.cnet.com/roadshow/news/tesla-autopilot-model-s/

    Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


AAPL vs. QCOM: Who Wins?
    by Daniel Nenni on 01-26-2017 at 7:00 am

Things just got interesting in the iPhone supply chain with the $1B AAPL vs. QCOM legal action filed this week. For the life of me I could not understand why Apple second-sourced the normally QCOM-supplied modem in the iPhone 7. It caused quite a stir in the technical community, but we could only surmise that it was a price issue on the business side. Well, clearly it was more than just price.

AAPL: "For many years Qualcomm has unfairly insisted on charging royalties for technologies they have nothing to do with. The more Apple innovates with unique features such as TouchID, advanced displays, and cameras, to name just a few, the more money Qualcomm collects for no reason and the more expensive it becomes for Apple to fund these innovations. Qualcomm built its business on older, legacy, standards but reinforces its dominance through exclusionary tactics and excessive royalties. Despite being just one of over a dozen companies who contributed to basic cellular standards, Qualcomm insists on charging Apple at least five times more in payments than all the other cellular patent licensors we have agreements with combined."

Apple seems to be piggybacking on the legal actions against QCOM from China, Korea, Taiwan, the EU, and the USA. But Apple's problems may have started when they used Intel modems and broke an exclusivity clause with QCOM. Either way, this is a legal mess that may not be resolved for months or even years.

QCOM: "Apple's complaint contains a lot of assertions. But in the end, this is a commercial dispute over the price of intellectual property. They want to pay less than the fair value that QUALCOMM has established in the marketplace for our technology, even though Apple has generated billions in profits from using that technology."

In the meantime let's look at the modem issue and see who will ultimately profit. My bet is TSMC, of course, and here is why:

Remember, even though Intel supplies the XMM 7360 LTE modem used in the iPhone 7, TSMC manufactures it on their 28nm process. The next-in-line Intel modem is the XMM 7480, which was announced a year ago and is now being qualified by AT&T and other carriers. Intel has made statements in the past that it will move modem manufacturing from TSMC to Intel, so the $1B question is: who will manufacture the XMM 7480?

    Here is the answer from the J.P. Morgan Tech Forum at CES 2017:

    Q – Harlan Sur: So you guys recently got qualified with your next-gen XMM 7480 modem. Help us understand, first of all, is this product being manufactured by Intel internally or is it still being manufactured at TSMC?

    A – Navin Shenoy: We’ll make decisions on where we manufacture the modem on a pragmatic basis. I’m not going to tell you right now yet where we’re going to manufacture XMM 7480 or the subsequent ones. But suffice it to say, we’re looking at both internal and external options.

Clearly that decision has already been made since the chip is in production. As you can tell, Navin (Intel Client Computing Group VP) is a career Intel employee well versed in doublespeak. I was hoping Murthy Renduchintala (Navin's boss) would rid Intel of doublespeakers, but clearly that is not the case, yet.

If it were on an Intel process you can bet Navin would have proudly boasted, so my bet is that the XMM 7480 is already in high-volume manufacturing on TSMC 28nm, and it is highly unlikely it will be moved to Intel 14nm. In my opinion the first Intel modem to use an Intel process (14nm) is the 5G modem announced this month. 4G modems are a price-driven commodity, and nobody does 28nm better than TSMC. I would also argue that TSMC's 16FFC process beats Intel 14nm on price and power, but Intel needs to prove its ability to manufacture mobile chips to justify its huge investment, so the 5G modem will probably be on Intel 14nm.

The other winner, of course, is one of my favorite IP companies (CEVA), as their IP is designed into the Intel 4G modem. Intel licensed the CEVA-XC core for LTE chips back in 2010, around the same time it acquired Infineon's wireless business unit. Infineon is also a CEVA licensee for their ARM-based 3G and 4G LTE modems.

    The Intel Corporate Earnings call is tonight so we can continue this discussion in the comments section…


    Power Management Beyond the Edge
    by Bernard Murphy on 01-25-2017 at 7:00 am

Power in IoT edge devices gets a lot of press around how to make devices last for years on a single battery charge, largely through "dark silicon" – turning on only briefly to perform some measurement and shoot off a wireless transmission before turning off again. But we tend to forget that the infrastructure supporting those devices – gateways, backhaul communication, and clouds – cannot play by the same rules. This infrastructure must deal with unpredictable traffic in high volumes, where power-down strategies are impractical.
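A toy estimate shows how aggressive duty-cycling gets an edge node to multi-year battery life; every number below is an illustrative assumption, not a figure from this article.

# Toy duty-cycle battery-life estimate (all values are assumptions).
battery_mah = 220.0          # coin-cell capacity (assumed)
sleep_ua = 2.0               # sleep current in microamps (assumed)
active_ma = 15.0             # awake current: measure + radio TX (assumed)
active_s_per_hour = 0.5      # awake for half a second every hour (assumed)

# Average current in mA: charge drawn per hour divided by 3600 s.
avg_ma = (active_ma * active_s_per_hour
          + (3600 - active_s_per_hour) * sleep_ua / 1000.0) / 3600.0
years = battery_mah / avg_ma / (24 * 365)
print(f"Average current: {avg_ma * 1000:.1f} uA -> about {years:.1f} years")

With these numbers the average draw is about 4 uA, or roughly six years on one cell – which is exactly why the edge can duty-cycle while always-on infrastructure cannot.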

    Moreover, if you consider total power burned and the cost of that power, this is far, far higher in the infrastructure than in the edge devices. Datacenters alone are believed to consume about 3% of total energy produced worldwide. Those costs motivate owners of infrastructure to lean hard on equipment suppliers to reduce power by whatever means they can. Meeting that objective generally requires a much more nuanced management of power, depending heavily on understanding how a wide range of realistic workloads will drive the system.

AMD is a company whose products are used in datacenters, wireless and wireline networking, network security, and unified communications, so they feel that pressure across their product line. Illustrating this, they have a white paper on how they approached power reduction in one of their server-class designs, written by a couple of technical team members based in Austin.

    The objective was a retooling for lower power since they were starting from an existing design; changes to the fundamental architecture weren’t an option. Implementation-stage tweaks wouldn’t return big enough savings so that left micro-architectural fine-tuning as the only way to drive down power. While such changes are usually quite modest, impact can be significant – AMD was able to reduce idle power by 70% and peak power by 22%. But as always in power reduction, there’s no simple recipe for finding the best places to make changes. You have to try a lot of possibilities against a lot of different use-cases, then decide which of those are most promising in power saving, while balancing impact on other factors such as area and timing.


    That kind of iteration isn’t possible if you’re going to measure the impact through power estimation at the gate-level since each RTL what-if would require a re-implementation cycle through multiple tools. AMD estimated that it would take 6-8 weeks to generate power estimates at the gate-level, at which point that analysis would be irrelevant to a design that had evolved far beyond the point at which the measurements were made.

    A much better approach is to iterate on changes with power estimation at RTL. In absolute terms this won’t be as accurate as estimation at the gate level – RTL-based estimation must estimate Vt mixes, design-ware mapping, clock trees and interconnect parasitics, all factors which are known at the gate level and on which gate-level accuracy depends. But relative accuracy between RTL-based estimates for modest changes on the same design can be much better. Moreover, for incremental changes estimation can intelligently re-compute the impact on activity factors without need for re-simulation, making power-estimation even faster. AMD observed that these factors together trimmed analysis time to a day.
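To see why bookkeeping on activity factors lets the tool skip re-simulation, recall that dynamic power per net is roughly alpha x C x V^2 x f; if a micro-architectural tweak only rescales alpha on known nets, totals can be recomputed directly rather than re-simulated. A minimal sketch, with all voltages, capacitances, and net names assumed for illustration:

# Dynamic power bookkeeping per net: P = alpha * C * V^2 * f.
# All numbers and net names below are illustrative assumptions.
V = 0.8       # supply voltage in volts (assumed)
F = 1.0e9     # clock frequency in Hz (assumed)

nets = {      # net name -> (switched capacitance in farads, activity factor)
    "bus_a": (2.0e-13, 0.20),
    "bus_b": (1.5e-13, 0.35),
}

def total_dynamic_power_w(nets):
    return sum(c * alpha * V * V * F for c, alpha in nets.values())

before = total_dynamic_power_w(nets)
nets["bus_b"] = (1.5e-13, 0.35 * 0.5)   # e.g. a new clock gate halves activity
after = total_dynamic_power_w(nets)
print(f"{before * 1e3:.3f} mW -> {after * 1e3:.3f} mW")  # recomputed, not re-simulated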

The design team chose Ansys' PowerArtist to drive their RTL power estimation. This is a product with a distinguished history. Commercial RTL-based power estimation was first introduced back in the 1990s by Sente, in a product called WattWatcher. The product and the team have gone through a couple of acquisitions and some name changes, but PowerArtist is that same product, greatly evolved of course. Point being, they've been doing this for a long time and have widespread industry recognition as experts in the space.

AMD provides detail on how they used PowerArtist to isolate power hogs and experiment with improved clock gating. They also found that 50% of estimated power was being burned in the clock distribution network (PowerArtist models this network for exactly this reason). This is a good example of why designer judgment (guided by input from the tool) is so crucial in power reduction. You could gate clocks at the leaf level wherever feasible and still not significantly reduce power, whereas carefully planned gating higher in the clock tree for a smaller number of leaf cases could make a much bigger difference.


Power reduction is a process, as almost anyone in the game will be eager to tell you. You don't do one task and then put the power tools away. You're constantly checking and improving, which is again why fast iteration in power estimation is so important. I found one chart in the AMD write-up particularly interesting – a trend chart for estimated power as the design progressed. Power-aware design teams embed this process in their regression suites so they can update trend charts like this as the design evolves. You fixed that timing problem, or you changed the queue manager from fixed length to variable length, but power spiked up again – what happened? Getting frequent updates is the best way to check and correct before problems become unfixable. PowerArtist has the tools you need to support this kind of analysis and trending.

    You can read the AMD team’s write-up HERE.

    More articles by Bernard…


    Mentor Safe Program Rounds Out Automotive Position
    by Bernard Murphy on 01-24-2017 at 7:00 am

Mentor has an especially strong position in the automotive space given their broad span of embedded, SoC, mechanical, thermal, and system design tools. Of course, these days demonstrating ISO 26262 compliance is mandatory for semiconductor and systems suppliers, so EDA vendors need to play their part in supporting those suppliers in demonstrating that the components and design tools they offer meet appropriate levels of certification.

Mentor has recently announced the Mentor Safe Program, which aims to comprehensively qualify and document, to ISO 26262 standards, a select range of components and design tools used in the design of automotive systems. This includes the software components Nucleus SafetyCert and the Volcano VSTAR AUTOSAR basic software stack, as well as several design tool qualifications. This is an important addition for systems suppliers, who ultimately must demonstrate complete compliance to the standard. The Safe Program provides the documentation and certification to back that up in areas covered by Mentor components and tooling.

    Nucleus SafetyCert is a version of the popular Nucleus RTOS, in the certification process with TÜV-SÜD, and is verified and documented to meet requirements for device manufacturers developing to avionics DO-178C Level A, industrial IEC 61508 SIL 3, medical IEC 62304 Class C, and automotive ISO 26262 ASIL B. Volcano VSTAR is also TÜV-SÜD certified to ASIL B. The process for design tools is a little different because what is required for a tool depends on the tool classification level (TCL). I’ll talk a little about that further on in this piece.

ISO 26262 is dry stuff, but we need to understand it if we want to succeed in this rapidly growing market, so it's worth digging into the process in a bit more detail. By way of example, look at what Mentor supplies in the Nucleus SafetyCert certification package:

    • Source code
    • Documentation on the software development, configuration management and QA processes
    • Documentation on the requirements process, designs standards and coding standards
    • Documentation on the software verification process, the test plan and the complete software test suite
    • A safety manual to be used by system integrators to guide correct/permissible usage

    All of this with traceability across the safety lifecycle and extensive hyperlinking to simplify audits and reviews. (And yes, you need to sign an NDA to get access to this package!)


For tools, Mentor generates a report that becomes a component of the documentation for product certification where required. These reports have up to eight sections:

    • Sections 1, 2 and 3 covering boilerplate information on the document and the tool classification process.
• Section 4 covering the tool classification – the tool impact (TI) and tool error detection (TD) levels, leading to a tool confidence level (TCL), based on use-cases, configuration, environment, safety checks, and tool/tool-chain restrictions. If the classification is TCL1, the rest of the report is not required.

    (The following sections are only required for tools at TCL2 or TCL3 levels)

    • Section 6 describes the software tool qualification process
    • Section 7 describes the tool qualification – QA documents, test-cases, errata and tool error detection relative to use-cases
    • Section 8 describes the software tool qualification conclusion
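
For readers unfamiliar with the standard, the TI/TD-to-TCL determination can be paraphrased as a small lookup. This is a simplification of the table in ISO 26262-8; the standard itself is the normative reference.

# Simplified paraphrase of the ISO 26262-8 TI/TD -> TCL determination.
def tool_confidence_level(ti: int, td: int) -> int:
    """ti: tool impact (1 or 2); td: tool error detection (1, 2, or 3)."""
    if ti == 1:
        # The tool cannot introduce or fail to detect a safety-relevant error.
        return 1
    # For TI2, confidence tracks how well tool errors are detected.
    return {1: 1, 2: 2, 3: 3}[td]

# Example: a tool whose output is always independently verified (TD1)
# stays at TCL1 even though it could in principle inject errors (TI2).
assert tool_confidence_level(2, 1) == 1

This matches the report structure above: a TCL1 result ends the report, while TCL2/TCL3 results pull in the qualification sections.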

    Tools that are currently covered by this certification program are several of the Tessent silicon test and yield analysis tools and ReqTracer for requirements tracing. Mentor intends to add more tools over time.

Nine of the Tessent silicon test and yield analysis tools can be used at the TCL2 level, so they have been certified through SGS-TÜV Saar for use at any TCL level. ReqTracer is classified as TCL1, so it is not required to have certification. However, Mentor provides justification in the report for the tool on why they classified the tool impact (TI) and tool error detection (TD) levels leading to this TCL classification.

Per the ISO 26262 standard, this is all Mentor must do to demonstrate compliance for a TCL1 tool, but they have actually taken it a step further for ReqTracer. The standard requires that confirmation measures (review and approval of the process) be assessed by a different person than the one who performed the classification (a level I1 degree of independence per the standard), but Mentor took the review to level I3, where it was performed by an independent Functional Safety Certified Automotive Manager (FSCAM). Looks like they take this pretty seriously.

    You can read more about the Mentor Safe program HERE.

    More articles by Bernard…


Qorvo and Keysight to Present on Managing Collaboration for Multi-site, Multi-vendor RF Design
    by Mitch Heins on 01-23-2017 at 12:00 pm

    Over the last several weeks I’ve been having a lot of discussions with colleagues around IP reuse and design data management. This led me to a discussion with Ranjit Adhikary, Marketing Vice President for ClioSoft.

ClioSoft is best known for their design collaboration software platform called SOS. They also sell an enterprise IP management platform that works in conjunction with SOS. As I spoke with Ranjit, it became clear to me that a discussion about IP and design management is really a discussion about how to work in a collaborative environment. IP blocks are simply an artifact of collaborative design that must be managed.

So, backing up a bit, I started to quiz Ranjit about what they are seeing in the design arena today and how this has affected how they market their products. With the advent of the internet of things (IoT), a key driver for ClioSoft's business for the last two years has been wireless design. How so, you might ask?

Well, wireless implies RF, analog, and mixed-signal designs, and these types of designs are typically done using full-custom design methodologies. Further, full-custom design methodologies imply lots of hands-on engineering work in interactive EDA tools. To handle complexity, designs get carved up into blocks that can be easily managed by humans. Large designs use lots of hierarchy and lots of blocks. Combine this with ever-increasing competition and ever-shortening design cycle times, and you must have big teams of engineers working simultaneously on these designs, with the need to collaborate with each other.

To make matters more complex, companies that create these types of RF and mixed-signal ICs typically come about through multiple rounds of mergers and acquisitions and, as a result, have design teams scattered around the world in multiple locations, across multiple time zones, and often with multiple different EDA tool sets. As you can imagine, this is a very fluid environment and can be fraught with peril as a company gets closer to tape-out time. The worst-case scenario is that the left hand doesn't stay in sync with the right hand and a design gets taped out using a wrong and incompatible version of a block, making the entire IC dead on arrival.

This then begs the question: how do companies manage these types of designs when using geographically dispersed design teams with different EDA tools? Ranjit started to explain more of the details about how their product offerings help, then stopped short and said: hey, why am I telling you this? We are hosting a webinar in a week's time where one of our customers, Qorvo, and one of our EDA partners, Keysight, will be presenting on just this topic. You should attend the webinar and they can tell you in their own words how they manage RF designs in just such an environment. Qorvo will talk about how they manage their RF designs, and Keysight will talk about how they handle the issues of EDA interoperability.

Great, I'm all in! And…while I'm at it, I'll let everyone else know about the webinar as well, because if I'm asking, I'm sure everyone else is probably thinking the same thing. The webinar is being hosted by ClioSoft and will be held on February 1st, 2017 at 10:30am PST. To register for the webinar, simply follow this link and use the "CLICK HERE TO REGISTER" button at the bottom of the page.

    In the meantime, if you have interest to learn more about ClioSoft’s offerings you can visit their website at: www.cliosoft.com.

    Also Read

    Tool Trends Highlight an Industry Trend for AMS designs

    Managing International Design Collaboration

    Making your AMS Simulators Faster (webinar)