Improvements in SRAM Yield Variation Analysis
by Tom Dillinger on 03-27-2016 at 12:00 pm

The design of an SRAM array requires focus on the key characteristics of readability, writeability, and read stability. As technology scaling has enabled the integration of large (cache) arrays on die, the sheer number of bitcells has necessitated a verification methodology that focuses on “statistical high-sigma” variation analysis. Designers must ensure that the statistically expected number of “weak cells” failing these characteristics is sufficiently low, and adequately covered by the array’s architecture for error detection/correction and redundant rows and columns.

The importance of array design robustness is amplified by the goal of operating the array in a unique, often dynamically-adjusted, supply voltage domain, to reduce both active power and standby leakage.

The “brute force” method to determine the high-sigma SRAM yield subject to PVT variation is to execute a Monte Carlo-based sampling of parameters from their statistical distributions, and simulate circuit behavior with these parameter values. Yet, a suitable assessment of the array yield for a large SRAM requires an inordinate number of Monte Carlo sampled simulations. Algorithms for optimized sampling and circuit response sensitivity to determine high-sigma SRAM yield have been developed, to allow faster array design closure.
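As a rough illustration of the scaling problem (the failure criterion and sample counts below are hypothetical stand-ins, not from any real PDK), a plain Monte Carlo loop in Python shows how few tail events even ten million “simulations” produce:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a SPICE run: declare a read-stability fail when the
# bitcell's normalized local Vth shift exceeds a 5-sigma criterion.
FAIL_SIGMA = 5.0

n_trials = 10_000_000                 # already an enormous SPICE budget
vth_shift = rng.standard_normal(n_trials)
n_fails = int(np.count_nonzero(vth_shift > FAIL_SIGMA))

print(f"{n_fails} fails in {n_trials:.0e} trials "
      f"-> observed fail rate ~ {n_fails / n_trials:.1e}")
# P(X > 5 sigma) is about 2.9e-7, so only a handful of fails are expected
# here; resolving a 6-sigma tail (~1e-9) this way would take billions of
# runs, which is why optimized sampling algorithms are needed.
```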

One of the leading EDA companies providing optimized variation analysis tools is Solido Design Automation. I recently had the opportunity to chat with Kris Breen, VP of Customer Applications, and Amit Gupta, President and CEO at Solido, about the latest advancements incorporated into the recent Version 4 release of their Variation Designer software. As Kris noted, “At Solido, we are variation specialists — it is our sole focus. We recognize that high-sigma yield analysis requires unique expertise. We assist customers with extensive education offerings and best practices methodology support,” offered as an integral part of product licensing. Solido has clearly identified an important area in IC design — their year-over-year revenue growth in 2015 was 70%, in an otherwise tepid EDA market.

Before delving into the latest capabilities of Variation Designer for SRAM analysis, here’s a little background on this crucial SoC design topic.

As mentioned above, the anticipated “yield” of an SRAM array dictates the architectural decisions (e.g., redundancy) and performance/power tradeoffs (e.g., a unique VDD supply domain, with operating and standby modes). The volume of bitcells requires a “high-sigma” yield analysis, while the extensive die area allocated to large arrays requires attention to global and local parameter variation.
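A quick back-of-envelope calculation shows why “high sigma” is unavoidable; the array size and yield target below are illustrative choices, not figures from the article:

```python
from scipy.stats import norm

n_cells = 128 * 2**20          # a 128 Mb array, illustrative
array_yield_target = 0.99      # desired yield before redundancy/ECC kicks in

# Per-cell fail probability that still meets the array target:
# yield = (1 - p_fail)^n_cells  =>  p_fail = 1 - yield^(1/n_cells)
p_fail = 1.0 - array_yield_target ** (1.0 / n_cells)
sigma = norm.isf(p_fail)       # one-sided Gaussian-equivalent sigma

print(f"per-cell fail probability must be below {p_fail:.1e} (~{sigma:.1f} sigma)")
```

With these numbers the per-cell fail probability must stay below roughly 1e-10, i.e. beyond 6 sigma on a Gaussian scale.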

SRAM yield analysis basics

SRAM bit cell circuit analysis involves simulation of three main characteristics:

Read stability (RS)

Read stability implies that the cell’s stored value is unaffected by switching activity in the surrounding neighborhood. Further, a read access to the cell, and the corresponding read current between cell and bitlines, must not drop the internal storage node voltage enough to potentially “flip” the stored value.

Read access performance (RA)

A read access fail would occur if the driven bitline voltages have not reached a suitable differential at the inputs to the sense amplifier, at the time in the access cycle when the sense amp is enabled. Note that statistical process variation yield analysis involves the bit cells in combination with the sense amplifier sensitivity, as reflected in the design margin required for sense amp offset.
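The read-access criterion boils down to a charge-balance estimate: the differential the cell develops on the bitlines by sense-amp enable time, roughly I_read·t/C_bitline, must exceed the sense-amp offset plus margin. A minimal sketch with made-up but plausible component values:

```python
# Back-of-envelope read-access check (all component values hypothetical):
i_read    = 25e-6     # bitcell read current, A
c_bitline = 100e-15   # bitline capacitance, F (grows with cells per column)
t_enable  = 200e-12   # wordline rise to sense-amp enable, s
v_offset  = 20e-3     # sense-amp input offset to margin against, V

dv = i_read * t_enable / c_bitline    # differential developed: I*t/C
ok = dv > 2 * v_offset                # require 2x the offset as margin
print(f"bitline differential = {dv*1e3:.0f} mV -> {'OK' if ok else 'FAIL'}")
```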

Write access (WA)

During a write cycle, the bitline differential is transferred to the bitcell, with sufficient current to overdrive the existing internal node voltages. A write access failure would result from an insufficient transition within the cell (i.e., to some % of VDD for the ‘1’ node) by the end of the cycle — although the positive feedback within the cell would continue to boost node voltages, an immediate read to the same cell location would need sufficient read current drive, as described above.

All these characteristics must be robust across PVT variations — specifically, at a reduced VDD supply voltage.

The figure below provides an illustration of how the “sigma yield” of an example array would vary with VDD, for RS, WA, and RA (with RA examples using different numbers of bitcells per column).


Solido’s Variation Designer has traditionally included a High-Sigma Monte Carlo (HSMC) array analysis feature, with specific statistical sampling optimizations to reduce the requisite number of Spice circuit simulations to realize a yield assessment with accurate cases at the tail of the yield distribution (e.g., 6-sigma and greater). The results are not an “extrapolation” of a distribution curve, but provide specific circuit simulation samples at high sigma for further analysis of the selected parameter values. (As process statistical distributions are increasingly non-Gaussian, extrapolation to the high-sigma tail of an overall yield is highly inaccurate.)
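The general idea behind such optimized sampling (the sketch below is my own illustration of the concept, not Solido’s proprietary algorithm) is to generate a very large candidate set cheaply, rank the candidates by how likely they are to fail using an inexpensive surrogate, and spend the real simulation budget only on the most dangerous ones. In a real tool the surrogate is typically learned from an initial batch of simulations; here it is simply given:

```python
import numpy as np

rng = np.random.default_rng(1)

def spice_sim(x):
    """The expensive 'golden' check; a trivial stand-in here."""
    return x[0] + 0.8 * x[1] > 6.0          # toy fail criterion

# 1. Generate a huge candidate set of process-variation samples (cheap).
candidates = rng.standard_normal((5_000_000, 2))

# 2. Rank candidates worst-first using a cheap surrogate model.
score = candidates[:, 0] + 0.8 * candidates[:, 1]
worst_first = np.argsort(score)[::-1]

# 3. Spend the simulation budget only on the most-likely-to-fail samples.
budget = 2_000
n_fails = sum(spice_sim(candidates[i]) for i in worst_first[:budget])
print(f"{n_fails} fails found with {budget} simulations "
      f"out of {len(candidates):,} candidates")
```

Because the full simulations are run on actual tail samples, the result is a verified count of failing cases rather than an extrapolated curve.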

Kris highlighted, “Our methodology works seamlessly with our customer’s existing Spice circuit simulation environment. Variation Designer works with all commercial Spice products. If you can measure it, you can use Solido to analyze it.”

Specifically, this release of Variation Designer extends the High-Sigma Monte Carlo feature for SRAM yield analysis, with a new “Hierarchical Monte Carlo” capability. The variation experts at Solido identified an opportunity to improve the accuracy of the SRAM yield methodology — the yield is an intricate interdependence between variations in different functional blocks of the overall array — e.g., bitcell, bitline pre-charge, sense amplifier, address decode, word line drivers.

With a memory slice or critical path and a minimal amount of architectural information as input from the designer, Hierarchical Monte Carlo statistically reconstructs the full on-chip memory to produce accurate full-chip yield results. Hierarchical Monte Carlo works by sampling each component on the slice or critical path with the correct statistical frequency – for example, 3 sigma global statistical variation, 4 sigma on control circuitry, 5 sigma on sense amps, and 6 sigma on bitcells. Runtimes are still fast, as only the slice or critical path is ever simulated in Spice, and because the millions or billions of correctly structured samples are massively reduced using technology similar to Solido’s HSMC such that only thousands of simulations are actually run.
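One way to picture the statistical reconstruction (a simplified sketch with made-up replication counts and failure weights; the real tool works on actual netlist slices) is that each component on the simulated path must behave like the worst of however many copies of it exist on the full chip, which can be sampled directly with an inverse transform:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def worst_of(n_copies, n_samples, rng):
    """Sample the worst (max) of n_copies i.i.d. standard-normal local
    variations: the max of n uniforms is U**(1/n), then invert the CDF."""
    u = rng.random(n_samples)
    return norm.ppf(u ** (1.0 / n_copies))

n = 200_000
bitcell   = worst_of(128 * 2**20, n, rng)   # ~1.3e8 bitcells -> ~6 sigma
sense_amp = worst_of(16_384, n, rng)        # thousands of SAs -> ~4 sigma
control   = worst_of(1_024, n, rng)         # decoders/drivers -> ~3 sigma
glob      = rng.standard_normal(n)          # one global shift per chip

# Toy failure metric: the path fails when combined deviation exceeds margin.
margin = 11.0
fail = (0.7 * glob + bitcell + 0.5 * sense_amp + 0.3 * control) > margin
print(f"estimated full-chip fail fraction: {fail.mean():.2e}")
```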

According to Solido, this is the first time the problem of statistically verifying memory critical paths and slices has been accurately solved. Previous methods were pessimistic by 10%-60%, as measured in comparison studies done by Solido’s customers. The advantage of getting the right answer is that it enables designers to tighten margins significantly, producing faster, lower power, memories that are less expensive to produce.

This latest release of Variation Designer also includes updates to allow designers to enhance their analysis to more effectively incorporate process and environment parameters together. Alas, I’m out of space for this article — look for a subsequent article on Variation Designer’s Statistical PVT features soon.

For more information on Solido’s technology, please follow this link.

-chipguy


Webinar: Design a LTE-based M2M Asset Tracker SoC with CEVA, using GNSS and OTDOA
by Eric Esteve on 03-27-2016 at 7:00 am

If you could not attend the live webinar from CEVA, “Learn how to design an LTE-based M2M Asset Tracker SoC”, you have a second chance to access it remotely and learn a lot. You will learn about CEVA’s Dragonfly platform 1 or 2, based on the CEVA-XC8 or CEVA-XC5, and you will discover how mobile Machine-to-Machine (M2M) devices developed in the coming years will use a combination of two technologies, cellular and positioning. Cellular for M2M, like LTE or 3G, is well known, as is positioning like the Galileo Global Navigation Satellite System (GNSS), but Observed Time Difference Of Arrival (OTDOA) is a novel positioning technology based on existing 4G antennas.

Total cellular M2M connections, 243 million in 2014, are expected to reach 2 billion by 2020, a 42% CAGR. But the M2M technology mix will change dramatically in the next 3 to 5 years: while the vast majority (60%) of WWAN modules were based on 2G in 2013, 3G technology will represent more than 50% in 2018, and 4G is expected to reach almost 70% in 2022. While industrial smart meters dominate M2M today, by 2018 the consumer market will overtake the industrial segment, and about 500 million M2M units per year are expected to need both cellular and positioning technologies by 2020, specifically in the following market segments: asset tracking for containers, trucks, cars, appliances, kids, pets or bikes, plus smartwatches, smart grid, smart farms and smart plants.

If you take a look at the many cellular and positioning protocols in the picture above, you realize that you need to select a solution providing maximum flexibility by enabling multiple standards support. Moreover, this solution should be able to efficiently handle multiple standards concurrently. It should be low power (mobile M2M) and offer a cost low enough (we often hear about a $5 limit per module) to allow wide deployment. All these requirements push for using a DSP-based platform (flexibility) so the designer can optimize HW-SW partitioning (low cost and power efficiency) while handling multiple standards concurrently. That’s the description of CEVA’s Dragonfly reference platform, which is an SoC prototyping board with radios, DSP and FPGA, paired with a SW development platform including SW tools, RTOS, communication libraries and a BSP.
CEVA has run a demo during MWC 2016, using Dragonfly to control drone navigation over LTE Cat-0, using satellite emulation for GPS:

As we can see in the picture, developing an asset tracker system (with the drone as the asset in this case) requires the existence of an ecosystem. That’s why the two presentations, from Galileo Satellite Navigation Ltd. on their GNSS solution and from Nestwave on their indoor/outdoor positioning, were very welcome.

I think satellite positioning technology is well-known today, even if it’s still a challenge to provide accurate positioning at low power cost and with acceptable complexity, so I will focus on the OTDOA technology offered by Nestwave with their CellNav positioning system. CellNav doesn’t use satellites, but is based on existing 4G antennas. The base stations transmit a few subframes, the Positioning Reference Signals (PRS); the mobile listens to these signals and computes their times of arrival. Similar to GPS, triangulation can then be performed and the mobile position determined with 5 to 30 m accuracy, expected to be improved to 1-5 m in the future.
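As a rough illustration of the principle (the positions, geometry and solver choice here are mine, not Nestwave’s), a position can be recovered from measured time differences by least-squares multilateration:

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                       # speed of light, m/s

# Hypothetical 2-D base-station positions, meters (real OTDOA uses more):
bs = np.array([[0.0, 0.0], [2000.0, 0.0], [1000.0, 1800.0]])
true_pos = np.array([600.0, 700.0])     # unknown in practice; used to fake data

# The mobile measures time *differences* of PRS arrival vs. station 0:
dist = np.linalg.norm(bs - true_pos, axis=1)
tdoa = (dist[1:] - dist[0]) / C

def residuals(p):
    r = np.linalg.norm(bs - p, axis=1)
    return (r[1:] - r[0]) / C - tdoa

fit = least_squares(residuals, x0=np.array([1000.0, 500.0]))
print(f"estimated position: {fit.x.round(1)} m")   # ~[600. 700.]
```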

In fact, it’s an extra option for geolocation, software-only and low cost. Nestwave expects CellNav to enable new applications that need low power, always-on operation, fast time to first fix and ubiquitous indoor/outdoor geolocation. A pretty elegant solution from a company created in 2014!

You will get the complete picture, including the GNSS and OTDOA computations and the hardware description of CEVA’s Dragonfly, by going HERE to attend this webinar remotely.

Eric Esteve from IPNEST


TSMC and Flex Logix?
by Daniel Nenni on 03-26-2016 at 7:00 am

There was a lot to learn at the TSMC Technical Symposium last week, in the keynotes for sure but also in the halls and exhibits. Tom Dillinger did a nice job covering the keynotes in his posts Key Takeaways from the TSMC Technology Symposium Part 1 and Part 2, but there was something interesting that many people may have missed in the exhibit hall.

As you may know, this event is invitation only, and that includes the companies who exhibit. To exhibit you must have a formal relationship with TSMC and, more importantly, with TSMC’s top customers, so it is interesting to see new companies in the exhibit hall and speculate why they are there.

The most interesting new company exhibiting this year in my opinion was Flex Logix Technologies:

FLEX LOGIX ANNOUNCES PROGRAM FOR FAST-TRACK EVALUATION AND PROTOTYPING
Reconfigurable RTL Enables One Design to Serve Varying Customer Requirements

“Architects, front-end designers and physical design teams all need to become familiar with this new technology for applications from MCU to IOT to Networking and more. Like with any technology, it is best to learn by doing and starting simple,” explained Flex Logix CEO and co-founder Geoff Tate. “This new program allows customers to fully evaluate EFLX in detail and in silicon at very low cost.”

Geoff Tate and Andy Jaros manned the booth (Andy and I worked at Virage Logic together years ago). Talking to both the CEO and the VP of sales was a great opportunity to understand the Flex Logix value proposition, so here goes:

More and more companies are trying to build flexibility into their SoC designs. The traditional approach has been to overdesign an SoC or functional block to try and anticipate all possible requirements and simply select an option: blow a fuse, spin a metal mask, or make a bond out option to “personalize” a particular chip for a customer or market application.

The theory goes that, with advanced process nodes, gates are “cheap”, so this design philosophy is easily justifiable. But what is not cheap are the mask costs, not to mention the engineering and validation costs. And there’s the cost of missing a market window if a spec changes, or if a customer decides they want to tweak a custom-built hardware accelerator because their algorithm changed, or wants to modify the pinout due to system constraints.

As market requirements and customer demands change ever more rapidly, designing SoCs with more flexibility in mind makes more and more financial sense. Even if it uses a few more “cheap” gates, a flexible design can save money on multiple tapeouts and helps keep up with changing requirements.

It requires a slightly different approach to designing chips of course and Flex Logix has the right idea with their Fast Track program to help architects and designers experiment with adding more flexibility to their projects. Additionally, the ability to have one die that can be retargeted to multiple applications improves ROI.

Further, the ability to upgrade features in the field, in system, offers the possibility of a new revenue stream: providing optional upgrades that permit better, faster operation. Often the alternative is to fall back to emulation in software, which can suck up a lot of processor bandwidth (not to mention power) that could be used elsewhere.

For more detailed information, Don Dingee is our embedded design expert and he has written about Flex Logix twice thus far. Or you can give Andy a ring, he is always good company for a coffee or lunch.

Creating a better embedded FPGA IP product

Reconfigurable redefined with embedded FPGA core IP

FLEX LOGIX, founded in March 2014, provides solutions for reconfigurable RTL in chip and system designs using embedded FPGA IP cores and software. The company’s technology platform delivers significant customer benefits by dramatically reducing design and manufacturing risks, accelerating technology roadmaps, and bringing greater flexibility to customers’ hardware. Flex Logix recently secured $7.4 million of venture capital. It is headquartered in Mountain View, California and has sales rep offices in China, Europe, Israel, Taiwan and Texas. More information can be obtained at http://www.flex-logix.com


GM in the Middle of Mobility Muddle
by Roger C. Lanctot on 03-25-2016 at 4:00 pm

General Motors has made a flurry of announcements around its Maven mobility brand for car sharing and its investment in Lyft. The latest news, first reported by re/code, is that Lyft and Maven will be rolling out a short-term rental program for Lyft drivers to use Chevy Equinoxes in Chicago later this month. The program is called Express Ride.
Continue reading “GM in the Middle of Mobility Muddle”


10nm SRAM Projections – Who will lead
by Scotten Jones on 03-25-2016 at 12:00 pm

At ISSCC this year Samsung published a paper entitled “A 10nm FinFET 128Mb SRAM with Assist Adjustment System for Power, Performance, and Area Optimization”. In the paper Samsung disclosed a high density 6T SRAM cell size of 0.040µm². I thought it would be interesting to take a look at how this cell size stacks up against the 6T SRAM cells we have seen to date, and some projections for what other companies’ 10nm 6T SRAM cell sizes might be.

            45nm      32nm/28nm        22nm/20nm       16nm/14nm
Intel       0.3460    0.1710 (32nm)    0.0920 (22nm)   0.0588* (14nm)
Samsung     0.3700    0.1490/0.1200*   NA              0.0640 (14nm)
TSMC        0.2420*   0.1270 (28nm)    0.0810* (20nm)  0.0700 (16nm)

6T SRAM cell size versus node (µm²); the leader at each node is marked with an asterisk.

Looking at this data you can see that at 45nm and 20nm TSMC led, and at 28nm Samsung led (the leader at each node is marked with an asterisk in the table). At 16nm TSMC chose to take a conservative approach and leverage their 20nm process pitches for their first FinFET process, resulting in a larger SRAM cell size than would otherwise have been expected. Intel scaled their process very aggressively and took the lead.

I have taken 6T SRAM cell size data for Intel back to 130nm, Samsung back to 90nm and TSMC back to 130nm and plotted SRAM cell size versus node. Using a power law to fit the curves, the R² values are >0.98 for Intel and TSMC and >0.97 for Samsung, clearly indicating a very good fit. Using the resulting equations, I have projected Intel and TSMC 10nm 6T SRAM cell sizes. For Intel I project a 6T SRAM cell of 0.0284µm² and for TSMC of 0.0238µm².
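As a rough cross-check of the method (using only the four table points above rather than the author’s full data set back to 130nm, so the numbers will differ somewhat from his projections):

```python
import numpy as np

# 6T SRAM cell sizes (um^2) versus node (nm), from the table above:
intel = {45: 0.3460, 32: 0.1710, 22: 0.0920, 14: 0.0588}
tsmc  = {45: 0.2420, 28: 0.1270, 20: 0.0810, 16: 0.0700}

def project(points, node):
    """Fit area = a * node**b (a straight line on log-log axes) and
    evaluate the fitted power law at the requested node."""
    x = np.log(np.array(list(points.keys()), dtype=float))
    y = np.log(np.array(list(points.values())))
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a) * node ** b

for name, pts in (("Intel", intel), ("TSMC", tsmc)):
    print(f"{name} 10nm projection: {project(pts, 10):.4f} um^2")
```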

Assuming TSMC returns to their historical SRAM trends they will once again have the smallest SRAM cell size. This may be optimistic because Intel is expected to have a smaller contacted gate pitch and minimum metal pitch than TSMC at 10nm. In fact, we expect TSMC’s 7nm process to have similar pitches to Intel’s 10nm process. We should note here that TSMC is expected to begin ramping 10nm at the end of 2016 and they are targeting the end of 2017 for a 7nm ramp. With Intel delaying 10nm to 2017 TSMC’s 7nm and Intel’s 10nm may be ramping around the same time.

The bottom line: based on my analysis, the Samsung 10nm 6T SRAM cell size looks significantly larger than what I would expect from Intel and TSMC.


Shifting Asia Electronics Production
by Bill Jewell on 03-25-2016 at 7:00 am

Japan emerged as the largest supplier of consumer electronics in the 1980s. The Japan surge was driven by lower cost labor than in the U.S. and Europe as well as innovative products from companies such as Sony, Toshiba and Panasonic (formerly Matsushita). By the 1990s much consumer electronics production shifted to South Korea with even lower cost labor and the growth of companies such as Samsung and LG (formerly Lucky Goldstar). Also in the 1990s Taiwan became a major source of production of computers and peripherals.

In the 21st century China has become the dominant source of electronics production. Initially China electronics production was primarily done by subsidiaries or partners of Japanese, South Korean, U.S. and European electronics companies seeking lower cost labor. China-based companies such as Lenovo, Haier, and Huawei have now become major producers. China will continue as the dominant electronics producer for the foreseeable future. Its population of over 1.3 billion will provide a long-term source of low cost labor. However, several Southeast Asian countries are growing their electronics industries rapidly.

The table below shows major Asian electronics production countries with labor force, literacy rate, GDP per capita, exports of computer and telecom equipment and economic freedom rank. Each country has been assigned an outlook based on our assessment at Semiconductor Intelligence.

The labor force size shows the upper limit on electronics production workers. The literacy rate suggests the education level of the labor force. The countries are ranked by GDP per capita, which is related to the labor cost in each country. Electronics exports reflect the overall electronics production of each country. The economic freedom rank connotes the ease of doing business.

Taiwan, Japan and South Korea have relatively high labor costs which makes growth in electronics manufacturing unlikely, thus they are assigned a down arrow (v) under Outlook. Malaysia’s GDP per capita is higher than most other Asian countries and its 14 million person labor force could limit growth, thus a flat (~) outlook. Thailand and Indonesia have labor costs similar to China. Thailand is limited by labor force size and questionable government stability. China’s growth rate is slowing, earning a flat (~) assessment. Indonesia is poised for growth with a labor force of 122 million people and low electronics exports, justifying its up arrow.

The Philippines, India and Vietnam have the lowest GDP per capita of the listed countries, about half of China’s. The Philippines has long been a major site for semiconductor assembly and test, but it is looking to diversify into more value-added electronics production. India has a labor force of over 500 million, but only a 71.2% literacy rate and a low economic freedom rank of 123. Electronics exports are insignificant, with most production for domestic use. India is thus assigned a question mark. Vietnam has shown strong growth (see February 2015 post), has a 55 million person labor force and low GDP per capita – earning a positive assessment (^). Vietnam has a low economic freedom rank of 131, but it is higher than China’s 144. Vietnam is following China’s example as a country with a Communist government and a capitalist economy.

The chart below shows the electronics production three-month-average change versus a year ago for select countries with available data.


China’s electronics production growth has been slowing from around 13% in early 2015 to about 10% recently. The latest data from Malaysia and India show strong growth of about 30%. India rebounded from declines of greater than 50% in late 2014 and early 2015. Vietnam’s electronics growth is around 20%. Some of the growth factors are short term, but it is a sign of strong growth in other Asian countries offsetting the slowing growth of China.

The shift in electronics production to emerging Asian countries will take several decades to play out. China’s dominance should continue over this period. However countries such as Indonesia, the Philippines and Vietnam are worth watching.


What the FBI is Saying about Connected Cars
by Roger C. Lanctot on 03-24-2016 at 4:00 pm

The U.S. Federal Bureau of Investigation (FBI), in cooperation with the U.S. Department of Transportation, put out a written public service announcement (PSA) last week detailing the agency’s concerns regarding automotive cybersecurity and its recommendations for the driving and vehicle-owning public. The PSA followed by four days the explosion of a Volkswagen Passat on Bismarckstrasse in Berlin, Germany, during the morning rush hour.

The explosion of the Passat, which killed the driver, a suspected and previously convicted organized crime figure, was believed by investigators to be linked to organized crime, not terrorism. But given the recently expressed concerns regarding automobile cybersecurity, it was impossible to ignore the juxtaposition of the two events – the explosion and the PSA.

The impossibility of ignoring the connection between the two stories was made even more obvious by one of the recommendations in the FBI’s PSA:

“In much the same way as you would not leave your personal computer or smartphone unlocked, in an unsecure location, or with someone you don’t trust, it is important that you maintain awareness of those who may have access to your vehicle.”

This very sound advice follows descriptions, in the FBI/DOT PSA, of the various ways cars can be hacked wirelessly or via devices plugged into the car, including smartphones and OBDII devices such as Progressive’s Snapshot usage-based insurance device. The message from the FBI and the DOT is clear: cars are highly vulnerable to hacking, and consumers should ensure that their software is up to date and that any outstanding recalls have been corrected.

http://tinyurl.com/jnaer5b – “Motor Vehicles Increasingly Vulnerable to Remote Exploits”

The four key warnings from the FBI include the aforementioned one to take care to prevent unauthorized access to one’s car. The other three are:

• Ensure your vehicle software is up to date.
• Be careful when making any modifications to vehicle software.
• Maintain awareness and exercise discretion when connecting third-party devices to your vehicle.

Interpreted more directly, I’d say the FBI and DOT were more or less seeking to put the kibosh on the entire OBDII aftermarket business. The PSA warns against using such devices, or at least raises questions regarding their reliability and safety. What is missing is any process for establishing that reliability – so the FBI and DOT, at least, are warning consumers very directly to stay away from these devices.

The problem is that there is nowhere to go to get a warranty on the safety of using an aftermarket device in a car. This is very bad news for insurers, such as Progressive, depending on these devices. Other than Progressive, most insurers around the world have been quickly moving away from the use of OBDII devices after recalls (e.g., the American Family device recall) and hacks have exposed their weaknesses. The FBI and DOT are also no fans of connected smartphones in cars.

But the FBI and DOT seem to also be raising questions regarding car sharing. If you share your car or use a car sharing service, how can you be sure the vehicle is safe, secure and clean? The bottom line is that you can’t.

The FBI isn’t saying not to use ZipCar or Car2Go or any of a dozen other car sharing services, but the agency is raising questions regarding the potential downsides of connected car technologies – even as it tips its hat in the PSA to the virtues of using car data to reduce fuel consumption, emissions and traffic congestion and to anticipate vehicle failures. Having read the PSA, the average consumer is going to think twice before hopping in a shared car.

The real concern regarding connected cars is the enhanced ability of thieves to use connectivity technology to steal the car or the driver’s identity, or to steal control of the car remotely. Unlike the car bombing in Berlin, no hacker has yet used remote control of a car for malicious purposes with any serious consequences. Most hacks, to date, have been white hat exploits with the hackers sharing the details of their work with the affected car maker – with a few notable, though unpublicized and exceptional, attempts by black hat hackers to blackmail car companies.

What the past two years’ worth of hacks have demonstrated is that cars are highly vulnerable and can be penetrated by determined hackers. Some hacks, such as the Skoda Wi-Fi gateway hack and the recent Nissan Leaf hack, have revealed gaping security holes largely the fault of careless developers. Hopefully the auto industry has learned some basic lessons from these exploits.

Of greater concern are the malicious hackers who have discovered the vulnerabilities of cars and for whom a ransomware attack on a car might be an appealing opportunity. It is not too much to imagine hackers requiring payment from a car owner to restore control of their vehicle.

The more immediate concern remains vehicle theft, which is currently at historic lows. A recent conversation with a senior security executive at the U.K.’s Thatcham Research Centre revealed that over the past 20 years, the steady enhancement and implementation of Thatcham’s security standards have contributed to reducing vehicle theft by 90% – but it is far from zero. The U.S. has no equivalent of Thatcham.

Vehicle theft is down in the U.S. as well, but the emergence of vehicle connectivity and its ability to create new pathways into opening up and starting cars could alter that trajectory. It seems the PSA from the FBI and DOT is just one way that the National Highway Traffic Safety Administration is seeking to keep the pressure on auto makers to take on and mitigate the current cybersecurity challenges. Presumably, the industry has gotten the message.


Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Webinar: A Tool for Process and Device Evaluation
by admin on 03-24-2016 at 12:00 pm

Not only are foundries continuing to introduce processes at new advanced nodes, they are frequently updating or adding processes at existing nodes. There are many examples that illustrate this well. TSMC has 16FF, 16FF+ and now 16FFC. They are also announcing 10nm and 7nm processes. In addition, they are going back to older nodes and adding ULP processes for IoT and RF designs.

Then there are other foundries that offer technically competitive processes. With all of this, a company contemplating the start of a design project faces a large array of choices. Operating voltages, leakage, switching speed, device characteristics and many other factors contribute to the final judgment on what process is best for a particular project. For many teams, collecting and comparing information on process trade-offs and benefits becomes a huge exercise involving scripts, lots of SPICE runs, reams of reports, spreadsheets and some amount of guesswork.

ProPlus, a leader in circuit simulation, has set about to change the way foundry processes and their PDKs are evaluated. They have announced their Model Exploration and Platform Benchmark (MEPro) product to help improve the procedure and outcomes of foundry and process selection. MEPro can also help validate libraries and make sure that design errors that eat up margins are not made.

Once a design is started, MEPro offers useful features to ensure that optimal devices are selected for circuit needs. It comes with a large number of built-in tests which can be run at the click of a button, with the output displayed in graphical form. It can also help designers plan for process corners.

ProPlus is well positioned to offer a product like this. They already offer a wide range of products dealing with semiconductor simulation, yield, and library development. They have a track record of offering high capacity, high performance solutions. ProPlus will be hosting a one-hour webinar that will go into greater detail on the capabilities and usage of MEPro for evaluating design processes.

The webinar will be held on March 31 at 11AM PDT. The presenter will be ProPlus CTO Bruce McGaughy, who will present a number of case studies and will also include a demo of MEPro. This webinar is intended for circuit design teams that are using multiple process platforms from one or more foundries. Process development and modeling engineers will also find this session very informative.

Here is the list of topics:

• Browsing advanced model libraries from all leading foundries
• Exploring device characteristics and performance to assist designs
• Evaluating process platform performance with device/circuit targets
• Benchmarking process platforms from different foundries or nodes
• Monitoring process revisions and evaluating the impact on circuit designs

To sign up for this session, use this link. This should be a very informative session.


SystemC and Adam’s Law
by Bernard Murphy on 03-24-2016 at 7:00 am

At DVCon I sat in on a series of talks on using higher-level abstraction for design, then met Adam Sherer to get his perspective on progress in bringing SystemC to the masses (Adam runs simulation-based verification products at Cadence and organized the earlier session). I have to admit I have been a SystemC skeptic (pace Gary Smith) but I came away believing they may have a path forward.

Adam has a nice characterization for what a level of abstraction needs to become successful, which he modestly calls “Adam’s law”. You need tools to develop an abstract model, you need tools to verify the model and you need a path to the next lower level of abstraction, not just for the design data but also for verification. We have this for transistors/gates to GDSII, we have it for RTL to gates, and we are getting closer for SystemC, where we already have modeling and synthesis and we are starting to see a path for verification in the emerging portable stimulus standard and interoperability of SystemC, UVM and ‘e’.

One of the problems I had always had with SystemC was the apparent magical promise of converting software algorithms into hardware. In the earlier session Frederic Doucet of Qualcomm did a great job of dispelling the magic and showing that HLS was doing three very understandable (to an RTL-head) things:

• Managing latency by letting you parallelize blocks of the algorithm (run two additions in parallel rather than one after another)
• Letting you trade off resource-sharing versus parallelism (sharing one multiplier block versus parallelizing multiple blocks)
• Making it simpler to experiment through tool options rather than explicit structure, by removing sequencing choices from the abstract model

The Portable Stimulus Working Group (PSWG) is developing a standard which aims to address the verification need. I have talked more about this in other blogs so I won’t repeat that material here. Essentially the goal is to support stimulus which can interoperate or be easily moved between virtual modeling, HLS models, RTL simulation, emulation and hardware prototyping – to the greatest extent feasible. This should ease adoption of the verification part of Adam’s law. PSWG development is still underway and expected to produce a release in early 2017.

There was a good talk from Intel on the realities of using SystemC in production design. Bob Condon made several interesting points:

• Many teams in Intel are using SystemC HLS flows as a part of production design, both for algorithm-dominated and control-dominated designs
• Compelling reasons to switch from RTL to SystemC include time pressure, a significant amount of new code, a line of sight to a derivative, and an existing starting point with some kind of C/C++ model
• Groups happiest using the approach don’t try to fool with the generated RTL and see a big compression in design and verification time. Groups most unhappy are those that do mess with the RTL. On a related note, satisfied groups felt absolute best QoR was less important (within reasonable bounds) than schedule, which was why they didn’t feel the need to tweak
• Overall, the big pluses are faster time to validated RTL and ease of modifying the code. Less prominent are retargeting technologies and sharing code between VP and verification teams

Dirk Seynhaeve of Intel/Altera gave an interesting talk. Their objective is to expand FPGA appeal to software developers who need to accelerate algorithms, obviously aiming to expand the accessible market. Software developers don’t want to learn hardware design, but they do understand how to parallelize threads on multi-processor systems. So Altera is supporting software development in C or C++ with OpenCL for parallelization. Their view is that SystemC is just too big a step for software developers.

I wrapped up with Adam on directions and drivers for HLS flows. He said we’re still early in SystemC adoption – maybe 10% of the audience in the first session raised their hands when asked if they were users. Outside the Bay Area uptake has been stronger, but valley engineers have built a lot of expertise in RTL and it will take them longer to change. Quite likely a driver will be power optimization with acceptable performance. Power is most impacted by software, but the hooks for optimization have to be in hardware – connecting the two through high-level design is a logical step.

Design sizes and compressed schedules will help. And as Moore’s law slows down, focus on improving algorithm performance is increasing. That will force more experimentation with parallelism, which will be easier in SystemC than in RTL. And he does believe that verification complexity will force more (verification) sharing across levels, again encouraging top-down approaches. Sounds like we should expect continued gradual transition.

To learn more about Cadence system-level design and verification solutions, click HERE.

More articles by Bernard…


What if this is as good as the iPhone gets?
by Don Dingee on 03-23-2016 at 4:00 pm

How do I write about Apple so well? “I think of Steve Jobs, and I take away vision and creativity.” Please recognize that is a bit tongue-in-cheek, but I do think we are at a turning point where Apple is having a very hard time moving its loyal customers toward continued upgrades, and it is forcing them into unusual compromises. Continue reading “What if this is as good as the iPhone gets?”