

Analog Mixed-Signal Layout in a FinFET World
by Tom Dillinger on 03-20-2016 at 12:00 pm

The intricacies of analog IP circuit design have always required special consideration during physical layout. The need for optimum device and/or cell matching on critical circuit topologies necessitates unique layout styles. The complex lithographic design rules of current FinFET process nodes impose additional restrictions on device and interconnect implementation, and on device placement, which further complicates the analog IP layout task.

To manage some of the layout complexities, features have been added to the schematic/layout tools to record the design intent — i.e., a schematic database property (“constraint”) which can be used to assist with initial layout generation, and be checked against the final implementation. Yet, the need for additional analog IP layout automation remains a key issue.

I recently had the opportunity to review the topic of analog IP layout productivity with Bob Lefferts, Director of R&D for Mixed-Signal IP at Synopsys. Bob was passionate about the topic, and he should know — he manages the CAD team that supports over 1,000 analog IP designers and layout engineers, who deliver a breadth of IP functionality over multiple process nodes and multiple foundries.

First, a little background on analog layout design…

The implementation of analog circuits involves the judicious placement of devices — and especially, multiple device “fingers” — in a manner such that groups of devices will have matching characteristics. The goal is to reduce the sensitivity of circuit performance to mask overlay and fabrication on-chip variation, aka “OCV”. These same considerations apply to the interconnect patterns connecting matched devices (and their individual fingers).

To represent this design requirement, layout tools have been enhanced to accept constraints added to the design database. For example, schematic devices in a differential input pair can be tagged as “matched”; the corresponding device and pin layouts could have constraints added to require (see the sketch after this list):

  • a specific orientation (although most current FinFET process nodes require a single orientation for gates and lower-level metal layers)
  • relative positioning
  • min/max separation, or “bounding box” area limits
  • alignment between devices (e.g., center/edge; horizontal or vertical alignment)
  • symmetry of devices relative to an axis
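
To make the “matched” tag concrete, here is a minimal, hypothetical sketch of the kind of record such a constraint might carry. The names and fields are invented for illustration; this is not any vendor's actual constraint API:

```python
# Hypothetical constraint record for matched devices; names and fields
# are invented for illustration, not a real schematic-database schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MatchConstraint:
    devices: List[str]                    # instance names to be matched
    orientation: str = "R0"               # single allowed gate orientation
    alignment: str = "center-horizontal"  # center/edge, horizontal/vertical
    max_separation_um: float = 5.0        # bounding-box / separation limit
    symmetry_axis: Optional[str] = "vertical"

# Tag the two devices of a differential input pair as matched.
diff_pair = MatchConstraint(devices=["M1", "M2"])
print(diff_pair)
```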

A “common-centroid” layout of a diff pair has been used historically, as it reduces sensitivity to x, y, and rotational overlay tolerances.

(Example of common-centroid orientation of multiple device fingers in a matched diff pair.)
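
As a toy illustration of how such a pattern can be generated (assuming an even finger count per device; this is not production layout code), a one-dimensional common-centroid ordering is just a mirrored half-pattern:

```python
# Toy sketch: a 1-D "ABBA"-style common-centroid finger ordering for a
# matched pair, so both devices share the same centroid and first-order
# process gradients cancel. Assumes an even finger count per device.
def common_centroid(fingers_per_device: int) -> str:
    half = ("A" * (fingers_per_device // 2) +
            "B" * (fingers_per_device // 2))
    return half + half[::-1]        # mirror the half-pattern

print(common_centroid(2))   # ABBA
print(common_centroid(4))   # AABBBBAA
```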

A more sophisticated layout-assist feature allows grouping of individual cells, such that all devices in the group are bound and move together.

Bob highlighted that these typical constraints are no longer sufficient in FinFET process nodes, due to additional requirements (a toy example of the first item follows the list):

  • minimizing local mask density variation
  • assigning common multipatterning “mask color” layer designations, while simultaneously satisfying color density balance design rules
  • incorporating dummy layout data for regular litho periodicity; satisfying FinFET-on-grid and gate periodicity restrictions
  • matching layout-dependent effect (LDE) parameters in device models
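
As a toy example of the density requirement (purely illustrative; real sign-off density rules are per-layer and per-mask-color, with foundry-specified window sizes and bounds), a windowed density check might look like this:

```python
# Toy windowed-density check: slide a window across the layout and report
# shape coverage per window; a large spread between windows indicates
# large local density variation. Rectangles are (x0, y0, x1, y1) in um.
def overlap_area(rect, win):
    (x0, y0, x1, y1), (wx0, wy0, wx1, wy1) = rect, win
    return (max(0.0, min(x1, wx1) - max(x0, wx0)) *
            max(0.0, min(y1, wy1) - max(y0, wy0)))

def window_densities(rects, extent, window=10.0, step=5.0):
    densities = []
    y = 0.0
    while y + window <= extent[1]:
        x = 0.0
        while x + window <= extent[0]:
            win = (x, y, x + window, y + window)
            covered = sum(overlap_area(r, win) for r in rects)
            densities.append(covered / (window * window))
            x += step
        y += step
    return densities

d = window_densities([(0, 0, 8, 8), (12, 12, 20, 20)], extent=(20, 20))
print(min(d), max(d))  # the min/max spread is what density rules bound
```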

An additional consideration that Bob stressed was the process design-rule restrictions on device channel length and width. His team is developing design flows in the most advanced FinFET process nodes, so that qualified IP is ready with the foundries' production PDK releases. These process nodes offer very limited options for device channel length, which severely hampers custom analog design. As a result, the implementation must incorporate multiple devices in series to effectively realize a longer L value; this amplifies the complexity of generating analog layout that satisfies matching and variation-insensitivity requirements.
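
Why series stacking works, to first order: n identical series devices with a common gate conduct like a single device with n times the unit channel length. This is a rough long-channel view that ignores the intermediate-node and body effects which make the real layout problem harder:

```latex
% First-order approximation; L_unit is the (restricted) channel length
% offered by the process, n the number of series devices.
I_D \propto \frac{W}{L_{\mathrm{eff}}}, \qquad
L_{\mathrm{eff}} \approx n \cdot L_{\mathrm{unit}}
```

So a stack of four minimum-length devices approximates a channel four times longer, at the cost of the extra matching-sensitive layout described above.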

Bob also spoke about constraint-driven layout, saying, “Several unsuccessful attempts have been made where EDA tools have asked the layout/design engineer to add textual constraints so the layout effort can be automated. But layout engineers don’t think like a programmer and instead operate in a visual context. Asking a layout person to create lines and lines of textual constraints is like asking an engineer to write poetry — only a very few will be successful.”

Bob clearly drove home the need for continued improvements in analog IP layout productivity, beyond the recording of constraints. He said, “We have a very close collaboration with the Synopsys custom tools R&D development team. Our design team meets regularly with R&D, and provides input on new features. These features invariably become part of the custom layout editor, schematic editor, and simulation environment. Specifically, we have worked on a unique method to improve the layout productivity on complex FinFET device and cell layouts. First, we place devices on interconnect tracks and then deal with fins, instead of snapping devices to fins, and then trying to make the interconnect line up. We also added a level of automation such that the layout engineer can concentrate on connecting devices to match the schematic while meeting all of the design rules.”
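
A tiny numeric illustration of the grid tension behind that “tracks first” choice (the pitch values here are invented; real numbers are foundry-specific): when the routing-track pitch and the fin pitch are not simple multiples of one another, positions that satisfy both grids repeat only at their least common multiple, so snapping devices to fins first can easily leave pins off-track:

```python
import math

# Invented example pitches (nm); real values are foundry-specific.
track_pitch, fin_pitch = 64, 48

# Rows that land on both the track grid and the fin grid repeat only
# every lcm(track, fin) nm -- here, every 192 nm.
print(math.lcm(track_pitch, fin_pitch))   # 192
```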



Complex pattern of matched series/parallel devices in analog layout (from Synopsys)

At the upcoming SNUG meeting in Santa Clara, Bob will be presenting details of the productivity gains that his design team has realized, and the results of the collaboration with the tools R&D team, in the talk “FinFET IP Design Using Synopsys Latest Innovation in Custom Tools”.

If you are a Synopsys user, I would encourage you to attend SNUG Silicon Valley on March 30-31, and Bob's presentation, in particular. Here are links to the SNUG registration and schedule:

https://www.synopsys.com/Community/SNUG/Silicon%20Valley/pages/default.aspx

-chipguy



Key Takeaways from the TSMC Technology Symposium Part 1
by Tom Dillinger on 03-20-2016 at 7:00 am

TSMC recently held their annual Technology Symposium in San Jose, a full-day event with a detailed review of their semiconductor process and packaging technology roadmap, along with their risk and high-volume manufacturing production schedules.



Internet of Things Augmented Reality Applications Insights from Patents
by Alex G. Lee on 03-19-2016 at 9:00 am

US20150347850 illustrates an IoT (Internet of Things) AR (Augmented Reality) application in a smart home. A smart home IoT device communicates tracking data describing itself over a local network to a user AR device (e.g., a smartphone). The AR device can recognize the smart home IoT device in the camera view based on the tracking data. Once the smart home IoT device is identified in the camera view, the AR application can augment the camera view with additional information about the device and a control interface for it. The user can then control the smart home IoT device using the AR device.

US20150347850 illustrates an industrial IoT AR application. A machine broadcasts its status and related tracking data to a user AR device. The status includes the presence, operating status, operating features, and characteristics of the machine; the tracking data includes a physical identifier and location information for the machine. When the user AR device is in proximity to the machine, it authenticates with the machine. The AR application in the user AR device then generates directions to the machine and descriptions of the machine, and can augment the camera view with interactive virtual functions associated with the machine's functions.

US20140063064 illustrates an IoT AR application in a connected car. An AR head-up display in the connected car overlays a virtual image of surrounding environmental information on the actual view of external vehicles observed through the transparent display. The surrounding environmental information includes events occurring outside the vehicle (e.g., another vehicle's location, speed, traffic lane, and indicator-light status), background information within a predetermined distance (building information, information about other vehicles, weather, illumination), accident information for other vehicles, and traffic-condition information. The surrounding environmental information is obtained via the V2X (V2V and V2I) communication system.

US20150310667 illustrates an IoT AR system for providing contextually relevant AR content to a user. The IoT AR system extracts features of a specific object in the field of view of the user AR device's camera. It assesses the situation of the user based on information obtained from surrounding IoT devices and sensors, and then provides a context-aware visualization of the specific object based on the assessed user situation.


More articles from Alex…



How HBM Will Change SOC Design
by Tom Simon on 03-19-2016 at 7:00 am

High Bandwidth Memory (HBM) promises to do for electronic product design what high-rise buildings did for cities. Up until now, electronic circuits have suffered from the equivalent of suburban sprawl. HBM is a radical transformation of memory architecture that will have huge ripple effects on how SOC-based electronics are designed and assembled.

Instead of laying memory out horizontally, HBM vertically sandwiches memory silicon to create stacks of memory chips connected by Through-Silicon Vias (TSVs). The JEDEC JESD235 standard for HBM was adopted in October 2013. The goals of the standard were to add bandwidth, reduce area, lower energy, and increase functionality.

We’ve been hearing about stacking die and using TSVs for a while, but they were primarily the domain of large FPGA or GPU companies. 2015 was a significant year for this technology, as it started to become much more readily incorporated into new designs. Still, the main application areas are high-performance computing, graphics, and networking. Nevertheless, because of its power and area advantages, HBM will be used in applications like laptops and mobile. It’s not just for the data center.

To put the significance of HBM in perspective, consider that a single stack of HBM offers bandwidth of 128-256 gigabytes per second. One stack is approximately 5mm by 7mm. More than one stack can be used in a single package, where it can be interfaced directly to an SOC. Typical arrangements use an interposer to combine multiple HBM stacks, with their own controllers, and a large SOC, such as a GPU, into a single package.


In HBM, each die in a stack has two fully independent memory channels of 1 to 32 gigabits. A stack consists of 4 die. When 4 die are stacked with, for example, 4 gigabits per channel, they provide 4 gigabytes (8*4Gb) of storage. Each channel is completely independent and brings all of its signals to the bottom of the stack through TSVs. This accounts for 193 signals per channel, of which 128 are data.

For a memory stack offering 128GB/sec, the controller need only be clocked at 500MHz, making timing closure relatively straightforward. Power savings are around 60% for HBM1 compared to GDDR5. Additionally, GDDR5 requires high-drive-strength PCB buses that consume board real estate and add design complexity. If 4 HBM2 stacks (5mm x 7mm each) are used in a design, they would offer 1TB/sec of bandwidth; to reach the same bandwidth with DDR4 would require 40 modules. The four HBM stacks could easily be added to a single 50mm-square SIP.
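
The bandwidth and capacity figures above follow from simple arithmetic. A sketch, assuming HBM1-era numbers and a 1 Gb/s per-pin data rate (i.e., DDR at the 500MHz interface clock):

```python
# Back-of-envelope HBM stack arithmetic, using the figures in the text.
channels_per_stack = 8        # 2 channels/die x 4 die
data_bits_per_channel = 128
pin_rate_gbps = 1.0           # assumed: DDR at a 500 MHz interface clock

bandwidth_GBps = (channels_per_stack * data_bits_per_channel
                  * pin_rate_gbps / 8)
print(bandwidth_GBps)         # 128.0 GB/s per stack

gbits_per_channel = 4         # example channel density from the text
capacity_GB = channels_per_stack * gbits_per_channel / 8
print(capacity_GB)            # 4.0 GB per stack (8 * 4Gb)
```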

eSilicon has been working with HBM since 2011. In 2014 they started with limited volume using HBM1 at 28nm. In 2015 they taped out 7 test chips. They are continuing support for HBM2 with a 28nm test chip that taped out in December 2015, but their HBM capabilities are rapidly moving to 14nm and 16nm. Part of eSilicon's business model is to provide one-stop shopping for HBM-based designs by bringing together design, test, and manufacturing to deliver a final yielded product.

To help designers understand the benefits and design process for HBM, eSilicon recently hosted a seminar that brought together SK Hynix, Amkor Technology, Northwest Logic, and Avery to speak on each step of delivering an HBM-based product. For those who were not able to attend the seminar in Mountain View, eSilicon is hosting a webinar broadcast of the event on March 29, 2016, at 8AM and again at 6PM PDT. A lot of useful information was presented regarding the supply chain and design considerations for the memory die, PHY layer, HBM controller, and 2.5D design choices. This seminar is well worth watching if you care about higher bandwidth and lower power and area, among other things.



Can Qualcomm avoid repeating Motorola’s fate?
by Don Dingee on 03-18-2016 at 4:00 pm

NPR had an interesting guest this morning: Edward Luce, author of “Time to Start Thinking: America in the Age of Descent”. I’m not about to turn SemiWiki into a politics blog, but there is some precedent in the technology business. I’ve caught myself saying more than once recently that “Motorola is no longer the company I worked 14 years for.”

I started thinking about the decline of Motorola and the history of Qualcomm…



Custom IC Design Flow with OpenAccess
by Daniel Payne on 03-18-2016 at 12:00 pm

Imagine being able to use any combination of EDA vendor tools for schematic capture, SPICE circuit simulation, layout editing, place & route, DRC, LVS, and extraction. On the foundry side, how about creating just a single Process Design Kit (PDK), instead of vendor-specific kits? Well, this is the basic premise of a recent webinar from Brian Bradburn at Silvaco. The custom IC design flow with all Silvaco tools is shown below:

Foundries have long been creating vendor-specific Process Design Kits (PDKs) for all of the major EDA vendors, which requires a lot of engineering effort. To streamline this effort, several standardization initiatives have arisen.

PDK
A Process Design Kit (PDK) is just a collection of files and folders that define the front-end and back-end technology for IC design on a particular process node (a hypothetical directory sketch follows the list below). Silvaco has created about 150 PDKs so far, with a focus on low-power, analog, and power-management processes at foundries like:

  • TSMC
  • UMC
  • TowerJazz
  • X-Fab
  • GLOBALFOUNDRIES
  • ON Semiconductor
  • VIS
  • SMIC
  • Dongbu
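
As a rough picture of what such a kit contains (a hypothetical directory sketch; the actual structure varies by foundry and EDA vendor):

```
example_pdk/            # hypothetical PDK layout
  models/               # SPICE models and process corners
  symbols/              # schematic symbols for the device library
  pcells/               # parameterized layout generators
  drc/ lvs/ rcx/        # sign-off rule decks (DRC, LVS, extraction)
  techfile/             # layers, grids, via definitions
```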

iPDK
Interoperable PDK (iPDK) came from TSMC starting in 2007, and by 2009 the first 65nm iPDK was ready. The iPDK alliance, called IPL, controls the iPDK specification; members include Altera (Intel), Ciranova, Mentor Graphics, Pulsic, SpringSoft, Synopsys, and TSMC, with Xilinx and STMicroelectronics as advisors. With iPDK, the foundry and its partners spend less time on PDK development.

OpenPDK
As with much in the EDA industry, there will be multiple standards, so OpenPDK is yet another approach, this time from Si2.org, using a structured XML file and translators for the main vendor tools. Each supplier creates its own parser for the standardized exchange format. An OpenPDK can also be used to create an iPDK.

OpenAccess
Cadence created a standard way for any EDA company to read and write its IC design data for both front end and back end, calling this OpenAccess. With OpenAccess, an EDA vendor uses API calls to get IC data, which enables design-data portability.

So you can open up the same schematic in either Virtuoso from Cadence or Gateway from Silvaco. In the Silvaco front-end tools you can either use native data or quickly translate to or from OpenAccess; the choice is yours.

Issues

All of this IC design reuse sounds really promising and liberating; however, there are some issues for you to be aware of. There can be subtle differences between a Cadence PDK (using SKILL), an iPDK (Tcl), and a custom PDK (Tcl, Python, Perl).

OpenAccess is certainly a great idea, yet over time there are different versions, like DM4 in use today while DM5 is under development. Data models are not compatible between versions, so you must have the compatible version in your tool flow. Vendor translators to OpenAccess need to be maintained and kept up to date. Finally, Cadence uses SKILL code, which is not released as open source, so you need a copy of Virtuoso to update their cells.

All Silvaco Tools
To give you an idea of how many EDA and TCAD tools Silvaco has to offer, look at the following chart:

Q&A

Q: Is Silvaco part of IPL?
A: We don’t really need to join, because the existing members are pushing iPDK in the right direction.

Summary
The lofty goals of taking your existing EDA tools and using an interoperable PDK are now reality, which really allows engineering teams to choose their front-end and back-end tools based on the design size, complexity and project budget.

Webinar Archive
View the archived webinar online here, after a brief registration process.

Related Blogs



Data Security Predictions for 2016!
by Daren Klum on 03-18-2016 at 7:00 am

2016 has come, and with it some of the greatest challenges we have ever faced in the data security industry. Data breaches run rampant, encryption is dead, big security companies rake in billions in consulting fees selling fear, and today's large corporations have no option other than to shell out good money after bad on old tech that simply doesn't work (a known evil is better than a new evil).

As I look into the crystal ball I see nothing but bad things coming. When I say bad, I mean the kind that will change lives. I predicted last year was the ‘age of the hack,’ and this year will be the ‘age of the impact,’ meaning in 2016 hackers will develop new ways to monetize hacked data and turn it into BIG money. I also believe hacking will hit our grid – power, airlines, and all the ‘things’ we are starting to connect to the Internet.

Whether or not we want to admit it, America is at war with a cunning, smart, and well-financed enemy. They will start using the Internet as a weapon (they already have), and frankly we are simply not well enough equipped to thwart these efforts. The industry has moved too slowly, and sadly, as it stands today, many of our critical systems are still wide open and vulnerable to any number of attacks.

2016 will continue to see a rise in social engineering on social networks. The ‘bad guys’ will develop even more brilliant ways to gain access to your information and use your identity for their gain. We are already seeing a major rise in stolen identities, and that will continue to grow. Also, there will be a huge influx of foreigners trying to get into America, and that will fuel the false-identity market even more than what we see today.

I also see 2016 as the year one out of five people will be a victim of some form of fraud, whether insurance, payroll, tax-return, or credit-card fraud. 2016 will see a huge rise in fraud as money becomes tight in countries across the world and criminals see no other way to make money than to turn to cyber-crime.

I’m sorry I sound so negative – it’s just the harsh reality of 2016. So do you want some good news for 2016?

I believe we will finally see some tangible alternatives to the legacy technologies like encryption that have prevented us from having true security. I believe companies like Microsoft and Intel are taking security seriously and bringing in new technologies that really solve some of the pressing issues. I also see the rise of new technology like my company Secured2, which has eliminated the data-in-transit and at-rest hacking problem. I also see far better monitoring technology emerging, and a better way of understanding who is accessing data and where it's going.

Most importantly, I believe we will start seeing security built into products so the products themselves are secure and don’t require significant technical ability. Security should be invisible and just work. I see that starting to emerge in 2016.

One thing is certain – for every move we make in a positive direction the bad guys make moves as well. It’s always a challenge to stay one step ahead of a smart, well funded and relentless enemy. As it stands right now the enemy has an upper hand but the good news is times are changing.



Autonomy at Odds with Security
by Roger C. Lanctot on 03-17-2016 at 4:00 pm

It’s funny that we all now believe that Google got the automated driving ball rolling. The reality is that the government started it all with the Defense Advanced Research Projects Agency (DARPA) and its famous DARPA Grand Challenge, which consisted of three tests (in 2004, 2005 and 2007) of driverless cars in different driving environments ranging from cross country to urban.



VC Apps Tutorial at DVCon 2016
by Bernard Murphy on 03-17-2016 at 7:00 am

We might wish that all our design automation needs could be handled by pre-packaged vendor tool features available at the push of a button, but that never was and never will be the case. In the language of crass commercialism, there may be no broad market for those features, even though you consider that analysis absolutely essential.

Much of the analysis of this type is motivated by insufficient capabilities to do comprehensive checks through simulation. This is particularly the case for top-level checking on very large designs, but I have seen examples of in-house automation starting as small (ha-ha) as 20M gates. Typical tasks include checking whether all blocks are hooked up correctly (per spec requirements) or localizing latency or coverage problems in a design more quickly.

Connectivity checks are a good example of this class. For connectivity between two instances of two specific IPs, you might have a requirement that, say, ports beginning with “a” cannot connect to ports beginning with “q” (a toy version of this check appears below). This kind of checking can get progressively more elaborate. A clock pin must be driven by a clock source in the appropriate domain; it also can only be gated by a signal from an appropriate source. Reset pins have similar requirements and must be grouped accordingly into reset domains.
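
A toy, tool-independent version of that first rule (deliberately not the VC Apps API, just the shape of the check):

```python
# Toy connectivity rule: flag any net that ties an "a*" port to a "q*"
# port. A real flow would walk the elaborated design database rather
# than a hand-built dict like this one.
netlist = {
    "net1": [("u_ip0", "a_out"), ("u_ip1", "q_in")],   # violates the rule
    "net2": [("u_ip0", "b_out"), ("u_ip1", "d_in")],   # clean
}

for net, pins in netlist.items():
    a_pins = [p for p in pins if p[1].startswith("a")]
    q_pins = [p for p in pins if p[1].startswith("q")]
    if a_pins and q_pins:
        print(f"violation on {net}: {a_pins} <-> {q_pins}")
```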


This concept need not be limited to static and netlist-specific checks. You might also want to look at power intent, simulation data, performance data and formal analysis. And you might not just want to do checking/analysis – you might also want that analysis to drive design modification.

Building these kinds of capabilities starts with user-extensibility to core tools. I’m pretty familiar with this concept from my Atrenta days – we built extensibility into SpyGlass and other tools through Tcl, Perl and C interfaces. But I’m in awe of what you can do through the Synopsys VC Apps. These give you a unified interface to data for the design, simulation, coverage, power intent, formal analysis and more.

VC Apps is a programming interface, through Tcl or C, to customize Verdi. And since Verdi is the integrated debug platform in Verification Compiler, that means you can get to anything Verdi can get to. Several examples were presented:

  • General connectivity checking, also graphing completion of connectivity by different classes (I would guess clock, reset, communication, that sort of thing)
  • Looking for high latency on reads and writes to memory, then color-coding read/write-enables in the simulation waveform corresponding to those cases
  • Generating specialized power instrumentation rules by looking at both the design and power intent

Why support both Tcl and C? Good question. Tcl is great for quick one-off checks, and maybe even long-term checks which you'll make part of your regression. And Synopsys Tcl provides a lot of nice grouping and filtering features which you'd otherwise have to write yourself. But if you need to write computationally intensive applications and you're comfortable in C or C++, you may find the C API is faster. Either way, you can access a lot of capability.

You can learn more about VC Apps HERE.

More articles by Bernard…