
How to Implement a Secure IoT system on ARMv8-M
by Daniel Payne on 04-14-2017 at 12:00 pm

This weekend my old garage door opener started to fail, so it was time to shop for a new one at The Home Depot, and much to my surprise I found that Chamberlain offered a smartphone-controlled WiFi system. Just think of that: controlling my garage door with a smartphone. But then the question arose, "What happens when a hacker tries to break into my home via the connected garage door opener?" I opted for a Genie system without the WiFi connection, just to feel a bit safer. Connected devices are only growing more prominent in our daily lives, so I wanted to hear more about this topic from ARM and attended their recent webinar, "How to Implement a Secure IoT system on ARMv8-M."

ARM has been designing processors for decades now, and has come to realize that security is best approached from a systems perspective, involving both the SoC hardware and software together. This gave birth to ARM TrustZone: hardware-based security designed into their SoCs to provide secure end points and a device root of trust. This webinar focused on the Cortex-M33 and Cortex-M23 embedded cores, shown below in purple:

Most IoT systems could use a single M33 core, although you can easily use two of these cores for greater flexibility and even power savings. Cores and peripheral IP communicate over the AHB5 interconnect. Security is controlled by mapping addresses with the Implementation Defined Attribution Unit (IDAU). The system also filters incoming memory accesses at the slave level, as shown in the system diagram:

That diagram may look a bit complex; however, ARM has bundled most of this IP, along with mbed OS and pre-built libraries, into the pre-verified CoreLink SSE-200 subsystem, saving you loads of engineering development time.

With TrustZone the system contains both trusted and untrusted domains, and each memory address is checked to determine whether or not it falls in a trusted range.
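To make that concrete, here is a minimal sketch (mine, not from the webinar) of a secure-side function exposed to the untrusted world using the ARMv8-M CMSE compiler extension; the function name and return value are hypothetical:

```c
/* Minimal sketch of an ARMv8-M secure gateway, assuming a CMSE-aware
 * toolchain (armclang or GCC, compiled with -mcmse). The function name
 * and payload are hypothetical illustrations. */
#include <arm_cmse.h>

__attribute__((cmse_nonsecure_entry))
int secure_get_device_id(void)
{
    /* Runs in the secure state. The toolchain emits an SG (secure
     * gateway) veneer in non-secure-callable memory so untrusted code
     * can call in; only the return value crosses the boundary. */
    return 0x1234;
}
```

Any attempt by untrusted code to branch into secure memory other than through such a gateway faults, which is how the trusted/untrusted sorting is enforced in hardware.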

Let's say that you like the security approach from ARM and want to get started on your next IoT project; what options are available? ARM has built a prototyping system using FPGA technology called the MPS2+, along with IoT kits that include the Cortex-M33 and Cortex-M23, plus debug support through the Keil MDK toolchain. You can also use a Fixed Virtual Platform (FVP), which uses software models for simulation.


One decision you make for your IoT device is the memory map: splitting it into secure and non-secure addresses using the Secure Attribution Unit (SAU) together with the IDAU. There are even configuration wizards that let you quickly define the start and end addresses of each region.
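As an illustration of what those wizards generate under the hood, here is a minimal bare-metal sketch of SAU programming. The register layout is architectural on ARMv8-M (normally you would use the CMSIS definitions); all region addresses below are placeholder examples that in practice come from your device's memory map:

```c
/* Minimal SAU setup sketch for ARMv8-M. Register layout follows the
 * architecture (CMSIS provides equivalent definitions); the region
 * addresses below are placeholders, not from a real device. */
#include <stdint.h>

typedef struct {
    volatile uint32_t CTRL;  /* 0xE000EDD0: SAU control           */
    volatile uint32_t TYPE;  /* 0xE000EDD4: number of regions     */
    volatile uint32_t RNR;   /* 0xE000EDD8: region number         */
    volatile uint32_t RBAR;  /* 0xE000EDDC: region base address   */
    volatile uint32_t RLAR;  /* 0xE000EDE0: region limit + flags  */
} SAU_Type;
#define SAU ((SAU_Type *)0xE000EDD0UL)

static void sau_region(uint32_t n, uint32_t base, uint32_t limit, int nsc)
{
    SAU->RNR  = n;                 /* select region n                    */
    SAU->RBAR = base  & ~0x1FUL;   /* 32-byte aligned base               */
    SAU->RLAR = (limit & ~0x1FUL)  /* 32-byte aligned limit              */
              | (nsc ? 2U : 0U)    /* bit 1: secure, non-secure callable */
              | 1U;                /* bit 0: region enable               */
}

void sau_setup(void)
{
    sau_region(0, 0x00200000, 0x003FFFE0, 0); /* non-secure flash (example) */
    sau_region(1, 0x28200000, 0x283FFFE0, 0); /* non-secure SRAM  (example) */
    sau_region(2, 0x100007E0, 0x100007FF, 1); /* NSC veneer area  (example) */
    SAU->CTRL = 1U;                           /* enable the SAU             */
}
```

Anything not covered by an enabled SAU region (and not marked non-secure by the IDAU) defaults to secure, so a safe configuration starts minimal and opens up only what the untrusted side needs.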

ARM has even created an open-source platform OS, mbed OS, just for the IoT market, already with some 200,000 developers. With the addition of this OS we now have three levels of security:

  • Lifecycle security
  • Communication security
  • Device security

Related blog – IoT Device Designers Get Help from ARMv8-M Cores

Summary
It's pretty evident that ARM has put a lot of effort into creating a family of processors, and when it comes to security they have assembled an impressive collection of cores, semiconductor IP, an SDK, a compiler, a platform and a debugger. What this means is that I can now more quickly create my secure IoT system with ARM technology, using fewer engineers, all at an affordable price.

There are even detailed virtual training courses coming up in April and May for just $99 each, providing more depth than the webinar.

Watch the archived webinar online here.


IP Traffic Control
by Bernard Murphy on 04-14-2017 at 7:00 am

From an engineering point of view, IP is all about functionality, PPA, fitness for use and track record. From a business/management point of view there are other factors, just as critical, that relate less to what the IP is and more to its correct management and business obligations. The problems have different flavors depending on whether you are primarily a supplier of IP, primarily a consumer, or working in a collaborative development. To avoid turning this into a thesis, I'll talk here just about issues relating to the design and royalty impact of consuming soft and hard IP, particularly when externally sourced.


A large modern design can contain hundreds of such IPs – CPUs, memories of various types, standard interfaces, bus fabrics, accelerators, security blocks; the list is endless. Each comes with one or more license agreements restricting allowed access and usage. These can become quite elaborate as supplier and consumer arm-wrestle to optimize their individual interests. Licenses can be time-bounded, geography-bounded, or design- and access-bounded, and there may be additional restrictions around ITAR. Moreover, new agreements are struck all the time, with new bounds that overlap or partially overlap previous agreements. The result for any one IP can be a patchwork of permissions and restrictions on who can access it, when they can access it, what they can access and for what purpose.

Multiply this by potentially hundreds of sourced IP on a design, across multiple designs active in a company, across multiple locations and you have a management nightmare. This is not stuff a design team wants to worry about but you can see how easy it would be, despite best intentions, to violate agreements.

There's another problem – IP is not static. At any given time there can be multiple versions of an IP in play, addressing fixes or perhaps offering special-purpose variants. New versions appear as a design is evolving, which leads to questions of which version you should be using when you sign off the design. These questions don't always have easy answers. The latest fix may be "more ideal" by some measures but may create other problems for system or cross-family compatibility. There may be a sanctioned "best" version at any given time, but how do you check that your design meets that expectation prior to signoff?

What about all the internally-sourced IP? For most companies these provide a lot of their product differentiation. Thanks to years of M&A/consolidation they have rich sources of diverse IP, but built to equally diverse standards and expectations which may be significant when used in different parts of the company. Version management and suitability for use in your application are as much a concern as for externally sourced IP. And you shouldn’t think that internal IP is necessarily free of rights restrictions because it is internal; there may still be restrictions inherited through assignments from prior inventors.

Finally, once you are shipping, do you have accurate information to track royalty obligations? Again this is not always easy to figure out across complex and differing license agreements unless you have a reliable system in place to support assessing payments and, if necessary, to support litigation.

Navigating these thickets of restrictions and requirements is an IS problem requiring a professional solution. Getting this stuff wrong can end in lawsuits, possibly torpedoing a design or a business, so this is no place for amateur efforts. Deployed on the collaborative 3DEXPERIENCE Platform, the Silicon Thinking Semiconductor IP Management solution delivers the enterprise-level capability companies need to manage and effectively use their IP portfolios. The solution provides easy and secure global cataloging, vetting, and search capabilities across business units and partner and supplier networks. It delivers essential IP governance tools and enables full synchronization of IP management with issue, defect and change processes. It also provides royalty tracking and management to maximize profits while minimizing litigation risk.

You can get another perspective on these management aspects of IP traffic control HERE.

Also Read

Synchronizing Collaboration

Behind the 3DEXPERIENCE for silicon

Latest Pinpoint release tackles DRC and trend lines


IP Vendors: Call for Contribution to the Design IP Report!
by Eric Esteve on 04-13-2017 at 12:00 pm

The EDA & IP industry enjoys high growth for the Design IP segment, but a detailed analysis tool is missing. IPnest will address this need in 2017, and expects the IP vendors' contribution! If we consider the results posted last March by the ESD Alliance, the EDA (and IP) industry is doing extremely well, as global revenue grew 22% in Q4 2016 compared with Q4 2015! If we zoom in on the "Design IP" category, its 22% growth rate is in line with the industry, and the most significant news is the confirmation that "Design IP" is now the largest category. The "CAE" category was by far #1 for years, until 2015, as you can see in the graphic below.

When you see the slope of the Design IP curve (in light blue) over the last 3 to 4 years, you realize that the Design IP category will stay #1 in the future. As we all need facts, I have calculated the CAGR from 2010 to 2016: the Design IP category has grown at a 16% CAGR, while the next category (CAE) has grown at a 7% CAGR. All is good for Design IP, except that the industry is missing a detailed analysis tool, like the former "Design IP Report" released by Gartner up to 2015. IPnest will launch this type of report in April 2017, covering the Design IP market by category for 2015 and 2016.
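To put those figures in perspective, the compound annual growth rate over the six-year span is

\[ \text{CAGR} = \left(\frac{R_{2016}}{R_{2010}}\right)^{1/6} - 1 \]

so a 16% CAGR means Design IP revenue grew by a factor of roughly \(1.16^6 \approx 2.4\) over the period, versus about \(1.07^6 \approx 1.5\) for CAE.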

The obvious difference between the Design IP segment and any other EDA category is the business model, based on up-front licenses and royalties (even if only certain vendors ask for royalties). The royalty part of the revenues should be clearly identified, and two main categories defined: IP License and IP Royalty.

To better understand the Design IP market dynamic, we need to segment this market into categories:

  • Microprocessor (CPU)
  • Digital Signal Processing (DSP core)
  • Graphics Processing (GPU)
  • Wired Interface IP
  • Wireless Communication IP
  • SRAM Memory Compilers (Cells/Blocks)
  • Other Memory Compilers (OTP, MTP, Flash, XRAM)
  • Physical Library (Standard cell & I/O)
  • General Purpose Analog & Mixed Signal
  • Infrastructure IP (NoC, AMBA)
  • Miscellaneous Digital IP
  • Others…

You can download the pdf version of the Excel spreadsheet at the bottom of this article
… or contact me: eric.esteve@ip-nest.com

If we only consider the large, well-known IP vendors like ARM, Imagination, CEVA, Synopsys, Cadence or Rambus to measure the Design IP market, we will probably reach 80% of the effective market size. But the remaining 20% is made up of a multitude of companies, some of them very innovative, maybe designing the next big function that all the chip makers will integrate tomorrow. We need IP vendors to contribute by sharing their revenues, and we need all of them to participate!

Like the EDA market, the IP market's history has been marked by acquisitions. Back in 2004, the acquisition of Artisan by ARM for almost $1 billion came like a thunderclap (Artisan revenue was about $100 million). When you look at the way Synopsys has built their IP portfolio, it was through successive acquisitions: InSilicon, Cascade (PCI Express controller), Mosaid (DDRn memory controller), the Analog Business Group of MIPS Technologies and, the largest, Virage Logic in 2010 for $315 million (logic libraries and memory compilers). The "Design IP Report" can also be an efficient tool for small IP vendors to gain visibility, and for the large ones to complement their portfolios through an acquisition…

Like EDA, the IP market is not monolithic, but made of various IP categories. Each category follows its own market dynamic, and if you want to build an accurate IP market forecast, you have to consider each category individually, project its specific evolution, and finally consolidate to calculate the global IP market forecast. This approach works reasonably well. For example, for the Interface IP segment, made of various protocols (PCIe, USB, MIPI, and many more), IPnest started building a 5-year forecast in 2009 and has done so every year since. We could measure the difference between forecast and actual results for the first time in 2014, and I am proud to say that the forecast was accurate within +/- 5%; this error margin has stayed the same every year.

If you, as an IP vendor, think that you would benefit from an accurate report (the "Design IP Report") and expect to see an accurate forecast of the IP market, you need to contribute and share your IP revenues!

Eric Esteve from IPnest

Don't miss the "IP Paradox" panel at DAC 2017, organized by Eric Esteve and moderated by Dan Nenni:

The IP Paradox: Growing Business Despite Consolidations


SPIE 2017 ASML and Cadence EUV impact on place and route
by Scotten Jones on 04-13-2017 at 7:00 am

As feature sizes have shrunk, the semiconductor industry has moved from simple, single-exposure lithography solutions to increasingly complex resolution-enhancement techniques and multi-patterning. Where the design on a mask once matched the image that would be produced on the wafer, today the mask and resulting image often look completely different. In addition, the advent of multi-patterning has led to patterns being broken up into multiple colors, with each “color” being produced by a different mask and a single layer on a wafer requiring two to five masks to produce. Designs must be carefully optimized to ensure that the resulting wafer images are free of “hotspots” that can lead to low yields.

Continue reading “SPIE 2017 ASML and Cadence EUV impact on place and route”


Communication with Smart, Connected Devices and AI
by Daniel Payne on 04-12-2017 at 12:00 pm

I lived and worked in Silicon Valley for 13 years, but since 1995 I've been in the Silicon Rainforest (aka Oregon), where the world's number one semiconductor company, Intel, has a large presence along with dozens of smaller high-tech firms. In the past year I've started to attend events organized by the SEMI Pacific Northwest Chapter. On April 21st they are presenting an interesting breakfast forum entitled "The Future of Communication: from Smart & Connected Devices to Artificial Intelligence and Beyond". I'll be attending and blogging about this forum, so stay tuned for my April blog.

Here’s what to expect in this SEMI breakfast forum:

When: Friday, April 21st, 2017 starting at 7:30AM

Where: Qorvo, 2300 NE Brookwood Pkwy, Hillsboro, OR 97124

Thanks in large part to recent advances in semiconductor technology, the world is on the verge of an unprecedented volume of information exchange that promises to reshape our future. From smart cars to smart cities, to artificial intelligence and beyond, the so-called 4th industrial revolution will provide us with the means to create new methods of communication with unprecedented capability.

Please join us for the SEMI breakfast forum to hear our distinguished guest speakers explore both the technology breakthroughs required to realize this future and the potential changes that it portends. It is also a great event to network with leaders in the local community.

Agenda
07:30 – 08:00   Breakfast, Check In
08:00 – 08:10   Moderator Welcome
08:10 – 08:45   Keynote Speaker: Glen Riley, General Manager, Filter Solutions Business Unit, Qorvo
08:45 – 09:10   Claire Troadec, Activity Leader for RF Devices and Technologies, Yole Développement
09:10 – 09:35   Rob Topol, General Manager, 5G Business and Technology, Intel Corporation
09:35 – 10:00   Networking Break
10:00 – 10:15   Startup Companies CEO Panel: Moderated by Jon Maroney, Partner, Oregon Angel Fund
10:15 – 10:25   Mounir Shita, Entrepreneur, CEO & Founder, Kimera Systems
10:25 – 10:35   Eimar Boesjes, CEO, Moonshadow Mobile
10:35 – 10:45   Stephen A. Ridley, CEO/CTO, Founder, Senr.io
10:45 – 11:30   Startup Companies CEO Panel Discussion, moderated by Jon Maroney, Oregon Angel Fund, with Kimera, Moonshadow and Senr.io

Registration
Pricing through April 14th is $55 for SEMI members and $75 for non-members. Register online here.


Synchronizing Collaboration
by Bernard Murphy on 04-12-2017 at 7:00 am

Much though some of us might wish otherwise, distributed development teams are here to stay. Modern SoC design requires strength and depth in expertise in too many domains to effectively source from one site; competitive multi-national businesses have learned they can very effectively leverage remote sites by building centers of expertise to service company-wide needs. Multi-national operations aren’t going to go away, which means we need to get better at multi-site and multi-time-zone development.


Years of experience have shown that multi-site development can be effective, but it requires rather more management overhead than you might expect, and much more care in ensuring that intent is carefully communicated and cross-checked at each sync-up. The problem is never in broad expectations – it's most commonly in the details, especially around implicit assumptions we think we all share. I've done this for 20 years, so I have some experience of what can go wrong.

The most common way to synchronize understanding and flush out those implicit assumptions is crude and painful, but generally effective: lots of early-morning and late-night group meetings, generating mountains of status and update documents; do enough of it and maybe no balls get dropped. It works (mostly) but it's not very efficient (witness continued schedule overruns), pulling many people into meetings to which any one participant might contribute 10% or less of the discussion. There must be a better way – and there is, for implementation teams.

Implementation is an area that particularly lends itself to distributed development, thanks to the deep expertise needed in (among others) timing analysis, placement and power distribution network design. Distributing tasks like these between different sites is common today, especially on large designs (billions of gates). But of course, while each domain is specialized, these objectives are very interdependent. Getting and keeping all these teams on the same page the old-fashioned way is what spawns all the meetings, PowerPoints and spreadsheets.

Which is kind of ironic. In this 21st-century Web-based, instant-access world, we're building the very latest in electronic systems using 20th-century management methods. Pinpoint from Consensia aims to change that, particularly in implementation management. You may have heard about this tool when it was first developed by Tuscany Design; the organization is now part of the Dassault ENOVIA PLM group. They have an impressive list of customers, though the only one I can find publicly cited is Qualcomm, who have been using the tool for many years (an impressive reference in its own right).


Pinpoint isn’t replacing any of your favorite/process-of-record tools for implementation. Each of those continues to play its full role in whichever design center-of-expertise has responsibility for that function. What Pinpoint provides is effectively a real-time consolidated web-based view of results and status across a variety of disciplines. This starts with a dashboard view across all monitored projects. You can drill down into a project to see progress on metrics and trends by run versions, across covered analyses.

This alone provides an important sanity check and management tool for how the design is progressing. We all know that the average project consumes 30-40% of (actual) schedule at around 90% complete (funny how that happens). Much of that time is spent diagnosing problems and negotiating suggested fixes, many of which require tradeoffs across domains. The first synchronization time-saver is that you can all look at the Pinpoint status page without needing to first build presentations; which blocks are in good shape and which are struggling is immediately obvious. In fact, if your block is doing well, you may not need to turn up to the meeting at all, a second time-saver for at least some of you. Not that you don’t love extra meetings.


All that’s great to reduce management overhead, but can it help get the job done as well? Yes it can. From the dashboard reports, block teams can drill down from a current or earlier analysis to connectivity-aware layout views overlaid with IR-drop heat maps and critical paths. And they can filter paths to display, based on all the usual criteria. But instead of one tool expert sharing screens from a process-of-record tool with others who aren’t expert in that tool, all teams with an investment in the problem can look at and experiment with the data, before, during and after the meeting.

Now you have a basis for a productive conference call between smaller hands-on teams. You're all seeing the same thing, and you can debate in real time how to triage the analysis results and what to do next. You can look back at previous runs to see if a suggested fix made the problem better or worse. You converge faster because you're all working from the same page (literally). You're synchronizing in real time on fixing problems, without needing to convene a larger meeting.

If you're at a small company, all working around the same table, this probably isn't for you. But if you're building big designs across multiple time zones, think for a moment about why Qualcomm and other big companies are using this software. For the managers, it saves time and money (~$1M in allocated headcount cost on one project, just by pulling in the release date); for the workers, it reduces time spent in soul-sucking meetings and lets you spend more time wrapping up your part of the design and quality time with your family. You can learn more from this Consensia Webinar.

Also Read

Behind the 3DEXPERIENCE for silicon

Latest Pinpoint release tackles DRC and trend lines

Sustainability, Semiconductor Companies and Software Companies


Calibre Can Calculate Chip Yields Correlated to Compromised SRAM Cells
by Tom Simon on 04-11-2017 at 12:00 pm

It seems like I have written a lot about SRAM lately. Let's face it, SRAM is important – it often represents a large percentage of the area on SoCs. As such, SRAM yield plays a major role in determining overall chip yield. SRAM is vulnerable to defect-related failures, which, unlike variation effects, are not Gaussian in nature. Fabrication defects are discrete, random events, and as a consequence they follow Poisson distributions. So modeling them is distinctly different from modeling effects like process tilt or variation. While modeling is important for other parts of the design too, it is especially important for SRAM: if a failure is likely, replacement SRAM units can be allocated to serve in its stead.

But how much redundancy should be provisioned? If none is provided, a single failure will render the chip useless; this in effect doubles the cost of the part, since a new part is needed to replace the failed device. At the other extreme, if 100% redundancy is provided, we are again looking at nearly double the cost per part. So where is the happy medium?

The rate of failures depends on what happens in so-called critical areas – regions where defects of known type and rate can cause failures. Whether a defect causes a failure depends on its size: some are too small to cause harm, others are so massive that they render the entire chip inoperative. Usually the foundry has extensive data on the kinds and sizes of defects that cause recoverable issues with SRAM.

Of course, if we are talking about a defect that causes a power-to-ground short or a malfunction in a sense amp, we are going to have a hard time managing that. For the other class of failures – a row or column failure – alternative resources can be mapped in to take their place. A great many papers have been written on techniques for implementing these replacements. However, design teams still face a judgment call as to just how much redundancy to implement.

Fortunately, Mentor offers an option in Calibre called the YieldAnalyzer tool that can help translate defect density data into yield projections for a specific design. It starts by taking the defect density information for each layer to calculate the average number of failures. Calibre YieldAnalyzer uses yield models to then calculate yield. There are special cases, such as vias, where a single defect may not alter connectivity due to the large number of duplicate elements in a structure like a via array.

Calibre YieldAnalyzer must also be aware of the specific defects associated with row or column failures for each memory block. This is usually layer dependent, and is specified in a configuration file. The tool uses information on available repair resources. Of course, these resources are also subject to failures, so a second order calculation is needed to determine the availability of the actually functioning repair resources.
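To see the shape of that first-order calculation, here is a back-of-the-envelope sketch (my illustration, not Mentor's algorithm). Under a Poisson model, a block whose expected count of repairable failures is lambda survives when the number of failures is at most the spare count, i.e. the cumulative Poisson probability:

```c
/* Back-of-the-envelope sketch: Poisson yield of a repairable SRAM block.
 * lambda = critical area x defect density = expected repairable failures.
 * The block yields if the number of failures is at most the spare count.
 * Values are illustrative, not from the article. Link with -lm. */
#include <math.h>
#include <stdio.h>

static double yield_with_repair(double lambda, int spares)
{
    double term = exp(-lambda);   /* Poisson P(k = 0)                  */
    double sum  = term;
    for (int k = 1; k <= spares; k++) {
        term *= lambda / k;       /* lambda^k * e^-lambda / k!         */
        sum  += term;             /* accumulate P(failures <= spares)  */
    }
    return sum;
}

int main(void)
{
    double lambda = 0.5;          /* example expected failures per block */
    for (int spares = 0; spares <= 4; spares++)
        printf("spares=%d  yield=%.4f\n", spares,
               yield_with_repair(lambda, spares));
    return 0;
}
```

Sweeping the spare count like this is exactly the kind of what-if analysis that locates the happy medium; the tool's second-order refinement additionally discounts spares that are themselves defective.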

Because of how Calibre YieldAnalyzer works, it is possible to easily perform what-if analysis to zero in on the optimal amount of repair resources. As mentioned at the outset, due to the large area of SRAM and the expense of adding repair resources, it is desirable to find the optimal balance between too many and too few.

It's easy to think of Calibre as a rule-checking program; however, its capabilities have expanded well into the area of DFM. Helping to optimize repair resources goes way beyond physical checking and encompasses sophisticated statistical analysis. Mentor has a white paper on their website that goes into much more detail about the process and algorithms used to produce these results.


SPIE 2017: Irresistible Materials EUV Photoresist
by Scotten Jones on 04-11-2017 at 7:00 am

Irresistible Materials (IM) is a spin-out of the University of Birmingham in the United Kingdom that has been doing research on Photoresist and Spin-On Carbon hard masks for 10 years, most recently with Nano-C on chemistry development. IM has developed a unique EUV photoresist and they are now looking for partners to help bring it to commercialization.
Continue reading “SPIE 2017: Irresistible Materials EUV Photoresist”


TSMC Design Enablement Update
by Tom Dillinger on 04-10-2017 at 12:00 pm

A couple of recent SemiWiki articles reviewed highlights of the annual TSMC Technical Symposium recently held in Santa Clara (links here, here, and here). One of the captivating sessions at every symposium is the status of Design Enablement for emerging technologies, presented at this year's event by Suk Lee, Senior Director at TSMC. In the broadest sense, design enablement refers to both EDA tools and design IP, developed specifically for the new process node.

TSMC focuses on early engagement with EDA vendors, to ensure the requisite tool features for a new process node are available and qualified on a schedule that supports "early adopter" customers. As the prior SemiWiki articles have mentioned, N10 tapeouts will be ramping quickly in 2017, with N12FFC and N7 soon to follow. So it was no surprise that the EDA tool status Suk presented for these nodes was green, usually for multiple EDA vendors (e.g., 3 or 4).

The unique part of Suk's presentation is the description of the key EDA tool requirements introduced by the new process node. These offer insights into the additional complexities and design characteristics involved. Here are some of the new features that struck me as particularly interesting.

stacked vias and via pillars

There are two characteristics of each new process node that are always troublesome for designers, and for the optimization algorithms applied during physical implementation. The scaling of metal and via pitches (for the lowest metal layers) results in increased sheet and via resistance. Correspondingly, this scaling also exacerbates reliability concerns due to electromigration — this issue is magnified due to the increased local current density associated with FinFET logic circuits.

SoC designs at these new nodes need an efficient method to utilize the upper level layers in the overall metallization stack, for reduced RC delay and/or improved electromigration robustness. Suk presented two options that are being recommended for N7 — stacked vias and via pillars. Design rules enabling stacked vias are leveraged by the TSMC Mobile platform, while the expectation is that the High-Performance Computing (HPC) platform designs will need to regularly use via pillars. A via pillar is depicted in the figure below.
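The electrical intuition (my gloss, not from the presentation) is simply parallel conduction: a pillar of \(N\) vias in parallel at each layer transition gives roughly

\[ R_{\text{pillar}} \approx \frac{R_{\text{via}}}{N}, \qquad I_{\text{per via}} \approx \frac{I}{N}, \]

cutting the effective via resistance by about a factor of \(N\) while dividing the current, and hence the electromigration stress, across the \(N\) vias.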

Suk highlighted some of the unique EDA tool algorithms needed to support the prevalent use of via pillars:

  • physical synthesis, clocktree synthesis, APR

Physical implementation algorithms need to assess where via pillars are needed — there is a significant tradeoff to assess between interconnect timing improvement and route track blockage.

  • parasitic extraction, static timing analysis, EM, and I*R

The via pillar is a unique geometry. RC extraction tools need to translate this topology into a suitable model for subsequent electrical analysis (EM, I*R), specifically capturing how the current spreads throughout the pillar. EDA vendors have addressed this via pillar insertion and analysis requirement for N7 — this status is fully green.

One area that has me curious that Suk didn’t mention is the yield impact of using via pillars. Commonly, yield enhancement algorithms are exercised near the end of physical implementation, often by attempting to add redundant vias where feasible — perhaps, a via pillar insertion strategy will evolve as a new DFM/DFY option.

“cut metal” masks and coloring
Advanced process nodes have replaced traditional metal interconnect lithographic patterning with spacer-based mandrels and cuts, to realize more aggressive pitch dimensions. The drawn metal layout shapes are translated into drastically different mask implementations, involving the addition of mandrel shapes (for spacer-based damascene metal etching), "cut masks", and metal/cut decomposition color assignment (associated with multi-patterning and successive litho-etch steps). There are optimizations available to reduce the need for multi-patterning of cuts, by adjusting the cut spacing through the addition of metal extensions — the figure below illustrates a simple example.

(From: “ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology”, Han, et al., Proc. SPIE, October, 2015. Note the metal extensions added to align cuts.)

TSMC has worked with EDA vendors to optimize metal and cut mask generation, and multi-patterning decomposition. Flows impacted include physical implementation, LVS, and extraction. Suk's presentation also briefly mentioned that ECO flows needed to be updated for cut metal and metal extensions as well.

dual pitch BEOL
At the symposium, TSMC introduced an aggressive technology roadmap, including the new N12FFC offering. This technology is intended to offer a migration path for existing 16FF+/16FFC designs.

N12FFC includes an improved metal pitch on lower levels, as compared to N16. Logic blocks would be re-implemented with a 6T cell library, from TSMC’s Foundation IP for N12FFC. Other hard IP would be re-characterized, without new layout. As a result, EDA vendors need to support dual-pitch back-end-of-line (BEOL) IP pin and routing implementations, integrating both new 12FFC and existing 16FFC blocks.

Suk highlighted that the Design Enablement team at TSMC is also introducing technology model support (and qualified EDA tools) to address the reliability challenges of new process nodes, especially the more stringent targets of automotive applications — e.g., advanced electromigration analysis rules, advanced (self-heat) thermal models for local die temperature calculations, device parameter end-of-life drift due to BTI and HCI mechanisms.

The close collaboration between TSMC and the EDA tool developers is fundamental to early customer adoption for emerging technologies. Each new node introduces physical implementation and electrical analysis challenges to conquer. It will be interesting to see what new EDA tool and flow capabilities the N5 process node will require.

-chipguy


Webinar: Chip-Package-System Design for ADAS
by Bernard Murphy on 04-10-2017 at 7:00 am

When thinking of ADAS from an embedded system perspective, it is tempting to imagine that the system can be designed to some agreed margins without needing to worry too much about the details of the environment in the car and the larger environment outside it. But that's no longer practical (or acceptable) for ADAS or autonomous systems. The complexity of control challenges and environmental interference, inside the car and out (see e.g. my earlier blog on 5G), requires that modeling for design at the total system level begin well before component implementation (and perhaps even architecture) is locked down.

REGISTER HERE for the Webinar, at either 6am PDT or 1pm PDT (both April 20th)

The way to get there is through comprehensive driving-scenario simulations, conducted with a system-level behavioral model of an autonomous or semi-automated vehicle. This model includes all sensors, antennas, control systems, drive systems and the vehicle body, placed in situ in a virtual driving environment of roads, buildings, pedestrians, road signs, etc. In this simulated environment, thousands of driving scenarios can be evaluated rapidly, to test whether the vehicle's sensors, control algorithms, and drive systems perform as expected in each situation.

Sensors, antennas and electronics are the brains behind today’s intelligent Advanced Driver Assistance Systems (ADAS). Advances in integrated antenna design, image sensing and integrated circuit design are quickly transforming automotive vehicles into autonomous vehicles. These advances are also helping build cheaper, safer and more intelligent ADAS systems. As the design of these ADAS systems becomes more complex, though, design engineers must rigorously simulate multiple components and systems for functionality, reliability and safety.

About the Presenters

Larry Williams
Larry is Director of Product Management in ANSYS's Electronics Business Unit. He is responsible for the strategic direction of the company's electrical and electronics products, including the High Frequency Structure Simulator (HFSS) finite element simulator, and is an expert with over 20 years of experience in applying electromagnetic field simulation to the design of antennas, microwave components, and high-speed electronics.

Jerome Toublanc
Jerome is a Business Development Manager for ANSYS's Semiconductor Business Unit. He has over 15 years of experience in SoC power integrity and reliability challenges across a large range of technologies, from RTL to GDSII, and from chip and package to system level.

Arvind Shanmugavel
Arvind is senior director of application engineering at ANSYS.