
Synopsys Earnings Call Q1 2015

by Paul McLellan on 02-20-2015 at 7:00 am

Synopsys announced their results yesterday. Their fiscal 2014 ended back in the fall, so this was the end of their fiscal first quarter of 2015. On the call were Aart de Geus, one of Synopsys’s two co-CEOs, the other being Chi-Foon Chan; and Trac Pham, the new CFO, on his first earnings call.

Synopsys’s results were good. Revenue was $542M, comfortably above a $2B/year run rate, and non-GAAP EPS was $0.80. They also raised future guidance and see the environment as solid.

But as usual my interest is not so much in short-term financial measures as in discerning longer-term trends in things like process, foundry availability, sea-changes in EDA methodology and so forth.

“The number of active FinFET designs and tapeouts to date grew nearly 15% in just the last quarter, to almost 200. The breadth of our FinFET proven tools and IP gives us a notable competitive advantage, as evidenced by Synopsys being relied on for approximately 95% of these designs.”

I think “relied on” is just a wiggle word meaning that they used a lot of Synopsys tools, but since they probably also used Virtuoso and Calibre, I think Cadence and Mentor could say they were “relied on” too. Also, designs and tapeouts “to date” grew to 200, not just last quarter.

“We’ve taped out more than 30 FinFET chips.” So with the previous bullet, that means there are 170 FinFET designs in progress, around 30 of which started last quarter.

“We’re engaged in numerous 10 nanometer partnerships with early adopters”. “Through our TCAD technology, we’re already collaborating with silicon providers and research consortia such as imec on 5 nanometer and 7 nanometer.” One key question is whether FinFETs will work at 7nm or whether we will need to go to gate-all-around or some other technology.

“Our flagship VCS functional verification product is the primary simulator for 80% of advanced designs.”
That is a big percentage, given that both Cadence and Mentor also have credible offerings in the same space.

“Synopsys is the number one supplier of interface, analog, memory, and physical semiconductor IP.” In fact they are the #2 supplier of IP behind ARM.

“Our HAPS FPGA-based prototyping solution does just that, and has proven itself in the marketplace. Q1 was its highest revenue quarter ever, with more than 5,000 HAPS systems installed at customers today.”
Later Aart said that this was being driven by the needs of software development. “The challenge with that is of course that the software guys would like to start modeling and trying out their software before the chips are ready.”

There were a couple of questions about IC Compiler II which Aart characterized as growing market share. But Cadence also talked about digital design as an area where they were investing additional resources and also growing share. Aart said that some of this is just the Lake Wobegon effect “EDA is the industry where all the children are always above average, and all the share gains are above average.”

“The Coverity integration of infrastructure and sales has gone well, and our initial financial expectations are on track. We saw 32 new logos in the quarter, and executed an important agreement with a large, U.S. energy company.” Coverity is used for analysis of software especially in safety and security critical applications. Analysts reckon this area is growing at around 20% per year. In the questions Aart said they were on-track to be profitable in the second half of this year and over $100M in 2016.

“One customer accounted for over 10% of revenue”.
Everyone knows it is Intel. It is a big number: 10% of last quarter’s revenue is over $54M.

“We ended Q1 with approximately 9,300 employees, with more than one-third in lower-cost geographies.” So that’s over 3,100 employees in India, China and other similar countries. “You can see that the headcount did decrease from Q4 to Q1. A large portion of that was due to the voluntary retirement program and the small layoff we had, but also the delayed hiring.”

There is a lovely transcription error in one of the questions which talks about moving from the “plainer world to FinFETs”. I think I’m going to start calling “planar” transistors “plainer” from now on. Those FinFETs are so exciting for EDA.


FinFET Designs Need Early Reliability Analysis

by Pawan Fangaria on 02-19-2015 at 9:30 pm

In a world with mobile and IoT devices driven by ultra-low-power, high-performance and small-footprint transistors, FinFET-based designs are ideal. FinFETs provide high current drive, low leakage and high device density. However, a FinFET transistor is more exposed to thermal issues, electromigration (EM), and electrostatic discharge (ESD) than a planar FET. The higher current in a FinFET transistor leads to local self-heating and a significant increase in substrate temperature. Since the active area in a FinFET is covered by field oxide on three sides, the generated heat is trapped inside. The heat slowly dissipates towards the substrate, raising the substrate temperature. This can lead to a domino effect in interconnected systems. With the high current drive capability of FinFETs, the overall current density in metal interconnects also increases, raising the temperature across the chip. An increase in temperature of 25°C can degrade the life of a device by 3x to 5x.
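The 3x-5x figure is consistent with the Arrhenius-type acceleration used in standard electromigration lifetime models. As a rough sketch (my own illustration, not from the article; the 0.8 eV activation energy and the temperatures are typical textbook assumptions, not foundry data), Black's equation gives:

```python
import math

# Sketch of Black's equation for electromigration lifetime:
#   MTTF = A * J^-n * exp(Ea / kT)
# At a fixed current density J, the lifetime ratio between two
# temperatures depends only on the Arrhenius term. Ea = 0.8 eV is a
# typical textbook activation energy, not a foundry figure.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def em_mttf_ratio(t_cool_c, t_hot_c, ea_ev=0.8):
    """Lifetime at t_cool_c divided by lifetime at t_hot_c (same J)."""
    t_cool = t_cool_c + 273.15  # convert Celsius to Kelvin
    t_hot = t_hot_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_cool - 1.0 / t_hot))

# A 25 C substrate-temperature rise, e.g. from 105 C to 130 C:
print(f"Degradation factor: {em_mttf_ratio(105, 130):.1f}x")  # ~4.6x
```

With these assumed numbers, a 25°C rise cuts the modeled lifetime by roughly 4.6x, squarely inside the quoted 3x-5x band.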

Similarly, with technology scaling the margin between the nominal voltage and the breakdown voltage of a device is significantly reduced. This leaves a very thin operating window for ESD. Also, a FinFET device has very poor snap-back characteristics. Read “Full Chip ESD Sign-off – Necessary” for more details about ESD in devices. Interconnects can be equally vulnerable to large current crowding during an ESD event.
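To make the shrinking window concrete, here is a toy calculation (all voltages are hypothetical illustrations of the trend, not measured values for any process):

```python
# Toy numbers (hypothetical, not from any process datasheet): the ESD
# design window is the gap between the normal operating voltage and the
# voltage at which the device breaks down. Scaling shrinks both
# voltages, and the window shrinks with them.

def esd_window(v_breakdown, v_nominal):
    """Voltage headroom available to an ESD protection clamp."""
    return v_breakdown - v_nominal

older_node = esd_window(v_breakdown=5.0, v_nominal=1.8)     # planar-era
advanced_node = esd_window(v_breakdown=1.6, v_nominal=0.8)  # FinFET-era
print(f"Window: {older_node:.1f} V -> {advanced_node:.1f} V")
```

The protection circuitry must clamp an ESD pulse inside that ever-smaller window, which is why sign-off-quality analysis matters more at advanced nodes.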

Because FinFET devices are more prone to such effects, which expose them to both short-term and long-term risks of failure, an SoC design on a FinFET technology node cannot leave EM and ESD sign-off to the end of the design cycle. It’s advisable to do thermal, EM and ESD analyses of a design as it progresses from the very early stages until completion.

RedHawk from ANSYS provides a thermal-aware EM analysis platform that can be used as the design progresses. Power and signal EM analyses can be performed at non-uniform temperatures for different metal layers. Temperature profiles generated from ANSYS Sentinel-TI can be annotated onto the RedHawk layout to re-compute the true thermal-aware EM violations. This capability is perfect for FinFET-based designs, which exhibit large variations of temperature across the chip. A detailed description of how Sentinel-TI utilizes the RedHawk-created CTM (Chip Thermal Model) and analyzes chip-package thermal impact due to leakage and self-heat is provided in a technical paper at the TechOnline website. It also describes ANSYS Icepak, which can be used for system-level thermal analysis.

ANSYS PathFinder provides an ESD planning, verification and sign-off solution for full-chip SoCs as well as IP. It utilizes a simulation-based methodology that accurately identifies current density issues and appropriately places diodes and clamps to resolve current bottlenecks during an ESD event. Accurate ESD device modeling, flexibility to handle different scenarios and a user-friendly debug environment help designers find root causes of design weaknesses and take appropriate action. Read the technical paper for further details on ESD analysis and handling.

The paper also describes different approaches for different types of IP such as standard cell libraries, analog and mixed-signal, I/Os, sensors, PMICs, memories, and so on. For example, a vectorless approach that exercises all nets with accurate switching behavior is best suited for comprehensive EM coverage of power and signal nets inside standard cells. Similarly, ANSYS Totem can be used for complex IP such as high-speed I/O, image sensors, and so on.

Today, FinFET nodes are entering the mainstream of IC manufacturing, which underscores the acute need for reliability analysis to become an integral part of the design flow. To meet aggressive time-to-market windows, reliability analysis must start as early as possible in the design flow and continue through sign-off at the end for faster design closure.


Mentor and ASSET Intertech Do a DFT World Tour

by Beth Martin on 02-19-2015 at 1:01 pm

The Mentor Graphics test folks and ASSET Intertech have teamed up to provide a series of free DFT seminars in the US, Europe, and Asia. The first one is in Austin, TX on February 19, 2015, and the last is in Tokyo on April 24. Here is the full list of locations and dates.

The morning session covers IJTAG. The new IEEE 1687 Internal JTAG (IJTAG) standard is changing the way the industry validates, tests and debugs chips and circuit boards. IJTAG-based methods are more cost-effective, more accurate, and faster. IJTAG’s software-driven tests and validation routines are initiated from instruments embedded inside chips, providing key benefits at the silicon or board level depending on the type of problems you are trying to solve. This seminar will highlight the synergy between Mentor and ASSET tools and how the IJTAG ecosystem they provide will accelerate adoption of this technology. Don’t miss the chance to learn how to tap into this useful IP.

Related — IJTAG was recently ratified by the IEEE-SA standards board. It’s a bouncing baby IEEE standard!

Who should go?

  • DFx Engineers who need to insert the IJTAG networks and gain the benefits of accessing IP within the silicon.
  • Board Designers who want to gain the benefit of enhanced board validation and test features accessed by IJTAG.
  • Test Managers who want to improve their overall test process and resolve test challenges that cannot be addressed with current test technologies.

Following a delicious lunch, the afternoon seminar is all about the next big thing in test compression—EDT Test Points. Embedded test compression was commercially introduced over a decade ago and has scaled to well beyond the 100X range envisioned when it was first introduced. However, growing gate counts enabled by new technology nodes as well as new fault models targeting defects within standard cells are driving the need for even greater compression levels. This session will begin with a review of leading-edge test compression features and techniques and will then introduce and focus on an exciting new technology, called EDT Test Points, which has been developed specifically to work with embedded compression to further reduce pattern volume for compressed patterns. Numerous customer beta engagements have shown that EDT Test Points can reduce compressed pattern counts on an average by a multiplicative factor of 2-4X, without affecting test coverage, and even for designs with the most aggressive embedded compression configurations.

Related — Daniel Payne recently wrote a very good article on EDT Test Points, More Test Points are Better

Who Should Attend?

  • Designers, DFT engineers, and test consultants involved with creating testable ICs and producing the manufacturing test sets
  • Product engineers responsible for manufacturing test of ICs
  • Test managers looking to minimize manufacturing test costs while maintaining or improving test quality

Register now for an ASSET InterTech and Mentor Graphics DFT Technology Seminar near you!


Earnings Calls: Behind the Scenes

by Paul McLellan on 02-19-2015 at 7:00 am

Last weekend I wrote about the Applied Materials earnings call. And over the last couple of years I’ve written about lots of other earnings calls. Most people have never been on an earnings call, I mean in the conference room where the call is being conducted, not just listening. So I thought it might be interesting to describe how it actually goes down.

When I was at Cadence I was part of the investor relations team in that I was the technical person that they would arrange meetings with if an analyst came by and wanted to discuss technology. The main investor relations people could talk about the company finances but knew little about technology. I was the house-trained technologist who could be trusted not to pre-announce a product or say something that I wasn’t meant to. In fact the analysts actually knew very little about technology in most cases, and were out of their depth if I went too deep into anything. I think they just wanted to try and assess whether we were ahead of Synopsys by looking someone (me) in the eye, and also learn a few new buzzwords so they could sound smart when they asked questions on earnings calls.

You generally only hear 3 people talk on the earnings call. The head of investor relations who introduces the call and reads the safe-harbor statement. The CEO of the company who usually goes next and gives some color to the quarter with the headline financial results, which all the analysts already know since the press release went out 30 minutes prior to the call. Finally the CFO who will give more detail of the finances, cash-flow, capital investment, headcount and so on. Then the interesting part begins when the CEO and CFO have to answer questions.

So you might assume that there are just 3 people in the boardroom when the call takes place. Actually, at least when I was at Cadence, we would have another half-dozen or more people. Another investor relations person learning the ropes, maybe someone else from finance. Since the CEO Ray Bingham was from finance and not technology we would have several of us on hand to handle technical questions. We didn’t actually answer them, of course, we trained Ray so he could answer them. Our job was to make him look good. One challenge was that the boardroom had a top-of-the-line phone system with microphones hanging from the ceiling so that on a call anyone in the room could speak and be picked up. On an earnings call this was a disadvantage, since it meant nobody in the room could speak without being heard, however quietly.

The CEO and CFO’s statements have been word-smithed to death over the preceding few days and so provide a very controlled perspective of the company’s business. The interesting stuff happens in the questions when there is no script. If it was a financial question, Ray, or Bill Porter the CFO, would answer. If it was technical or product-related then Ray would stall for time for a few seconds while we wrote talking points on a white-board, and he would then take those bullet points and run with them. That way we made it look like he knew a lot more about the underlying technology than he really did.

With Regulation-FD (fair disclosure) anyone has the right to listen to an earnings call. This regulation was passed to stop the practice prior to 2000 of giving market-moving information to a select few (typically large institutional investors) before it became generally public. In those days, you wouldn’t be able to get on a conference call if you weren’t a professional investor or analyst.

So now even you can get on the call. But don’t expect to get called on to ask questions; that is limited to analysts who already have a relationship with the company, since their opinion has a broad reach (aka affects the stock price). I prefer to read the transcripts rather than listen. I usually skim the CEO and CFO’s statements since they are not going to contain any surprises. It is the question and answer session where interesting stuff gets conveyed and maybe something off-message gets said. I’m not interested in the financial stuff in general so I skip questions about next quarter’s cash-flow or tax-rate. The questions about product are always interesting. Fabless companies and manufacturing equipment companies often let out details about foundries that the foundries themselves do not. After all, if a volume manufacturing ramp pushes out, these are the first people to know, and it is often material to their business so they cannot stay silent.

A good place for listening to recordings of calls or reading transcripts is SeekingAlpha.


Mentor shows post-PC industrial device approach

by Don Dingee on 02-18-2015 at 9:00 pm

The term “human machine interface” originated from the factory floor. In the context of HMI, machine refers not to the computer, but to a machine tool or other instrument the computer was attached to. For decades, if an HMI was needed, it was implemented on a PC or single-board computer running Microsoft Windows. Real-time processing often came from dedicated processors running an RTOS, or microcontrollers running code on bare metal, elsewhere in the system.

Each side wanted what the other had. The PC guys tried to incorporate elements of real-time control, using virtualization techniques to guest an RTOS on Windows. The real-time guys tried to add graphics capability. In some situations, these approaches worked, but the integration was somewhat fragile. This was especially true when integrating graphics on an RTOS. The GPU market on Windows PCs moves so quickly, it was hard to keep from being eaten by obsolescence on a unique chip and driver with limited support.

Speaking of fragile, the evolution of Windows caused problems. In the age of Windows NT, things were pretty good: it integrated with enterprise networks and management tools, it was stable, and developers liked it. Windows 7 was a hot mess of instability, resulting in the Windows Embedded Standard 7 fork to tailor out unneeded pieces. Windows 8 brought issues with a new presentation layer and changes to administration. As a result, many HMI applications tried to stay on Windows NT until the bitter end, when support recently expired. A few went the Windows Embedded Compact route, trying to capture near-real-time needs. Again, some of that worked, but some are still looking for a more modern solution.

This is the post-PC era. A single multicore SoC can now handle what it used to take several boards worth of computers to do. Cores can be dedicated to real-time, or networking, or user interface, or a thread-optimized approach can be used. High performance mobile GPUs can handle most HMI needs. A powerful device can be built around a single SoC.

What operating system should run on a multicore SoC in industrial automation? Linux? Certainly a DIY option, but it doesn’t exactly handle real-time control all by itself. Android? An apps-based strategy might be cool, but again stability and robustness are a question. RTOS? Super for control, but IT guys and OT guys don’t see eye to eye on deployment. And, what is the right graphics approach? How is connectivity handled?

Just as multicore SoCs are heterogeneous, what best fits industrial automation is a heterogeneous OS that blends all these environments into a single framework.


Mentor Embedded has created that industrial automation framework. They have taken all their knowledge about multicore SoC software development and debug, combined it with their knowledge on multicore OS virtualization, added their RTOS and Linux and Android experience, and pulled graphics, safety, and connectivity from the industrial ecosystem. We’ve covered the Mentor Sourcery software development tools recently, but several new pieces in the industrial automation context merit discussion.

First is their selection of Qt. There is no better choice for a cross-platform, open-source user interface development framework. Many developers are already familiar with it from the mobile apps space. It natively supports OpenGL to run on a mobile GPU, or can use the Qt Quick 2D Renderer plugin. It has charting and data visualization capability, a pre-packaged virtual keyboard, and a library of common controls like gauges, buttons, dials, and other user input and display items.

Image courtesy Mentor Embedded and Digia plc
Qt is a registered trademark of Digia plc and/or its subsidiaries

The next news here is that Nucleus RTOS is now IEC 61508 certified for safety-critical use. Also, Mentor has obtained Wurldtech Achilles Communication Certification, a cybersecurity specification that is becoming a checklist item for industrial control.

Another interesting area is the top of the stack. Mentor has teamed with Softing to get much of the connectivity, such as Fieldbus, Ethernet/IP, and an OPC UA toolkit. The thing with many industrial automation applications is they enter a brownfield, with legacy protocols already deployed. Having legacy interfaces side by side with modern industrial IoT wired and wireless capability is a big plus.

Finally, there is the enterprise integration aspect. Mentor has worked with Icon Labs on the Floodgate family of solutions. Floodgate Defender provides an embedded firewall with stateful packet inspection, rules- and threshold-based filtering to help secure networks. Icon Labs is also integrating with McAfee ePO, providing policy-based endpoint management.

As Mentor demonstrated at the recent 2015 ARC Industry Forum, this all rolls together nicely:


The combination of a multicore SoC with this Mentor Embedded industrial automation software solution allows small, safe, and highly functional industrial devices to be built. Designers creating SoCs for industrial automation environments should consider this suite of software when evaluating designs, rather than just verifying under Linux. Mentor Embedded has the tools and knowledge to assist designers in constructing their solutions.


MEMS Require 3D Field Solver for Accurate Cap Values

by Tom Simon on 02-18-2015 at 9:00 am

MEMS devices have become extremely important and common. Freescale last year reported its combined MEMS shipments exceeded 2 billion units. If we just examine how many accelerometers we each probably own today, it is easy to see why the market for these products is growing so rapidly. The first and most obvious device is our cell phone. Another growing area is for hard drive protection using fall sensors in laptop computers. If you have a sports tracker, you can add another to the list. The earliest use of these devices was for air bag crash sensors. Also a lot of cars use them for traction and antiskid control.

Already we have a handful on our list of devices we own that use them. I have read about smart meters using them to detect tampering. Many of us have video game controllers with motion or gesture controls. The list goes on.


Actually it’s understandable how there can be billions of units already installed, with the need for more on the way. MEMS technology has opened the doors for many new applications for devices that interact with the real world. To make this possible there have been advances in fabrication and design technology.

A key element in the success of MEMS is design analysis and verification. In this respect they are not unlike semiconductors. MEMS stands for micro-electro-mechanical systems, and the electrical properties of these systems need to be well understood before a successful working design is possible.


In classical semiconductors most people are familiar with finger caps, or MOM caps as they are sometimes called. Modern MEMS accelerometers rely on a similar structure, in which sidewall capacitance changes as one set of plates moves relative to the other during acceleration. The moving plates are connected via ‘flexible’ attachments so they behave somewhat like a weight on a spring. Usually there are differential fixed plates on each side of the moving mass, so that as it moves in one direction one set sees increased capacitance and the other sees reduced capacitance.
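A toy parallel-plate model (my own illustration; the geometry numbers are hypothetical) shows how the differential arrangement converts displacement into a capacitance difference:

```python
# Differential capacitive sensing in a MEMS accelerometer, modeled as
# two parallel-plate capacitors C = eps0 * A / g sharing a moving proof
# mass: one gap closes by x while the other opens by x.

EPS0 = 8.854e-12  # F/m, permittivity of free space (air gap ~ vacuum)

def plate_cap(area_m2, gap_m):
    """Ideal parallel-plate capacitance, ignoring fringing fields."""
    return EPS0 * area_m2 / gap_m

def differential_caps(area_m2, gap_m, x_m):
    """C1 grows as its gap closes; C2 shrinks as its gap opens."""
    return plate_cap(area_m2, gap_m - x_m), plate_cap(area_m2, gap_m + x_m)

# Hypothetical geometry: 100 um x 2 um sidewall, 2 um nominal gap,
# 0.1 um proof-mass displacement under acceleration.
c1, c2 = differential_caps(100e-6 * 2e-6, 2e-6, 0.1e-6)
print(f"dC = {(c1 - c2) * 1e15:.3f} fF")
```

The readout electronics measure C1 - C2, which cancels common-mode drift and doubles the sensitivity compared to a single sensing gap.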


Sidewalls are made very tall to maximize the sidewall surface area, and of course the space between the conductors is air rather than a dielectric. However, just as with finger caps, the best way to determine capacitance is with a field solver. Another issue MEMS share with semiconductors is parasitic capacitance.

The challenge with FEM or Method-of-Moments solvers is that the number of unknowns is large, so the compute resources and time required can be substantial. For less accurate results, on-chip capacitance-extraction-based tools have historically been used. But they are being supplemented by embedded field solvers such as Mentor’s xACT-3D used with Calibre.

Mentor has announced that the xACT-3D solution is now available for MEMS designs. Like most field solvers, it starts with a stack-up description. Next the design is read in from a layout tool and converted to a three-dimensional database. Mentor says that xACT-3D uses a true field solver that runs very quickly to output a parasitic database with capacitance values for every node pair, including parasitic capacitances. At that point the data can be output in a variety of standard formats such as a SPICE netlist. Here is a white paper that goes into more detail regarding the Calibre xACT-3D solution for MEMS.
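As a purely hypothetical illustration of that last step, per-node-pair capacitances from an extraction run might be emitted as SPICE capacitor cards like this (the node names, values, and helper function are made up for illustration, not xACT-3D output):

```python
# Sketch: turn (node_a, node_b, farads) tuples from a parasitic
# database into SPICE capacitor cards, one "Cn nodeA nodeB value"
# line per extracted node pair.

def to_spice_caps(cap_pairs):
    """Format capacitance tuples as SPICE capacitor cards."""
    return [
        f"C{i} {a} {b} {c:.4g}"
        for i, (a, b, c) in enumerate(cap_pairs, start=1)
    ]

# Hypothetical extracted pairs: sensing caps plus a substrate parasitic.
pairs = [("plate1", "plate2", 9.3e-16), ("plate1", "sub", 1.2e-16)]
print("\n".join(to_spice_caps(pairs)))
```

A netlist in this form can then be dropped straight into a circuit simulation of the readout electronics.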


MEMS accelerometers are in great demand and are needed for a wide range of applications, from subtle activities such as gesture recognition all the way up to impact sensors for passenger safety or industrial uses. Design complexity and sophistication are bound to accelerate, so a fast and efficient means of performing critical design analysis and verification before silicon will be valuable.


32-bit MCUs Way to Go for IoT

by Majeed Ahmad on 02-18-2015 at 7:00 am

Cost, power and performance, and security are the fundamental ingredients of chip development for the Internet of Things (IoT) market, and 32-bit microcontrollers are the way forward to meet these basic requirements. That was the crux of the message from the webinar held by Andes Technology Corp. on February 10, 2015.

You can see the full webinar HERE.

“The 8-bit MCU standard is limited by peripherals and instruction set and it doesn’t offer the cost advantage in the IoT environment,” said Emerson Hsiao, Senior VP of Sales & FAE at Andes. “Moreover, the memory interface in 8-bit MCUs leads to bottlenecks for both power and performance, so they don’t make sense for IoT devices.” He added that peripherals in Andes’ 32-bit processor cores operate at different power modes, and thus they optimize power consumption.

Hsiao said that the IoT market is constantly evolving and there are significant changes in the IoT landscape every year. So IoT chips should not only offer lower power and higher performance, but they should also be future-proof in terms of technology upgrades. Hsiao cited touch-panel controllers as an example: 8-bit MCUs sufficed for first-generation touchscreens, but for second- and third-generation touch-panel controllers, more demanding gesture applications for smartphones and wearable devices necessitate more powerful 32-bit MCU cores.


Andes’ ultra-low-power processor core solutions

Hsiao also presented smart meters as a case study, where a chip costs US$2-3 and features an MCU, communication port, and sensor interface as primary components. So rather than saving a few cents in chip development with an 8051, the more advanced power-saving techniques offered by 32-bit MCU cores could lead to far more energy conservation in the end. Hsiao mentioned that there are 300 million smart meters in China alone, which shows the scale of energy conservation that power-efficient chips could bring to smart meter operation at large.

For security, Hsiao again used the smart meter case study and explained how a secure MCU system can carry out embedded code protection within the end-user device. He said that smart meter devices are mostly vulnerable at the memory and JTAG levels, and showed how Andes cores can control access to the JTAG debug interface and ILM to secure embedded software and program data.

CPU and Memory Bottlenecks

For IoT and connected wearables, Hsiao emphasized small gate count for saving die area and high performance with execution efficiency for designers needing an upgrade path from the 8-bit cores that have been widely used in embedded applications over the past two decades. If there was a prominent theme in this webinar, though, it was how power is driving requirements for applications such as IoT, connected wearables and other flash-memory-based designs, and how Andes’ low-power solutions reach beyond the processor cores.

Hsiao said that Andes’ 32-bit MCU cores offer greater energy efficiency through more power-saving modes than competing solutions. “Andes employs the PowerBrake technique that results in flash acceleration, which in turn reduces power consumption and improves performance.” The PowerBrake technology is based on frequency scaling across 16 levels and has met industry benchmarks for both CoreMark and DMIPS, he added.

The PowerBrake technology helps minimize idle power by creating different power modes for the CPU. Creating power profiles for different connection stages helps optimize the power interface to the CPU, and thus lowers power consumption and improves performance.

Another power-efficiency technology that Hsiao mentioned during the webinar was FlashFetch, which minimizes accesses to NOR flash memory and thus lowers power and enhances memory interface speed. The FlashFetch memory acceleration technique records repeated code sequences in a structure called TinyCache for later fast accesses.


FlashFetch eases memory interface bottlenecks

TinyCache differs from traditional caches in that it takes power consumption into account. It improves program execution efficiency by providing zero-wait-cycle instruction accesses, and at the same time it helps cut the total power consumption of the CPU and flash memory. So, while the power consumption contributed by the CPU increases slightly due to the additional logic for the TinyCache, the power drawn by the flash memory is greatly reduced due to the smaller number of accesses.
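A rough energy model (illustrative numbers of my own, not Andes figures) shows the trade-off: the cache adds a small fixed cost per fetch, but each hit avoids a far more expensive flash access:

```python
# Average fetch energy for an MCU with a tiny loop cache in front of
# NOR flash: the cache logic costs a little on every fetch, while each
# cache hit skips the flash access entirely. All energies are
# hypothetical, in pJ per fetch.

def energy_per_fetch(hit_rate, e_cpu, e_cache, e_flash):
    """Average energy per fetched instruction with a loop cache."""
    return e_cpu + e_cache + (1.0 - hit_rate) * e_flash

# Without the cache (hit rate 0, no cache logic) vs. with an 80% hit rate:
no_cache = energy_per_fetch(0.0, e_cpu=10, e_cache=0, e_flash=50)
with_cache = energy_per_fetch(0.8, e_cpu=10, e_cache=2, e_flash=50)
print(f"{no_cache:.0f} pJ -> {with_cache:.0f} pJ per fetch")
```

Even a modest hit rate pays for the extra cache logic many times over, which matches the article's point that total CPU-plus-flash power falls despite the slightly higher CPU power.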

Moreover, if required, FlashFetch can allow instruction accesses to read ahead, thus speeding up the execution of sequential code. This feature supports 64-bit and 128-bit flash memory fetch widths. Hsiao acknowledged that a prefetch buffer helps enhance performance, but it also brings in redundant instructions that lead to increased power consumption.

Image Credit: Andes Technology Corp.

Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


Freescale and Samsung

by Paul McLellan on 02-17-2015 at 6:58 pm

It is impossible to keep a secret in this business. Everyone knows that Freescale is being shopped around and there is interest.

From Yahoo Finance:
The parties that Freescale is speaking to could not be learned. The New York Post first reported that Freescale was working with investment banks to explore a sale. Freescale shares were up 8.5 percent at $37.66 in morning trading, giving it a market value of $11.5 billion.

Well, I’ve done some learning, and apparently Samsung is the suitor and the deal is pretty much done. Of course no deal is done until it is done, and one of the reasons for engaging investment bankers is to get the price up by getting some other players into the ring. Samsung would make a lot of sense: they already have a major presence in Austin where Freescale is headquartered. In fact I believe the Samsung fab in Austin is the largest in the US, bigger than anything Intel has, or GlobalFoundries or…anyone else…Micron I guess.

About half of Freescale’s business is automotive, a fast-growing market now and for the future (driverless cars don’t really drive themselves, a lot of semiconductors do) and an area where Samsung is not strong. I expect they supply a lot to Hyundai and Kia, that is the way Korea works, but globally they are not the name that leaps off everyone’s tongue. The US in particular is a huge automotive market: not just the big 3 but transplant factories for BMW, Mercedes, Toyota, Honda and others. Most of the rest of Freescale is various forms of communication (but not mobile; they exited that business a few years ago after failing to find a buyer for it).

Freescale was a spinout of Motorola’s SPS, the Semiconductor Products Sector, later taken private by Blackstone, Carlyle, Permira and TPG Capital. Rich Beyer (a name to make any salesman’s heart leap for joy) became CEO in 2008; he was COO at VLSI for a couple of years, where I used to give him weekly tutorials about how designs were done and what EDA was. In 2011 Freescale went public again (it had briefly been public after the spinout, before the 2006 buyout, and before that was part of Motorola, which was public). The buyout firms still own almost 2/3 of the company.


7nm node is arriving, which ones will continue past 2020?

7nm node is arriving, which ones will continue past 2020?
by Pawan Fangaria on 02-17-2015 at 6:30 pm

The ‘Laughing Buddha’ is eternal, but for the semiconductor industry, I must say it’s the ‘laughing Moore’. Moore made a predictive hypothesis, and the whole world seems inclined to let it continue, eternally. When we were at 28nm, we weren’t hoping to go beyond 20/22nm; voices saying ‘Moore’s law is dead’ started emerging. Today, we are already in production at 16nm and 14nm, and looking at 10nm, 7nm, 5nm, 3nm, and even lower going forward.

Well, the FinFET transistor structure has contributed heavily to scaling semiconductor technology to 16nm/14nm. FinFET, along with high-mobility channel materials such as III-V compounds and Ge, can carry the node down to 10nm, maybe 7nm, but not beyond that.

For 5nm, or even for 7nm, foundry experts are gearing up to develop next-generation transistors; the front-runner among them seems to be the ‘Gate-All-Around’ (GAA) transistor.

If we look at the evolution of the transistor structure through its gate, it appears to progress in a straight line from single gate to double gate, to tri-gate/FinFET, and now to GAA. However, it is extremely difficult, expensive and time consuming to experiment with fabricating such complex structures and newer material compositions. Fabricating the transistors themselves is one part of the process, called FEOL (Front-End-Of-Line). Local interconnects at the device level are handled by the MOL (Middle-Of-Line) process, while the global interconnects are done by the BEOL (Back-End-Of-Line) process, and that is where the complex job of managing RC delay comes in.

Today at the lower nodes, BEOL employs multiple patterning, which requires extra deposition and etching with every pattern, thus increasing the cost of production. Technically, multiple patterning can still be viable at 7nm; however, the industry is looking at EUV (Extreme Ultra Violet) lithography to reduce that cost. With EUV, the BEOL process can be done with a single exposure, and throughput can be as good as ~150 wafers per hour. For EUV lithography, though, foundries are dependent on semiconductor equipment manufacturers. To accelerate EUV lithography, Samsung, Intel and Applied Materials are reported to be funding Inpria Corporation, a pioneer in high-resolution photoresist development and materials for emerging semiconductor patterning technologies. Recently, Inpria patented a technology in which inorganic photoresists provide nano-scale imaging below 20nm. By the way, aBeam Technologies is reported to have developed a technology to fabricate test patterns with a minimum line-width of 1.5nm, which can be used to test metrology equipment with ultra-high precision.
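The cost argument can be put in back-of-the-envelope terms. In this sketch, the ~150 wafers-per-hour EUV throughput comes from the text; the lot size, the immersion scanner throughput and the pass counts are hypothetical placeholders, not foundry data.

```python
# Back-of-the-envelope comparison of patterning time per critical layer.
# The ~150 wafers/hour EUV throughput is from the article; every other
# number here is a hypothetical placeholder, not foundry data.

def passes_per_layer(scheme):
    # each multi-patterning pass needs its own exposure plus extra
    # deposition and etch steps, which is where the cost piles up
    return {"single": 1, "double": 2, "quad": 4}[scheme]

def layer_hours(wafers, wafers_per_hour, scheme):
    """Scanner hours to pattern one critical layer for a lot of wafers."""
    return passes_per_layer(scheme) * wafers / wafers_per_hour

# hypothetical lot of 1000 wafers on one critical layer
immersion_quad = layer_hours(1000, 200, "quad")    # 193i scanner, 4 passes
euv_single = layer_hours(1000, 150, "single")      # EUV, single exposure

print(immersion_quad, euv_single)
```

Even with the slower assumed per-pass throughput, single-exposure EUV wins handily here because it avoids the repeated exposure/deposition/etch cycles, which is the essence of the industry's interest.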

Nevertheless, if we look at the roadmaps of the big players, Intel, GlobalFoundries, Samsung, TSMC and UMC all plan to bring out 10nm chips by 2017 at the latest. Daniel Nenni even blogged about Intel’s plan to launch 10nm chips in early 2017.

TSMC is very aggressive on 7nm as well. According to ASML, TSMC has already ordered EUV scanners for purchase in 2015, and it is expected to start 7nm chip production in early 2018. Intel does not seem to be behind either; it plans to go ahead with 7nm even without EUV if EUV is not ready. So, let’s take some delay into account and say 7nm comes out in 2019. That translates to roughly a two-year gap between every major production node.

The graph above clearly shows 90nm, 65nm, 45nm, 32nm, 22nm, 14nm and 10nm arriving at roughly two-year intervals. 32nm/28nm was an inflection point below which it became really difficult to scale down. Double patterning and then multiple patterning took hold, FinFET was invented, and now we are looking at GAA and other innovative transistor structures, EUV, and so on to go below 10nm. 7nm may arrive in 2018 or 2019. Let’s say 5nm and 3nm also arrive past 2020, with support from EUV, GAA and other innovations as required. Then what? Which nodes will survive? Definitely, a few of them will have long maturity curves with major production volumes. It will take clever, strategic planning for fabs to reap the benefits from them; those nodes will become the cash cows in the long run. Let’s look at design starts per node as of 2013 (courtesy Synopsys) –

We can clearly see 350nm – 90nm in declining mode, 65nm – 32nm still moving towards maturity and 22nm – 14nm in growth mode. If we extrapolate this trend to 7nm and then 5nm and 3nm beyond 2020, we can envision that by that time 14nm and 10nm will be in major production. Will they continue for long? I would think so, because the FinFET process will be perfected by then with 16nm, 14nm and 10nm adopting the same technology with improved performance. If GAA and other technologies get perfected by say 2025, they may take over by 2030. Beyond that we need to again look at our ‘laughing Moore’!
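The two-year-cadence extrapolation above can be reduced to a few lines of arithmetic. The anchor year and the cadence are taken from the article (14nm in production around 2015, 10nm targeted for 2017, 7nm for roughly 2019); everything past 7nm is the same speculative projection the article makes.

```python
# Project node introduction years from the article's ~2-year cadence.
# Anchors from the text: 14nm in production around 2015, 10nm targeted
# for 2017, 7nm for roughly 2019; 5nm/3nm are pure extrapolation.

CADENCE_YEARS = 2
NODES = ["14nm", "10nm", "7nm", "5nm", "3nm"]

def projected_years(start_year=2015):
    return {node: start_year + i * CADENCE_YEARS
            for i, node in enumerate(NODES)}

print(projected_years())
# 5nm and 3nm land past 2020, matching the article's scenario
```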


Arteris Adds Functional Safety to NoC

Arteris Adds Functional Safety to NoC
by Majeed Ahmad on 02-17-2015 at 1:00 pm

Arteris Inc. has joined hands with Yogitech S.p.A. to help automotive system-on-chip (SoC) designers meet the required functional safety metrics and obtain the ISO 26262 certification for automotive safety integrity levels (ASIL) in the least possible time.

Arteris, which provides network-on-chip (NoC) interconnect IP solutions, will license the fRSVC_FlexNoC Safety Verification Component from Yogitech to jump-start the safety analysis and verification of its FlexNoC Resilience Package IP, accomplishing safety objectives much faster.

“Customers who license the Arteris FlexNoC Resilience Package from Arteris will also be able to license the fRSVC_FlexNoC Safety Verification Component from Yogitech,” said Kurt Shuler, VP of Marketing at Arteris. “Many of Arteris’ Resilience Package customers are already longtime customers of Yogitech, so this partnership makes it very convenient for these companies to integrate Arteris FlexNoC into their existing functional safety analysis and verification processes.”


Arteris and Yogitech: ISO 26262 certification solution

Shuler calls Yogitech’s offerings “functional safety verification IP.” Yogitech is now developing the fRSVC_FlexNoC Safety Verification Component for FlexNoC. It will be a component of the Yogitech Safety Designer and Safety Verifier Tool Suites that makes it easier for FlexNoC users to automate the ISO 26262 test coverage and fault injection needed for certification.
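To give a flavor of what automated fault injection means in practice, here is a deliberately tiny Python sketch: a parity bit stands in for a real NoC safety mechanism, and an exhaustive single-bit-flip campaign measures what fraction of injected faults the mechanism detects. Real ISO 26262 campaigns, and Yogitech's tooling, use far richer fault models and metrics; everything here is illustrative.

```python
# Toy fault-injection campaign in the spirit of ISO 26262 safety
# verification. Illustrative only; real NoC safety mechanisms and
# Yogitech's fRSVC_FlexNoC component are far more elaborate.

def parity(bits):
    # even-parity safety mechanism over a packet's payload bits
    return sum(bits) % 2

def make_packet(payload):
    # append the parity bit the checker will verify at the receiver
    return payload + [parity(payload)]

def detected(packet):
    payload, p = packet[:-1], packet[-1]
    return parity(payload) != p

def campaign(payload):
    """Inject every single-bit fault and report diagnostic coverage."""
    golden = make_packet(payload)
    hits = 0
    for i in range(len(golden)):
        faulty = list(golden)
        faulty[i] ^= 1          # single-bit-flip fault model
        if detected(faulty):
            hits += 1
    return hits / len(golden)   # fraction of faults the mechanism catches

print(campaign([1, 0, 1, 1, 0, 0, 1, 0]))
```

Parity catches every single-bit flip, so this toy campaign reports full coverage; the interesting cases in real campaigns are multi-bit faults and faults in the safety mechanism itself, which is exactly the analysis the certification tooling automates.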

Arteris will also be adding safety documentation to the existing FlexNoC Resilience Package IP. “Arteris is working with Yogitech to create this safety documentation to ensure it meets ISO 26262 requirements,” Shuler added. “Having this available to customers will make it easier for them to create the necessary ISO 26262 documentation for their custom designs.”

According to Shuler, implementing functional safety features in hardware is crucial for three reasons. First, the software-centric approach to implementing safety features in automotive SoCs involves much more effort to develop and maintain than using certifiable hardware IP.

Second, software can be compromised, and ultimately the chipmaker will be answerable for the safety risks. Third, chipmakers, which sit at tier 4 in the automotive product hierarchy, have to leave the software implementation of safety features to tier 1 and third-party players, and thus lose control of those features.

The partnership between Arteris and Yogitech is targeting compliance with the ISO 26262 functional safety standard for SoC design teams. However, Arteris and Yogitech will also extend the offering to the IEC 61508 standard, addressing safety-related industrial markets such as robotic systems.

Anatomy of NoC IP Partnership

Yogitech, a leading player in creating the ISO 26262 safety spec, provides services and solutions to semiconductor outfits and system integrators to help them meet functional safety demands. Its customer portfolio includes chipmakers such as Renesas, STMicro and TI; IP vendors like ARM; and system integrators such as Bosch and Denso. Being part of the entire value chain, Yogitech is able to help its customers with a broad view of the automotive SoC market.

Mauro Pipponzi, Director of fRTools at Yogitech, said that the partnership with Arteris involves a number of steps. First, it will produce a functional safety analysis and verification of the Arteris FlexNoC IP in order to generate the IP’s Safety Manual and its Safety Documentation Package, including the analysis and verification data characterizing the IP from a safety standpoint.

Second, according to Pipponzi, is the development of the fRSVC_FlexNoC Safety Verification Component, which packages the safety documentation data in a format reusable with Yogitech fRTools during integration of the IP.

Third, Pipponzi concluded, is the use of the fRSVC_FlexNoC by Arteris end customers. They will be able to integrate the FlexNoC IP in their SoCs using the fRSVC_FlexNoC together with the Yogitech Safety Designer and Safety Verifier Tool Suites, and achieve safety objectives on their products with the safety metrics and verification data already provided.


Safety Verification Component for FlexNoC

According to Arteris’ Shuler, connected car standards like Advanced Driver Assistance Systems (ADAS) and V2V/V2I are all about functional safety. He added that stringent reliability becomes a requirement for automotive SoCs when they interact with the vehicle system for either acceleration or deceleration. Moreover, the addition of cameras and sensors to connected car platforms like ADAS will require greater processing power and that will lead to computation consolidation for automotive SoCs.

Shuler added that consumer electronics and mobile SoC makers like Nvidia and Qualcomm are new to safety features that are imperative to in-car electronics. That’s why they are adopting IP for car safety and are becoming Arteris customers.

Image credit: Arteris Inc.