
Parasitic Extraction for Advanced Node and 3D-IC Designs

by Alex Tan on 10-31-2018 at 7:00 am

Technology scaling has had a positive impact on device performance, while creating challenges for the interconnect and the fidelity of its manufactured shapes. Dimensional scaling has significantly increased metal and via resistance at advanced nodes (7nm and beyond), as shown in figures 1a and 1b. Like a high-end smartphone without a good wireless carrier (4G/LTE or 5G), a higher-performance device is an unattractive option unless it is accompanied by optimal wiring that minimizes interconnect-induced delay. Hence, accurately capturing the interconnect contribution during IC design implementation is crucial to measuring design targets.
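The trend can be illustrated with a first-order (Elmore) delay estimate for a driver, wire, and load. The numbers below are hypothetical, chosen only to show how a wire-resistance increase inflates net delay even when the driving device is unchanged:

```python
# First-order (Elmore) delay of a driver + distributed RC wire + load.
# All numbers below are hypothetical, chosen only to illustrate the trend.
def elmore_delay(r_drv, c_load, r_wire, c_wire):
    """Elmore delay in seconds.

    r_drv  -- driver output resistance (ohms)
    c_load -- receiver input capacitance (farads)
    r_wire -- total wire resistance (ohms)
    c_wire -- total wire capacitance (farads)
    """
    # The driver charges all downstream capacitance; the distributed wire
    # contributes r_wire * (c_wire/2 + c_load) to the delay.
    return r_drv * (c_wire + c_load) + r_wire * (c_wire / 2 + c_load)

# Same geometry and device, but wire resistance tripled at the scaled node.
base   = elmore_delay(r_drv=1e3, c_load=1e-15, r_wire=2e3, c_wire=10e-15)
scaled = elmore_delay(r_drv=1e3, c_load=1e-15, r_wire=6e3, c_wire=10e-15)
print(base, scaled)  # delay roughly doubles from the wire term alone
```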

Challenges to parasitic extraction
From a designer’s standpoint, a good parasitic extraction solution should address accuracy, performance, capacity and integration aspects.

Accurate modeling of wire capacitances in an advanced node process is a non-trivial task, as capacitance is a function of a wire's shape, context, and distance from the substrate and from surrounding wires. It ultimately requires solving the electrostatic field in a region containing multiple dielectrics. The trend toward more heterogeneous designs employing innovative, complex packaging has also made it necessary to augment existing extraction techniques with 3D-IC modeling capability (see figure 1c).
As design size grows, both the extraction file size and the turn-around time increase, reflecting the jump in design net count, extracted RC network size, and the associated physical representation and layer handling. Capacity works both ways: the extraction tool of choice should be able to absorb a large design, perform the extraction, and produce an extraction file compact enough to be back-annotated in the downstream timing analysis stage. All of this should be done fast, too.

Apart from managing routing resources (by means of pre-routes, layer assignments, and route blockages), an accurate and robust parasitic extraction technology is also essential for pinpointing hot-spots caused by ineffective use of wires or vias, as well as potential signal-integrity issues. The extraction step should be interoperable with the analysis and optimization tools that consume the parasitic data.

Modeling, extraction accuracy and xACT
Both device and interconnect modeling play a critical role in providing accurate parasitic values. With device architecture transitioning to non-planar, multi-gate structures such as FinFETs and the upcoming Gate-All-Around (GAA) devices, the current density and the parasitic capacitance between the gate and source/drain terminals are expected to increase with further technology scaling.

During the micrometer process era, field-solver techniques for capacitance extraction were reserved for correlation purposes, as they provided good accuracy but were computationally expensive. We were also accustomed to labeling RC extraction modes as 2D, 2.5D, or pseudo-3D. More recently, many field solvers and variations have appeared, from finite-element to boundary-element methods and, most recently, the floating random-walk method. While accuracy has traditionally been achieved by discretizing the parasitic equations into table lookups, such an approach is inadequate given the increased layer and design complexity.
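The intuition behind the random-walk approach can be sketched with a toy Monte Carlo solver for Laplace's equation. This is a gross simplification: production floating random-walk solvers walk in continuous space using analytic Green's functions and handle multiple dielectrics, while the grid version below only illustrates why averaging over random walks converges to the electrostatic potential:

```python
import random

# Toy Monte Carlo solver for Laplace's equation on a 2D grid, to illustrate
# the random-walk idea behind floating random-walk capacitance extraction.
# (Real FRW solvers walk in continuous space using Green's functions and
# handle multiple dielectrics; this grid version is only a sketch.)
def walk_potential(x, y, n, v_boundary, walks=20000, seed=1):
    """Estimate the potential at interior point (x, y) of an n x n grid.

    v_boundary(i, j) returns the fixed potential on boundary node (i, j).
    Each walk steps to a uniformly random neighbor until it exits; the
    average boundary potential seen converges to the Laplace solution.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        i, j = x, y
        while 0 < i < n - 1 and 0 < j < n - 1:
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        total += v_boundary(i, j)
    return total / walks

# One "plate" held at 1 V (the j == 0 wall), the rest grounded.
v = lambda i, j: 1.0 if j == 0 else 0.0
mid = walk_potential(8, 8, 17, v)  # center of a 17 x 17 grid
print(mid)  # by symmetry the exact answer is 0.25; the estimate lands nearby
```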

Calibre xACT™ is Mentor’s high-performance parasitic extraction solution. It combines a fast, deterministic 3D field solver with accurate modeling of the physical and electrical effects of the complex structures and packaging used at advanced nodes, delivering the needed extraction accuracy, including rotationally invariant total and coupling capacitances.

To address RC extraction of heterogeneous designs such as a 3D-IC with FOWLP (Fan-Out Wafer-Level Packaging), xACT applies 3D-IC modeling that takes into account two interface layers between neighboring dies, as shown in figure 2. It captures their interaction, creating an ‘in-context extraction’ that delivers highly accurate and efficient results, with errors of 0.9% and 0.8% for total ground capacitance and total coupling capacitance, respectively.

xACT also handles new interconnect modeling requirements at all layers, such as accounting for potential BEOL shifts due to multi-patterning’s impact on coupling capacitance, MOL contact bias modeling, Line-End Modeling (LOM), etc.

Extraction size reduction techniques
SPEF/DSPF and log files are notoriously ranked at the top of IT’s disk-space screener list. Though normally retained in compressed format, these files are still huge and can strain not only disk space but also the capacity of downstream simulators, so reducing parasitic size without losing overall accuracy is key.

Unlike parasitic extraction methods that rely on a threshold or tolerance value as the basis for parasitic size reduction, xACT uses a more efficient reduction mechanism known as TICER (TIme Constant Equilibration Reduction). Electrically-aware TICER produces a smaller RC network while controlling the error. This feature can be used across design flows (analog, full-custom, and digital sign-off).
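The core reduction step can be sketched as a star-mesh transform that removes one internal node at a time. The snippet below is a simplified illustration, not Mentor's implementation: real TICER selects nodes by their time constants and also synthesizes new capacitors between neighbors, while this sketch only merges conductances and redistributes the eliminated node's grounded capacitance.

```python
# Simplified node-elimination step in the spirit of TICER-style RC reduction.
# (Illustrative only: real TICER picks nodes by time constant and creates
# new coupling capacitors; here we do the resistive star-mesh transform and
# a first-order redistribution of the eliminated node's grounded cap.)
def eliminate_node(g, cap, k):
    """Remove internal node k from an RC network.

    g   -- dict {(i, j): conductance in siemens} with i < j
    cap -- dict {node: grounded capacitance in farads}
    """
    # Collect k's neighbors and the conductances tying them to k.
    nbrs = {}
    for (i, j), gij in list(g.items()):
        if k in (i, j):
            other = j if i == k else i
            nbrs[other] = nbrs.get(other, 0.0) + gij
            del g[(i, j)]
    g_sum = sum(nbrs.values())
    nodes = sorted(nbrs)
    # Star-mesh transform: connect every pair of former neighbors.
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            i, j = nodes[a], nodes[b]
            g[(i, j)] = g.get((i, j), 0.0) + nbrs[i] * nbrs[j] / g_sum
    # Redistribute k's grounded capacitance by conductance share.
    ck = cap.pop(k, 0.0)
    for i in nbrs:
        cap[i] = cap.get(i, 0.0) + ck * nbrs[i] / g_sum
    return g, cap

# Two 1 kohm resistors in series through internal node 1, 2 fF at node 1:
g, cap = eliminate_node({(0, 1): 1e-3, (1, 2): 1e-3}, {1: 2e-15}, k=1)
print(g)    # one equivalent link between 0 and 2: 0.5 mS, i.e. 2 kohm
print(cap)  # node 1's 2 fF split equally between nodes 0 and 2
```

Eliminating the node reproduces the expected 2 kΩ series resistance while keeping the total capacitance unchanged, which is why such reductions preserve delays to first order.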

A trial on a 128K SRAM design shows 30% faster timing simulation with a TICER-reduced parasitic netlist (figure 3) compared to an unreduced netlist, while the simulation error stayed within 2% of the unreduced result (figure 4).

Multi-corner interconnect extraction is usually a requirement for cell characterization and design sign-off, which must be performed across multiple process corners. The introduction of multi-patterning at advanced nodes adds even more corners. For example, due to multi-patterning at 7nm, the original nine process corners expand to more than a dozen, since each has one or more multi-patterning (MP) corners. Instead of running each process corner separately, which is costly, xACT performs simultaneous multi-corner extraction, in which all process, multi-patterning, and temperature corners are extracted in a single run. The user specifies the desired combination of corners to extract and netlist, which is done after an LVS run.
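The corner explosion is simply a cross product. The sketch below uses invented corner names (not from any foundry PDK) to show how a handful of per-axis choices multiplies into many extraction corners:

```python
from itertools import product

# Why corner counts explode with multi-patterning: every process corner is
# crossed with mask-shift and temperature corners. (Corner names here are
# invented for illustration, not taken from any foundry PDK.)
process = ["ss", "tt", "ff"]               # interconnect process corners
mp      = ["mp_min", "mp_nom", "mp_max"]   # multi-patterning shift corners
temp    = ["m40C", "125C"]                 # temperature corners

corners = ["_".join(c) for c in product(process, mp, temp)]
print(len(corners))  # 18 extractions from just 3 x 3 x 2 choices
```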

Speed, capacity and integration
Because designs are also growing in complexity at each successive node, a big challenge for parasitic extraction at 7nm is processing the design and the necessary corners without incurring additional cycle time during the signoff phase. The xACT solution handles all of these complex modeling requirements and uses net-based parallelism with multi-CPU processing to deliver fast and accurate RLC parasitic extraction netlists. Its multi-threaded and distributed processing architecture enables full-chip extraction of multi-million-instance nanometer designs.

Advanced technology scaling has also introduced increased geometric variability induced by uncertainties in the manufacturing process. Such variations in the manufactured devices and interconnect structures may cause significant shifts from the design intent; the electrical impact of this variability on both the adjoining devices and the interconnect should be assessed and accounted for during signoff.

Performance is multidimensional. From an implementation perspective, design performance is not a function of the characterized library and wire choices alone, but may also be influenced by signal-integrity-induced delay. Meanwhile, reliability analyses such as EM and self-heating are increasingly part of sign-off. xACT provides device location information to these tools so that current density violations can be accurately identified and resolved. Subsequent corrective actions, such as via doubling and wire spreading, can then be taken to reduce current density violations.

The Calibre xACT platform also uses foundry-qualified rule decks in the Calibre SVRF language, and is interoperable with the Calibre nmLVS™ tool and with industry-leading design implementation platforms.

For more details on Mentor’s Calibre xACT, please check HERE.


Solving and Simulating in the New Virtuoso RF Solution

by Tom Simon on 10-30-2018 at 12:00 pm

Cadence has done a good job of keeping up with the needs of analog RF designs. Of course, the term RF used to be reserved for a thin slice of designs that were used specifically in RF applications. Now, it covers things like SerDes for networking chips that have to operate in the gigahertz range. Add that to the trend of combining RF and digital blocks onto one die or into the same package and the scope of analog RF designs expands pretty rapidly.

Nevertheless, there were a few noticeable holes in the Cadence solution when it came to addressing RF designs. In the case of simulation, different parts of the design often resided in Allegro SiP or Virtuoso, so integrating and managing pre- and post-layout simulation was problematic. The other hole for RF users was the set of options available for EM solver based model generation and simulation. However, Cadence has expended a lot of effort to resolve these issues in their new Virtuoso RF Solution, and the results look pretty promising.

I had a conversation with Michael Thompson, RF Solutions Architect at Cadence, about the work they have recently done to improve the entire solution. His first point was that it used to be OK to do IC and package design separately, but changes in IC and package design mean that many more things are being combined and need to be looked at in a unified way. Thus, Virtuoso and Allegro SiP need to work together for RF designs. This created a requirement for lowering the barriers to exchanging design data between the systems, creating free bidirectional data exchange. They added the ability to concurrently use multiple technologies for simulation and layout. The key is to have one golden schematic for the entire design, including the package and multiple die, inside of Virtuoso.

The other hole they needed to plug was integration with EM solvers to make the flow seamless. Previously, Cadence relied on a patchwork of external solvers integrated with SKILL code through the Connections Program. Of course, Cadence had their own FEM solver that came in through the Sigrity acquisition. However, it was really targeted at board- and package-level problems, as evidenced by its SiP integration. The majority of IC solvers use the Method of Moments. Cadence struck a partnership with National Instruments to integrate their AWR Axiem solver tightly into Virtuoso. At the same time, they also created a path for Sigrity in the IC flow.

With seamless integration for extraction and simulation set-up, the ease of adding RF models for critical structures has improved dramatically. The models are S-parameter models, and Spectre-RF has also improved its S-parameter handling. As a circuit design progresses, designers can move from QRC to FEM and MoM extraction, keeping each as a separate extracted view. The Hierarchy Editor allows swapping among these models for simulation runs.

For the Virtuoso RF solution, Cadence has also been working on new device models. One example that Michael brought up was GaAs models.

Their solution brings together package and IC design into one environment where difficult RF design problems can be solved more easily. This new solution was shown for the first time at IMS. Ensuring that teams working on the package and on the IC can share data and analysis results makes sense with the growing complexity of RF designs. For more information on the new Cadence Virtuoso RF Solution, I suggest looking at the solution page on their website.


A Smart Way for Chips to Deal with PVT Issues

by Tom Simon on 10-30-2018 at 7:00 am

We have all become so used to ‘smart’ things that perhaps we have forgotten what it was like before many of the things we use day to day had sensors and microprocessors to help them respond to their environment. Cars are an excellent example. It used to be commonplace to run down your battery by leaving your lights on. Now cars are smart enough to turn them off if left on too long. Even better illustrations are how cars adapt to driving at elevation or warm up smoothly when cold. There were simple mechanical gizmos that tried to compensate for operating conditions, but they were prone to malfunctioning or operating poorly. The use of monitoring has completely changed how reliable things are and how well they can adapt to changing conditions.

What we sometimes fail to appreciate is that SoCs need to be smart in the same way. If my car can adjust the fuel mixture to compensate for temperature or oxygen levels, then why shouldn’t ICs adjust automatically for things like metal variation, operating voltage, or even local temperature? If ICs can be made smart, then performance, reliability, and even yield will improve. Moortec is an IP provider that has been focusing on in-chip monitoring for almost a decade. They have sensors and controllers that can be embedded in SoCs during design to help measure, adjust, and compensate for a large variety of issues that occur in ICs during operation and over time as they age.

The most basic use of PVT sensors is to expedite and facilitate testing. Chips can be rapidly binned and proper operation can be verified by checking internal performance characteristics. However, there is a lot to gain by moving beyond using in-chip sensors for test and using them to dynamically manage chip operation.

Chips endure stress from higher self-heating at newer process nodes and the higher densities they bring. Electrical overstress, electromigration, hot-carrier aging, and increased negative-bias temperature instability all threaten IC operation. Likewise, IR drops caused by increased gate capacitance, more resistive metal, and even supply issues can cause performance degradation or outright failure. Additionally, process variation is harder to control because of new variation sources, multiple thresholds, and the effects of aging.

Moortec has been working on this problem since 2010, with their focus on in-chip monitoring systems. They have put together a system that uses several different sensor IP blocks that can be placed one or more times on the die. They tie these sensors together with a PVT controller which can be used to support DVFS/AVS, clock speed optimization, silicon characterization, and increased reliability and device lifetime.


Their process monitoring IP block uses multiple ring oscillators to assess device and interconnect properties. With the results of this sensor it is possible to perform speed binning, monitor aging, and support timing analysis.
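As a sketch of how such ring-oscillator data might feed speed binning, consider the toy classifier below. The nominal frequency and thresholds are invented for illustration and are not taken from Moortec's actual IP:

```python
# Hypothetical speed-binning from ring-oscillator readings. The 1 GHz
# nominal frequency and the +/-5% thresholds are invented for illustration;
# real binning schemes are product- and process-specific.
def speed_bin(freq_mhz, nominal_mhz=1000.0):
    """Classify a die by its ring-oscillator frequency vs. nominal."""
    ratio = freq_mhz / nominal_mhz
    if ratio >= 1.05:
        return "fast"
    if ratio <= 0.95:
        return "slow"
    return "typical"

print([speed_bin(f) for f in (940.0, 1000.0, 1080.0)])
# ['slow', 'typical', 'fast']
```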

The voltage monitoring IP block is extremely versatile. It can monitor IR drop, core and IO voltage domains, and facilitates AVS. At the same time, it also helps monitor the quality of the supplies. It is useful in detecting supply events, perturbations, and supply spikes. An interesting feature is the ability to use one instance to monitor multiple supply domain channels in FinFET nodes.

The last leg of the triad is their temperature sensor. It has high accuracy and resolution and offers a number of testability features together with variable sampling modes to allow higher sampling rates if needed for performance.

High reliability and performance both require in-chip monitoring. In each of the critical markets for semiconductors today, it is necessary to squeeze out every ounce of performance while ensuring reliable operation. In safety-critical systems such as ADAS, monitoring proper functioning and detecting age-related failures is mandatory. Mobile devices need to operate at the lowest power possible, so DVFS is almost always used. In servers, high operating speeds generate significant heat, which even when minimized can still affect chip operation.

Moortec’s solution looks like it offers IP that is easily deployable to make chips smarter. I just wish that my parents’ carbureted Pontiac I drove in high school had the smart features that today’s technology provides. However, talking about that is a little bit like complaining about what a hassle dial phones were back in the day. That said, it seems inevitable that all chips will be smart soon enough. More information about Moortec’s in-chip monitoring IP is available on their website.


The Latest from Samsung Semiconductor

by Tom Dillinger on 10-29-2018 at 12:00 pm

Earlier this Spring, Samsung Foundry held a technology forum, describing their process roadmap and supporting ecosystem developments (link). Recently, the larger Samsung Semiconductor organization conducted a Tech Day at their campus in San Jose, presenting (and demo-ing) a broader set of products. The focus of the day was on Samsung memory technology, encompassing non-volatile flash, DRAM, and GDDR roadmaps. The audience was more focused on system design and integration than silicon process technology, and the key Tech Day announcements reflected new Samsung memory products being introduced. (Samsung Foundry also made a major announcement.) Here are the highlights from the Samsung Tech Day.

Interesting Facts, Figures, and Quotes
In addition to the product introductions, there were some “sound bites” from the presentations that I thought were quite interesting:

  • “EUV lithography for DRAM manufacture is currently in R&D, not yet in production – it will no doubt be introduced in future DRAM generations.” (a few layers)
  • “Every 2 years, we create more data than we previously created in all of history.” (e.g., 160 ZB in 2025)
  • “Facebook generates 4 PB/day alone.”
  • “A future Class-5 fully-autonomous vehicle will generate 4TB/day.”
  • “Analytics are changing the way in which professional sports are being played. The defensive strategies being employed against individual hitters have resulted in the lowest overall Major League Baseball batting average in 46 years.”
  • “5G communications will be rolled out to 19 metropolitan areas in 2019.” (including San Francisco)
  • “Data center corporations are aggressively adding a Corporate AI Officer (CAIO) executive position.”
  • “Memory holds the key to AI.”

The focus of these examples was the requisite data capacity and bandwidth required of the current set of workloads. The key conclusion was:

“In the past few decades, computing evolved to a client-centric model. We are now moving to a memory-centric compute environment.”

One cautionary comment was provided:

“A significant percentage of the (unstructured) data being generated for analytics is ROT – redundant, obsolete, or trivial. A requirement for these memory-centric, data-driven applications will be to optimize the working dataset.”

Here are the major product announcements from the Samsung Tech Day.

256GB RDIMM

Samsung introduced the 16Gb DDR4 DRAM in 2017, utilizing their “1y nm” process technology. At the Tech Day, a 256GB “3D stacked” Registered DIMM stick was introduced. Although there’s been lots of attention given to 2.5D and 3D topologies for multiple (heterogeneous) logic die in a package, Samsung has been in production with stacked memory die for several generations – see the figure below.

Compared to an equivalent configuration with 2 x 128GB RDIMM, the 256GB RDIMM provides a ~25% power reduction, obviously a key factor in server design.

As the new RDIMM offers 2X the memory capacity in the same footprint, the maximum memory footprint of compute servers is likewise increased – e.g., 8TB in a 32-DIMM, 2P rack-mounted server. “In-memory” database transaction processing capabilities are expanded. For chip design, I was specifically thinking about the EDA applications for SoC electrical analysis, which are now able to accommodate 2X the model complexity, as well.

7LPP in Production
Although the theme of the Tech Day was the synergy between the Samsung Semiconductor product family and “memory-centric computing”, there was a major Samsung Foundry announcement, as well.

The “full EUV” 7LPP foundry process is now in full production, with comprehensive “SAFE” ecosystem support from EDA and IP partners.

Bob Stear, Senior Director, Samsung Foundry Marketing, indicated, “7LPP offers a 40% area reduction, and a 20% performance or 50% power improvement compared to 10nm. We are achieving a sustained exposure power output of 250W, enabling a throughput exceeding 1500 wafers per day. The utilization of single-exposure EUV lithography is truly a big leap in cost-effective production, compared to previous multipatterning-dominated process nodes. The number of masks is reduced by 20%.”

The figure above depicts the improved fidelity associated with (single-mask) EUV exposure versus (multi-patterned) 193nm ArF-immersion lithography.

Bob also hinted at future Samsung Foundry offerings, namely:

 

  • (2nd generation) 18FD-SOI, with embedded magneto-resistive RAM (MRAM)
  • follow-on nodes 5LPE and 4LPE (E = “early” adopter), with PDKs available in early 2019
  • (more info to come at the next Samsung Foundry Forum in May ’19)
  • 3GAA (Gate-All-Around) in 2019

“Smart” Solid-state Drive Architecture
A unique announcement was the “Smart SSD”, a design that integrates an FPGA into the SSD package.

Xilinx collaborated with Samsung on the product engineering, offering a full application development and software library stack for the (Zynq, with ARM-Cortex core) FPGA integrated into the SSD.

The CEO of Xilinx participated in the product announcement, saying, “This new computational SSD architecture moves acceleration engines closer to the data, offering improved performance for database tasks and machine learning inference.”

Examples were provided of ~3X performance of (parallel-query) DB TPC-H transaction processing and ~3X business intelligence analytics (MOPS) throughput.

The Smart SSD architecture does present some interesting acceleration opportunities, and also some challenges. The endurance specifications for SSDs vary significantly.

The system integrator uses the anticipated data communications workload profile to match the SSD endurance with the product requirements – e.g., from an SSD “boot device” with limited activity (~0.1 – 1.0 effective drive writes per day, DWPD) to hard-drive data caching (3+ DWPD). Using an SSD in a new set of applications, such as feeding accelerator engines with data, requires new workload profiling and consideration of endurance reliability analysis (and over-provisioning) – a very interesting area for further research, to be sure. (The figure below provides an example of the SSD endurance calculations for Samsung SSDs – a very interesting whitepaper is available here.)
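The endurance arithmetic behind DWPD ratings is straightforward. The sketch below uses illustrative numbers (the 3.84 TB capacity, 1 DWPD rating, and 5-year warranty are assumptions, not Samsung specifications):

```python
# Standard SSD endurance arithmetic relating DWPD, capacity, warranty
# period, and total bytes written (TBW). Numbers are illustrative only.
def tbw(capacity_tb, dwpd, years, days_per_year=365):
    """Total terabytes written allowed over the warranty period."""
    return capacity_tb * dwpd * years * days_per_year

# A hypothetical 3.84 TB drive rated at 1 DWPD over a 5-year warranty:
budget = tbw(3.84, 1.0, 5)
print(budget)  # the drive's TBW budget (about 7008 TB)

# Used as a 3-DWPD cache instead, the same budget lasts only a third as long:
print(budget / (3.84 * 3 * 365))  # remaining life in years (about 1.67)
```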

Samsung Semiconductor definitely presented a unique perspective at their Tech Day, highlighting the need to focus on storage capacity and bandwidth for a new “memory-centric” computing environment.

-chipguy


Intel Q3 2018 Jibber Jabber

by Daniel Nenni on 10-29-2018 at 7:00 am

This is what happens when you have a CFO acting as a semiconductor CEO, and Bob Swan is a career CFO with zero semiconductor experience or education. Granted, there is no way he wrote the opening statement himself, but it was full of jibber jabber anyway. The really disappointing jibber jabber came from our own Murthy Renduchintala on the status of 10nm, which has been a trending topic on SemiWiki and elsewhere for many months. Why Intel thought they could jibber jabber their way out of 10nm questions I do not know. It started with Bob’s opening statement:

While our current product lineup is compelling, our roadmap is even more exciting. We continue to make good progress on 10-nanometer. Yields are improving, and we’re on track for 10-nanometer-based systems on shelves during the holiday 2019 selling season. The breadth of IP we’ve assembled combined with Intel’s design, software, packaging, and manufacturing capability, gives us an unmatched ability to invent the industry’s future.

Bob, your current product lineup is compelling for one single reason: you have no real competition at 14nm. Intel 14nm is by far superior to TSMC 16nm and Samsung/GF 14nm in both performance and density. Unfortunately, that lead ends now with TSMC and Samsung 7nm, which makes your current product lineup an offense to Moore’s Law and to the industry-leading Intel Tick-Tock model that we all knew and loved.

And the Murthy 10nm Jibber Jabber in the Q&A:

Venkata S. M. Renduchintala – Intel Corp.
Hey, Vivek, let me take it. This is Murthy. First of all, as Bob said in his opening remarks, the progress we’ve made in the quarter is very much in line with our expectations. While we can’t give any specific numbers, I do believe that the yields as we speak now are tracking roughly in line with what we experienced in 14-nanometer.

So we’re still very much reinforcing and reaffirming our previous guidance that we believe that we’ll have 10-nanometer shipping by holiday of 2019. And if anything, I feel more confident about that at this call than I did on the call a quarter ago. So we’re making good progress and I think we’re making the quarter-on-quarter progress that’s consistent with prior generations having reset the progress curve.

“While we can’t give any specific numbers”? Sure you CAN but you just won’t. Are they that embarrassing? How about a little transparency? And you wonder why the fake news about 10nm getting cancelled got traction? Murthy, since you were not at Intel during the 14nm yield ramp let me remind you that it was disastrous. So where exactly are 10nm yields in relation to 14nm?

Now that TSMC is in HVM with 7nm, which is comparable in performance and density to the much delayed Intel 10nm, not only CAN you disclose specific yield or defect density numbers, investors should be demanding it! It was embarrassing how the analysts on the call did not push for more information.

The full Intel Q3 2018 transcript is here.

The good news is that Intel had a fantastic quarter; AMD, not so much. Hopefully this will change when AMD has 7nm parts out early next year, but I would not bet on it. Even after losing the process lead, the Intel sales organization is getting VERY aggressive and protective of their lead customers. I have seen examples of this first hand and I am seriously impressed. Intel is absolutely not walking away from price-competitive deals.

Intel +3.6% on beats, Data Center recovery, and positive guidance
Q3 results beat EPS and revenue estimates, driven by a recovery in Data Center, which had missed estimates last quarter. Upside Q4 guidance has revenue at $19B (consensus: $18.39B) and EPS of $1.22 (consensus: $1.09). Revenue breakdown:

Client Computing, $10.2B (+16% Y/Y; consensus: $9.33B)
Data Center, $6.1B (+26%; consensus: $5.89B); IoT, $919M (+8%; consensus: $952.4M)
Non-Volatile Memory Solutions, $1.1B (+21%; consensus: $1.14B)
Programmable Solutions, $496M (+6%; consensus: $526.8M)

AMD Q3 revenue miss, weak guidance
Q3 results missed revenue by $50M with a reported $1.65B. Non-GAAP EPS narrowly beat by a penny at $0.13, but GAAP EPS missed by as much at $0.09. Computing and Graphics missed consensus with $938M in revenue (+12% Y/Y, -14% Q/Q) compared to the $1.05B estimate. On the year, growth was driven by Ryzen desktop and mobile product sales, partly offset by lower graphics sales.

The other notable news is that Intel publicly addressed fake news from a well known rumor site claiming that Intel 10nm had been cancelled. It has been discussed on SemiWiki in detail amongst actual working semiconductor professionals who found it to be fake news. The rumor site of course still stands by the report and that pretty much sums up the state of American media today. Thumbs up to Intel on this one. Let’s hope a legal response is being considered.

SemiAccurate has learned that Intel just pulled the plug on their struggling 10nm process. Before you jump to conclusions, we think this is both the right thing to do and a good thing for the company.

Intel News (@intelnews), 8:42 AM – Oct 22, 2018:
Media reports published today that Intel is ending work on the 10nm process are untrue. We are making good progress on 10nm. Yields are improving consistent with the timeline we shared during our last earnings report.


Update October 22, 2018@3:30pm: Intel has denied ending 10nm on Twitter. The full tweet is, “Media reports published today that Intel is ending work on the 10nm process are untrue. We are making good progress on 10nm. Yields are improving consistent with the timeline we shared during our last earnings report.” SemiAccurate stands by its reporting.

Also read:
Intel Slips 10nm for the Third time?
Intel delays mass production of 10nm CPUs to 2019
Intel 10nm process problems — my thoughts on this subject
Kaizad Mistry on Intel’s 10 nm Technology (PDF)


Is Your BMW Secure?

by Roger C. Lanctot on 10-28-2018 at 7:00 am

The cybersecurity of automobiles has become an increasingly critical issue in the context of autonomous vehicle development. While creators of autonomous vehicles may have rigorous safety and testing practices, these efforts may be for naught if the system is compromised by ethical or unethical hackers.

Establishing cybersecurity in a motor vehicle is a daunting proposition. Cars are exposed in unprotected areas such as parking garages and public roadways much of the time they are in operation. Cars are also increasingly connected to wireless cellular networks and nearly all cars built after 1996 are equipped with an OBD-II diagnostic port enabling physical access to vehicle systems.

The proliferation of smartphone connection solutions such as Android Auto, Apple CarPlay, the CCC Consortium’s MirrorLink, and the SmartDeviceLink Consortium’s SDLink has also opened a path to cybersecurity vulnerability. All of these attack surfaces were used by Tencent’s Keen Security Labs when the organization identified 14 vulnerabilities in BMW vehicles earlier this year.

It is hardly shocking that Keen found these vulnerabilities. What is shocking is BMW’s response.

As a member of the Auto-ISAC, based in the U.S., BMW was obliged to report vulnerabilities to the membership – encompassing upwards of 50 car companies and their suppliers – within 72 hours. Instead, BMW waited more than three months. (Note: It is possible that the part of BMW that was notified of the hack by Keen was not in touch with the BMW executives representing the company within the Auto-ISAC.)

During that time, between notification by Keen and notification of the Auto-ISAC, BMW worked directly with Keen engineers and scientists to remedy the flaws found by Keen. In fact, there are multiple videos available online that describe the details of the hacks and the efforts to correct them – which included over-the-air software updates, a capability that reflected BMW’s design foresight.

BMW concluded the episode by giving Keen the first ever BMW Group Digitalization and IT Research Award and pledging to collaborate closely with Keen in the future. BMW was Keen’s second “victim.”

Two years ago, Keen remotely hacked a Tesla Model S, also resulting in fixes from Tesla delivered via over-the-air software updates. Keen performed a second Tesla hack a year later, and ultimately Keen’s parent Tencent took a 5% stake in Tesla.

It’s not clear whether Tesla was a member of the Auto-ISAC at the time of the Keen hacks or whether it reported those hacks in a timely manner. But there are lessons to be learned from both hacks.

1. Even the most sophisticated cars designed by some of the cleverest engineers in the industry have been found to be vulnerable to physical and remote hacks;

2. In a world where cars are increasingly driven based on the guidance of software code, cybersecurity is suddenly an essential concern for which there is no immediate, obvious fix;

3. Over-the-air software update technology is a key part of the solution;

4. Car companies must report cybersecurity attacks and vulnerabilities in a timely manner – mainly because so many components and so much code are shared across multiple car makers;

5. Car makers are obliged to constantly test their own systems and foster bug bounty programs and ethical hacking of their own systems to identify vulnerabilities in a proactive manner.

Unlike cybersecurity hygiene for mobile devices, consumer electronics or desktop computers, car makers cannot wait until they are hacked to respond. Car makers must be in a constant state of cybersecurity vigilance and testing.

This need is reflected in a recent announcement from Karamba Security. The company has launched ThreatHive, which deploys a worldwide set of hosted automotive ECUs in a simulated “car-like” environment for automotive software system integrators.

According to Karamba: “These ECU software images are automatically monitored to expose automobile attack patterns, tools, and vulnerabilities in the ECU’s operating system, configuration and code.” In other words, Karamba is embedding pen testing of systems into the development cycle of automotive systems.

The Karamba solution reflects the fact that car makers cannot wait for an intrusion and the lengthy product development life cycle requires a means of hardening automotive systems prior to market launch. As for automotive cybersecurity generally or the security of a given BMW particularly, cars may never be fully or certifiably cybersecure.

Car makers need to come clean with their industry brethren via organizations such as the Auto-ISAC and, ultimately, must be honest with their customers. If BMW knows my BMW is insecure, they better let me know and let me know how they are going to or how they have fixed that vulnerability.

In the video describing the remediation a BMW engineer says that the corrective measures are “transparent” to the vehicle owner who “will not notice the difference.” Unfortunately, BMW appears to have misunderstood the meaning of “transparent.” When correcting cybersecurity flaws, car makers must disclose, not hide, their work to protect the consumer. That may be the biggest lesson of all from the Keen Security Lab hack of BMW and may be one of the more difficult obligations for the industry to accept.


Semiconductor stocks free fall as bad news gets worse

Semiconductor stocks free fall as bad news gets worse
by Robert Maire on 10-26-2018 at 12:00 pm

Semiconductor stocks have had another significant down leg as the bad news continues to pile on. Bad news in this case doesn’t come in threes; it comes in droves. The TI news is perhaps the scariest, as TI is a rather broad-based supplier of semiconductors that has fared better than more pure-play chip suppliers. TI gave weak, broad-based guidance, echoing TSMC’s less-than-stellar predictions last week.

The “chip flu” that started in the memory sector has now spread into a full-blown epidemic: a cyclical downturn. Analysts continue to capitulate even though many stocks are off by close to 50%; the horse bolted through the open barn doors months ago. Very few analysts are left to capitulate, as the remainder are “perma-bulls” who never go negative regardless of the news.

We are quickly closing in on an “it can’t get much worse” scenario. The only thing worse would be a trade war, which would be the “coup de grace” for the industry.

We would remind investors that “bad” is a relative term. In the early cyclical days of the semiconductor industry, almost every company lost money in a cyclical downturn. Now most everyone is making money, just less of it, yet the stocks behave like we are on our way to red ink.

We are only part way through earnings season so that negative flow of news is not yet over. We can’t imagine much positive news that will come out of companies yet to report.

The good thing about all this bad news is that sooner or later we will be so bad that things can only get better, we may not be far from that point.

We still don’t know the shape of the down cycle, whether it’s a “U”, a canoe, or an “L” shaped bottom; it’s obviously not “V” shaped.

AMAT is almost at the $30 price point we suggested. Lam is not that far from our $130 view. KLAC has broken through $90. We had said for a long while that AMD was way overdone and it was off by 9% today as it plummets back to earth.

With current sentiment, even good news and good guidance will not elicit a positive reaction in the stocks.

At this point the end of the quarterly reporting season can’t come soon enough to slow the decline.

Is Samsung a canary in a tech coal mine?
Earlier in 2018, Samsung essentially halted capex spending for its display business. This was followed shortly by their memory capex “push out” (maybe now turned into a cancellation).

Our reaction to these data points was much more negative than most, as we view Samsung as one of the smarter capex spenders in tech, and their slowdown had very ominous overtones that are now playing out.

Samsung likely knows better than any tech company, even Apple, what demand looks like. They have the broadest-based, consumer-facing tech exposure of any company in the world. They sell everything from TVs and dishwashers to smartphones and key components for frenemies like Apple.

If anyone would see a tech downturn coming it would be Samsung
They seemed to be voting with their feet: they were the first to cut capex spending when things seemed so good they couldn’t get any better. They were obviously right in their projections; we don’t think it was dumb luck or coincidence.

The question now becomes, will they also be one of the first ones to predict an up turn out of the current down cycle? We would certainly bet on it.

Right now the Samsung canary may be croaking but we would be listening for an improving tone as a guide for the tech sector, just not any time soon.

China trade safe till after elections?
Given that the stock markets are getting whacked, we think it’s a reasonable assumption that the administration won’t do anything related to trade, as that would pour gasoline on an already large bonfire. This is not to suggest that the administration acts rationally or does what is expected, but rather that it is more preoccupied with the election and stumping for candidates. Right now there are many other things ahead of China trade on the administration’s to-do list.

The stocks at a “foggy bottom”
We could see another dead cat bounce in the stocks after today’s sell-off, but we don’t think we are yet at the bottom. There is still a lot of noise and confusion, coupled with uncertainties like China, that will likely keep downward pressure on the stocks. We need to clear away some of the confusion, and at least get some of the trade uncertainty put to bed, before we can have a more stable upward move in the stocks. It’s just not time yet.


Carl Icahn Activist Activities

Carl Icahn Activist Activities
by Daniel Nenni on 10-26-2018 at 7:00 am

The “20 Questions with Wally Rhines” series continues

Carl Icahn is a remarkably charming person. You might expect him to be a mean, aggressive adversary but he actually jokes about his foibles, tells stories about interesting people and gently poses questions. “I thought Jerry Yang just didn’t want to sell his Yahoo baby to Microsoft”, Carl related. “So I bought a few hundred million of Yahoo stock and called Steve Ballmer, telling him we could make a deal. Steve said Microsoft had moved on. And you know, after my tenth call to him, I began to think they really had moved on”, quipped Carl. This seemed to relax some of the tension in the room but I remembered my rehearsed preparation for the meeting.

There is an entire cottage industry of consultants who train executives in the art of dealing with Carl. Mine was a day of training from one of the best firms, plus lots of study. More than 25 MS and PhD theses have been written analyzing Icahn’s tactics. Unlike Jeff Smith of Starboard and Jesse Cohn of Elliott Associates, both of whom I’ve dealt with, Carl is uniquely different: less analytics and lots of gut feel.

Before entering Carl’s office, I knew what the room would look like, where I would be asked to sit (with the sun shining in my eyes), how he would start the conversation, what he would try to establish during the meeting and exactly what I should try to achieve. The year was 2010 and Icahn Associates had acquired over 10% of the common stock of Mentor Graphics. They planned to continue buying but were stopped by our “poison pill” that limited them to a 15% ownership. Donald Drapkin of Casablanca Capital followed Carl’s lead and began acquiring Mentor stock as well as appearing on television, as Carl was doing, to blast the Mentor management.

And then the proxy fight followed, with three nominees from Icahn Associates to replace the most senior Mentor Directors. There’s nothing like a proxy fight to consume time, upset employees and customers, and challenge the patience of a CEO. Every word and every slide that the company management communicates to anyone must be publicly disclosed in an SEC filing the next day. And each of these will be scrutinized for absolute accuracy. On the other side, the activist is free to make baseless accusations, misrepresent facts and generally stimulate unrest among shareholders and the public. Rules for a proxy fight clearly favor the activist and are not likely to be changed. The company is legally prohibited (in our case by court injunction) from explaining to shareholders how to split their ballots if they want to vote for less than all the proposed nominees of the activist. The result: Companies frequently negotiate a compromise with the activist, adding one or more activist-sponsored directors to their list of nominees. Some, like Mentor, fight the good fight but usually lose, as we did.

Then the challenge begins of managing a company when new directors will vote against most things that management proposes. In addition, much of the effort of the company is now directed at providing analyses for whatever objective the activist is promoting. In our case, that was the idea that Mentor should be sold or, at the very least, split into pieces to facilitate a sale.

And then there are the “shareholder” lawsuits that follow. Mentor spent hundreds of thousands of dollars defending a shareholder lawsuit claiming that we had improperly turned down an offer (which was actually not an offer) to buy the company for $18 per share. Through most of the years that the lawsuit continued, with depositions of the Directors and much of management, the stock was selling for more than $18 per share. If we lost, I wondered if the shareholders who were supposedly harmed would be required to pay us the difference between the $18 per share and the $20+ per share that their stock was now worth.

In most cases I’ve observed, the new Board members begin to understand over time why the other Board members and management have made the decisions they have made. Divergent director opinions gradually begin to converge. At the next Christmas after the proxy fight, I received an engraved bottle of Johnny Walker Blue scotch from Carl with the words, “NOT FOR USE AT BOARD MEETINGS”. At a subsequent Christmas, after our stock price had increased substantially, I received one that said, “TO BE USED AT BOARD MEETINGS”. Of course, I had to donate the bottles to charities or pay compensation to the company to avoid questionable receipt of a gift (Figure One).

For Mentor and Icahn Associates, the ending was good. The Icahn stock appreciated from a purchase price near $9 to a peak of over $25 and Icahn Associates more than doubled its investment when Mentor bought back half the stock at $18.50 and Icahn sold the rest. We discovered things about our financial and business structure that we might not have investigated if we had not been stimulated by our new Directors’ demands. Although two of the three Icahn Directors were not re-elected, the third, David Schecter, was a strong contributor to the Board and we were sorry when he resigned.

The lesson for companies that come under attack? Continue to do what is best for your shareholders and resist acting in the interest of a minority shareholder just to reduce the pain of conflict. And keep an open mind; many of the themes that activists promote have merit even if they are driven by incomplete information. Ultimately, we all have the goal of increasing shareholder value and smart people working toward the same goal can usually find common ground.

The 20 Questions with Wally Rhines Series


IBIS-AMI Model Generation Simplified

IBIS-AMI Model Generation Simplified
by Tom Dillinger on 10-25-2018 at 12:00 pm

The increasing demand for data communication throughput between system components has driven the requirement for faster SerDes IP data rates. The complexity of the transmit (Tx) and receive (Rx) signal conditioning functions has correspondingly evolved. As a result, the simulation methodology for SerDes electrical interface verification needs to encompass the entire signal path, while maintaining simulation efficiency. To best address system modeling requirements with the wide diversity of SerDes implementations, the electronics industry adopted a new modeling approach – the I/O Buffer Information Specification-Algorithmic Modeling Interface (IBIS-AMI) – as maintained by the IBIS Open Forum consortium (link).

Background

A serial lane is used to transmit data over a differential wire pair, where the system physical backplane (or other PCB motherboard + daughter card topology) represents a significant electrical “distance”. Multiple lanes are commonly integrated into a “link” – e.g., a SerDes IP block may provide a data communications link comprised of 8 lanes.

Each data bit in the serial stream is denoted in the time domain as a unit interval (UI) – the electrical topology of the lane between components will incorporate many UIs. For example, at very high data rates, a UI could be comparable in physical dimension to a through-board signal via. As a result, to maintain a suitably low bit error rate (BER), the accuracy of model extraction and simulation is paramount.
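To make the UI-to-trace-length comparison concrete, here is a back-of-envelope sketch; the data rate and propagation velocity are illustrative assumptions, not figures from the article:

```python
# Sketch: relate a unit interval (UI) to physical trace length.
# The 25 Gbps rate and propagation velocity are illustrative assumptions.
data_rate = 25e9                   # bits per second (NRZ, 1 bit per UI)
ui = 1.0 / data_rate               # UI duration in seconds -> 40 ps
v_prop = 1.5e8                     # ~c/2, typical stripline velocity (m/s)
ui_length_mm = v_prop * ui * 1e3   # physical length of one UI on the trace
print(f"UI = {ui*1e12:.0f} ps, ~{ui_length_mm:.1f} mm of trace")
```

At these rates one UI occupies only a few millimeters of copper, so board features like vias become a meaningful fraction of a bit.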

As illustrated below, there will be significant frequency-dependent insertion loss attenuation (IL) in the transmitted signal through the board trace materials and stack-up (e.g., FR-4, Megtron).


Figure 1. Example of insertion loss versus frequency.

The signal will also be subjected to reflection loss (RL) from impedance mismatches, which thus also impact IL. Additionally, near-end and far-end crosstalk losses (NEXT, FEXT) from neighboring switching activity also degrade the signal. As the (effective) clock for the serial data is embedded in the signal transitions, any jitter in the time reference for each UI further complicates the clock-data recovery (CDR) at the Rx end.

Early SerDes electrical analysis used a simplified IBIS electrical model of the Tx driver and Rx receiver, merged with the extracted (S-parameter based) loss model of the SerDes lane. SerDes architects were then responsible for merging this analysis with the equalization functionality in the Tx and Rx blocks.

Figure 2. Example of a Tx driver and Rx receiver IBIS model merged with the serial lane; the channel response characteristics are derived from the impulse response.
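The impulse-response view in Figure 2 can be sketched numerically: convolve a one-UI pulse with a channel impulse response and measure how much signal energy leaks into later UIs as inter-symbol interference. The impulse response below is a synthetic stand-in for the S-parameter-derived channel model, purely for illustration:

```python
import numpy as np

# Sketch: pulse response of a lossy channel. The exponential-decay impulse
# response is a synthetic stand-in for the extracted S-parameter model.
samples_per_ui = 32
t = np.arange(0, 8 * samples_per_ui)

h = np.exp(-t / (1.5 * samples_per_ui))   # dispersive decay (illustrative)
h /= h.sum()                              # normalize to unit DC gain

pulse = np.ones(samples_per_ui)           # one isolated '1' bit, one UI wide
pulse_response = np.convolve(pulse, h)

# Energy near the main cursor vs. energy leaking into later UIs (ISI)
main = pulse_response[:2 * samples_per_ui].sum()
isi_tail = pulse_response[2 * samples_per_ui:].sum()
print(f"main-cursor energy {main:.2f}, ISI tail {isi_tail:.2f}")
```

The non-zero tail is exactly the ISI that Tx and Rx equalization must cancel.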

The introduction of the IBIS-AMI specification enabled architects to develop a comprehensive simulation model. EDA companies extended their signal integrity simulation tools in support of this additional model capability. Yet, the adoption of IBIS-AMI models by SerDes IP developers to release to SoC customers was progressing slowly.

IBIS-AMI Model Generation
I had the opportunity to chat with Ken Willis, Product Engineering Architect at Cadence about the IBIS-AMI modeling features, and the novel developments at Cadence to help accelerate the adoption rate.

Ken began, “Since the introduction of PCIe (@ 2.5Gbps), designers have understood the critical requirement for full channel simulation of serial links, including equalization models. The Algorithmic Modeling Interface definition was added to the basic IBIS specification. However, IBIS-AMI model generation requires a unique skill set – part SerDes architect, part signal integrity engineer, part software developer. The need for this diverse expertise impeded the adoption of IBIS-AMI by IP developers. The Sigrity System SI team at Cadence recognized this issue, and released the AMI Builder.” (video link).

Ken continued, “AMI Builder provides a wizard-based flow. SerDes designers utilize a library of algorithms to define their architecture. These algorithmic building blocks have been developed in close collaboration with the Cadence internal IP team. Each wizard includes a broad set of implementation options and parameters for the SerDes IP designer to select. The generated IBIS-AMI model is directly compilable into the Sigrity SI simulation platform.”

For example, consider the signal conditioning typically incorporated into the Tx side of the SerDes lane. A data value transition includes a significant spectral energy at high frequencies. Due to the IL characteristics of the signal trace, this energy is severely attenuated – to compensate, an “emphasis” is applied to the transition edge. Also, there will be remaining signal energy at the Rx after the UI time interval – a phenomenon denoted as “inter-symbol interference” (ISI). To reduce this energy, both “pre-cursor” and “post-cursor” signal energy derived from adjacent data values may be incorporated into the Tx waveform provided to the driver.

A feed-forward equalizer (FFE) SerDes block is typically included in the architecture, with weighted “taps” representing the contribution of successive cursors, which ultimately add/subtract to the driver current.
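The tap weighting described above amounts to a short convolution of the symbol stream with the tap coefficients. A minimal sketch of a 3-tap FFE follows; the tap values are illustrative, not from any real IP configuration:

```python
import numpy as np

# Sketch of a 3-tap FFE: pre-cursor, main and post-cursor weights applied
# to the NRZ symbol stream. Tap values are illustrative assumptions.
taps = np.array([-0.1, 0.8, -0.2])   # [pre, main, post]

def ffe(symbols, taps):
    # Each output sample is a weighted sum of the next, current and
    # previous symbols -- a convolution with the tap vector.
    return np.convolve(symbols, taps, mode="same")

bits = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)
symbols = 2 * bits - 1               # NRZ encoding: 0 -> -1, 1 -> +1
out = ffe(symbols, taps)
print(out)  # transition samples are emphasized vs. the settled run of 1s
```

The transition sample comes out larger than the settled mid-run level, which is the "emphasis" used to pre-compensate the channel's high-frequency attenuation.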


Figure 3. FFE signal emphasis at the Tx end of the SerDes. The top figures illustrate the data waveform from the FFE. The schematic in the bottom figure illustrates a simple emphasis circuit with the current and delayed data input “taps” — the “delayed” data also influences the signal current at the differential outputs.

The AMI Builder library includes a Tx FFE algorithm, with a wizard to assist with defining the options for the cursor tap configuration and tap coefficients.

Figure 4. The AMI Builder FFE wizard options.

A general SerDes architecture typically includes several blocks, as illustrated below.

Figure 5. SerDes architecture example (PISO: parallel-in, serial-out; SIPO: serial-in, parallel out)

The signal conditioning at the Rx typically includes:

 

  • AGC (automatic gain control): amplifies the signal magnitude after Tx equalization and trace losses
  • CTLE (continuous-time linear equalizer): an example of an analog CTLE filter is shown below, both passive and active; digital CTLE implementations are also common
  • CDR (clock data recovery): using the time reference at the Rx, the clock phase is adjusted to the optimal capture point in the data UI time window


Figure 6. Examples of simple analog CTLE filters; a typical active filter response curve is shown.

The common eye diagram depicts equalized (superimposed) data waveforms, illustrating magnitude and phase (jitter) variations. The CDR aligns the capturing clock edge at the horizontal center of the eye; the vertical maximum of the eye is compared to the (differential) voltage margin associated with the Rx clocked “slicer” circuit that stores the incoming data value.
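Conceptually, an eye diagram is just the waveform folded into fixed-width windows and overlaid. A minimal sketch, using an ideal (noise- and jitter-free) NRZ waveform as a stand-in for a simulated post-equalization Rx waveform:

```python
import numpy as np

# Sketch: build eye-diagram traces by folding a sampled waveform into
# two-UI-wide segments. The ideal NRZ waveform is an illustrative stand-in
# for a real simulated Rx waveform.
samples_per_ui = 16
bits = np.tile([0, 1, 1, 0, 1, 0, 0, 1], 8)          # fixed test pattern
wave = np.repeat(2.0 * bits - 1.0, samples_per_ui)   # ideal NRZ levels

window = 2 * samples_per_ui
n_traces = len(wave) // window
traces = wave[: n_traces * window].reshape(n_traces, window)

# Vertical eye opening at the UI center (where the CDR places the clock)
center = traces[:, window // 2]
eye_height = center[center > 0].min() - center[center < 0].max()
print(f"{n_traces} traces, eye height {eye_height:.2f}")
```

With real loss, jitter and noise the center samples scatter, and the measured eye height against the slicer's voltage margin is what determines BER.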


Figure 7. Eye diagram after Rx signal equalization

Figure 8. Illustration of the AMI Builder wizards for Rx blocks

Ken highlighted, “The IBIS-AMI spec supports a variety of representations of SerDes blocks. As an example, for the CTLE filter, designers could provide a text file with: the pole-zero rational functions in s-parameter format, a mag/frequency description, or the step function response. AMI Builder will synthesize and plot the filter response.”
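A pole-zero CTLE description of the kind Ken mentions can be evaluated directly. The sketch below uses an illustrative one-zero, two-pole transfer function, normalized to 0 dB at DC, to show the characteristic high-frequency peaking; the pole/zero values are assumptions, not from any real model file:

```python
import math

# Sketch: one-zero, two-pole CTLE magnitude response
#   H(s) = g * (s + wz) / ((s + wp1) * (s + wp2))
# Pole/zero frequencies are illustrative assumptions.
wz = 2 * math.pi * 1e9
wp1 = 2 * math.pi * 8e9
wp2 = 2 * math.pi * 12e9
g = wp1 * wp2 / wz                    # normalize DC gain to 1 (0 dB)

def h_mag_db(f_hz):
    s = 1j * 2 * math.pi * f_hz
    h = g * (s + wz) / ((s + wp1) * (s + wp2))
    return 20 * math.log10(abs(h))

for f in (1e8, 4e9, 8e9):
    print(f"{f/1e9:4.1f} GHz: {h_mag_db(f):6.2f} dB")
```

The zero below the poles gives the mid-band boost that compensates the channel's frequency-dependent insertion loss.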


Figure 8. AMI Builder CTLE wizard

Ken continued, “Architects have the flexibility to construct the AMI Builder model to fit their specific configuration. For example, the positioning of the CTLE could be swapped with the AGC. If there is a need for a user-defined block model not present in the library, designers can provide their own C-code – say, for an AGC with unique compression or magnitude clipping characteristics.”

Figure 9. IBIS-AMI model interface APIs

I asked Ken, “Once a SerDes IP developer has solidified their internal implementation with AMI Builder, how is the IBIS-AMI model released to the end customer?”

Ken replied, “The IP developer defines which parameters in the IBIS-AMI model are reserved and which are editable. For example, end user-defined parameters could range from SerDes IP configuration specifics to the overall system jitter impairment.”


Figure 10. Illustration of reserved and user-defined parameters in the IBIS-AMI model

“What’s ahead for IBIS-AMI and the AMI Builder?”, I inquired.

Ken replied, “The first wave of IBIS-AMI users are SerDes IP developers and customers. The advent of very high-speed DDRx parallel interfaces also requires signal equalization, and thus comparable approaches. The modeling of the parallel interface clock strobe compared to serial clock recovery requires attention, to accurately represent the analog strobe waveform as the timing reference.”

(For example, see the following DesignCon 2018 paper on DDR5 AMI model generation, describing the collaboration between Micron Technology and Cadence – link.)

“Also, the evolution of a SerDes lane to a pulse-amplitude modulation waveform for multiple-level encoding will require AMI modeling focus – for PAM-4, equalization models need to correct 4 signal levels.”, Ken added.
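To illustrate why PAM-4 forces equalization models to handle four levels, here is a minimal Gray-coded bit-to-symbol mapping; the level set and Gray ordering are conventional assumptions, not taken from the interview:

```python
# Sketch: Gray-coded PAM-4 mapping -- two bits per symbol, four levels.
# The normalized level set and Gray ordering are conventional assumptions.
GRAY_PAM4 = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}

def pam4_encode(bits):
    # Pair up consecutive bits and map each pair to one of four levels.
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)  # one symbol per two bits: four amplitude levels, not two
```

With adjacent levels only one-third of the NRZ swing apart, each of the three "eyes" is far smaller, which is exactly why per-level equalization accuracy matters.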

IBIS-AMI models are now de rigueur for SerDes IP system integration. SerDes designers need to incorporate this model (and the related configuration documentation) into their IP customer enablement deliverables. Yet, the expertise needed to prepare and verify this complex model requires diverse skills – kudos to Cadence for providing the automation aids to expedite development of IBIS-AMI models.

-chipguy

 


The Cloud-Edge Debate Replays Inside the Car

The Cloud-Edge Debate Replays Inside the Car
by Bernard Murphy on 10-25-2018 at 7:00 am

I think we’re all familiar with the cloud/edge debate on where intelligence should sit. In the beginning the edge devices were going to be dumb nodes with just enough smarts to ship all their data to the cloud where the real magic would happen – recognizing objects, trends, the need for repair, etc. Then we realized that wasn’t the best strategy: for power, because communication is expensive; for security and privacy, because the attack surface becomes massive; and for connectivity, because when the connection is down, the edge node becomes an expensive paperweight.

Turns out the same debate is playing out inside the smart car, between the sensors on the edge (cameras, radars, ultrasonics, even LIDAR) and the central system, and for somewhat similar reasons. Of course in the car, most of the traffic is going through wired connections, most likely automotive Ethernet. But wired or wireless doesn’t change these concerns much, particularly when the edge nodes can generate huge volumes of raw data. If you want to push all of that to a central AI node, the Ethernet will have to support many Gbps. The standard is designed with that in mind, but keep adding sensors around the car and you have to wonder where even this standard will break down.

Hence the growing interest in moving more intelligence to the edge. If a camera for example can do object recognition and send object lists to the central node rather than a raw data stream, bandwidth needs should be significantly less onerous. Just like putting more intelligence in IoT devices, right? Well – the economics may be quite different in a car. First, a dumb camera may be a lot cheaper than a smart camera, so the initial cost of a car outfitted with these smart sensors may go up quite a bit.
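A rough back-of-envelope calculation shows the scale of the saving; the camera format, frame rate and object-list size are all illustrative assumptions:

```python
# Sketch: raw camera stream vs. object-list bandwidth. All parameters
# (resolution, frame rate, object-list size) are illustrative assumptions.
width, height, bytes_per_pixel, fps = 1920, 1080, 2, 30   # raw YUV422-like
raw_bps = width * height * bytes_per_pixel * fps * 8

objects_per_frame, bytes_per_object = 32, 64   # class, bounding box, velocity
list_bps = objects_per_frame * bytes_per_object * fps * 8

print(f"raw: {raw_bps/1e9:.2f} Gbps, object list: {list_bps/1e6:.3f} Mbps")
```

Even with generous object lists, the edge-processed stream is roughly three orders of magnitude lighter than the raw pixel stream.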

There’s another consideration. A lot of these sensors sit in the car’s fenders/bumpers. What’s one of the most common bodywork repairs on a car? Replacing the fender. In a traditional car, labor and painting aside, this may cost somewhere in the range of $300 to $700. That goes up to over $1,000 if the fender includes lights and (dumb) sensors. Make those sensors smart and the cost will go up even further. So adding intelligence to sensors in a car isn’t the obvious win it is in IoT and handset devices.

Safety requirements create some new challenges in this “cloud” versus edge use-case. Assuring an acceptable level of safety requires a lot of infrastructure, such as duplication and lock-step computing in the hardware, but also significant work in the software. One argument has it that this is best centralized where it can be most carefully managed and ensured, relying on only modest capabilities in edge nodes to avoid heavy costs in duplicating all that infrastructure.

But that’s not ideal either. If everything is centralized, guaranteeing response times for safety-critical functions becomes more challenging, particularly when dealing with huge volumes of raw data traffic. If instead sensors have more local intelligence, you can take all the necessary functional safety steps within such a sensor, and since you’re communicating much less data to the central node, safety measures in the interconnect become less costly.

In some cases the OEM may want both object lists and raw data. What?! Think about a forward-facing camera. Object recognition in this view is obviously useful (pedestrians, wildlife, etc.) for triggering corrective steering, emergency braking and so on. But it may also be useful to feed the raw view with object identification to the driver’s monitor, or to feed an enhanced view in poor visibility conditions, which potentially requires more AI horsepower than an edge camera can provide (such as fusion with other sensors).

Perhaps by now you are thoroughly confused. You should be. This is not a domain where all the requirements are settled and component providers merely have to build to the spec. Like most aspects of autonomy or high-automation, advanced vision and machine learning, the guidelines are still being figured out. OEMs are finding their own paths through the possibilities, creating need for flexible solutions from edge node/sensor providers – dumb, intelligent or a bit of both (to paraphrase Peter Quill in Guardians of the Galaxy). CEVA can help those product companies build in that flexibility. You can learn more HERE.