

Carl Icahn Activist Activities
by Daniel Nenni on 10-26-2018 at 7:00 am

The “20 Questions with Wally Rhines” series continues

Carl Icahn is a remarkably charming person. You might expect him to be a mean, aggressive adversary, but he actually jokes about his foibles, tells stories about interesting people and gently poses questions. “I thought Jerry Yang just didn’t want to sell his Yahoo baby to Microsoft,” Carl related. “So I bought a few hundred million dollars of Yahoo stock and called Steve Ballmer, telling him we could make a deal. Steve said Microsoft had moved on. And you know, after my tenth call to him, I began to think they really had moved on,” quipped Carl. This seemed to relax some of the tension in the room, but I remembered my rehearsed preparation for the meeting.

There is an entire cottage industry of consultants who train executives in the art of dealing with Carl. Mine was a day of training from one of the best firms, plus lots of study. More than 25 MS and PhD theses have been written analyzing Icahn’s tactics. Unlike Jeff Smith of Starboard and Jesse Cohn of Elliott Associates, both of whom I’ve dealt with, Carl is unique: less analytics and lots of gut feel.

Before entering Carl’s office, I knew what the room would look like, where I would be asked to sit (with the sun shining in my eyes), how he would start the conversation, what he would try to establish during the meeting and exactly what I should try to achieve. The year was 2010 and Icahn Associates had acquired over 10% of the common stock of Mentor Graphics. They planned to continue buying but were stopped by our “poison pill” that limited them to a 15% ownership. Donald Drapkin of Casablanca Capital followed Carl’s lead and began acquiring Mentor stock as well as appearing on television, as Carl was doing, to blast the Mentor management.

And then the proxy fight followed, with three nominees from Icahn Associates to replace the most senior Mentor Directors. There’s nothing like a proxy fight to consume time, upset employees and customers, and challenge the patience of a CEO. Every word and every slide that the company management communicates to anyone must be publicly disclosed in an SEC filing the next day. And each of these will be scrutinized for absolute accuracy. On the other side, the activist is free to make baseless accusations, misrepresent facts and generally stimulate unrest among shareholders and the public. Rules for a proxy fight clearly favor the activist and are not likely to be changed. The company is legally prohibited (in our case by court injunction) from explaining to shareholders how to split their ballots if they want to vote for less than all the proposed nominees of the activist. The result: Companies frequently negotiate a compromise with the activist, adding one or more activist-sponsored directors to their list of nominees. Some, like Mentor, fight the good fight but usually lose, as we did.

Then the challenge begins of managing a company when new directors will vote against most things that management proposes. In addition, much of the effort of the company is now directed at providing analyses for whatever objective the activist is promoting. In our case, that was the idea that Mentor should be sold or, at the very least, split into pieces to facilitate a sale.

And then there are the “shareholder” lawsuits that follow. Mentor spent hundreds of thousands of dollars defending a shareholder lawsuit claiming that we had improperly turned down an offer (which was actually not an offer) to buy the company for $18 per share. Through most of the years that the lawsuit continued, with depositions of the Directors and much of management, the stock was selling for more than $18 per share. If we lost, I wondered if the shareholders who were supposedly harmed would be required to pay us the difference between the $18 per share and the $20+ per share that their stock was now worth.

In most cases I’ve observed, the new Board members begin to understand over time why the other Board members and management have made the decisions they have made. Divergent director opinions gradually begin to converge. At the next Christmas after the proxy fight, I received an engraved bottle of Johnnie Walker Blue scotch from Carl with the words, “NOT FOR USE AT BOARD MEETINGS”. At a subsequent Christmas, after our stock price had increased substantially, I received one that said, “TO BE USED AT BOARD MEETINGS”. Of course, I had to donate the bottles to charities or pay compensation to the company to avoid questionable receipt of a gift (Figure One).

For Mentor and Icahn Associates, the ending was good. The Icahn stock appreciated from a purchase price near $9 to a peak of over $25, and Icahn Associates more than doubled its investment when Mentor bought back half the stock at $18.50 and Icahn sold the rest. We discovered things about our financial and business structure that we might not have investigated if we had not been stimulated by our new Directors’ demands. Although two of the three Icahn Directors were not re-elected, the other one, David Schecter, was a strong contributor to the Board and we were sorry when he resigned.

The lesson for companies that come under attack? Continue to do what is best for your shareholders and resist acting in the interest of a minority shareholder just to reduce the pain of conflict. And keep an open mind; many of the themes that activists promote have merit even if they are driven by incomplete information. Ultimately, we all have the goal of increasing shareholder value and smart people working toward the same goal can usually find common ground.

The 20 Questions with Wally Rhines Series



IBIS-AMI Model Generation Simplified
by Tom Dillinger on 10-25-2018 at 12:00 pm

The increasing demand for data communication throughput between system components has driven the requirement for faster SerDes IP data rates. The complexity of the transmit (Tx) and receive (Rx) signal conditioning functions has correspondingly evolved. As a result, the simulation methodology for SerDes electrical interface verification needs to encompass the entire signal path, while maintaining simulation efficiency. To best address system modeling requirements with the wide diversity of SerDes implementations, the electronics industry adopted a new modeling approach – the I/O Buffer Information Specification-Algorithmic Modeling Interface (IBIS-AMI) – as maintained by the IBIS Open Forum consortium (link).

Background

A serial lane is used to transmit data over a differential wire pair, where the system physical backplane (or other PCB motherboard + daughter card topology) represents a significant electrical “distance”. Multiple lanes are commonly integrated into a “link” – e.g., a SerDes IP block may provide a data communications link comprising 8 lanes.

Each data bit in the serial stream is denoted in the time domain as a unit interval (UI) – the electrical topology of the lane between components will incorporate many UIs. For example, at very high data rates, a UI could be comparable in physical dimension to a through-board signal via. As a result, to maintain a suitably low bit error rate (BER), the accuracy of model extraction and simulation is paramount.
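As an illustrative back-of-the-envelope estimate (the data rate and dielectric constant below are assumed values, not taken from the article), the duration and physical extent of one UI follow directly from the lane's bit rate and the propagation velocity in the board material:

```python
# Assumed example values: 25 Gbps NRZ lane over an FR-4-like dielectric
data_rate = 25e9                  # bits per second
ui_seconds = 1.0 / data_rate      # one unit interval = 40 ps

c = 3.0e8                         # speed of light in vacuum, m/s
eps_r = 4.0                       # approximate relative permittivity of FR-4
v_prop = c / eps_r ** 0.5         # ~1.5e8 m/s in the dielectric

ui_length_mm = v_prop * ui_seconds * 1e3   # ~6 mm of trace per UI
```

At these assumed rates a single UI spans only a few millimeters of trace, which is indeed comparable to the barrel of a through-board via on a thick backplane.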

As illustrated below, there will be significant frequency-dependent insertion loss (IL) in the transmitted signal through the board trace materials and stack-up (e.g., FR-4, Megtron).


Figure 1. Example of insertion loss versus frequency.

The signal will also be subjected to reflection loss (RL) from impedance mismatches, which thus also impacts IL. Additionally, near-end and far-end crosstalk losses (NEXT, FEXT) from neighboring switching activity also degrade the signal. As the (effective) clock for the serial data is embedded in the signal transitions, any jitter in the time reference for each UI further complicates the clock-data recovery (CDR) at the Rx end.

Early SerDes electrical analysis used a simplified IBIS electrical model of the Tx driver and Rx receiver, merged with the extracted (S-parameter based) loss model of the SerDes lane. SerDes architects were then responsible for merging this analysis with the equalization functionality in the Tx and Rx blocks.

Figure 2. Example of a Tx driver and Rx receiver IBIS model merged with the serial lane; the channel response characteristics are derived from the impulse response.

The introduction of the IBIS-AMI specification enabled architects to develop a comprehensive simulation model. EDA companies extended their signal integrity simulation tools in support of this additional model capability. Yet, the adoption of IBIS-AMI models by SerDes IP developers to release to SoC customers was progressing slowly.

IBIS-AMI Model Generation
I had the opportunity to chat with Ken Willis, Product Engineering Architect at Cadence, about the IBIS-AMI modeling features and the novel developments at Cadence to help accelerate the adoption rate.

Ken began, “Since the introduction of PCIe (@ 2.5Gbps), designers have understood the critical requirement for full channel simulation of serial links, including equalization models. The Algorithmic Modeling Interface definition was added to the basic IBIS specification. However, IBIS-AMI model generation requires a unique skill set – part SerDes architect, part signal integrity engineer, part software developer. The need for this diverse expertise impeded the adoption of IBIS-AMI by IP developers. The Sigrity System SI team at Cadence recognized this issue, and released the AMI Builder.” (video link).

Ken continued, “AMI Builder provides a wizard-based flow. SerDes designers utilize a library of algorithms to define their architecture. These algorithmic building blocks have been developed in close collaboration with the Cadence internal IP team. Each wizard includes a broad set of implementation options and parameters for the SerDes IP designer to select. The generated IBIS-AMI model is directly compilable into the Sigrity SI simulation platform.”

For example, consider the signal conditioning typically incorporated into the Tx side of the SerDes lane. A data value transition includes a significant spectral energy at high frequencies. Due to the IL characteristics of the signal trace, this energy is severely attenuated – to compensate, an “emphasis” is applied to the transition edge. Also, there will be remaining signal energy at the Rx after the UI time interval – a phenomenon denoted as “inter-symbol interference” (ISI). To reduce this energy, both “pre-cursor” and “post-cursor” signal energy derived from adjacent data values may be incorporated into the Tx waveform provided to the driver.

A feed-forward equalizer (FFE) SerDes block is typically included in the architecture, with weighted “taps” representing the contribution of successive cursors, which ultimately add/subtract to the driver current.
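A minimal sketch of this tap arithmetic (the three tap weights are arbitrary illustrative values, not from any real SerDes): the FFE output for each UI is a weighted sum of the next (pre-cursor), current (main), and previous (post-cursor) symbols, which boosts energy at the transition edges:

```python
import numpy as np

# Illustrative 3-tap FFE: pre-cursor, main-cursor, post-cursor weights
taps = np.array([-0.1, 0.8, -0.2])

def ffe(symbols, taps):
    # Each output sample is a weighted sum of neighboring symbols;
    # with centered convolution, taps[0] weights the *next* symbol
    # (pre-cursor) and taps[2] the *previous* one (post-cursor).
    return np.convolve(symbols, taps, mode="same")

bits = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)
symbols = 2 * bits - 1      # NRZ levels: -1 / +1
tx = ffe(symbols, taps)     # transition UIs come out emphasized
```

With these weights, the first +1 after a transition is driven at 0.9 while the steady-state level settles to 0.5 – the emphasis that pre-compensates for the channel's high-frequency loss.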


Figure 3. FFE signal emphasis at the Tx end of the SerDes. The top figures illustrate the data waveform from the FFE. The schematic in the bottom figure illustrates a simple emphasis circuit with the current and delayed data input “taps” — the “delayed” data also influences the signal current at the differential outputs.

The AMI Builder library includes a Tx FFE algorithm, with a wizard to assist with defining the options for the cursor tap configuration and tap coefficients.

Figure 4. The AMI Builder FFE wizard options.

A general SerDes architecture typically includes several blocks, as illustrated below.

Figure 5. SerDes architecture example (PISO: parallel-in, serial-out; SIPO: serial-in, parallel-out)

The signal conditioning at the Rx typically includes:

 

  • AGC (automatic gain control): amplifies the signal magnitude after Tx equalization and trace losses
  • CTLE (continuous-time linear equalizer): an example of an analog CTLE filter is shown below, both passive and active; digital CTLE implementations are also common
  • CDR (clock data recovery): using the time reference at the Rx, the clock phase is adjusted to the optimal capture point in the data UI time window


Figure 6. Examples of simple analog CTLE filters; a typical active filter response curve is shown.

The common eye diagram depicts equalized (superimposed) data waveforms, illustrating magnitude and phase (jitter) variations. The CDR aligns the capturing clock edge at the horizontal center of the eye; the vertical maximum of the eye is compared to the (differential) voltage margin associated with the Rx clocked “slicer” circuit that stores the incoming data value.
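The construction of an eye diagram can be sketched in a few lines (hypothetical helper function and toy data, for illustration only): slice the sampled waveform into UI-length segments and superimpose them:

```python
import numpy as np

def fold_into_eye(waveform, samples_per_ui):
    # Reshape the long sampled waveform into one row per UI;
    # overlaying the rows produces the classic eye diagram.
    n_ui = len(waveform) // samples_per_ui
    return waveform[: n_ui * samples_per_ui].reshape(n_ui, samples_per_ui)

# Toy data: 4 UIs of 8 samples each
wave = np.arange(32, dtype=float)
eye = fold_into_eye(wave, 8)

# Eye height/width metrics come from min/max statistics over the rows
eye_min = eye.min(axis=0)
eye_max = eye.max(axis=0)
```

In a real flow the rows would come from the simulated channel response, and the vertical opening at the CDR-selected sampling phase would be compared against the slicer's voltage margin.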


Figure 7. Eye diagram after Rx signal equalization

Figure 8. Illustration of the AMI Builder wizards for Rx blocks

Ken highlighted, “The IBIS-AMI spec supports a variety of representations of SerDes blocks. As an example, for the CTLE filter, designers could provide a text file with: the pole-zero rational functions in s-parameter format, a mag/frequency description, or the step function response. AMI Builder will synthesize and plot the filter response.”
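As an illustration of the pole-zero form (the corner frequencies and normalization below are made-up placeholders, not AMI Builder defaults), a single-zero, two-pole CTLE transfer function can be evaluated directly:

```python
import numpy as np

# Assumed CTLE: H(s) = g * (s + wz) / ((s + wp1) * (s + wp2))
wz  = 2 * np.pi * 1e9       # zero at 1 GHz
wp1 = 2 * np.pi * 4e9       # poles at 4 GHz and 8 GHz
wp2 = 2 * np.pi * 8e9
g   = wp1 * wp2 / wz        # normalize for 0 dB gain at DC

def ctle_mag_db(f_hz):
    # Magnitude response in dB at frequency f_hz
    s = 1j * 2 * np.pi * f_hz
    h = g * (s + wz) / ((s + wp1) * (s + wp2))
    return 20 * np.log10(np.abs(h))
```

Between the zero and the poles the response peaks a few dB above the DC gain – exactly the high-frequency boost that counteracts the channel's insertion loss.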


Figure 8. AMI Builder CTLE wizard

Ken continued, “Architects have the flexibility to construct the AMI Builder model to fit their specific configuration. For example, the positioning of the CTLE could be swapped with the AGC. If there is a need for a user-defined block model not present in the library, designers can provide their own C-code – say, for an AGC with unique compression or magnitude clipping characteristics.”

Figure 9. IBIS-AMI model interface API’s

I asked Ken, “Once a SerDes IP developer has solidified their internal implementation with AMI Builder, how is the IBIS-AMI model released to the end customer?”

Ken replied, “The IP developer defines which parameters in the IBIS-AMI model are reserved and which are editable. For example, end user-defined parameters could range from SerDes IP configuration specifics to the overall system jitter impairment.”


Figure 10. Illustration of reserved and user-defined parameters in the IBIS-AMI model

“What’s ahead for IBIS-AMI and the AMI Builder?”, I inquired.

Ken replied, “The first wave of IBIS-AMI users are SerDes IP developers and customers. The advent of very high-speed DDRx parallel interfaces also requires signal equalization, and thus comparable approaches. The modeling of the parallel interface clock strobe, as compared to serial clock recovery, requires attention to accurately represent the analog strobe waveform as the timing reference.”

(For example, see the following DesignCon 2018 paper on DDR5 AMI model generation, describing the collaboration between Micron Technology and Cadence – link.)

“Also, the evolution of a SerDes lane to a pulse-amplitude modulation waveform for multiple-level encoding will require AMI modeling focus – for PAM-4, equalization models need to correct 4 signal levels,” Ken added.

IBIS-AMI models are now de rigueur for SerDes IP system integration. SerDes designers need to incorporate this model (and the related configuration documentation) into their IP customer enablement deliverables. Yet preparing and verifying this complex model requires diverse skills – kudos to Cadence for providing the automation aids to expedite development of IBIS-AMI models.

-chipguy

 



The Cloud-Edge Debate Replays Inside the Car
by Bernard Murphy on 10-25-2018 at 7:00 am

I think we’re all familiar with the cloud/edge debate on where intelligence should sit. In the beginning, the edge devices were going to be dumb nodes with just enough smarts to ship all their data to the cloud where the real magic would happen – recognizing objects, trends, need for repair, etc. Then we realized that wasn’t the best strategy: for power, because communication is expensive; for security and privacy, because the attack surface becomes massive; and for connectivity, because when the connection is down, the edge node becomes an expensive paperweight.

Turns out the same debate is playing out inside the smart car, between the sensors on the edge (cameras, radars, ultrasonics, even LIDAR) and the central system, and for somewhat similar reasons. Of course in the car, most of the traffic is going through wired connections, most likely automotive Ethernet. But wired or wireless doesn’t change these concerns much, particularly when the edge nodes can generate huge volumes of raw data. If you want to push all of that to a central AI node, the Ethernet will have to support many Gbps. The standard is designed with that in mind, but keep adding sensors around the car and you have to wonder where even this standard will break down.
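A quick sanity check with assumed numbers (the resolution, frame rate, and object-list size below are illustrative, not from any OEM spec) shows why raw sensor streams strain the network while object lists do not:

```python
# Assumed uncompressed automotive camera stream
width, height = 1920, 1080
fps = 30
bits_per_pixel = 24

raw_gbps = width * height * fps * bits_per_pixel / 1e9   # ~1.5 Gbps per camera

# Assumed object list: 100 objects x 64 bytes, refreshed 30 times a second
objlist_mbps = 100 * 64 * 8 * 30 / 1e6                   # ~1.5 Mbps
```

One uncompressed camera already consumes on the order of 1.5 Gbps, roughly a thousand times the bandwidth of an object list; multiply the raw figure by a dozen sensors and the pressure on the in-vehicle Ethernet is clear.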

Hence the growing interest in moving more intelligence to the edge. If a camera for example can do object recognition and send object lists to the central node rather than a raw data stream, bandwidth needs should be significantly less onerous. Just like putting more intelligence in IoT devices, right? Well – the economics may be quite different in a car. First, a dumb camera may be a lot cheaper than a smart camera, so the initial cost of a car outfitted with these smart sensors may go up quite a bit.

There’s another consideration. A lot of these sensors sit in the car’s fenders/bumpers. What’s one of the most common bodywork repairs on a car? Replacing the fender. In a traditional car, labor and painting aside, this may cost somewhere in the range of $300 to $700. That goes up to over $1,000 if the fender includes lights and (dumb) sensors. Make those sensors smart and the cost will go up even further. So adding intelligence to sensors in a car isn’t the obvious win it is in IoT and handset devices.

Safety requirements create some new challenges in this “cloud” versus edge use-case. Assuring an acceptable level of safety requires a lot of infrastructure, such as duplication and lock-step computing in the hardware, but also significant work in the software. One argument has it that this is best centralized where it can be most carefully managed and ensured, relying on only modest capabilities in edge nodes to avoid heavy costs in duplicating all that infrastructure.

But that’s not ideal either. If everything is centralized, guaranteeing response times for safety-critical functions becomes more challenging, particularly when dealing with huge volumes of raw data traffic. If instead sensors have more local intelligence, you can take all the necessary functional safety steps within such a sensor, and since you’re communicating much less data to the central node, safety measures in the interconnect become less costly.

In some cases the OEM may want both object lists and raw data. What?! Think about a forward-facing camera. Object recognition in this view is obviously useful (pedestrians, wildlife, etc.) for triggering corrective steering, emergency braking and so on. But it may also be useful to feed the raw view, with object identification, to the driver’s monitor. Or to feed an enhanced view in poor visibility conditions, potentially requiring more AI horsepower than an edge camera can provide (such as fusion from other sensors).

Perhaps by now you are thoroughly confused. You should be. This is not a domain where all the requirements are settled and component providers merely have to build to the spec. Like most aspects of autonomy or high-automation, advanced vision and machine learning, the guidelines are still being figured out. OEMs are finding their own paths through the possibilities, creating need for flexible solutions from edge node/sensor providers – dumb, intelligent or a bit of both (to paraphrase Peter Quill in Guardians of the Galaxy). CEVA can help those product companies build in that flexibility. You can learn more HERE.



Essential Analog IP for 7nm and 5nm at TSMC OIP
by Tom Simon on 10-24-2018 at 7:00 am

When TSMC’s annual Open Innovation Platform Exposition takes place, you know it is time to hear about designs starting on the most advanced nodes. This year we were hearing about 7nm and 5nm. These newer nodes present even more challenges than previous nodes due to many factors. Regardless of what kind of design you are undertaking at these nodes, clocking IP is essential. This IP is analog and has even trickier design constraints at these smaller nodes. Andrew Cole of Silicon Creations gave a presentation at the Exposition that provided a lot of insight into what is required to produce this important foundation IP.

Silicon Creations has delivered clocking IP, such as PLLs, that has been used literally billions of times on production chips. Achieving success across this many instances requires tremendous verification resources. One of the interesting parts of Andrew’s presentation discussed the size of the server farms used for AFS simulation. They have two sites with more than 2000 cores. The combined RAM is 15TB. They need over 2000 AFS licenses to run their SPICE simulations. Being analog guys, they have even added their own liquid cooling to the processors so they can overclock them.

So why the need for such enormous resources? Andrew started by mentioning the application target for many of these ICs, which turns out to be IoT. He admitted that it is an overused term with no good definition. However, it’s a useful shorthand for ICs that need to operate on low power, can start and stop quickly, have low leakage, and require few or no external components. Silicon Creations leverages TSMC’s low power processes: 180LP, 40ULP, 22ULL and FinFETs from 16nm to 5nm. These PLLs consume as little as 5uW and can start in as little as 3 clock cycles.

Andrew talked about how analog designs scale as processes shrink. They have seen their PLLs become about 8x smaller in the move from 180nm to 5nm. The limiting factor is noise, which turns out to be proportional to kT/C. As such, capacitor values play a big role in determining noise. The other big challenge is wire resistance. With the significant relative increase in wire resistance, it is no longer possible to use a lumped R for simulation. Silicon Creations has moved to performing simulation using fully distributed netlists for R and C. Add to this the need to use fully 3D-aware tools, and the problem grows substantially. For an example PLL, post-layout simulation now takes 100 times longer at 5nm than it did at 40nm.
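The kT/C relationship is easy to check numerically (the capacitor values here are illustrative): the RMS thermal noise voltage across a sampling capacitor is sqrt(kT/C), so shrinking C directly raises the noise floor:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def ktc_noise_rms(cap_farads, temp_kelvin=300.0):
    # RMS thermal noise voltage across a capacitor: sqrt(kT/C)
    return math.sqrt(k_B * temp_kelvin / cap_farads)

v_1pF = ktc_noise_rms(1e-12)      # ~64 uV at room temperature
v_q   = ktc_noise_rms(0.25e-12)   # quartering C doubles the RMS noise
```

This is why capacitor values, which do not shrink as gracefully as transistors, end up limiting how far analog blocks like PLLs can scale.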

PLLs and SerDes face even more simulation obstacles. Their jitter requirements are on the order of 0.1ps. Clock cycles are ~100ps. System level activity can stretch out to 1ms, which is 10 orders of magnitude greater than the resolution needed to see jitter issues. Next, factor in the need to run Monte Carlo transient simulations to ensure good yield and it’s easy to see why Silicon Creations has had to scale up their server farm so extensively.
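The dynamic-range claim above is straightforward to verify, using the figures quoted in the text:

```python
import math

jitter_resolution = 0.1e-12   # 0.1 ps of jitter must be resolvable
system_span = 1e-3            # system-level activity stretches to 1 ms

# Ratio of the longest simulated time span to the finest resolution
orders_of_magnitude = math.log10(system_span / jitter_resolution)
```

Ten orders of magnitude between the simulated span and the required timing resolution is what makes these Monte Carlo transient runs so expensive.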

The next question is how well all this simulation effort correlates to silicon. The answer: quite well. For power, the mean and standard deviation match closely – simulated: 3.02uA, 1.5%; measured: 3.15uA, 1.6%. Below are the PLL fast-locking plots.

Lastly, here is the graph for phase noise.

Few IP companies have as much experience and as many instances in the field as Silicon Creations. For digital design teams eager to take advantage of the benefits of 5nm using proven, well-designed and verified clocking IP, Silicon Creations offers a compelling solution. Their 5nm solutions are taping out shortly. More information on advanced node analog clocking IP is available on the Silicon Creations website.



Webinar: ASIC and FPGA Functional Verification Study
by Alex Tan on 10-23-2018 at 12:00 pm

ASIC or FPGA? Each design style has earned designers’ votes depending on the level of urgency, application complexity and funding of their assigned projects. While it is feasible to transition from ASIC to FPGA design or vice versa, such a move is usually done across project refresh instead of midcourse.

Both Xilinx and Intel (PSG division, formed from the Altera acquisition) are the dominant FPGA vendors, representing over 80% of FPGA market share – recently registering $4.52 billion in combined sales between Q2 2017 and Q1 2018 (equivalent to the entire FPGA market size in 2016, based on Gartner). Although it represents only a slice of the global semiconductor market as captured in figure 1, FPGA and ASIC as design implementation vehicles are in high demand and will be around for a while.
As a leading EDA solution provider, Mentor, a Siemens Business, commissions Wilson Research Group every two years to conduct a broad, vendor-independent study of design verification practices around the world. Harry Foster, Chief Scientist of Verification for the Design Verification Technology Division of Mentor, summarized the survey outcomes in this Mentor webinar. To provide some context for the highlights discussed here, let us first review the state of our design space.

The computing landscape
There have been plenty of narratives asserting the lasting footprints left by our computing technology adoptions. Driven by cost and usage personalization in the mid 80s, our compute system environment shifted from mainframe-centric to workstations. That migration was then followed by the increased pervasiveness of mobile computing, and the subsequent introduction of cloud computing (which can be perceived as the ‘next-generation mainframe’ with an advanced architecture). Cloud computing allows scalability and provides elastic support to the increasing network of distributed compute clients, accentuated by the proliferation of the Internet of Things (IoT) in the past few years.

Now, with the ever louder drumbeat of AI and data inferencing moving to the edge, enabled by the upcoming 5G deployment, once again we slide along this network continuum – bringing intelligence closer to the data source, or else filtering the captured data from dozens of sensors to be more manageable in size and more structured prior to transporting it back to the cloud. It is a data-driven computing era with more heterogeneous systems, agile data handling, reasonable compute capacity, shorter development cycles and smaller form factors.

The core enablers
As the workhorse of computing solutions, IC design and its associated implementation methods also align with any paradigm shift in end-to-end technology use. Since the early days of VLSI design, the flexibility provided by Programmable Logic Array (PLA) technology and later the FPGA (Field Programmable Gate Array) has offered alternatives to full-custom or ASIC design approaches.

During the great push for performance and density aided by Moore’s Law, ASIC design was at the top of designers’ select lists. With the recent transition to more data-centric applications, FPGA design adoption has been on the rise – providing solutions to data centers as augmented, dedicated accelerators for networking, search, compute, etc. On the other hand, ASIC designers face shorter development times to deliver high-performance, complex designs while development cost increases rapidly – making ASICs more prohibitive for cost-sensitive applications.

As FPGA devices have become larger and faster, verifying functionality of costly ASIC designs in FPGAs has become an effective and economical method of verification. However, some ASIC structures cannot be directly implemented in an FPGA efficiently. So the balance of which implementation is more suitable is dependent on the given project criteria: timeline, complexity and budget.

Highlights of the 2018 study
The 2018 survey outcomes were collected from a total of over 1200 companies globally. More than 45% of the survey respondents work on FPGA/high-performance FPGA based designs, compared with only around 12% doing ASIC. In terms of ASIC design size, respondents are split almost evenly across three bins (less than 1M gates, 1-40M gates and greater than 40M gates) and across process technologies (0.15u to 7nm) – with a trend towards 14nm and beyond.

The top three FPGA applications are related to aerospace/military, industrial, and data centers, which together account for almost half of the FPGA-based designs, with a notable increase in server-related applications when compared against the 2016 survey data.

In ASIC, design applications related to compute platforms/servers, wireless and automotive account for half of the study participants. More hardware/prototyping engineers are involved in the implementation of FPGA designs than in ASIC designs.

The typical regression time incurred by ASIC designs is longer than that of FPGA-based designs, as shown in the tabulated comparison using values extracted from the study. There is a notable increase (7%) in shorter ASIC regressions taking less than 9 hours compared with the 2016 data. On the other hand, FPGA regressions taking longer than 9 hours appear to be on the increase (5%) compared with the 2016 data.

Both FPGA and ASIC design teams have adopted similar dynamic techniques, such as coverage metrics (code, functional), assertions and constrained random. On the static side, almost a third of ASIC designs have adopted formal property checking or verification, while FPGA is playing catch-up, as only about one-fifth of designs have embraced static techniques.

Most of the reported functional flaws are design-error related, which is the common root cause for both FPGA and ASIC. Likewise, verification engineers spend almost half of their verification time on debug (42-45%). One critical finding from the 2018 study is the potential for FPGA bugs to roll into production: 84% of FPGA design projects have non-trivial bugs escape into production, as shown in figure 4.

The key takeaways from this year’s study: ASIC projects displayed maturity in their processes, while FPGA projects are being pressured to catch up in order to prevent bugs from escaping into production.

There are plenty of interesting data points that can be extracted from the study. For more details on the 2018 Wilson Research Group Functional Verification Study, please check HERE.



Trade war could be the tipping point for American manufacturing
by Vivek Wadhwa on 10-23-2018 at 7:00 am

When Western companies moved manufacturing to China, it was all about minimizing costs. China was a developing country with labor costs among the lowest in the world. It also offered massive subsidies and readily turned a blind eye to labor abuse and environmental degradation.

Today, China is the world’s second-largest economy and has ambitions of overtaking the West. Its labor, real-estate, and energy costs have increased so much that they are comparable to those in some parts of the United States. According to Boston Consulting Group, by 2014, China’s manufacturing-cost advantage over the U.S. had shrunk to less than 5 percent. Add to that the intellectual-property theft and unfair trade practices that China has engaged in, and it becomes clear why it makes sense for companies to bring manufacturing back to America.

Doing that is not easy. It is hard to hire the large number of skilled manufacturing workers needed in the U.S.; intricate supply chains pose barriers; and retooling factories is expensive. But with the trade war that President Trump launched, and with the Chinese government rigging the deck against foreign companies, businesses may have a strong motivation to bite the bullet and make the investment. The problems they have long had in the U.S. are also now surmountable.

Robots have advanced so far that they can do the work of Chinese workers. Foxconn’s announcement in August 2011 that it would replace a million workers with robots at its Chinese factories never came to fruition, because the robots of that era were not capable of doing fine tasks such as circuit-board assembly and could not work safely alongside human workers. Today, industrial robots can thread a needle and work hand-in-hand with humans. They can do practically every assembly job as well as pack the boxes the goods are shipped in.

Assembling automobiles is one of the hardest of all manufacturing tasks. But with the help of a new generation of robots, Tesla was able to ramp up production at its Fremont, California, factory to produce more than 100,000 cars per quarter. It did this cost effectively in a region that has some of the highest labor costs in the world.

Low-value manufacturing can be moved out of China relatively easily. It’s already being shifted to nearby countries such as Vietnam, Thailand, and Indonesia. The challenge — and the prize — lies in the high-value, high-technology manufacturing such as what Apple does in China for all of its products except the MacBook Pro, manufactured in Austin, Texas.

There is a complex web of supply chains that have developed in China for electronic goods. Products such as the iPhone have hundreds of components, including the display, integrated circuits, optical modules, sensors, and internal memory, which are sourced from suppliers all over the world. Over the past three decades, production of these technologies started moving to China, and many of the key suppliers became closely interconnected. It is not easy to disentangle operations from China’s high-density integrated-circuit ecosystem.

But it is easier than it would have been had Western companies not feared that China would steal their intellectual property.

In 2015, according to Seamus Grimes of National University of Ireland and Yutao Sun of Dalian University of China, the supply chain for Apple’s products consisted of 198 global companies, with 759 subsidiaries, located in 16 different countries. The research, which they explained in their forthcoming book on China and Global Value Chains, found that 32.7 percent of these suppliers were Japanese, 28.5 percent American, 19.0 percent Taiwanese, 6.5 percent European, and only 3.95 percent Chinese. Of the 391 subsidiaries providing highest-value “core components,” 40.4 percent were American, 26.8 percent Japanese, 10.7 percent Taiwanese, 9.2 percent Korean, and only 2.2 percent Chinese.

To put it simply, more than half of the components of Apple’s products are imported into China, and practically none of the important core technologies are made by Chinese companies. Nearly all of the intellectual property in Apple’s products originates from outside China. The researchers found that the few foreign-owned subsidiaries located in China that produced core components were largely involved in the production and testing of products for just-in-time delivery to final-assembly locations.

China surely isn’t happy with this situation. Despite billions of dollars in state-led investment, its domestic production of semiconductors accounts for less than 13 percent of the country’s demand, and its ability to design and produce this critical input remains seriously constrained, according to the East-West Center’s Dieter Ernst. That is why the national focus is on moving further up the value chain and creating intellectual property.

American companies no longer have the financial motivation to sell their souls and deal with the risks. That is why I expect the trickle of manufacturing returning to U.S. shores will, over the next few years, become a flood.



The Latest in Parasitic Netlist Reduction and Visualization

The Latest in Parasitic Netlist Reduction and Visualization
by admin on 10-22-2018 at 12:00 pm

The user group events held by EDA companies offer a unique opportunity to hear from designers and CAD engineers who are actually using the EDA tools “in the trenches”. Some user presentations are pretty straightforward – e.g., providing a quality-of-results (QoR) design comparison when invoking a new tool feature added to a recent release update. Occasionally, a user will convey to the audience an exuberant enthusiasm, describing how a tool enabled new capabilities that changed their methodology – those presentations are especially memorable.

At the recent Silvaco Users Global Event (SURGE), I had the pleasure of attending such a user presentation. The topic was parasitic reduction and netlist qualification – a crucial flow step in the management of signoff electrical and timing analysis. Parasitic netlist reduction is critical to addressing the tradeoff between results accuracy and flow throughput.

Parasitic netlist reduction is somewhat unique, in that it is one of the few steps for which a (best-of-breed) point tool solution is preferred.

Whereas the distinctions between logic synthesis and physical design flows are blurring, and implementation-level timing-noise-power co-optimizations utilize a common data model, parasitic netlists utilize industry-standard file representations – i.e., DSPF, SPEF. Whereas the blended flows require the selection of a broad “platform” of tools from a large EDA vendor, parasitic netlist optimization need not be bound to a specific platform. In that vein, the SURGE user presentation (representing a large semiconductor design company) highlighted how they had integrated the Silvaco Jivaro netlist reduction and Viso netlist visualization tools into their design methodology.
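As a loose illustration of that accuracy-versus-size tradeoff (this is a toy sketch, not Silvaco’s Jivaro algorithm): a point-to-point net can be modeled as a chain of (R, C) segments, and small series resistances can be folded into their neighbors so that the total path resistance and total grounded capacitance are preserved exactly:

```python
# Toy parasitic reduction sketch -- NOT Jivaro's algorithm.
# A point-to-point net is modeled as a list of (R, C) segments;
# segments whose resistance falls below a threshold are merged
# into the previous segment, preserving total R and total C.

def reduce_net(segments, r_threshold):
    """Merge low-R segments into their predecessor.

    segments    : list of (resistance_ohms, capacitance_farads) pairs
    r_threshold : segments with R below this are absorbed upstream
    """
    reduced = []
    for r, c in segments:
        if reduced and r < r_threshold:
            pr, pc = reduced[-1]
            reduced[-1] = (pr + r, pc + c)  # summing preserves totals exactly
        else:
            reduced.append((r, c))
    return reduced

segs = [(1.0, 1e-15), (0.1, 2e-15), (0.2, 1e-15), (5.0, 3e-15)]
small = reduce_net(segs, 0.5)  # four segments collapse to two
```

Production reduction tools of course operate on full DSPF/SPEF graphs, preserve point-to-point delay (not merely total R and C), and must handle coupling capacitors and temperature coefficients – but the size-versus-fidelity knob is the same idea as the threshold above.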

The presentation began with a brief review of parasitic extraction challenges from Jean-Pierre Goujon, AE Manager at Silvaco. “The advanced process technology nodes are resulting in a tremendous increase in the size of extracted netlists that are becoming unmanageable for subsequent signoff analysis flows. Netlist reduction is mandatory.”

Jean-Pierre described some of the unique features required of a reduction tool, including:

  • selective reduction: the magnitude of the netlist reduction versus post-reduction accuracy may need to be applied differently on selected nets or hierarchical subckts
  • accuracy feedback: comparisons of pre- and post-reduction net topologies for equivalent (point-to-point) R, Ceff, and delay are required; also, methods are needed to detect anomalous R and C data (perhaps related to opens/shorts in the original netlist)
  • “full custom” extraction netlist reduction options

The netlists associated with full custom extraction result in tremendous data volume, compared to cell-based interconnect extraction. Intelligent algorithms for (multiple, parallel) active device reduction are required – more on that shortly.

  • visualization: a means of visualizing the extraction netlist is needed for debug

Jean-Pierre’s overview was followed by the user perspective. Some of the user highlights were:

“We worked closely with Silvaco developers to identify (custom) extracted topologies where aggressive reduction optimizations could be applied. Decoupling capacitors, dummy transistors, and multi-fingered devices are prevalent in custom layouts in advanced nodes, and contribute greatly to the netlist data volume. Unique merging algorithms were used, while maintaining the overall accuracy.”

“There are certainly ‘built-in’ reduction features of commercial extraction tools – yet, we found they did not provide the accuracy versus netlist size targets we were seeking. There were significant net RC differences.”

“And, the (reduced) netlist netnames were impossible to interpret – we used Viso just to try to visualize the topology that was present.”

“Our (custom) extracted netlists include resistive elements with Tc1 and Tc2 temperature coefficients – Jivaro correctly managed temperature-sensitive netlists during reduction.”

“A key feature to us is the capability to compare extracted netlists (for the same layout). We are designing in advanced process nodes, where the PDK collateral is evolving. Designers need to be able to quickly determine and visualize the extraction differences between PDK releases. The same applies to the evaluation of parasitic differences for extraction at different corners.”

The user’s enthusiasm for the application of Jivaro and Viso into their extraction flow was evident.

He ended with the comment, “By the way, I also want to say that the support from the Silvaco AE team as we were integrating the reduction tools into our flows was excellent.” You don’t often hear that sentiment expressed at these user group events.

-chipguy


TSMC Q3 2018 Earnings Call Discussion!

TSMC Q3 2018 Earnings Call Discussion!
by Daniel Nenni on 10-22-2018 at 7:00 am

The TSMC OIP Forum was very upbeat this year and now we know why. It wasn’t long ago that some media outlets and a competitor said 7nm would not be a popular node because it is too expensive blah blah blah. People inside the fabless semiconductor ecosystem however know otherwise. As I have said before, 7nm will be another strong node for TSMC, déjà vu of 28nm. The difference is that there will not be cloned 7nm processes like there were at 28nm, so TSMC market share and margins will remain strong, in my opinion.

Let’s take a look at the Q3 2018 earnings call transcript and see what else we can learn:

Now let’s take a look at revenue by technology. 7-nanometer process technology contributed 11% of total wafer revenue in the third quarter. 10-nanometer accounted for 6%, while the combined revenue from the 16- and 20-nanometer accounted for 25%. Advanced technologies, defined as 28-nanometer and more advanced technologies, accounted for 61% of the total wafer revenue.

Apple is > 17% of Q3 revenue if you include 20nm (iPhone 6) and 16nm (iPhone 6+ and iPhone 7) legacy products.

Now let me make some comment about capacity and CapEx. At TSMC, we build our capacity according to the customer demand. We are continuing to increase 7-nanometer capacity to meet the strong customer demand. We reiterate our 2018 CapEx to be between US$10 billion and US$10.5 billion. In addition, as I have talked about before, although our leading edges capital cost continue to increase due to increasing process complexity, we are able to offset its impact to our CapEx by productivity improvements and further optimization of our capacity planning.

Could CapEx be further reduced by purchasing the equipment GF has in NY? TSMC will move from 5-layer EUV at N7+ to 14-layer EUV at 5nm, so they will need those extra ASML EUV systems. TSMC will build new fabs for 5nm. In my opinion 5nm will be another big node for TSMC, so I expect CapEx spending to be at the high end for sure.

TSMC CEO C.C. Wei is a very strong leader and from what I am told he is loved by TSMC employees so I expect a very good run under his command. As we know from Intel’s latest debacle, a great CEO is key and C.C. is a great CEO, absolutely. He also has a sharp wit and is approachable and engaging which strengthens his credibility.

Now let me update you about the August 3 virus incident. On August 3, TSMC experienced a computer virus outbreak, which affected a number of computer systems and fab tools. The infection was due to misoperation and insufficient firewall controls. We have since corrected this problem to ensure such viruses will not happen again in the future. Our remediate actions including the following: implementing an automated system to guarantee fool proof execution so that such misoperation will not happen again; enhanced firewall control for fab isolation; and network control to each individual computer. More enhancements now are ongoing, too, for further improve tool immunity against future infections. TSMC sets top priority for such security enhancement.

From what I was told it was a vendor’s fault, but I am glad to see TSMC assume full responsibility and take the appropriate actions. I’m not a big fan of finger pointing as it is a sign of weak leadership.

Now let me talk about the N7 and N7+ and the EUV’s progress. TSMC’s N7 technology is now available for customers to unleash their innovations. This is the first time in the semiconductor industry the most advanced logic technology is available for all product innovations at the same time. We continue to work with many customers on N7, N7+ product design and expect to see more than 100 customer product tape-outs by end of 2019. We expect 7-nanometer to be a long node and will attract multiple waves of customer adoptions.

Absolutely.

N7+ is in risk production now. Since the N7+ has 15% to 20% better density and more than 10% lower power consumption, we are working with many customers for their second wave product designs in N7+. Although the number of tape-outs today account for a small portion of the total 7-nanometer tape-outs, we expect the activity to pick up at a rapid pace in 2020 and beyond. Because the N7+ is using a few layers of EUV photolithography to have better cycle time and pattern control, we have made steady progress on EUV technology development towards high-volume production. Tool availability, EUV power, productivity, defect reduction, mask improvement, material and process optimization are all on schedule. A few customers have already made plans to adopt our N7+ in their 2019 products.

N7+ really is a test bed for EUV. They are doing 5 layers in preparation for a full EUV implementation of 14 layers at 5nm. It should not be hard to figure out the N7+ customers as they are the early adopters of 5nm. This half node approach has worked well since 20nm (Apple coming to TSMC) so I expect it to continue.

Let me move to our N5 status. Our N5 technology development is on schedule. We have completed the design solution development and are ready for customers’ design start. The N5 risk production schedule in first half 2019 stays the same. Compared to N7, TSMC’s N5 delivers 1.8x to 1.86x logic area reduction and close to 15% to 18% speed gain on an ARM A72 core. We expect to receive first customer product tape-out in spring of 2019, followed by production ramp in first half 2020.

Apple will use 5nm in 2020 so you can bet it will be in HVM in the first half of 2020. From what I hear 5nm test chips are meeting/exceeding expectations and the PDK is solid so I see no reason to doubt TSMC’s 5nm schedule at this time.

Now let me talk about advanced packaging update. TSMC has been developing advanced wafer-level packaging technologies to integrate advanced SoCs, memories, and integrated passive devices to enhance system performance. We believe our advanced packaging solutions will contribute to our business growth. We are now expanding the applications of both CoWoS and InFO especially for high-performance computing. Most of the CoWoS products require integration of SoC with High Bandwidth Memory, HBM, in 3D stack. We are making good progress in qualifying multiple HBM sources through close collaboration with customers and the DRAM suppliers. We are also working with a few leading customers on SoIC, which stands for system on integrated chips, where multiple heterogeneous chipsets will be integrated with close proximity to deliver better performance. And we target to start production in 2021 time frame.

TSMC has really done a nice job on packaging. I remember when CoWoS came out there were quite a few doubters. Visionaries like myself and Herb Reiter saw this coming but even we are surprised at the amount of resources TSMC has committed to packaging and the excellent results. TSMC now has the MOST sticky foundry process in the world.

Now to the Q&A. Sometimes there are some very funny interactions but this is not one of them:

Michael Chou Deutsche Bank AG, Research Division – Semiconductor Analyst Is it fair to say that 7-nanometer sales portion will be more than 20% of total sales for the whole year next year?

Lora Ho Taiwan Semiconductor Manufacturing Company Limited – CFO and Senior VP of Finance Let me answer that. You have seen our report. The third quarter 7-nanometer accounts for 11%. The fourth quarter will be more than 20%. So for whole year 2018, 7-nanometer will contribute close to 10% of total TSMC revenue. Go beyond 7 — 2018, and we will have very, very strong ramp, in 2019 as well, we expect the revenue contribution will be much higher than 20%.

Randy Abrams Crédit Suisse AG, Research Division – MD and Head of Taiwan Research in the Equity Research Department Okay. The second question I wanted to ask was about the 7+ versus 5-nanometer. You mentioned 2020 would see the very strong ramp-up of tape-out and activity in volume on 7+. Is it your view — I think last conference, Mark said 5 was a little bit more conservative at this stage. So how’s your view now for interest activity and expectation for a steep ramp-up of 5 into 2020?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman We still expect very fast ramp on 5. The reason is simple. Because of a lot of products developed in the AI area, you need the speed, you need the lower power, and you also need a small footprint. So from this — from today, we can see when we work with our customers, the ramp will be steep again.

Roland Shu Citigroup Inc, Research Division – Director and Head of Regional Semiconductor Research Okay. Can you just reiterate the growth breakdown for this 4 platforms next year?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman Okay, let me give you some color on it. In the next few years, if we look at ahead, actually, the smartphone is going to be in our daily life even more and more. So we have a 4 growth engine: one is a mobile phone, actually it’s a high-end smartphone; second one is a high-performance computing; automotive; IoT. The mobile phone probably for TSMC will have a 5 year CAGR, if I look at it right from today, it will be mid-single digit growth. And the all others 3 platforms will have a very comfortably double-digit growth in the 5 year time frame.

Bill Lu UBS Investment Bank, Research Division – MD and Asia Semiconductors Analyst Great. I know 2018 is not over yet, but if you think about the next couple of years, I know TSMC has talked about a long-term growth rate of 5% to 10%. Now I feel like more recently, you’ve talked a lot more about the progress on 7-nanometers. We all know about Intel’s struggles with their process technology. And it’s public information. They’ve announced it, right? So — and then you’ve got some good design wins. Can you talk about your long-term outlook in 2019? Given these drivers you just said, out of the — 3 out of the 4 new drivers will be above 10%. So are we looking at something more towards the high end of that? Or how do you think about that?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman We continue to say 5% to 10% growth rate. Probably I would like to — following your question, I would like to say probably tends to be at the higher side of that 5% to 10%.

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman Okay, actually the question is about the EUV and how much of the benefit we can get from the EUV, right? Usually, if we are not using the EUV, sometimes for the very critical dimension on the N7, you have to — or N7+, you have to use the 4 layers of lithography to pattern one of the critical dimension. Now using the EUV, you’re just using 1 layer so that you reduce the cycle time by 4x of photolithography, 4x of etch. Now you become 1 lithography, 1 etch. In total, how many layers we reduced? That depends on the customer’s requirement, but usually I just give you a hint already, right, 4 layer can become 1 and we are replacing some of the 3 layers to become 1 and we have a few layers of that. So that give you a hint. Cycle time reduction, definitely, because you do 4x into 1x, that’s a big advantage. Productivity-wise, today, EUV is progress very well — up to our expectation. And in fact, TSMC has turned on the 250-watt power and we believe we are the only one company continuously run the 250 watts EUV power so far today.
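To put some rough numbers on Wei’s “4 becomes 1” remark: if each patterning step on a multi-patterned immersion layer costs one litho pass plus one etch pass, the savings from single-pass EUV are easy to tally. The layer multiplicities below are hypothetical, chosen only to echo his “4 becomes 1, 3 becomes 1” comment; they are not actual TSMC process data.

```python
# Back-of-the-envelope pass counting for multi-patterning vs. EUV.
# Layer multiplicities are hypothetical illustrations, not TSMC data.

def passes(layer_multiplicities):
    """Each patterning step costs one litho pass plus one etch pass."""
    return sum(2 * m for m in layer_multiplicities)

immersion  = [4, 4, 3, 3, 3]   # five critical layers, quad/triple patterned
euv_single = [1, 1, 1, 1, 1]   # the same five layers, single-pass EUV

saved = passes(immersion) - passes(euv_single)  # 34 - 10 = 24 passes saved
```

Fewer litho/etch passes means shorter cycle time and fewer opportunities for overlay error, which is the cycle-time and pattern-control advantage Wei is describing.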

From what I hear ASML has 500-watt power working in the lab so 5nm EUV throughput should not be a problem. The question I have, now that EUV is in production: will ASML actually make money on EUV after the many years of R&D and broken EUV promises? Billions of dollars must have been spent…

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman Okay. Actually, I don’t want to comment on my competitors’ strategy. But let me, again, stress our mature nodes’ strategy. We continue to develop some of the specialty technology to meet the customers’ requirement, right, I just stated in that. And yes, a lot of specialty technology we are doing, I give you some example already, power management IC, CMOS, MEMS, everything. So that will help us to compete with our competitor. Actually, this kind of specialty technology particularly we have to work with the customer. And so that’s why I say working with the customer to meet their requirement. And that, in turn, to keep TSMC’s business. And that’s a way that we migrate the logic technology — pure logic technology to the more advanced node. But for the existing capacity, we develop into the specialty technology. And so our strategy is still meet customer’s requirement, but we don’t increase the existing logic capacity.

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – CEO & Vice Chairman I think the AI’s application would be everywhere, actually, from the edge server or to the end device that’s just like the smartphone of everybody. So this kind of a development is to our advantage because TSMC certainly have a technology leadership. In order the AI would be effective, you need a very advanced technology for the highest performance computing. So I don’t see the effect that you are talking about, this application is better than that so that affected the growth or something. No, it will be continues to grow. And I expect this growth much faster than I predicted here.

This is what I have been saying, AI everywhere. And with AI you get increased demand for performance, low power, and increased density which leads to increased leading edge process technology demand. This is really good news for TSMC and the fabless semiconductor ecosystem of course.

These are the questions that interested me. You can see the rest HERE. There is a lot more to discuss so let’s do that in the comments section.


ASML most immune to slow down due to lead times Not LRCX

ASML most immune to slow down due to lead times Not LRCX
by Robert Maire on 10-21-2018 at 7:00 am

ASML reported EUR2.78B in revenues with EUR2.08B in systems. 58% was for memory. EUV was EUR513M with 5 systems. Importantly orders were for EUR2.20B in systems at 64% memory and 5 EUV tools. This was likely better than expectations given the overall industry weakness. EPS of EUR1.60 was more or less in line with expectations. Guidance of roughly EUR3B in revenues for the December quarter is very good.
Continue reading “ASML most immune to slow down due to lead times Not LRCX”


Portable Stimulus enables new design and verification methodologies

Portable Stimulus enables new design and verification methodologies
by Jim Hogan on 10-19-2018 at 12:00 pm

My usual practice when investing is to look at startup companies and try to understand if the market they are looking to serve has a significant opportunity for a new and disruptive technology. This piece compiles the ideas that I used to form an investment thesis in Portable Stimulus. Once collected, I often share ideas to get feedback. Please feel free to offer up suggestions and critique. Thanks – Jim
Continue reading “Portable Stimulus enables new design and verification methodologies”