
CEO Interview: Geoff Tate of Flex Logix

by Daniel Nenni on 10-10-2016 at 7:00 am


This is the second in a series of interviews we will do with executives inside the fabless semiconductor ecosystem. Geoff Tate was the founding CEO of Rambus and is now CEO and co-founder of Flex Logix (embedded FPGA). This one should be of great interest due to the recent $16.7B acquisition of Altera by Intel. We all now know the importance of having programmable logic inside data center chips, but what about networking, automotive, AI, and IoT chips? And do you really have to pay $16.7B to have it?

What does Flex Logix do?
We provide embedded FPGA as hard IP cores along with the software to program them. Our embedded FPGA has density similar to a Xilinx FPGA, with 90%+ utilization, using a minimum-metal stack in each node (4-6 metal routing layers), with no extra masks or process waivers.

Today, we offer embedded FPGA in TSMC 28nm and 40nm nodes, proven in silicon; and we will shortly make embedded FPGA available in TSMC 16nm. Our software has been working for more than a year and we have customers in design today.

Our customers are chip companies (or system companies big enough to design their own chips) who integrate our embedded FPGA into their SoC/MCU/IoT chips to be able to reconfigure critical RTL at any time. It can cost a company multiple millions of dollars and add 3-6 months to the schedule if the RTL has to change during the design process. With embedded FPGA, we can eliminate those costly and time-consuming setbacks. And with embedded FPGA, critical RTL can be updated in the system, extending the effective lives of both chips and systems and increasing ROI.

What can an embedded FPGA do that an FPGA chip cannot?
There are numerous new architectures and applications we are enabling that were not previously possible.

One example is fast control logic. A block of ~1K LUTs worth of reconfigurable logic in 16nm can have 512 control inputs with reconfigurable RTL that generates ~100 control outputs with a clock rate of ~1GHz. Imagine trying to do this with an FPGA chip: the SoC would need 512 signals going off-chip to the FPGA at ~1GHz signaling, with the associated pins for power/ground, and another ~100 pins coming back. The packaging cost would be enormous, and the routing would kill the latency and frequency.
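A quick back-of-envelope sketch makes the packaging argument concrete. The signal counts come from the example above; the power/ground-per-signal ratio is an illustrative assumption (real high-speed packages vary), so the ratios, not the exact figures, are the point:

```python
# Hypothetical packaging arithmetic: moving ~1K LUTs of control logic off-chip.
ctrl_in, ctrl_out = 512, 100        # signal counts from the example above
pwr_gnd_per_sig = 0.5               # assumed: ~1 power/ground pin per 2 GHz signals

signals = ctrl_in + ctrl_out
total_pins = round(signals * (1 + pwr_gnd_per_sig))
bandwidth_gbps = signals * 1.0      # each signal toggling at ~1 Gbps

print(f"extra package pins needed:   ~{total_pins}")
print(f"aggregate off-chip bandwidth: ~{bandwidth_gbps:.0f} Gb/s")
# On-chip, the same 612 wires are ordinary metal routes: no package pins at all.
```

Even with a generous power/ground assumption, roughly 900 extra package pins and over 600 Gb/s of off-chip signaling are needed to replicate what on-chip wires provide for free.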

IO width and clock frequency are the number one reason embedded FPGA can do things an external FPGA cannot. This is analogous to on-chip SRAM, which can easily have very wide 64/128/256/512-bit buses with high clock rates, whereas connecting an external SRAM chip to an SoC means narrower buses and lower frequencies due to package costs, PCB signaling speed difficulties, etc.

Another example is in a 40nm microcontroller, where we’ve seen a small block of reconfigurable RTL with DSP acceleration execute certain DSP functions with 2x-5x less energy than an ARM core, even before considering the ARM core’s memory access energy. The EFLX embedded FPGA is faster than the ARM core, so an EFLX array can monitor and process certain critical signals at a significant improvement in battery life, only waking up the processor core when needed. This benefit can be realized with 300-400 LUTs with integrated MACs operating at 0.9V. Few FPGA chips with this little capacity exist anymore, and those that do operate at higher voltages and lack integrated MACs.

How will chip companies benefit from embedded FPGA? Systems companies?
Many microcontroller families at 90nm have dozens or even hundreds of SKUs to handle a wide range of end user requirements, which really are just small variations of a master design. A typical example would be different serial protocol needs (SPI vs UART vs I2C vs…). Now that leading-edge microcontrollers are just beginning to migrate to 40nm where mask costs are ~$1M, embedded FPGA provides a zero-mask-cost solution to meet different customers’ needs. This enables customers to quickly respond with solutions to new requests without having to re-tape out and validate a new chip.

Initial uses in microcontrollers may be invisible to the systems customer. However, as microcontroller companies become comfortable with embedded FPGA, the next logical step is to enable systems customers to program reconfigurable RTL blocks both in the I/O subsystem and on the main processor bus. This will enable customers to implement acceleration blocks to augment their specific application.

In the data center, there is apparently a desire to build systems and never touch the hardware for the life of the data center. This is challenging to do considering the changing standards for protocols, packets, etc. Reconfigurable RTL solves that issue because it can be updated at any time to keep up with new standards, even custom versions used by only one customer.

In base stations, reconfigurable RTL enables reconfigurable Digital Front End DSP processing to handle the availability of new frequencies and other customer requests.

In NVM, there are numerous new technologies emerging. With reconfigurable RTL, the memory interfaces can be made flexible enough to adapt post-design to handle the variations in timing sequences and error correction algorithms that rapidly evolving new NVM technologies (MRAM, ReRAM, Xpoint, etc.) require.

The overall benefit to everyone involved is that chips and systems have longer, more productive lives and can satisfy a broader range of customers and applications. This reduces inventory and production costs and increases sales and ROI.

What applications are the early adopters of embedded FPGA?
We see strong interest in embedded FPGA in a very wide range of technologies from <$1 microcontrollers/IoT to very high-end networking chips.

The early adopters are in microcontrollers, signal processing, networking, communications, and defense applications where the need is very clear and the value is very high. Even with these applications, customers insist on seeing our IP proven in silicon and typically do very extensive due diligence of architecture, software and physical design to be sure we are ready, which we are, before proceeding.

How big can the market for embedded FPGA be over time?
Almost every customer we have talked to says, “this can be a very useful technology.”

Many customers are still figuring out how best to use our technology because no architectures with reconfigurable RTL that is both fast and dense have ever existed. To help them, we recently hired a Director of Solutions Architecture who joined us from Intel, where he was a Lead Systems Architect on numerous high-volume desktop CPUs. Before that, he was an architect for numerous Sun servers. He has already done an extensive energy analysis, available on our website, which shows we save energy for DSP applications in 40nm. We are now actively hiring to build his team up to meet customer demand for architecture assistance.

We’ve noticed that many customers want to wait until others come to market. I saw this at Rambus where I was the founding CEO. At Rambus, the success of Nintendo’s N64 triggered an avalanche of adoption from numerous customers, including Intel, who had been carefully evaluating and preparing till they saw the first million-unit-plus application. I believe this will also happen with embedded FPGA. In time, I personally think embedded FPGA can become a very pervasive technology. This is NOT a niche technology.

Do you compete with Xilinx and Altera? Why don’t they provide embedded FPGAs if there is a need?
We don’t compete at all. My co-founder Cheng Wang talked with several FPGA companies before we started Flex Logix. Then after the company was founded, we had several more discussions when Cheng’s paper on his patented interconnect technology came out. These discussions were initiated because of their interest in understanding Cheng’s work. We learned two things from these discussions with people at senior levels:

First, traditional FPGA companies are all heavily invested in complex and widely used software. This makes changing their hardware interconnect architecture very unattractive, even if it is better. This situation is similar to CISC vs RISC back in the 90s.

Second, they had been asked periodically by companies to provide embedded FPGA, but they were uninterested in doing so. They have an attractive business building chips for which they don’t have enough engineers. The companies weren’t willing to pay very much, compared to the opportunity cost for the resources required; and the business model of providing IP is very, very different from that of selling chips.

As a result, the FPGA chip companies are like Intel and AMD, who never competed with ARM in the embedded processor IP space. The chips are used in different ways, in different applications, and by different customers than the embedded IP is.

What are your major challenges?
At this point, the major challenges are primarily behind us. We had to show that our technology worked, and we now have working silicon in two process nodes. We have customers designing complex chips who have successfully integrated our embedded FPGA and are also using our software in the process.

We have customers adopting our technology, with customers in design now, and we are actively in negotiations and technical due diligence with many more.

We also had to show we could fund a semiconductor IP business. We stayed lean and focused on building silicon and software on a small seed round, then raised a Series A of $7M+ when we proved silicon and got customers designing chips with our technology.

Today, our major challenge is meeting our existing customer commitments and hiring and training staff fast enough to keep up with rapidly growing demand from new customers. For a while now we have been in constant recruiting mode, and I expect we will stay there. We are constantly talking to a large number of interested applicants and are always looking for the most qualified and motivated new hires to join our team as we need them. In fact, we are hiring “ahead of the curve” since we have to train people on a new technology and methodology, and we want to make sure we have the capacity to meet customer needs.

While any new technology, particularly one that has never existed before, is challenging, if you visit our offices, you’ll see us working hard and having a lot of fun working together as a team.

Also Read:

CEO Interview: Xerxes Wania of Sidense

A Candid Conversation with the GlobalFoundries CEO!

CTO Interview with Dr. Wim Schoenmaker of Magwel


DOJ takes victory Lap in KLAC / LRCX deal post mortem (3 of 3)

by Robert Maire on 10-09-2016 at 4:00 pm

The KLA deal died due to the fox guarding the hen house.

Fox can’t guard Hen House…
In an industry where there are relatively few widget makers and only one, very dominant, widget inspector, the thought of one of the widget makers buying the most crucial widget inspector obviously would be anti-competitive. Not only would the other widget makers complain, but the widget buyers would also scream about how unfair it would be. This has been our thesis about the central problem with the deal, and we were correct.

  • DOJ implies that KLAC is more important than LRCX
  • DOJ conspired w/ Japan, Korea & China as we thought

Remedies don’t remedy the situation…
Whatever remedies were discussed, proposed, or agreed upon, they were obviously not enough. We find the concept of data “firewalling” close to worthless in preventing the potential problems of the deal. A divestiture would likely have been too painful, as would likely be the case with licensing. There are really not a lot of ways to effectively protect competition in this type of deal, which meant no workable, or at least agreeable, solutions could be had.

Even the DOJ has figured out KLAC is more important than LRCX in semis
Which supports our view going forward of KLAC over LRCX

“Metrology and inspection technologies are growing increasingly important to the successful development of semiconductor fabrication equipment and process technology”

The DOJ figured out (obviously with help from others in the industry) that metrology and inspection are the key to process development. Process tools have slowed as decreases in pitch (node dimensions, measured in nanometers) have slowed due to multi-patterning and 3D stacking, which reduce the need for rapid improvement in pattern dimensions. In fact, current 3D NAND is fabricated on “trailing edge” pitch dimensions using primarily older dep and etch technology. The number of commodity/competitive etch and dep processes out of all dep and etch processes has increased, which has increased competition and commoditization (which the DOJ likes). Proof of this is Applied’s share gains against LRCX.

On the other hand, metrology and inspection remains a market very dominated by KLAC, as evidenced by its gross margins (only exceeded by the purely monopolistic ASML). There’s a lot less competition in this market, and the DOJ figured that out.

This clearly supports our view that KLAC will have a much easier time coming out of this broken deal over the next few years than LRCX will, as KLAC faces less ongoing competition than Lam.

Conspiring with the enemy…
We find it interesting that the DOJ pointed out that it essentially conspired with Japan, China and Korea, the three countries we previously identified as being problematic: Hitachi & TEL in Japan, ASMC and others in China, and Samsung in Korea.

The deal was facing a very uphill battle, as the companies were up against four aligned government agencies.

Makes Applied’s metrology/inspection more valuable…
This leaves Applied as the only significant company with strong positions in both process and inspection/metrology. Though still far from being in KLAC’s league, Applied continues to make strides. This leaves Applied in a better position.

Is KLAC untouchable?
It seems that there is essentially no way that KLAC can get together with a process company now given what just happened. The best it can do is try to round up smaller inspection/metrology companies, which is going to be much harder to do, as KLAC is obviously way over the HSR tripwire in most markets.

The DOJ had help…
It’s clear that the DOJ had help from both other equipment makers as well as chip makers who opposed the deal. There may be some leftover damage and hard feelings on the part of these affected parties, which could impact LRCX and, to a lesser extent, KLAC, going forward. However, it’s not like customers are going to rush into the arms of Applied either. Obviously Lam got its wires crossed when it said it had the support of customers for the deal, because it’s clear they were part of the complaint.

The DOJ thumps its chest and takes a victory lap…
Although we will hear both Lam and KLA’s side of the story tomorrow on separate conference calls, the DOJ has already spoken its mind and pronounced its “victory” for competition.

We could also view this as a warning shot across the bow of other potential deals: it’s not just HSR “horizontal” overlap but “vertical” customer overlap that matters. Zero product overlap just doesn’t get automatic approval.

About Semiconductor Advisors LLC
Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects.


The KLAM deal has died now how will KLAC and LRCX recover? (2 of 3)

by Robert Maire on 10-09-2016 at 12:00 pm

As we had been suggesting, the merger deal between KLAC and LRCX has failed. It obviously ran into too many complications, costs or other issues to continue. Unlike the Applied/TEL deal, which went on for a staggering 18 months before calling it quits, in this case 12 months was enough to figure out it wasn’t getting done.

In our view this was likely Lam’s one and only chance of catching Applied in size and market breadth so it is very disappointing to say the least.

As we have suggested in previous notes we think KLAC has a better path as a stand alone, independent company and LRCX likely now needs a new plan to offset issues ahead.

Lam will have a conference call tomorrow, Oct 6th, at 9AM EDT. The conference call can be joined by dialing 1-800-967-7187, Conference ID 9322807, within the U.S. and 1-719-325-2289, Conference ID 9322807, for all other locations.

Autopsy…
It seems of little consequence what caused the death of the deal. There were likely many contributing factors. We will likely not get an exact rundown on the call, as AMAT never told us officially what killed their deal.

It is clear, however, that it’s highly unlikely that anyone in the semiconductor equipment space will attempt another large deal given that these last two attempts ended in failure.

What we will hear…
Much as we heard from AMAT, Lam will likely try to reassure us that things are rosy without KLAC and it makes little difference to Lam’s growth plans and abilities.

We would not be entirely surprised to hear Lam pre-announce a great quarter as a way to put salve on the wound of the failed deal and make up for it in the eyes of investors. As we previously said, we think both Lam and KLA are having great quarters and Lam will likely use that fact to offset any potential negative reaction in the stock.

How long a recovery time?

We think that it took close to a year’s worth of recovery for the AMAT/TEL deal. We think the recovery here will likely be shorter but still at least two to three quarters to get fully back on track. We would expect a follow-on analyst meeting after this period to chart the new course, much as Applied did (and they executed well on their new course and model without TEL). The analyst meeting already planned for the near term is too soon to have a fully planned-out story.

The damage….
There was obviously a lot of shuffling of personnel planned, with some expected departures and some coming out on top. It’s likely that many knew what new role they would play in the combined KLAM and now may or may not be happy with that not happening. There will likely be a different set of departures and reshufflings without the deal.

There was also likely planning for projects and products that may have already started. Some may be able to continue as “just friends”, others not.

The direct financial damage is likely several hundred million but obviously more extensive damage in lost opportunity.

This is all water under the bridge and of little consequence in the grand scheme of things.

AMAT has gained while Lam & KLA were engaged…

Applied has done very, very well while Lam & KLA were tied up, gaining share and pushing forward. This is exactly the opposite of a few years ago, when Lam’s share gains against Applied likely peaked while Applied was tied up with TEL. Lam is going to have a difficult time slowing Applied’s current momentum.

Collateral impact…
We are hard pressed to think of who/what Lam or KLA could/would/should buy here to make up for the lack of a merger. The pickings are somewhat slim. Nanometrics or Nova are the obvious answers on the metrology side, and there is always ASMI on the process side. The problem is that these deals would likely have been done already had they made a lot of sense. Perhaps ACLS would be a good acquisition, as they have had good results and nice momentum of late. Would someone be willing to take a chance on Veeco now that Aixtron has been bought by the Chinese?

Many of these companies’ stocks are trading at a perceived takeover premium as well.

The stocks…
Given the spread, we think that many investors already got the idea that the deal was not going to happen. We had suggested “buy on the barf” when the deal imploded, but now we think that there will likely not be much downside, especially for KLAC, which should easily trade up from here.

LRCX stock is another story. It’s had a good run, and at least part of the valuation is based on the acquisition. The bigger question is whether Lam’s Q3 performance will offset the downside potential from losing the deal. It may in the short term, but we think Lam now has bigger long-term challenges ahead of it.

KLAC stock likely has much better longer-term prospects as fundamental investors get a refresher on the story. The stock has been trading at an artificial discount to its typical historical valuation.

A good pair trade might be to go long KLAC against a short on LRCX (assuming there is still time to get in).

We would expect both management teams to be out on the road explaining things to investors shortly after the quarter is reported.

LAM= Life After Merger…….




KLAC & LRCX – Fall Out from the deal Falling Apart (1 of 3)

by Robert Maire on 10-09-2016 at 7:00 am

The odds of deal completion have fallen to low levels. What’s the fallout on the companies and stocks? Is there life after a failed merger?

“A quagmire wrapped up inside an enigma” – LRCX & KLAC’s merger is the talk of the town, both in the semiconductor equipment industry as well as among DOJ watchers in Washington DC. The opaque process of the DOJ and the surprising complexity of what otherwise should have been an easy deal has created a stir and a lot of speculation. The only thing clear about the deal is that it has taken far too long and that the odds of a successful outcome are likely falling by the day.

Although we started out last October merely skeptical, assuming the deal would take longer than the company had suggested, we have since transitioned through dubious and into doubtful, now viewing the deal as having only a 30 to 40% chance of success at best. As the clock ticks down to October 20th, investors need to prepare.

Words you don’t want to hear in a hospital: “Yours is an interesting case”…
You also don’t want the DOJ to use your merger as a poster child to set policy or precedent. You just want to be a run-of-the-mill merger with no HSR issues that just sails through. There is much speculation that the deal is different than normal and that it may be related to precedent or policy setting. Being an oddball case at the DOJ is clearly not good…

Vertical & Horizontal…

Most merger concerns revolve around horizontal problems (product overlap) that hit the HSR 40% tripwire. Less common concerns are vertical, those with an overlapping customer base. Since the KLA & Lam overlap is vertical, the deal already falls into a class that has historically been harder for the DOJ to prosecute. It seems to suggest that the deal is outside of the normal bell curve in terms of DOJ interest, making the delay curious.

Death by Delay…
Even if the DOJ doesn’t want to set policy or precedent but doesn’t like the deal, it can always kill anything it doesn’t want by “delaying it to death”. This allows it to get rid of a deal without seeming to act, by “failing to act”. It’s a convenient “alibi” for the DOJ: burying a deal in the normal Washington bureaucracy. In the case of KLA & Lam, the DOJ is surely aware of the Oct 20th timeout.

Did a balloon pop?
In the AMAT/TEL merger, the DOJ sent around to interested parties a proposed consent decree settlement trial balloon that apparently did not go over very well. Given that it’s likely the same people at the DOJ are looking at this proposed merger as well, perhaps they sent out a similar trial balloon to interested parties, maybe with similar results. Hard to tell, but it could be an explanation for the sudden change…

The timing looks very difficult…and long

We think whatever remedy is being sought or negotiated has to be something more substantial than just a behavioral remedy. As we have previously said, if this were just a behavioral remedy revolving around “firewalling” data, it would have been done a long time ago, as KLA has already been doing that for years (protecting sensitive customer data). If it is a divestiture or licensing or similar, it will likely take time. It took Fairchild and ON Semiconductor 5 months to get rid of a dinky $25M IGBT business before they got the green light from the DOJ after a deal to sell it to Littelfuse, and they were very forthcoming and admitted the problem of overlap way up front. Lam was probably not as accommodating. How long would it take for Lam/KLA to sell or license off a thin film or critical dimension business? Probably at least 3-6 months in a fire sale.

Given that the deal appears to be stuck at the DOJ, that suggests we still need the Korea, Japan & China regulators to approve the deal. They are likely waiting on the DOJ remedy before they figure out what they want, which will be at least as much as the DOJ if not more. This will likely mean several more months, assuming that foreign regulators go along with whatever the DOJ got… if not, it will take even longer.

Taken together, this suggests that even if we had an approved remedy we are probably talking about another 6 months to get the deal done which would make it equal to the length of the AMAT TEL deal before it fell apart.

Deal “Certainty & Uncertainty”….
One of the very significant parameters that a board has to consider in an M&A transaction is the certainty of the deal. A deal that is less certain should have a higher price premium or a higher break-up fee to make up for the added risk of the acquisition not going through. In the case of KLA and Lam, the deal seems a lot less certain than it did a year ago, which would imply that KLA’s board could ask for a significant break-up fee, a higher price, or both. When you add the view that Lam is now getting KLA at a discount price given KLA’s outperformance over the past year, it seems to underscore the need for a different deal.

The problem is that if it’s a deal at a different price or with a different break-up fee, do we have to go back to square one for approval, further delaying things?

The other issue is whether KLA should extend the deal no matter how much Lam is willing to pay. Maybe the risk is too high and the odds now too low, such that no amount of money would offset the risk and increased valuation that KLA sees. Customers and employees of KLA have been left in the lurch for almost a year, and another 6 months of waiting could do very serious, irreparable damage. On the other side, would Lam want to pay a much higher price or a high break-up fee and risk getting a damaged KLA, or risk not getting a deal and being out a lot of money (plus the $50M to $100M in interest on the debt already raised for the deal)?

Plus there’s the fact that AMAT continues to gain ground while KLA and Lam are in limbo. We find this an amazing case of “trading places”, as it was Lam who, a few short years ago, was gaining share while AMAT was engaged to TEL. The shoe is now on the other foot…

We had been saying that KLA was so important to Lam and Lam’s management that they would do anything or pay any amount of money to get the deal done. We are not so sure now. Even though this is the one and only last chance to catch up to Applied, and Lam has no other alternatives, they may have to let it go, as it has gone beyond a recoverable point. Even if Lam wants to pay stupid money or commit stupid acts to get the deal done, it’s not really their choice any more, as KLA’s board can just walk away on Oct 20th.

Extending the play clock beyond Oct 20th does not look very enticing to anyone… except perhaps to AMAT.

October 21st 2016…the morning after…
If we try to figure out the aftermath on Oct 21st, we have seen this movie before. When the AMAT TEL deal broke, it took quite a while for both parties to get their act back together and figure out where they were going, perhaps at least a year with AMAT and more with TEL.

KLA would emerge in better shape than Lam as much of Lam’s valuation and future is tied up with the acquisition of KLA. Lam has made public statements about how it needs to integrate yield management with process for the good of customers and used that statement as the centerpiece of support of the deal. Now they won’t have it but AMAT does.

Lam needs “something more” because dep and etch are becoming more commoditized, and thus price sensitive, as time goes by. Multi-patterning has been great, but that will slow sooner or later as EUV finally kicks in. The lack of node/geometry progress in dep and etch has allowed competitors to catch up and make more of the steps competitive and subject to price competition. We have seen this happen at customers such as Toshiba, which split off the commodity tools and left only the remaining critical tools to Lam out of the 20-some-odd etch steps. Lam needs differentiation…

KLA on the other hand didn’t need the merger (other than for a pay day) and likely emerges in better shape, though with perhaps a bit of wear and tear for being distracted for a year.

Business has been very good for everybody

One thing that may mask the damage and fallout is the fact that business is very, very good for everyone in the industry right now. We expect both Lam and KLA will have bang-up September quarters, as the second half looks a lot better than previously thought. We have already heard how great things are at the recent Applied lovefest. This tone of strong business may soften the blow of a busted deal.

The Stocks.. Buy on the Barf
One of the potential stock reactions is that the arbs, who have replaced fundamental investors in KLA stock, will “puke up the stock” when the deal fails and drive the price down as they exit stage left. However, fundamental investors who haven’t paid attention to KLAC in the last year will likely recognize that the stock is currently trading in the range of a roughly 12 P/E when it should be trading closer to its historical 14 to 15 times (which has always been higher than Lam’s valuation). The fundamental guys will likely push it back up after the arbs exit, and may start buying in before they exit, thus minimizing the downside.
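The multiple gap above translates into a simple re-rating calculation. The P/E figures are the ones quoted in this note; any EPS figure would be a placeholder, so we work purely in multiples:

```python
# Implied upside if KLAC re-rates from ~12x back to its historical 14-15x range.
current_pe = 12.0
hist_pe_low, hist_pe_high = 14.0, 15.0

upside_low = hist_pe_low / current_pe - 1    # re-rating to the low end of the range
upside_high = hist_pe_high / current_pe - 1  # re-rating to the high end

print(f"implied re-rating upside: {upside_low:.0%} to {upside_high:.0%}")
```

In other words, the multiple expansion alone implies roughly 17-25% upside before any change in earnings, which is the cushion the fundamental buyers are counting on.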

Fundamental guys run little risk: even if the deal does go through, they still get rewarded with a nice upside, given the wide spread due to the uncertainty.

Basically, if you are a fundamental investor, just go out and buy KLAC now, as you have little to lose if you are willing to wait till the dust settles.

LAM = Life After Merger?

What to do? Lam has had a great run and stalled out in the low 90s, as we had previously suggested it would. Lam is already fully valued and has the benefit of the merger priced in, so there is likely more downside than upside in Lam. It’s hard to call it a short, as fundamental business remains strong and can potentially soften a break-up. We do, however, see downside risk into the 80s and potentially continued weak behavior in the weeks and months following, until they come up with a new story and new direction and get their act together again. It’s likely going to be tougher on Lam, as a lot was pinned on the acquisition, so coming up with an alternate story will be tougher as compared to KLA, which will just carry on as it was before the deal.

Right now we see more risk than reward in owning Lam especially going into Oct 20th. Depending on the outcome we may be tempted to get back in if the deal and the stock fall apart.



One line of macOS code could cap a 20-year pivot

One line of macOS code could cap a 20-year pivot
by Don Dingee on 10-07-2016 at 4:00 pm

When Steve Jobs made it clear at the 1997 Apple Worldwide Developer Conference he was taking back his company, he tossed the now famous line in his opening monologue: “Focusing is about saying no.” Approaching 20 years later, that decision still reverberates. Continue reading “One line of macOS code could cap a 20-year pivot”


Drift is a Bad Thing for SPICE Circuit Simulators

Drift is a Bad Thing for SPICE Circuit Simulators
by Daniel Payne on 10-07-2016 at 12:00 pm

My first job out of college was with Intel in Aloha, Oregon, where I did circuit simulations using a proprietary SPICE circuit simulator called ASPEC that was maintained in-house. While running some circuit simulations one day, I noticed that the voltage on an internal node in one of my circuits was gradually climbing higher and higher, even exceeding the value of VDD. What in the world could be causing this? The voltage cannot climb higher than VDD, I thought. Finding a senior engineer, Clair Webb, I showed him the plot and asked, “What did I do wrong?”

His reply, “Oh, your circuit is OK, it’s the circuit simulator that has a bug. Go file a bug report.” Wow, I was stunned, because I presumed that in the commercial world there were no software bugs and that the SPICE answers were always to be trusted. Have times changed that much with circuit simulators producing the wrong results?

If you’re doing signal and power integrity designs then there are extracted interconnect netlists that are part of your circuit, and it’s possible that you can see DC drift on signals during transient analysis. The HSPICE team over at Synopsys has anticipated that you might experience this drift issue, so they’ve created a 15 minute Webisode titled, “Drift-free Transient Simulation for Signal and Power Integrity Analysis – Using deCap-aware Rational Functions.”

The R&D engineer at Synopsys explaining how to best use HSPICE in this webisode is Ted Mido. He talks about a new feature in HSPICE for scattering-parameter (S-parameter) handling that ensures your transient circuit simulations are drift-free. The example circuit used is an 8 Gbps differential PRBS simulation. You can even get this complete demo case at no cost by requesting it.

The online signup is here, and spending just a quarter of an hour learning more about HSPICE will certainly save you from getting the wrong analysis results in SI and PI simulations. Mido-san covers topics such as:

  • Small extracted capacitances, in the nF and pF range, for transient results at GHz speeds
  • Larger extracted capacitances for decoupling capacitors in packaging, with values in the mF range, for transient results at kHz speeds
  • S-parameters with a very wide range of values
  • How convolution is used when starting from S-parameters and going to transient analysis
  • Inverse FFT or rational function modeling for approximation
  • Recursive convolution
  • Computation effort versus run times
  • Separate sets of S-parameters for packaging and chip interconnect
  • HSPICE automatically detecting and separating S-parameters for coupling decap and chip interconnects
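To make the last few bullets concrete, here is a minimal sketch of recursive convolution for a single pole/residue pair, the core trick that lets a rational-function model of S-parameters run through transient analysis in linear time. This is an illustration of the general technique, not HSPICE’s actual implementation:

```python
import numpy as np

def recursive_convolution(x, dt, pole, residue):
    """Convolve input samples x with h(t) = residue * exp(pole * t)
    in O(N) time: each step reuses the previous state instead of
    re-summing the whole history (direct convolution is O(N^2))."""
    alpha = np.exp(pole * dt)   # per-step decay factor; |alpha| < 1 for pole < 0
    y = np.zeros(len(x))
    state = 0.0
    for n, xn in enumerate(x):
        # rectangle-rule update of the running convolution state
        state = alpha * state + residue * dt * xn
        y[n] = state
    return y

# A unit-area discrete impulse should come back as the decaying
# exponential h(t) itself, with no DC drift.
dt = 0.01
x = np.zeros(100)
x[0] = 1.0 / dt
y = recursive_convolution(x, dt, pole=-2.0, residue=1.0)
```

Because each output sample depends only on the previous state, numerical errors decay with the exponential rather than accumulating, which is why this formulation avoids the drift that plagues naive convolution of long interconnect responses.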

Climbing the dimensions (part 1)

Climbing the dimensions (part 1)
by Claudio Avi Chami on 10-07-2016 at 7:00 am

Translated and adapted from an article by Jaime Poniachik
The novel Flatland was written in 1884 by Edwin A. Abbott. It describes a fantastic, two-dimensional, flat world, hence the name of the novel. This world has living beings: they have only two dimensions, and they move in a plane which they cannot abandon.

It is not difficult to imagine a two-dimensional reality. An amoeba, for example, laid on a flat surface is essentially living in a two-dimensional reality. It will explore and get food by moving its pseudopods up and down, left and right, but not above or below its “body”.

In the same way that there is a three-dimensional world (ours) and the amoeba’s two-dimensional world, we can try to imagine what a four-dimensional world would look like. Please note that we are talking about a four-dimensional spatial world. Time is commonly regarded as an additional dimension, but it is not spatial (and we cannot easily move back and forth along it).

The Mystery of the Yellow Room

In “The Mystery of the Yellow Room”, suspense master Gaston Leroux (author of “The Phantom of the Opera”) proposes the following enigma: a woman who has been attacked nearly to death is found in a locked room. A similar case is posed by Edgar Allan Poe in his story “The Murders in the Rue Morgue”. In both cases, the authors finally give the reader a plausible explanation of how the crimes were committed inside rooms apparently locked from the inside.

A three-dimensional being could commit the perfect crime in a bi-dimensional world: a crime that even Leroux or Poe would not be able to explain.

In a bi-dimensional world, a locked room could be, for example, a rectangle. No creature from this bi-dimensional world would be able to enter the locked room unless it forced the doors or opened a hole in the walls (the rectangle’s perimeter).

But a three-dimensional assassin could easily enter the locked room from the third dimension, commit the crime, and leave the locked room without a trace.

In the same way, a criminal from a four-dimensional reality could easily enter a perfectly locked three-dimensional room without touching its walls, its ceiling or its floor. It almost makes you look behind your back in awe.

A peek into hyperspace
What does a four-dimensional object look like? We will try to imagine a hypercube, a cube of four dimensions.

To try (somehow) to imagine the hypercube, we will explore its analogous bodies in lower dimensions.
Let’s start with the square. The square is a flat figure; it has only two dimensions. It can be built from one-dimensional elements: take a pair of segments, then connect their vertices using another pair of equal-length segments. Using one-dimensional elements (the segments), we have built a two-dimensional figure, the square. This can be seen in figure 1.


Figure 1 – A square, a 2-D figure built using 1-D segments

We can also imagine that the square “connects” two 1-D universes, the upper segment and the lower segment.

Now let’s climb to the third dimension. A cube can be built in a similar way to the square. This time we draw a square in one plane, and then another square in a parallel plane. If we connect each vertex of the top square with the corresponding vertex of the bottom square, we get a cube, as shown in figure 2.


Figure 2 – A cube, a 3-D body built connecting 2-D squares

Now let’s imagine we have a cube floating in space, and another cube floating in a parallel three-dimensional space. If we connect all eight vertices of one cube with those of the other cube, we have built a hypercube.

Representing such a beast is a little more complicated than telling how to build it. A possible representation (in two dimensions!) of a hypercube is shown in figure 3. If we were able to see four dimensions, we would see that all its edges are orthogonal in pairs, as they are in the square and in the cube.


Figure 3 – A hypercube (tesseract), a 4-D body built connecting 3-D cubes


Counting the elements

A square has a single face, four vertices and four edges.
Climbing a dimension to 3-D, we find that the cube has six faces, eight vertices and twelve edges.
When we climb to the fourth dimension, we find that the tesseract has sixteen vertices.
The following table shows the elements of each figure or body, from the first dimension up to the fourth:

            Vertices   Edges   Faces   Bodies
Point            1
Segment          2        1
Square           4        4       1
Cube             8       12       6       1
Tesseract       16       32      24       8

The number of vertices of each figure follows a simple rule: it is 2 to the power of the dimension. The formulas for the number of edges, faces, etc. are a bit more complicated.
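The “more complicated” formulas do have a closed form: an n-cube has C(n, k) · 2^(n−k) elements of dimension k (choose which k coordinates vary, then fix each of the remaining n−k coordinates to one of the two opposite faces). A short sketch reproduces the table:

```python
from math import comb

def n_cube_elements(n: int, k: int) -> int:
    """Number of k-dimensional elements of an n-dimensional cube:
    C(n, k) ways to choose the k coordinates that vary, times
    2^(n - k) ways to fix each remaining coordinate to 0 or 1."""
    return comb(n, k) * 2 ** (n - k)

# Vertices are k = 0, edges k = 1, faces k = 2, bodies (cells) k = 3
assert n_cube_elements(3, 0) == 8    # cube: vertices
assert n_cube_elements(3, 1) == 12   # cube: edges
assert n_cube_elements(3, 2) == 6    # cube: faces
assert n_cube_elements(4, 0) == 16   # tesseract: vertices
assert n_cube_elements(4, 1) == 32   # tesseract: edges
assert n_cube_elements(4, 2) == 24   # tesseract: faces
assert n_cube_elements(4, 3) == 8    # tesseract: bodies
```

Note that the k = 0 case, C(n, 0) · 2^n = 2^n, recovers the simple vertex rule mentioned above.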

Also read: Climbing the dimensions (part 2)

My blog: FPGA Site


Takata’s Deepest Betrayal

Takata’s Deepest Betrayal
by Roger C. Lanctot on 10-06-2016 at 4:00 pm

There’s been a lot of betrayal in the automotive industry over the past few years. Consumers have been betrayed by car makers that failed to identify, report or anticipate problems or that deliberately misled their customers. But no betrayal was deeper than that of Takata and the ongoing airbag recall effort. And Takata’s primary victim was Honda.

Approximately 34M vehicles from more than 30 brands (and another 7M worldwide) have been identified as being subject to the Takata recall in the U.S., related to airbags that can explode and expel shrapnel capable of injuring or killing drivers. It is nearly impossible to identify a car brand untouched by the recall other than Volvo, which dodged this recall bullet thanks to its relationship with Takata competitor Autoliv. Even Swedish neighbor Saab fell victim to the recall, most likely due to its partial and then full ownership by GM prior to its ultimate bankruptcy.

But no brand has suffered a more severe blow than American Honda. The blow was as much psychic as financial and logistical, for Honda has long prided itself on its research acumen and its unique supplier relationships.

Having just finished reading “The Honda Way” by Jeffrey Rothfeder, I find the prospect of Honda being blindsided so miserably by a supplier, even one as important as Takata, almost unthinkable. Rothfeder details Honda’s extensive onboarding and vetting process for new suppliers, and the steady engagement with those suppliers to improve processes and reduce costs throughout the life of the relationship.

Honda is known to embed engineers at manufacturing facilities for its suppliers, according to Rothfeder, and the company isn’t above diving into the financial records of its suppliers to better identify and root out inefficiencies. These close ties have proven to be beneficial to both Honda and its suppliers. As trust and cooperation flower between Honda and its supply chain partners both organizations generally flourish.

Other car makers have equally strong ties to Takata. GM has been known to direct smaller Tier 2 suppliers to engage with Takata, making Takata something of an agent for GM. But no car maker snuggles up to partners like Takata in as intimate a fashion as Honda.

So it is painful that Honda stands as the leading victim of the Takata recall with 10.7M Honda and Acura vehicles affected, nearly one third of the total for all brands. Rothfeder’s “The Honda Way” was completed and published before the Takata recall crisis unfolded, so Rothfeder is forced to cover the event in an Afterword.

He writes, in part: “Honda has a rigid program of monitoring supplier activities closely and demanding annual improvements in quality and output. This policy has been in place for decades and results in a somewhat unorthodox relationship between Honda and its suppliers, one in which Honda essentially molds the companies that it buys parts and components from in its image, fusing its culture and operational practices with the suppliers’ to produce real collaboration.

“This approach was undone in Takata’s case for two reasons: Takata had an excellent perceived safety record throughout the auto industry, and Takata’s products were viewed as too proprietary and complex – unlike other parts, like electrical components, headlamps, engine components, and so on – for the automaker to diagnose potential problems. In taking this stance, Honda essentially gave Takata a free pass that few of its other suppliers enjoyed.”

The result for Honda is equivalent to the impact of the Tohoku earthquake and tsunami of 2011 which interrupted production for Honda and other car makers and suppliers. Honda recovered from the earthquake and tsunami – closely collaborating with suppliers to ensure they retained their personnel and operational readiness during the downtime in order to be prepared when production roared back to life.

The recall tsunami has brought with it a similarly forthright response. The difference is that recovery in this instance is heavily dependent on Honda’s dealer network. More than any other car maker, Honda and its dealers have embraced the challenge of tracking down affected cars to complete the recall work.

I have seen the outreach efforts by Honda dealers firsthand. My mother recently sold her 2003 Honda Civic, and she described the multiple calls she received from her Honda dealer trying to get the vehicle in for the recall repair. Honda dealers generally have embraced the effort to round up customers and pull them in for recall work, with one dealer in particular standing out: Kuni Honda in Colorado, which instituted an aggressive program of early-morning and after-hours recall work on behalf of its customers.

Even the efforts of the most determined dealer networks, such as Honda’s, face serious headwinds in trying to get potentially dangerous airbags off the roads. Consumers either can’t be bothered, or don’t take the matter seriously enough.

The Takata recall experience has taught the industry that the responsibility for customer and vehicle engagement by car makers and their dealers extends well beyond the point of sale. It has also taught us all that there is no free pass and that supplier relationships are forever, or at least for as long as the vehicle remains operational.

Takata betrayed an industry with its dangerous airbags. But no car maker felt that betrayal more directly and deeply than Honda.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Smarter Cities and The Internet of Things

Smarter Cities and The Internet of Things
by Bill McCabe on 10-06-2016 at 12:00 pm

Parking meters, information signs, CCTV, traffic signals: almost everywhere you look in a modern city there is a device with an embedded microchip, connecting to what has now become known as the all-encompassing Internet of Things. Although we often overlook the fact, cities are, in essence, huge and complex businesses. Cities compete for residents, investors, tourists, and even funding from central government. To remain relevant, cities have to become smarter, leaner, and more connected. The IoT is helping the world’s largest cities do this, and it’s all happening on a grand scale and at a phenomenal rate.

According to Gartner Research, in this year alone, 5.5 million new ‘things’ are expected to become connected every day. From consumer devices like smartphones and fitness trackers to interactive flat-panel displays and information kiosks, IoT is seeing huge adoption rates and staggering investment. Just over a year ago, an IDC FutureScape report predicted that local government bodies would account for up to a quarter of all government spending, specifically because of investment in the research and implementation of connected technologies.

Simple Ideas are Changing How Cities are Run
Looking at just a few of the innovative technologies from the last five years, it is possible to start developing a picture of what smart cities will look like within the next decade. Bitlock is an innovative technology that uses proximity keys to automatically activate or deactivate bike locks. At the same time, the system uses an owner’s smartphone to record the GPS location of the lock and bike. Such a system could be utilized on a large scale, such as in a bike sharing program in heavily congested cities. Private and government organizations could track bikes for better management, and they could even use the uploaded data to provide real time updates for bike availability, while also recording patterns of utilization.

Streetline is another smart city technology that shows great promise. Using networked parking sensors, Streetline can record parking availability in real time, and report to city officials and publicly available smartphone apps, simultaneously. The technology is in widespread use around Los Angeles, and as of May this year, over 490 million individual parking events had been recorded and reported using Streetline sensors. Studies have shown that smart parking systems can reduce peak parking congestion by up to 22%, and can reduce total traffic volume by 8%. With other technologies like IBM’s Intelligent Transportation Solutions, local governments could utilize devices to gather real time aggregated data which can be used to measure traffic volume, speed, and other metrics, which could be used to design better policy and city planning.
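As a rough illustration of the kind of real-time aggregation such a system performs (the data model here is invented for illustration, not Streetline’s actual API): the latest report per sensor wins, and the city publishes a per-block count of free spots:

```python
from collections import defaultdict

# Hypothetical sketch of networked-parking-sensor aggregation:
# each sensor reports (block, spot, occupied) events; the most
# recent report per spot wins, and free spots are counted per block.
def availability(events):
    """events: iterable of (block_id, spot_id, occupied) reports,
    in arrival order. Returns {block_id: free_spot_count}."""
    latest = {}
    for block, spot, occupied in events:
        latest[(block, spot)] = occupied      # later reports overwrite
    free = defaultdict(int)
    for (block, _spot), occupied in latest.items():
        if not occupied:
            free[block] += 1
    return dict(free)

reports = [
    ("main-100", "a1", True),
    ("main-100", "a2", False),
    ("main-100", "a1", False),  # later report: spot a1 freed up
    ("elm-200", "b1", True),
]
print(availability(reports))  # {'main-100': 2}
```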

Opportunities for IoT Skilled Professionals
Innovative technologies like these are just the beginning of what is possible in a smart city. Emerging technologies have the potential to make major cities more functional and convenient for residents and visitors, and more manageable for government bodies. Even so, there are still challenges to overcome. Infrastructure is a major challenge, and cities will need to plan and implement high speed networks, as well as the servers that are necessary to support their sensors and other systems. Storage and processing needs will increase as IoT becomes more widespread, and security will need to become a major area of focus. Security is not just necessary to safeguard systems, but also to protect end user privacy and data.

It’s clear that smart technologies and IoT are the future of the world’s major cities, which in turn means that experienced developers, operations professionals, engineers, and IT security specialists will be in high demand, with growing opportunities in the immediate future and in the years to come.

For more information please check out our new website www.internetofthingsrecruiting.com


Processors, Processors, Processors Everywhere

Processors, Processors, Processors Everywhere
by Tom Simon on 10-06-2016 at 7:00 am

At first glance a processor conference might seem a bit arcane; however, we live in an era where processors are ubiquitous. There is hardly any aspect of our lives that they do not touch in some way. Last week at the Linley Processor Conference the topics included deep learning, autonomous driving, energy, manufacturing, smart cities, commerce and more. The conference was led off by a keynote from its namesake, Linley Gwennap, who touched on all the main themes for the following two days.

The keynote presented many familiar topics and ideas, along with several surprising ones. Let me summarize the most interesting points.

Linley observed that increasing wafer costs have reached the point where the price per transistor is actually going up. 20nm was the crossover point for this. As Linley puts it – “Moore’s Law is only for the rich.” The effect of this will be that cost sensitive products will stay at 28nm. Thus mainstream products will be limited in the amount of integration they contain. However, high end products will continue to move to new advanced nodes because the justification for higher prices exists.

During the era of declining transistor costs, processors were in a race that demanded a new release every two years, which starved necessary architecture changes. We saw a continuous stream of general-purpose processors as a result. This is likely to change. Specialized architectures can offer a 10x or even 100x improvement in performance per watt. We see numerous examples of this from companies such as Tensilica, or Microsoft with its “Catapult” project, which combines an FPGA with a general-purpose processor. The other trend this will drive is the addition of more specialized processing in accelerators to offload CPUs.

Vision processing has become a very active area because it has widespread applications: gaming, mobile devices, industrial applications and advanced automotive. Vision Processing Units (VPUs) serving this market are available from Cadence, Ceva, Synopsys, and VeriSilicon. Large players such as NXP and Intel are entering this market through acquisitions.

Neural networks are also changing the processor landscape. Neural network training requires massive data bandwidth and high-precision floating-point processing, whereas the recognition (inference) process needs highly parallel processing at smaller data sizes. Many players are active in this market: Google has developed a special-purpose ASIC for TensorFlow, and there are also Wave Computing, Intel with its acquisition of Nervana Systems, IBM and others.

Linley pointed out that data center growth has been very good for Intel; this market has grown by 11%. Interestingly, the public cloud, which includes Amazon, Google and Alibaba, is the fastest-growing segment. Intel’s Xeon E5 has become the mainstream processor for two-socket (2S) systems, now using the 14nm Broadwell-EP. There is also the Xeon D, which offers a single-socket SoC with up to 16 Broadwell cores. Avoton initially targeted low-cost micro servers with its eight Atom cores; however, interest in micro servers has diminished, leaving Avoton to target the embedded market.

At the same time Intel is seeing challengers for Xeon. IBM has its Open Power initiative, which is bearing fruit with the several Power8 processors already available in servers. And in 2017 we can expect to see Power9 based servers. Also AMD is moving forward with a new Zen CPU.

There is also a bevy of activity in ARM based alternatives to the x86 architecture. AppliedMicro is seeking advantage with X-Gene 1 and 2. Cavium is focusing on high throughput with their upcoming ThunderX. QUALCOMM and Broadcom could also deploy new ARM server processors in 2017.

ARMv8 processors are moving into the embedded space. For instance, AppliedMicro and Cavium both offer embedded versions of their multicore ARMv8 server processors. QUALCOMM is likely to do the same on designs using over 24 ARMv8 CPU cores. NXP is now sampling QorIQ LS2 with up to eight Cortex-A72 cores. And the Broadcom StrataGX line moves to Cortex A57.

The usual distinction between server processors and embedded SoCs is beginning to blur. Some Intel processors are suitable for both server and networking designs. Activities such as network function virtualization are starting to use server infrastructure. What were previously network processors are starting to look like multicore embedded processors running SMP Linux and GNU tools. This means that some of these network processors come with vastly improved development environments. The winning future network architecture is in flux.

One of the big stories from this conference was how network function virtualization is moving from the core to the edge. This is enabled by new development tools and changing hardware. We are seeing intelligent network adapters that include processors or FPGAs offering a wide range of programmability. This makes it possible to offload a large number of tasks. In some cases, virtual switch functions are running in virtualized servers on the NIC. This will do a lot to improve performance and efficiency.

Linley feels pretty strongly that the real market for IoT is business. There is high motivation in business for the improvements that IoT can bring: anywhere there are cost savings, there is a reason to innovate. IoT offers a compelling business case because of the many ways it can improve process efficiency, conserve resources, improve security and grow markets. We can already see it being used in smart meters, parking, lighting, energy, vending machines and more.

The consumer and home market will certainly grow as well, especially as this technology gains momentum. Consumers are motivated by convenience, time savings, security, as well as status and social engagement. However, the direct economic benefits it brings are smaller and less tangible.

Processors will play a central role in making IoT data secure, which is an essential prerequisite for market growth. Linley emphasized that IoT data needs to be secure during transmission. With the ease of interception of wireless data, it needs to be encrypted. Stored data at rest in the cloud is also potentially vulnerable. Cloud service providers need to take precautions. IoT sensor and edge devices are vulnerable to hacking because they are difficult to physically secure. This is where secure boot comes into play. Also secure on-chip storage for crypto keys is necessary. Lastly, there needs to be a way to authenticate incoming commands.
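As a hedged sketch of that last requirement, authenticating incoming commands: a device can hold a shared secret in secure on-chip storage and accept only commands carrying a valid HMAC tag. This illustrates the principle only; a production design would also add nonces or counters to block replay attacks:

```python
import hmac
import hashlib

# Placeholder key: in a real device this would live in secure
# on-chip storage, never in source code.
SECRET_KEY = b"device-unique-key-from-secure-storage"

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Tag a command with HMAC-SHA256 before sending it."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Accept a command only if its tag matches; compare_digest
    runs in constant time to resist timing attacks."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"SET_REPORT_INTERVAL 60"   # hypothetical command format
tag = sign_command(cmd)
assert verify_command(cmd, tag)
assert not verify_command(b"SET_REPORT_INTERVAL 1", tag)  # tampered
```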

Next he spoke about how cars are now built containing dozens of processors. Microcontrollers are used for things like windows, wipers etc. Then there are the processors used for in-dash electronics. Applications include navigation, user interface as well as digital dashboards and the surround view video. Pile on top of this the processing needs for ADAS, and we can easily see why automotive is a huge and growing market for processors. Linley sees this market at $10 billion annually and growing. In fact, he is suggesting that it could double by 2025.

For a moment let’s look at the requirements for ADAS. The players in this market are names from the smartphone processor market. Nvidia, NXP and TI are all recognizable with Tegra, i.MX and OMAP, respectively. Though, most of these need a vision processing engine to boost performance in the ADAS application. This is where Neural Networks come into play. For recognition massively parallel 8-bit operations are needed. For ADAS there is also a sensor fusion processing requirement of the highest order. Radar, Lidar, ultrasonic, infrared and optical all need to be combined to create the internal 3D virtual world the ADAS system will use to make effective and safe operational decisions.
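To see why recognition can get by with massively parallel 8-bit operations, here is a simplified symmetric-quantization sketch (not any particular vendor’s scheme): values map to int8 with a scale factor, products accumulate in a wider integer, and the result is rescaled to float:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Map a float array (assumed nonzero) to signed integers
    plus a scale factor."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for int8
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def int8_dot(a, b):
    """Dot product on int8 values, accumulated in int32 to avoid
    overflow, then rescaled back to float."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = np.dot(qa.astype(np.int32), qb.astype(np.int32))
    return acc * sa * sb

a = np.array([1.0, -0.5, 0.25, 0.125])
# int8_dot(a, a) tracks np.dot(a, a) to within quantization error
```

The int8 multiply-accumulate is what vision and inference engines parallelize by the thousands; only the final rescale touches floating point, which is why inference hardware can be so much cheaper per operation than training hardware.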

Already Tesla, Volvo and others offer driver assist, which in many cases does an impressive job. These systems require driver supervision, but greatly aid in reducing driver workload. The first wave of autonomous vehicles could hit the market in 2018. At the very least they will be built with the processing power, but may lack the final software. Ford recently announced that it plans to produce a fully autonomous vehicle with no steering wheel in 2021.

While these systems may add $5,000 or more to the BOM for a car, businesses like Uber or Lyft would stand to see huge net savings if they can eliminate the cost of drivers. I know this all seems like science fiction, but we are poised on a dramatic precipice. My own thought on the rapid progress in autonomous vehicles is that it stems from the exceptionally heavy traffic found in Silicon Valley. Nowhere else will you find the motivation and the talent together to accomplish such a difficult task.

After Linley’s keynote there were two days of detailed presentations on a wide range of topics, including those above. For more information about this and other Linley conferences follow this link.