Moore’s Law Drives Foundries and IP Providers
by Daniel Payne on 01-19-2018 at 7:00 am

2017 was a banner year for semiconductor sales, which topped $400B for the first time, an increase of some 20%; there is happiness in Silicon Valley, Taiwan, South Korea, and, well, everywhere. With the foundries pushing to ever-smaller process dimensions, and even going back to mature nodes to offer variations that are more power- or area-efficient, I am noticing a proliferation of process nodes to choose from. The big challenge that crops up is how to get all of that much-needed semiconductor IP onto the latest process node in order to attract new customer design starts. I’ve been following a semiconductor IP provider in the UK called Moortec that offers embedded in-chip monitoring blocks, allowing chip engineers to dynamically monitor Process, Voltage and Temperature (PVT). What’s new this week is their announcement that their PVT monitoring IP now supports the TSMC 12nm FinFET process, 12FFC.

The 12FFC process from TSMC has a new six-track (6T) standard cell library, compared with the 9T and 7.5T libraries used at the 16FFC node. Going from 16FFC to 12FFC you would therefore expect an area decrease of up to 18% and about 5% higher speed. The nominal supply voltage can go down to 0.5V.
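
As a quick sanity check on those numbers: standard-cell area scales roughly with the track count times the metal-2 routing pitch. A back-of-the-envelope sketch (with a purely hypothetical pitch value, since exact foundry dimensions aren’t public) shows the arithmetic of a 7.5T-to-6T library move:

```python
# Back-of-the-envelope standard-cell area scaling -- purely illustrative.
# The metal-2 pitch below is a hypothetical placeholder, not a TSMC figure,
# and 12FFC is assumed to keep the 16FFC routing pitch.
M2_PITCH_NM = 64.0

height_7p5t = 7.5 * M2_PITCH_NM   # 16FFC 7.5-track cell height, nm
height_6t   = 6.0 * M2_PITCH_NM   # 12FFC 6-track cell height, nm

# With cell width (gate pitch) held constant, cell area tracks cell height.
ratio = height_6t / height_7p5t
print(f"6T vs 7.5T cell area: {ratio:.2f}x (~{(1 - ratio) * 100:.0f}% smaller)")
# -> 0.80x, i.e. ~20% smaller per cell; real utilization and routing overhead
#    pull the chip-level number back toward the quoted 'up to 18%'.
```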

Related blog – Top 10 Updates from the TSMC Technology Symposium, Part II


TSMC 12FFC photograph. Source: TSMC

Related blog – Embedded In-chip Monitoring, Webinar Recap

So why do you need in-chip monitoring?

Here are seven good reasons to consider using in-chip monitoring:

  • Optimize SoC performance
  • Detect process variations per chip
  • Enable Dynamic Frequency and Voltage Scaling (DVFS) to optimize power
  • Gate delay measurements
  • Critical path analysis
  • Dynamic voltage analysis
  • Monitor aging effects of the FinFET transistors

OK, you like the concept of PVT monitoring and see the benefits, but how do you communicate with this IP?

Moortec’s IP uses an AMBA APB interface, and you can even have multiple instances of the monitoring IP on the same chip, connected through a test access port.
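
To make the interfacing concrete, here’s a minimal software-side sketch of reading such a monitor, assuming the APB slave is memory-mapped into a processor’s address space. To be clear, the register names, offsets and code-to-temperature scaling below are hypothetical placeholders, not Moortec’s actual register map:

```python
# Hypothetical sketch of reading a memory-mapped PVT monitor over APB.
# Register offsets, bit fields and scaling are invented for illustration;
# the real map comes from the IP vendor's documentation.
import mmap
import struct

PVT_BASE   = 0x40010000   # hypothetical APB base address of one monitor
REG_CTRL   = 0x00         # hypothetical control register (write 1 = start)
REG_STATUS = 0x04         # hypothetical status register (bit 0 = done)
REG_TEMP   = 0x08         # hypothetical temperature data register

def read_temperature(base: int = PVT_BASE) -> float:
    """Trigger a conversion and return degrees Celsius (hypothetical scaling)."""
    with open("/dev/mem", "r+b") as f:
        mem = mmap.mmap(f.fileno(), 0x1000, offset=base)
        mem[REG_CTRL:REG_CTRL + 4] = struct.pack("<I", 1)   # start conversion
        while not struct.unpack("<I", mem[REG_STATUS:REG_STATUS + 4])[0] & 1:
            pass                                            # poll for done bit
        raw = struct.unpack("<I", mem[REG_TEMP:REG_TEMP + 4])[0]
        mem.close()
        return raw * 0.0625 - 40.0   # hypothetical code-to-Celsius conversion
```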

Read the latest press release from Moortec here.


ISS 2018 – The Impact of EUV on the Semiconductor Supply Chain
by Scotten Jones on 01-18-2018 at 8:00 am

I was invited to give a talk at the ISS conference on the Impact of EUV on the Semiconductor Supply Chain. The ISS conference is an annual gathering of semiconductor executives to review technology and global trends. In this article I will walk through my presentation and conclusions.
Continue reading “ISS 2018 – The Impact of EUV on the Semiconductor Supply Chain”


Thermal Modeling for ADAS goes MultiPhysics
by Bernard Murphy on 01-18-2018 at 7:00 am

In electronic system design, we have grown comfortable with the idea that different regimes of analysis – chip, package and system, or electrical, thermal and stress – are more or less independent: what starts in one regime stays in that regime, give or take some margin information passed on to other regimes. And why not? It has worked pretty well for us so far. But now we face a convergence of factors challenging the effectiveness of that decoupling: ADAS expecting significantly longer system lifetimes in more extreme environments, FinFET technologies in which self-heating and Joule heating are becoming more important, and wafer-level packaging technologies crowding more electronics into smaller spaces.


Something has to give and what’s giving in this case is the assumption that these factors can be modeled independently. Which is a little scary – now you have to think about modeling heating at the chip, package and system levels across a wide range, from hot-spots on die to a wafer-level package and system enclosures, cooling effectiveness through radiative and convective cooling, and mechanical/stress factors where bonds may break or traces may lift off the interposer or board. Putting all this together requires a broader portfolio of technologies than we commonly expect in EDA.


Start with thermal-aware electromigration (EM) analysis, a factor of great importance to reliability in devices where power levels can switch, such as PMICs or power-managed SoCs. In such cases, higher temperatures mean higher resistance, and power-switching means inrush currents through those higher-resistance paths, so you have a higher risk of EM. Assuming that massive over-design is not an option, selectively mitigating problem cases requires a fine-grained understanding of heating across the die, which isn’t practical with standard EDA thermal analytics; enter finite-element analysis (FEA). You use FEA to model thermal effects within small, manageable regions and a variety of methods to minimize and smooth out discontinuities between those regions. All of which helps you understand true temperature exposure at a detailed level, which in turn allows you to manage EM risk at the same level without having to over-design everywhere.
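
The temperature sensitivity is easy to see from Black’s equation, the standard model for EM lifetime. A small sketch with illustrative (not foundry-calibrated) parameters:

```python
# Black's equation for electromigration MTTF: A * j^-n * exp(Ea / kT).
# Parameter values are illustrative only, not calibrated foundry data.
import math

A  = 1.0e5     # process-dependent constant (arbitrary units)
N  = 2.0       # current-density exponent, typically between 1 and 2
EA = 0.9       # activation energy, eV (illustrative)
K  = 8.617e-5  # Boltzmann constant, eV/K

def mttf(j: float, temp_c: float) -> float:
    """Median time to EM failure at current density j and temperature temp_c."""
    return A * j ** -N * math.exp(EA / (K * (temp_c + 273.15)))

# The same wire at the same current, 20 degrees hotter at a local hot-spot:
print(mttf(1.0, 105.0) / mttf(1.0, 125.0))   # ~4x shorter predicted lifetime
```

With these parameters, missing a 20°C hot-spot translates into roughly a 4x error in predicted EM lifetime, which is why die-average temperatures are not good enough.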


Thermally-induced stress is another important factor for reliability. In wafer-level packaging (WLP), as one example, thermal cycling of the very thin redistribution layers (RDL) popular in these packages can stress traces, vias and dielectric; cumulative stress cycling leads to fatigue and therefore to reliability problems (increased resistance in connections, or opens). A perhaps lesser-known problem is the very significant increase in the coefficient of thermal expansion (CTE) of dielectrics above the glass transition temperature (Tg), which can lead to significant warpage in the dielectric, with obvious consequences for reliability. Thermally-induced stress analysis is therefore particularly important for the highly complex structures found in these technologies, built from a variety of materials with unavoidably nonlinear thermal expansion properties.
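
A first-order feel for the numbers comes from the classic CTE-mismatch relation, σ ≈ E·Δα·ΔT. The sketch below uses typical textbook material values, purely for illustration, to show why crossing Tg is such a problem:

```python
# First-order thermal stress from CTE mismatch between two bonded materials:
# sigma ~= E * (alpha_a - alpha_b) * delta_T. Ignores geometry and plasticity;
# material values are typical textbook figures, used only for illustration.

E_CU        = 120e9    # Young's modulus of a copper RDL trace, Pa
ALPHA_CU    = 17e-6    # CTE of copper, 1/K
ALPHA_DIEL  = 55e-6    # CTE of a polymer dielectric below Tg, 1/K (illustrative)
ALPHA_ABOVE = 150e-6   # CTE of the same dielectric above Tg -- the step the text notes

def mismatch_stress(alpha_a: float, alpha_b: float, delta_t: float) -> float:
    """Approximate stress (Pa) from CTE mismatch over a temperature swing (K)."""
    return E_CU * abs(alpha_a - alpha_b) * delta_t

below = mismatch_stress(ALPHA_CU, ALPHA_DIEL, 100.0)
above = mismatch_stress(ALPHA_CU, ALPHA_ABOVE, 100.0)
print(f"below Tg: {below / 1e6:.0f} MPa, above Tg: {above / 1e6:.0f} MPa")
# The same 100 K swing produces several times the stress once Tg is crossed.
```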


Finally, there’s the small matter of cooling. These are thermal problems after all, so cooling is a part of mitigating those problems. In an ADAS enclosure, modeling the impact of proposed cooling solutions (fans, device positioning, reducing heat from devices) calls for modeling through CFD. Not a chip problem you say? Remember that opening paragraph. We’re building ADAS solutions and they have to be co-optimized in the total package. So yes, CFD modeling is a part of the analysis, perhaps as part of a collaborative development between the Tier 1 and the chip-developer, but each has to be able to exchange models and results to collaborate effectively.

ANSYS unsurprisingly has solutions to these needs, from their chip-package-system (CPS) modeling to their system level thermal and mechanical modeling. You can learn more from work they describe in a paper they presented on thermal-induced stress for fanout wafer-level packaging HERE and a paper they presented on a transient thermal simulation methodology for PMICs HERE.


A Reliable Way to Forecast Growth of Semiconductor Markets
by Daniel Nenni on 01-17-2018 at 12:00 pm

Wally Rhines, President and CEO of Mentor, a Siemens Business, did another of his famous deep learning presentations at SEMI ISS 2018. Using the Gompertz curve lifecycle model to forecast the future growth of semiconductor markets, Wally looked at image sensors, desktop PCs, notebook PCs, cell phone subscribers, smartphones, and IoT products (smart meters, wearables, fitness trackers, and medical wearables). Wally then applied Gompertz to markets whose growth hit limits: 3D TVs, automotive night vision, and drowsiness detection systems.

Next, Wally used Gompertz to answer the burning question: where are we in the life cycle of semiconductor manufacturing? As it turns out, we still have quite a bit of room to grow before we need an alternative to the silicon transistor switch.
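
For readers who haven’t met it, the Gompertz curve models cumulative adoption as a double exponential, y(t) = a·exp(−b·exp(−c·t)), where a is the saturation level. A minimal fitting sketch, with made-up shipment data purely to show the mechanics of the method:

```python
# Minimal Gompertz lifecycle fit. The shipment data below is fabricated
# purely to demonstrate the mechanics of the forecasting technique.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """a = saturation level, b = displacement along t, c = growth rate."""
    return a * np.exp(-b * np.exp(-c * t))

years = np.arange(10)                                          # years since launch
units = np.array([2, 5, 12, 30, 55, 80, 100, 112, 118, 121])   # made-up units (M)

(a, b, c), _ = curve_fit(gompertz, years, units, p0=(150, 5, 0.5))
print(f"fitted saturation level: ~{a:.0f}M units/year")
print(f"year-15 forecast: ~{gompertz(15, a, b, c):.0f}M units/year")
```

Once fitted, the position of the latest data point along the S-curve tells you how much headroom a market has left, which is exactly the question Wally asked of semiconductor manufacturing.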

Wally and I have discussed the non-traditional semiconductor chip company phenomenon many times, and he gave a few prime examples. Automotive: there are hundreds of new car companies with products in development; consolidation will come very quickly, but not before the EDA and IP companies collect their chip design toll. Medical: remember the Google contact lens project that could potentially track hundreds of biomarkers, including glucose, for the hundreds of millions of people with diabetes? A slew of health- and wellness-related chips and devices is coming. IoT: thousands of companies are buying EDA tools and IP to make chips that will be in just about every device we touch in the coming years.

In his “Where’s the Money” section Wally focused on IoT data center, gateway, and edge devices, and rightly so. Here are the “IoT As a Source of Future Semiconductor Revenue Growth” bullet points:

  • IoT data owners will make most of the money
  • Traditional semiconductor companies will attempt to capture data ownership value
  • Growing semiconductor/sensor capability will create new semiconductor applications and bring new companies into semiconductor design

Wally sent me a copy of his presentation which is available HERE.

During breakfast Wally and I discussed a wide variety of topics both personal and professional. I met Wally in my early blogging days when my mantra was “I blog for food”. Wally invited me for lunch and we have been friends ever since. While it is important that I do not play favorites (as the founder of a mega semiconductor media channel) what I can tell you is that Wally is my beautiful wife’s favorite EDA CEO, absolutely. She says Wally is charming…

At the conference Wally was more relaxed and upbeat than ever before. Engineering the most incredible EDA exit of all time (Siemens) is only part of it. The transformation of Mentor, a Siemens Business, over the last year is something I feel honored to witness. Seriously, in the last 30 years I have seen nothing like it! It is a shame Siemens does not break out IC design revenue, because my guess from customer visits in 2017 is that Mentor, a Siemens Business, is consuming EDA market share at an unheard-of rate.


Mentor Tessent Products Ready for Second Edition of ISO 26262 Coming in March 2018
by Mitch Heins on 01-17-2018 at 7:00 am

Have you noticed how smart your automobile is getting? Watching the first round of the NFL playoffs, I lost count of the number of TV commercials showing cars weaving through tight construction zones (and Star Wars figures), big trucks parking in incredibly tight spaces, cars avoiding rear-end collisions and pedestrians, and even a pickup with specialized sensors for pulling a trailer. That of course is just the beginning. There is much more to come, and the electronics for these systems will be big business for IC providers, but it does raise the question of safety and reliability. And as you would expect, more standards are coming out to address this very thing.

Mentor, a Siemens Business, issued a new white paper outlining the upcoming second edition of the ISO 26262 functional safety standard for road vehicles. The second edition adds sections to cover heavier road cars, trucks, buses and motorcycles. More interesting to IC folks is a completely new section of the standard that covers design and test of semiconductors that go into vehicles. IC providers hoping to leverage this market need to take notice of these changes.

The original ISO 26262 was intended to be applied to safety-related systems that include electrical and/or electronic (E/E) systems in series-production passenger cars with a maximum gross weight of 3,500 kg. The standard defines what is meant by “safety,” how safety goals are determined, and what constitutes a “safe state” (i.e., where do we end up when a malfunction does occur). The standard also covers the safety life cycle across management, development, production, operation, service and eventual decommissioning.

Per ISO 26262, designers must develop a safety plan to achieve stated safety goals. Safety integrity levels for automotive-specific risks are classified by the severity of the effects of a device failure. These are known as ASILs (Automotive Safety Integrity Levels) and range from ASIL-A through ASIL-D, with ASIL-D being the most stringent: ASIL-D implies a likely potential for severe, life-threatening or fatal injury in the event of a malfunction. Each vehicle E/E component is ASIL-classified based on the severity of the effects of a failure on the driver and passengers, as well as on persons near the vehicle.
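
For the curious, part 3 of the standard derives the ASIL for each hazardous event from three classifications: severity (S), probability of exposure (E) and controllability (C). The published risk graph happens to collapse into the compact sum rule sketched below; treat this as an illustration and check the actual tables in the standard before relying on it:

```python
# ASIL determination sketch based on the ISO 26262-3 risk graph. The standard
# publishes this as a table over Severity (S1-S3), Exposure (E1-E4) and
# Controllability (C1-C3); the table reduces to a sum rule, encoded here.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Return the ASIL for one hazardous event; class 0 in any dimension -> QM."""
    if min(severity, exposure, controllability) < 1:
        return "QM"
    score = severity + exposure + controllability
    return {7: "ASIL-A", 8: "ASIL-B", 9: "ASIL-C", 10: "ASIL-D"}.get(score, "QM")

# Unintended braking at highway speed: highest severity, high exposure,
# hard for the driver to control -> the most stringent level.
print(asil(severity=3, exposure=4, controllability=3))   # ASIL-D
print(asil(severity=2, exposure=3, controllability=2))   # ASIL-A
```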

ISO 26262 was first introduced in late 2011. The standard was revised in 2012 to add clarifications (section 10), and again in 2016 to add section 11, dealing with semiconductors, and section 12, dealing with the additional vehicle types already mentioned. The 2016 draft is the basis for the 2nd edition of the standard, to be released in March of 2018.

Mentor’s interest in this comes from their Tessent family of design automation tools, which address the quality and reliability of semiconductors during both manufacturing and in-system operation. Tessent test solutions are used to target zero-DPM silicon, and their diagnosis and yield analysis capabilities let designers quickly find the root cause of field returns and identify the systematic defects that lead to yield excursions. Designers use Tessent products to show evidence (as required by ISO 26262) of how functional safety of a work product has been achieved. Not only can the tools be used to demonstrate how functional safety is built into an IC, but Mentor also documents specific customer use cases against which the tools’ impact on safety can be judged, so the Tessent tools themselves get ISO certification.

The new part 11 of the ISO 26262 spec gives a comprehensive overview of functional-safety-related items for the development of semiconductor parts. Pertinent to Mentor is the Design-for-Test (DFT) section, which covers hardware faults, errors and failure modes, including definitions of fault models and failure modes in relation to functional safety. Semiconductor IP is also addressed: how to qualify an IP, and how that IP affects the parts of the design that use it.

Design-for-manufacturing (DFM) tools work to identify systematic design issues that can cause failure and yield loss. ISO 26262, however, also focuses on random failures that may be introduced by the environment; causes may be things like vibration, moisture and dirt, or circuit effects like noise, EMI or electromigration. The new part 11 of the standard gives clarity and guidelines, with examples, for how to calculate and use base failure rates. It also provides guidelines for identifying possible common-cause and cascading failures between elements through something known as Dependent Failure Analysis (DFA). Look for more functionality from the Mentor Tessent family of products to address these kinds of analyses in the future.

In the meantime, the new section 11 describes important semiconductor use cases covering digital components, memories, analog and mixed-signal, programmable devices, and multi-core components, as well as sensors and transducers. These are all right down Mentor’s alley, especially with their recent focus on internet-of-things (IoT) design flows that cut across nearly all of Mentor’s IC products. Mentor’s Tessent DFT and yield family of tools are all ISO 26262 qualified and available now to help IC makers go after the automotive IC market. Check out their white paper for more details on the Tessent product line and the upcoming 2nd edition of ISO 26262.

See Also:
White Paper: ISO 26262 Second Edition: What’s New for Semiconductor Test?
Mentor Tessent DFT and Yield Family of Tools


A Golden Age for Semiconductor Growth at ISS 2018
by Daniel Nenni on 01-16-2018 at 4:00 pm

The SEMI Industry Strategy Symposium (ISS 2018) started today with session one on economic trends. Daniel Niles (Alpha One Capital Partners) started it off with “A Golden Age for Global Growth – Semiconductor revenues up over 20% y/y (The Good News is Also the Bad News)”. The good news of course is that semiconductors continue to play a critical role in commercial “quality of life” products and infrastructure (cloud) and defense products (HPC/AI). The bad news of course is that it may not last.

Daniel presented 41 slides covering a wide variety of economic indicators which point to a possible cooling down period. Daniel’s predictions for the short and long term were summarized in his ending slides:

Semiconductor Industry Near Term:

  • Semi revs tracking up +21% y/y for CY17 with units up 15% ex-discretes. This would be the best growth since 2010 when the world was still recovering from the financial crisis. But in 2010 PC units grew 14% (-2% in 2017), cellular phone units grew 32% (4% in 2017), and autos grew 14% (2% in 2017).
  • Y/Y growth of +23% in Nov; +22% in Oct and +20% in September with high of +25% in August
  • Ex-memory, IC sales were +7% Y/Y in September, down from +11% in August, +10% in July, & +15% in May.
  • C17E WFE spend of $45B (up 30% y/y) is growing even faster than semi revs. Excess supply in ’18/19?
  • How much Apple inventory, +128% y/y in Q3, will need to be burned in Q1 given revs were up only 12% y/y?
  • Cisco inventory up 44% y/y; HPQ up 29% y/y; (strategic memory buys) but Samsung Elec +41%; Hynix +23%
  • Auto not large end-market for memory: Volkswagen, Toyota & Ford inventories are up 9-18% y/y; revs 1-11%
  • Lead-times stretched out in the spring and have driven the build in inventories. Even if sell-through is strong during the holiday season, the March qtr is seasonally down in demand and inventory burn could be ugly.
  • Most semi companies claim “content gains are driving the stronger demand; there is no double ordering or inventory buildup.” But unit shipment growth was 5% in 2016 versus 15% in 2017, which does not make sense at a high level. We believe a semi correction is coming by early 2018; the only question is severity, given the big disconnect between end-market unit growth, inventory on balance sheets, and semiconductor unit growth.

Predictions for Semi Industry over the Long-Term:

Positives:

  • Cloud computing still has a long way to go, with even more data created in the future from driverless cars
  • Artificial Intelligence will require immense computing power to replicate the 100B neurons in one human brain
  • A computer beat a Go master for the first time this past year
  • Augmented Reality market supercharged with the launch of the iPhone X and ~$10 of content per phone
  • 3D sensing will be one of the fastest semi growth markets, with penetration of 1.5B phones per year
  • Fully autonomous cars, expected by 2020/2021, will require an immense increase in semiconductor content
  • Nearly 1.3 million people die in road crashes worldwide every year
  • UK and France will prohibit production of diesel and petrol cars by 2040; China is looking at a timeline for similar action
  • Volkswagen to invest €20B for electric versions of all models by 2030
  • Industrial robot market to grow from $12B in 2016 to nearly $35B by 2025; robots should be taxed, according to Bill Gates
  • 5B connected devices in 2015, with ~20% growth through 2020; 1T cumulative IoT devices shipped by 2035
  • Voice and camera as the control inputs for computing/phones require a large increase in computing power
  • Smartphone growth should continue at GDP-plus as the onramp of choice to the internet
  • PC growth likely to be GDP-minus for many years as the smartphone continues as the device of choice

Potential Risks:

  • China is a blessing/curse with its 70% IC self-sufficiency goal by 2025 (think Japan in the 1980s, Korea/Taiwan in the 1990s)
  • Semiconductor debt levels have increased post-mergers, and interest rates are probably headed higher
  • Semiconductor mergers are slowing, so investments will be needed to drive future earnings growth
  • A $500B trade deficit is a lot to fix even for an $18 trillion economy, and border taxes would be disruptive

The full presentations are online for attendees and SEMI members. There is a LOT of information here, and I’m a big fan of Daniel Niles, so if you want to discuss this in more detail let me know in the comments section or via SemiWiki.com private email. Personally, however, I see no barriers to double-digit semiconductor growth for 2018, and I am very optimistic about the continued success of the fabless semiconductor ecosystem, absolutely.


Scoreboard and Issues Management Tools for PCB Projects
by Tom Dillinger on 01-16-2018 at 12:00 pm

The complexity of an SoC design necessitates that the project managers have accurate visibility into the overall design status, spanning the entire range of tasks – from functional simulation error triage, to physical layout verification errors, to electrical analysis results. Flow scripts used by SoC teams parse the log file data generated by the underlying EDA tools invoked, to capture the status of each design block. These output results are stored in a database, from which a project scoreboard application pulls information for a view into the complete project snapshot (and history).

Given the variety of flows and the disparate nature of EDA vendor tool log files, scoreboard development (and maintenance) for an SoC project requires significant CAD team resources. The scoreboard application includes detailed design revision and methodology dependency checking as well: as project version updates are applied to ongoing design releases, the scoreboard database needs to ensure flows are re-executed and old results are invalidated. The CAD and methodology teams collaborate to capture the flow dependency criteria. In return for the investment, the scoreboard information is extremely valuable, providing insight into SoC project schedule milestone risks and areas where design engineering resources may need re-balancing.
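
A toy version of such a flow script might look like the sketch below: scrape a tool log for a verdict and record it per block and per flow. The log pattern, schema and file name are hypothetical, since every CAD team keys off its own vendors’ log formats:

```python
# Hedged sketch of a scoreboard flow script: scrape an EDA tool log for a
# pass/fail verdict and record it per design block. The log pattern, table
# schema and file names are hypothetical placeholders.
import re
import sqlite3
import time

ERROR_RE = re.compile(r"^(Error|ERROR|\*E)", re.MULTILINE)  # hypothetical pattern

def record_status(db, block: str, flow: str, logfile: str) -> None:
    """Parse one tool log and record the block's status in the scoreboard DB."""
    text = open(logfile).read()
    status = "FAIL" if ERROR_RE.search(text) else "PASS"
    db.execute(
        "INSERT INTO scoreboard (block, flow, status, run_time) VALUES (?,?,?,?)",
        (block, flow, status, time.time()),
    )
    db.commit()

db = sqlite3.connect("project_scoreboard.db")
db.execute("""CREATE TABLE IF NOT EXISTS scoreboard
              (block TEXT, flow TEXT, status TEXT, run_time REAL)""")

# Demo run with a fabricated log file, just to exercise the flow:
open("cpu_core_drc.log", "w").write("Running DRC...\n0 violations found.\n")
record_status(db, block="cpu_core", flow="drc", logfile="cpu_core_drc.log")
```

A real flow adds the revision and dependency checking described above, so a stale PASS is invalidated when the design or methodology version moves forward.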

Another key SoC application is issues management, often simply referred to as the bug tracking tool. The issues application utilizes another database, with a rich and diverse set of information to be stored and queried – e.g., text, graphics, issue priority, issue owner, reviewers, approvals required to close, date opened, target close date, design block(s) impacted, related specifications, model build configuration (with all version tags), etc. This information evolves over time, as the investigation of the issue results in comments, proposals, dependencies on other issues, and a resolution recommendation that will need to be reviewed/approved/implemented/verified.

Both commercial and open-source issues management applications are available, and have been adopted by SoC teams. Often, a tool developed for software defect management will be adapted by the CAD team for the specific requirements of an SoC design.

Scoreboard and issues tracking tools are a fundamental aspect of any SoC design project management activity. PCB design projects have comparable tracking requirements, plus some unique characteristics. A PCB project pulls a broad cross-section of teams into design reviews – especially component qualification engineering and component procurement. And there is typically not a great deal of CAD resource available to develop and maintain scoreboard and issues applications for the PCB team. (Fortunately, there is less diversity in the EDA tool platforms used to capture, design, verify, and release PCB data for manufacture.)

I recently had the opportunity to chat with Mark Hepburn, Product Management Director, System/Package/Board, in the Custom IC & PCB Group at Cadence, about project management tools for PCB designs. Mark highlighted, “We recognized the need for a broad set of users to obtain project metrics and analytics from the Allegro PCB platform. The users may not be actively working with Allegro, such as the project manager of a PCB design, or the Supply Chain Management organization overseeing multiple designs. We recently introduced Allegro Pulse, an enterprise-grade server database environment, with web portal views into different project tracking applications.”

Mark shared a few screen shots from Allegro Pulse, for examples of the analytic data available. The figure below illustrates the Bill-of-Materials parts management view, which provides the supply chain group early visibility into the project BoM, with comparisons to preferred parts list libraries.


The screen shot below provides an example of a PCB project scoreboard view. (The full scoreboard web portal page is customizable.) The view may include metrics built-in to Pulse and custom metrics defined by users. The built-in metrics include the status of Allegro PCB checking tools – e.g., component placement and connectivity checking, electrical rules checks. The scoreboard view may also incorporate detailed analytics – e.g., pin complexity calculations.


The figure below illustrates the issues management application in Pulse, directly accessible by users from an Allegro toolbar pull down menu. The issues app includes features for database search, e-mail list notification, and a summary rollup into the scoreboard.


A general project management application is also provided in Pulse – the status of project tasks, schedule, progress toward schedule milestones, etc., is coordinated with the other apps.

The complexity of current PCB designs requires project management tool support comparable to what SoC designs have employed. Allegro Pulse addresses that need with a suite of integrated, customizable PM applications.

For more information on Allegro Pulse, here are two links that may be of interest – the first link is the general product landing page (with a video), while the second link is an overview product description (.pdf file).

-chipguy


Better than CNN
by Bernard Murphy on 01-16-2018 at 7:00 am

No, not the news network, though I confess I am curious to see how many initial hits that title attracts. Then I clarify that I’m talking about convolutional neural nets, and my would-be social media fame evaporates. Oh well – for those few of you still with me, CNNs in all their many forms are the technology behind image, voice and other types of recognition. Taking an image as an example, a pixel array of that image is passed through a series of layers of neuron-like computations – convolution, activation, pooling (the details vary), potentially many layers – to produce an output. Initially the system is trained on labeled images (“this is a robot”, in this case) and, through an iterative process across many examples, this unlikely structure adjusts until it can recognize, in any image, the thing for which it was trained.
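
For the curious, the layer stack just described is only a few lines in a modern framework. Here is a minimal, untrained sketch in PyTorch, with layer sizes chosen arbitrarily for a 32x32 RGB input:

```python
# Minimal CNN sketch matching the pipeline described above: convolution,
# activation, pooling, repeated, then a classification layer. Untrained --
# the iterative training over labeled examples is what gives it its power.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution over RGB pixels
    nn.ReLU(),                                    # activation
    nn.MaxPool2d(2),                              # pooling: keep strongest signal
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer, larger features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # scores for 10 output classes
)

image = torch.randn(1, 3, 32, 32)   # one 32x32 RGB image (random stand-in)
print(model(image).shape)           # torch.Size([1, 10])
```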


CNNs are so amazingly effective that they have become true media stars, at least under the general heading of AI, able to recognize dogs, tumors, pedestrians in front of cars and many more high-value (and not so high-value) tricks. They’re even better than us mere humans. Which makes you wonder if CNNs are pretty much the last word in recognition, give or take a little polishing. Happily no, at least for those of us always craving the next big thing. CNNs are great, but they have their flaws.

In fact, trained CNNs can be surprisingly brittle. Hacking images for misidentification has become a sport, and it seems to be remarkably easy, sometimes requiring changes to only a few pixels. After a spectacularly inept and un-gamed blunder, Google acknowledged that “we still don’t know in a very concrete way what these machine learning models are learning.”

It’s fun to speculate on the mystery of how these systems are already becoming so deep and capable that we can no longer understand how they work, but that lack of understanding is a problem when they don’t work correctly. Even within the bounds of what they can do, while CNNs are good at translational invariance (doesn’t matter if the cat is on the left or the right of the image), they’re not so good at aspect / rotational invariance (cat turned to the left or the right or standing on its head), unless in the training you include many more labeled examples covering these variants. Which doesn’t sound very intelligent; we mere humans don’t need to see objects from every possible aspect to be able to generalize.

Geoffrey Hinton (U Toronto and Google and a towering figure in neural nets) has been concerned for a long time about weaknesses in the CNN approach and thinks a different method is needed, still using neural nets but in a quite different way. He argues that the way we render computer graphics is a clue. We start with a hierarchical representation of the data, small pieces which are placed and oriented relative to other nearby pieces, forming together larger pieces, which are placed and oriented relative to other large pieces, and so on. He believes that our brains effectively do the inverse of this. We recognize small pieces along with their placement and orientation relative to other small pieces, recursively up through the hierarchy. He calls these sub-components capsules.

You might argue that this is just what CNNs do, recognizing edges, which are then composed into larger features, again recursively through the network. But there are a few important differences as I understand this. CNNs use pooling to simplify regions of an image, sending forward only the strongest signal per pool. Hinton thinks this is a major weakness; the strongest signal from a pool may not be the most relevant signal (at any given layer) if you’re not yet sure what you are going to recognize. Moreover, pooling weakens spatial and aspect relationships between parts of the image.

Additionally, CNNs have only a 2D understanding of images. Capsules build rotation + translation pose matrices for what they are seeing (remember again 3D graphics rendering). This becomes important in recognition in subsequent capsules. Recognition depends on relative poses between capsules; some will correlate with certain trained objects, others will have no correlation. Capsule-based networks consequently need little training on aspects/poses.

Another difference between the CNN approach and the capsule approach is how information is propagated forward. In a CNN, connections between layers are effectively hard-wired: each element (neuron) in a layer can only communicate with a limited set of elements in the next layer, since connecting to all of them would be massively costly (only in the much smaller final layers is full connectivity allowed). In capsule-based networks, routing is dynamic: a capsule sends its output to whichever downstream capsule most strongly ‘agrees’ with it, so in effect capsules build a voting consensus on what they are seeing. This appears to give CapsNets a huge advantage in accuracy; they can learn on training sets of hundreds of examples rather than tens of thousands.
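
For concreteness, here is a compact sketch of the routing-by-agreement loop along the lines of the Sabour/Frosst/Hinton paper. The tensor sizes are arbitrary and the prediction vectors are random stand-ins for what a real network would compute:

```python
# Sketch of routing-by-agreement (after Sabour, Frosst & Hinton, 2017).
# u_hat[i, j] is child capsule i's prediction vector for parent capsule j;
# here it is random, standing in for what a trained network would produce.
import torch

def squash(s, dim=-1):
    """Shrink vector length into [0, 1) while preserving orientation."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1 + n2)) * s / (n2.sqrt() + 1e-8)

def route(u_hat, iterations=3):
    n_in, n_out, _ = u_hat.shape
    logits = torch.zeros(n_in, n_out)            # routing logits b_ij
    for _ in range(iterations):
        c = logits.softmax(dim=1)                # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(0)     # weighted sum per parent
        v = squash(s)                            # parent capsule outputs
        logits = logits + (u_hat * v).sum(-1)    # reward agreement: u_hat . v
    return v

v = route(torch.randn(32, 10, 16))   # 32 child capsules -> 10 parents, 16-D pose
print(v.shape)                        # torch.Size([10, 16])
```

The last line of the loop is the “voting”: couplings strengthen toward whichever parent a child’s prediction agrees with, and weaken elsewhere.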

At least that’s the theory. CapsNets are already beating CNNs at recognizing hand-written digits, but I haven’t seen coverage of applications to more complex image recognition (so, not exactly stressing the 3D strength yet). And CapsNets are currently quite slow. But they do run on the same hardware, in the same frameworks in which CNNs are trained (see some of the links below), so no need to worry that your investment in special hardware or in learning TensorFlow will be obsolete any time soon. But you might want to start brushing up on this domain for when they do start moving into production.

Here is a nice summary of the evolution of CNNs and what capsule networks bring to the party. This is taken from this YouTube video. There is also another, not-quite-complete explanation.


Moving from FPGAs to Embedded FPGA Fabric – How it’s Done
by Tom Simon on 01-15-2018 at 12:00 pm

Buying IP is just a little bit more complicated than buying a pair of shoes. A lot of IP is configurable and requires attention to various design and configuration parameters. We live in an age where commercial soft IP is used pretty often in designs, so people have developed increasing comfort in the process that is required to achieve integration. Hard IP definitely takes it up a level – there are more process specific details that require attention. Nevertheless, it seems that commercial hard IP has become viable and is being used frequently as well. So, the industry is making both hard and soft commercial IP work. But there is a new twist in the IP market, embeddable field programmable gate arrays, or eFPGA as Achronix likes to call their offering.

There are huge and easily grasped advantages to embedding FPGA fabrics inside SoCs. Off-chip communication is costly from a power, BOM and throughput perspective, so bringing a system’s FPGA onto the SoC is a big win. Though it requires some extra thought, the business and technical model that Achronix uses to onboard customers is well thought out and highly effective. Achronix has put together a white paper explaining the process of evaluating and implementing their embeddable FPGA fabric for use in SoCs.

In many ways, embedding an FPGA fabric is a lot like embedding a processor, so the evaluation has to look at the target RTL for the FPGA and the resources it will optimally utilize. The elegant part of this is, of course, that the FPGA core can be precisely configured to meet the power, performance and area requirements of the final system. The Achronix white paper goes through this step by step. The first step is a technical discussion with the customer regarding requirements. This is usually done after an NDA so the appropriate level of technical detail can be covered.

The customer can also download the ACE design tools, which are optimized for the Achronix eFPGA target. ACE includes an Achronix-optimized version of Synopsys Synplify Pro that fully supports Achronix Speedcore. The ACE toolkit can provide area, power, timing and resource utilization information, and it also supports debug and static timing analysis for both functional and timing-annotated simulation.

Achronix supplies two preconfigured Speedcore eFPGA instances as targets to help customers understand utilization and optimization. Customers can synthesize their RTL with the ACE toolkit and then evaluate the results to determine the optimal configuration for their customized instance. Of course, there will probably be some changes required to adapt from an existing discrete FPGA architecture to the Speedcore eFPGA. Achronix offers LRAM in addition to BRAM; this LRAM comes in a 4,096-bit configuration organized as 128 x 32 and is suitable for buffering tasks. Another difference is that Speedcore uses a 4-input LUT rather than the 6-input LUT common in other architectures; Achronix has found that empirical data shows this is more efficient for the majority of programmable logic applications.
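
As a small example of the resource arithmetic involved, the sketch below sizes a buffer onto those 128 x 32 LRAM blocks. The naive packing rule is my simplification; the real mapping is done by the ACE tools:

```python
# Toy resource-sizing arithmetic for the 128 x 32 (4,096-bit) LRAM blocks
# described in the text. The naive width/depth stacking below ignores real
# mapping constraints (which ACE handles) -- it only shows the bookkeeping.
import math

LRAM_DEPTH, LRAM_WIDTH = 128, 32      # one LRAM block: 128 words x 32 bits

def lram_blocks(depth: int, width: int) -> int:
    """Blocks needed for a depth x width buffer with simple stacking."""
    return math.ceil(depth / LRAM_DEPTH) * math.ceil(width / LRAM_WIDTH)

print(lram_blocks(512, 48))   # a 512 x 48 FIFO buffer -> 4 * 2 = 8 blocks
```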

There is more information in their white paper about how Achronix works with customers to evaluate the use of embeddable Speedcore eFPGA in their designs. Because Achronix has enjoyed increasing success with their discrete Speedster 22i FPGAs, the evaluation and development steps are well understood, and they have ample experience to make the entire process go smoothly. Achronix also seems to place proper significance on technical dialog with their customers to ensure silicon and design success. The full white paper is available on their website. It’s good to see that, despite the additional complexity, SoC designers who want to take advantage of embedded FPGA fabric can fully understand the considerations and benefits before committing.


Broadcom Versus Qualcomm Update
by Daniel Nenni on 01-15-2018 at 7:00 am

The Broadcom-acquiring-Qualcomm drama is still dominating the fabless semiconductor back channel. This week I will be at the SEMI ISS Conference with Scott Jones and several hundred high-level semiconductor professionals, so it will be interesting to hear the hallway chatter. When the bid was first announced I was in the minority in thinking it would happen and be for the greater good of the semiconductor industry. Now I would say popular opinion is in my favor, based on the SemiWiki poll in which more than 10,000 people voted 58-42% in support, and on the dozens of people I have spoken with privately inside and outside (Wall Street) the ecosystem.

For me this story started at the TSMC 30th Anniversary celebration in Taipei last October. The keynotes were by Nvidia CEO Jensen Huang, Qualcomm CEO Steve Mollenkopf, ADI CEO Vincent Roche, ARM CEO Simon Segars, Broadcom CEO Hock Tan, ASML CEO Peter Wennink, and Apple COO Jeff Williams, followed by a panel discussion led by Chairman Morris Chang (you can see the full video HERE). My takeaway from the event was that Apple, TSMC, and Broadcom are very close partners while Steve Mollenkopf and Qualcomm are on the outside looking in.

Next, I see a picture of Hock Tan in the Oval Office with Donald Trump saying, “We are making America home again” after moving the Broadcom HQ back to the United States. Shortly thereafter, Broadcom announced a $70-per-share acquisition bid for Qualcomm. Qualcomm then issued the standard negative response, déjà vu of the Avago bid for Broadcom, which ended in a record $37B acquisition in 2015. Most financial people I spoke with do not consider $70 (a 33% premium) a fair bid, but we all know it can and will go higher.

Then Hock went after the QCOM board by nominating new board members to be voted on in March. QCOM rejected them, of course, but it really is up to the investors, and Hock is speaking to them directly. My guess is that Hock will up the bid by at least $10 per share before the board meeting.

After being part of the fabless semiconductor industry for 30+ years, I am seeing a trend in full support of the Hock Tan acquisition strategy: non-traditional chip companies are beginning to dominate some very large market segments. Apple started it all, as documented in chapter 8 of our book “Mobile Unleashed”. Now the top three smartphone companies (Samsung, Apple, and Huawei) are packing their phones with custom silicon, and more are sure to follow.

Tesla is another example of the fabless disruption. The Tesla domain came to SemiWiki in 2016 and now we have an onslaught of automotive content attracting the top car maker domains around the world. The SemiWiki IoT and Artificial Intelligence traffic is also dominated by non-traditional chip makers so the trend continues.

So where does this leave old school fabless semiconductor companies who now compete with their former customers? Can they really compete with rich systems companies on a comparatively low margin fabless chip budget?

Sometimes I post things on SemiWiki just to see the analytics. The Broadcom poll for example. I recently posted notices in the SemiWiki jobs forum for both Broadcom and Qualcomm to gauge the level of interest. Thus far it is running at 2:1 in favor of Broadcom. It is early but it is still an interesting data point to consider.

The one thing I have learned about Hock Tan over the years is that he is a very smart and determined man, and I will never bet against him. Hock definitely runs a tight ship, but look at the investor value he has created over the years with AVGO and compare that to QCOM. I would argue that Hock’s management style is just what the semiconductor doctor ordered, and that combining Qualcomm and Broadcom (while keeping the Qualcomm name) to create the third largest semiconductor company (Samsung and Intel are first and second) would in fact be for the greater good, absolutely.