Analytics and Visualization for Big Data Chip Analysis
by Tom Dillinger on 08-28-2018 at 12:00 pm

Designers require comprehensive logical, physical, and electrical models to interpret the results of full-chip power noise and electromigration analysis flows, and subsequently deduce the appropriate design updates to address any analysis issues. These models include: LEF, DEF, Liberty library models (including detailed CCS-based behavior), SPEF/DSPF, VCD/FSDB, etc. – the size of the complete chip dataset for analytics in current process nodes could easily exceed 3TB. Thus, the power distribution network (PDN) analysis for I*R voltage drop and current densities needs to be partitioned across computation cores.

Further, vector-based simulation analysis of multiple operating scenarios to evaluate dynamic voltage drop (DvD) necessitates high throughput. As a result, elastically scalable computation across a large number of (multi-core) servers is required. Two years ago, ANSYS addressed the demand for computational resources associated with very large multiphysics simulation problems with the announcement of their SeaScape architecture (link).

Some Background on Big Data

Big data is used to:

  • Drive search engines, such as Google Search.
  • Drive recommendation engines, such as Amazon and Netflix (“you might like this movie”).
  • Drive real-time analytics, like Twitter’s “what’s trending”.
  • Significantly reduce storage costs — e.g. MapR’s NFS compliant “Big Data” storage system.

Big data systems rest on a key new concept: keep all available data, because you never know what questions you will ask later. All big data systems share these common traits (a small sketch of shard placement follows the list):

  • Data is broken into many small pieces called “shards”.
  • Shards are stored and distributed across many smaller cheap disks.
  • These cheap disks exist on cheap Linux machines. (Cheap == low memory, consumer-grade disks and CPUs.)
  • Shards can be stored redundantly across multiple disks, to build resiliency. (Cheap disks and cheap computers have higher failure rates.)
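
To make the sharding and replication idea concrete, here is a minimal Python sketch (a generic illustration only, not the placement policy of any particular big data system): each shard is assigned to more than one machine, so the loss of a single cheap disk or node loses no data.

    # Assign shards to nodes with a replication factor, so every shard
    # lives on multiple cheap machines.
    def place_shards(num_shards, nodes, replication=2):
        placement = {}
        for shard in range(num_shards):
            start = shard % len(nodes)   # simple round-robin starting point
            placement[shard] = [nodes[(start + r) % len(nodes)] for r in range(replication)]
        return placement

    nodes = ["node-a", "node-b", "node-c", "node-d"]
    print(place_shards(num_shards=8, nodes=nodes, replication=2))
    # shard 0 -> ['node-a', 'node-b'], shard 1 -> ['node-b', 'node-c'], and so on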

Big Data software (like Hadoop) uses simple, powerful techniques so that data and compute are massively parallel.

  • MapReduce is used to take any serial algorithm and make it massively parallel. (see Footnote [1])
  • In-memory caching of data is used to make iterative algorithms fast.
  • Machine learning packages run natively on these architectures. (see MLlib, http://spark.apache.org/mllib)

ANSYS SeaScape is modeled after the same big data architectures used in today’s internet operations, but purpose-built for EDA. It allows large amounts of data to be efficiently processed across thousands of cores and machines, delivering the ability to scale linearly in capacity and performance.

Simulation of advanced sub-16nm SoCs generates vast amounts of data. Engineers need to be able to ask interesting questions and perform meaningful analyses that help achieve superior yield, higher performance, and lower cost – with optimal metallization and decap resources. The primary purpose of big data analytics is not simply to have access to huge databases of different kinds of data. It is to enable decisions based on that data relevant to the task at hand, and to do so in a short enough time that engineers can adjust their choices while the design is evolving. The SeaScape architecture enables this analytics capability (see the figure below).

RedHawk-SC

The market-leading ANSYS RedHawk toolset for power noise and EM analysis was adapted to utilize the SeaScape architecture – the RedHawk-SC product was announced last year.

I recently had the opportunity to chat with Scott Johnson, Principal Technical Product Manager for RedHawk-SC in the Semiconductor Business Unit of ANSYS, about the unique ways that customers are leveraging the capacity and throughput of RedHawk-SC, as well as recent features that provide powerful analytic methods for interpreting RedHawk-SC results and gaining insight into subsequent design optimizations.

“How are customers applying the capabilities of RedHawk-SC and SeaScape?” I asked.

Scott replied, “Here are two examples. One customer took a unique approach toward PDN optimization. Traditionally, P/G grids are designed conservatively (prior to physical design) to ensure sufficient DvD margins, with pessimistic assumptions about cell instance placement and switching activity. This customer adopted an aggressive ‘thin’ grid design, expecting violations to be reported after EM and DvD analysis – leveraging the available throughput, the customer incorporated iterations of RedHawk-SC into their P&R flow. A set of four ECO operations was defined to address different classes of analysis issues. The ECOs were applied in the P&R platform, and RedHawk-SC analysis was re-run. Blocks converged within two or three ECO + RedHawk-SC iterations – on average, the customer calculated they saved 7% in block area as a result. And this was all automatic, scripted into the P&R flow, with no manual intervention required.”
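
As a sketch of what such an automated convergence loop could look like (this is not the customer’s actual flow: run_analysis(), classify_violations() and apply_eco() are hypothetical placeholders for whatever hooks the P&R and analysis environment provides, and the ECO class names are invented):

    # Hypothetical outer loop for the scripted ECO + re-analysis flow described above.
    def converge_block(block, run_analysis, classify_violations, apply_eco,
                       eco_classes=("eco_1", "eco_2", "eco_3", "eco_4"), max_passes=4):
        for iteration in range(1, max_passes + 1):
            violations = run_analysis(block)               # EM / DvD results for this pass
            if not violations:
                return iteration                           # block is clean; report pass count
            for eco, items in classify_violations(violations, eco_classes).items():
                if items:
                    apply_eco(block, eco, items)           # scripted fix in the P&R platform
        raise RuntimeError(f"{block}: not converged after {max_passes} ECO passes")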

“The second customer took a different, highly innovative approach toward analytics of RedHawk-SC results,” Scott continued. “The SeaScape architecture inherently supports (and ships with) a machine learning toolset. The customer had been utilizing senior designers to review EM results and make a binary fix-or-waive decision on high EM fail rate segments. The customer implemented an EMWaiver ML application on the SeaScape platform – after training, EMWaiver is presented with the EM results, and its inference engine automatically evaluates the fix-waive decision.”


Illustration of the EM Assistant application, using the ML features of SeaScape

Scott highlighted that, as part of the training process, the precision and accuracy of the ML-based flow were assessed. Precision relates to an inferred “fix” designation that could have been waived, requiring additional physical design engineering resources – a precision factor of ~90% was reported (implying ~10% extra fixes). Accuracy relates to the risk of an inferred “waive” that actually requires a fix – the customer was achieving 100% accuracy, as required (i.e., no “escapes”).
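
To make the two figures of merit concrete, here is a small Python sketch (the toy labels are invented for illustration) that computes them from a labeled validation set: the precision of the “fix” class, and the number of escapes, i.e., predicted waives that actually required a fix.

    # truth and predicted hold "fix" / "waive" per EM segment.
    def waiver_metrics(truth, predicted):
        pairs = list(zip(truth, predicted))
        pred_fix = [t for t, p in pairs if p == "fix"]
        pred_waive = [t for t, p in pairs if p == "waive"]
        # Precision of "fix": how many predicted fixes really needed fixing.
        fix_precision = pred_fix.count("fix") / len(pred_fix) if pred_fix else 1.0
        # Escapes: predicted "waive" segments that actually required a fix (must be zero).
        escapes = pred_waive.count("fix")
        return fix_precision, escapes

    truth     = ["fix", "waive", "fix", "waive", "waive", "fix"]
    predicted = ["fix", "fix",   "fix", "waive", "waive", "fix"]
    precision, escapes = waiver_metrics(truth, predicted)
    print(f"fix precision = {precision:.0%}, escapes = {escapes}")   # 75%, 0 for this toy data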

“Sounds pretty advanced,” I interjected. “How would new customers leverage the model information and analytics detail available after executing RedHawk-SC?”

Scott replied, “We have added a MapReduce Wizard interface in RedHawk-SC. Users progress through a series of menus to select the specific design attributes of interest – e.g., cell type, cell instances, die area region – followed by the electrical characteristics of interest – e.g., cell loading, cell toggle rate, perhaps specific to cells in the clock trees.”

The figures below illustrate the steps through the RedHawk-SC MapReduce Wizard, starting with the selection of the design model view, and then the specific electrical analysis results of interest.

RedHawk-SC MapReduce Wizard menus for analytics – model data selection


MapReduce Wizard electrical analytics selection — e.g., power, current, voltage

“The MapReduce functional code is automatically generated and applied to the distributed database,” Scott continued. “An additional visualization feature in RedHawk-SC creates a heatmap of the analytics data, which is then communicated back to the user’s client desktop. Design optimizations can then quickly be identified – and if a different analytics view is required, no problem. A new view can be derived on a full-chip model within a couple of minutes. Multiple visual heatmaps can be readily compared, offering efficient multi-variable analysis of general complexity.”
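
As an illustration of the underlying map/reduce pattern (plain Python, not the code RedHawk-SC generates): map each cell instance to a die-region bin, then reduce by summing a chosen metric per bin, which is exactly the kind of grid a heatmap visualizes.

    from collections import defaultdict

    def heatmap(instances, die_w, die_h, nx=4, ny=4, metric="switching_power"):
        grid = defaultdict(float)
        for inst in instances:                                 # "map": instance -> (bin, value)
            bx = min(int(inst["x"] / die_w * nx), nx - 1)
            by = min(int(inst["y"] / die_h * ny), ny - 1)
            grid[(bx, by)] += inst[metric]                     # "reduce": sum per die region
        return dict(grid)

    cells = [{"x": 10.0, "y": 5.0,  "switching_power": 0.12},
             {"x": 90.0, "y": 80.0, "switching_power": 0.45}]
    print(heatmap(cells, die_w=100.0, die_h=100.0))            # {(0, 0): 0.12, (3, 3): 0.45}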

An example of a heatmap graphic derived from the analytics data is depicted below – the design view in this example selected clock tree cells.

Scott added, “In addition to these user-driven analytics, there is a library provided with a wealth of existing scenarios. And, the generated MapReduce code is available in clear text, for users to review and adapt.”

I mentioned to Scott, “At the recent DAC conference there was a lot of buzz about executing EDA tools in the cloud. The underlying SeaScape architecture enables a very high number of workers to be applied to the task. Is RedHawk-SC cloud-ready?”

Scott replied, “SeaScape was developed with explicit support for running in the cloud. It’s more than cloud-ready – we have customers in production use on a (well-known) public cloud.”

“The level of physical and electrical model detail required for analysis of 5nm and 3nm process node designs will result in chip databases exceeding 10TB, perhaps approaching 20TB. Design and IT teams will need to quickly adapt to distributed compute resources and elastic job management like SeaScape, whether part of an internal or public cloud,” Scott forecasted.

The computational resources needed to analyze full-chip designs have led to the evolution of multi-threaded and distributed algorithms and data models. Efficiently identifying the design fixes that address analysis issues has traditionally been difficult – working through reports of analysis flow results is simply no longer feasible. Analytics applied to a chip model consisting of a full set of logical, physical, and electrical data is needed. (Consider the relatively simple case of a library cell design that correlates highly with DvD or EM issues – quickly identifying that cell as a candidate for a “don’t_use” designation requires big data analytics.)

The combination of the SeaScape architecture with the features of ANSYS analysis and simulation products addresses this need. For more information on RedHawk-SC, please follow this link.

-chipguy

PS. Thanks to Annapoorna Krishnaswamy of ANSYS for the background material on Big Data applications and the ANSYS SeaScape architecture.

Footnote

[1] Briefly, MapReduce refers to the functional programming model that has been widely deployed as part of the processing of queries on very large databases – both Google and the Hadoop developers have published extensively on the MapReduce programming model. The “Map” step involves passing a user function to operate on (a list of) data shards, the same function executing on all workers. The result of executing the Map step is another list, where each entry is assigned a specific “key”, with an associated data structure filled with values. A simple example would be a text file parser function, where the keys are the “tokens” generated by the parser, and the data value is a count of the number of instances of that token (calling a “compress” or “combine” function after individual token parsing).

The next step is to “shuffle” the (key, value) records from the Map step, to assign the records to the specific “Reduce” node allocated to work on the key. All the records from each Map node for the key are sent to the designated Reduce node. As with the Map(function) step, a Reduce(function) step is then executed – Reduce is often referred to as the “aggregation” step. Again using a text parser as the example, the Reduce function could be a sum of the count values received from all the Map nodes for the key, providing the total instance count for each individual token throughout the input text file.
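
As a concrete illustration of the model, here is the token-count example written out in a few lines of Python as explicit map, shuffle, and reduce steps (a single-process sketch; real frameworks distribute the map and reduce work across workers).

    from collections import defaultdict

    def map_step(text_shard):
        return [(token, 1) for token in text_shard.split()]     # emit (key, value) records

    def shuffle(mapped_records):
        grouped = defaultdict(list)
        for key, value in mapped_records:
            grouped[key].append(value)                          # route all records for a key to one reducer
        return grouped

    def reduce_step(key, values):
        return key, sum(values)                                 # aggregate the counts per token

    shards = ["the cell the via", "the strap"]
    mapped = [record for shard in shards for record in map_step(shard)]
    print(dict(reduce_step(k, v) for k, v in shuffle(mapped).items()))
    # {'the': 3, 'cell': 1, 'via': 1, 'strap': 1}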


Overview of MapReduce


Illustration of the MapReduce architecture used by Hadoop

The data analytics features in SeaScape are based on the MapReduce programming model. An excellent introduction to MapReduce is available here.


WEBINAR: A UVM Cookbook Update
by Bernard Murphy on 08-28-2018 at 7:00 am

Something I always admire about Mentor is their willingness to invest their time and money in helping the industry at large. They do this especially in verification, where they sponsor the periodic Wilson surveys on the state of verification needs and usage in the industry. More recently they introduced their UVM Cookbook, an introduction to help new users and also, I’m sure, a handy reference to the more arcane corners of the standard for experienced UVM practitioners. Of course a challenge in any how-to guide to an evolving standard is that it inevitably drifts out of date. So Mentor recently released an update, aligned with the IEEE 1800.2 release, which should encourage everyone from freshers to seniors to turn to this guide to learn, to use as a reference, and to dip into as a great source of examples.


Check out the Mentor Webinar on September 11th at 8am Pacific.

To help me out, Tom Fitzpatrick (Strategic Verification Architect at Mentor) and I first talked about where UVM adoption is these days. While older methodologies (“e”, VMM, etc.) still claim their adherents, based on the latest Wilson survey UVM is fast becoming the methodology of choice, especially in ASIC design but also starting to see traction in FPGA design. No doubt this is because of the huge learning and legacy investment in testbenches. You don’t want to have to rework that – for any reason. UVM is widely supported and current, so it is becoming the standard of choice for verification methodology going forward.

That said, when an old-timer like me looks at the Cookbook (on-line, no-one uses paper books anymore) it can seem pretty overwhelming. Where do you start, and do you have to understand the whole thing before you can become effective? All that complexity is needed to handle the significantly higher complexity of verifying the systems we build today, but Tom told me not to panic. Most verification engineers don’t need to digest the whole thing. A few UVM experts will likely build the majority of the infrastructure most teams will need, leaving lesser mortals like me to assemble tests based on a much less demanding understanding of the standard. Following the cookbook metaphor, you can be a superchef and cook a 7-course gourmet meal from scratch if you choose, or you can go the BlueApron route, have most of the work already done for you (by your internal experts) and be able to throw together a great one-pot meal with minimal effort. It is encouraging to hear that there’s a reasonably gentle learning curve for us beginners, starting, naturally, with a newly added Basics section in the Cookbook.

What’s new in this version? Tom tells me that, of course, it is fully updated to 1800.2, and in particular all the examples have been updated. They also archived all the OVM material that had been included in the earlier version – useful back then but largely unnecessary for today’s UVM users.

The main point he stressed for this update is a single recommended architecture to support emulation-friendly UVM testbenches. As emulation plays an increasingly important role in system verification, it becomes essential to ensure that testbenches can migrate with minimal change to emulator-based flows. To accomplish this, the Cookbook recommends split transactors communicating with the DUT through transaction methods rather than pin-level driver/monitor components. If you are familiar with this approach, you’ll know one side of the transactor sits with the DUT (on the emulator) and the other side sits in the testbench. Transaction-level rather than signal-level data exchange allows the emulator to run at faster speeds than would be possible if it were required to sync with the testbench on signal-level changes.

The Cookbook-recommended methodology is closely aligned with the (open) UVM Framework and Questa VIP, so you can spin up quickly with a methodology that ensures your testbench development will be emulator-friendly. Tom mentioned a number of other improvements to the Cookbook, including (among others) updates to the Register Abstraction Layer reflecting changes in 1800.2, an updated UVM Connect chapter for those of you wanting to drive verification from higher-level TLM abstractions, and a chapter consolidating information on messaging in UVM.

I won’t and can’t steal their thunder because there’s way too much to cover in a blog. Tom recommends that those of you who are interested register for their Webinar on September 11th at 8am Pacific, where I know you’ll get a much more expert summary of this update.


GLOBALFOUNDRIES Pivoting away from Bleeding Edge Technologies
by Daniel Nenni on 08-27-2018 at 3:00 pm

It’s no secret that I have been a big fan of GLOBALFOUNDRIES since they came about in March of 2009. We even included them in our first book “Fabless: The Transformation of the Semiconductor Industry” right next to TSMC. I am also a big fan of pivoting which is the term we use here in Silicon Valley to describe some of the most innovative technology turnarounds. Apple going mobile is one of my favorite pivots, Nvidia is also a pivot master, Intel not so much. As a foundry connoisseur I knew most of the challenges GF would face but the challenges were much greater than even I expected, absolutely.

TSMC has always been a fierce competitor but the Apple partnership makes them unbeatable in my opinion. The new “node per year” cadence that Apple requires to launch new products every September has turned out to be a devastating pivot for the other foundries. Not only does it keep Apple ahead with bleeding edge silicon, it helps TSMC introduce new process technology ecosystems at a dizzying pace.

So yes, GF will skip 7nm and below and join the ranks of UMC, SMIC, TowerJazz etc… as a boutique foundry which can be quite profitable. In fact, with CMOS, FinFET, and the FD-SOI offerings, I would crown GF the King of the Foundry Boutiques!

So where does that leave us on the bleeding edge? TSMC and Samsung. My prediction is that the new Intel CEO will do some refocusing and Intel Custom Foundry will not be part of it.

Upside
Remember that GF 7nm capacity is 15k WPM versus the massive 7nm capacity of TSMC and Samsung so there will be no shortages based on this announcement. Also, AMD and TSMC agreed on this months ago to secure 7nm capacity so this is pure upside for TSMC and possibly Samsung. Will AMD and others use Samsung for a second source? My guess is yes because that is how the foundry business has always worked. AMD already second sources Samsung 14nm. At 7nm AMD will stay with TSMC but at the lower nodes they will straddle TSMC and Samsung, my opinion.

Downside
The semiconductor equipment companies may take a small hit even though GF can re-purpose the 7nm line for 14/12nm, but that still leaves ASML EUV machines orphaned. It is also not clear what the IBM Power PC people will do for leading-edge silicon moving forward.

There is much more to discuss here so let’s do it in the comments section. And don’t forget, GLOBALFOUNDRIES, Samsung and TSMC all have events coming up in September and October. This announcement will make them that much more interesting.

Here is AMD’s response:

Expanding our High-Performance Leadership with Focused 7nm Development

AMD’s next major milestone is the introduction of our upcoming 7nm product portfolio, including the initial products with our second generation “Zen2” CPU core and our new “Navi” GPU architecture. We have already taped out multiple 7nm products at TSMC, including our first 7nm GPU planned to launch later this year and our first 7nm server CPU that we plan to launch in 2019. Our work with TSMC on their 7nm node has gone very well and we have seen excellent results from early silicon. To streamline our development and align our investments closely with each of our foundry partner’s investments, today we are announcing we intend to focus the breadth of our 7nm product portfolio on TSMC’s industry-leading 7nm process. We also continue to have a broad partnership with GLOBALFOUNDRIES spanning multiple process nodes and technologies. We will leverage the additional investments GLOBALFOUNDRIES is making in their robust 14nm and 12nm technologies at their New York fab to support the ongoing ramp of our AMD Ryzen, AMD Radeon, and AMD EPYC processors. We do not expect any changes to our product roadmaps as a result of these changes…

And the official PR from GF:

GLOBALFOUNDRIES Reshapes Technology Portfolio to Intensify Focus on Growing Demand for Differentiated Offerings

Semiconductor manufacturer realigns leading-edge roadmap to meet client need and establishes wholly-owned subsidiary to design custom ASICs

Santa Clara, Calif., August 27, 2018 – GLOBALFOUNDRIES today announced an important step in its transformation, continuing the trajectory launched with the appointment of Tom Caulfield as CEO earlier this year. In line with the strategic direction Caulfield has articulated, GF is reshaping its technology portfolio to intensify its focus on delivering truly differentiated offerings for clients in high-growth markets.

GF is realigning its leading-edge FinFET roadmap to serve the next wave of clients that will adopt the technology in the coming years. The company will shift development resources to make its 14/12nm FinFET platform more relevant to these clients, delivering a range of innovative IP and features including RF, embedded memory, low power and more. To support this transition, GF is putting its 7nm FinFET program on hold indefinitely and restructuring its research and development teams to support its enhanced portfolio initiatives. This will require a workforce reduction, however a significant number of top technologists will be redeployed on 14/12nm FinFET derivatives and other differentiated offerings.

“Demand for semiconductors has never been higher, and clients are asking us to play an ever-increasing role in enabling tomorrow’s technology innovations,” Caulfield said. “The vast majority of today’s fabless customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node. Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law. We are shifting our resources and focus by doubling down on our investments in differentiated technologies across our entire portfolio that are most relevant to our clients in growing market segments.”

In addition, to better leverage GF’s strong heritage and significant investments in ASIC design and IP, the company is establishing its ASIC business as a wholly-owned subsidiary, independent from the foundry business. A relevant ASIC business requires continued access to leading-edge technology. This independent ASIC entity will provide clients with access to alternative foundry options at 7nm and beyond, while allowing the ASIC business to engage with a broader set of clients, especially the growing number of systems companies that need ASIC capabilities and more manufacturing scale than GF can provide alone.

GF is intensifying investment in areas where it has clear differentiation and adds true value for clients, with an emphasis on delivering feature-rich offerings across its portfolio. This includes continued focus on its FDX™ platform, leading RF offerings (including RF SOI and high-performance SiGe), analog/mixed signal, and other technologies designed for a growing number of applications that require low power, real-time connectivity, and on-board intelligence. GF is uniquely positioned to serve this burgeoning market for “connected intelligence,” with strong demand in new areas such as autonomous driving, IoT and the global transition to 5G.

“Lifting the burden of investing at the leading edge will allow GF to make more targeted investments in technologies that really matter to the majority of chip designers in fast-growing markets such as RF, IoT, 5G, industrial and automotive,” said Samuel Wang, research vice president at Gartner. “While the leading edge gets most of the headlines, fewer customers can afford the transition to 7nm and finer geometries. 14nm and above technologies will continue to be the important demand driver for the foundry business for many years to come. There is significant room for innovation on these nodes to fuel the next wave of technology.”

About GF
GLOBALFOUNDRIES is a leading full-service semiconductor foundry providing a unique combination of design, development, and fabrication services to some of the world’s most inspired technology companies. With a global manufacturing footprint spanning three continents, GLOBALFOUNDRIES makes possible the technologies and systems that transform industries and give clients the power to shape their markets. GLOBALFOUNDRIES is owned by Mubadala Investment Company. For more information, visit http://www.globalfoundries.com.

Contact:
Jason Gorss
GLOBALFOUNDRIES
(518) 698-7765
jason.gorss@globalfoundries.com


GloFo dropping out of 7NM race?
by Robert Maire on 08-27-2018 at 12:00 pm

Could this be more bad news for semicap spend? Negative for US chip independence and AMD costs? Rumors of Global Foundries dropping out of the 7NM race have been increasing rapidly. What could be a fatal blow to the GloFo 7NM program was AMD deciding to go with TSMC for 7NM, first for one product and finally for its next-generation CPUs. This started back in April and led to the CPU announcement at the end of June. AMD had been working with TSMC for quite a while, as the AMD supply contract with GloFo was coming to an end anyway and GloFo was unable to keep up with AMD’s needs.

Adding to the speculation was a significant round of layoffs at GloFo along with rumors of more to come. GloFo has been under pressure from its owner, investment fund Mubadala of Abu Dhabi, to turn a profit after pouring billions of dollars into the operation and buying IBM’s chip operations.

While TSMC has raced ahead and has good yield with 7NM, Global Foundries has struggled to yield. This is nothing to be ashamed of, as even the great Intel can’t get its 10NM (which is the rough equivalent of TSMC or GloFo’s 7NM process) to yield.

We applauded the decision of GloFo to skip 10NM and go straight to 7NM, as it was the only way to have a chance of catching TSMC and Samsung. We think it has been a great effort, but it is very difficult for a fab with less experience to reach the bleeding edge alongside giants like TSMC and Samsung. We also think that the effort has been somewhat hamstrung by reduced financial support. In the end, a great effort, but the market doesn’t reward effort, it rewards results.

GloFo’s exit from the race will obviously negatively impact their capex spend levels. This is more negative news heaped on top of the negative reports from Lam and, more recently, Applied. Memory spending at Samsung is already down and AMAT already spoke of slowing foundry spend from multiple foundries. We thought it was TSMC and Samsung….. maybe it’s all three: TSMC, Samsung and now GloFo.

There will be some negative impact at ASML as GloFo was an EUV customer and they will not need EUV tools and associated yield management if they are no longer in the race.

The cost of going to EUV is probably part of the problem of going to 7NM. Justifying the costs is difficult as TSMC will garner the lion’s share of leading-edge revenue and profits. TSMC’s dominance creates a barrier to entry for Samsung, Intel and GloFo.

With GloFo out of the race, AMD will no longer have a choice for 7NM as GloFo is no longer a viable alternative. This removes GloFo as even a stalking horse to keep TSMC’s pricing honest. Now TSMC can charge AMD whatever it wants as it’s the only real game in town.

Though we don’t think this is an immediate negative for AMD, it is a handicap to their margins over the longer term, as Intel has much more latitude on pricing given their vertical, insourced structure (Jerry Sanders’ “real men have fabs” coming back to reality…).

Negative for US defense and security
The US defense department and other defense related areas rely on GloFo/IBM chip making which will no longer be leading edge. We had hoped for GloFo success as it was the only pure play foundry in the US. Now if US defense agencies want the best chips they will have to go to Taiwan until China takes it back…

Is Intel not far behind…

We have not heard much out of Intel’s foundry operations, and GloFo’s lack of success against TSMC could foreshadow the way Intel’s foundry business will go. Not that it’s a great loss, as it hasn’t been a lot of revenue anyway. It is just tough to compete against TSMC. Samsung has found that out, as their foundry business is a fraction of their memory business. TSMC is a steamroller……

Negative for semicap names- The chip flu spreads…
If GloFo reduces its leading-edge efforts it is obviously going to be spending less, maybe a lot less. Though not a big spender compared to others in the industry, the loss of their spending couldn’t have come at a worse time for a semicap industry already struggling with reduced revenue due to the sharp drop in Samsung memory spending and the foundry softness recently revealed by Applied.

This calls into question how quickly the industry will bounce back from this down cycle. How can 2019 be an up year with memory down and all the foundries slowing? This also puts even more focus on China spend levels, as it’s now the only remaining chip maker increasing capex spend. Think about that for a second. This increases the risk levels associated with the trade problems, as there is less for the industry to fall back on if China goes away due to political reasons.

The stocks…
We had previously mentioned that we thought AMD was a bit overdone, and this potential news, while not directly impactful near term, will be a limit to future margins and earnings.

AMAT and LRCX will have more negative news, and ASML may have to shift its shipment plans for a couple of EUV tools. While the broader chip stocks have been doing well, semicap names have had downward momentum and this adds to it. Investors and analysts looking for a pot of gold at the end of the rainbow in a quarter may be disappointed.

Also Read: GLOBALFOUNDRIES Pivoting away from Bleeding Edge Technologies


A Closer Look at Fusion from Synopsys at #55DAC
by Daniel Payne on 08-27-2018 at 7:00 am

Synopsys is pretty well-known for their early entry into logic synthesis with the Design Compiler tool and their more recent P&R tool, IC Compiler, so I met up with two folks at DAC to get a better idea of what this new Fusion technology is all about, now that the barriers between tools are changing. Michael Jackson and Rahul Deokar of Synopsys arrived on time in the press area of DAC to chat about Fusion. Historically the EDA industry has used lots of separate databases and data models, engines and fragmented flows, but now we have Fusion technology:

Continue reading “A Closer Look at Fusion from Synopsys at #55DAC”


How Design Can Make Tech Products Less Addictive
by Vivek Wadhwa on 08-26-2018 at 7:00 am

It’s the summer of 2018, the summer of Fortnite, and we all know we are addicted. Addicted to email, Snapchat, Instagram, Fortnite, Facebook. We swap outdoor time on the trail for indoor time around the console. Our kids log into Snapchat every day on vacation to keep their streaks alive and then get lost in the stream.

We move less and watch more. In particular, the rise of the smartphone tipped the balance. It is now our omnipresent companion, to the point that in research studies, subjects prefer electric shocks to being left, deviceless, to their own devices. Needless notifications flood us on date nights, at family time, and at sports events, invariably when we are supposed to be in the moment. And then there’s Netflix, guiding us into insomnia and sleep deprivation as we blissfully binge watch, an act of willful ignorance of the fact that even small diminutions of shut-eye can cause bumps in depression and significant declines in cognitive functioning. An increasing pile of evidence points to our obsessive use of tech products as diminishing the most important parts of our lives — our relationship with family and friends, our work lives, and our physical and mental health.

For the technology companies, of course, dependency has been at the core of product design. These companies knowingly used many techniques from cognitive science to drive and hold our attention. That’s not entirely negative. The point of any product design is to make it easy to use. But with soft drinks, cigarettes, and gambling, for example, there is some acknowledgement of the negative impacts. Perhaps more responsibility should live with the inventors of these devices and apps who need to make design changes to help people live more healthfully with their tech.

How can we redesign technology to better respect choice, reduce techno stress, and foster creative and social fulfillment? The ideal solution would be easy to implement and customize and easy to apply to multiple devices and platforms. It would have a centralized user account that allows you to customize all your interactions and notifications, to which all applications would refer for guidance and permission. It would be, in other words, a true user agent, an intermediary that brokers our attention and implements our rules in eliciting it. The concept of such a user agent has been discussed repeatedly in industry but has never been instituted. Given our growing collective discontent, our epidemic loneliness, and our declining productivity, the time may have come when such a solution is no longer simply ideal, but essential.

Ultimately, such an agent will have to be habit-forming technology. It will have to take all the techniques that Silicon Valley’s “user-experience designers,” say, at Facebook and Netflix, have used in forming destructive habits and invert them. We need good magic. We need technology to enhance chronic focus rather than bombard us with chronic distraction; to encourage beneficial habits rather than motivate us to pursue pathological addictions; to promote productivity, connectedness, creativity, spontaneity, and engagement rather than cheap facsimiles of those qualities. The well-lived life, which has never been further from our reach, is one that good technology design could and should make more straightforwardly and universally attainable than ever before.

Applications like Moment, Siempo, Unglue, Calendly, and SaneBox are aiming to deliver that kind of beneficial magic and focus enhancement. They seek to reduce the frictions that we as users must endure in attaining focus by batching notifications, setting limits on phone usage, and other modes of helping control our relationships with our devices. Most of the mechanisms that inhibit or destroy our focus create stress, unhappiness, regret, or sadness once they become too interruptive.

In sympathy with Tristan Harris’ user-rights manifesto, we have a vision of a technology world that works for humans rather than against them and that has each and every company consider the long-term health and benefit of its users to be an imperative design consideration. Even if it meant less profit in the short term, they would restrain themselves from inducing patterns of destructive overconsumption. We propose that this would work as follows.

First, technology makers would define patterns that suggest problem use — preferably without identifying problem users as individuals. Such patterns would include spending an inordinate amount of time with the product, spending too much money, or regularly exhibiting unhealthy behaviors such as binge watching. Triggered by such patterns in its use, the technology product would treat the user differently, offering help in altering these patterns. This may seem like a patronizing approach, but we would wager that, given the option, many people would welcome the help.

In a work context, we might see Slack warning heavy users not only that they must keep desktop notifications enabled, but also that they are in the upper percentage of GIF senders or message senders. Email providers might offer batch receiving of messages to those of their users who otherwise tend to respond the most quickly (which could indicate compulsion to check for and respond to messages). Or every email client could offer batch receiving as its default mode, or simply ask us every day how many times we want to check email that day (in a Siri-like voice, of course).

For consumers of video, product designers for Netflix and YouTube, for example, would make auto-play an opt-in function. In fact, opt in would become the default rather than requiring opt out as the standard product design. And when product designers did choose to deploy opt out, they would allow people to opt out very easily whenever the feature was showing. For example, every video auto-play would also display a “Stop Auto-Play” button as a preference. That might slow consumption, but then again it might help all of us feel more in control, be more productive, and be more loyal customers.

But how can initiatives such as these be given teeth and profit motive? We are hopeful that, in some cases, the profit motive will take care of itself. Both Netflix and LinkedIn have cracked that nut, as have Spotify and numerous other subscription-based technology businesses. In such a case, inducing massive consumption beyond a certain point becomes counterproductive of customer satisfaction; we suspect that these businesses know exactly where that threshold lies.

And, yes, those platforms are now just as guilty of the same attention-grabbing offenses as the free platforms. But they have the benefit of paid users and a willingness to put a value on attention, participation, and services rendered.

The challenge is to price attention, participation, and customer satisfaction and loyalty in the attention economy.

So how might this work? Imagine that Facebook charges looked like our regular mobile-phone bills with a set of à la carte services. We could opt in or out of those services — for example, no ads in our feed, and a “Focus” button on our homepage that blocks all notifications — and pay for them as features.

We realize that charging users is exceptionally difficult and is probably not going to happen with Facebook or Google; it will probably be the next entrant that cracks this model. But we can point to one example in corporate America where businesses are showing exceptional ability to put a price on such fuzzy costs: benefit corporations (B Corps), whose ranks are growing quickly. Some extremely profitable and successful brands, such as Patagonia, Athleta, and Allbirds, have become B Corps. And technology companies don’t run factories full of low-paid workers; the tech elites would find it at least as simple to enact a similar ethos of ensuring that the product and service on offer does no harm and is in the best interests of society.

The B Corp validation and rating process could easily incorporate a set of values and measurements specifically designed for technology companies. For example, a tech company that could be rated as a B Corp would allow users to unsubscribe from the service in no more than three clicks and without having to send an email or make a phone call. (California just passed a state law that mandated precisely this.) The government of China mandates that game companies put in place user warnings beyond a certain number of hours; B Corp tech companies would have to warn users that their actions were perhaps unhealthy after they averaged more than, say, two hours of use per day over a week.

We know now that the major technology companies are considering how to make it easier for users to control the way they interact with those products. Both Apple and Google announced new sets of features for the iOS and Android operating systems, respectively, designed to allow users to better control their experience (and Apple is finally adding robust tools for parents to better monitor and control their children’s technology consumption). Facebook is planning to release a new feature that will help users monitor their own usage of the network.

Whether the tech giants can truly use their product-design superpowers to help users build healthy long-term relationships with technology remains to be seen. A core tenet of behavior design is reducing barriers to the desired behavior as a means of maximizing that behavior. For so long, the desired behavior has explicitly been bingeing and mindless consumption. The economics of these companies, driven by the attention economy, made it so. No, we’re not going to stop using search engines and social media or streaming movies. Nor should we. But maybe, just maybe, the companies that can more closely align with user needs — that sell products to users, like Apple and Netflix, rather than advertisers — can lead the way. There is no free lunch, ever. The same goes for seemingly lucrative lines of business built on behaviors that, frankly, the creators of these same technologies would prefer not to encourage to excess in their own families and friends.

This is an extract from my new book, Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back, coauthored with Alex Salkever.


Improving Yield and Reliability with In-Chip Monitoring, there’s an IP for that
by Daniel Payne on 08-24-2018 at 12:00 pm

There’s an old maxim that you can only improve what you measure. Quality experts have been talking about this concept for decades, and our semiconductor industry has been the recipient of such practices to such an extent that we can now buy consumer products that include chips with over 5 billion transistors in them. You’ve probably heard that semiconductor IP vendors can offer you an incredible array of choices: Standard Cells, Memory, Processors, Interconnect, Serial IO, PLL, radios, FPGA, converters, and the list goes on. Lesser known are the specialty IP vendors that have deep analog expertise, and one of them is Moortec Semiconductor. Moortec has specialists who created three classes of in-chip monitoring blocks for:

  • Process Monitoring
  • Voltage Monitoring
  • Temperature Sensing

Consider the challenge of reaching timing closure on your SoC where two regions of the same chip have different junction temperatures, VDD supply levels and even process corners:

If your design used in-chip monitoring of Temperature and Process, then you could measure these variations and make system-level decisions to mitigate their effects, per design. Let’s peek a bit deeper into the specific IP that Moortec offers to monitor Process, Voltage and Temperature:

Your chip typically will include multiple sensors at strategic locations in order to measure and control for greatest impact.

Temperature Sensors
The temperature sensor has some impressive specifications that help you know what the junction temperature is across a chip and then take steps to control it:

  • Accuracy of +/- 3C without calibration, +/- 1C when calibrated
  • Resolution of 0.06C
  • An analog test bus for characterization and debug
  • Interface with APB or an I2C
  • Self-checking
  • Different modes for faster sampling: 12-bit, 10-bit or 8-bit

Voltage Monitors
These monitors provide voltage insight for IR drop, core supply, IO supply, and AVS (Adaptive Voltage Scaling). A voltage monitor finds supply events, perturbations and transients. Specifications are:

  • +/- 1% or +/- 0.6% accuracy
  • +/- 1mV accuracy on IR drop analysis
  • Up to 9 channels for 28nm nodes
  • Up to 16 channels for FinFET nodes

Process Monitors
Local variation across an SoC means that there can be multiple process corners present, per die, so being able to measure the process corner is an essential step. The basic circuit for a process monitor is the ring oscillator, so each process monitor contains multiple delay chains to determine which process corner is dominant (a small sketch of this classification idea follows the list below). The four application areas for a process monitor include:

  • Speed binning during characterization
  • Age monitoring
  • Critical voltage and timing analysis
  • AVS
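
As a rough illustration of the classification step (the numbers below are invented, and actual monitors use several characterized delay chains), a measured ring-oscillator count over a fixed time window can simply be compared against counts characterized at each corner, picking the closest match.

    # Hypothetical counts per measurement window, characterized per corner in simulation.
    CHARACTERIZED_COUNTS = {"SS": 880, "TT": 1000, "FF": 1130}

    def classify_corner(measured_count, table=CHARACTERIZED_COUNTS):
        # Pick the corner whose characterized count is nearest to the measurement.
        return min(table, key=lambda corner: abs(table[corner] - measured_count))

    print(classify_corner(1045))   # -> 'TT' with these made-up values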

PVT Controller
The blue area shown above is the PVT Controller; your chip needs only one controller, which is then connected to multiple PVT instances. By consulting with Moortec you can best determine how many of each instance your specific chip should have, and where to place these IP cells.

Specifications for the PVT Controller are:

  • Control multiple instances of the Process, Voltage and Temperature monitors across the chip
  • Temperature & Voltage alarms
  • Analytics – max, min, sample values
  • iJTAG access support

The PVT Controller, along with the monitors, enables your engineers to implement the following (a sketch of a simple DVFS policy follows this list):

  • DVFS (Dynamic Voltage Frequency Scaling)
  • Clock speed optimization
  • Power optimization
  • Silicon characterization
  • Improved reliability and device lifetime
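
As a sketch of the DVFS use case (hypothetical: the thresholds, operating points and sensor-reading interface are invented for illustration and are not Moortec specifications), firmware can periodically read the worst-case temperature and minimum supply reported by the controller and pick an operating point accordingly.

    # (volts, MHz) operating points, invented for the example.
    OPERATING_POINTS = [(0.72, 800), (0.80, 1200), (0.90, 1600)]

    def choose_operating_point(max_temp_c, min_vdd, current_idx):
        if max_temp_c > 105 or min_vdd < 0.65:       # thermal or droop alarm: back off one step
            return max(current_idx - 1, 0)
        if max_temp_c < 85:                          # comfortable margin: step up if possible
            return min(current_idx + 1, len(OPERATING_POINTS) - 1)
        return current_idx                           # otherwise hold the current point

    # One tick of the control loop with made-up sensor readings:
    idx = choose_operating_point(max_temp_c=92.0, min_vdd=0.78, current_idx=1)
    print(OPERATING_POINTS[idx])                     # stays at (0.80, 1200) here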

Foundry Support
The Moortec Embedded In-Chip Monitoring Subsystem is available at various foundries and supports advanced-node CMOS technologies at 40nm, 28nm, 16nm, 12nm & 7nm.

Contacts
The team at Moortec has built up world-wide distribution partners, showing just how successful their in-chip monitoring IP has become:

  • UK (Moortec)
  • UK, Europe (AQT)
  • Europe (Sythra)
  • USA (Mark Davitt, Manzanita Semiconductor)
  • Israel (Amos Technologies Ltd.)
  • China (OnePass)
  • Taiwan (Kaviaz)
  • South Korea (Chipinside)
  • Japan (Spinnaker Systems)
  • Russia (Nautech)



Semiconductors Become a Worldwide Business
by Daniel Nenni on 08-24-2018 at 7:00 am

This is the twelfth in the series of “20 Questions with Wally Rhines”

Among the companies that bought a license from AT&T to produce the transistor was Sony. While the U.S. maintained its lead in technology, other countries like Japan emerged as competitors. Semiconductor manufacturing was both labor intensive and capital intensive. Fairchild became the first major semiconductor manufacturer to start operations overseas, adding an assembly site in Hong Kong in 1964 where labor costs would be lower. TI and Motorola followed, although TI began with a misstep by starting an assembly site in Curacao. TI made up for this slow start through a different path – an attempt to sell in the Japan market. After World War II, U.S. companies were not allowed to set up wholly owned subsidiaries in Japan; they had to partner with a Japanese company who would have majority ownership. Companies like IBM and Kodak that had operations in Japan before WWII were grandfathered and could continue with their 100% owned subsidiaries in Japan.

TI wasn’t interested in a joint venture. And Pat Haggerty saw the potential that Japan offered as a future manufacturing power house. So this became the first case of TI using its U.S. patent portfolio for reasons other than defense. The negotiations resulted in permission from the Japan government allowing TI to set up a joint venture with Sony in 1964 merely for appearances. I’m told that Sony people never showed up and TI quietly bought out their share of the business later. But Haggerty established a personal relationship with Morita, founder and CEO of Sony that lasted through Haggerty’s lifetime. This became important in the future.

TI began a successful offshore assembly operation in Hatogaya, Japan on the outskirts of Tokyo, followed by another assembly site in Hiji Japan on the island of Kyushu. The Hiji site was on the top of a small mountain overlooking the ocean on three sides and must have been one of the most valuable pieces of industrial real estate outside Tokyo. This habit of finding valuable real estate for plants was a TI characteristic, rumored to be the responsibility of Board member Buddy Harris. The choice of the TI Nice plant was terrible from the point of view of location for manufacturing but it was on the top of a hill with a panoramic view of the French Riviera. Whatever limitations the site had were, at least partially, offset by the breathtaking view.

Soon the race for offshore manufacturing sites was on. Morris Chang’s influence came to bear and Taiwan would have been the next site but Morris tells me that the Taiwanese government wasn’t flexible enough. TI therefore built the Singapore site in 1968, then Taiwan in 1969, Malaysia in 1972 (simultaneously with Motorola and SGS Thomson in Kuala Lumpur) and the Philippines in 1979 (a site that I was proud to have report to me from 1987 through 1993).

TI did two things that were unique among semiconductor companies in the race to build up offshore manufacturing. First, TI decided that cheap labor was not the only reason to go offshore. The offshore sites had skilled technicians as well. So TI moved automated manufacturing equipment to its offshore sites even though manual labor was cheap. This turned out to be highly advantageous. The other thing TI did was to establish wafer fab manufacturing in Asia, starting in Japan. Intel remained largely in the U.S. Motorola was primarily in the U.S. and Europe, as were most other semiconductor companies. Europe was necessary, at least for assembly, because they had substantial duties on imported semiconductors. European assembly sites saved money despite the high labor cost. TI, of course, had wafer fabs all over Europe, starting in the UK, then Germany, France and Italy. Assembly sites were limited to Portugal and Italy.

One result of the establishment of wafer fabs in Japan was a creation of awareness of the superb manufacturing process variability control that was possible with Japanese workers. In cases where we sent the same photomask set to Japan, the die sort yields were typically much higher than those of the same devices produced in the U.S. TI used this to its advantage.

When the trade wars between Japan and U.S. semiconductor companies erupted in the 1980’s, MITI (the Japan Ministry of International Trade and Industry) assigned Japanese companies quotas for purchase of semiconductors from U.S. companies. Sony was assigned a very high quota of 20%. All the Japanese companies wanted to fill their quotas with DRAMs but only TI and Micron were still in the business in the U.S. At this time, I was managing an organization I named Application Specific Products, or ASP, that had responsibility for microprocessors and ASICs. Yukio Sakamoto and I went to Japan to negotiate a deal with Sony with a goal of having TI manufacture the chips used in the industry standard Sony Walkman.

Because of the historic relationship between TI and Sony, my meeting started with Norio Oga, the CEO and former opera singer who succeeded Morita as Sony CEO. Sony’s offer: If you can match the Sony Semiconductor internal transfer price and quality, you can have 100% of the business. When we started production, our packaging cost alone for an 84 pin Quad Flat Pack was six cents per pin, more than the total price of the chip plus package. Within four months, thanks to Sakamoto, we were at one yen per pin. Similar ratios existed for the chip. And over the next year, we billed Sony for $200 million for Walkman chips and greatly enhanced our manufacturing capability.

The 20 Questions with Wally Rhines Series


Verifying ESD Fixes Faster with Incremental Analysis
by Tom Simon on 08-23-2018 at 12:00 pm

The author of this article, Dündar Dumlugöl, is CEO of Magwel. He has 25 years of experience in EDA managing the development of leading products used for circuit simulation and high-level system design.

Every designer knows how tedious it can be to shuttle back and forth between their layout tool and analysis tools. Every time an error or rule violation is found, you need to open up the design in the editor and make changes, save, export and re-run the analysis. This is especially true with ESD tools, which are fine for analysis, but often leave designers running blind when it comes to resolving errors. As a result, designers have no recourse other than to iterate back to layout to fix issues. Magwel’s ESDi offers refreshing features for locating the source of an error and, even more importantly, for making changes and testing fixes without leaving the ESD tool itself.

Magwel’s ESDi offers comprehensive, high-speed ESD simulation on every pad pair. It takes competitive triggering into consideration so that it more accurately evaluates voltages and currents during discharge events. It also uses I/V curves that can be derived from TLP measurement data or created by the user. In either case, device model I/V curves can include snapback, which is used to determine whether triggering occurs and the actual voltage after the triggering threshold is reached.

Another important benefit of the tool is its extremely high usability. This starts with ease of setup. For instance, ESD devices and their terminals can be automatically tagged in the layout. Users can also control whether all the pad pairs are run or only a subset is simulated. Parallel processing speeds up the final results.

Another aspect of ESDi’s excellent usability is the error reporting. All test results are provided in a report grouped by category and sortable on any field, right inside the tool. Violations are highlighted for easy identification. By clicking on a reported error, the user can jump to the layout with the relevant geometries highlighted for easy viewing. In addition, ESDi can generate a graph diagram of all the devices and paths involved in the discharge event. Included in this are the net resistances, as well as device voltages and currents.

To illustrate the value of having editing capability in the analysis tool, we will use a case with an HBM simulation error involving a primary and a secondary ESD device, plus a poly resistor that helps trigger the primary device and limits current flow. After finding the error, the next step is to modify the resistor to alter the resistance, then re-simulate to see if the problem is fixed. This is a tricky modification because too high a resistor value can affect input pin behavior. So, the goal is to add just enough resistance to allow the protection to operate properly, but not to overdesign to the point where performance in operational mode is affected.


Figure 1 – Test circuit schematic with TLP models


Figure 2 – Test case in ESDi GUI

In this design, ESD device “Dev1” triggers first. However, due to the low resistance (3.3 Ohms) of resistor “PolyR”, the primary device “esd3” never triggers. As a result, all of the current for the discharge event travels through Dev1, and the voltage drop across the device reaches 8V, which will lead to device burnout. After the initial simulation, the designer will want to change the area of the resistor to increase the resistance.
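
A first-order estimate shows why 3.3 Ohms is too small. The primary device triggers once the pad voltage, which is the secondary’s clamp voltage plus the I*R drop across PolyR, reaches the primary’s trigger voltage. The device values below are hypothetical (only the 3.3 Ohm and 8V figures come from the example above).

    def min_poly_r(vt1_primary, v_clamp_secondary, i_secondary_max):
        # Smallest series resistance that lifts the pad voltage to the primary's
        # trigger point at the largest current the secondary can briefly tolerate.
        return (vt1_primary - v_clamp_secondary) / i_secondary_max

    vt1_primary = 7.0    # V, primary (esd3) trigger voltage -- hypothetical
    v_clamp_sec = 3.5    # V, secondary (Dev1) clamp voltage while conducting -- hypothetical
    i_sec_max   = 0.4    # A, current the secondary can tolerate briefly -- hypothetical

    print(f"minimum PolyR ~ {min_poly_r(vt1_primary, v_clamp_sec, i_sec_max):.1f} Ohms")
    # ~8.8 Ohms here: well above the original 3.3 Ohms, yet still small enough to be
    # checked against the input pin's functional timing budget.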


Figure 3 – Circuit with ESD violation

Inside of ESDi there is a suite of layout editing commands that allow the designer to modify the layout geometry. Using these commands, it is easy to change the width of the poly resistor and its contacts.


Figure 4 – Editing operation to modify PolyR resistor prior to reanalysis


Figure 5 – After PolyR resistor modification

Once the geometry is changed, the ESD solver can be quickly rerun to perform an HBM simulation on the pad pair in question. With these changes, the new simulation results look much better. With the higher R value on PolyR, the primary device triggers, carrying the brunt of the current. Also, the voltage is clamped at a lower value, avoiding device burnout.


Figure 6 – Circuit after fixing violation in ESDi with layout editing commands

After debugging and experimentation, when optimal results have been obtained, the designer can move back to their layout tool and finalize the changes in the original design.

This is why customers say good things about ESDi saving them considerable time and hassle by enabling them to make changes right in the analysis tool. For ESD integrity it is very important that designers and ESD experts have accurate, effective and easy-to-use tools. The earlier and more often ESD protection is reviewed, the lower the likelihood that an error or violation will make it through to silicon. Having editing built into ESDi makes the process more efficient and provides better results in the form of fewer design iterations and less rework.

More information about Magwel’s ESD solution can be found at their website.


Webinar: Ensuring System-level Security based on a Hardware Root of Trust
by Bernard Murphy on 08-23-2018 at 7:00 am

A root of trust, particularly a hardware root of trust, has become a central principle in well-architected design for security. The idea is that higher layers in the stack, from drivers and OS up to applications and the network, must trust lower layers. What good is building great security into a layer if it can be undermined by an exploit in a lower layer? The lowest-level foundations of the stack – hardware and bootloader, for example – must guarantee trustworthiness in operation; these become the root of trust. The function of this level is to ensure trust in critical functions – trust in downloaded software/firmware, trust in device authentication, and trust in the security of privileged operations such as encryption.

REGISTER HERE for this Webinar on September 6th, 2018 at 11:00 AM PT.

Why is this so important? After all, the easiest place to attack is in application software and the beauty of hardware is that it is difficult/impossible to attack, right? Remember there is software, aka firmware, that runs down in the hardware, driving the bootload process for example. Not as easily accessed as application software, particularly if that code is stored in ROM, but not inaccessible either.

One telling example was demonstrated in Nintendo Switch consoles. This depends on a USB recovery mode offered by the Nvidia Tegra X1 on which the console is based. Maybe you can already guess the problem. This mode can be tricked into bypassing the normal processes that control external data input. The attack starts by shorting a pin on a controller connector, forcing the USB recovery mode. Then the USB input fakes out the recovery using a variant of the time-honored buffer-overflow exploit; sending a bad length argument can force the system to request up to 64K bytes per request, overflowing the DMA buffer in the boot ROM, from where the payload can be copied into the protected application stack. From there it can do anything it wants since it’s already privileged. Yikes.
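
Reduced to its essence, the class of bug is a missing bounds check: an attacker-supplied length is used for a copy into a fixed-size buffer without being validated. The Python sketch below is purely illustrative (it is not the Tegra boot ROM code, and Python itself is memory-safe); it simply shows the check whose absence enables this kind of overflow.

    DMA_BUFFER_SIZE = 0x1000   # invented capacity for the example

    def handle_usb_request(payload: bytes, requested_len: int) -> bytearray:
        # The critical check: reject lengths larger than the real buffer capacity
        # (or larger than the data actually supplied) before copying anything.
        if requested_len > DMA_BUFFER_SIZE or requested_len > len(payload):
            raise ValueError("requested length exceeds buffer capacity")
        buffer = bytearray(DMA_BUFFER_SIZE)
        buffer[:requested_len] = payload[:requested_len]
        return buffer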

Of course another problem is that ROM coding is pretty final. Great if you know it can never be hacked but see above for how well that worked out for Nintendo (who apparently have already shipped ~15M consoles with this vulnerability). Perhaps a better approach is to accept that we can never guarantee absolute security and instead allow for carefully-secured updates to firmware to address the latest known threats.

The logical way to do this is through over-the-air (OTA) updates. For many reasons, no-one wants to have to plug in a USB stick (again, see above) or visit a dealer/shop for an update. Security updates should be painless; OTA is the only way we know how to do that today. But how many ways could that be compromised? Man-in-the-middle attacks, or faking OEM credentials? These won’t just spoil a game; they may hack your car, and since they’re working with a boot image, again they can take over everything. Fortunately, this kind of attack can be largely averted through strong authentication, using encrypted downloads and signing the code in some manner to detect any code-tampering on each boot.
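
The boot-time check itself can be sketched in a few lines. This is illustrative only: a production root of trust verifies an asymmetric signature (RSA or ECDSA) over the image with a public key anchored in hardware, whereas here a trusted SHA-256 digest stands in for that step so the flow can be run with the Python standard library.

    import hashlib, hmac

    TRUSTED_DIGEST = hashlib.sha256(b"known-good firmware image").hexdigest()

    def verify_and_boot(image: bytes) -> bool:
        digest = hashlib.sha256(image).hexdigest()
        if not hmac.compare_digest(digest, TRUSTED_DIGEST):   # constant-time comparison
            return False                                      # refuse to boot tampered code
        # ...hand control to the verified image...
        return True

    print(verify_and_boot(b"known-good firmware image"))   # True
    print(verify_and_boot(b"tampered firmware image"))     # False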

Adding this kind of root of trust to a system obviously isn’t just a matter of plugging in an extra block of RTL. There are multiple components: CPU, memory, true random number generator, encryption logic and software at minimum. And these have to be configured together with your system needs, providing plenty of opportunities for you to get it wrong in some subtle way.

Meltdown, Spectre and more recently Foreshadow may garner the majority of media attention and panic, but all devices need a root of trust. Whether you build your own from scratch, from 3rd-party IP, or you use a pre-built ROT solution, you still have to verify correct operation of the ROT. You might want to watch Tortuga’s webinar on this topic. You can REGISTER HERE.