Trust, but verify. How to catch peanut butter engineering before it spreads into your system — Part 2: Verification.
by Raul Perez on 02-09-2021 at 10:00 am

This article about verification is Part 2 of a two-part series. Please see Part 1, on validation, HERE.

Verification has emerged as a discipline in its own right. It is no longer an activity led by the design team, to which time is allocated only as long as it doesn't get in the way of designing. Chip companies that want predictable product release cycles have realized that choosing between designing and verifying is a false choice: you need to treat both with absolute devotion to succeed in today's competitive market. And if you're a system company, you absolutely need to make sure that your custom silicon supplier deploys top notch verification methodologies and verification engineers to your project, so that your system schedule is predictable and your chip tape out is of high quality.

I have never met engineering or sales executives representing a chip supplier who would not claim that their company has a great track record of on-time tape outs, first pass silicon, total commitment to excellence, and top notch methodologies; not a single one has ever said anything different. Yet truly first pass silicon is rare and tape out delays are not uncommon, so someone is not telling the truth. That is why system companies are advised to perform a detailed verification capabilities review during chip vendor selection, or to perform multiple verification reviews as part of the full silicon management process. System companies can then make an informed decision when choosing a supplier for their custom silicon program.

This verification review effort also helps reduce the number of mask sets consumed in a project, which can be a very costly item as you use process nodes closer to the state of the art. Once the system team gets silicon back, the supplier can be very difficult to persuade to tape out again to fix ECOs that are not acceptable to the system company. This is especially true if the chip supplier is working from a fixed-bid quote, since paying for additional masks could wipe out their profit margin. This can lead to an impasse between chip supplier and system company.

I hope the explanations above convince the reader that the alternative is not a good plan at all: hiring a chip company without silicon management processes and experts on your side, writing them checks for large sums of money as milestones are reached, and then crossing your fingers hoping the silicon comes back in working condition. By the time the first revision of the chips is delivered for your system build, you have probably already paid the chip design house most of the agreed NRE, so you have little leverage left to get them to fix the chip. Careful drafting of contracts is a must here, and you should select competent legal counsel early to ensure your legal front is well thought out. The silicon manager is a key resource to help the legal team define meaningful milestones as payment triggers and to anticipate the types of issues that can cause an impasse during the program.

Silicon management is not just about technical checks and project management. It’s also about understanding the motivations and incentives of the parties involved in the project and constantly watching for collision courses and blind spots.

Some of the risks to watch out for when seeking to hire a silicon supplier for your custom silicon program are:

  • Run-break-fix.

I’ve seen this happen in different situations: 

  1. One is when you select a mostly analog chip supplier that usually releases small pin-count parts, and your custom chip requires them to integrate several of those parts into one bigger chip. Add some digital interfaces and control registers, and things are very different from what that team usually works on. While this sounds simple, when it fails it is usually because the verification methodologies analog designers use for small pin-count chips may not, and usually do not, scale to higher levels of integration. To add to the difficulties, analog designers who are used to being top dog in the hierarchy balk at the idea of letting verification leads take charge of top-level verification, and instead try to scale up their own methodologies and keep control of the project. As irrational as that sounds, it happens a lot; every engineer wants to control their baby. Absent an honest desire by the chip supplier to adopt a verification methodology that scales and can integrate digital and analog, you are going to have an unpredictable chip release schedule. As a double whammy, any delay in design will come at the cost of reduced verification. The ego of many designers simply gets in the way of the success of the program, and that is a very difficult situation to overcome, so it is best to avoid it altogether and choose a different supplier as soon as you detect that this is likely to happen.
  2. Another situation that leads to a run-break-fix scenario is when the supplier may in theory have a proper verification methodology in place, but severely understaffs the verification team to reduce the "overhead costs." This is peanut butter engineering, and it tends to happen in companies too influenced by traditional designers who don't even comprehend why we need these fancy verification guys who look more like software engineers than "real chip guys." They feel they have been releasing chips for X years without them, blah, blah. So you end up with less coverage than you should or could have, because the verification engineers simply don't have enough cycles. That leads to poorly written tests, weak schematic-versus-model checks, and insufficient automation of the verification suite, which in turn leads to poor regression testing (a minimal sketch of such regression automation follows this list). Simply put, the verification of the chip is sub-par compared to what it could have been given the modern tools and techniques available today, and it is your system that will take the brunt of the risk.
  3. If your project ends up in a run-break-fix loop, you could have two, three, four or more tape outs as you watch your system development suffer a huge delay. Once you select a supplier that turns out not to have the proper verification chops, you end up in the very bad position of having to decide between continuing to invest time and taking on more schedule risk with this supplier, or taking the full hit of switching to a different supplier late in the game. Some system companies try to solve this by having multiple suppliers develop the same pin-to-pin compatible chip in parallel, but that dilutes the system company's engineering focus on making sure the chip is properly designed to the right specs that support the system.
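To make the automation point concrete, below is a minimal, hypothetical sketch (in Python) of the kind of nightly regression runner a properly staffed verification team maintains. The simulator command, test directory layout, and pass/fail convention are assumptions for illustration only, not any particular vendor's flow.

```python
#!/usr/bin/env python3
"""Minimal nightly-regression runner sketch (hypothetical, for illustration)."""
import subprocess
import sys
from pathlib import Path

def run_test(sim_cmd: list[str], test: Path) -> bool:
    """Run one test; a non-zero exit or 'ERROR' in the log counts as a failure."""
    result = subprocess.run(sim_cmd + [str(test)], capture_output=True, text=True)
    return result.returncode == 0 and "ERROR" not in result.stdout

def main(test_dir: str = "tests", sim_cmd: str = "echo") -> int:
    # 'echo' stands in for the real simulator invocation (an assumption);
    # a real flow would launch jobs on a compute farm and parse real logs.
    tests = sorted(Path(test_dir).glob("*.py"))
    failures = [t.name for t in tests if not run_test([sim_cmd], t)]
    print(f"{len(tests) - len(failures)}/{len(tests)} tests passed")
    for name in failures:
        print(f"  FAIL: {name}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

The point is not the code itself but that it exists, runs unattended every night, and makes coverage regressions visible immediately; an understaffed team never gets this far.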
  • Experienced, but done.

It's not unusual to find, while discussing the requirements of the verification review with a potential chip supplier, or further down the road when discussing the verification that has been run in preparation for tape out, that some engineers, instead of arguing why some type of verification doesn't need to be run or improved because it is covered some other way, will say: "I have X years of experience, and in my experience we just don't need to do that." Now don't get me wrong, an engineer at any experience level can have this attitude, especially the bad ones, but even the good ones can go bad if they don't watch out. What this engineer is really saying, by choosing to invoke his or her experience instead of defending the position with an argument, is: "I lost my professional curiosity some time back, I stopped learning, and I am no longer interested in learning. So quit making me uncomfortable by asking me to change the way I do something and challenging my worldview." Once engineers lose their curiosity, they are done as engineers; experience is valuable, but it ain't going to let you grow by itself if you have stopped being curious. If you as a system company see this type of attitude in a person in a lead role for a custom chip program, you need to get out of there and select another supplier. This is especially true when searching for a company with good verification methodologies, since this is a field that is relatively new and has changed a lot recently compared to other, much more mature areas of silicon development.

  • Serializers, and false choices.

The traditional way of developing a chip used to be that you first designed it, and then ran the verification before you taped out. Digital chips have usually had the most robust methodologies for design and verification. But as soon as significant analog content enters the picture, verification really diverges from supplier to supplier. It seems to me that everyone does their own thing, mixing and matching commercially available tools with overlapping capabilities, selected for different jobs in a somewhat arbitrary manner, and blending them with internally developed scripts and tools. Out of that blend, some concoction of a verification methodology and its results becomes your verification for the tape out. Verification engineers are expected to start developing models and tests in parallel with the design team designing the chip. They interview the designers to determine the functionality and pin-out of the blocks they will work on, and with that information they start putting together a top-down behavioral model and testbench environment that eventually intercepts the designers' schematics. That environment is then used to do proper schematic-versus-model checks, speeding up some simulations while judiciously leaving other blocks at transistor level, all to get excellent overall coverage while maintaining reasonable simulation times (a toy illustration of such a check follows). The verification engineers continuously build that verification environment and search for bugs throughout the chip development; that is their core job. They are not designers who came off block designs and are now available to run sims and make models. While augmenting the verification team with idle designers can be beneficial, a plan that requires designers to come off their block designs in order to complete proper verification is a risky one: designers may need more time than expected to finish their blocks, and they will prioritize that work over any verification deliverable assigned to them.
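To illustrate the schematic-versus-model idea in miniature, here is a toy Python sketch. The block (an ideal 8-bit DAC), the reference numbers, and the tolerance are all invented for illustration; in a real flow the reference points would come from transistor-level simulation of the schematic, and the model would live inside the mixed-signal testbench.

```python
"""Toy behavioral-model-vs-schematic check (illustrative numbers only)."""

def dac_model(code: int, vref: float = 1.2, bits: int = 8) -> float:
    """Behavioral model: ideal N-bit DAC transfer function."""
    return vref * code / (2**bits - 1)

# In a real flow these points are extracted from transistor-level simulation
# of the schematic; here they are fabricated for the example.
schematic_results = {0: 0.0, 64: 0.3011, 128: 0.6022, 255: 1.2}

TOLERANCE = 0.005  # volts; an assumed accuracy budget

for code, measured in schematic_results.items():
    predicted = dac_model(code)
    assert abs(predicted - measured) < TOLERANCE, (
        f"code {code}: model {predicted:.4f} V vs schematic {measured:.4f} V")
print("behavioral model matches schematic reference within tolerance")
```

Checks like this, run automatically whenever either the model or the schematic changes, are what keep the fast behavioral representation honest against the silicon-accurate one.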

  • Designers as jacks of all trades – masters of only one.

As you may have noticed in the points above, I am not fond of designers interfering with verification engineering, especially analog designers doubling up as verification engineers. Their heart is in design, not in verification, and they usually lack not only the passion but also the skills needed to be an effective verification engineer, which include excellent coding skills. It should go without saying that assigning designers to verify their own design should not be part of the plan if you want to avoid tunnel vision getting in the way of finding bugs before tape out.

  • Home brewed, but too cool to show and defend it.

Many companies develop tools, scripts, and so on for internal use. This is normal, and as long as you can inspect the tools and their inputs and outputs as part of the tape out phase review, it is fine. However, when a widely used commercial tool is available but the chip supplier uses a home-brewed tool instead, they are adding risk to the chip development: the user base of the commercial tool is broader, so more people are reporting bugs, and there is an EDA company behind it whose business it is to fix and upgrade it. A home-brewed tool may be someone's pet project, and when that someone moves on from the company, the tool may no longer be updated and will go stale. It is especially disruptive when a supplier blocks the system company's silicon reviewers from performing an in-depth verification review at tape out in order to protect whatever they think is differentiated IP inside the tool. This handicaps the system company's ability to check whether the tape out is of high quality, and therefore negates the risk mitigation that an independent tape out review provides.

  • Don’t let documentation get in the way of the “real work”.

Chip companies that have poor internal review processes tend to have poor documentation practices. You can spot this easily: as soon as you need to perform an in-depth verification plan review, the plan turns out to be poorly written, lacking test details and specifics of what is being tested. It is basically a document that does not give the reader the full scope of the verification that is planned. In these cases you usually have one or a few engineers directly coding the verification without taking time to review their plans and intended coverage with the broader team. Without the documentation, the internal reviews at that chip supplier will be much less effective, and it will be pretty much impossible for the system company reviewers to judge whether the plan is good. It also reduces or eliminates the opportunity for the system company's engineers, including its FW engineers, to give feedback on the chip verification plan, on the types of tests and coverage they think would be most appropriate for the system's use cases. (A minimal sketch of what a well-specified plan entry might capture follows.)
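For contrast, here is a minimal, hypothetical sketch (in Python) of what one reviewable verification plan entry might capture. The fields and values are invented for illustration; they are not a standard and not any particular company's template.

```python
from dataclasses import dataclass

@dataclass
class PlanEntry:
    """One reviewable line item in a verification plan (illustrative fields)."""
    feature: str         # what is being tested
    requirement: str     # spec section the test traces back to
    method: str          # e.g. directed test, constrained-random, formal
    coverage_goal: str   # how "done" is measured
    owner: str
    status: str = "planned"

plan = [
    PlanEntry(
        feature="I2C register write/read-back",
        requirement="spec section 4.2 (hypothetical)",
        method="directed test plus randomized address/data",
        coverage_goal="100% register toggle coverage",
        owner="verification lead",
    ),
]

for entry in plan:
    print(f"[{entry.status}] {entry.feature} -> {entry.coverage_goal}")
```

A plan written at this level of specificity is something both the supplier's internal reviewers and the system company's engineers can actually critique.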

There are no perfect supplier teams, and there is no perfect verification flow. Every team has its strengths and weaknesses, and when selecting the supplier for your custom chip you need to decide whether that is the right team for the type of chip you want to develop. Verification can be run forever with all sorts of randomized inputs, analog operating point combinations, and so on, but at some point you need to tape out, and some bugs may have been missed that are easier to find during validation than in a simulator. If you did a thorough job you tend to find few (if any) digital bugs, since the digital domain already has very good tools to maximize coverage; most bugs will be found in the analog or RF parts of the chip.

  • Trust, but verify.

Custom system silicon, when done with the assistance of silicon experts, puts the system company in control of its own destiny. It's important to note that when purchasing catalog parts for your system, unless you perform due diligence similar to what is described above, you're trusting but not verifying that your components will be of good quality and unlikely to cause yield or other issues when you go to production in high volumes.

For more information contact us.


Take Your Agile Release Process to the Next Level with Compass 2.0.1 and EssentialSAFe® from HCL
by Mike Gianfagna on 02-09-2021 at 6:00 am

As I've discussed before, HCL Compass is a very flexible tool to define and manage development and release processes at the enterprise level. In HCL's own words: "Low-code/no-code change management software for enterprise level scaling, process customization, and control to accelerate project delivery and increase developer productivity." These are lofty goals. Lean and agile processes are at the center of these kinds of initiatives, and there are many requirements to be met to achieve a lean and agile development process. A new release of HCL Compass is noteworthy with respect to these goals. Read on to see how to take your agile release process to the next level with Compass 2.0.1 and EssentialSAFe® from HCL.

Let's first examine the nomenclature involved. EssentialSAFe is a new schema that ships with the latest release of Compass (2.0.1) and helps teams follow SAFe practices. SAFe, or Scaled Agile Framework, is a set of organization and workflow patterns for implementing agile practices at enterprise scale. Essential SAFe contains the minimal set of roles, events, and artifacts required to continuously deliver business solutions via an Agile Release Train (ART).

The Agile Release Train is a long-lived team of agile teams, which, along with other stakeholders, incrementally develops, delivers, and where applicable operates, one or more solutions in a value stream. Agile teams are cross-functional groups of 5-11 individuals who define, build, test, and deliver an increment of value in a short time window. So, EssentialSAFe from HCL provides a comprehensive out-of-the-box schema to implement a lean and agile workflow for the enterprise.  The schema is also customizable, so you can fine tune the workflow for your organization.

In the EssentialSAFe schema, there are three work items available to scope, plan and implement experiences in your solutions. They are Features, Stories, and Tasks. These make up part of the SAFe Requirements Model, shown in the figure below.

SAFe Requirements Model

A few more definitions will help (a minimal data-model sketch follows the list):

  • A Feature is a service that fulfills a stakeholder need. Each feature includes a benefit hypothesis and acceptance criteria and is sized or split as necessary to be delivered by a single Agile Release Train (ART) in a Program Increment (PI).
  • Stories are short descriptions of a small piece of desired functionality, written in the user’s language. Agile Teams implement them as small, vertical slices of system functionality, sized so they can be completed in a single Iteration. Stories provide just enough information for both business and technical people to understand the intent.
  • Tasks are small work items that new teams might use to split stories into smaller parts. They are completed within a few days, but often finished in less than a day. Tasking stories is an optional practice in SAFe, but it can help new teams improve their sizing of stories and estimation of capacity.
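As a concrete, hypothetical illustration of that Feature/Story/Task hierarchy, here is a minimal Python data model. The field names are assumptions chosen to mirror the definitions above; they do not reflect the actual Compass schema.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Smallest unit of work; optional in SAFe, typically done in under a day."""
    description: str
    done: bool = False

@dataclass
class Story:
    """Short description of desired functionality, sized to fit one Iteration."""
    title: str
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Feature:
    """A service fulfilling a stakeholder need, delivered by one ART in one PI."""
    name: str
    benefit_hypothesis: str
    acceptance_criteria: list[str]
    stories: list[Story] = field(default_factory=list)

feature = Feature(
    name="Single sign-on",
    benefit_hypothesis="Fewer support tickets for password resets",
    acceptance_criteria=["Login via corporate identity provider succeeds"],
    stories=[Story("As a user, I can log in with my corporate account",
                   tasks=[Task("Wire up the authentication client")])],
)
print(f"{feature.name}: {len(feature.stories)} story(ies)")
```

The value of an out-of-the-box schema like EssentialSAFe is that this hierarchy, and the workflow around it, comes predefined and customizable rather than hand-built.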

For further reading, HCL provides more details about the new capabilities of Compass 2.0.1 and how to implement it in your enterprise here, with more detail provided here. A complete example implementation is provided that should be quite helpful.  The referenced posts are written by Adam Skwersky, senior software engineer at HCL Technologies. Adam hails from MIT, where he earned a BS and MS in Mechanical Engineering. He was also a research assistant at MIT in robotics before spending 20 years in software development at IBM. He’s been at HCL for four years. You can learn a lot from Adam.

Check out his posts so you can take your agile release process to the next level with Compass 2.0.1 and EssentialSAFe® from HCL.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.


The Five Pillars for Profitable Next Generation Electronic Systems
by Kalar Rajendiran on 02-08-2021 at 10:00 am

Although electronic systems design as a discipline has been around ever since electronic systems came into existence (and that was many decades ago), the design complexities involved and the demands and constraints placed on those systems have multiplied significantly since then. Recent research by LifeCycle Insights shows that 58% of all new design projects incur unexpected additional costs and time delays, and only one in four projects actually goes out on time and on budget.

Source: LifeCycle Insights

When asked “what is the secret behind a successful and profitable product?”, the typical answer is two words: great design. As true as that answer is, it hides all the details of how the design got there. Great design does not happen by magic or happenstance. It takes very well-thought-through methodologies and software tools to effectively and efficiently manage product, team, and process complexities, resulting in a great design. A recently posted whitepaper by David Wiens, product marketing manager at Siemens Digital Industries Software, titled “Raising the pillars of digital transformation for next-generation electronic systems design,” walks you through those exact details.

I’ll list some of the nuggets I gathered from reading the whitepaper.

The classic prototyping-dependent approach increases the risk of missing product launch schedules. The reason: as system design complexity increases, design cycles grow significantly longer, and any problems we wait to discover through physical prototyping surface too late, forcing redos of lengthy design cycle iterations.

The following diagram highlights the need to catch more errors earlier by integrating verification and validation throughout the design process, which is significantly longer than the new product introduction (NPI) phase.

Source: Siemens Digital Industries Software

One way for teams to detect down-stream problems earlier is by performing simulations of prototype performance earlier in the design cycle. This calls for developing a virtual model to represent the intended final physical product. A virtual model of the final product that is being designed is called a digital-twin. The digital-twin allows teams to perform simulations and validations as early in the design cycle as possible.

A model-based system perspective allows teams to not only look at the electrical and functional trade-offs earlier in the design cycle but also product trade-offs that might impact weight, cost and availability of system components.

A digital-twin developed in the context of a model-based systems engineering perspective allows for digital-prototype driven verification: Shift-Left testing at play. Over the course of a project, the digital-twin model evolves to allow more complex interactions, including analysis, simulations, and validations earlier in the design cycle. This enables teams to detect problems much earlier, when they are easier and cheaper to fix, with very little product launch schedule impact. It also reduces the need for physical prototypes.

Digital-prototypes lend themselves well to automation technologies that eliminate manual reviews and increase productivity. And the benefits derived from this automation are multi-fold.

Next-generation electronic systems require a next-generation approach. David explains all the details by categorizing them into five transformational factors. He calls them the five essential pillars for consistently delivering profitable electronic designs and systems.

  1. Digitally integrated and optimized multi-domain design
  2. Model-based systems engineering (MBSE)
  3. Digital-prototype driven verification
  4. Capacity, performance, productivity and efficiency
  5. Supplier strength and credibility

I only touched upon some aspects of a couple of the five pillars. Each and every pillar is critical to understand.

If you play any role within the electronic systems ecosystem, whether at the chip level or the systems level, whether as a budget owner or an influencer, I strongly recommend downloading and reading David’s complete whitepaper. There are a lot of objective and compelling details to help you evaluate the software tools and methodologies you currently deploy, and to decide on critical updates that will enable you and your customers to consistently turn out successful and profitable products.

Also Read:

Probing UPF Dynamic Objects

Calibre DFM Adds Bidirectional DEF Integration

Automotive SoCs Need Reset Domain Crossing Checks


Webinar: Electrothermal Signoff for 2.5D and 3D IC Systems
by Mike Gianfagna on 02-08-2021 at 6:00 am

The move from single-chip design to system-in-package design has created many challenges. The rise of 2.5D and 3D technology has set the stage for this. Beyond the modeling requirements and the need for ecosystem collaboration to get those models, there is a significant challenge in understanding the data. The only way to truly predict the behavior of a complex design like this is through concurrent analysis across multiple regimes — thermal, mechanical and electrical. Attaining such a global view presents substantial algorithm, flow, analysis and visualization hurdles. When I heard about a webinar from Ansys that addresses these issues I got very interested. The webinar is coming on February 23, 2021. Here is a sneak preview of the event and how electrothermal signoff for 2.5D and 3D IC systems can be implemented.

The webinar focuses on Ansys RedHawk-SC Electrothermal, a new product introduced around the middle of last year with limited customer availability. The product is now moving to general availability and this is the reason for the webinar. There have been many posts about the innovative integration Ansys has delivered across multiple domains. The upcoming webinar takes it up a notch.

Marc Swinnen

First, a bit about the speakers. Marc Swinnen, director of product marketing at Ansys Semiconductor Business Unit kicks off the webinar. Marc hails from Cadence, Synopsys, Azuro, and Sequence Design, where he developed a deep understanding of digital and analog design tools. Marc’s depth of understanding and articulate delivery set the stage for a great event.

Next, Sooyong Kim, director product specialist responsible for 3D-IC and chip-package-system multiphysics solutions at Ansys, takes you through a deep dive of the technology. With 20 years of experience in the EDA industry with a focus on power integrity and reliability analysis and methodologies, Sooyong is clearly up to the challenge. His title conveys an important dimension of the technology being discussed — multiphysics.

Sooyong Kim

Analysis of 3D structures requires representation of multiple physical effects to get the complete picture across thermal, mechanical and electrical. This is clear. What may not be as clear is the need for concurrent analysis of all these effects since thermal stress will impact form factor and planarity (mechanical), electrical will impact thermal, and so on. Millions of such interactions need to be considered. The term multiphysics conveys the concurrent analysis of all these effects to get a true picture of the system.
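As a toy illustration of why the analysis must be concurrent, consider the feedback loop between just two of the domains: power dissipation raises temperature, temperature raises resistance, and resistance changes the power, so the system has to be iterated to a self-consistent operating point. The Python sketch below uses invented coefficients purely to show the idea; a real multiphysics solver does this across millions of coupled elements.

```python
"""Toy electrothermal feedback loop (illustrative numbers only)."""

R0 = 1.0       # ohms at ambient (assumed)
ALPHA = 0.004  # resistance temperature coefficient, 1/K (roughly copper-like)
THETA = 20.0   # thermal resistance to ambient, K/W (assumed)
T_AMB = 25.0   # ambient temperature, deg C
I = 1.5        # operating current, A (assumed)

temp = T_AMB
for step in range(50):
    r = R0 * (1 + ALPHA * (temp - T_AMB))  # electrical state depends on thermal
    power = I**2 * r                       # dissipation from electrical state
    new_temp = T_AMB + THETA * power       # thermal state depends on electrical
    if abs(new_temp - temp) < 1e-6:        # stop at the self-consistent point
        break
    temp = new_temp

print(f"converged in {step} iterations: T = {temp:.2f} C, R = {r:.4f} ohm")
```

Analyzing either domain alone would miss the converged operating point entirely, which is exactly the argument for the multiphysics approach.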

Sooyong discusses some of the enhancements that comprise RedHawk-SC ElectroThermal. A key one is the analysis cockpit, which now allows concurrent views of electrical, thermal and mechanical effects. These domains were previously supported, but in separate interfaces. The new visualization environment presents all the data in a harmonized way. You need to see this to fully appreciate the benefits and insights available as a result.

The ability to handle data from multiple sources, both on-chip and off-chip, is detailed as well. This requires importing information from Ansys tools and partner tools and integrating it all into a common database for analysis and visualization. One example is importing data from Ansys Icepak, which can perform computational fluid dynamics analysis to predict the airflow/temperature profile of the enclosure as well as the temperature of the chip itself. This data then becomes boundary conditions for RedHawk-SC Electrothermal, creating a link between chip and system design.

Another high-impact capability is how this system supports analysis with incomplete data. This essentially provides a way to prototype the design to start analysis before all the information is known. A capability like this is worth a close look as it can save a lot of time.

The webinar goes into significant detail about the new capabilities of RedHawk-SC Electrothermal. Techniques for modeling multi-die systems, like HBM and PCIe interfaces, with silicon interposers, through-silicon vias (TSVs), and microbumps are discussed. How to perform signoff analysis on multi-die systems for power integrity, signal integrity, thermal integrity, and mechanical stress/warpage is also covered.

Thanks to the cloud-based big data analytics of Ansys SeaScape, all of the required data can be brought together for concurrent analysis and visualization. This new technology from Ansys is quite impressive. I’ve just scratched the surface here. If there is a 2.5D/3D design in your future, you need to attend this webinar. It will illuminate what challenges lie before you and how to address them. The webinar will be broadcast twice on February 23, 2021. You can register here for the 8:00 AM PST webinar.  You can register here for the 6:00 PM PST webinar. Check out how electrothermal signoff for 2.5D and 3D IC systems can be implemented.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read

Best Practices are Much Better with Ansys Cloud and HFSS

System-level Electromagnetic Coupling Analysis is now possible, and necessary

HFSS – A History of Electromagnetic Simulation Innovation


Morgan Stanley’s Tesla, Ford Misses
by Roger C. Lanctot on 02-07-2021 at 10:00 am

GameStop isn’t the only source of wild stock market gyrations. In fact, one might argue that the crazy-valuations gravy train got its start at a humble little car company called Tesla Motors. Tesla’s stock has more than doubled in value from $400 six months ago to more than $850 today.

Competing, so-called legacy, auto makers have watched enviously and helplessly as Tesla’s stock has climbed to a market value beyond the combined valuation of the five biggest car makers on the planet. This is in the context of Tesla shipping half a million cars in 2020, not even a one percent share of the total global car market.

Investment bankers like Morgan Stanley have struggled to cope with the meteoric rise in Tesla’s valuation – reversing bearish positions and raising stock price targets. Competing car maker executives have scratched their heads as the stock market shrugs at their own profitable operations and earnings “beats.”

In this time of special purpose acquisition companies (SPACs) – which have the earmarks of money laundering – and mass market stock manipulation by day traders, car makers are straining for attention, credibility, and validation from the public markets. Somehow, running a profitable operation with reliable products and satisfied customers is no longer revered or rewarded on Wall Street.

Sadly, many car makers have turned to bold pronouncements regarding autonomous vehicles, electrification, or strategic tech industry tie ups to juice their own stock prices.

Two weeks ago, GM’s stock price got a momentary lift from the announcement of a $2B investment by Microsoft in GM’s Cruise autonomous vehicle unit.  This week, Ford Motor Company’s stock saw a valuation flutter following its announcement of a major collaboration with Alphabet’s Google.

Never mind that GM’s Cruise operation is burning through cash at a $250M/quarter pace in its pursuit of building a robotaxi for which there is no business rationale or consumer demand.  Ignore the massive organizational impacts that Ford’s Google gambit entails.

Morgan Stanley offered up its assessment of the potential impact of the Ford-Google deal suggesting that Ford will “generate a gusher of $5B in profit” from a $9B revenue stream to be created by connected services enabled by Google.  Writes Morgan Stanley: “A new revenue source of that magnitude might double Ford’s $43B market capitalization and send its stock soaring to $25, up from less than $11 now.”

Let’s be real clear.  That is absurd.  In Morgan Stanley’s scenario, Ford will commence generating $10/month/car in data subscriptions for entertainment or retail services post the Google deal.  Nope.  That is not going to happen.
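A quick back-of-the-envelope check (my arithmetic, not Morgan Stanley's) shows why:

```python
# Back-of-the-envelope check of the Morgan Stanley scenario.
revenue_target = 9e9        # $9B/year revenue stream, per the Morgan Stanley note
per_car_per_month = 10.0    # the $10/month/car subscription assumption
per_car_per_year = per_car_per_month * 12

cars_needed = revenue_target / per_car_per_year
print(f"paying cars required: {cars_needed/1e6:.0f} million")  # 75 million
```

Seventy-five million paying vehicles, for a company that sells on the order of four to five million vehicles a year, with every single buyer subscribing, is not a near-term scenario.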

Morgan Stanley has swallowed whole the now-several-year-old McKinsey perspective that there are billions of dollars in untapped revenue tied up in vehicle data and vehicle-based commerce.  It’s true that vehicle data is valuable, but if Ford is partnering with Google, it is Google that stands to benefit most directly from “monetizing” that data.  And very few Ford owners will want to pay a subscription for connected services obtainable for free via their smartphones.

There are a lot of plusses for Ford in cozying up to Google, including leveraging Google’s on-board, in-vehicle application platform, Android operating system, and cloud resources. There are also some tantalizing possibilities in leveraging Google’s marketing and sales resources to push Ford vehicles.

But the Ford-Google deal is not a signal to buy or sell Ford or Alphabet stock.  It is a cause for concern among Ford’s existing hardware, software, and service partners.  It may also be a signal of significant change for various Ford development teams and even for the overall organization of Ford itself.

Ford is not alone.  Volvo, Renault, and GM, among others, have already announced their Google fealty to one degree or another.  All of those companies are weighing how they will preserve their independence from Google while using Google’s resources to reinforce existing customer ties.

Ford has been down this path before.  The company embraced Microsoft’s Windows Embedded operating system as part of its effort to bring its revolutionary Sync smartphone-based platform to market.  The companies parted ways as subsequent generations of Ford Sync ran into technical snags.

Like most other things in the automotive industry, the Ford-Google deal is a gamble with significant upside and downside risks. From here, the future looks bright, but tapping the value of this new relationship will require additional effort and investment, not less. And don’t expect a doubling of Ford’s stock price. In the currently stingy automotive investment environment, simply maintaining Ford’s existing valuation will be a triumph. What is worthy of attention will be the arrival of the Mustang Mach E – which owes nothing to Google.


Will EUV take a Breather in 2021?
by Robert Maire on 02-07-2021 at 6:00 am

-KLAC- Solid QTR & Guide but flat 2021 outlook
-Display down & more memory mix
-KLAC has very solid Dec Qtr & guide but 2021 looks flattish
-Mix shift to memory doesn’t help- Display weakness
-Despite flat still looking at double digit growth
-EUV driven business may see some slowing from digestion

As always, KLAC came in at the high end of the guided range, with revenues of $1.65B and non-GAAP EPS of $3.24 versus the guided range of $2.82 to $3.46. Guidance is for $1.7B ±$75M and a non-GAAP EPS range of $3.23 to $3.91. By all financial and performance metrics, a very solid quarter.

A “flattish” 2021 while WFE grows “mid teens”

Management suggested that WFE, which exited 2020 at $59-$60B, would grow double digits in 2021, but the year would look a bit more flat for KLAC, as its acquired display group is expected to shrink and there is an expected mix shift toward memory, which is less process control intensive.

Foundry has been strong, which has been very good for KLA, and the current quarter is expected to see roughly 68% of business from foundry.

Will EUV take a breather?

KLA obviously sells process management tools to companies working on new processes such as EUV. TSMC has bought so many EUV tools that it probably has problems finding space for more. TSMC has also clearly gotten well over the hump of making EUV work, so it likely does not need as much process control and could slow its EUV scanner purchases a bit given how far ahead it is.

Intel is obviously still coming up the learning curve and the purchasing curve, and Samsung is in between the two. We would not expect either Samsung or Intel to be as EUV intensive as TSMC has been, at least not in the near term. All this being said, it is not unreasonable to expect EUV-related process management to slow slightly.

Memory not as intensive as Foundry/logic

The industry is expecting memory makers to increase capex spend in 2021 as supply and demand have been in reasonable balance and supply is expected to get tighter.

Most of the expectation is on the DRAM side, which is slightly less process control intensive compared to NAND and likely lower in overall spend. This mix shift toward memory is obviously better for memory poster child Lam than for foundry poster child KLA. However, it’s not like foundry is falling off a cliff, with TSMC spending a record of between $26B and $28B in capex.

Service adding nice recurring revenue

As we have seen with KLA’s competitors, the service business continues to rise in importance to the company. The recurring revenue stream counterbalances new-equipment cyclicality and lumpiness. Having 25% or more of your revenue coming from service is very attractive.

Wafer inspection positive while reticle inspection negative

EUV “print check” has obviously been very good for KLA and a way to play the EUV transition given the issues in reticle inspection. Patterning (AKA reticle inspection) was down significantly after a nice bump in prior quarters, where KLA managed to take back some business from Lasertec (which now sports a $10B market cap).

Obviously, “missing the boat” on EUV reticle inspection is toothpaste that can’t be put back in the tube. We expect Lasertec to get the lion’s share of Intel’s business as it ramps up EUV.

The stock

If we assume roughly $7B in revenues for 2021 ($1.75B/quarter) with roughly $15 in EPS ($3.75/quarter), we arrive at roughly 19X forward EPS at the current stock price. This is likely a pretty good valuation for a company with stellar, flawless execution in a slowing, but still strong, market.
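For transparency, here is the arithmetic behind that multiple, using the round numbers above (the implied share price is simply derived from them):

```python
# Forward-multiple arithmetic using the round numbers above.
quarterly_revenue = 1.75e9              # $1.75B per quarter
quarterly_eps = 3.75                    # $3.75 per quarter
annual_revenue = quarterly_revenue * 4  # ~$7B for 2021
annual_eps = quarterly_eps * 4          # ~$15 for 2021
forward_pe = 19                         # the multiple cited above

implied_price = forward_pe * annual_eps
print(f"revenue ~${annual_revenue/1e9:.0f}B, EPS ~${annual_eps:.0f}, "
      f"implied share price ~${implied_price:.0f}")  # roughly $285
```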

Investors will likely get turned off by the “flattish” commentary despite the good numbers. It also doesn’t help that the chip stocks have been feeling a bit like they are turning over here. Despite any weakness, KLA remains the top financial performer in the industry.

Also Read:

New Intel CEO Commits to Remaining an IDM

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory

2020 was a Mess for Intel


Podcast EP6: The Traitorous Eight and Fairchild Semiconductor
by Daniel Nenni on 02-05-2021 at 10:00 am

Dan and Mike are joined by John East, a Silicon Valley industry veteran who takes you on a tour of the very foundations of Silicon Valley and venture capital. John explores the beginnings of these key parts of the world as we know it today and explains who the Traitorous Eight were and what role they played.

Biography
John East retired from Actel Corporation in November 2010 in conjunction with the transaction in which Actel was purchased by Microsemi Corporation. He had served as the CEO of Actel for 22 years at the time of his retirement. Previously, he was a senior vice president of AMD, where he was responsible for the Logic Products Group. Prior to that, Mr. East held various engineering, marketing, and management positions at Raytheon Semiconductor and Fairchild Semiconductor. In the past he has served on the boards of directors of Adaptec, Pericom, and Zehntel (public companies), and MCC, Atrenta, and Single Chip Systems (private companies). He currently serves on the boards of directors of SPARK Microsystems, a Canadian start-up involved in high speed, low power radios, and Tortuga Logic, a Silicon Valley start-up involved in hardware security. Additionally, he is presently an advisor to Silicon Catalyst, a Silicon Valley based incubator actively engaged in fostering semiconductor based start-ups. Mr. East holds a BS degree in Electrical Engineering and an MBA, both from the University of California, Berkeley. He has lived in Saratoga, California with his wife Pam for 46 years.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Pim Tuyls of Intrinsic ID
by Daniel Nenni on 02-05-2021 at 6:00 am

Pim Tuyls, CEO of Intrinsic ID, founded the company in 2008 as a spinout from Philips Research. It was at Philips, where he was Principal Scientist and managed the cryptography cluster, that he initiated the original work on Physical Unclonable Functions (PUFs) that forms the basis of the Intrinsic ID core technology. With more than 20 years of experience in semiconductors and security, Pim is widely recognized for his work in the field of SRAM PUFs and security for embedded applications. He speaks regularly at technical conferences and has written extensively on the field of security. He co-wrote the book Security with Noisy Data, which examines new technologies in the field of security based on noisy data and describes applications in the fields of biometrics, secure key storage, and anti-counterfeiting. Pim holds a Ph.D. in mathematical physics from Leuven University and has more than 50 patents.

What brought you to semiconductors?
For that we must go back to 2002. At that time I was part of the security group of Philips Research, and we were working on “Ambient Intelligence,” which is currently known as the Internet of Things (IoT). Foreseeing that everything around us would be connected and operations would be automated, it was clear to us that there were major security issues on the horizon. These issues would come up at the silicon level, as all measurement, processing, and connectivity in the IoT is provided by chips. That is when we started thinking about how we could help increase the security of chips at a low cost, to meet the needs of an upcoming market with potentially billions of devices. It was clear from the beginning that this problem required a novel and innovative approach with as little overhead as possible. That is when we decided to base security on the physical characteristics of chips, and the idea was born to work with silicon-based Physical Unclonable Functions, or PUFs.

PUFs convert tiny variations in silicon into a digital pattern of 0s and 1s that is unique to that specific chip and is repeatable over time. This pattern is a “silicon fingerprint,” comparable to its human biometric counterpart. The fingerprint is turned into a cryptographic key that is unique for that individual chip and is used as its root key. This root key is reliably reconstructed from the PUF whenever it is needed by the system, without a need for storing the key in any form of memory.
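As a toy illustration of that enrollment-and-reconstruction idea, here is a Python sketch in which a noisy simulated "SRAM fingerprint" is stabilized by majority voting over repeated reads and then hashed into a key. This is not Intrinsic ID's algorithm: real products use proper fuzzy extractors with error-correcting codes and helper data, and the noise figures here are invented.

```python
"""Toy SRAM-PUF key derivation (illustrative only)."""
import hashlib
import random

random.seed(0)
N_BITS = 256
# Stand-in for the chip's intrinsic power-up pattern.
true_fingerprint = [random.randint(0, 1) for _ in range(N_BITS)]

def read_sram(noise: float = 0.05) -> list[int]:
    """Simulate one power-up read: each cell flips with small probability."""
    return [b ^ (random.random() < noise) for b in true_fingerprint]

def reconstruct(n_reads: int = 15) -> list[int]:
    """Majority-vote across reads to cancel per-read noise."""
    reads = [read_sram() for _ in range(n_reads)]
    return [int(sum(col) > n_reads // 2) for col in zip(*reads)]

def to_key(bits: list[int]) -> str:
    """Condition the raw bits into a uniform 256-bit key."""
    return hashlib.sha256(bytes(bits)).hexdigest()

key1 = to_key(reconstruct())
key2 = to_key(reconstruct())  # a later "power-up" of the same chip
print("keys match:", key1 == key2)  # True with overwhelming probability
```

The essential property mirrors the description above: nothing key-like is ever stored; the key is regenerated on demand from the device physics.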

And how does this relate to the backstory of Intrinsic ID?
When we started working on these PUFs, our biggest internal customer at Philips was its semiconductor division. However, as we all know, in 2006 Philips decided to spin off this division into the independent company NXP. For our team this meant losing our internal customer, as research teams were supposed to work for internal customers only. At that point Philips gave us the opportunity to change from a research team into a venture activity. This allowed us to create a business of our own, which we successfully spun out of Philips in 2008, with the help of the VC Prime Ventures, as Intrinsic ID. From that point on we were able to start commercializing our own products and building our customer portfolio, which includes several of the biggest semiconductor companies in the world, such as NXP, Intel, Silicon Labs, Microchip, and many others.

What customer challenges are you addressing?
The main problem for high-volume semiconductors, such as those in IoT devices, is to have a strong and low-cost implementation of a root of trust that also scales well over the ever-decreasing technology nodes. It is clear that a security implementation needs to be sufficiently strong, otherwise there is no point to it. The enormous volumes in IoT also demand the solution to be low-cost. But the impact and the importance of the solution to be scalable are often overlooked. Especially for hardware-based security, it is not trivial that an implementation scales along with decreasing technology nodes. If this is possible, it enables chip manufacturers to use the same technology over different nodes, which guarantees continuity and eases the burden on development and maintenance of software. High security and low cost with flexible scalability are what we provide with our security solutions based on PUF technology.

What are the products Intrinsic ID has to offer?
We have three flagship products at this moment: a semiconductor product, a software product, and an FPGA product. The semiconductor product is QuiddiKey, which consists of RTL that generates a root of trust for chips from an SRAM PUF. Additionally, QuiddiKey provides key management for the keys that are derived from the PUF. For existing silicon, or chips where additional RTL cannot be added, we have a software implementation of the same solution called BK, which runs on virtually any processor. And since last year, we have a specific solution for FPGA, called Apollo. Apollo facilitates the creation of a PUF-based root of trust in the programmable fabric of Xilinx FPGAs.

I’m happy to mention that later this year we will be launching a brand-new product called Zign, which provides a non-intrusive way to track high-volume devices. We also have a few other new developments in the works regarding random-number generators for off-the-shelf devices, as well as an activation product.

What is your competitive positioning?
PUF technology in general provides several benefits over traditional methods of key provisioning and storage. Most importantly, with an SRAM PUF, no sensitive data is ever stored on a chip. The root key of the device is created from the physical characteristics of the silicon and it is only generated when needed. All sensitive data and keys are encrypted with this root key before storage and therefore uniquely bound to the hardware of the chip, making it impossible to extract or copy any data. Furthermore, because the root key is created from silicon, there is no need for external provisioning of this key. This simplifies the supply chain by eliminating the need for key provisioning at a trusted facility. Also, no member of the supply chain will have any knowledge about the root key because it has not been provisioned and it never leaves the chip – it is intrinsic to the chip itself.

The benefits of our specific SRAM PUF technology include very strong security. This means the SRAM PUF provides high entropy to create the cryptographic root key on any chip. It also has high reliability over time; in fact, in some cases it is even higher than the reliability of non-volatile key storage. On top of that, SRAM is a standard semiconductor component that is available in any technology node and in every process. This ensures the scalability of SRAM PUF over different nodes and processes, and it allows for easy testing and evaluation since SRAM is a well-known semiconductor component. And finally, it is fully digital, which means that adding an SRAM PUF does not require any additional mask sets, analog components (like charge pumps), or special programming.

What kind of year has 2020 been for Intrinsic ID?
Clearly 2020 was a tough and challenging year for everyone due to the global pandemic. Working from home and worrying about the health of ourselves and our loved ones was hard on all of us. But despite these challenges, 2020 was a very good year for Intrinsic ID. We saw strong growth in revenue and royalty income, while also being able to launch new products. 2020 has been an important year for our presence on FPGAs, with the launch of our Apollo product for Xilinx FPGAs as well as a dedicated SRAM PUF implementation for Intel FPGAs, such as the Stratix X. So business wise, 2020 has been a great year for us.

What does 2021 have in store for Intrinsic ID?
We expect 2021 to be another great year for us, both financially and in growth of the company itself. We are starting the year strong with a great pipeline with top-tier prospective customers. And given the current growth in the semiconductor market, we also expect a steady growth of our royalty income. With Zign we will be launching another new product this year, which is currently already being evaluated by beta customers. We are also growing our team (see: www.intrinsic-id.com/careers) to keep up with ever-increasing customer demand. And finally, we are launching a new community website for people interested in PUF technology, www.pufcafe.com. This website provides a forum for people from the security community to get together, find resources, attend webinars, and submit their own documents to really drive the discussions on where the development of PUF technology in general (not just our products) should be headed. We are really looking forward to building an active community that will shape the future of PUF technology.

https://www.intrinsic-id.com/

Also Read:

CEO Interview: Tuomas Hollman of Minima Processor

CEO Interview: Lee-Lean Shu of GSI Technology

CEO Interview: Arun Iyengar of Untether AI


Expanding Role of Sensors Drives Sensor Fusion
by Tom Simon on 02-04-2021 at 10:00 am

It is long past the time when general purpose processors could meet the needs of sensor fusion. Sensor fusion performs operations to process and integrate raw sensor data so that downstream processing is simplified and is performed at a higher level. When done properly it offers several other significant benefits such as lower latency & power, bandwidth savings and improved efficiency. CEVA, a provider of processor and platform IP, addressed the growing sophistication of sensor fusion last year with their SensPro Sensor Hub DSP. Since then the market has steadily grown with expanded requirements for new types of sensors and more powerful processing capabilities. In many cases new applications have driven these requirements. This includes everything from earbuds to automotive ADAS systems. CEVA has just announced a major update to this offering which is called SensPro2.

SensPro2 Major Improvements

CEVA has packed a lot into this update. They have expanded the number of cores from 3 to 7. There is ASIL-B compliance and ASIL-D support. Parallel processing benefits from the cores' wide memory bandwidth. The neural network support includes RNN and FC layers. There are ISA extensions specific to AI, vision, SLAM, Radar, and sound. Combined with other changes, SensPro2 delivers a 2X boost for AI inferencing, up to a 6X peak performance gain, 2X better memory bandwidth, and 20% energy savings. These improvements create the opportunity for SensPro2 to support an expanded range of high- and low-end applications.

Across all members of the SensPro2 family there is a common ISA, which means that moving to a different core is seamless when performance needs to scale. In addition to the three previous cores, the SP250, SP500, and SP1000, there are two new low-end cores, the SP50 and SP100, along with two new floating point cores, the SPF2 and SPF4. The SP50 through SP1000 have MACs that support INT8 and INT16, and allow the addition of an FP32 MAC. The SPF2 and SPF4 offer FP32 floating point MACs only.

Focus on Performance

Under the hood, SensPro2 offers impressive specifications. There is an 8-way VLIW with a highly configurable architecture. It clocks up to 1.6GHz on 7nm silicon. It can deliver 3.2 TOPS (INT8) and 400 GFLOPS, using 64 single precision or 128 half precision FP MACs. The memory architecture offers 400 GByte/second of bandwidth. It includes a 4-way instruction cache and a 2-way vector data cache.
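Those headline numbers are self-consistent, as a quick sanity check shows (counting each MAC as two operations, a multiply and an add, is my assumption here, being the usual convention):

```python
# Sanity-check the quoted peak numbers (2 ops per MAC per cycle assumed).
clock_hz = 1.6e9

fp16_macs = 128
gflops_fp16 = clock_hz * fp16_macs * 2 / 1e9
print(f"FP16 peak: {gflops_fp16:.0f} GFLOPS")  # ~410, quoted as 400 GFLOPS

int8_tops_quoted = 3.2
implied_int8_macs = int8_tops_quoted * 1e12 / (clock_hz * 2)
print(f"implied INT8 MACs: {implied_int8_macs:.0f}")  # ~1000
```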

Using their own benchmark numbers, CEVA shows that SensPro2 beats the previous generation by anywhere from 1.8X to 5X on CV benchmarks. SLAM benchmark results for SensPro2 show 1.8X to 6.4X over the previous generation. Similarly, for audio processing, the SP250 core showed DeepSpeech2 results that were 18.9X faster than the general purpose CEVA-BX2 DSP. SensPro2 has improved Radar performance capabilities as well.

Development Environment

CEVA backs up these IP improvements with solid and mature software development libraries. Included are ClearVox noise reduction, WhisPro speech recognition, wide angle imaging, a SLAM SDK, TensorFlow Lite Micro support, CDNN, and OpenVX & OpenCL. These all contribute to an extremely wide range of end applications. In the area of AI they support TensorFlow. CEVA has its own neural network compiler, CDNN, that supports over 200 NNs and is fully optimized for the SensPro2 processors. It includes graph optimizers for accuracy optimization, retraining, and scaling per layer.

CEVA is well positioned with this new generation of sensor fusion IP. The IP covers the full range of potential applications and is highly configurable. It is well supported with development libraries. They have shown great strides in improving performance to keep up with market needs. The full announcement can be found on the CEVA website.

Also Read:

Sensor Fusion Brings Earbuds into the Modern Age

Sensor Fusion in Hearables. A powerful complement

Low Energy Intelligence at the Extreme Edge


Best Practices are Much Better with Ansys Cloud and HFSS
by Daniel Nenni on 02-04-2021 at 6:00 am

Compute environments have advanced significantly over the past several years. Microprocessors have gotten faster by including more cores, available RAM has increased significantly, and the cloud has made massive distributed computing more easily and cheaply available.

HFSS has evolved to take advantage of these new capabilities, and as a result it is orders of magnitude more capable of solving large designs, designs that you could barely imagine solving before. For some customers, that means being able to solve more design variations in parallel to find the optimal design before manufacturing the first prototype. For others, it means spending less time and thought simplifying designs, and instead creating models that include more of the electronic component's details and its surrounding system, including the device enclosure, or even placing it in its operating environment.

The effort of preparing a model to be solved with HFSS has decreased significantly over the years as various steps in the process have become automated. But whether the steps are automated or done manually, a decision is made every time a simplification occurs: is removing that detail going to impact accuracy? Is removing that detail going to save significant computational time? Making those decisions, or compromises, requires experience and expertise.

Customers are compromising less and achieving more when using current best practices in the latest version of HFSS. Take, for example, this PAM4 Package model from Socionext where the goal is to extract 12 critical high-speed IO nets.

A legacy best practice is to cut the package model down as much as possible to reduce overall RAM footprint. This often results in a complex shaped cutout with boundaries very close to the critical nets of interest. How close is too close? That decision requires experience and expertise so that accuracy is not compromised. Let’s compare this legacy complex cutout method to a simpler, rectangular cut that preserves the true boundaries of the original package model.

The additional RAM and time to solve are not significant, especially considering the time and thought that go into creating a conformal cutout, and the unfortunate compromise in accuracy when, as in the case above, a differential pair proves to be “too close” to the conformal cut boundary! With the cost of RAM greatly reduced and compute resources more readily accessible, there is no longer a need to compromise on accuracy with legacy best practices like conformal cutouts.

But wait – did you miss the fine print in the table above? What do we mean when we indicate that the Ansys BKM model used “5.5GB (44GB distributed across 8 tasks)”? This means the model was solved on 8 compute nodes where it required a maximum of 5.5GB on any given node and 44GB total RAM when you add up the RAM used on all 8 nodes.
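The per-node versus total arithmetic is worth spelling out:

```python
# Distributed-solve memory arithmetic from the example above.
n_nodes = 8
peak_per_node_gb = 5.5
total_gb = n_nodes * peak_per_node_gb
print(f"peak on any node: {peak_per_node_gb} GB; "
      f"total across {n_nodes} nodes: {total_gb:.0f} GB")  # 44 GB
```

In other words, no single machine ever needed more than 5.5GB even though the solve consumed 44GB in aggregate, which is what makes large models tractable on modest cloud nodes.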

Getting your model to distribute the solve process across multiple nodes is enabled by default with HFSS’s automatic HPC setting – in other words, you don’t need to do anything more than submit the job to run on multiple nodes and HFSS will automatically distribute the solve process. Using automatic HPC settings along with submitting jobs to Ansys Cloud are two HFSS HPC best practices, and they were both used to solve the above Socionext PAM4 package model. Simply following those two HPC best practices make it easy to solve bigger problems faster than ever before.

Not sure how much of your layout you can preserve when performing a full 3D extraction with HFSS? Get ready to be amazed by HFSS’s speed and capacity when you read this blog post to discover that HFSS on Ansys Cloud can be used to model an entire RFIC!

HFSS best practices have evolved to reduce time spent in both pre-processing, by eliminating compromises made when simplifying models, and solving, by taking advantage of distributed computing and the cloud, especially Ansys Cloud. To summarize a few of the best practices described in this blog: 1) using the latest version of HFSS, 2) forgoing complex shaped cutouts because the risk to accuracy no longer warrants insignificant RAM-savings, 3) using automatic HPC settings on Ansys Cloud. The best practices discussed here are just a few examples that span across many layout-based applications; please work with the Ansys Customer Excellence team to ensure that you’re using all the latest best practices for your specific application.

If it has been some time since you’ve reviewed and updated your HFSS practices and scripts, you are leaving performance and accuracy on the table. You are entitled to the advantages of HFSS’ advancements. You are paying for it. Make sure you use it. It can boost your productivity by 10x or more.

Related link: The Easiest New Year’s Resolution: Better, Faster Simulations

Also Read

System-level Electromagnetic Coupling Analysis is now possible, and necessary

HFSS – A History of Electromagnetic Simulation Innovation

HFSS Performance for “Almost Free”