
CEO Interview: Sathyam Pattanam
by Daniel Nenni on 02-12-2021 at 6:00 am

Sathyam Pattanam

Sathyam has over 35 years of experience in company management, R&D management and software development, mostly in Electronic Design Automation (EDA) and PCB manufacturing. He has headed companies and global engineering and marketing organizations at Fortune 500 companies and startups, introducing innovative and successful products. He has led startups such as Karthik Electronics, Atrenta and ArchPro Design Automation to successful mergers and acquisitions. His prior management and technical roles were at Avery Design Systems, Atrenta, Synopsys, ArchPro Design Automation, Bluespec, Cadence Design Systems, AT&T Bell Labs, Karthik Electronics and Hindustan Teleprinters. Sathyam’s experience in EDA includes SoC design, low-power verification, electronic system level design and verification, logic design simulation/formal verification/emulation, fault simulation, integrated circuit layout extraction, design-rule checking, and symbolic layout compaction. For a complete bio of Sathyam, please visit his LinkedIn profile.

What brought you to electronics, semiconductors, and EDA?
I was fascinated by electronics in high school back in the mid-1970s, when I built a music stereo system. Later, I got a Bachelor’s in Electronics at IIT Madras, India, and after undergrad started a PCB manufacturing company in Chennai, India at the age of 24. Several local magazines in Chennai recognized my initiative and wrote articles about me as a promising entrepreneur of 1982. Later, I moved to the US and got an MS in Computer Engineering from Rutgers. I did a design project called “Content Addressable Memory” using UC Berkeley’s Magic layout tools. It was amazing to see the impact of EDA in helping complete designs. That experience jump-started my journey with EDA, and after my Master’s in Computer Engineering/Computer Science at Rutgers University I joined the famed Electronic Design Automation Department at AT&T Bell Laboratories.

What is Anew Design Automation’s backstory, and when was it launched?
My co-founder, Dr. Rahul Razdan (LinkedIn profile), realized while working as the Senior VP of Strategy at Flextronics that Long Lifecycle Electronic products faced enormous issues dealing with a semiconductor supply chain dominated by consumer markets. Long Lifecycle Electronic products cover several market verticals such as aerospace, automobiles, defense, industrial equipment, medical, power and energy, IoT, and telecom. Specific issues included reliability, semiconductor obsolescence, and evolving maintenance functionality.

More recently, Rahul was awarded Hall of Fame honors by the ACM for his Ph.D. work at Harvard University on reconfigurable computing. He realized that AI/ML and reconfigurable computing techniques could be used to solve the issues of reliability, supply-chain obsolescence, and functionality obsolescence.

Rahul and I had worked together at Cadence Design Systems where we had successfully delivered flagship products such as Incisive and AMS platforms to the marketplace. In July of 2020, we founded Anew Design Automation with me as President and CEO and with Rahul as the Chairman and Chief Technical Advisor. Interestingly, we have discovered that along the way, system board design also needed to be upgraded, and our critical IP could accelerate this task as well.

As CEO, I brought on board another EDA veteran, Faiq Fazal (LinkedIn profile), as VP Engineering in October 2020 to lead the engineering efforts, and we have built a strong team of ten engineers with expertise in design, AI/ML, databases, GUIs, and of course EDA. Interestingly, the original NC-Verilog engineering team which I managed at Cadence Design Systems was about the same size, and that platform has delivered over $1 billion in revenue and counting.

What customer challenges are you addressing?
LLC (Long Lifecycle) customers face a design environment which has not changed much over several decades, yet the design challenges have accelerated. Today, the role of programmable devices such as FPGAs, programmable processor cores, programmable analog cores, and microprocessors has increased significantly. Also, in most embedded designs, the dominant design element is the software stack and associated ecosystem for the vertical market. Designers face a sea of information spread across websites, EDA databases, reference designs, and application notes. Amazingly, the central repository of information is still the datasheet, and PDF search is the EDA tool of choice. Our previous articles on SemiWiki describe this situation in detail. Anew will be addressing this problem with advanced semiconductor component selection through design intent, design abstraction, and design-related linting.

After the initial design, LLC customers face a couple of specific pain points:

  1. Say the product has been launched to market. Significant resources have been spent on qualification, certification and system validation, and the product is being actively used by customers. There are active maintenance contracts and warranties connected to the system design. Five years later, there is a semiconductor obsolescence or reliability problem. The design team is long gone, and the leadership is left with difficult and expensive choices.
  2. When the product is embedded into infrastructure or distributed out in the field, the cost of updating the hardware is high or even impossible (think satellites).

Anew addresses these issues through a Design for LLC flow. Details of the approach can be found in EPSNews articles and details of the company can be found at www.anew-da.ai. The leadership team consists of EDA (Cadence, Synopsys) veterans who have built highly differentiated solutions which were not available in the marketplace. In summary, Anew solves the pain points of electronics design engineering and component management functions for Long Lifecycle Electronic products companies in vertical markets such as aerospace, automobiles, defense, industrial equipment, IoT, medical, power and energy, and telecom.

What is your competitive positioning?
Interestingly, Anew seems to be in a gap of functionality between the major parts of the electronics ecosystem.  The key players include:

  1. PLM Companies: PLM (Product Lifecycle Management) is very important to LLC customers and the conventional PLM solutions are well integrated. However, PLM capability does not address the specific electronic design challenges described above.
  2. EDA Companies: EDA has an intense focus on semiconductor design. Even today, PCB design tools are still operating in a component structure model with little regard for the current issues of system board designers.
  3. Distributors: Distributors such as Digi-Key and Mouser serve as smart searchers for components, but of course this does not address the core design issues around soft IP, programmable hardware or software.

Anew sits in the large and growing gap between these pillars of conventional functionality with a solution leveraging critical AI/ML and reconfigurable computing technology.

What is the funding situation for Anew?
The company has raised a seed round of around $100K, sufficient to develop the product ideas, functional specifications and a prototype. Currently, we are working on raising $1M+ through institutional investors, angel investors, and US government grants to productize the first product, System Level Design (SLD) Explorer Smart Searcher.

What do the coming months/years have in store for Anew?
We have conceptualized a powerful EDA platform and plan a series of product releases building toward a solution that can solve these deep Long Lifecycle Product issues. As you might imagine, in the coming months we are hyper-focused on building the first product: SLD Explorer Smart Searcher. The figure below should give you a sense of the product architecture and process flow for this product. We have walked senior designers in the defense, medical, and energy markets through this product, and gotten very positive feedback. We expect to release this product to the marketplace in Q2 2021.

Finally, in terms of a call to action: we are very interested in engaging with potential partners such as distributors, semiconductor companies, EMS providers, or EDA companies who find our value statement of interest. Also, we would love to engage with design teams who want to work with an innovative company to accelerate their productivity.

Also Read:

CEO Interview: Pim Tuyls of Intrinsic ID

CEO Interview: Tuomas Hollman of Minima Processor

CEO Interview: Lee-Lean Shu of GSI Technology


“For Want of a Chip, the Auto Industry was Lost”
by Robert Maire on 02-11-2021 at 10:00 am

Auto Assembly Line

Semiconductor production can’t be turned on and off like a switch

Semiconductor fabs have to run 24/7 to make money. They have to be running full all the time to amortize the high operating costs and, in the case of new fabs, the high capital costs.

Unlike a brake pad factory that can hire and fire people at will and source readily available materials on short notice, chip fabs are run non-stop by specialized employees.

It’s a long pipeline to design chips, build the mask set, start wafers and then package and test the final die.

When auto makers hit the emergency stop button on their chip supply for Covid they didn’t realize how long it would take to restart production.

They also didn’t realize that semiconductor fabs can just as easily make microcontrollers for toaster ovens as they can for cars and that toaster oven demand is not as impacted by Covid as cars are.

Demand in other parts of the tech industry, such as IoT, has been stronger in 2020, so why in the world would a fab even consider going back to making chips for anti-lock brakes?

The $60B-plus “butterfly effect” on a $2T/year chaos-based auto industry

There have been reports in the media that over $60B in auto production has been lost due to semiconductor shortages.

We would be willing to bet that this shortfall and idling of auto workers and factories was likely caused by only a few tens of millions of dollars worth of semiconductor parts in critical areas.

Think about it… the lack of a $0.50 microcontroller can stop the production of a $50,000 car for months, as the part is highly specialized and likely single-sourced.

We think the auto industry has exposed the single point of failure that the semiconductor industry represents for the huge number of other industries that rely on it.

Trillions of dollars across many different industries rely on semiconductor chips that everyone assumes will always be there, freely available and at low cost, at a moment’s notice.

Old Fabs and Analog companies just saw a large jump in value

Older 8-inch equipment has been going up in price, and equipment companies are selling more 8-inch tools than they did at the peak of the 8-inch era.

Old fabs that were crated up and sold off by the pound, shipped off to China to make chips for toaster ovens and dishwashers, all of a sudden look more valuable and important.

Semiconductor companies that make cheap analog components and microcontrollers which were viewed as trailing edge dinosaurs now look sexy.

We don’t think this is a one-time blip in the industry that people will forget about. The auto industry certainly won’t forget any time soon. Other industries had better figure it out or they too will suffer the same fate some day soon.

We think both chip companies and foundries should be able to parlay this into higher pricing and better margins for a more guaranteed pipeline of supply of key components.

We wonder if any auto companies or others will think about in-sourcing chip supply.

If we were Apple, run by a former supply chain guru like Tim Cook, we might get a bit nervous looking at the auto industry and knowing that all our eggs are in TSMC’s basket, especially now after the launch of the M1.

Maybe TSMC and Apple are in a “deadly embrace” that neither can exit from.

We think Apple and many others in the tech industry and beyond have to take a long hard look at their supply chain in the chip industry that has long been taken for granted and assumed would always be there for them.

The risks run into the trillions of dollars and are existential for governments’ defense and intelligence capabilities.

Even if the chip industry doesn’t go away on its own there’s always the risk of it being taken away by external forces….

You don’t know what you got till it’s gone…

Semiconductor Advisors

Also Read:

Will EUV take a Breather in 2021?

New Intel CEO Commits to Remaining an IDM

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory


Need Electromagnetic Simulations for ICs?
by Daniel Nenni on 02-11-2021 at 6:00 am

RaptorH in Virtuoso

Electromagnetic (EM) simulations have been performed on die metal structures since the 1990s. Originally, the analysis was restricted to a single device (e.g., a spiral inductor). The number of on-die devices simulated simultaneously grew with the increasing capabilities of the computers performing the computations. This recently culminated with Ansys’ announcement that HFSS was used to solve an entire 5.5 mm x 5.5 mm 5G radio frequency integrated circuit (RFIC) in under 30 hours.

People have been using the gold-standard accuracy of HFSS for decades to solve on-die structures. But isn’t HFSS hard to use and only for electromagnetic simulation experts? What about the on-die designer, who must already be an expert in layout and SPICE simulation? Is it too much to ask these designers to become experts in yet another simulator? Design cycles are getting shorter, and a die designer can no longer afford to wait in line for electromagnetic extraction from a dedicated core group of EM simulation experts.

To address the needs of circuit designers, Ansys developed RaptorH. Powered by Ansys HFSS, RaptorH combines the HFSS solver with RaptorX’s established integration with Cadence Virtuoso. This means that die designers can now run their own HFSS simulations from within the familiar Cadence Virtuoso environment without having to learn a new software interface. Furthermore, RaptorH provides many benefits for simulation of on-die structures.

Figure 1. RaptorH shown integrated with Cadence Virtuoso

The first benefit is that RaptorH fulfills all foundry requirements, ranging from compliance with techfile encryption standards to the support of advanced layout-dependent effects (LDE) down to 3nm nodes. This has many implications. A user no longer has to guess at the proprietary material properties and thicknesses of the backend metallization to get accurate models, and the foundries no longer need to worry about disclosing their intellectual property. In addition, the LDE modifications to the metal are automatically implemented in the model generation, so a user does not have to read, interpret, and modify the geometry manually. This means the user can accurately simulate the true manufactured device performance.

Figure 2. Image of layout-dependent effects (LDE)

The black shapes on the left of the figure are the as-drawn shapes. The red shapes on the right of the figure show how the lines are actually manufactured. RaptorH reads the techfiles and automatically applies LDE for the most accurate model possible.

Geometry simplification is also automated in the RaptorH flow. Old workflows for simulating on-die structures with HFSS included creating a reduced layout file of just the die metal which needed to be simulated. They also included filling in slots and holes in large metal planes and simplifying the vias into a structure that is fast and efficient to simulate. RaptorH now automates this work for the user. You can select which cells of the hierarchy to include and how large a hole HFSS will automatically fill for you, significantly reducing engineering time spent on model creation.

Figure 3. Full layout shown on the left

The automatically reduced layout is shown on the right. Notice that the active device metallization is not included, nor are some inductors that were intentionally left out.

Not only does RaptorH read the techfile documentation from foundries and simplify the geometry for you, but it also automatically exports an S-parameter model and creates a Spectre netlist file and symbol. This makes it very easy to use the EM model in circuit simulations to verify circuit performance.
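Because the exported S-parameter model is a standard Touchstone file, it can also be inspected outside the Spectre flow with generic tools. As a small illustrative sketch (not part of RaptorH itself, and with a hypothetical file name), the open-source scikit-rf Python package can load such a model and plot it:

```python
import skrf as rf

# Load a hypothetical two-port S-parameter model exported from the EM flow.
ntwk = rf.Network("inductor.s2p")

print(ntwk)                # summary: frequency range and number of ports
print(ntwk.s11.s_db[:5])   # first few return-loss points, in dB

# Plot |S21| in dB vs. frequency to sanity-check the extracted model.
ntwk.plot_s_db(m=1, n=0)
```

Within the Virtuoso flow itself, the automatically generated Spectre netlist and symbol remain the intended path into circuit simulation.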

As system complexity increases, it is no longer adequate for the die designer to model the die metal alone as the die is always placed in a package of some kind. Today’s system-on-chip (SoC) designs are typically placed in ball grid array (BGA) packages and the metal on the die couples electromagnetically to the metal on the BGA. RaptorH allows the designer to import parts of the BGA package for co-simulation to derive the true manufactured performance. This eliminates surprises at the end of the design cycle or during product testing.

Because RaptorH uses the distributed memory matrix (DMM) solving technology, part of Ansys HFSS, there is no need to limit the size of the problem to what can fit on the RAM of a single machine. DMM allows engineers to efficiently utilize existing compute infrastructure to solve the most demanding problems. Beyond using DMM to solve large problems, RaptorH can also use multiple cores on each of the machines using high-performance computing (HPC) licensing to get your simulations done fast.

Want to learn more? Check out the Ansys blog at https://www.ansys.com/blog

Also Read

Webinar: Electrothermal Signoff for 2.5D and 3D IC Systems

Best Practices are Much Better with Ansys Cloud and HFSS

System-level Electromagnetic Coupling Analysis is now possible, and necessary


Do You Care About What You’re Measuring? Part 3: Industrial Condition Monitoring
by Steve Logan on 02-10-2021 at 10:00 am

Mountains over 10,000 feet capped with snow in the winter. Some of the deepest, clearest blue sky you’ll find in the United States. Farmlands of green in the spring. That was the view looking out the second-story window of the most awesome conference room I’ve ever taken a customer meeting in. Even if I didn’t immediately understand their application of measuring eddy current sensors for vibration analysis of a turbine, that conference room view is one to remember.

At this industrial automation giant, precision sensing brings in the revenue that keeps them viable in their high-desert location, strikingly different from the typical suburban customer locations I usually visited. This company utilizes eddy current proximity sensors, pressure sensors and velocity sensors for vibration analysis of turbines across a variety of condition monitoring applications: power plants, windmills, hydroelectric and oil & gas.

I loved the idea of potentially selling one of my ADCs into an application such as a windmill or turbine. I’m not a motor control expert, but I was amazed with the technology of windmills and turbines. These condition monitoring applications utilize a series of sensors to detect whether a blade is “slightly off” its ideal control loop, producing worse efficiency and therefore less energy. Breaking down too soon, for a windmill in the ocean or a turbine running 99.999% of the time in a power plant, meant a large loss to the industrial giant’s end customer. This customer’s ability to perform vibration analysis across a wide frequency range was mind-blowing.

A slight digression if you’ll allow me… a certain prolific podcaster and author wrote about “the signal and the noise” eight years ago. I’ve always liked that phrase. One of the ideas was being able to get to the truths of real-world circumstances (the signal) amidst a large amount of random, inconsequential data points (the noise). I’ve always thought what this industrial customer did, pulling out the most minute vibration differences in a sea of complex data, all while accurately monitoring over time, was the true definition of analyzing the signal amongst the noise. They definitely care about what they’re measuring.

At the heart of these measurements, the incumbent socket was held by a 24-bit, 105 ksps delta-sigma ADC. 109 dB signal-to-noise ratio. 12 µV rms noise – maximum. Not typical, maximum! I really wanted to unseat this competitor’s ADC. We put together an impressive proposed product definition that we called JG17: 24 bits, 300 ksps and multiple channels vs. their single-channel ADC. Plus, ours offered customized synchronization control.
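As a quick sanity check of how those two specs relate, here is a minimal sketch; the ±5 V differential full-scale range is my assumption (the article does not state the input range), chosen only to make the arithmetic concrete:

```python
import math

# Assumed +/-5 V differential full-scale range (not stated in the article).
full_scale_peak = 5.0                        # volts, peak of a full-scale sine
signal_rms = full_scale_peak / math.sqrt(2)  # ~3.54 V rms
noise_rms = 12e-6                            # 12 uV rms, maximum, from the article

snr_db = 20 * math.log10(signal_rms / noise_rms)
print(f"SNR = {snr_db:.1f} dB")              # ~109.4 dB, consistent with the 109 dB spec
```

Under that assumption, the quoted 109 dB SNR is essentially what a 12 µV rms noise floor implies for a full-scale sine input.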

But alas, it didn’t work out in our favor. It turns out there aren’t cell-phone-level volumes for vibration analysis and industrial condition monitoring. There aren’t many companies in the world with the engineering craft and capability to make these kinds of products. Their volumes are in the tens of thousands vs. the hundreds of millions. But that’s also what makes it so intriguing. Designing systems around eddy current sensors isn’t something most engineers do right after getting their BSEE.

In the end, we couldn’t build the business case or get the customer commitment to switch from their existing solution to ours. But that can never take away my memory from that conference room – and the fun of exploring the product definition challenge.


A New ML Application, in Formal Regressions
by Bernard Murphy on 02-10-2021 at 6:00 am

Machine learning (ML) is a once-in-a-generation innovation that seems like it should be applicable almost everywhere. It’s certainly revolutionized automotive safety, radiology and many other domains. In our neck of the woods, SoC implementation is advancing through learning to reduce total negative slacks and better optimize floorplans. But functional verification has been curiously resistant to the charms of ML. I know this is not for lack of trying. Some of the superficially “obvious” candidates, such as improving constrained random test generation, have proven not to be as easy or as effective as you might think.

ML for orchestration

That doesn’t mean there aren’t ways to use ML in this field. We just have to think more creatively. Formal verification has already breached the barrier by using ML to orchestrate use of 30 or more formal engines to prove (or disprove) assertions. Formal isn’t just one technique; there are many engines and methodologies to use those engines in order to work towards a proof. There’s not a fixed way to know in advance what will work. You try something for a while – if that isn’t getting you anywhere, you try something else. Orchestration is managing this process automatically. Knowing how to do this efficiently is quite dependent on experience, and therefore amenable to ML training.

ML for regression acceleration

Another application is in accelerating regression runs. Regression is a natural for ML because the whole process is a continuous refinement with a growing database of results (until you make big changes). Synopsys recently posted a webcast detailing how they now offer ML-based regression mode acceleration (RMA) in VC Formal. The image above gives a simplified explanation of how this works. In the first run, proving/disproving progresses through multiple paths until proofs or counterexamples are found. On a subsequent run, those conclusive paths can be checked first to re-verify. If the checks are good, regression moves on to the next step. If not, the search can be expanded to find new proofs or counterexamples.

The impact is obvious. Regression runs don’t have to start from “zero knowledge” each time; they can build on what they already know. The caveat is that where the logic changed and certain proofs no longer work as before, the engines need to back off and generate new proofs, which then become the basis for new learning and the starting point for subsequent regressions. This isn’t just theory. Synopsys shows examples in which they get very impressive speedups (24-65X) simply by re-running regressions, and in some cases they are able to complete additional assertions which were previously inconclusive. Speedups in those cases are not as impressive, but hey, you completed more proofs than before. And next time around you should get those big speedups again!
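For readers who like to see the idea in code, here is a minimal, hypothetical sketch of the caching-and-replay concept described above. It is only a conceptual model of regression acceleration in general, not a description of how VC Formal implements RMA; all names are invented for illustration:

```python
import json
from pathlib import Path

CACHE_FILE = Path("rma_cache.json")  # stand-in for the tool's learning database


def regress(assertions, strategies, run_strategy):
    """Prove each assertion, replaying last run's conclusive strategy first.

    `run_strategy(assertion, strategy)` stands in for invoking one formal
    engine configuration; it returns 'proven', 'falsified' or 'inconclusive'.
    """
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    results = {}
    for assertion in assertions:
        hint = cache.get(assertion)
        if hint is not None:
            # Fast path: re-check the strategy that concluded last time.
            result = run_strategy(assertion, hint)
            if result in ("proven", "falsified"):
                results[assertion] = result
                continue
        # Slow path: no history, or the logic changed, so search from scratch.
        result = "inconclusive"
        for strategy in strategies:
            result = run_strategy(assertion, strategy)
            if result in ("proven", "falsified"):
                cache[assertion] = strategy  # new learning for the next run
                break
        results[assertion] = result
    CACHE_FILE.write_text(json.dumps(cache))
    return results
```

On the first run everything takes the slow path; on later runs, unchanged assertions conclude on the first try, which is where the large regression speedups reported above come from.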

ML for bug hunting

The presenter (Sai Karthik Madabhushi, Sr Apps Engineer) also talked about applying this capability to bug hunting. This is a neat application for RTL developers while still in design. Testbench development at this stage is uncommon; however, bug hunting is a very productive way to look for bugs early on. Here you create assertions you think should hold true, then run formal to see if you can find counterexamples. This can be very productive, but without intelligence you keep retracing unproductive paths in subsequent runs as you try to extend the depth of your search. RMA can help here also, by following successful traces from previous runs to reach further out and find deeper failures.

You can watch the webcast HERE.

Also Read:

Change Management for Functional Safety

What Might the “1nm Node” Look Like?

EDA Tool Support for GAA Process Designs


Trust, but verify. How to catch peanut butter engineering before it spreads into your system — Part 2: Verification.
by Raul Perez on 02-09-2021 at 10:00 am

This article about verification is part 2 of a two article series. Please see part 1 on validation HERE.

Verification is a field that has emerged as its own discipline. It is no longer relegated to an activity led by the design team, to which time is allocated as long as it doesn’t get in the way of designing. Chip companies that want predictable product release cycles have realized that it is a false choice to pick between designing and verifying; you need to treat both with absolute devotion to be successful in today’s competitive market. And if you’re a system company, you absolutely need to make sure that your custom silicon supplier has top-notch verification methodologies and verification engineers deployed to your project, so that your system schedule is predictable and your chip tape-out is of high quality. I have never met engineering or sales executives representing a chip supplier who would not claim that their company has a great track record of on-time tape-outs, first-pass silicon, total commitment to excellence and top-notch methodologies; not a single one has ever said anything different. Yet truly first-pass silicon is rare, and tape-out delays are not uncommon, so someone is not telling the truth. That is why system companies are advised to perform a detailed verification capabilities review during chip vendor selection, or to perform multiple verification reviews as part of the full silicon management process. System companies can then make an informed decision when choosing a supplier for their custom silicon program. This verification review effort also helps reduce the number of mask sets consumed in a project, which can be a very costly item as you use process nodes closer to the state of the art. Once the system team gets silicon back, the supplier will be very difficult to manage into taping out again to fix ECOs that are not acceptable to the system company. This is especially true if the chip supplier is working to a fixed-bid quote, since paying for additional masks could wipe out their profit margin. This could lead to an impasse between chip supplier and system company.

I hope that from the explanations above the reader can agree that the alternative of hiring a chip company without silicon management processes and experts on your side, writing them checks for large sums of money as milestones are reached, and then crossing your fingers hoping the silicon comes back in working condition is not a good plan at all. By the time you get delivery of the first revision of the chips for your system build, you have probably already paid the chip design house most of the agreed NRE, so you have little leverage left to get them to fix the chip. Careful drafting of contracts is a must here, and you should select competent legal counsel early so you can ensure your legal front is well thought out. The silicon manager is a key resource to help the legal team call out meaningful milestones as triggers for payment and to anticipate the types of issues that can cause an impasse during the program.

Silicon management is not just about technical checks and project management. It’s also about understanding the motivations and incentives of the parties involved in the project and constantly watching for collision courses and blind spots.

Some of the risks to watch out for when seeking to hire a silicon supplier for your custom silicon program are:

  • Run-break-fix.

I’ve seen this happen in different situations: 

  1. One of them is when you select a mostly analog chip supplier that usually releases small pin-count parts, and your custom chip requires them to integrate several of those parts into one bigger chip. Add some digital interfaces and control registers, and things are very different now compared to what that team usually works on. While this sounds simple, when it fails it’s usually because the verification methodologies used by analog designers for small pin-count chips may not, and usually don’t, scale to higher levels of integration. To add to the difficulties, analog designers who are used to being top dog in the hierarchy balk at the idea of allowing verification leads to take charge of the top-level verification, and instead try to scale up their methodologies and keep control of the project. As irrational as that sounds, this happens a lot. Every engineer wants to control their baby. Absent some honest desire by the chip supplier to adopt a verification methodology that scales and can integrate digital and analog, you’re going to have an unpredictable chip release schedule. As a double whammy, any delay in design will come at the cost of reduced verification. The ego of many designers simply gets in the way of the success of the program, and that is a very difficult situation to overcome. So it is best to avoid it altogether and choose a different supplier as soon as you detect that this is likely to happen.
  2. Another situation that leads to a run-break-fix scenario is when the supplier may in theory have a proper verification methodology in place, but they severely understaff the verification team to reduce the “overhead costs”. This is peanut butter engineering and tends to happen in companies that are too influenced by the traditional designers who don’t even comprehend why we need these fancy verification guys who look more like software engineers than “real chip guys”. They view it like they’ve been releasing chips for X years without them, blah, blah… So you end up with less coverage than you should/could have because the verification engineers simply don’t have enough cycles, which leads to poorly written tests, poor schematic vs. model checks, and a lack of sufficient automation for the verification suite, which in turn leads to poor regression testing. Put simply, the verification of the chip is sub-par compared to what it could have been given the modern tools and techniques available today, and it’s your system that will be taking the brunt of the risk.
  3. If your project ends up in a run-break-fix loop you could have 2, 3, 4 or more tape-outs as you watch your system development take a huge delay. Once you select a supplier that turns out not to have the proper verification chops, you end up in the very bad situation of having to decide between continuing to invest more time and taking on more schedule risk with this supplier, or taking the full hit of going with a different supplier late in the game. Some system companies try to solve this by having multiple suppliers develop the same pin-to-pin compatible chip in parallel, but that dilutes the system company’s engineering focus on making sure the chip is properly designed to the right specs that support the system.
  • Experienced, but done.

It’s not unusual to find, while discussing the requirements of the verification review with a potential chip supplier, and further down the road when discussing the verification that has been run in preparation for tape-out, that some engineers, instead of arguing why some type of verification doesn’t need to be run or improved because it’s covered in some way somewhere else, will say: “I have X years of experience, and in my experience we just don’t need to do that.” Now don’t get me wrong, any engineer at any experience level can have this attitude, especially the bad ones. But even the good ones can go bad if they don’t watch out. What this engineer is really saying, by choosing not to defend his or her position with an argument and instead bringing up experience, is: “I lost my professional curiosity some time back, I stopped learning, and I am no longer interested in learning. So quit making me uncomfortable by asking me to change the way I do something and challenging my worldview.” Once an engineer loses their curiosity, they are done as an engineer; experience is valuable, but it ain’t going to allow you to grow by itself if you stopped being curious. If you as a system company see this type of attitude in a person in a lead role for a custom chip program, you need to get out of there and select another supplier, especially when searching for a company that has good verification methodologies, since this is a field that is relatively new and has changed a lot recently compared to other, much more mature areas of silicon development.

  • Serializers, and false choices.

The traditional way of developing a chip used to be that you first designed it, and then ran the verification before you taped out. Digital chips have usually had the most robust methodologies for design and verification. But as soon as some significant analog content enters the picture, the verification really diverges from supplier to supplier. It seems to me that everyone does their own thing, mixing and matching different commercially available tools with overlapping capabilities, selected for different jobs in a somewhat arbitrary manner, and combining them with internally developed scripts and tools. Out of that blend, some concoction of a verification methodology and its results becomes your verification for the tape-out. Verification engineers are expected to start developing models and tests in parallel with the design team designing the chip. They interview the designers to determine the functionality and pinout of the blocks they will work on, and with that information they start putting together a top-down behavioral model and testbench environment that eventually intercepts the designers’ schematics and is then used to run proper schematic-vs-model checks. Some blocks are simulated with models to speed up sims while others are left at transistor level, all judiciously done to get excellent overall coverage while maintaining reasonable simulation times. The verification engineers are continuously building that verification environment and searching for bugs throughout the chip development; that is their core job. They are not designers who came off block designs and are now available to run sims and make models. While augmenting the verification team with idle designers can be beneficial, a plan that requires designers to come off their block designs in order to complete proper verification is a risky one, as designers may need to spend more time than expected to finish their blocks, and they will prioritize that work over any verification deliverable assigned to them.

  • Designers as jacks of all trades – masters of only one.

As you may have noticed in my points above, I am not fond of designers interfering with verification engineering, especially analog designers doubling up as verification engineers. Their heart is in design, not in verification, and they usually lack not only the passion but also the skills needed to be an effective verification engineer, which include excellent coding skills. It should go without saying that assigning a designer to verify their own design should definitely not be part of the plan if you want to avoid tunnel vision getting in the way of finding bugs before tape-out.

  • Home brewed, but too cool to show and defend it.

Many companies develop tools, scripts, etc. that they use internally. This is normal, and as long as you can inspect the tools and their inputs, outputs, etc. as part of the tape-out phase review, this is OK. However, when a widely used, commercially available tool exists but the chip supplier instead uses a home-brewed tool, they are adding risk to the chip development: the user base for the commercial tool is broader, so more people are reporting bugs, and there is also an EDA company behind that tool whose business it is to fix and upgrade it. The home-brewed tools may be someone’s pet project, and when that someone moves on from the company, the tool may no longer be updated and will get stale. It’s also especially disruptive if a supplier blocks the system company’s silicon reviewers from performing an in-depth verification review at tape-out because they are trying to protect whatever they think is differentiated IP contained in the tool; this really handicaps the system company’s ability to check whether the tape-out is of high quality, and therefore negates the risk mitigation that an independent tape-out review provides to the system company.

  • Don’t let documentation get in the way of the “real work”.

Chip companies that have poor internal review processes tend to have poor documentation practices. You can spot this easily because, as soon as you need to perform an in-depth verification plan review, the plan is poorly written, lacks test details, lacks specifics of what is being tested, and so on. It’s basically a document that doesn’t really give the reader the full scope of the verification that is planned to be executed. In these cases you usually have one or maybe a few engineers directly coding the verification without taking the time to review their plans and intended scope of coverage with the broader team. Without the documentation, the internal reviews at that chip supplier will be much less effective, and it will be pretty much impossible for the system company reviewers to check whether the plan is good. This will also reduce or eliminate the possibility of having the system company engineers and FW engineers provide feedback about the chip verification plan, that is, on the types of tests and coverage they think would be most appropriate considering the system’s use cases.

There are no perfect supplier teams, and there is no perfect verification flow; every team has its strengths and weaknesses, and when selecting the supplier for your custom chip you need to decide whether that is the right team for the type of chip you want to develop. Verification can be run forever with all sorts of randomized inputs, analog operating-point combinations, and so on. But at some point you need to tape out, and some bugs may have been missed which may be easier to find during validation rather than in a simulator. If you did a pretty thorough job, you tend to find few (if any) digital bugs, since the digital domain already has very good tools to maximize coverage, and most bugs will be found in the analog or RF parts of the chip.

  • Trust, but verify.

Custom system silicon when done with the assistance of silicon experts puts the system company in control of its own destiny. It’s important to note that when purchasing catalog parts for your system, unless you perform similar due diligence to what is described above, you’re trusting but not verifying that your components will be of good quality and not likely to cause yield or other issues when you go to production in high volumes.

For more information contact us.


Take Your Agile Release Process to the Next Level with Compass 2.0.1 and EssentialSAFe® from HCL
by Mike Gianfagna on 02-09-2021 at 6:00 am

As I’ve discussed before, HCL Compass is a very flexible tool to define and manage development and release processes at the enterprise level. In HCL’s own words: Low-code/no-code change management software for enterprise level scaling, process customization, and control to accelerate project delivery and increase developer productivity. These are lofty goals. Lean and agile processes are at the center of these kinds of initiatives, and there are many requirements to be met to achieve a lean and agile development process. A new release of HCL Compass is noteworthy with respect to these goals. Read on to see how to take your agile release process to the next level with Compass 2.0.1 and EssentialSAFe® from HCL.

Let’s first examine the nomenclature involved. EssentialSAFe is a new schema that ships with the latest release of Compass (2.0.1) and helps teams follow SAFe practices. SAFe, or the Scaled Agile Framework, is a set of organization and workflow patterns for implementing agile practices at enterprise scale. Essential SAFe contains the minimal set of roles, events, and artifacts required to continuously deliver business solutions via an Agile Release Train (ART).

The Agile Release Train is a long-lived team of agile teams, which, along with other stakeholders, incrementally develops, delivers, and where applicable operates, one or more solutions in a value stream. Agile teams are cross-functional groups of 5-11 individuals who define, build, test, and deliver an increment of value in a short time window. So, EssentialSAFe from HCL provides a comprehensive out-of-the-box schema to implement a lean and agile workflow for the enterprise.  The schema is also customizable, so you can fine tune the workflow for your organization.

In the EssentialSAFe schema, there are three work items available to scope, plan and implement experiences in your solutions. They are Features, Stories, and Tasks. These make up part of the SAFe Requirements Model, shown in the figure below.

SAFe Requirements Model

A few more definitions will help (a small data-model sketch follows the list):

  • A Feature is a service that fulfills a stakeholder need. Each feature includes a benefit hypothesis and acceptance criteria and is sized or split as necessary to be delivered by a single Agile Release Train (ART) in a Program Increment (PI).
  • Stories are short descriptions of a small piece of desired functionality, written in the user’s language. Agile Teams implement them as small, vertical slices of system functionality, sized so they can be completed in a single Iteration. Stories provide just enough information for both business and technical people to understand the intent.
  • Tasks are small work items that new teams might use to split stories into smaller parts. They are completed within a few days, but often finished in less than a day. Tasking stories is an optional practice in SAFe, but it can help new teams improve their sizing of stories and estimation of capacity.
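To make the Feature/Story/Task hierarchy above concrete, here is a small, hypothetical Python sketch of those relationships. It only illustrates the requirements model as summarized here, not HCL Compass’s actual schema; all names and fields are invented:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Task:
    title: str
    done: bool = False                 # typically completed within a few days


@dataclass
class Story:
    description: str                   # short, written in the user's language
    tasks: List[Task] = field(default_factory=list)  # optional split of the story
    iteration: Optional[int] = None    # sized to fit a single Iteration


@dataclass
class Feature:
    benefit_hypothesis: str
    acceptance_criteria: List[str]
    stories: List[Story] = field(default_factory=list)
    program_increment: Optional[int] = None  # delivered by one ART in one PI


# Example: one Feature split into Stories, one Story split into Tasks.
feature = Feature(
    benefit_hypothesis="Users can reset passwords without calling support",
    acceptance_criteria=["Reset email arrives within one minute"],
    stories=[Story("As a user, I can request a password reset",
                   tasks=[Task("Add reset endpoint"), Task("Send reset email")],
                   iteration=3)],
    program_increment=2,
)
```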

For further reading, HCL provides more details about the new capabilities of Compass 2.0.1 and how to implement it in your enterprise here, with more detail provided here. A complete example implementation is provided that should be quite helpful.  The referenced posts are written by Adam Skwersky, senior software engineer at HCL Technologies. Adam hails from MIT, where he earned a BS and MS in Mechanical Engineering. He was also a research assistant at MIT in robotics before spending 20 years in software development at IBM. He’s been at HCL for four years. You can learn a lot from Adam.

Check out his posts so you can take your agile release process to the next level with Compass 2.0.1 and EssentialSAFe® from HCL.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.


The Five Pillars for Profitable Next Generation Electronic Systems
by Kalar Rajendiran on 02-08-2021 at 10:00 am

LifeCycle Insights PieChart

Although electronic systems design as a discipline has been around ever since electronic systems came into existence (and that was many decades ago), the design complexities involved and the demands and constraints placed on those systems have multiplied significantly since then. Recent research by LifeCycle Insights shows that 58% of all new design projects incur unexpected additional costs and time delays, and only one in four projects actually goes out on time and on budget.

Source: LifeCycle Insights

When asked the question “what is the secret behind a successful and profitable product?”, the typical answer is two words: great design. As true as that answer is, its brevity hides all of the details of how the design got there. Great design doesn’t happen by magic or happenstance. It involves very well-thought-through methodologies and software tools to effectively and efficiently manage product, team and process complexities, resulting in a great design. A recently posted whitepaper by David Wiens, product marketing manager at Siemens Digital Industries Software, entitled “Raising the pillars of digital transformation for next-generation electronic systems design,” walks you through those exact details.

I’ll list some of the nuggets I gathered from reading the whitepaper.

The classic prototyping-dependent approach increases the risk of missing product launch schedules. The reason: with increasing complexity in system design, the design cycle is significantly longer, and any problem we wait to discover through physical prototyping is found too late, forcing a redo of lengthy design-cycle iterations.

The following diagram highlights the need to catch more errors earlier by integrating verification and validation throughout the design process, which is significantly longer than the new product introduction (NPI) phase.

Source: Siemens Digital Industries Software

One way for teams to detect downstream problems earlier is by simulating prototype performance earlier in the design cycle. This calls for developing a virtual model to represent the intended final physical product. A virtual model of the final product being designed is called a digital-twin. The digital-twin allows teams to perform simulations and validations as early in the design cycle as possible.

A model-based system perspective allows teams to not only look at the electrical and functional trade-offs earlier in the design cycle but also product trade-offs that might impact weight, cost and availability of system components.

A digital-twin developed in the context of a model-based systems engineering perspective allows for digital-prototype-driven verification, shift-left testing at play. Over the course of a project, the digital-twin model evolves to allow more complex interactions, including analysis, simulations and validations, earlier in the design cycle. This enables teams to detect problems much earlier, when they are easier and cheaper to fix, with very little product launch schedule impact. This reduces the need for physical prototypes.

Digital-prototypes lend themselves well to automation technologies that eliminate manual reviews and increase productivity. And the benefits derived from this automation are multi-fold.

Next-generation electronic systems require a next-generation approach. David explains all the details by categorizing them into five transformational factors. He calls them the five essential pillars for consistently delivering profitable electronic designs and systems.

  1. Digitally integrated and optimized multi-domain design
  2. Model-based systems engineering (MBSE)
  3. Digital-prototype driven verification
  4. Capacity, performance, productivity and efficiency
  5. Supplier strength and credibility

I only touched upon some aspects of a couple of the five pillars. Each and every pillar is critical to understand.

If you play any role within the electronic systems ecosystem, whether at the chip level or the systems level, whether as a budget owner or an influencer, I strongly recommend downloading and reading David’s complete whitepaper. There is a lot of objective and compelling detail to help you evaluate your currently deployed software tools and methodologies and decide on the critical updates that will enable you and your customers to consistently turn out successful and profitable products.

Also Read:

Probing UPF Dynamic Objects

Calibre DFM Adds Bidirectional DEF Integration

Automotive SoCs Need Reset Domain Crossing Checks


Webinar: Electrothermal Signoff for 2.5D and 3D IC Systems
by Mike Gianfagna on 02-08-2021 at 6:00 am

The move from single-chip design to system-in-package design has created many challenges. The rise of 2.5D and 3D technology has set the stage for this. Beyond the modeling requirements and the need for ecosystem collaboration to get those models, there is a significant challenge in understanding the data. The only way to truly predict the behavior of a complex design like this is through concurrent analysis across multiple regimes — thermal, mechanical and electrical. Attaining such a global view presents substantial algorithm, flow, analysis and visualization hurdles. When I heard about a webinar from Ansys that addresses these issues I got very interested. The webinar is coming on February 23, 2021. Here is a sneak preview of the event and how electrothermal signoff for 2.5D and 3D IC systems can be implemented.

The webinar focuses on Ansys RedHawk-SC Electrothermal, a new product introduced around the middle of last year with limited customer availability. The product is now moving to general availability and this is the reason for the webinar. There have been many posts about the innovative integration Ansys has delivered across multiple domains. The upcoming webinar takes it up a notch.

Marc Swinnen

First, a bit about the speakers. Marc Swinnen, director of product marketing at Ansys Semiconductor Business Unit kicks off the webinar. Marc hails from Cadence, Synopsys, Azuro, and Sequence Design, where he developed a deep understanding of digital and analog design tools. Marc’s depth of understanding and articulate delivery set the stage for a great event.

Next, Sooyong Kim, director product specialist responsible for 3D-IC and chip-package-system multiphysics solutions at Ansys, takes you through a deep dive of the technology. With 20 years of experience in the EDA industry with a focus on power integrity and reliability analysis and methodologies, Sooyong is clearly up to the challenge. His title conveys an important dimension of the technology being discussed — multiphysics.

Sooyong Kim

Analysis of 3D structures requires representation of multiple physical effects to get the complete picture across thermal, mechanical and electrical. This is clear. What may not be as clear is the need for concurrent analysis of all these effects since thermal stress will impact form factor and planarity (mechanical), electrical will impact thermal, and so on. Millions of such interactions need to be considered. The term multiphysics conveys the concurrent analysis of all these effects to get a true picture of the system.

Sooyong discusses some of the enhancements that comprise RedHawk-SC ElectroThermal. A key one is the analysis cockpit, which now allows concurrent views of electrical, thermal and mechanical effects. These domains were previously supported, but in separate interfaces. The new visualization environment presents all the data in a harmonized way. You need to see this to fully appreciate the benefits and insights available as a result.

The ability to handle data from multiple sources, both on-chip and off-chip, is detailed as well. This requires importing information from Ansys tools and partner tools and integrating all the information into a common database for analysis and visualization. One example of this is importing data from Ansys Icepak, which can perform computational fluid dynamics analysis to predict the airflow/temperature profile of the enclosure for the chip as well as the temperature of the actual chip. This data then becomes boundary conditions for RedHawk-SC Electrothermal, creating a link between chip and system design.

Another high-impact capability is how this system supports analysis with incomplete data. This essentially provides a way to prototype the design to start analysis before all the information is known. A capability like this is worth a close look as it can save a lot of time.

The webinar goes into significant detail about the new capabilities of RedHawk-SC Electrothermal. Techniques for modeling multi-die systems, like HBM and PCIe interfaces, with silicon interposers, through-silicon vias (TSVs), and microbumps are discussed. How to perform signoff analysis on multi-die systems for power integrity, signal integrity, thermal integrity, and mechanical stress/warpage is also covered.

Thanks to the cloud-based big data analytics of Ansys SeaScape, all of the required data can be brought together for concurrent analysis and visualization. This new technology from Ansys is quite impressive. I’ve just scratched the surface here. If there is a 2.5D/3D design in your future, you need to attend this webinar. It will illuminate what challenges lie before you and how to address them. The webinar will be broadcast twice on February 23, 2021. You can register here for the 8:00 AM PST webinar.  You can register here for the 6:00 PM PST webinar. Check out how electrothermal signoff for 2.5D and 3D IC systems can be implemented.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read

Best Practices are Much Better with Ansys Cloud and HFSS

System-level Electromagnetic Coupling Analysis is now possible, and necessary

HFSS – A History of Electromagnetic Simulation Innovation


Morgan Stanley’s Tesla, Ford Misses
by Roger C. Lanctot on 02-07-2021 at 10:00 am

GameStop isn’t the only source of wild stock market gyrations. In fact, one might argue that the crazy-valuations gravy train got its start at a humble little car company called Tesla Motors. Tesla’s stock has more than doubled in value, from $400 six months ago to more than $850 today.

Competing, so-called legacy, auto makers have watched enviously and helplessly as Tesla’s stock has climbed to a market value beyond the combined valuation of the five biggest car makers on the planet. This is in the context of Tesla shipping half a million cars in 2020, not even a one percent share of the total global car market.

Investment bankers like Morgan Stanley have struggled to cope with the meteoric rise in Tesla’s valuation – reversing bearish positions and raising stock price targets. Competing car maker executives have scratched their heads as the stock market shrugs at their own profitable operations and earnings “beats.”

In this time of special purpose acquisition companies (SPACs) – which have the earmarks of money laundering – and mass market stock manipulation by day traders, car makers are straining for attention, credibility, and validation from the public markets. Somehow running a profitable operation with reliable products and satisfied customers is no longer revered or rewarded on Wall Street.

Sadly, many car makers have turned to bold pronouncements regarding autonomous vehicles, electrification, or strategic tech industry tie ups to juice their own stock prices.

Two weeks ago, GM’s stock price got a momentary lift from the announcement of a $2B investment by Microsoft in GM’s Cruise autonomous vehicle unit.  This week, Ford Motor Company’s stock saw a valuation flutter following its announcement of a major collaboration with Alphabet’s Google.

Never mind that GM’s Cruise operation is burning through cash at a $250M/quarter pace in its pursuit of building a robotaxi for which there is no business rationale or consumer demand.  Ignore the massive organizational impacts that Ford’s Google gambit entails.

Morgan Stanley offered up its assessment of the potential impact of the Ford-Google deal suggesting that Ford will “generate a gusher of $5B in profit” from a $9B revenue stream to be created by connected services enabled by Google.  Writes Morgan Stanley: “A new revenue source of that magnitude might double Ford’s $43B market capitalization and send its stock soaring to $25, up from less than $11 now.”

Let’s be real clear.  That is absurd.  In Morgan Stanley’s scenario, Ford will commence generating $10/month/car in data subscriptions for entertainment or retail services post the Google deal.  Nope.  That is not going to happen.
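A back-of-envelope check using only the figures quoted above shows why. To generate $9B per year at $10 per car per month, Ford would need

$$\frac{\$9\,\mathrm{B/year}}{\$10/\mathrm{car/month}\times 12\ \mathrm{months/year}} = 75\ \mathrm{million}$$

subscribing vehicles on the road, which is many multiples of Ford’s annual global sales volume.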

Morgan Stanley has swallowed whole the now-several-year-old McKinsey perspective that there are billions of dollars in untapped revenue tied up in vehicle data and vehicle-based commerce.  It’s true that vehicle data is valuable, but if Ford is partnering with Google, it is Google that stands to benefit most directly from “monetizing” that data.  And very few Ford owners will want to pay a subscription for connected services obtainable for free via their smartphones.

There are a lot of plusses for Ford cozying up to Google including leveraging its on-board, in-vehicle application platform, Android operating system, and cloud resources.  There are also some tantalizing possibilities in leveraging Google’s marketing and sales resources to push Ford vehicles.

But the Ford-Google deal is not a signal to buy or sell Ford or Alphabet stock.  It is a cause for concern among Ford’s existing hardware, software, and service partners.  It may also be a signal of significant change for various Ford development teams and even for the overall organization of Ford itself.

Ford is not alone.  Volvo, Renault, and GM, among others, have already announced their Google fealty to one degree or another.  All of those companies are weighing how they will preserve their independence from Google while using Google’s resources to reinforce existing customer ties.

Ford has been down this path before.  The company embraced Microsoft’s Windows Embedded operating system as part of its effort to bring its revolutionary Sync smartphone-based platform to market.  The companies parted ways as subsequent generations of Ford Sync ran into technical snags.

Like most other things in the automotive industry, the Ford-Google deal is a gamble with significant up and downside risks.  From here, the future looks bright, but tapping the value of this new relationship will require additional effort and investment, not less.  And don’t expect a doubling of Ford’s stock price. In the currently stingy automotive investment environment simply maintaining Ford’s existing valuation will be a triumph. What is worthy of attention will be the arrival of the Mustang Mach E – which owes nothing to Google.