
Podcast EP132: The Growing Footprint of Methodics IPLM with Simon Butler

by Daniel Nenni on 12-16-2022 at 10:00 am

Dan is joined by Simon Butler, the founder and CEO of Methodics Inc. Methodics was acquired by Perforce in 2020, and Simon is currently the general manager of the Methodics business unit at Perforce. Methodics created IPLM as a new business segment in the enterprise software space to serve the needs of IP and component-based design. Simon has 30 years of IC design and EDA tool development experience and specializes in product strategy and design.

Dan discusses the growing need for IP lifecycle management across design and manufacturing, how the Chips Act impacts these activities, and the requirements of legacy node design and emerging chiplet-based design approaches.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Ron Black of Codasip

by Daniel Nenni on 12-16-2022 at 6:00 am


Dr. Black has over 30 years of industry experience. Before joining Codasip, he was President and CEO at Imagination Technologies, and previously CEO at Rambus, MobiWire (SAGEM Handsets), UPEK, and Wavecom. He holds a BS and MS in Engineering and a Ph.D. in Materials Science from Cornell University. A consistent thread of his career has been processors, including PowerPC at IBM, network processors at Freescale, security processors at Rambus, and GPUs at Imagination.

Tell us about Codasip
Codasip is unique. It was founded in 2014, and a year later we were offering the first commercial RISC-V core and co-founding RISC-V International. Since then, we have grown rapidly, particularly in the past two years. Today we have 179 employees in 17 locations around the world. What I find so interesting is that we do ‘RISC-V with a twist’. We design RISC-V cores using Studio, our EDA tool, and then license both the cores and Studio to our customers so they can customize the processors for their unique applications. Think of Codasip as providing a very low-cost architectural license with a fantastic EDA tool to change the design so it is unique for you – ‘design for differentiation’.

Our customers all seem to have one common characteristic – they are ambitious innovators that want to make their products better than what you get from just a standard offering.

‘Codasip makes the promise of RISC-V openness a reality’, can you explain?
The RISC-V instruction set architecture, or ISA, is an open standard specifically designed so that customers can extend it to fit their specific needs, whilst still having a common base design. You can add optional standard extensions and non-standard custom extensions whenever you want, to ensure the processor you are designing truly runs your workload optimally.

Some people say that this creates fragmentation, but it really does not. Indeed, alternative proprietary architectures have segment specific versions that one could call fragmented because they are not interoperable. The key question is – do you want the processor supplier to control what you do, or do you want to decide for yourself? I think the answer is obvious. We see the industry moving to letting customers decide, not the supplier.

With our approach you can always use our standard processor offering to start with, and be assured that you can change it in the future if you want to. In fact, we like to think that describing the processor using CodAL source code plus the open RISC-V ISA reinvents the concept of architecture licenses to give customers the best of both worlds – a base design with a proven quality through unparalleled verification, plus an easy way to customize for any application.

You recently announced several partnerships with RISC-V players, can you tell us more about your role in the RISC-V ecosystem?
We strongly believe that to be successful RISC-V requires a community – nobody can or should walk alone. By partnering with other key players in the industry we all build the RISC-V ecosystem together.

Two areas we feel the community needs to focus on and excel at are processor verification and security. So we were proud to partner with Siemens EDA on verification, and CryptoQuantique on security. Each has industry-leading solutions and is a great partner.

We also recently joined the Intel Pathfinder for RISC-V program, which is helping the industry scale. We made our award-winning L31 core available for evaluation on Intel’s widely accepted FPGA platform, targeted for both educational and commercial purposes.

Similarly, we were keen to help the ecosystem to increase the quality of RISC-V processor IP by being part of the Open HW Group, which has a strong belief in commercial grade verification.

You also recently announced the acquisition of a cybersecurity company, can you tell us more?
We fundamentally believe in both organic and inorganic growth because we are always looking for the absolute best talent, and we were lucky enough to find the Cerberus team, a UK-based cybersecurity company known for its strong hardware and software IP. The Cerberus team really embraced the Codasip approach and has already been instrumental in helping us to win new business in secure processors and secure processing. To expand the initiative, we are now in the process of combining our automotive safety initiative with our security initiative, which is something that we believe can be incredibly important for the industry. Stay tuned.

As a leading European RISC-V company, how do you influence the European industry and market?
We like to think of ourselves as a global company, engaging customers and partners across the world, but always operating locally and very proud of our European heritage. Europe is home to many great semiconductor and systems companies doing chip design, and has a fantastic STEM (Science, Technology, Engineering, and Mathematics) education system supplying a large number of talented graduates each year. Our university program launched this year is expanding rapidly and we look to be at 24 universities by the end of next year. Given the geopolitical situation today, we believe that it is incredibly important to have a strategy of balancing and being both local and global.

How do you see the future of RISC-V and the future of Codasip?
Definitely extremely bright! RISC-V is growing and getting serious attention for good reasons – customers are looking for open ISA alternatives with ecosystem support, and RISC-V is what they are all turning to. Everyone knows about RISC-V, and Codasip is no longer a well-kept secret. The question is no longer whether RISC-V is too risky to adopt, but whether it is too risky not to adopt.

Also Read:

Re-configuring RISC-V Post-Silicon

Scaling is Failing with Moore’s Law and Dennard

Optimizing AI/ML Operations at the Edge


Functional Safety for Automotive IP

by Daniel Payne on 12-15-2022 at 10:00 am

functional safety in automotive electronics

Automotive engineers are familiar with the ISO 26262 standard, as it defines a process for developing functional safety in electronic systems, where human safety is preserved when all of the electronic components are operating correctly and reliably. Automotive electronics have now grown to cover dozens of applications, and George Wall of Cadence presented on this topic at the recent IP SoC event. I learned that ISO 26262 came from the parent standard IEC 61508, first released in 1998. Functional safety is defined as the “absence of unreasonable risk due to hazards caused by malfunctioning behavior of Electrical/Electronic systems.”

Automotive Electronics

A systematic failure would be a design bug in the hardware causing something unintended in the system, while a random hardware fault would be a silicon defect causing a stuck bit, or even an alpha particle flipping a memory bit. The goal for SoC designers is to make their design resilient to faults, ensuring safety.

The Automotive Safety Integrity Level (ASIL) scheme grades protection from systematic and random faults; the highest level, ASIL-D, requires >99% coverage against single-point faults and >90% coverage against latent faults. Automotive electronics that control braking and air-bags need ASIL-D compliance.

To reach the safety goals, any faults need to be blocked, avoided, designed out, or mitigated. Four commonly used hardware safety mechanisms found in processor-based SoCs include the following:

  • ECC protection of memories
  • Watchdog Timer
  • Software Self-Test
  • Dual-Core Lockstep

These safety mechanisms are specific to processors, and the list is not exhaustive.

It was 9 years ago, in 2013, that Cadence acquired Tensilica for their IP cores, and that investment has grown over time to supply IP for automotive in several categories.

The Tensilica processor IP has been certified to be ASIL-D compliant for systematic faults, where a single processor is ASIL-B compliant against random faults and two processors operating in lockstep are ASIL-D compliant against random faults. Even the Tensilica C/C++ compiler toolchain is certified to ASIL-D. The IP has both fault reporting and fault protection mechanisms built in.

Memory ECC

Error Correcting Code (ECC) is available for memories such as instruction local SRAM or cache, data local SRAM or cache, and the cache tag store. At the system level you can monitor the ECC error information. A 7-bit ECC syndrome is calculated on 32-bit words; single-bit memory data errors are automatically corrected, while for multi-bit errors an exception is signaled.
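To make this concrete, here is a minimal Python sketch of a SECDED (single-error-correct, double-error-detect) code with 7 check bits over a 32-bit word – the same class of arithmetic as the scheme described above, though not Cadence's actual hardware encoding:

```python
def secded_encode(data32):
    """Encode a 32-bit word into a 39-bit SECDED codeword:
    6 Hamming parity bits at power-of-two positions plus one
    overall parity bit (7 check bits in total)."""
    code = [0] * 39          # code[0] = overall parity, 1..38 = Hamming positions
    d = 0
    for pos in range(1, 39):
        if pos & (pos - 1):  # non-power-of-two positions carry data
            code[pos] = (data32 >> d) & 1
            d += 1
    for p in (1, 2, 4, 8, 16, 32):   # set each Hamming parity bit
        code[p] = 0
        for pos in range(1, 39):
            if (pos & p) and pos != p:
                code[p] ^= code[pos]
    code[0] = sum(code) & 1          # overall even parity
    return code

def secded_decode(code):
    """Return (data, status): correct any single-bit error, flag
    double-bit errors -- mirroring the correct/except behavior above."""
    code = list(code)
    syndrome = 0
    for pos in range(1, 39):
        if code[pos]:
            syndrome ^= pos          # XOR of positions of set bits
    overall = sum(code) & 1
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:               # odd overall parity -> single-bit error
        if syndrome:
            code[syndrome] ^= 1      # syndrome points at the bad bit
        else:
            code[0] ^= 1             # the overall parity bit itself flipped
        status = "corrected"
    else:                            # even parity but nonzero syndrome
        status = "double-bit error"
    data, d = 0, 0
    for pos in range(1, 39):
        if pos & (pos - 1):
            data |= code[pos] << d
            d += 1
    return data, status
```

Flipping one bit of a codeword decodes back to the original word with status "corrected"; flipping two bits yields a nonzero syndrome with even overall parity, which is exactly the multi-bit case that raises an exception in hardware.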

Windowed Watchdog Timer (WWDT)

To ensure normal execution of software, a WWDT acts as a system supervisor: inside the normal operation window the software restarts the WWDT, but if the restart arrives too early, or the WWDT counts down to its timeout without one, it raises a reset request to the SoC. The ISO 26262 standard defines Program Sequence Monitoring (PSM) as a way to ensure correct code execution, and the WWDT is the safety mechanism used.
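The window behavior can be modeled in a few lines of Python (a conceptual sketch, not the hardware's actual register interface):

```python
class WindowedWatchdog:
    """Toy model of a WWDT: the software 'kick' (restart) is only legal
    inside the window [window_open, timeout). Kicking too early, or not
    kicking before the timeout, raises a reset request to the SoC."""
    def __init__(self, window_open, timeout):
        self.window_open = window_open
        self.timeout = timeout
        self.count = 0
        self.reset_requested = False

    def tick(self):
        """One timer tick; expiring without a kick means the software hung."""
        self.count += 1
        if self.count >= self.timeout:
            self.reset_requested = True

    def kick(self):
        """Software restarts the watchdog; only valid inside the window."""
        if self.count < self.window_open:
            self.reset_requested = True   # too early: runaway or looping code
        else:
            self.count = 0                # valid restart inside the window
```

A kick arriving before the window opens suggests the expected code path was skipped or is looping, which a plain (non-windowed) watchdog would miss.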

WWDT

Logic BIST, Software Self-test

Using logic Built-In Self-Test (BIST), the hardware tests a portion of logic, detecting static faults, while adding about 5% silicon overhead and running in only milliseconds, typically producing 90% fault coverage. Logic BIST can be run at startup and then report any detected faults.

With software self-test there’s no hardware overhead, because it’s just software testing logic at different times, such as periodic runtime checking. ISO 26262 lists software self-test as a safety mechanism against random faults, with medium diagnostic coverage, and it’s included in the Tensilica qualitative FMEDA. The Xtensa Software Test Library (XT-STL) provides tests to confirm basic processor operation, non-intrusively. You would combine XT-STL functions with your own tests, for example during power-on testing or mission-mode tests.
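The overall pattern of combining library-provided and user-defined checks can be sketched as below. The test names and bodies are hypothetical stand-ins for illustration; XT-STL's real API is not shown here:

```python
def run_self_tests(tests):
    """Run a list of (name, test_fn) pairs and return the names that
    failed. In a real system the list would mix vendor-library tests
    with user-defined power-on and mission-mode checks."""
    return [name for name, fn in tests if not fn()]

# Hypothetical stand-ins for individual self-tests:
def alu_check():
    # exercise multiply/add and compare against a known result
    return (3 * 7 + 1) == 22

def logic_check():
    # exercise bitwise logic with a known pattern
    x = 0xA5A5A5A5
    return (x ^ 0xFFFFFFFF) == 0x5A5A5A5A

periodic_tests = [("alu", alu_check), ("logic", logic_check)]
```

A scheduler would invoke `run_self_tests(periodic_tests)` at boot and at periodic runtime intervals, escalating any non-empty failure list to the safety monitor.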

Hardware Redundancy

Higher fault tolerance can be achieved through redundancy, either time-based or hardware-based. ECC for memory is one example, and you can also add triple-redundancy voting flip-flops, parity protection of Critical State Registers (CSRs), or, for processors, a Dual-Core Lockstep (DCLS).

Tensilica supports DCLS with a technology called FlexLock, where two cores run the same code in lockstep with each other, and a comparator finds any differences, supporting ASIL-D requirements.
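The comparator idea behind lockstep is simple to sketch (a conceptual model, not the FlexLock implementation):

```python
def lockstep_run(core_a, core_b, inputs):
    """Toy dual-core lockstep: both cores execute the same input stream,
    and a comparator checks their outputs every step. Any divergence is
    reported as a safety fault at the step where it occurs."""
    for step, x in enumerate(inputs):
        out_a, out_b = core_a(x), core_b(x)
        if out_a != out_b:
            return ("fault", step)     # comparator flags the divergence
    return ("ok", len(inputs))
```

With a healthy pair of identical cores the comparator stays silent; inject a single wrong result into one core and the mismatch is caught at exactly that step.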

DCLS

There’s even a dual memory lockstep, adding redundancy on core logic and memories.

Dual Memory Lockstep

Security

There’s a cybersecurity standard for road vehicles, dubbed ISO 21434, adding a security lifecycle for automotive. Four threat protection mechanisms commonly used in SoCs include:

  • Hardware root of trust – secure boot, authentication of boot
  • Cryptography – protecting data
  • Hardware isolation – divide trusted and non-trusted regions in memory
  • Anomaly detection – alert suspicious activity

Tensilica has Xtensa LX processors that support hardware isolation using a secure mode for running authenticated code and a non-secure mode for running untrusted code.

Anomaly detection can be implemented with WWDT, alerting about unexpected program execution. With the dual memory lockstep approach any divergent execution causes a safety fault.

Summary

A traditional car today has at least 40 kinds of chips, and the total number of chips in a car can reach 500, so designing for safety requires the discipline of following the ISO 26262 standard. Meeting safety goals means that processor IP used in cars must be ASIL certified to the appropriate level. Cadence has a good track record of building safety and security measures into its Tensilica IP to meet automotive requirements.

Review the 33-slide presentation from IP SoC 2022 here.

Related Blogs

 


Cracking post-route Compliance Checking for High-Speed Serial Links with HyperLynx

by Peter Bennet on 12-15-2022 at 6:00 am

hyperlynx flow

SemiWiki readers from a digital IC background might find it surprising that post-route analysis for high-speed serial links isn’t a routine and fully automated part of the board design process. For us, the difference between pre- and post-route verification is just running a slightly more accurate extraction and adding SI modeling, and GHz signals aren’t microwaves – they’re just faster than MHz ones.

PCB design is not so forgiving. Traces at the board level are much longer, and we need S-parameters and transmission line modeling for high speed signals. It’s a far more demanding design flow and EDA challenge, requiring greater user expertise, time and effort. Several of the intricate process steps are not fully automated, run times can be far too slow, and the whole process is not smoothly automated or reliably repeatable. In practice, then, it’s a flow step that’s not always fully verified, leaving projects at risk of tricky PCB debug and respin delays and costs.

Can’t we do better than this? Aren’t there too many designs with too many serial links these days? And too few signal integrity experts to do the work? Isn’t it time for EDA to catch up with such pockets of the design flow still resisting automation, 58 years after the first DAC?

Enter HyperLynx

Todd Westerhoff’s white paper explains what Siemens EDA is doing to remove this critical flow bottleneck with their HyperLynx PCB signal integrity tool, taking a SerDes protocol compliance check as an example.

The goals of this HyperLynx flow are simple:

  • automate as much of the flow as possible so that design teams can target overnight post-route verification of all serial links on a design
  • deliver a flow that can be quickly and easily repeated
  • avoid reliance on slow, manual PCB layout inspection (often used today to cover the risk of skipping post-route analysis)
  • allow design teams to do all analysis work in house
  • ease the workload on scarce signal integrity experts
  • directly target protocol compliance (does the interface perform correctly) rather than proxy metrics

Let’s look first at how this post-route analysis of high speed serial links might be done today in a protocol compliance checking flow. An IBIS-AMI simulation flow would be slightly different, but with similar complexity.

We won’t try to explain all the details here – the paper does this very well. Just note how many steps there are, many requiring user effort, expertise and output checking. And the three main parts: preparing the design for analysis, running the analysis and figuring out what the results actually mean.

Let’s look at these in turn.

Channel Modeling

Getting to the analysis step where we’ll run full-wave simulations takes a lot of care and effort. Full EM solving takes serious run time, so we want to limit it to the high-speed links. But we do need to run it on all of those nets, since the layout of each is unique – we cannot reliably second-guess which is likely the worst of a set and skip the rest to save time.

Perhaps the trickiest step in getting to the channel models needed for the simulations is isolating and modeling the physical path for a channel with sufficient precision that accuracy is not lost, a process known as cut and stitch. Each net can be cut into longer sections where transverse electromagnetic mode (TEM) propagation holds and regions around discontinuities like vias where more time costly non-TEM propagation must be modelled. It’s a typical run time vs accuracy tradeoff we make all the time in EDA, but here we have to decide exactly where to break the sections. Precision really matters here and this isn’t easy. Nor is stitching these back together for simulation. It takes experts and multiple iterations to do this reliably well. HyperLynx automates both the cut decisions (using its DRC engine) and the stitching where transmission line length adjustment is needed. That’s a key breakthrough which opens the door to creating interconnect models for hundreds of serial channels, automatically, overnight.
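The stitching step is essentially two-port cascading. A minimal sketch using idealized lossless lines (not HyperLynx's algorithm): each TEM section becomes an ABCD matrix, and stitched sections multiply together:

```python
import cmath

def tline_abcd(z0, theta):
    """ABCD matrix of an ideal lossless TEM line section with
    characteristic impedance z0 and electrical length theta (radians)."""
    c, s = cmath.cos(theta), cmath.sin(theta)
    return [[c, 1j * z0 * s],
            [1j * s / z0, c]]

def stitch(m1, m2):
    """Cascade (stitch) two two-port sections: a plain 2x2 matrix product."""
    return [[m1[0][0] * m2[0][0] + m1[0][1] * m2[1][0],
             m1[0][0] * m2[0][1] + m1[0][1] * m2[1][1]],
            [m1[1][0] * m2[0][0] + m1[1][1] * m2[1][0],
             m1[1][0] * m2[0][1] + m1[1][1] * m2[1][1]]]
```

Sanity check: two stitched sections of the same line must equal one section of twice the electrical length, while a via or connector discontinuity would enter the cascade as its own, separately solved two-port. The hard part HyperLynx automates is deciding where the cuts go and adjusting line lengths, not the multiplication.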

Analysis

There are two methods for post-layout analysis of the serial links: IBIS-AMI simulation and standards-based compliance analysis. Ideally, we’d use the first, but this often runs into practical issues with availability, completeness and accuracy of IBIS-AMI models and excessive run times. Not the ideal technique if you need to run it repeatedly.

Protocol standards-based compliance analysis is quicker to run. Driver (Tx) and receiver (Rx) models for the serial protocol can be used instead of vendor IBIS models, achieving run times below a minute per channel versus 30 minutes or more for AMI simulations. But if you had to configure all the compliance models for the myriad of serial protocols yourself, this would be of little practical help. This is where HyperLynx automation steps in: its Compliance Wizard allows simple specification of protocols for each channel from a library of 210 protocols and configures the checking parameters needed for each.

Results Processing

Conventional simulation analysis only gets us to signal waveforms. The critical question – “does this still work?” – is not directly answered. But now that we’re doing protocol-based analysis, we need not stop there and rely on interpreting eye diagrams. We know the exact limits – and any design margins we wish to apply – for all the key parameters. So HyperLynx can directly report which high-speed signals passed and which failed, with complete, detailed reports.
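Conceptually, the reporting step reduces to comparing extracted channel parameters against protocol limits, tightened by any design margin. A toy sketch (the parameter names are illustrative, not taken from any real protocol specification):

```python
def compliance_report(measured, limits, margin=0.0):
    """Pass/fail each measured parameter against its (min, max) protocol
    limit, with the allowed band shrunk by an optional design margin."""
    report = {}
    for name, value in measured.items():
        lo, hi = limits[name]
        report[name] = (lo + margin) <= value <= (hi - margin)
    return report
```

A channel that passes with zero margin but fails once a margin is applied is exactly the kind of marginal link a pass/fail report surfaces immediately, where an eye diagram would need expert interpretation.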

HyperLynx Flow

We can see the greater simplicity and automation of the HyperLynx flow below.

It’s important to note here that Siemens is not arguing that IBIS-AMI simulations don’t have a role to play in post-route verification. Their point is that a lot of what would have been done that way can now be done more quickly and easily with protocol-based analysis. The protocol compliance approach uses standard models which just meet the protocol spec – so if a design passes compliance testing, it may well show some margin in IBIS-AMI simulation when the actual board Tx and Rx models are used.

Summary

HyperLynx looks to be closing an important gap in pre-fab PCB verification, helping designers avoid needless prototype respins by enabling faster, more reliable verification of all serial links post-layout, putting overnight verification within reach. And it does what good EDA tools should – automating the workflow, managing the complexity and design partitioning for modeling and simulation, and giving clear pass/fail results. And making good engineers better and more productive, including those scarce SI experts.

This very clear and highly readable white paper covers this all in a lot more detail than we have space for here:

Automated compliance analysis of serial links reduces schedule risk

https://resources.sw.siemens.com/en-US/white-paper-automated-compliance-analysis-of-serial-links-reduces-schedule-risk

Also Read:

Calibre: Early Design LVS and ERC Checking gets Interesting

Architectural Planning of 3D IC

Pushing Acceleration to the Edge


Podcast EP131: Intrinsic ID – Implementing Security Across the Electronics Ecosystem

by Daniel Nenni on 12-14-2022 at 10:00 am

Dan is joined by Pim Tuyls, CEO of Intrinsic ID. Pim founded the company in 2008 as a spinout from Philips Research. With more than 20 years of experience in semiconductors and security, Pim is widely recognized for his work in the field of SRAM PUF and security for embedded applications. He speaks at technical conferences and has written significantly in the field of security. He co-wrote the book Security with Noisy Data, which examines new technologies in the field of security based on noisy data and describes applications in the fields of biometrics, secure key storage and anti-counterfeiting. Pim holds a Ph.D. in mathematical physics from Leuven University and has more than 50 patents.

Pim discusses the underlying technology, business strategy and ecosystem partnerships that all help Intrinsic ID to deliver security, flexibility and trust across a growing electronics ecosystem.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Crosses $5 Billion Milestone!

by Daniel Nenni on 12-14-2022 at 6:00 am

Synopsys NASDAQ SemiWiki

“We intend to grow revenue 14% to 15%, continue to drive notable ops margin expansion and aim for approximately 16% non-GAAP earnings per share growth.”

Synopsys, Inc. (NASDAQ:SNPS) Q4 2022 Earnings Call Transcript

Synopsys is the EDA bellwether since they report early and are the #1 EDA and #1 IP company. In addition to crossing the $5B mark, Aart de Geus shocked everyone with a 14-15% growth estimate for 2023. REALLY?!?!?! Yes, really, and don’t ever bet against Aart. SNPS is generally conservative with fiscal year growth numbers, so if you are betting the over/under, bet the over.

“Looking at the landscape around us, some of you have asked us why customers’ design activity remains solid throughout waves of the business cycle. Two reasons. First, the macro quest for Smart Everything devices with its AI and big data infrastructure is unrelenting and we expect it to drive a decade of strong semiconductor growth. Second, semiconductor and systems companies, be it traditional or new entrants, prioritize design engineering throughout the economic cycle precisely to be ready to field competitive new products when the market turns upward again. We’ve seen this dynamic consistently in past up and down markets and expect it to continue.”

As history has shown, semiconductor companies design their way out of challenging times with a “design or die” mantra. When semiconductor companies cut EDA budgets then you should be concerned. And yes, the fabless systems companies (Apple, Google, Amazon, Microsoft, etc…) are now leading the EDA budget charge. In the past few years fabless systems companies have taken over as the leading readers of SemiWiki and I expect that to continue for the foreseeable future.

“Synopsys is uniquely positioned to address these challenges as we provide the most advanced and complete design and verification solutions available today, the leading portfolio of highly valuable semiconductor IP blocks and the broader set of software security testing solutions. In the past few years, we have introduced some truly groundbreaking innovations that radically advance how design is done.”

If I had to rank these Synopsys market segments in regards to importance, I would put the IP business as #1. The other EDA companies just do not get this: IP is everything. This also puts Synopsys in the unique position to capitalize on the coming chiplet revolution, because chiplets are IP. And the key to chiplets, like IP, is the foundries’ (TSMC’s) stamp of silicon-proven approval. Synopsys already has the advantage of closer relationships with the foundries since Synopsys IP is always on the first test chips. This has HUGE value today and tomorrow!

“Today, we’re already tracking more than 100 multi-die designs for a range of applications, including high-performance compute, data centers, and automotive, seeing strong adoption of our broad solution. A notable example is achieving plan of record for multiple 3D stack designs at a very large, high-performance computing company as well as expanded deployment at a leading mobile customer.”

You will be hard pressed to find a tape-out that does NOT involve a Synopsys product so these numbers are legit. In fact, I would say multiple products from Synopsys is the tape-out norm.

It really has been an amazing career experience watching the EDA business grow from my first DAC in 1984 to now. Synopsys and Cadence did not even exist back then. It was Daisy, Mentor, and Valid Logic or what we called DMV, and now Synopsys hits $5 billion, simply amazing!

“In summary, Synopsys exceeded beginning of year targets and delivered a record fiscal 2022 across all metrics with the additional spark of passing the $5 billion milestone. We enter FY 2023 with excellent momentum and a resilient business model that provides stability and wherewithal to navigate market cycles. Notwithstanding some economic uncertainty, our customers are continuing to prioritize their chip, system and software development investments to be ready with differentiated products at the next upturn. On our side, many game changing innovations across our portfolio position us well to capitalize on a decade of semiconductor importance and impact.”

I am much less concerned with the economic uncertainty of 2023. I have never seen a stronger demand and respect for semiconductors and it will only get stronger as AI touches the majority of the chips being designed today. If you have any doubts look at TSMC’s 2022 numbers, 40%+ growth?!?!?!

In the old days, as Joe Costello said, “We’re stuck in a fixed-pie model. Have you seen three big dogs hovering over one bowl of dog food? It’s not a pretty picture.” Today there are four dogs hovering over the EDA bowl (Synopsys, Cadence, Siemens EDA and Ansys), but now it is a VERY large bowl and I give Synopsys their due credit for innovating outside of the EDA box, absolutely.

Also Read:

Configurable Processors. The Why and How

New ECO Product – Synopsys PrimeClosure

UCIe Specification Streamlines Multi-Die System Design with Chiplets


VeriSilicon’s VeriHealth Chip Design Platform for Smart Healthcare Applications

by Kalar Rajendiran on 12-13-2022 at 10:00 am

VeriHealth Showing Fall Detection

The wearables electronics market is a large and fast growing one. According to Precedence Research, the global wearable technology market is expected to grow at a compound annual growth rate of 13.89% during the forecast period 2022 to 2030. Precedence estimated the global wearable technology market size at USD 121.7 billion in 2021 and expects the market to surpass USD 392.4 billion by 2030. By the segments served, this huge market breaks down into Defense, Healthcare, Entertainment, Fitness and Wellness, Enterprise and Industrial applications.

While entertainment-oriented and fitness-oriented wearables may be the most conspicuous, healthcare-related wearables have grown to be a significant segment over the last few years. Within this segment, the applications fall into health and fitness, remote patient monitoring and home healthcare. Most commonly known for measuring heart rate, blood pressure, temperature, and oxygenation level, the list of potential healthcare applications is endless. By various estimates, the Healthcare segment is around 10% of the total market and is expected to stay so for the foreseeable future.

With such a large market opportunity, a number of players are interested in tapping into the healthcare wearables market segment. At the same time, there are also a number of challenges to becoming a successful player. On the one hand is the complexity of physical design and manufacture of the wearable, to make the device work reliably across the entire population. On the other is the complexity of electronics hardware and software design and development.

As a “Silicon Platform as a Service (SiPaaS)” company, VeriSilicon recently announced its VeriHealth Chip Design Platform to address the electronics hardware and software aspects of developing healthcare wearables.

Market Entry Challenges

With wearables taking the form of bodywear, neckwear, headwear, wristwear, footwear and eyewear, the design and development of these products can be very challenging. The products have to work reliably across the entire cross-section of the population. On top of this, the products have to be lightweight, have a long battery life between recharges, and still deliver enough compute performance.

VeriSilicon Accelerates Wearables Development

The VeriHealth reference platform supports many different ISAs, including RISC-V, Arm Cortex-M and VeriSilicon’s own ZSP, and provides a unified hardware abstraction layer (HAL) interface for the various hardware platforms. The platform also provides a scalable software toolset that includes a firmware SDK, mobile SDK and mobile reference applications to enable a multi-level software framework design involving the driver, hardware abstraction layer, middleware and application layers.
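The idea of a unified HAL can be sketched as an abstract interface that the algorithm layer targets, with one concrete implementation per hardware platform. The method names and classes here are hypothetical, not VeriSilicon's actual API:

```python
from abc import ABC, abstractmethod

class SensorHAL(ABC):
    """Unified sensor interface: application code calls these methods
    identically whether the core underneath is RISC-V, Cortex-M or ZSP."""
    @abstractmethod
    def read_ppg(self) -> int:
        """Return one raw PPG (optical heart-rate) sample."""
    @abstractmethod
    def read_imu(self) -> tuple:
        """Return one (ax, ay, az) accelerometer sample in m/s^2."""

class SimulatedBoard(SensorHAL):
    """Stand-in board used here so the sketch runs without hardware."""
    def read_ppg(self) -> int:
        return 72                # canned heart-rate-like sample
    def read_imu(self) -> tuple:
        return (0.0, 0.0, 9.81)  # device at rest, gravity on z-axis
```

Porting to a new MCU then means writing one new `SensorHAL` subclass with platform-specific drivers, while the health algorithm modules above it stay unchanged.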

The mobile SDK supports both iOS and Android, and provides a unified interface and protocol, which can be customized based on the SDK. VeriHealth also offers capabilities including full-link OTA and end-to-end data encryption.

Flexible Configurations

VeriHealth can fully support various application scenarios, such as nursing services for elders and kids, exercise monitoring, virus prevention, etc. The platform is equipped with more than ten VeriSilicon-developed health and exercise physiology algorithm modules for quick development of customers’ wearable devices.

VeriSilicon’s proprietary health model supports a full set of functions, such as fall detection, atrial fibrillation monitoring, heart rate variability (HRV) detection, blood pressure estimation, EEG detection, EMG detection, sleep quality tracking, sedentary reminders, physical activity monitoring and calorie calculation, as well as prediction of health anomalies. After processing the data acquired from peripheral PPG, ECG, EEG, EMG, IMU and TMP sensors, VeriHealth provides accurate real-time health information, including heart rate, respiration and blood oxygen saturation.

Taking the fall detection application as an example: when the wearable detects a fall, a notification is sent to the App over a Bluetooth connection. If the user does not press the stop button within a preset time limit, the App notifies the preset emergency contact by phone call and text.
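The escalation logic just described can be sketched in a few lines. This is an illustrative model of the flow, not VeriHealth's actual protocol, and the action names are hypothetical:

```python
def fall_alert(stop_pressed_at=None, grace_seconds=30):
    """On a detected fall: notify the app, then escalate to the
    emergency contact unless the user cancels within the grace period."""
    actions = ["notify_app_over_ble"]
    for t in range(grace_seconds):
        if stop_pressed_at is not None and t >= stop_pressed_at:
            actions.append("cancelled_by_user")   # user pressed stop in time
            return actions
    # grace period expired with no cancellation: escalate
    actions += ["call_emergency_contact", "text_emergency_contact"]
    return actions
```

The grace period is the key design choice: long enough for a user who merely dropped the device to cancel, short enough that a real fall gets help quickly.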

Low Power with Performance Optimized for an Application

A VeriHealth-based SoC solution enables long battery life through VeriSilicon’s high-performance, low-power ZSP IP and ultra-low power BLE IP. The resulting wearable device can be powered by a 200mAh rechargeable battery and work continuously for 30 days between recharges.

VeriSilicon’s ZSP architecture offers an optimal combination of MCU+DSP capabilities. The ZSP’s efficient digital signal processing enhances algorithm execution while reducing power consumption.

Driving Research, Cultivating Talent, Expanding the Ecosystem

With the healthcare wearables market projected to grow to about USD 40 billion by 2030, plenty of technological advances can be expected in this space. VeriSilicon is committed to research and development and has established a Smart Medical Treatment and Healthcare Innovation Laboratory in cooperation with a major university in China. Such initiatives should help nurture chip-industry talent and expand the smart healthcare ecosystem.

Summary

To date, VeriSilicon has developed two types of smart devices – a wristband wearable and a patch wearable – as well as apps for iPhone, iPad, and Android phones. Through its VeriHealth chip design platform, VeriSilicon has helped customers design industry-leading chips for health monitoring, gene sequencing, and capsule endoscopy.

For more details, please contact VeriSilicon.

Also Read:

VeriSilicon’s AI-ISP Breaks the Limits of Traditional Computer Vision Technologies


eFPGAs handling crypto-agility for SoCs with PQC

eFPGAs handling crypto-agility for SoCs with PQC
by Don Dingee on 12-13-2022 at 6:00 am

Improving crypto-agility using hybrid PQC with ECC

With NIST performing its down-select to four post-quantum cryptography (PQC) algorithms for standardization in July 2022, some uncertainty remains. Starting an SoC with fixed PQC IP right now may be nerve-wracking, with possible PQC algorithm changes before standardization and another round of competition for even more advanced algorithms coming. Yet, PQC mandates loom, such as an NSA requirement starting in 2025. A low-risk path proposed in a short white paper by Xiphera and Flex Logix sees eFPGAs handling crypto-agility for SoCs with PQC.

Now the PQC algorithm journey gets serious

NIST selected four algorithms – CRYSTALS-Kyber, CRYSTALS-Dilithium, Falcon, and SPHINCS+ – that withstood the best attempts to expose vulnerabilities during its competition phase. NIST now focuses its resources on standardizing these four algorithms, marking the start of the PQC journey in earnest. Even for teams of researchers armed with supercomputers, thoroughly studying a proposed crypto algorithm for potential vulnerabilities can take years. A prime example: two PQC algorithms in the NIST competition broke under the weight of intense scrutiny very late in the contest, eliminating them from consideration.

While the odds of a significant break in these four selected PQC algorithms are low, minor changes are a distinct possibility. Uncertainty keeps many in the crypto community up at night, and changes that could disrupt hardware acceleration IP are always a concern for SoC developers. Hardware acceleration for these complex PQC algorithms is a must, especially in edge devices with size, power, and real-time determinism constraints.

Unfortunately, staying put isn’t an option, either. Existing crypto algorithms are vulnerable to quantum computer threats, if not immediately, then very soon. SoCs designed for lifecycles of more than a couple of years using only classical algorithms will be in dire peril when quantum threats materialize. The challenge becomes how to start a long-lifecycle SoC design now that can accelerate new PQC algorithms without falling victim to changes in those algorithms during design or, even worse, after it is complete.

Redefining crypto-agility practices for PQC in hardware

Crypto-agility sounds simple. Essentially, the idea is to run more than one crypto algorithm in parallel, with the objective that if one is compromised, the others remain intact, keeping the application secure. Researchers are already floating the idea of hybrid mechanisms as a safety net for PQC implementations. It’s possible to combine a traditional crypto algorithm, likely an ECC-based one, with a new PQC algorithm for the key derivation function (KDF).
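A minimal sketch of such a hybrid key derivation, assuming the classical (ECDH) and PQC KEM exchanges have already produced their shared secrets; the byte strings below are placeholders, not real key-exchange outputs. Both secrets feed a single KDF, so the derived key stays secure as long as either primitive holds.

```python
import hashlib
import hmac

def hkdf_extract_expand(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF-SHA256: extract, then a single-block expand (length <= 32)."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # extract step
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

ecdh_secret = b"\x11" * 32   # stand-in for the ECC-based shared secret
kem_secret  = b"\x22" * 32   # stand-in for the PQC KEM shared secret

# Concatenate both secrets into one input keying material for the KDF
session_key = hkdf_extract_expand(ecdh_secret + kem_secret,
                                  salt=b"hybrid-demo", info=b"session")
print(session_key.hex())
```

The design choice here is that compromising one input secret still leaves the attacker unable to reconstruct the combined input keying material.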

But in SoC form, hybrid mechanisms have a cost, which rises as complexity increases. Instead of replacing the existing crypto hardware IP, a hybrid approach adds more circuitry for PQC and coordination between the algorithms. Size, power consumption, and latency increase, and another risk emerges: designers would have to guess correctly when hard-wiring a PQC algorithm. If the algorithm changes, the implementation reverts to being essentially classical. The PQC hardware would lie unused, wasting the area and power devoted to it and leaving the design as vulnerable as it was without PQC.

A better approach to crypto-agility is reconfigurable computing. If hardware is reconfigurable, patching, upgrading, or replacing algorithms is straightforward. A creative design could even implement a hybrid mechanism on the fly, running one algorithm for a classical key, then reconfiguring to run PQC for its key, then reconfiguring again for operation on a data stream once keys are derived.

eFPGA technology provides a robust, proven reconfigurable computing solution for SoCs now. It’s efficient from a power and area standpoint, rightsized to the SoC design and the logic needed for algorithms. And in a PQC context, it provides the ultimate protection while designs are in progress and algorithms may be in flux.

Xiphera, a hardware-based security solution provider, is teaming up with Flex Logix to bring crypto-agility to SoCs using eFPGAs. Below is a page describing the effort, with a link to a short white paper providing more background, plus a link to the Flex Logix eFPGA page.

Xiphera: Solving the Quantum Threat with Post-Quantum Cryptography on eFPGAs

Flex Logix: What is eFPGA?


TSMC OIP – Analog Cell Migration

TSMC OIP – Analog Cell Migration
by Daniel Payne on 12-12-2022 at 10:00 am


The world of analog cell design and migration is quite different from digital, because the inputs and outputs to an analog cell often have a continuously variable voltage level over time, instead of just switching between 1 and 0. Kenny Hsieh of TSMC presented on the topic of analog cell migration at the recent North American OIP event, and I watched his presentation to learn more about their approach to these challenges.

Analog Cell Challenges

Moving from N7 to N5 to N3, the number of analog design rules has increased dramatically, along with more layout effects to take into account. Analog cell heights tend to be irregular, so there’s no abutment as with standard cells. Nearby transistor layout affects adjacent transistor performance, requiring more time spent in validation.

Starting at the N5 node, TSMC’s approach for analog cells is to use layouts with fixed cell heights, support abutment of cells to form arrays, and re-use pre-drawn, silicon-validated layouts of Metal 0 and below. The analog cell PDK includes active cells, plus cells and parameters for CMOS, guard ring, CMOS tap, decap, and varactor.

Analog cells now use fixed heights and are placed in tracks, where abutment can be used and the transition, tap, and guard-ring areas can be customized. All possible combinations of analog cells are exhaustively pre-verified.

Analog Cell

With this analog cell approach, the Oxide Diffusion (OD) and Polysilicon (PO) layers are uniform, which improves silicon yield.

Analog Cell Layout

Automating Analog Cell Layout

By restricting the analog transistors inside analog cells to more regular patterns, layout automation can be more readily used: automatic placement using templates, automatic routing with electrically-aware widths and spacings, and insertion of spare transistors to support any ECOs that arrive later in the design process.

Regular layout for Analog Cells

When migrating between nodes, the schematic topology is re-used while the per-device widths and lengths change. The APR settings are tuned for each analog component of a cell, and APR constraints for analog metrics such as currents and parasitic matching make this process smarter. To support an ECO flow, there’s an automatic spare transistor insertion feature. Both Cadence and Synopsys have worked with TSMC since 2021 to enable this improved analog automation methodology.

Migrating analog circuits to new process nodes requires a flow of device mapping, circuit optimization, layout re-use, analog APR, EM and IR fixes, and post-layout simulations. During mapping, an Id-saturation method is used, where devices are automatically identified by their context.

Pseudo post-layout simulation can use estimates and some fully extracted values to shorten the analysis loop. Enhancements to IC layout tools from both Cadence and Synopsys now support schematic migration, circuit optimization and layout migration steps.

A VCO layout from N4 was migrated to the N3E node using automated steps and a template approach, reusing the placement and orientation of the differential pair and current mirror devices. The new automated approach was compared to a manual one: manual migration took 50 days versus only 20 days with automation, a 2.5X productivity improvement. Early EM, IR, and parasitic RC checks were fundamental to reaching these productivity gains.

N4 to N3E VCO layout migration

A ring-based VCO was also migrated both manually and automatically from the N40 node to N22, using Pcells. The productivity gain from the automated flow was 2X; Pcells have more limitations, so the gain was a bit smaller.

Summary

TSMC has addressed the challenges of analog cell migration by collaborating with EDA vendors like Cadence and Synopsys to modify their tools, using analog cells with fixed heights to allow more layout automation, and adopting strategies similar to digital flows. The two migration examples show that productivity improvements can reach 2.5X on advanced nodes, such as N4 to N3E. Even on mature nodes like N40, you can expect a 2X productivity improvement using Pcells.

If you registered for the TSMC OIP, you can watch the full 31-minute video online.

Related Blogs


Bizarre results for P2P resistance and current density (100x off) in on-chip ESD network simulations – why?

Bizarre results for P2P resistance and current density (100x off) in on-chip ESD network simulations – why?
by Maxim Ershov on 12-12-2022 at 6:00 am


Resistance checks between ESD diode cells and pads or power clamps, and current density analysis of such current flows, are commonly used for ESD network verification [1]. When such simulations use standard post-layout netlists generated by parasitic extraction tools, the calculated resistances may be dramatically higher or lower than the real values, by a factor of up to 100x. Current densities can also be significantly off. Relying on such simulations leads either to missed ESD problems or to wasted time trying to fix artificial errors in a good layout. The root causes of these errors are artifacts of parasitic extraction, including treating a distributed ESD diode as a cell with a single instance pin, or connecting the ESD diode to a port through a small (1 mOhm) resistor. This paper discusses how to detect, identify, and work around these artifacts.

Problem statement

Resistance checks and current density checks are often performed on post-layout netlists to verify ESD protection networks [1] – see Fig. 1. Point-to-point (P2P) resistance is used as a proxy, or figure of merit, for the quality of the metallization, and as a proxy for ESD stress voltage. High P2P resistance values (e.g., higher than 1 Ohm) indicate problems with the metallization and should be debugged and fixed.
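Conceptually, this screening step amounts to comparing each simulated P2P resistance against a limit. A toy sketch (net names and values are made up for illustration, using the 1 Ohm rule of thumb from the text):

```python
# Illustrative P2P resistance results, in Ohms, keyed by (start, end) point.
p2p_results = {
    ("PAD_IO3", "ESD_DIODE_UP"): 0.42,
    ("ESD_DIODE_UP", "CLAMP_VDD"): 1.85,   # above the 1 Ohm rule of thumb
    ("PAD_IO7", "ESD_DIODE_DN"): 0.07,
}

def flag_high_resistance(results, limit_ohm=1.0):
    """Return the (start, end, resistance) tuples exceeding the limit."""
    return [(a, b, r) for (a, b), r in results.items() if r > limit_ohm]

for a, b, r in flag_high_resistance(p2p_results):
    print(f"VIOLATION: {a} -> {b}: {r:.2f} Ohm")
```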

Figure 1. (a) ESD current paths, and (b) P2P resistances (red arrows) in ESD protection network. Resistances between pads and ESD diodes, diodes to power clamps, and other resistances are calculated to verify robustness and quality of ESD protection.

In recent years, many fabless semiconductor design companies have reported puzzling problems with ESD resistance and current density simulations when post-layout netlists generated by standard parasitic extraction tools are used. These problems include unreasonably high or low (by ~100x) resistances between ESD diodes and pads or power clamps, and unphysical current densities in the interconnects. The problems became especially severe in the latest sub-10nm technology nodes, with their high interconnect resistances.

These problems usually happen when fabless companies use ESD diode p-cells provided by the foundries. The cells are designed, verified, and qualified by the foundries, and should be good. However, the quality of the connections of these ESD cells to the power nets and IO nets can be poor. Such poor connections can lead to high resistances and current densities, and to big ESD problems. That’s why resistance and current density checks on the complete ESD network are required even when the ESD cells themselves are high quality.

Artificially high resistance case

In foundry-provided PDKs, ESD diodes are often represented as p-cells (parameterized cells) with a single instance pin for each terminal, anode and cathode. This differs from how power clamp MOSFETs are usually treated in the PDK, where each individual finger of a multi-finger device is represented as a separate device instance with its own instance pins for terminals.

These instance pins are usually used as a start point or a destination point for P2P resistance simulations. As a result, in the case of ESD diode p-cell simulations, current flows into the discrete point, creating artificial current crowding, high-current density values, and a high spreading resistance – see Fig.2.

Figure 2. Vertical cross-section of ESD diode, showing current flow pattern for simulation using (a) single instance pin, (b) distributing current in a realistic manner over the diode area.

This is a simulation artifact, induced by representing a large distributed device with a single, discrete instance pin. In real operation, ESD diodes conduct current through all fingers, and the total ESD current is distributed more or less uniformly over a large area and many fingers. In advanced technology nodes with many layers, the lower metal layers have high sheet resistivity; they are used for vertical current routing and contribute little to the total resistance. The contacts and vias above the active device all conduct vertical current in parallel, ideally uniformly. Because the current is shared by many contacts and vias, the total resistance is low.
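The resistance benefit of distributed conduction is just parallel combination: N identical vias sharing the current present 1/N of the single-via resistance. A toy calculation with made-up numbers:

```python
def parallel_resistance(r_each_ohm: float, n: int) -> float:
    """Resistance of n identical resistors conducting in parallel."""
    return r_each_ohm / n

r_via = 20.0  # hypothetical single-via resistance, Ohms (illustrative)

# One crowded via near the instance pin vs. 400 vias sharing the current
print(parallel_resistance(r_via, 1))     # 20.0 Ohm
print(parallel_resistance(r_via, 400))   # 0.05 Ohm
```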

By contrast, in simulations using a single instance pin as the start or destination point, the current is concentrated and crowded near that instance pin. This creates artificial, unrealistic current flow patterns: lateral current in lower metal layers (M0, M1, M2, …), highly non-uniform current in vias with high current density in vias close to the instance pin, and so on.

This leads to an artificially high spreading resistance. Fig. 3 compares simulation results for a standard ESD diode in a 5nm technology. The resistance calculated using a single instance pin is ~7.65 Ohm. The resistance simulated under conditions providing a realistic (distributed) current distribution over the device area is 0.069 Ohm – more than 100x lower!

Furthermore, the layers rank very differently in their contributions to the total P2P resistance under these two simulation conditions. Simulations with discrete instance pins may lead to a completely wrong layer optimization strategy, focused on the wrong layers.

Figure 3. P2P resistance from ESD diode to ground net port, and resistance contribution by layer, for (a) single instance pin case, and (b) distributed simulation conditions.

Current density distribution in the lower layers shows strong current crowding near a single instance pin – see Fig. 4. In the case of distributed current flow, current density is more or less uniform, and its peak value is ~63x lower than in the single instance pin case.

Figure 4. Current density distributions in (a) single instance pin, and (b) distributed simulation conditions. Peak current density for case (a) is 63x higher than for the case (b).

Artificially low resistance case

In some situations, the ESD diode instance pin is connected not to the low-level layers (such as diffusion or contacts), but directly to a port (pin) of the power net, located at the top metal layer, through a connector resistor. This connector resistor is very small, such as 1 mOhm. Why does that happen? A likely cause is that the terminal of the ESD diode is mapped to a well or substrate layer that is not extracted for resistance. As a result, the parasitic extraction tool ties it into the net’s R network at a rather arbitrary point, which turns out to be a port, via a connector resistor. This is similar to how a MOSFET’s bulk terminal is typically connected to the port (because wells and substrates are not extracted for resistance).

Visualization of parasitics and their probing allows engineers to identify such extraction details, and to understand what’s going on in parasitic extraction and electrical analysis, as illustrated in Fig. 5.

Figure 5. Visualization of parasitics over layout, helping identify connectivity, non-physical connector resistors, and probe parasitics.

Thus, the connectivity of the ESD diode to the power net is incorrect. The resistance from the ESD diode to the port of the power net is very low (1 mOhm), because this connector resistor bypasses the real current path through the interconnects.
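The effect on the measured P2P value can be seen with a two-path model: the non-physical connector resistor sits in parallel with the real interconnect path and dominates the result. Values below are illustrative, not from an actual extraction.

```python
def parallel(r1_ohm: float, r2_ohm: float) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return (r1_ohm * r2_ohm) / (r1_ohm + r2_ohm)

r_real_interconnect = 0.5   # hypothetical real metal path, Ohms
r_connector = 0.001         # non-physical 1 mOhm connector resistor

# The simulated pin-to-port resistance collapses to ~1 mOhm,
# hiding the real 0.5 Ohm interconnect path entirely.
print(f"{parallel(r_real_interconnect, r_connector):.6f} Ohm")
```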

Figure 6. (a) Schematic illustration of a connector resistor connecting ESD diode instance pin with power net port, and (b) Top-view illustration of real ESD current path from ESD diode to power clamp (shown in green) versus artificial simulated current path.

Similarly, the simulated current path from the ESD diode to the power clamp differs from the real current path – see Fig. 6. The simulated current follows the path of minimum resistance (minimum dissipated power): from the ESD diode to the power net port, along the (low-resistance) top metal, and then down to the power clamp. The simulated resistance and current densities are therefore artificial, different from the real resistance and current density.

To properly simulate the resistance and current in this case, the connector resistance has to be removed, and the diode’s instance pin should be connected to the lowest layer in a distributed manner. Ideally, this would be done by the parasitic extraction tool.

Connector resistors

Connector resistors are a semi-hidden feature of parasitic extraction tools. They are non-physical resistors, i.e., they do not correspond to layout shapes and their resistivity, and they are not controllable by users. Extraction tool vendors do not educate semiconductor companies about this feature, probably because it is considered an internal implementation detail.

Connector resistors are used for various connectivity purposes – for example, to connect instance pins of devices to a resistive network or to other instance pins, to connect disconnected (“open”) parts of a net, or to “short” ports. Their values are usually very low, such as 0.1, 1, 10, or 100 mOhm. Most of the time they have no harmful effect on electrical simulation results. Sometimes, however, as discussed in the previous section, they can have a strange or very bad effect – such as shorting out a finite interconnect resistance, or adding 0.1 Ohm to a system whose real resistance is much lower (e.g., power FETs have interconnect resistance values in the mOhm range).

Being able to identify and visualize connector resistors on the layout (as shown in Fig. 5), and simply being aware of their presence and potential impact, is very important for understanding the structure, connectivity, and potential pitfalls of a post-layout netlist.

Conclusions

Resistance and current density checks are useful and necessary steps for ESD verification, but proper care must be taken when setting up the simulations. Simulation conditions should reproduce the realistic current flow over and near the devices, to avoid parasitic extraction and simulation artifacts.

All simulations and visualizations presented in this paper were done using ParagonX [2].

References

  1. “ESD Electronic Design Automation Checks”, Technical report, ESD Association, 2014. Free download: https://www.esda.org/zh_CN/store/standards/product/4/esd-tr18-0-01-14
  2. ParagonX Users Guide, Diakopto Inc., 2022.

Also Read:

Your Symmetric Layouts show Mismatches in SPICE Simulations. What’s going on?

Fast EM/IR Analysis, a new EDA Category

CEO Interview: Maxim Ershov of Diakopto