Load-Managing Verification Hardware Acceleration in the Cloud
by Bernard Murphy on 09-22-2022 at 6:00 am

There’s a reason the verification hardware accelerator business is growing so impressively. Modern SoCs – now routinely multi-billion gate devices – must be verified/validated against massively demanding test plans, requiring high levels of test coverage. Use cases extend all the way up to firmware, OSes, even application software, across a dizzying array of power saving configurations. Testing for functionality, performance, peak and typical power, security and safety goals. All while also enabling early software stack development and debug. None of this would be possible without hardware accelerators, offering many orders of magnitude higher verification throughput than is possible with software simulators.

Maximizing Throughput and ROI

Hardware accelerators are not cheap, but there is no other way to get the job done; SoC design teams must include accelerators in their CapEx budgeting. Naturally they want to exploit that investment as fully as possible, ensuring machines are kept fully occupied and fairly allocated across product development teams.

In the software world, this load management and balancing problem is well understood. There are plenty of tools for workload allocation, offering a range of sophisticated capabilities. But they all assume a uniform software view of the underlying hardware. Hardware options can range across a spectrum of capacities and speeds, while still allowing, at least in principle, any job to be virtualized anywhere in the cloud/farm. Not so with hardware-accelerated jobs, which must provide their own virtualization options given the radically different nature of their architectures.
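To make the problem concrete, here is a minimal sketch of a greedy allocator that assigns queued jobs to whichever compatible accelerator frees up first. This is purely illustrative – not Altair's algorithm – and every name and field in it is hypothetical.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Accelerator:
        free_at: float                         # time this machine next becomes free (hours)
        name: str = field(compare=False)
        platform: str = field(compare=False)   # e.g. "emulator" or "fpga-proto"

    def schedule(jobs, accelerators):
        """Greedy allocation: each job runs on the earliest-available
        accelerator of the platform it was compiled for."""
        pools = {}
        for acc in accelerators:
            pools.setdefault(acc.platform, []).append(acc)
        for pool in pools.values():
            heapq.heapify(pool)
        plan = []
        for job_name, platform, runtime in jobs:   # jobs assumed sorted by priority
            pool = pools[platform]
            acc = heapq.heappop(pool)
            start = acc.free_at
            acc.free_at = start + runtime
            heapq.heappush(pool, acc)
            plan.append((job_name, acc.name, start))
        return plan

    jobs = [("soc_power_regress", "emulator", 6.0),
            ("fw_boot_test", "fpga-proto", 2.5),
            ("cache_coherency", "emulator", 4.0)]
    accs = [Accelerator(0.0, "palladium-1", "emulator"),
            Accelerator(0.0, "zebu-1", "fpga-proto")]
    print(schedule(jobs, accs))

Real schedulers layer fairness policies, priorities, and preemption on top of this, and, as discussed below, the platform a job was compiled for sharply limits where it can run.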

Another wrinkle is that there are different classes of hardware acceleration, centered either on emulation or FPGA prototyping. Emulation is better for hardware debug, at some cost in speed. FPGA prototyping is faster but not as good for hardware debug. (GPUs are sometimes suggested as another option for their parallelism, though I haven’t heard of GPUs playing a major role so far in this space.)

Verifiers like to use emulators or prototypers in in-circuit emulation (ICE) configurations, in which they connect the design under test to real hardware. This directly mimics the hardware environment in which the chip under design must ultimately function. It also requires physical connectors, and hence application-specific physical setups, further constraining virtualization except where the hardware offers multiple channels between connectors and the core emulator/prototyper.

Swings and Roundabouts

Expensive hardware and multiple suppliers suggest an opportunity for allocation software to maximize throughput and ROI against a range of hardware options. Altair aims to tap this need with their Altair® Hero™ product, an enterprise job scheduler built for multi-vendor emulation environments. As their bona fides for this claim, they mention that they already have field experience deploying with Cadence Palladium Z1 and Z2, Synopsys Zebu Z4 and Synopsys HAPS. They expect to extend this range over time to also include Cadence Protium and Siemens EDA Veloce. They also hint at a deployment in which users can schedule jobs in a hardware accelerator farm, choosing between Palladium and Zebu accelerators.

In a mixed-vendor environment, Altair clearly has an advantage in providing a neutral front end for user-selected job scheduling. The advantages are less clear if users hope for the scheduler to dynamically load-balance between different vendor platforms. Compilation for FPGA-based platforms is generally not hands-free; a user must pick a path before starting, unless perhaps the flow compiles for both platforms in parallel, allowing a switch before running. Equally, scheduling ICE flows must commit to a platform up front, given physical connectivity demands.

Another point to consider is that some vendors closely couple their emulation and prototyping platforms to support quick debug handoff between the two flows. Treating such platforms as independently allocatable would undermine this advantage.

In a single-vendor environment, I remain open-minded. The hardware vendors are very good at their hardware and have apparently put work into supporting virtualization. But could a third party with significant cloud experience add scheduling software on top, to better optimize throughput and ROI in balancing jobs? I don't see why that shouldn't be possible.

You can learn more from this Altair white paper.

 

 


The Truly Terrifying Truth about Tesla
by Roger C. Lanctot on 09-21-2022 at 2:00 pm

When I think about Tesla’s impact on the wider automotive industry, I conjure images of auto CEOs waking up in the middle of the night in cold sweats. It isn’t just that Tesla virtually or actually gets away with murder, it’s that the level of impunity is almost completely unchallenged by regulators and is actually celebrated by consumers – or at least Tesla owners.

Nowhere is this clearer than on the question of build quality. Tesla vehicles are so notorious for uneven body panel gaps, mismatched paint, and broken windshields that posters on Tesla forums provide checklists for Tesla buyers to thoroughly inspect their cars upon delivery.

This behavior would not be tolerated for a nanosecond by a buyer of a Chevy, Ford, Toyota, or BMW. Consumers would refuse to accept or take delivery of a new Honda if the vehicle had a single scratch. Not so with Tesla. Some Tesla buyers expect to take their brand new vehicles to a car detailer for a remedial once over after taking delivery.

It’s not just that the average Tesla buyer has likely waited months for their vehicle. Teslas are given a pass because the company has tapped into the vast, aching consumer dissatisfaction with existing vehicles and the entire business of buying, owning, insuring, maintaining, and even driving a typical car.

Musk, Tesla’s CEO, takes it even further, increasingly positioning Tesla ownership as both a privilege and a responsibility. Tesla is engaged in a great project of refining automated or semi-automated driving, and every Tesla owner is contributing to this effort – especially drivers of Teslas with the Full Self-Driving beta. Musk has told these folks – who have paid $15K for the privilege – that they have been selected or accepted into the beta program.

A very different picture emerges among legacy auto makers competing with Tesla. Prior to her ascension to the position of CEO at General Motors, Mary Barra captured the car marketing malaise in the U.S. and globally when she told a gathering of female executives that her goal was to end the era of sub-standard vehicles: “No more crappy cars,” she said.

Auto makers conduct studies to better understand what features, colors, and smells consumers prefer, how much they want to pay, what services they’d like to subscribe to, and how they feel about the brand. Legacy auto makers obsess over user interfaces, industry standards and safety protocols.

To top these marketing efforts off, car companies then spend billions of dollars on advertising trying to convince consumers that the vehicles they have built are precisely what the consumer is looking for to enhance their social status, help them have more fun, or make them more productive.

Tesla does none of this. Tesla’s customer engagement is nothing more than a fait accompli. You get what you’re gonna get and you’re going to like it because there is no alternative. Not only that, you’re going to wait for it – maybe months.

The message to the Tesla consumer and from the Tesla consumer is the same. No matter what shortcomings there may be in the Tesla product, anything is better than the “crappy cars” available from all other auto makers. All of them.

It is not just the cars, of course. It’s the entire experience. The dealerships. The insurance companies. The mechanics. The gas stations. Tesla offers a vast parallel universe where sales, service, recharging, and insurance are available under one roof. It’s a value proposition that legacy auto makers simply can’t match.

But the most galling manifestation of this unique customer relationship is the consumer acceptance of bad build quality – an enduring aspect of Tesla’s brand value proposition. It’s almost a feature. It’s almost laughable. It’s actually quite serious, but consumers routinely cut Tesla a bit of slack.

All of this was brought home to me during Green Hills Software CEO Dan O’Dowd’s advertising campaign, part of his Dawn Project to have Tesla sanctioned for making what he saw as false claims regarding the performance of its Full Self-Driving (FSD) beta software. O’Dowd may have exaggerated the weaknesses of Tesla’s software – with images of cardboard children being repeatedly run over – but observers could plainly see that not only was O’Dowd incapable of stirring any consumer outrage, he raised concerns that outraged Tesla fanboys might actually turn on him.

O’Dowd pointed out the obvious weaknesses of Tesla’s FSD beta, and consumers shrugged. Consumers have been so hungry for a transformative driving experience and a transcendent transportation experience – which many feel they have found in Tesla – that they will forgive just about anything.

Teslas have proven that they can last forever, maintain or improve their resale value after the initial sale (whether purchased new or used), and have service (when needed) delivered directly to the consumer. It has reached the point that any time a consumer has a problem with their Tesla, they assume it’s their fault.

This is the truly terrifying thing about Tesla. No other auto maker has this kind of relationship with their customers. No other auto maker can count on this level of pre-emptive forgiveness for any and all sins. And the cars are sufficiently expensive and represent such a commitment and social and emotional statement that consumers will defend the company like they would an ex-spouse being criticized by a relative. Tesla owners are deeply invested in the experience of owning a Tesla. It really is terrifying.

What might one day be terrifying to Tesla is when the day arrives, as it no doubt will, that Tesla consumers begin to regard the company as just another auto maker – just like all the rest. That day is sure to come, but until then Tesla will continue to ride the benefit of the doubt wave at the vanguard of global vehicle electrification. Still, if Tesla can’t get the paint right, what does that say about what might be wrong with its beta software?

Also Read:

GM Should BrightDrop-Kick Cruise

Ultra-efficient heterogeneous SoCs for Level 5 self-driving

GM Buyouts: Let’s Get Small!

Automotive Semiconductor Shortage Over?


Die-to-Die Interconnects using Bunch of Wires (BoW)
by Daniel Payne on 09-21-2022 at 10:00 am

Chiplets are a popular and trending topic in the semiconductor trade press, and I read about SoC disaggregation at shows like ISSCC, Hot Chips, DAC and others. Once an SoC is disaggregated, the next challenge is deciding on the die-to-die interconnect approach. The Open Compute Project (OCP) started 10 years ago as a way to share designs of data center products among a diverse set of companies, including Arm, Meta, IBM, Intel, Google, Microsoft, Dell, NVIDIA, HP Enterprise, Lenovo and others. In July the OCP Foundation announced its approach to SoC disaggregation with an interface specification, dubbed Bunch of Wires (BoW).

I contacted Elad Alon, CEO and co-founder of Blue Cheetah Analog Design to learn about BoW and how it compared to the UCIe approach. Here’s a quick comparison:

  • BoW
    • Focused on die disaggregation
    • Open standard from the start
    • Allows design freedom and application-specific optimization
  • UCIe
    • Focused on package aggregation
    • Interoperability favored over design freedom, similar to PCIe
    • Specified by Intel, then other members added

In a package aggregation approach, there would have been separate chips to begin with, but now the chiplets are brought into the same package – almost PCB-style thinking. With die disaggregation, all of the chiplet functions would have been combined in a single SoC, but the die size was too large or too costly. There’s room for both the BoW and UCIe approaches as chiplet use expands.

Bunch of Wires

BoW is an open PHY specification for die-to-die (D2D) parallel interfaces that can be implemented in organic laminate or advanced packaging technologies. Here’s a diagram of what it looks like, along with some metrics:

BoW Features

With BoW the D2D interfaces can be optimized for the host chiplet products, using the minimal required features while supporting interoperability. A BoW slice contains 16 data wires, a source-synchronous differential clock, and two optional signals – FEC (error control) and AUX (DBI, repair, control).

BoW signals

A stack is a group of slices extending towards the inside of the chiplet, while a link is one or more slices forming a logical interface from one chiplet to another.
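As a rough illustration of that hierarchy, here is a small sketch in Python; the class names are my own shorthand, and only the 16 data wires, the differential clock, and the optional FEC/AUX signals come from the description above.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class BowSlice:
        """One BoW slice: 16 data wires, a source-synchronous differential
        clock, plus two optional signals (FEC for error control, AUX for
        DBI/repair/control)."""
        data_wires: int = 16
        clock: str = "source-synchronous differential"
        fec: Optional[str] = None   # optional error-control signal
        aux: Optional[str] = None   # optional DBI / repair / control signal

    @dataclass
    class BowStack:
        """A stack: a group of slices extending toward the inside of the chiplet."""
        slices: List[BowSlice] = field(default_factory=list)

    @dataclass
    class BowLink:
        """A link: one or more slices forming a logical interface between two chiplets."""
        slices: List[BowSlice] = field(default_factory=list)

    # A hypothetical link built from four slices
    link = BowLink(slices=[BowSlice() for _ in range(4)])
    print(sum(s.data_wires for s in link.slices))   # 64 data wires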

Slice, Stack, Link

The BoW specification requires only the wire order on the package, not a specific bump map, giving you some flexibility while retaining interoperability. All PHYs must support 0.75V for compatibility across a wide range of process technologies, although systems can use other voltages to optimize for performance, BER, or reach.

BoW interoperability and flexibility

BoW Adoption

A version 1.0 specification was formally approved and released in July 2022, and an ecosystem has already formed as BoW is being designed into products using multiple process nodes: 65nm, 22nm, 16nm, 12nm, 6nm, 5nm.

Companies using or supporting BoW:

  • Blue Cheetah Analog Design
  • eTopus and QuickLogic  – eFPGA Chiplet template
  • Samsung – foundry
  • NXP – BoW PHY design
  • Keysight – test and measurement
  • Ventana Micro Systems
  • DreamBig  – hyperscale Smart NIC/DPU chiplet
  • d-Matrix – AI compute platform
  • Netronome – SmartNIC

Both Blue Cheetah and d-Matrix have taped out Bunch of Wires test chips, so we could expect silicon results later this year. You can even get involved with the weekly meetings, or start by reading the 33-page specification. BoW is an open specification, so there’s no NDA to sign or legal paperwork.

There is an OCP Global Summit, scheduled for October 18-20, and there’s an ODSA session on BoW.

Summary

Chiplet-based electronic systems are quickly emerging from semiconductor companies around the globe, so it’s an exciting time to see chiplet interconnect standards arrive to provide some standardization. The Open Compute Project has good momentum with version 1.0 of the BoW specification, and you can expect to see more news as companies announce products that use this interconnect later this year.

There’s even a “plugfest” for BoW PHY interoperability testing, and the interoperability community has several participants: Google, Cisco, Arm, Meta, JCET, d-Matrix, Blue Cheetah, Analog Port.

Related Blogs


Verification IP Hastens the Design of CXL 3.0
by Dave Bursky on 09-21-2022 at 6:00 am

Although version 2.0 of the Compute Express Link (CXL) standard is just making it into new designs, the next generation, version 3.0, has been approved and is now ready for designers to implement the new silicon and firmware needed to meet the new standard’s performance specifications. CXL, an open industry-standard interconnect, defines an industry-supported Cache-Coherent Interconnect for Processors, Memory Expansion and Accelerators. (For more about the CXL standard, check out the CXL Consortium website – www.computeexpresslink.org.)

The CXL Consortium is an open industry standard group created to develop technical specifications that enable companies to deliver breakthrough performance for emerging usage models, while supporting an open ecosystem for data-center accelerators and other high-speed enhancements. The standard offers coherency and memory semantics using high-bandwidth, low-latency connectivity between a host processor and devices such as accelerators, memory buffers, and smart I/O devices. The updated standard (version 3.0) provides a range of advanced features and benefits, including doubling bandwidth with the same latency (see the table).
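As a rough sanity check on the bandwidth claim (my arithmetic, using the commonly quoted per-lane rates rather than anything from the article's table), the move from the PCIe 5.0 PHY to the PCIe 6.0 PHY is what doubles the raw link bandwidth:

    # Raw per-direction link bandwidth = per-lane rate (GT/s) x lanes / 8 bits per byte,
    # ignoring encoding and protocol overhead.
    def raw_gbytes_per_s(gt_per_s, lanes):
        return gt_per_s * lanes / 8.0

    print(raw_gbytes_per_s(32, 16))  # CXL 1.1/2.0 on PCIe 5.0 PHY, x16: 64.0 GB/s per direction
    print(raw_gbytes_per_s(64, 16))  # CXL 3.0 on PCIe 6.0 PHY, x16: 128.0 GB/s per direction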

To help speed the design of the new chips needed to implement CXL 3.0, Avery Design Systems has developed verification IP (VIP) and virtual platform solutions supporting verification for the first wave of CXL 3.0 designs. As explained by Chris Browy, vice president of sales/marketing at Avery, “we can enable leading developers of server processors, managed DRAM and storage class memory (SCM) buffers, switch/retimer, and IP companies to rapidly meet the growing needs for the CXL datacenter ecosystem in 2022 and beyond. Our collaboration with key ecosystem companies allows Avery to deliver best-in-class, robust CXL 3.0 VIP solutions that streamline the design and verification process and foster the rapid adoption of the CXL standard by the industry. Our CXL virtual platform and VIP co-simulation enables complete CXL system-level bring-up of SoCs in a Linux environment.”

Avery provides a complete SystemVerilog/UVM verification solution including models, protocol checking, and compliance test suites for PCIe® 6.0 and CXL 3.0, covering CXL hosts, Type 1-3 devices, switches, and retimers. The verification solution is based on an advanced UVM environment that incorporates constrained-random traffic generation, robust packet-, link-, and physical-layer controls and error injection, protocol checks and coverage, functional coverage, protocol analyzer-like features for debugging, and performance analysis metrics. Thanks to the advanced capabilities of the Avery VIP, claims Browy, engineers can work more efficiently, develop more complex tests, and work on more complex topologies, such as multi-path, multi-link solutions. The company’s compliance test suites offer effective core-through-chip-level tests, including those used in compliance workshops as well as extended tests developed by Avery to cover the specification features.

The VIP suite adds key CXL 3.0 updates to the 2.0 VIP offering. Some of the additions include:

  • Double the bandwidth using PCIe 6.0 PHY for 64 GT/s
  • Fabric capabilities
    • Multi-headed and fabric-attached devices
    • Enhanced fabric management
    • Composable disaggregated infrastructure
  • Improved capability for better scalability and resource utilization
    • Enhanced memory pooling
    • Multi-level switching
    • Direct memory/ Peer-to-Peer accesses by devices
    • New symmetric memory capabilities

Additional capabilities and features included in the VIP CXL 3.0 release include:

  • Additional CXL switch agent with fabric manager support
  • Support for AMBA® CHI to CXL/PCIe via CXS
  • Dynamic configuration of VIP for legacy PCIe, CXL 3.0, 2.0 or CXL 1.1 including CXL device types 1-3
  • Realistic traffic arbitration among CXL.IO, CXL.Cache, CXL.Mem and CXL control packets.
  • Unified user application data class for both pure PCIe and CXL traffic.

In addition to the CXL 3.0 support mentioned above, Avery has recently announced extensions to its QEMU-CXL virtual platform specifically for the 3.0 version. The enhancements include the latest Linux kernel 5.19.8 supporting CXL and interoperability tests such as using ndctl for memory pooling provisioning, resets and Sx states, and Google stressapptest using randomized traffic from processor to HDM, creating realistic high-workload situations.

Co-simulating the SoC RTL with a QEMU open-source virtual machine emulator environment allows software engineers to natively develop and build custom firmware, drivers, and applications. They can then run them unaltered as part of a comprehensive system-level validation process using the actual SoC RTL hardware design. In a complementary manner, hardware engineers can evaluate how the SoC performs by executing UEFI and OS boot and custom driver initialization sequences. Additionally, designers can run real application workloads and utilize the CXL protocol-aware debugging features of the VIP to effectively investigate any hardware-related issues.

“Combined with our CXL compliant VIP, our QEMU CXL virtual platform and VIP co-simulation enables complete CXL system-level bring-up of SoCs in a Linux environment. With this approach customers can address new CXL 3.0 design and verification challenges even when no mainstream commercial platforms support the latest standards,” said Chris Browy, vice president sales/marketing at Avery.

Browy feels that, through its comprehensive VIP and virtual platforms, Avery enables system and SoC design teams to achieve dramatic functional verification productivity improvements.

www.avery-design.com

cbrowy@avery-design.com

Also Read:

Verifying Inter-Chiplet Communication

Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions


Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC
by Daniel Nenni on 09-20-2022 at 10:00 am

Thermal, mechanical, electrical, and power effects must be analyzed simultaneously to capture the interdependencies that are accentuated in multi-die 3D-IC designs

Over its 40+ year history, electronic design automation (EDA) has seen many companies rise, fall, and merge. In the beginning, in the 1980s, the industry was dominated by what came to be known as the big three — Daisy Systems, Mentor Graphics, and Valid Logic (the infamous “DMV”). The Big 3 has morphed over the years, eventually settling in for a lengthy run as Cadence, Synopsys, and Mentor. According to my friend Wally Rhines’ always informative presentations, the Big 3 have traditionally represented over 80% of total EDA revenue. However, EDA is now absolutely led by four major billion-dollar-plus players, plus a collection of much smaller niche suppliers. The “Big 3” is now the “Big 4,” consisting of Ansys, a $2 billion Fortune 500 company, Synopsys, Cadence Design Systems, and Siemens Digital Industries Software (née Mentor Graphics). Much smaller, niche players like Silvaco (the next largest) follow in the distance with a cool $50 million in revenues.

Ansys has a 50-year history in engineering simulation, but its involvement in EDA began with the acquisition of Apache in 2011. In the process, Ansys acquired RedHawk — a verification platform specifically for the power integrity and reliability of complex ICs — along with a suite of related products that included Totem, Pathfinder and PowerArtist. This evolution continued with the acquisition of Helic and its suite of on-chip electromagnetic verification tools, and subsequent acquisitions in domains such as photonics, to address key verification issues in IC development.

To give context to Ansys’ role in the current EDA ecosystem, the chip design flow consists of 40 or more individual steps executed by a wide range of software tools. No single company can hope to cover the entire flow. While every stage is important, there are certain extremely difficult, critical verification steps mandated by semiconductor manufacturers that all chips must pass. These signoff verifications are required before foundries will accept a design for manufacture.

Ansys multiphysics solutions for 3D-IC provide comprehensive analysis along three axes: across multiple physics, across multiple design stages, and across multiple design levels

Referred to as golden signoff, they include:

  • Design Rule Checking (DRC)
  • Layout vs. Schematic (LVS)
  • Timing Signoff (STA: static timing analysis)
  • Power Integrity Signoff (EM/IR: electromigration/voltage drop).

The reliability and confidence in these checks rests on industry reputations and experience built up over decades, including years of close collaboration with foundries, which makes golden signoff tools very difficult and risky to displace. Today, virtually every IC design relies on Calibre from Siemens and PrimeTime from Synopsys. Joining these two longstanding golden tools, Ansys RedHawk-SC™ (for digital design) and Ansys Totem (for analog/mixed signal (AMS) design) are golden tools for power integrity signoff, critical for today’s advanced semiconductor designs.

Foundry Certification

Beyond the signoff certification of RedHawk-SC and Totem for power integrity, other Ansys tools have also been qualified by the foundries for a range of verification steps including on-chip electromagnetic modeling (RaptorX, Exalto, VeloceRF), chip thermal analysis (RedHawk-SC Electrothermal), and package thermal analysis (Icepak).

Due to limited engineering resources and demanding schedules, foundries typically work with just a few select EDA vendors as they develop each new generation of silicon processes. Most of these collaborations now exist within the bubble of the Big 4, hinging on relationships built on a reputation for delivering specific technological capabilities, working relationships forged over many years, and the reliability of those tools established by working with a wide spectrum of customers over many technology generations.

Ansys Brings EDA Into the 3D Multiphysics Workflow

The evolution of semiconductor design is moving beyond scaling down to ever-smaller feature sizes and is now addressing the interdependent system challenges of 2.5D (side-by-side) and 3D (stacked) integrated circuits (3D-IC). These disaggregate traditional monolithic designs into a set of ‘chiplets’ that offer benefits in yield, scale, flexibility, reuse, and heterogeneous process technologies. But in order to access these advantages, 3D-IC designers must grapple with the significant increase in complexity that comes with multi-chip design. Many more physical effects must be analyzed and controlled than in traditional single-chip designs, and a broad suite of physical simulation and analysis tools is critical to manage the added complexity of the multiphysics involved.

Ansys has strategically positioned itself to take on these challenges as an industry leader by leveraging its lengthy, broad multiphysics simulation experience with updated Redhawk-SC and Totem capabilities to support advances in power integrity for 3D-IC. This includes brand new capabilities like RedHawk-SC Electrothermal that are targeted specifically at 3D-IC design challenges with thermal and high-speed integrity.

Over the past few years, Ansys has been recognized by TSMC for its critical role in the EDA design flow. In 2020, Ansys achieved early certification of its advanced semiconductor design solution for TSMC’s high-speed CoWoS (Chip-on-Wafer-on-Substrate) and InFO (Integrated Fan-Out) 2.5D and 3D packaging technologies. Continued successful collaboration with TSMC has delivered a hierarchical thermal analysis solution for 3D-IC design. In a more recent collaboration, Ansys RedHawk-SC and Totem achieved signoff certification for TSMC’s newest N3E and N4P process technologies. Similar collaborations for advanced processes, multi-die advanced packaging, and high-speed design have led to certifications from Samsung and GlobalFoundries. Ansys is even moving beyond foundry signoff and certification to define reference flows incorporating these tools, such as TSMC’s N6 radio frequency (RF) design reference flow.

TSMC has also recognized Ansys with multiple Partner of the Year Awards in the past 5 years, most recently in:

  • Joint Development of 4nm Design Infrastructure for delivering foundry-certified, state-of-the-art power integrity and reliability signoff verification tools for TSMC N4 process
  • Joint Development of 3DFabric™ Design Solution for providing foundry-certified thermal, power integrity, and reliability solutions for TSMC 3DFabric™, a comprehensive family of 3D silicon stacking and advanced packaging technologies

Achieving Greater Efficiency through Engineering Workflows

As more system companies embark on designing their own bespoke silicon and 3D-IC technology becomes more pervasive, more physics must be analyzed, and they must be analyzed concurrently, not in isolation. Multiphysics is not merely multiple physics. Building a system with several closely integrated chiplets is more complex, so more physical/electrical issues come into play. In response, Keysight, Synopsys and others have chosen to partner with Ansys, recognizing the value of its open and extensible multiphysics platforms. Keysight has integrated Ansys HFSS into their RF flow, while Synopsys has tightly integrated Ansys tools into their IC design flow.

Ansys is well-positioned to accelerate 3D-IC system design, offering essential expertise in different disciplines — in EDA and beyond — for an efficient workflow that spans a range of physics in virtually any field of engineering. For example, Ansys solutions support the complete thermal analysis of a 3D system, including the application of computational fluid dynamics to capture the influence of cooling fans, and mechanical stress/warpage analysis to ensure system reliability despite differential thermal expansion of the multiple chips. Ansys even provides technology to address manufacturing reliability, predicting when a chip will fail in the field. These products enable the understanding of silicon and systems engineering workflows from start to finish.

Ansys’ influence as a leader in physics spans decades. It extends beyond multiple physics to multiphysics-based solutions that consider interactions simultaneously, as 3D-IC systems development demands — thermal analysis, computational fluid dynamics for cooling, mechanical, electromagnetic analysis of high-speed signals, low-frequency power oscillations between components, safety verification, and more, all within the context of the leading EDA flows. And Ansys’ open and extensible analysis ecosystem connects to other EDA tools and the wider world of computer-aided design (CAD), manufacturing, and engineering.

Summary

There’s little doubt that 3D-IC innovation is accelerating. As systems companies expand further into 3D-IC, they will continue to look to, and trust, Ansys solutions in support of their IC designs. To date, the vast majority of the world’s chip designers rely on Ansys products for accurate power integrity analysis. Ansys provides cyber-physical product expertise, with an acute understanding of silicon and system engineering workflows. With one foot in the semiconductor world and another in the wider system engineering world, Ansys is uniquely positioned to provide broader multiphysics solutions for 2.5D/3D-IC that will continue to grow its footprint in EDA. The EDA Big 3 is now the Big 4, absolutely.

Also Read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

What Quantum Means for Electronic Design Automation

The Lines Are Blurring Between System and Silicon. You’re Not Ready.


Finally, A Serious Attack on Debug Productivity
by Bernard Murphy on 09-20-2022 at 6:00 am

Verification technologies have progressed in almost all domains over the years. We’re now substantially more productive in creating tests for block, SoC and hybrid software/hardware verification. These tests provide better coverage through randomization and formal modeling. And verification engines are faster – substantially faster in hardware accelerators – and higher capacity. We’ve even added non-functional testing, for power, safety and security. But one area of the verification task – debug – has stubbornly resisted meaningful improvements beyond improved integration and ease of use.

This is not an incidental problem; debug now accounts for almost half of verification engineer hours on a typical design. Effective debug depends on expertise and creativity, and these tasks are not amenable to conventional algorithmic solutions. Machine learning (ML) seems an obvious answer: capture all that expertise and creativity in training. But you can’t just bolt ML onto a problem and declare victory. ML must be applied intelligently (!) to the debug cycle. There has been some application-specific work in this direction, but no general-purpose solutions of which I am aware. Cadence has made the first attack I have seen on that bigger goal, with their Verisium™ platform.

The big picture

Verisium is Cadence’s name for their new AI-driven verification platform. This subsumes the debug and vManager engines for those tracking product names, but what is most important is that this platform now becomes a multi-run, multi-engine center applying AI and big data methods to learning and debug. Start with the multi-run part. To learn you need historical data; yesterday the simulation was fine, today we have bugs – what changed? There could be clues in intelligent comparison of the two runs. Or in checked-in changes to the RTL, or in changes in the tests. Or in deltas in runs on other engines – formal for example. Maybe even in hints further afield, in synthesis perhaps.

Tapping into that information must start with a data lake repository for run data. Cadence has built a platform for this also, which they call JedAI – the Cadence Joint Enterprise Data and AI Platform. Simulation trace files, log files, even compiled designs go into JedAI. Designs and testbenches can stay where they are normally stored (Perforce or GitHub, for example). From these, Verisium can easily access design revs and check-in data.

Drilling down

Now for the intelligent part of applying ML to all this data in support of much faster debug. Verisium breaks the objective down into four sub-tasks. Bug triage is a time-consuming task for any verification team: grouping bugs with a likely common cause to minimize redundant debug effort. This task is a natural candidate for ML, based on experience from previous runs pointing to similar groupings. AutoTriage provides this analysis.
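To give a feel for what triage grouping means (purely illustrative – this is not Cadence's algorithm, and a real ML approach would learn groupings from prior regressions rather than use a hand-written normalizer), a naive version might simply cluster failures whose log signatures look alike:

    import re
    from collections import defaultdict

    def signature(log_line):
        """Normalize a failure message: mask hex values and numbers so
        failures with the same root cause collapse onto the same key."""
        sig = re.sub(r"0x[0-9a-fA-F]+", "<hex>", log_line)
        sig = re.sub(r"\d+", "<n>", sig)
        return sig.strip()

    def triage(failures):
        """Group (test_name, failure_message) pairs by normalized signature."""
        groups = defaultdict(list)
        for test, msg in failures:
            groups[signature(msg)].append(test)
        return groups

    # Invented example failures, purely for illustration
    failures = [
        ("test_dma_burst", "ERROR @ 1204ns: axi_resp mismatch, got 0x2 expected 0x0"),
        ("test_dma_wrap",  "ERROR @ 3377ns: axi_resp mismatch, got 0x2 expected 0x0"),
        ("test_irq_mask",  "ERROR @ 88ns: interrupt not cleared after 16 cycles"),
    ]
    for sig, tests in triage(failures).items():
        print(len(tests), "failure(s):", sig)

The first two failures land in one bucket and would be debugged once rather than twice; that redundancy reduction is the whole point of triage.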

SemanticDiff identifies meaningful differences between RTL code checkins, providing another input to ML. WaveMiner performs multi-run bug root-cause analysis based on waveforms. This looks at passing and failing tests across a complete test suite to narrow down which signals and clock cycles are suspect in failures. Verisium Debug then provides a side-by-side comparison between passing and failing tests.

Cadence is already engaging with customers on another component called PinDown, an extension which aims to predict bugs on check-in. This looks both at historical learning and at behavioral factors like check-in times to assess likely risk in new code changes.

Putting it all together

First, a caveat. Any technique based on ML will return answers based on likelihood, not certainties. The verification team will still need to debug, but they can start closer to likely root causes and can get to resolution much faster, which is a huge advance over the way we have to do debug today. As far as training is concerned, I am told that AutoTriage requires 2-3 regressions’ worth of data to start to become productive. PinDown bug prediction needs a significant history in the revision control system, but if that history exists, it can train in a few hours. It looks like training is not a big overhead.

There’s a lot more that I could talk about, but I’ll wrap with a few key points. This is the first release of Verisium, and Cadence will be announcing customer endorsements shortly. Further, JedAI is planned to extend to other domains in Cadence. They also plan APIs for customers and other tool vendors to access the same data, acknowledging that mixed vendor solutions will be a reality for a long time 😊

I’m not a fan of overblown product reviews, but I feel more than average enthusiasm is warranted here. If it delivers on half of what it promises, Verisium will be a ground-breaker. You should check it out.

 


WEBINAR: O-RAN Fronthaul Transport Security using MACsec
by Daniel Nenni on 09-19-2022 at 10:00 am

5G provides a range of improvements compared to existing 4G LTE mobile networks in terms of capacity, speed, latency and security. One of the main improvements is in the 5G RAN; it is based on a virtualized architecture where functions can be centralized close to the 5G core for economy, or distributed as close to the edge as possible for lower latency.

SEE THE REPLAY HERE

The functional split options for the baseband station processing chain result in a separation between Radio Units (RUs) located at cell sites implementing lower layer functions, and Distributed Units (DUs) implementing higher layer functions.

This offers centralized processing and resource sharing between RUs, simple RU implementation requirements, easy function extendibility, and easy multivendor interoperability. The fronthaul is defined as the connectivity in the RAN infrastructure between the RU and the DU.

The O-RAN Alliance, established in February 2018, is an initiative to standardize the RAN with open, interoperable interfaces between the radio and signal processing elements, to facilitate innovation and reduce costs by enabling multi-vendor interoperable products and solutions while consistently meeting operators’ requirements.

The O-RAN Alliance specifies that the fronthaul has to support the Low Level Split 7-2x between the O-RAN Radio Unit (O-RU) and the O-RAN Distributed Unit (O-DU). The O-RAN fronthaul is divided into data planes over different encapsulation protocols: Control Plane (C-Plane), User Plane (U-Plane), Synchronization Plane (S-Plane), and Management Plane (M-Plane). These planes carry very sensitive data and are constrained by strict performance requirements.

SEE THE REPLAY HERE

With its ability to mix different traffic types and its ubiquitous deployment, Ethernet is the preferred packet-based transport technology for the next-generation fronthaul. An insecure Ethernet transport network, however, can expose the fronthaul to different types of threats that can compromise the operation of the network.

For example, data can be eavesdropped on because packets travel in clear text, and the lack of authentication can allow an attacker to impersonate a network component. This can result in manipulation of the data planes and, in the worst case, a complete network denial-of-service, making a security solution for the O-RAN fronthaul critical.

In this webinar, we take a look at MACsec as a compelling security solution for the O-RAN fronthaul. We will understand the very sensitive data that the fronthaul transports, its strict high-performance requirements, and the urgent need to secure it against several threats and attacks.

We will learn about the features MACsec provides to protect the fronthaul, together with its implementation challenges. Finally, we will see how Comcores’ MACsec solution can be integrated and customized for Open RAN projects, accelerating development and reducing risk and cost.
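For reference, here is a minimal sketch of what MACsec (IEEE 802.1AE) adds to each Ethernet frame – a SecTAG after the source MAC address and an integrity check value (ICV) after the payload – which is what provides per-frame integrity, optional confidentiality, and replay protection. The field widths below follow the standard; the helper itself is only illustrative.

    from dataclasses import dataclass
    from typing import Optional

    MACSEC_ETHERTYPE = 0x88E5   # SecTAG EtherType
    ICV_LEN = 16                # integrity check value length for GCM-AES-128

    @dataclass
    class MacsecSecTag:
        """Security TAG inserted after the source MAC address."""
        tci_an: int                  # TAG control info + association number (1 byte)
        short_length: int            # SL field (1 byte), non-zero for short payloads
        packet_number: int           # PN (4 bytes), monotonically increasing for replay protection
        sci: Optional[int] = None    # Secure Channel Identifier (8 bytes, optional)

    def macsec_overhead(include_sci: bool) -> int:
        """Bytes added per frame: EtherType + TCI/AN + SL + PN (+ SCI) + ICV."""
        sectag = 2 + 1 + 1 + 4 + (8 if include_sci else 0)
        return sectag + ICV_LEN

    print(macsec_overhead(include_sci=True))    # 32 bytes per frame
    print(macsec_overhead(include_sci=False))   # 24 bytes per frame

That fixed per-frame overhead, plus the line-rate AES-GCM processing it implies, is why the webinar pairs MACsec's security benefits with its implementation challenges on the latency-sensitive fronthaul.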

Also read:

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

CEO Interview: John Mortensen of Comcores

LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface


Advanced EM simulations target conducted EMI and transients
by Don Dingee on 09-19-2022 at 6:00 am

Advanced EM simulations yield both conducted and radiated EMI in automotive power integrity analysis

A vital benefit of advanced EM simulations is their ability to take on complicated physical test setups, substituting far easier virtual tests yielding accurate results earlier during design activities. The latest release of Keysight PathWave ADS 2023 continues speeding up engineering workflows. Let’s look at three areas of new capability: conducted EMI analysis, SMPS transient analysis, and cloud-accelerated EM simulation.

Chasing conducted and radiated EMI before finalizing layout

Analyzing EMI has often been an after-the-fact exercise, done late in the game with a complete system hardware prototype. When it fails, there’s either a retrofit or a re-spin coming, followed by another if the problem isn’t fixed the first time.

Guessing the source of EMI and the correct fix is getting much harder. Systems now have many more power rails. For instance, in an automotive environment, 12V battery power turns into 48V for distribution, 3.3V for modules, and 1.25 or 0.8 volts for high-speed logic. Noise shows up every time there is DC/DC conversion anywhere in that chain. Using physical measurements, untangling what’s causing conducted EMI versus radiated EMI can be tricky.

Sorting out EMI observations in virtual space requires two EM simulation methods and high-fidelity modeling with real-world effects. Near-field simulation contributes to conducted EMI, and far-field contributes to radiated EMI. In PathWave ADS 2023, one simulation setup easily runs both and automates parameter iterations, quickly providing complete, accurate EMI results.

Adding conducted EMI analysis is a breakthrough. “With new automated differential setup techniques in modeling and simulation, PIPro is now able to assess potential conducted and radiated EMI issues as layout happens in PathWave ADS 2023,” says Heidi Barnes, Power Integrity Product Manager for Keysight EDA.

Simulations yield both time-domain (ripple) and frequency-domain (spikes or bands) results for EMI, and these results help parameterize sweeps to drill down on the root cause in the layout. For example, PIPro now automates setup of ground plate reference ports, populates a generic large signal switching model, and allows users to insert a higher fidelity switching model if needed. Another benefit of simulation is analyzing layouts under various loading conditions. Barnes concludes, “There’s not a lot of manual setups left – designers can focus on finding and correcting the causes of conducted and radiated EMI issues with simulation at layout.”
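As a toy illustration of why both views matter (generic numpy, nothing to do with PathWave itself): a converter's output ripple in the time domain shows up in the frequency domain as spikes at the switching frequency and its harmonics, which is exactly where conducted EMI limits are checked. The rail voltage, ripple amplitude, and switching frequency below are invented for the example.

    import numpy as np

    fs = 100e6           # sample rate, 100 MHz
    f_sw = 2e6           # hypothetical 2 MHz switching frequency
    t = np.arange(0, 200e-6, 1/fs)

    # Crude ripple model: sawtooth ripple on a 3.3 V rail plus wideband noise
    ripple = 0.02 * ((t * f_sw) % 1.0 - 0.5)              # ~20 mV ripple
    rail = 3.3 + ripple + 0.001 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(rail - rail.mean())) / t.size
    freqs = np.fft.rfftfreq(t.size, 1/fs)

    # The largest spectral peaks land at f_sw and its harmonics
    peaks = freqs[np.argsort(spectrum)[-5:]]
    print(np.sort(peaks) / 1e6, "MHz")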

An SMPS report card and building better models

There are also improvements on the power electronics engineering side in PathWave ADS 2023 and PEPro. The first is looking at figures of merit for switched-mode power supplies (SMPS). “Measurements like transient recovery, efficiency, and voltage ripple are hard to set up in the real world, and every time the prototype changes, it all needs to be done again,” says Steven Lee, Power Electronics Product Manager for Keysight EDA.

Keysight has been building an SMPS report card, using modeling and advanced EM simulations to streamline analysis of common metrics at layout with just a few clicks in PEPro or ADS. Transient recovery is the first metric available in the new SMPS Performance Testbench, with efficiency and more metrics coming in future releases.

 

Another new capability goes after a nagging problem for designers: how to get more detailed and accurate models for power transistors. SPICE models often leave a lot of context out. Syntaxes differ, and translation doesn’t go well. And with simulation now gaining momentum in the SMPS design community, modeling problems are just being discovered.

Transistor IV and CV response curves don’t lie. However, translating them to a model using equations and polynomial fitting can be a time sink for power engineers. Behind the scenes, Keysight teams have been working on artificial neural networks (ANNs) for automatically creating models. In PathWave ADS 2023 and PEPro, it’s as simple as scanning images of transistor response curves off the datasheet. Cool, right? Support for silicon and silicon carbide power transistors is in this release, with gallium nitride and other technologies coming later.

Lee also says a new EMI curriculum for PathWave ADS 2023 and PEPro is launching, developed by Professor Nicola Femia at the University of Salerno. It focuses on SMPS design with workspaces, labs, reference hardware from Digi-Key, and simulation models.

Here comes the cloud for PathWave ADS 2023

One more area Keysight teams have been working on for some time and are now ready to roll out: high-performance computing support for PathWave ADS 2023. Engineers are accustomed to waiting for simulations to finish, often planning their workflow around the wait. Distributed computing can give back valuable design time by reducing EM simulation run times by up to 80%, using multiple concurrent simulations running in an HPC cluster. Teams with limited hardware access can scale up instantly using turnkey cloud platforms.

For example, in a DIMM-based system with DDR4 memory on a 12-layer board running up to 10 GHz, a single signal integrity simulation takes 3 hours. Parallelizing 12 SIPro jobs within PathWave ADS 2023 on a cloud HPC platform improves simulation time by 84%. It’s not just more powerful processors at work. Keysight has looked at the steps in advanced EM simulations where multi-threading and parallelization can speed up results.
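A quick back-of-the-envelope check on that figure (my arithmetic; the article does not break the run down further):

    # What an 84% reduction in simulation time means in wall-clock terms.
    serial_hours = 3.0                 # single signal-integrity simulation
    reduction = 0.84
    parallel_hours = serial_hours * (1 - reduction)
    speedup = serial_hours / parallel_hours

    print(f"{parallel_hours * 60:.0f} minutes")    # ~29 minutes of wall-clock time
    print(f"{speedup:.1f}x effective speedup")     # ~6.3x from the 12 parallel jobs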

There are two licensing models, both using Rescale as the turnkey cloud provider for PathWave ADS 2023. In Keysight’s Design Cloud user experience, the GUI runs on a local machine with simulations launched on HPC clusters, which can be on-prem or in the cloud. Existing ADS licenses cover the GUI and simulator, and floating HPC licenses enable parallel jobs.

Discover more about PathWave ADS 2023

Whether design teams run entirely on Keysight EDA software or are looking for advanced EM simulations and productivity within other EDA environments, these enhancements can help. Here are a few resources for more info.

Web pages:

What’s New in High-Speed Digital Design and Simulation?

What’s New in Power Electronics Design and Simulation?

Videos:

PathWave ADS: Power Integrity Simulation Workflow with PIPro

Building Your Own Switching Device Models in ADS for SMPS Design

Introduction to High-Performance Computing in PathWave ADS on Rescale

Press release:

Keysight Delivers Design-to-Test Workflow for High-Speed Digital Designs


GM Should BrightDrop-Kick Cruise
by Roger C. Lanctot on 09-18-2022 at 6:00 pm

The GM Authority newsletter informed us last week that General Motors’ BrightDrop commercial vehicle group was planning to adopt autonomous vehicle technology created by GM-owned Cruise Automation for its delivery vans. Days later, Cruise CEO Kyle Vogt announced plans to bring its nascent autonomous taxi service to Phoenix and Austin before year’s end.

Transitioning GM’s autonomous vehicle development activities toward the commercial vehicle and delivery sector makes a lot of sense. It is the one sector that offers the prospect of rapid scaling to target applications that are dependent upon predictable routes.

Were GM to perform a complete pivot of its Cruise development activities toward commercial vehicles it might be seen as the most brilliant move that the company has made yet in AV. Instead, we are left with a teasing suggestion from BrightDrop’s CEO with no formal endorsement from senior GM management.

In fact, the subsequent announcement from Cruise can be seen as a riposte, a brushback, to suggest that GM execs and their “ideas” are not welcome at Cruise’s San Francisco headquarters. Cruise is doubling down on its pointless money-burning pursuit of unscalable autonomous technology intended to solve a non-existent problem.

The main difference between commercial vehicle sector AV applications and the robotaxi path to market is that commercial vehicle operators face real challenges in terms of personnel shortages, safety, and logistics. There is money to be made and saved, and useful operational gains to be had, from automating delivery vehicles.

Robotaxis are nothing more than an expensive replacement for existing human-operated taxis and ride hailing operators. Robotaxis are not solving a problem and they have a too-narrow operational design domain – i.e. they cannot drive passengers from the city to the airport or suburbs.

This is no time for Cruise to spread its twilight driver novelty act (Cruise is currently operating within restricted neighborhoods and timeframes in San Francisco) to multiple other U.S. cities. There is no organic demand for robotaxis – certainly not as currently conceived.

This is no time for GM to play footsie with a wouldn’t-that-be-nice approach to automating commercial vehicles. With Cruise torching hundreds of millions of dollars each quarter in pursuit of a fantasy, it’s time for a massive rethink and refocus.

GM should shift its massive resources, personnel, technical capabilities, and financial backing toward a campaign to automate commercial vehicles piggy-backed on BrightDrop. BrightDrop is sailing into the market on a sound footing of almost limitless demand ideally tuned to finance AV development and expand valuable data gathering. How about it GM? Hit that clutch and pull Cruise out of the ditch.

Also Read:

Ultra-efficient heterogeneous SoCs for Level 5 self-driving

GM Buyouts: Let’s Get Small!

MAB: The Future of Radio is Here


Intellectual Abilities of Artificial Intelligence (AI)
by Ahmed Banafa on 09-18-2022 at 4:00 pm

To understand AI’s capabilities and abilities we need to recognize the different components and subsets of AI. Terms like neural networks, machine learning (ML), and deep learning need to be defined and explained.

In general, Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

There are three types of artificial intelligence (AI):

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

The following chart explains them

Neural networks

In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory.

Typically, a neural network is initially “trained” or fed large amounts of data and rules about data relationships (for example, “A grandfather is older than a person’s father”). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world).
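To make the idea of a network "trained" on examples concrete, here is a minimal two-layer network in plain numpy learning the XOR function – a toy stand-in for the much larger networks described in this article, and illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 units
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output unit
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(5000):                            # plain gradient descent
        h = sigmoid(X @ W1 + b1)                     # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)          # backpropagate the error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

    print(out.round(2).ravel())   # should approach [0, 1, 1, 0]

The "training" here is just repeated adjustment of connection weights to reduce error on the examples – the same principle, at vastly smaller scale, as the networks discussed below.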

Deep learning vs. machine learning

To understand what Deep Learning is, it’s first important to distinguish it from other disciplines within the field of AI.

One outgrowth of AI was machine learning, in which the computer extracts knowledge through supervised experience. This typically involved a human operator helping the machine learn by giving it hundreds or thousands of training examples, and manually correcting its mistakes.

While machine learning has become dominant within the field of AI, it does have its problems. For one thing, it’s massively time-consuming. For another, it’s still not a true measure of machine intelligence since it relies on human ingenuity to come up with the abstractions that allow a computer to learn.

Unlike machine learning, deep learning is mostly unsupervised. It involves, for example, creating large-scale neural nets that allow the computer to learn and “think” by itself — without the need for direct human intervention.

Deep learning “really doesn’t look like a computer program,” where ordinary computer code is written in very strict logical steps. What you see in deep learning is something different: you don’t have a lot of instructions that say, “If one thing is true, do this other thing.”

Instead of linear logic, deep learning is based on theories of how the human brain works. The program is made of tangled layers of interconnected nodes. It learns by rearranging connections between nodes after each new experience.

Deep learning has shown potential as the basis for software that could work out the emotions or events described in text (even if they aren’t explicitly referenced), recognize objects in photos, and make sophisticated predictions about people’s likely future behavior. Examples of deep learning in action are voice recognition systems like Google Now and Apple’s Siri.

Deep Learning is showing a great deal of promise — and it will make self-driving cars and robotic butlers a real possibility. The ability to analyze massive data sets and use deep learning in computer systems that can adapt to experience, rather than depending on a human programmer, will lead to breakthroughs. These range from drug discovery to the development of new materials to robots with a greater awareness of the world around them.

Deep Learning and Affective Computing 

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science (deep learning), psychology, and cognitive science. While the origins of the field may be traced as far back as early philosophical inquiries into emotion (“affect” is, basically, a synonym for “emotion”), the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy: the machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.

Affective computing technologies using deep learning sense the emotional state of a user (via sensors, microphone, cameras and/or software logic) and respond by performing specific, predefined product/service features, such as changing a quiz or recommending a set of videos to fit the mood of the learner.

The more computers we have in our lives the more we’re going to want them to behave politely, and be socially smart. We don’t want it to bother us with unimportant information. That kind of common-sense reasoning requires an understanding of the person’s emotional state.

One way to look at affective computing is human-computer interaction in which a device has the ability to detect and appropriately respond to its user’s emotions and other stimuli. A computing device with this capacity could gather cues to user emotion from a variety of sources. Facial expressions, posture, gestures, speech, the force or rhythm of keystrokes, and the temperature changes of the hand on a mouse can all signify changes in the user’s emotional state, and these can all be detected and interpreted by a computer. A built-in camera captures images of the user, and algorithms are used to process the data to yield meaningful information. Speech recognition and gesture recognition are among the other technologies being explored for affective computing applications.

Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using deep learning techniques that process different modalities, such as speech recognition, natural language processing, or facial expression detection.
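As a heavily simplified sketch of that pipeline for one modality (text), here is a bag-of-words softmax classifier standing in for the deep networks the article describes; the utterances and the two emotion labels are invented purely for illustration.

    import numpy as np

    # Toy labeled utterances: 0 = frustrated, 1 = pleased
    texts  = ["this is so confusing", "i give up", "great explanation thanks",
              "i love this lesson", "nothing works", "that was really helpful"]
    labels = np.array([0, 0, 1, 1, 0, 1])

    vocab = sorted({w for t in texts for w in t.split()})
    def featurize(text):
        """Bag-of-words vector over the toy vocabulary."""
        return np.array([text.split().count(w) for w in vocab], dtype=float)

    X = np.stack([featurize(t) for t in texts])
    W = np.zeros((len(vocab), 2))

    for _ in range(300):                               # simple softmax regression
        logits = X @ W
        p = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
        grad = X.T @ (p - np.eye(2)[labels]) / len(texts)
        W -= 1.0 * grad

    test = featurize("this lesson was really great")
    print(["frustrated", "pleased"][int((test @ W).argmax())])

A production affective-computing system would replace the bag-of-words features with learned representations of speech, facial imagery, or physiology, but the final step – mapping extracted patterns to an emotional state – has the same shape.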

Emotion in machines

A major area in affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or the ability to convincingly simulate emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents in order to enrich and facilitate interactivity between human and machine. While human emotions are often associated with surges in hormones and other neuropeptides, emotions in machines might be associated with abstract states tied to progress (or lack of progress) in autonomous learning systems. In this view, affective emotional states correspond to time-derivatives in the learning curve of an arbitrary learning system.

There are two major categories describing emotions in machines: emotional speech and facial affect detection.

Emotional speech includes:

  • Deep Learning
  • Databases
  • Speech Descriptors

Facial affect detection includes:

  • Body gesture
  • Physiological monitoring

The Future

Affective computing using deep learning tries to address one of the major drawbacks of online learning versus in-classroom learning – the teacher’s ability to immediately adapt the pedagogical situation to the emotional state of the student in the classroom. In e-learning applications, affective computing using deep learning can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services, i.e. counseling, benefit from affective computing applications when determining a client’s emotional state.

Robotic systems capable of processing affective information exhibit higher flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing with deep learning abilities to enhance realism and provide a higher degree of autonomy.

Other potential applications are centered around Social Monitoring. For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry. Affective computing with deep learning at the core has potential applications in human computer interaction, such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood. Companies would then be able to use affective computing to infer whether their products will or will not be well received by the respective market. There are endless applications for affective computing with deep learning in all aspects of life.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

https://www.mygreatlearning.com/blog/what-is-artificial-intelligence/

http://www.technologyreview.com/news/524026/is-google-cornering-the-market-on-deep-learning/

http://www.forbes.com/sites/netapp/2013/08/19/what-is-deep-learning/

http://www.fastcolabs.com/3026423/why-google-is-investing-in-deep-learning

http://www.npr.org/blogs/alltechconsidered/2014/02/20/280232074/deep-learning-teaching-computers-to-tell-things-apart

http://www.technologyreview.com/news/519411/facebook-launches-advanced-ai-effort-to-find-meaning-in-your-posts/

http://www.deeplearning.net/tutorial/

http://searchnetworking.techtarget.com/definition/neural-network

https://en.wikipedia.org/wiki/Affective_computing

http://www.gartner.com/it-glossary/affective-computing

http://whatis.techtarget.com/definition/affective-computing

http://curiosity.discovery.com/question/what-is-affective-computing

Also Read:

Synopsys Vision Processor Inside SiMa.ai Edge ML Platform

Ultra-efficient heterogeneous SoCs for Level 5 self-driving

Samtec is Fueling the AI Revolution