
Podcast EP93: The Unique Role of EMD Electronics to Enable Technology Advances Across Our Industry
by Daniel Nenni on 07-08-2022 at 10:00 am

Dan is joined by Dr. Jacob Woodruff, Head of Technology Scouting and Partnerships at EMD Electronics, where he works to find and advance external early-stage and disruptive technologies in the semiconductor and display materials space. Dr. Woodruff is an experienced technologist, having managed global R&D groups developing semiconductor deposition materials at EMD Electronics. Previously, at ASM, Jacob led ALD process technology teams, and at SunPower and Nanosolar he managed R&D labs and developed processes for solar cell manufacturing. He holds a Master's in Materials Science and Engineering and a PhD in Physical Chemistry from Stanford University.

EMD Electronics has recently joined Silicon Catalyst as its newest Strategic Partner, with a focused search for startups developing innovative electronic materials required for next-generation semiconductor devices. Details can be found at https://siliconcatalyst.com/silicon-catalyst-welcomes-emd-electronics-as-newest-strategic-partner.

Dan discusses the structure and focus of EMD Electronics with Jacob, exploring the company’s primary areas of research and innovation and the far-reaching impact its work has across a large part of the semiconductor value chain.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Barry Paterson of Agile Analog
by Daniel Nenni on 07-08-2022 at 6:00 am

Barry Paterson is the CEO of UK-based analog IP pioneer Agile Analog. He has held senior leadership, engineering and product management roles at Dialog Semiconductor and Wolfson Microelectronics, and has been involved in the development of custom, mixed-signal silicon solutions for many of the leading mobile and consumer electronics companies across the world. He has a technical background in Ethernet, audio, haptics and power management and is passionate about working with customers to deliver high-quality products. He holds an honours bachelor's degree in electrical and electronic engineering from the University of Paisley in Scotland.

Agile Analog had a funding round of $19 million last year. What was the key USP that investors liked?

We have a disruptive technology called Composa™. This is an internal tool that enables us to automatically generate analog IP solutions to exactly meet customer specifications on their specific process technology.

What makes Agile Analog’s approach so different to existing analog solutions?

Until now, analog IP was typically custom designed or ported from a different process node. We re-use the tried and tested analog IP circuits within our Composa library, which enables us to build a customer solution for a specific process node while taking into account the customer's exact power, performance and area requirements. Effectively, the design-once-and-re-use-many-times model of digital IP can be applied to analog IP for the first time.

What are the benefits of your new approach to the customer?

First, the analog IP circuits in the Composa library have been extensively verified and used in previous designs, and the verification is automated and applied to every build of the IP core. Second, our Composa automated approach creates bespoke analog IP solutions in a fraction of the time it would normally take to develop them using traditional analog design methodologies. Third, we enable customers to license and integrate analog IP into their products, allowing them to focus on faster time to market and differentiation.

You have used the phrase process-agnostic in a recent press release. What does that mean for customers?

Composa can simply regenerate an analog IP solution using the PDK selected by a customer. This may be on a different process technology, for example when switching to a different foundry or taking advantage of advances in process technology to improve their products. This allows customers to select the optimum process technology and capacity options they want to use without being constrained by the analog IP. Historically the availability of analog IP has been a constraint in process selection. Our aim at Agile Analog is to remove that constraint.

Why is analog IP becoming so important?

There is huge demand for analog ICs; the market is estimated at 83.2 billion USD in 2022, according to IC Insights’ 2022 McClean Report. We aim to address this space by making it very easy to integrate the functions of these discrete analog ICs onto the main SoC to reduce overall BOM cost, complexity, size and power consumption.

The drive to integrate analog functionality into System on Chip solutions is increasing as consumers expect a better user experience and more functionality. Some Agile Analog customers have been exclusively focused on digital products and now need to integrate some external analog functionality; this is where our foundation IP can be used. Other customers are developing devices with a growing number of inputs and outputs to sensors and the real world, which invariably requires analog signals and conversion to and from the digital domain; in this case, our data conversion IP is an ideal solution. In addition to data conversion, there is also a need to optimise power conversion to address the supply requirements of internal and external features, which is where Agile Analog’s power conversion IP can be deployed. Finally, some customers need a system solution where several analog IP cores are integrated and interface directly to their digital cores. For this, we offer a number of bespoke subsystems.

What IP do you offer?

We are building up our portfolio of foundation IP building blocks, which provide some of the basic analog housekeeping that customers require. We also offer data conversion IP that includes a number of ADCs and DACs. In the power conversion space, we have LDOs and voltage references, with future plans for buck and boost converters. Alongside those IP cores, we want to build IP subsystems for IoT and wearables that will look like digital blocks to an EDA system, making it easy to drop them into the digital design flow. Customers can review our current IP portfolio on our website at agileanalog.com.

I hear that you are moving to new, larger offices. Why is that?

As the Agile Analog team has grown, we need a larger office with more space for collaboration and innovation. We are looking forward to moving into a significantly larger office at the iconic Radio House building in the heart of Cambridge, UK, which will enable us to scale to over a hundred staff as we grow. Our new office will be our global headquarters and has been designed with collaboration and team networking in mind. We will continue with a hybrid working model, with many staff working both from home and at the office, so that geography is not a barrier to recruiting the top analog engineers, as well as the software and digital engineers, that we need.

So are you actively recruiting now?

Yes, we have just started a major recruiting drive with the aim of increasing our engineering headcount by over 50%. We are looking for a number of engineers across multiple domains. Analog design engineers who want to work on advanced process nodes, develop different IP cores and use our latest Composa technology should look at the latest vacancies on our website.

Why do you think analog engineers would beat a path to your door?

Engineers always like a challenge and to be doing something different. Our approach to analog IP is completely different and is revolutionising the way that vital analog interfaces are incorporated into next-generation semiconductors. We are working on multiple IP cores and multiple processes, so we offer the opportunity to build experience in many exciting areas.

https://www.agileanalog.com/contact

Also read:

CEO Interview: Vaysh Kewada of Salience Labs

CEO Interview: Chuck Gershman of Owl AI

CEO Interviews: Dr Ali El Kaafarani of PQShield


Altair at #59DAC with the Concept Engineering Acquisition
by Daniel Nenni on 07-07-2022 at 10:00 am

The Design Automation Conference has been the pinnacle for semiconductor design for almost 60 years. This year will be my 38th DAC and I can’t wait to see everyone again. One of the companies I will be spending time with this year is Altair.

Last month Altair acquired our friends at Concept Engineering, the leading provider of electronic system visualization software. Prior to that, Altair acquired our friends at Runtime Design Automation. The Runtime people are still at Altair, which is a very good sign. Prior to the Runtime acquisition I had little contact with Altair, but over the last two years I have developed a great amount of respect for what they have accomplished, absolutely.

Altair will be at DAC this year in a very big way, which I greatly appreciate. Here is a quick preview from their DAC landing page:

Compute Intelligence for Breakthrough Results. Visit us at #59DAC!

Join us to learn more about Altair’s world-class, high-throughput solutions for every step of the semiconductor design process (and more!).

Altair solutions are used by leading companies all over the globe to keep EDA, HPC, and cloud compute resources running smoothly and efficiently. We care about the same critical components you do — including cores, licenses, and emulation — and know that even the most capable hardware can’t do its job without the right tools to enable top performance and high throughput.

Schedule Meeting

Rapidly Advancing Electronics: Altair Solutions Enable Rapid Growth for Wired Connectivity Leader Kandou

Faced with rapid growth, the team at Kandou needed to manage workloads and licensing for their expensive EDA tools. The team chose Altair® Accelerator™ for job scheduling and Altair® Monitor™ for real-time license monitoring and management, resulting in improved product development, getting to market faster, and saving money on expensive EDA tools.

Read the Customer Story

Inphi Corporation Speeds Up Semiconductor Design with Altair Accelerator

The team at Inphi understands the importance of HPC and EDA software performance optimization better than most. They evaluated several competing solutions before selecting Altair Accelerator, which stood out among the competition for superior performance and Altair’s reputation for excellent customer service.

Read the Customer Story

CEA Speeds Up EDA for Research: Powering R&D at the French Alternative Energies and Atomic Energy Commission

CEA Tech, the Grenoble-based technology research unit of the French Alternative Energies and Atomic Energy Commission (CEA), is a global leader in miniaturization technologies that enable smart digital systems and secure, energy-efficient solutions for industry.

Read the Customer Story

Using I/O Profiling to Migrate and Right-size EDA Workloads in Microsoft Azure

Semiconductor companies are taking advantage of Microsoft Azure HPC infrastructures for their complex electronic design automation (EDA) software. When one of the largest semiconductor companies asked for help using Azure to run its EDA workloads, Microsoft teamed up with Altair. This presentation outlines how Microsoft used Altair Breeze™ to diagnose I/O patterns, choose the workflow segments best suited for the cloud, and right-size the Azure infrastructure. The result was better performance and lower costs for our semiconductor customer.

Watch Now

Measuring Success in Semiconductor Design Optimization: What Metrics Matter?

There are few fields in the world as competitive as semiconductor design exploration and verification. Teams might run tens of millions of compute jobs in a single day on their quest to bring new chips to market first, requiring vast quantities of compute and, increasingly, cloud and emulator resources, as well as expensive EDA licenses, and the all-important resource, time. In this roundtable, experts will discuss the license-, job-, compute-, and host-based metrics, highlighting the optimization strategies that edge out the competition and drive up profitability.

Learn More

I hope to see you there!

Also Read:

Future.HPC is Coming!

Six Essential Steps For Optimizing EDA Productivity

Latest Updates to Altair Accelerator, the Industry’s Fastest Enterprise Job Scheduler


CXL Verification. A Siemens EDA Perspective
by Bernard Murphy on 07-07-2022 at 6:00 am

Amid the alphabet soup of inter-die/chip coherent access protocols, CXL is gaining a lot of traction. Originally proposed by Intel for cross-board and cross-backplane connectivity to accelerators of various types (GPU, AI, warm storage, etc.), the standard now has a who’s who of systems and chip companies on its board, joined by an equally impressive list of contributing members. The standard enables coherent memory sharing between a central processor/CPU cluster, with its own cache coherent memory subsystem, and memory/caching on each of multiple accelerator systems. This greatly simplifies life for software developers since memory consistency is managed in hardware. There is no need to worry about it in software; it’s all just one unified memory model, whether software is running on the processor or on an accelerator.

CXL and PCIe

As an Intel-initiated standard, CXL layers on top of PCIe (as does NVMe, but that’s another story). PCIe already provides the physical interface standard, as well as the protocols and traffic management for IO communication. CXL builds on top of this for memory and cache communication between devices. That makes it a complex protocol to verify out of the gate, requiring PCIe compliance just as a starting point.

CXL layers three protocols on top of PCIe:

  • io for configuration and a variety of administrative functions
  • cache providing peripherals with low-latency access to host memory
  • memory allowing the host to coherently access memory attached to CXL devices

The coherency requirement adds more complications, such as compliance with the associated coherency protocol (e.g., MESI). Also add in Integrity and Data Encryption (IDE) to ensure secure connection and computing. Put all of this together and it is clear that CXL protocol checking is a very complex beast, for which a well-defined VIP would be enormously helpful.

Questa VIP for CXL

Siemens EDA has built a Questa VIP to address this need. QVIP can model any or all of the CXL-compliant components in a system, including IDE, generating fully compliant stimulus in host, device, or passive device roles. The VIP comes with a comprehensive verification plan covering simple and complex scenarios, along with predefined sequences to support generating these scenarios. Checkers are provided to validate compliance with the coherency protocol of choice and to validate data integrity through cache reads, writes, and updates.

When a problem is found, possibly elsewhere in the system, the VIP provides detailed logging in both directions, device to host and host to device. It logs all information on the CXL interconnect by timestamp, which simplifies tracking problems back to transactions. It is also possible to enable detailed debug messages: once you know roughly where you want to look, you can trigger detailed transaction information in both directions.
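
To make the debug flow concrete, here is a minimal sketch in Python of filtering a timestamped transaction log down to a suspect time window. The log format, field names and example transactions below are hypothetical illustrations, not actual QVIP output:

```python
# Minimal sketch: filter a timestamped transaction log to localize a problem window.
# The log format and field names are hypothetical, not the actual QVIP output.
from dataclasses import dataclass

@dataclass
class Transaction:
    timestamp_ns: int
    direction: str      # "host->device" or "device->host"
    protocol: str       # "io", "cache", or "memory"
    description: str

def transactions_in_window(log, start_ns, end_ns, direction=None):
    """Return transactions inside a time window, optionally filtered by direction."""
    return [t for t in log
            if start_ns <= t.timestamp_ns <= end_ns
            and (direction is None or t.direction == direction)]

# Example: inspect device-to-host traffic around a suspected coherency issue.
log = [
    Transaction(1200, "host->device", "memory", "MemRd addr=0x1000"),
    Transaction(1350, "device->host", "cache",  "RspI  addr=0x1000"),
    Transaction(1420, "device->host", "cache",  "Evict addr=0x1000"),
]
for t in transactions_in_window(log, 1300, 1500, direction="device->host"):
    print(t.timestamp_ns, t.protocol, t.description)
```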

Finally, for coverage, the testplan supplied with the VIP is designed to guide you to high coverage over your CXL compliance testing. Table entries define the main test objective, and each objective comes with predefined coverpoints. You can tweak weights for these as appropriate to your verification goals. So, it’s an all-in-one package: VIP, testplan, debug support, and coverage. You just have to dial in your menu choices.

CXL looks likely to be the multi-chip/chiplet solution of choice for coherent memory sharing. This means that you should expect to see this play a larger role in verification planning. If you want to learn more about the Questa Verification IP solution, click HERE.


What Quantum Means for Electronic Design Automation
by Kelly Damalou and Kostas Nikellis on 07-06-2022 at 10:00 am

In 1982, Richard Feynman, a theoretical physicist and Nobel Prize winner, proposed the first quantum computer: a machine that would combine traditional algorithms and quantum circuits with the goal of simulating quantum behavior as it occurs in nature. The systems Feynman wanted to simulate could not be modeled by even a massively parallel classical computer. To use Feynman’s words, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

Today, companies like Google, Amazon, Microsoft, IBM, and D-Wave are working to bring Feynman’s ambitious theories to life by designing quantum hardware processing units to address some of the world’s most complicated problems—problems it would take a traditional computer months or even years to solve (if ever). They’re tackling cryptography, blockchain, chemistry, biology, financial modeling, and beyond.

The scalability of their solutions relies on a growing number of qubits. Qubits are the building blocks of quantum processing; they’re analogous to bits, the building blocks of traditional processing units. IBM’s roadmap for scaling quantum technology shows the 27-qubit IBM Q System One released in 2019 and, less than five years later, the next family of IBM Quantum systems expected at 1,121 qubits.

Achieving a sufficient level of qubit quality is the main challenge in making large-scale quantum computers possible. Today, error correction is a critical operation in quantum systems, and it preoccupies the vast majority of qubits in each quantum processor. Improving fault tolerance in quantum computing requires error correction that’s faster than error occurrence. Beyond error correction, there are plenty of challenges on the road to designing a truly fault-tolerant quantum computer with exact, mathematically accurate results. Qubit fidelity, qubit connectivity, granularity of phase, probability of amplitude, and circuit depth are all important considerations in this pursuit.

While quantum computing poses a major technological leap forward, there are similarities between quantum designs and traditional IC designs. Those similarities allow the electronic design automation (EDA) industry to build on existing knowledge and experience from IC workflows to tackle quantum processing unit design.

Logic Synthesis in Quantum and RFIC Designs

In quantum designs on superconductive silicon, the basic building block is the Josephson Junction. In radio-frequency integrated circuit (RFIC) chips, that role is played by transistors. In both situations, these fundamental building blocks are used to build gates that ultimately form qubits in quantum and bits in RFIC.

Image source: “An Introduction to the Transmon Qubit for Electromagnetic Engineers”, T. E. Roth, R. Ma, W. C. Chew, 2021, arXiv:2106.11352 [quant-ph]

Caption: From the Josephson junction to the quantum processor

In RFICs, the state of a bit can be read with certainty—it’s either 0 or 1. Determining the state of a qubit is much more complicated. Yet, it’s a critical step for accurate calculations. Due to the peculiar laws of quantum mechanics, qubits can exist in more than one state at the same time—a phenomenon called superposition. Superposition allows a qubit to assume a value of 0, 1, or a linear combination of 0 and 1. It’s instrumental to the operations of a quantum computer because it provides exponential speedups in memory and processing. The quantum state is represented inside the quantum hardware, but when qubits are measured, the quantum computer reports out a 0 or a 1 for each.
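
In the standard notation, a qubit state is a superposition of the two basis states, and it collapses to a definite 0 or 1 only when measured:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1
```

Here |α|² and |β|² are the probabilities of reading out 0 and 1, respectively.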

Entanglement is another key quantum mechanical property that describes how the state of one qubit can depend on the state of another. In other words, observing one qubit can reveal the state of its unobserved pair. Unfortunately, observation (i.e., measurement) of the state of a qubit comes at a cost. When measuring, the quantum system is no longer isolated, and its coherence—a definite phase relation between different states—collapses. This phenomenon, quantum decoherence, is roughly described as information loss. The decoherence mechanism is heavily influenced by self and mutual inductance among qubits, which must be modeled with very high accuracy to avoid chip malfunctions.
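
As a worked example of entanglement, consider the canonical two-qubit Bell state (the standard textbook case, not tied to any particular hardware discussed here):

```latex
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
```

Measuring the first qubit as 0 forces the second to read 0, and measuring 1 forces a 1, even though neither outcome is determined before the measurement.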

Quantum processors are frequently implemented using superconductive silicon because it’s lower in cost and easy to scale. Further, it offers longer coherence times compared to other quantum hardware designs. In this implementation, integrated circuits (ICs) are designed using traditional silicon processes and cooled down to temperatures very close to zero Kelvin. Traditional electromagnetic solvers struggle with the complexity and size of quantum systems, so simulation providers need to step up their capacity to meet the moment.

Image credits: IBM

Caption: An IBM quantum computer

Modeling Inductance in Quantum and RFIC Designs

It’s worth noting that superconductors are not new, exotic materials. Common metals like niobium or aluminum are found in superconducting applications. Once these metals are cooled down to a few millikelvin, using a dilution refrigerator, a portion of their electrons no longer flow as they normally would. Instead, they form Cooper pairs. This superconductive current flow results in new electromagnetic effects that need to be accurately modeled. For example, inductance is no longer simply the sum of self and mutual inductance. It includes an additional term, called kinetic inductance.
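
In its simplest form, using generic symbols for the geometric and kinetic contributions (a first-approximation sketch rather than any tool's exact formulation), the total inductance becomes:

```latex
L_{\text{total}} = L_{\text{self}} + L_{\text{mutual}} + L_{\text{kinetic}}
```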

This summation is not as straightforward as it looks. Kinetic inductance has drastically different properties from self and mutual inductance: it is frequency independent but temperature dependent. In a similar fashion, the minimal resistance in a superconductor has different properties from the ohmic resistance of normal conductors (it is proportional to the square of frequency). Electromagnetic modeling tools must account for these physical phenomena both accurately and efficiently.

Scale also poses challenges for electromagnetic solvers. Josephson Junctions, the basic building block of the physical qubit, combine with superconductive loops to form qubit circuits. The metal paths form junctions and loops with dimensions of just a few nanometers. While qubits only need a tiny piece of layout area, they must be combined with much larger circuits for various operations (e.g., control, coupling, measurement). The ideal electromagnetic modeling tool for superconductive hardware design will need to maintain both accuracy and efficiency for layouts ranging from several millimeters down to a few nanometers to be beneficial in all stages of superconductive quantum hardware design.

Image source: “Tunable Topological Beam Splitter in Superconducting Circuit Lattice”, L. Qi, et.al., Quantum Rep. 2021, 3(1), 1-12

Caption: An indicative quantum circuit

 

Looking Forward (or backward – It’s hard to tell with Quantum)

Designers in the quantum computing space need highly accurate electromagnetic models for prototyping and innovation. Simulation providers need to rise to the challenge of scaling to accommodate large, complex designs that push the boundaries of electromagnetic solvers with more and more qubits.

Ansys, the leader in multiphysics simulation, recently launched a new high-capacity, high-speed electromagnetic solver for superconductive silicon. The new solver, RaptorQu, is designed to interface seamlessly with existing silicon design flows and processes. Thus far, our partners are particularly pleased with their ability to accurately predict the performance of their quantum computing circuits.

Caption: Correlation of RaptorQu with HFSS on inductance (left) and resistance (right) for a superconductive circuit

Interested? For updates, keep an eye on our blog.

Dr. Kostas Nikellis, R&D Director at Ansys, Inc., is responsible for the evolution of the electromagnetic modeling engine for high speed and RF SoC silicon designs. He has a broad background in electromagnetic modeling, RF and high-speed silicon design, with several patents and publications in these areas. He joined Helic, Inc. in 2002, and served as R&D Director from 2016 to 2019, when the company was acquired by Ansys, Inc. Dr. Nikellis received his diploma and PhD in Electrical and Computer Engineering in 2000 and 2006 respectively, both from the National Technical University of Athens and his M.B.A. from University of Piraeus in 2014.

Kelly Damalou is Product Manager for the Ansys on-chip electromagnetic simulation portfolio. For the past 20 years she has worked closely with leading semiconductor companies, helping them address their electromagnetic challenges. She joined Ansys in 2019 through the acquisition of Helic, where, since 2004 she held several positions both in Product Development and Field Operations. Kelly holds a diploma in Electrical Engineering from the University of Patras, Greece, and an MBA from the University of Piraeus, Greece.

Also Read:

The Lines Are Blurring Between System and Silicon. You’re Not Ready.

Multiphysics, Multivariate Analysis: An Imperative for Today’s 3D-IC Designs

A Different Perspective: Ansys’ View on the Central Issues Driving EDA Today


Multi-FPGA Prototyping Software – Never Enough of a Good Thing
by Daniel Nenni on 07-06-2022 at 8:00 am

Building a multi-FPGA prototype for SoC verification is complex, with many interdependent parts – and it is “always on a clock”. The best multi-FPGA prototype implementation is worthless if it’s not up and running early in the SoC design cycle, where it offers the highest verification ROI in terms of minimizing the cost of bug fixes and accelerating SoC time-to-market. So, any automation software that enables a more accurate, higher-performing prototype implementation in less time should be warmly welcomed by the SoC verification people prototyping large SoCs.

There are at least three pertinent challenges in the implementation of multi-FPGA prototypes:

  1. Cutting large SoC designs into blocks that will “fit” into each FPGA of a multi-FPGA prototyping platform,
  2. Assuring the overall timing integrity of the multi-FPGA prototype when all the FPGAs are connected together, and
  3. Managing the trade-off between prototype performance and the scarcity of FPGA I/O pins, which limits the amount of logic in each partition “cut” when the design is spread across several FPGAs.

Adding to these prototype implementation challenges are second-order challenges, like connecting thousands of debug probes (which consumes FPGA connectivity and impacts utilization) and connecting to real-world target systems (which consumes FPGA connectivity and I/O); these affect how easy, or difficult, it is to compile all the FPGAs into a multi-FPGA prototype in an acceptable amount of time with manageable effort.  The tighter you pack the FPGAs (higher utilization), the harder it is for the FPGA compiler tools to find a place-and-route solution that meets timing targets, and the longer they will take to complete.  But we’ll defer discussion of these challenges to a future blog.

Automation tools for partitioning large SoCs for multi-FPGA prototyping should offer a spectrum of automation levels, from heavily assisted partitioning, where the user chooses to “guide” the partitioning process with specific design knowledge to produce a specific partitioning result, to fully automatic partitioning, where the user kicks off a partition run and goes for coffee while the partitioner does its thing.  The basis for choosing the level of automation may be as simple as project schedule, where the designer wants to get to a working multi-FPGA prototype in a hurry and is willing to sacrifice prototype performance for fast compile times.  Some SoC designs lend themselves to intuitive partitioning across multiple FPGAs, where the partition “cut lines” are easily imagined by the designer, while other designers choose higher automation due to the complexity of the critical timing paths, the prototype target performance, or an aggressive project schedule.  Partitioning at the RTL level is great for early estimation of performance and of prototype fit into a multi-FPGA hardware platform, while heavy designer involvement in partitioning may go straight to the gate level and render RTL partitioning unnecessary.
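
As a rough illustration of the “fit” problem in challenge 1 above, here is a toy greedy sketch in Python (not S2C’s partitioning algorithm; the block names and gate counts are invented) that assigns design blocks to FPGAs against a per-FPGA gate budget:

```python
# Toy capacity-driven partitioning (illustrative only, not S2C's algorithm):
# greedily assign design blocks to FPGAs without exceeding a gate budget.
def partition(blocks, fpga_capacity_gates, num_fpgas):
    """blocks: dict of block name -> estimated gate count."""
    fpgas = [{"blocks": [], "used": 0} for _ in range(num_fpgas)]
    for name, gates in sorted(blocks.items(), key=lambda kv: -kv[1]):
        # Place each block in the least-loaded FPGA that still has room.
        target = min((f for f in fpgas if f["used"] + gates <= fpga_capacity_gates),
                     key=lambda f: f["used"], default=None)
        if target is None:
            raise ValueError(f"Block {name} does not fit; more FPGAs or finer cuts needed")
        target["blocks"].append(name)
        target["used"] += gates
    return fpgas

design = {"cpu_cluster": 45_000_000, "gpu": 60_000_000,
          "ddr_ctrl": 8_000_000, "noc": 12_000_000, "periph": 5_000_000}
for i, f in enumerate(partition(design, fpga_capacity_gates=80_000_000, num_fpgas=2)):
    print(f"FPGA {i}: {f['blocks']} ({f['used'] / 1e6:.0f}M gates)")
```

A real partitioner must also minimize the number of nets cut between FPGAs, which is exactly the I/O scarcity problem discussed below.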

As unimaginable as it may be today, early commercial multi-FPGA prototyping products did not include integrated timing analysis.  Correct prototype timing in the early days was achieved by applying input stimulus to the prototype and observing the prototype output waveforms with debug probes, then manually adjusting the relative edge timing of failing paths by inserting additional FPGA logic gates to fix hold-time violations.  That approach quickly drew the wrath of early users and led FPGA prototype product providers to integrate timing analysis into the FPGA prototyping flow.  Today’s complex multi-FPGA prototypes would be unmanageably difficult without system-level timing analysis that considers the timing of multiplexed FPGA I/O pins and of the interconnect cables between FPGAs.

The scarcity of FPGA I/O pins continues to be the bane of multi-FPGA prototyping, even with the new massively large prototyping FPGAs from Intel and Xilinx (up to 80M usable gates per FPGA), because the number of “natural partition cut” interconnections between SoC design partitions often far exceeds the available I/O pins on the FPGAs.  The number of partition interconnections can run to the tens of thousands, whereas the number of available I/O pins on the latest prototyping FPGAs is only a few thousand (1,976 maximum single-ended HP I/Os for the Xilinx VU19P, and 2,304 maximum user I/O pins for the Intel Stratix GX 10M).  Consequently, multi-FPGA prototyping must often resort to pin-multiplexing the FPGA I/O pins.  The pin-multiplexing is usually accomplished with TDM soft-IP implemented in FPGA logic gates, with the embedded multiplexors run at the upper limit of the FPGA’s switching speeds.  Higher levels of pin-multiplexing (2:1, 4:1, etc.) expand the effective FPGA I/O but sacrifice prototype performance.
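
As a back-of-the-envelope illustration of that trade-off, the sketch below estimates the effective I/O gain and the performance cost of TDM pin-multiplexing. Only the pin counts come from the paragraph above; the TDM link rate and overhead are assumed round numbers, not vendor specifications:

```python
# Back-of-the-envelope TDM pin-multiplexing trade-off (illustrative numbers only).
def effective_io(physical_pins, mux_ratio):
    """Each physical pin carries mux_ratio time-multiplexed signals."""
    return physical_pins * mux_ratio

def prototype_clock_ceiling(tdm_link_mhz, mux_ratio, overhead_cycles=2):
    """Rough upper bound on the prototype user clock: each user-clock cycle must fit
    mux_ratio serialized transfers plus some framing/synchronization overhead."""
    return tdm_link_mhz / (mux_ratio + overhead_cycles)

for pins, name in [(1976, "Xilinx VU19P"), (2304, "Intel Stratix GX 10M")]:
    for ratio in (2, 4, 8):
        print(f"{name}: {ratio}:1 mux -> {effective_io(pins, ratio)} effective signals, "
              f"~{prototype_clock_ceiling(400, ratio):.0f} MHz ceiling "
              f"(assuming a 400 MHz TDM link)")
```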

So, it goes without saying that more automation for multi-FPGA prototype implementation is a good thing, and it comes as no surprise that S2C would offer more of a good thing to its customers by continuing to advance its multi-FPGA prototyping software.  Hence, S2C has recently announced a new release of its Prodigy Player Pro-7™ prototyping software, for use with its Logic System and Logic Matrix families of multi-FPGA prototyping hardware platforms.  S2C has been in production with these multi-FPGA hardware platforms for a while now; they incorporate the largest available prototyping FPGAs, like the Xilinx VU19P and the Intel Stratix GX 10M.

According to S2C, the salient features of the new Player Pro-7 software include:

  • RTL Partitioning and Module Replication to support Parallel Design Compilation and reduce Time-to-Implementation
  • Pre/Post-Partition System-Level Timing Analysis for Increased Productivity
  • SerDes TDM Mode for Optimal Multi-FPGA Partition Interconnect and Higher Prototype Performance

The new Player Pro-7 software suite is organized into three separate tools: Player Pro-CompileTime™, Player Pro-DebugTime™, and Player Pro-RunTime™.  While the new releases of the DebugTime and RunTime software include upgrades for multi-FPGA debug probing and trace viewing, and for strengthened prototype hardware platform control and test, respectively, the most significant multi-FPGA prototyping feature improvements are in the new CompileTime software.

Previous releases of the Player Pro software supported design partitioning at the gate-level, so RTL partitioning is a big step forward for S2C, simplifying the management of multi-core design implementations, and enabling an early assessment of the number of prototype FPGAs required.

For more information about S2C’s multi-FPGA prototyping hardware and software, please visit S2C’s web site at www.s2cinc.com.  Or, stop by S2C’s booth at the 2022 Design Automation Conference from July 11th to July 13th at the Moscone Center in San Francisco.

Also read:

Flexible prototyping for validation and firmware workflows

White Paper: Advanced SoC Debug with Multi-FPGA Prototyping

Prototype enables new synergy – how Artosyn helps their customers succeed


Accellera Update: CDC, Safety and AMS
by Bernard Murphy on 07-06-2022 at 6:00 am

I recently had an update from Lu Dai, Chairman of Accellera, also Sr. Director of Engineering at Qualcomm. He’s always a pleasure to talk to, in this instance giving me a capsule summary of status in 3 areas that interested me: CDC, Functional Safety and AMS. I will start with CDC, a new proposed working group in Accellera. To manage hierarchical CDC analysis back in my Atrenta days, you would first analyze a block, then use that analysis to define pseudo constraints on ports of the block, and so on up through the hierarchy. These pseudo constraints might capture things like internal input or output synchronization with related clock info. Sort of a CDC-centric abstraction of the block.

We should have guessed that other tool providers would do something similar, with their own constraint extensions. Which creates a problem when using IP from multiple vendors, each of whom use their own tools for CDC. Maybe you would have to re-do the analysis from scratch for a block? Which may not be possible for encrypted RTL. This is an obvious candidate for standardization – defining abstractions in a common language. SDC-based, no doubt, since these constraints must intermingle with the usual input, output and clock constraints. A worthy effort in support of CDC verification teams.

Functional Safety

It might seem that ISO 26262 is the final word in defining functional safety (FuSa) requirements for electronic design in vehicles. In fact, like most ISO standards, ISO 26262 is more about process than detailed guidelines. As tool, IP and system development has advanced to comply with FuSa needs, it has become obvious that we need more rigor in those expectations. Take a simple example: what columns should appear in an FMEDA table, in what order and with what headings? Or could this information be scripted instead? None of this is nailed down by ISO 26262. Formats and scripting approaches are completely unconstrained, creating a potential nightmare for integrators.

More generally, there is a need to ensure standardized interoperability in creating and exchanging FuSa information between suppliers and integrators. Which should in turn encourage more automation. So when I claim my IP meets some safety goal, you don’t just have to take my word for it. You can run your own independent checks. On a related note, the methodology should support traceability (a favorite topic of mine). Allowing for validation across the development lifecycle, from IPs to cars. Incidentally there is a nice intro to Accellera work in this area from DAC 2021.

Lu mentioned a related effort in IEEE. I believe this is IEEE P2851, looking at some fairly closely related topics. Lu tells me the Accellera and IEEE groups have had a number of discussions to ensure they won’t trip over each other. His quick and dirty summary is that Accellera is handling the low-level tool and format details while IEEE is aiming somewhat higher. I’m sure that eventually the two efforts will be merged in some manner.

UVM-AMS

The stated objective of this working group is to standardize a method to drive and monitor analog/mixed-signal nets within UVM. Also to define a framework for the creation of analog/mixed-signal verification components by introducing extensions to digital-centric verification IP.

In talking with Lu, the initial objective is to align with existing AMS efforts, in Verilog, SystemVerilog and SystemC. There’s a nice background to the complexities of AMS modeling in simulation HERE for those of us who might have thought this should be easy to solve. Even the basics of real number modeling are still not frozen. Analog signals are not just continuous variants of digital signals; think of the complex number representations common in RF. So there’s history and learning which the standard should leverage yet not disrupt unnecessarily.

AMS teams want the benefits of UVM methodologies, but they don’t want to start from scratch. Aligning those benefits with existing AMS requirements is the current focus. Lu says that many of these requirements aren’t language specific. The working group is figuring out the semantics of the methodology first, then will look more closely at syntax issues.

Accellera will be presenting more on this topic at DAC 2022 so you’ll have an opportunity to learn more there.


Jade Design Automation’s Register Management Tool
by Kalar Rajendiran on 07-05-2022 at 10:00 am

When more than one person is working on any project, coordination is imperative. When the team size grows, being in sync becomes essential. When it comes to SoC design management, registers and bit fields are used to communicate status of results and execute conditional controls. The Register Management function plays an essential role during the course of any modern day SoC product development. Earlier this year, SemiWiki introduced Jade Design Automation (JadeDA) to its readership, through an interview with its CEO and Founder, Tamas Olaszi.  JadeDA is focused on register management of a chip design starting from the system architecture stage all the way to software bring-up.

This post will discuss register management and a feature to configure RISC-V processor registers. JadeDA will be showcasing their Register Manager tool at the upcoming DAC 2022 in San Francisco. I had an opportunity to chat with Tamas and this blog is based on that conversation.

Register Management Benefits

Where

While register management has always been important on any chip design project, it takes on more importance in today’s world of hardware/software co-design and co-development. Even an average-complexity chip could include 100,000 or even millions of registers. During the design phase, bit fields in those registers can change frequently, even many times a day. This necessitates validation, regeneration of RTL, and updating of UVM collateral and the relevant documentation in as close to real time as possible. Without register management, different teams could be out of sync. For example, a change made by the design team may not be noticed by the verification team right away, and the software team may be working off outdated information, wasting cycles on developing code that would need to be changed.

Following is a real-life example that Tamas narrated during our chat, from an embedded software development project. The documentation the team was working from said to set certain bits, wait for certain things to happen, and then perform some actions. The hardware team knew when that something happened because they had access to an internal register, but the software team did not have access to this register, and no status bits or interrupts were triggered. Without knowing, the software could be waiting forever to take action. This is the kind of thing that can happen when there is no centralized source of information that all teams can review, verify and work from.

While the above example is from an embedded real-time device application, the same goes for any device, including HPC-oriented high-performance applications. The only difference is that we can expect even more frequent updates the larger and more complex a design gets, so the speed at which the centralized information gets updated and all relevant code and documentation gets regenerated becomes critical.

Who

A good register management capability delivers the following benefits to the respective functional roles. It also allows automated broadcast of updated information to all the different teams working on a project.

  • System architects can capture and maintain all the high level system information in a centralized way.
  • System Integrators can pull together IPs from various sources to a centralized platform for enhanced quality.
  • IP teams can auto-generate production ready RTL and UVM register descriptions throughout the development process, which is a great productivity booster.
  • Engineering Managers can monitor the consistent and high quality release deliverables offered to their internal or external customers.
  • Software Engineers can have register information loaded into their debuggers so they can instantly see what register they are working with on a particular offset; they can do this without having to wade through pages of documentation.

JadeDA’s Register Management Tool

JadeDA has kept it simple and straightforward by naming its register management tool the Register Manager. The tool efficiently manages all tasks around the HW/SW interface of an SoC. Users can capture register and bitfield information at the IP level, as well as memory maps at the IP, subsystem and SoC level. From these descriptions, the Register Manager generates RTL, verification, software and documentation collateral, plus interoperability formats like IP-XACT 1685-2009 and 1685-2014.
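
As a conceptual illustration of the single-source idea (a generic Python sketch, not JadeDA’s data model, formats or API), a register description captured once can be rendered into, say, a C header and a documentation table:

```python
# Generic single-source register description rendered to two outputs
# (illustrative only; not JadeDA's data model, formats, or API).
REGISTERS = [
    {"name": "CTRL",   "offset": 0x00, "fields": [("ENABLE", 0, 1), ("MODE", 1, 2)]},
    {"name": "STATUS", "offset": 0x04, "fields": [("READY", 0, 1), ("ERROR", 1, 1)]},
]

def c_header(base_addr, regs):
    lines = []
    for r in regs:
        lines.append(f"#define {r['name']}_REG  (0x{base_addr + r['offset']:08X}u)")
        for fname, lsb, width in r["fields"]:
            mask = ((1 << width) - 1) << lsb
            lines.append(f"#define {r['name']}_{fname}_MASK  (0x{mask:08X}u)")
    return "\n".join(lines)

def doc_table(regs):
    rows = ["| Register | Offset | Fields |", "|---|---|---|"]
    for r in regs:
        fields = ", ".join(f"{f}[{lsb + w - 1}:{lsb}]" for f, lsb, w in r["fields"])
        rows.append(f"| {r['name']} | 0x{r['offset']:02X} | {fields} |")
    return "\n".join(rows)

print(c_header(0x4000_0000, REGISTERS))   # software view
print(doc_table(REGISTERS))               # documentation view
```

If the description changes, both outputs (and, by extension, RTL and UVM collateral in a real flow) are regenerated from the same source, which is the point of keeping every team in sync.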

Data Model and Flexibility for Customization

While data models can be based on standards such as IP-XACT and SystemRDL, standards evolve very slowly. A proprietary data model from a supplier with strong support for customization serves customers well. JadeDA’s importing tools can migrate IP-XACT and SystemRDL based data models. Data models and tools based on IP-XACT usually have vendor extensions, and the JadeDA tool’s data model is richer than what IP-XACT offers. Legacy data in custom formats can be imported via the tool’s API, which is very efficient and well documented. JadeDA can also easily import register information stored in Excel sheets.

GUI

The Register Manager has a rich and intuitive GUI to visualize and edit the HW/SW interface, including the register and bitfield information. The GUI is much more than just entry fields for various attributes: it can be driven with the mouse alone to change attributes like offsets, widths, access types and reset values. There is also full keyboard support with intuitive focus traversal that allows quick and efficient data capture without lifting a hand from the keyboard; pre-existing information from a PDF document can be typed in without having to reach for the mouse between entries. This is a productivity enhancement.

Note: The tool also has a fully functional shell mode for power users as well as fully scriptable command files for automated flows.

Performance

As changes happen in a design, the tool can capture the data, validate it and generate RTL, UVM, documentation, software and IP-XACT collateral in a few seconds. JadeDA has observed that its tool runs an order of magnitude faster than what is available in the marketplace today, and its performance scales linearly.

Processor Registers Configurability Feature

JadeDA will be showcasing this new feature of the Register Manager tool at DAC 2022.

JadeDA can deliver its customers the superset of control and status registers (CSRs) through the tool’s GUI. As customers configure their designs, they can drop the CSRs they don’t need for a particular design. A RISC-V based design serves as a good case study: the RISC-V specification defines a large set of CSRs, not all of which are used by every customer, and different customers, or different projects at the same customer, may use a different selection of CSRs. The tool captures all of the registers in all the detail contained in the RISC-V specification. With the configurability feature, users can configure the particular subset they need. Some of the configuration options are available as RTL configurable parameters; if the customer turns them off, users won’t be able to configure the corresponding registers.
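
To make the idea concrete, here is a minimal sketch (illustrative only, not the Register Manager’s implementation) of filtering a RISC-V CSR superset down to the subset a given configuration enables; the CSR names and addresses are taken from the RISC-V privileged and unprivileged specifications:

```python
# Minimal sketch of configuring a CSR subset by enabled extensions
# (illustrative only, not JadeDA's tool). CSR names/addresses per the RISC-V specs.
CSR_SUPERSET = {
    0x001: ("fflags",  "F"),   # FP accrued exceptions   - requires the F extension
    0x002: ("frm",     "F"),   # FP dynamic rounding mode
    0x003: ("fcsr",    "F"),   # FP control and status
    0x100: ("sstatus", "S"),   # Supervisor status       - requires supervisor mode
    0x105: ("stvec",   "S"),   # Supervisor trap vector
    0x180: ("satp",    "S"),   # Supervisor address translation and protection
    0x300: ("mstatus", None),  # Machine-mode CSRs are always present
    0x305: ("mtvec",   None),
}

def configured_csrs(enabled_extensions):
    """Keep a CSR if it needs no extension or its extension is enabled."""
    return {addr: name for addr, (name, ext) in sorted(CSR_SUPERSET.items())
            if ext is None or ext in enabled_extensions}

print(configured_csrs(set()))          # M-mode-only core, no FPU: just mstatus, mtvec
print(configured_csrs({"F", "S"}))     # application processor: all of the above
```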

This configurability feature is something that JadeDA can implement in its tool for any processor architecture/ISA. Contact JadeDA to explore.

You have to see the live demo to fully appreciate the power of the tool, the user interface and ease of use. See the Register Manager Tool Demo @Booth Number 2430 at DAC 2022 in San Francisco.

Meanwhile, here are some screenshots from the tool. The following two figures show the scenarios when an FPU and its corresponding registers are included in the configuration and when they are not.

 

The following Table shows the supervisor related CSRs found in the RISC-V specification.

The following relates to a case of an application processor where supervisor related CSRs are needed. The screenshot below shows their conditional presence being enabled.

See the Register Manager Tool Demo @Booth Number 2430 at DAC 2022 in San Francisco.


5G for IoT Gets Closer
by Bernard Murphy on 07-05-2022 at 6:00 am

Very recently, 3GPP announced that 5G Release 17 was finalized. One important consequence is that 5G RedCap (reduced capacity) is now real and that means 5G becomes accessible to IoT devices. Think smart wearables (e.g. watches), industrial sensors and surveillance devices. “So what?”, you protest. “I don’t need 5G on my watch. It can link to my phone over Bluetooth and let the phone handle 5G communication.” Yes it can, but have you ever wondered why you always need your phone to use your watch?

That seems like a half-step to convenience, a nice light device on your wrist tethered to an increasingly bulky device in your pocket. When you’re jogging, hiking, working out, wouldn’t it be nice to only need the watch? Industrial sensors and surveillance devices rely more on Wi-Fi for communication but what if the Wi-Fi isn’t very good, or non-existent? Is it time to cut the cord and let these devices talk directly to the cellular network?

The real growth in 5G

The smartphone market is already slowing according to multiple surveys. 5G may generate a boost in support of mobile gaming and high-quality streaming, but the heady growth of early years seems unlikely to re-emerge. That’s why IoT applications have become so interesting: the total available market is not bounded by human users, only by applications. Millions of smart parking meters, moisture sensors in the field, bridge stress sensors, power grid sensors, and so on. Analysts estimate 1.24 billion M2M non-handset devices shipping in 2027. Smart watch volume estimates show up to 230 million units by 2026, making them an encouraging consumer option to offset declining smartphone volumes. There doesn’t seem to be a killer app here; volumes are projected to be roughly divided between public sector infrastructure, smart metering, consumer electronics, intelligent buildings, security, retail and commerce, healthcare, and transport and logistics.

What underlines the strength of this opportunity is that the 5G infrastructure build-out is already underway. It is not happening as fast as we’d like, and it may be a financial challenge for the mobile network providers, but it is coming. There has been talk of expanding the reach of Bluetooth (mesh networks) and Wi-Fi (Wi-Fi 6). Technologically these are possible, but someone must pay for building wide-coverage infrastructure, which seems unlikely given existing investment in 5G infrastructure. Moreover, it’s difficult to beat cellular reach for remote applications – agriculture, highways, power grids, and so on. 5G RedCap is increasingly looking like the best fit for IoT communication.

PentaG2-Lite Well Positioned to Help

As the only 5G NR IP platform on the market, CEVA’s PentaG2 is a compelling choice for those needing an embedded solution to meet cost and power goals. This will be particularly true for IoT builders, who are likely to see a good fit in the PentaG2-Lite version. This IP offers a wide range of accelerators for modem and other functions. First product shipments will probably appear in 2025, but that date requires builders to start planning now. CEVA offers an integrated SystemC simulation environment for architects in support of that early design.

You can learn more by watching this webinar.


Using AI in EDA for Multidisciplinary Design Analysis and Optimization
by Daniel Payne on 07-04-2022 at 10:00 am

Most IC and system engineers follow a familiar process when designing a new product: create a model, use parameters for the model, simulate the model, observe the results, compare results versus requirements, change the parameters or model and repeat until satisfied or it’s time to tape out. On the EDA side, most tools perform some narrow function in a single domain, and it’s up to the EDA user to control the tool, read the results, and then iterate while manually optimizing.

In the late 1980s we saw the birth of smarter EDA tools like logic synthesis, which at first only optimized a gate-level netlist into a reduced form, then later accepted RTL and produced an optimized, process-specific, gate-level netlist. By the mid-2000s, Machine Learning (ML) was being applied to Monte Carlo simulations in SPICE simulators, saving circuit designers time and effort. Recently, even Google has applied ML to produce better placement results for large SoC designs than a human can produce. The trend has been clear: EDA tool developers have created smarter tools, but mostly limited to single domains like logic design, SPICE and floorplanning.

On June 7 some big news in EDA came from Cadence, as they announced the Optimality Intelligent System Explorer, an AI-based approach for Multidisciplinary Design Analysis and Optimization (MDAO). The days of separate silos of EDA tools operating in only one domain are giving way to more complex, multi-domain tools. Cadence has gone so far as to organize a Multi-Physics System Analysis Group, where Ben Gu is the Vice President. The new product name is Optimality Explorer, and it works across three system-level EDA tools:

  • Clarity – 3D Electromagnetic (EM) field solver
  • Sigrity X – Distributed simulation for signal and power integrity (SI/PI)
  • Celsius – Thermal solver (Optimality integration coming soon)

Optimality Explorer

The diagram above shows a system design where a communication channel consists of an IC driver, package, PCB layout, package, and finally a receiver inside the final IC. The criterion for success is optimizing the physical layouts to ensure acceptable return and insertion loss, while managing crosstalk and maintaining signal isolation. Optimality Explorer automatically guides the optimization, using both the Clarity and Sigrity X tools; it decides what to change for each tool run and figures out when an optimal solution has been found.

For example, the system designer specifies that return loss has to be lower than some threshold, and then Optimality Explorer reads from Allegro, creates design variables, controls the optimization process, and finds the optimum solution. Here’s a plot from an optimization run where the criterion was a return loss under -35 dB:

Optimization Results: Return Loss

Each blue dot represents an iteration during optimization, and the red line shows the progress towards the design goal. This automated method of optimization is much faster than the manual approaches used over the past decades; Cadence claims a 10X faster time to optimization using Optimality Explorer.
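
A conceptual sketch of this kind of goal-driven loop is shown below. It is purely illustrative: simple random search stands in for Optimality Explorer’s reinforcement-learning engine, and a placeholder function stands in for a Clarity or Sigrity X run:

```python
# Conceptual goal-driven optimization loop (illustrative only; random search stands in
# for the ML engine, and the solver call below is a made-up placeholder).
import random

def simulate_return_loss_db(trace_width_um, via_pad_um, antipad_um):
    """Placeholder for an EM/SI solver run; returns a made-up return loss in dB."""
    return (-20 - 0.1 * trace_width_um
            - 0.05 * (antipad_um - via_pad_um)
            + random.uniform(-1, 1))

TARGET_DB = -35.0
bounds = {"trace_width_um": (50, 150), "via_pad_um": (200, 400), "antipad_um": (400, 800)}

best = None
for iteration in range(200):
    candidate = {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
    loss = simulate_return_loss_db(**candidate)
    if best is None or loss < best[0]:
        best = (loss, candidate)
    if best[0] <= TARGET_DB:            # stop once the design goal is met
        print(f"Goal met after {iteration + 1} iterations: {best[0]:.1f} dB")
        break
else:
    print(f"Best after 200 iterations: {best[0]:.1f} dB")
```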

The theory of applying ML to optimization sounds good, but what about real world results? Great question. At DesignCon there was a presentation by Kyle Chen of Microsoft, where they used Optimality to optimize micro-stacked vias in a rigid-flex PCB. Kyle wrote, “As an early adopter of the Cadence Optimality Intelligent System Explorer, we stressed its performance on a rigid-flex PCB with multiple via structures and transmission lines. The Optimality Explorer’s AI-driven optimization allowed us to uncover novel designs and methodologies that we would not have achieved otherwise. Optimality Explorer adds intelligence to the powerful Clarity 3D Solver, letting us meet our performance target with accelerated efficiency.”

Micro-Stacked Vias

This approach may sound familiar to Cadence users of the RTL-to-GDS IC design flow, because last year Cadence announced Cerebrus, an AI approach using ML to explore the design space for power, performance and area (PPA) through placement, routing and timing closure. The same kind of reinforcement learning used in Cerebrus has also been applied in Optimality Explorer.

Summary

EDA tools have been used to create every AI chip ever designed, and now AI, and specifically ML, is being applied to EDA tools like Optimality Explorer to explore the design space of systems by optimizing more quickly than manual methods. The first two tools from Cadence that work with Optimality Explorer are Sigrity X and Clarity; expect Celsius to be the next tool added. Multi-physics EDA, or multidisciplinary design analysis and optimization (MDAO), has begun in earnest.

Related Blogs