
S2C and Sirius Wireless Collaborate on Wi-Fi 7 RF IP Verification System

by Daniel Nenni on 05-21-2024 at 6:00 am


Sirius Wireless partnered with FPGA prototyping expert S2C to develop the Wi-Fi 7 RF IP Verification System, enhancing working efficiency and accelerating time-to-market for clients.

Wi-Fi 7 is the latest Wi-Fi technology, with speeds of up to 30 Gbps, approximately three times the peak performance of Wi-Fi 6. This enhanced performance will position Wi-Fi 7 to lead the market quickly, delivering users a more stable and faster wireless experience. However, Wi-Fi 7 sets rigorous standards for chipset designers and RF IP vendors, demanding excellent capabilities to handle 320 MHz bandwidth and 4096-QAM, including faster, lower-noise ADCs/DACs, sophisticated RF designs, and complex baseband processing. Enhanced Error Vector Magnitude (EVM) and noise control requirements in RF front-end modules exceed those of Wi-Fi 6/6E. Features like MRU and MLO increase complexity in baseband and MAC layer processing. Overcoming these challenges requires innovative system architectures, algorithm designs, and advanced semiconductor processes for optimized performance and power management. Chip designers must also ensure flexible software support for interoperability among expanding wireless protocols, enhancing user experience while catering to diverse application demands.
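
For a rough sense of the EVM requirement, the conversion between the RMS-percentage and dB forms of EVM is a one-liner. The -38 dB figure used below is the commonly cited 802.11be transmit-EVM target for 4096-QAM; it is an assumption of this sketch, not a number from the article.

```python
# EVM is quoted either as an RMS percentage or in dB; the two are related
# by EVM_dB = 20 * log10(EVM_rms). The -38 dB value is the commonly cited
# 802.11be transmit requirement for 4096-QAM (assumption, not from the text).
import math

def evm_db_to_percent(evm_db):
    """Convert an EVM figure in dB to an RMS percentage."""
    return 10 ** (evm_db / 20) * 100

def evm_percent_to_db(evm_pct):
    """Convert an RMS-percentage EVM figure to dB."""
    return 20 * math.log10(evm_pct / 100)

print(round(evm_db_to_percent(-38), 2))   # 1.26 (percent)
print(round(evm_percent_to_db(1.26), 1))  # -38.0 (dB)
```

In other words, a -38 dB requirement means the RMS error vector must stay below roughly 1.3% of the reference signal, which is what drives the low-noise ADC/DAC and RF front-end demands described above.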

Leveraging the S2C Prodigy S7-9P Logic System, Sirius Wireless conducted comprehensive verification and testing of RF performance indicators such as throughput, reception sensitivity, and EVM. It then used Prodigy Prototype Ready IP, S2C’s ready-to-use daughter cards and accessories, to interface with the digital MAC, providing an end-to-end verification solution from RF to MAC. This approach tamed the RF design challenge and accelerated time-to-market by shortening the entire chip verification cycle.

Sirius Wireless Validates Wi-Fi 7 RF with S2C Prodigy S7-9P Logic System

S2C’s extensive range of prototyping tools, including a productivity software suite, debugging solutions, and daughter boards, empowers designers to accelerate functional verification by quickly building a target prototyping environment. In addition, the Prodigy S7-9P Logic System serves as a pre-tape-out demonstration platform that helps customers kickstart software development early. An example of these benefits is Sirius’s development of its Wi-Fi 6 IP verification system. With this system, one of Sirius’s customers in short-range wireless chip design took only three months to complete pre-silicon hardware performance analysis and performance comparison testing. The company thus shortened its production verification time and its customers’ product introduction cycle, improving efficiency by over 40%.

Sam Chu, VP of Marketing at Sirius Wireless, states, “We have had a longstanding, deep collaboration with S2C, jointly providing end-to-end verification solutions from RF to MAC for our clients. After our successful partnership on Wi-Fi 6, we’re confident in S2C’s Prodigy System for Wi-Fi 7 development. Its mature performance, user-friendly operation, and abundant validation experience reinforce our high expectations for our Wi-Fi 7 products.”

“S2C aims to boost partners’ market competitiveness”, said Ying Chen, VP of Sales & Marketing at S2C, “Sirius Wireless stands out in RF IP, being the sole company with TSMC’s advanced processes and Wi-Fi 7 RF design expertise. S2C is glad to work together with them to breathe new life into the whole industry.”

About Sirius Wireless

Headquartered in Singapore, Sirius Wireless was established in 2018. The company’s R&D staff have more than 15 years of working experience in Wi-Fi and Bluetooth RF, ASIC, software, and hardware design.

About S2C

S2C is a leading global supplier of FPGA prototyping solutions for today’s innovative SoC and ASIC designs, now with the second largest share of the global prototyping market. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 600 customers, including 6 of the world’s top 10 semiconductor companies, our world-class engineering team and customer-centric sales team are experts at addressing our customers’ SoC and ASIC verification needs. S2C has offices and sales representatives in the US, Europe, mainland China, Hong Kong, Korea and Japan. For more information, please visit: https://www.s2cinc.com/

Also Read:

Accelerate SoC Design: DIY, FPGA Boards & Commercial Prototyping Solutions (I)

Enhancing the RISC-V Ecosystem with S2C Prototyping Solution

2024 Outlook with Toshio Nakama of S2C


An open letter regarding Cyber Resilience of the UK’s Critical National Infrastructure

by admin on 05-20-2024 at 10:00 am


Codasip announced a commercially available RISC-V processor with CHERI for license in October 2023 and is demonstrating technology for IP provenance.

Dear Members of the Science, Innovation and Technology Committee,

Let me start by applauding your hearing on 24 April 2024, and in particular the evidence of Professor John Goodacre, Challenge Director of Digital Security by Design at Innovate UK, and Mr Richard Grisenthwaite, Executive Vice President and Chief Architect at Arm. During this hearing, the witnesses discussed two extremely important cybersecurity issues: memory safety and IP provenance. In this letter, I would like to provide additional information about these topics that the committee should find relevant.

WEBINAR: Fine-grained Memory Protection to Prevent Cyber Attacks

Memory Safety and CHERI

As discussed in the hearing, memory safety issues represent roughly 70-80% of the cyber issues being tracked by the industry. These issues are referred to as Common Vulnerabilities and Exposures, or CVEs. The number of CVEs has grown exponentially over the last twenty years while the percentage of memory safety CVEs has been roughly constant.

Figure 1: Published CVE records

The reason is primarily related to the fact that most software is written in languages like C and C++, which do not provide inherent memory protection. What complicates the problem even more is that software is not normally developed monolithically, but by integrating pre-developed software from third parties, including open-source, where absolutely anyone can contribute potentially malicious changes.
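
To make the gap concrete: a CHERI-style capability replaces a raw pointer with a pointer plus bounds and permissions, so every access is checked rather than trusted. The Python below is a conceptual model only, not the actual CHERI ISA; the class and field names are invented for illustration.

```python
# Conceptual sketch of a capability (not the real CHERI ISA): a pointer
# carries bounds and permissions, and every dereference is checked in
# hardware, so a C-style out-of-bounds access traps instead of succeeding.

class Capability:
    def __init__(self, base, length, perms=("load",)):
        self.base = base         # lowest address the capability may touch
        self.length = length     # size of the permitted region in bytes
        self.perms = set(perms)  # operations this capability allows

    def load(self, memory, offset):
        # The bounds/permission check that a raw C pointer never performs.
        if "load" not in self.perms or not (0 <= offset < self.length):
            raise PermissionError("capability violation: out-of-bounds load")
        return memory[self.base + offset]

memory = bytearray(16)
cap = Capability(base=4, length=8)
value = cap.load(memory, 3)   # in bounds: succeeds
try:
    cap.load(memory, 12)      # past the 8-byte region: trapped
except PermissionError as e:
    print(e)
```

The point of the sketch is that the bug (an out-of-range offset) still exists in the software, but it can no longer be exploited: the hardware refuses the access.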

Figure 2: Percentage of CVEs caused by memory safety issues. Source: Trends, challenges and strategic shifts in the software vulnerability mitigation landscape. Matt Miller, Microsoft Security Response Center (MSRC), Blue Hat IL, 2019

A rough estimate is that over one trillion lines of code are in use today, an enormous amount! The software industry has improved over the last decades, especially regarding “verification”, which is the part of the development process that checks for bugs and corrects them. However, as verification will never be perfect, nor will any developer, there will always be bugs for hackers to exploit in cyberattacks.

The UK is not alone in noticing the enormous memory safety issue: the United States White House issued a press release on 26 February 2024 entitled Future Software Should Be Memory Safe.

As Professor Goodacre and Mr Grisenthwaite noted in the hearing, there are economic challenges for companies to take action to address memory safety issues, so they have been slow to do so, even where solutions are readily available. You may think of this situation as similar to the automotive industry’s challenge in adopting safety features that are standard today: seat belts, airbags, and crumple zones. It took decades to have such basic features in all automobiles, and it was only after regulations required them that every manufacturer did so.

For cyberattacks, whilst they are increasingly devastating, causing roughly $10 trillion of economic loss worldwide each year, the direct impact on each company is small enough that all too many do not choose to protect their customers.

In the US, the White House has realised this fact and indicates in its press release that it will be taking action “…shifting the responsibility of cybersecurity away from individuals and small businesses and onto large organizations like technology companies and the Federal Government that are more capable of managing the ever-evolving threat.”

Over the last decade, Professor Goodacre has led outstanding work on CHERI at Digital Security by Design (DSbD), partnering with universities such as the University of Cambridge and semiconductor companies such as Arm. Indeed, Arm has produced a valuable CHERI research platform called Morello. During the hearing, Professor Goodacre noted that despite this exceptional work, the problem in general and specifically with Morello is that it is not a commercial offering, and consequently the industry has not been able to deploy CHERI. Whilst this has been true, I am pleased to update the committee that Codasip has recently launched a commercially available CHERI processor for license and has committed to making its entire portfolio of processors available with CHERI variants. We are also working very closely with the University of Cambridge and other companies to ensure CHERI is standardised and available to everyone.

Design Provenance and Traceability

The second topic discussed by Professor Goodacre and Mr Grisenthwaite was design provenance, which we believe must also include traceability. By provenance, we mean the origin of the design, including knowing the specific designers. By traceability, we mean changes to the design over time, including knowing the specific designers that made the changes. Additional information regarding the design, such as when, where, and with what tools changes were made should also be collected.

As Professor Goodacre and Mr Grisenthwaite explained, most semiconductor chips today are complete systems in themselves containing billions of transistors. Given the incredible complexity, chips are not designed monolithically, transistor by transistor, but assembled from pre-designed “IP blocks”, such as processors, memory, on-chip interconnects between IP blocks, and chip-to-chip interconnects such as USB. Companies like Arm and Codasip make processor IP blocks, while companies like Synopsys and Cadence make memory and chip-to-chip interconnects. Indeed, there is an entire IP industry for semiconductors. As Professor Goodacre and Mr Grisenthwaite discussed during the hearing, some IP are more prone to cyber issues than others, with processors being the most important and problematic.

For the previously discussed topic of memory safety, CHERI involves invasive changes to the processor and lesser changes to memory. The additional cybersecurity challenge regarding provenance and traceability is that when one licenses IP blocks, one does not know who actually designed the IP, nor its possible history of modification. Consequently, when the inevitable bugs are found, it is not possible to irrefutably determine who made the errors. Most bugs will be accidental, but it is also possible that nefarious actors could have inserted malicious circuitry to appear as an accidental bug. We believe that provenance and traceability will increase in importance as cyberattacks increase in frequency and are increasingly used in military conflicts – indeed the Economist recently noted that “the cyberwar in Ukraine is as crucial as the battle in the trenches”.

Fortunately, Codasip is also addressing the problem of provenance and traceability with a new software tool using blockchain technology to irrefutably log the processor design process and create a record of provenance with traceability. This new software tool is currently being demonstrated to customers in a pre-release version.
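
As an illustration of the underlying idea (Codasip’s actual tool is not public, so all names below are invented), a tamper-evident design log can be built as a hash chain: each entry commits to the hash of the previous entry, so altering any historical record invalidates every later hash.

```python
# Minimal hash-chain sketch of a tamper-evident design log. Illustrative
# only; this is not Codasip's tool, and the entry fields are invented.
import hashlib
import json

def append_entry(chain, author, change):
    """Append a log entry that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"author": author, "change": change, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

def verify(chain):
    """Recompute every hash; any edit to history breaks the chain."""
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

chain = []
append_entry(chain, "alice", "add branch predictor")
append_entry(chain, "bob", "fix decode bug")
print(verify(chain))            # True
chain[0]["change"] = "tampered" # rewrite history...
print(verify(chain))            # False
```

A real provenance system would additionally sign each entry and anchor the chain somewhere writers cannot rewrite, but the detection principle is the same.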

So, in summary, Codasip today has solutions to the two major problems that the committee identified in its hearing: (i) commercial availability of CHERI-based processors; and (ii) methods for provenance and traceability of semiconductor IP blocks. Much of this work was and is being done in the UK, and the rest is done solely in Europe as we do not have R&D in other geographies.

If the committee has further interest in the technology we are making available, it would be a pleasure to arrange a meeting at your convenience.

Sincerely yours,

Dr Ron Black

CEO, Codasip GmbH

ron.black@codasip.com

WEBINAR: Fine-grained Memory Protection to Prevent RISC-V Cyber Attacks

About Dr. Ron Black

Dr Black, CEO at Codasip since 2021, has over 30 years of industry experience. Before joining Codasip, he was President and CEO at Imagination Technologies and previously CEO at Rambus, MobiWire, UPEK, and Wavecom. He holds a BS and MS in Engineering and a PhD in Materials Science from Cornell University. A consistent thread of his career has been processors, including PowerPC at IBM, network processors at Freescale, security processors at Rambus, and GPUs and CPUs at Imagination.

About Codasip 

Codasip is a processor technology company enabling system-on-chip developers to differentiate their products for competitive advantage. Codasip is based in Munich and has development centres throughout Europe, including in Bristol and Cambridge in the UK. The company specializes in processors based on RISC-V (Reduced Instruction Set Computing, Generation Five), which is an open Instruction Set Architecture (ISA) alternative to proprietary architectures such as Arm and Intel x86. Codasip also has extensive experience in cybersecurity, with a team in Bristol that has spent the last two years architecting and designing the recently announced CHERI processor.

Also Read:

Webinar: Fine-grained Memory Protection to Prevent RISC-V Cyber Attacks

How Codasip Unleashed CHERI and Created a Paradigm Shift for Secured Innovation

RISC-V Summit Buzz – Ron Black Unveils Codasip’s Paradigm Shift for Secured Innovation


How to Find and Fix Soft Reset Metastability

by Mike Gianfagna on 05-20-2024 at 6:00 am


Most of us are familiar with the metastability problems that can be caused by clock domain crossings (CDC). Early static analysis techniques can flag these kinds of issues to ensure there are no surprises later. I spent quite a bit of time at Atrenta, the SpyGlass company, so I am very familiar with these challenges. Due to the demands of high-speed interfaces, the need to reduce power, and the growing focus on functional safety, soft resets are often used in advanced designs to clear potential errors. This practice can create hard-to-find metastability issues which remind me of CDC challenges. Siemens Digital Industries recently published a comprehensive white paper on this class of problem. If you use soft resets in your design, it’s a must read. A link is coming, but first let’s look at what Siemens has to say about how to find and fix soft reset metastability.

The Problem

As design complexity increases, systems contain many components such as processors, power management blocks, and DSP cores. To address low-power, high-performance and functional safety requirements, these designs are now equipped with several asynchronous and soft reset signals. These signals help safeguard software and hardware functional safety – they can quickly recover the system to an initial state and clear any pending errors or events. Using soft resets vs. a complete system re-start saves time and power.

The multiple asynchronous reset sources found in today’s complex designs result in multiple reset domain crossings (RDCs). This can lead to systematic faults that create data corruption, glitches, metastability or functional failures. This class of problem is not covered by standard, static verification methods such as the previously mentioned CDC analysis. And so, a proper reset domain crossing verification methodology is required to prevent errors in reset design during the RTL verification stage.

Let’s look at an example circuit that can cause soft reset metastability. A reset domain crossing (RDC) occurs when a path’s transmitting flop has an asynchronous reset, and the receiving flop has either a different asynchronous reset than the transmitting flop or has no reset at all. These two examples are summarized in the figure below.

Circuits with potential soft reset metastability issues

The circuit on the left shows a simple RDC problem between two flops in different asynchronous reset domains. The asynchronous assertion of the rst1 signal immediately changes the output of the Tx flop to its assertion value. Since the assertion is asynchronous to clock clk, the output of the Tx flop can change near the active clock edge of the Rx flop, which can violate the setup and hold timing constraints of the Rx flop. As a result, the Rx flop can go into a metastable state.

To review, metastability is a state in which the output of a register is unpredictable or is in a quasi-stable state. The circuit on the right shows an RDC problem from a flop with an asynchronous reset domain to a non-resettable register (NRR), which does not have a reset pin.

Note that an RDC path with different reset domains on the transmitter and receiver does not guarantee that the path is unsafe.

Also, an RDC path having the same asynchronous reset domains on the transmitter and receiver does not guarantee that the path is safe, as issues may occur due to soft resets. Different soft resets in a design can induce metastability and cause unpredictable reset operations or, in the worst case, overheating of the device during reset assertion.
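
The basic structural rule above can be sketched as a toy static check. This is a simplified model for illustration only; real RDC tools analyze complete netlists and also handle the soft-reset cases just mentioned, which this sketch deliberately ignores, and all names here are invented.

```python
# Toy static RDC check: flag a path when the transmitting flop has an
# asynchronous reset and the receiver sits in a different reset domain,
# or has no reset at all. Simplified illustration, not a real RDC tool.

def rdc_hazards(flops, paths):
    """flops: flop name -> async reset domain name (None = non-resettable).
    paths: iterable of (tx, rx) flop-name pairs."""
    hazards = []
    for tx, rx in paths:
        tx_rst, rx_rst = flops[tx], flops[rx]
        # A resettable transmitter driving a different (or missing)
        # reset domain is a potential metastability hazard.
        if tx_rst is not None and tx_rst != rx_rst:
            hazards.append((tx, rx))
    return hazards

# The two cases from the figure: Tx->Rx (different domains) and
# Tx->NRR (receiver has no reset pin).
flops = {"Tx": "rst1", "Rx": "rst2", "NRR": None}
print(rdc_hazards(flops, [("Tx", "Rx"), ("Tx", "NRR"), ("Rx", "Rx")]))
# [('Tx', 'Rx'), ('Tx', 'NRR')]
```

As the surrounding text notes, passing this structural check is not sufficient: same-domain paths can still misbehave under soft resets, which is exactly why the white paper's fuller methodology is needed.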

There are many additional examples in the white paper along with a detailed discussion of what type of analysis is required to determine if a potential real problem exists. A link is coming so you can learn more.

The Solution

The white paper then proposes a methodology to detect RDC issues. It is pointed out that RDC bugs, if ignored, can have severe consequences on system functionality, timing, and reliability. To ensure proper operation and avoid the associated risks, it is essential to detect unsafe RDCs systematically and apply appropriate synchronization techniques to tackle any issues that may arise due to reset path delays caused by soft resets.

The white paper explains that, by handling RDCs effectively, designers can mitigate potential issues and enhance the overall robustness and performance of a design. A systematic flow to assist in RDC verification closure using standard RDC verification tools is detailed in the white paper. The overall flow for this methodology is shown in the figure below.

Flowchart for proposed methodology for RDC verification

To Learn More

If some of the design challenges discussed here resonate with you, the Siemens Digital Industries white paper is a must read. Beyond a detailed explanation of the approach to address these design issues, data from real designs is also presented. The results are impressive.

You can get your copy of the white paper here. You will also find several additional resources on that page that present more details on RDC analysis. You will learn a lot about how to find and fix soft reset metastability.


Podcast EP224: An Overview of the Upcoming 2024 Electronic Components and Technology Conference with Dr. Michael Mayer

by Daniel Nenni on 05-17-2024 at 10:00 am

Dan is joined by Dr. Michael Mayer, the 2024 Electronic Components and Technology Conference (ECTC) Program Chair. Michael is an Associate Professor in the department of Mechanical and Mechatronics Engineering at the University of Waterloo in Ontario, Canada. Michael has co-authored technical publications and patents about wire bonding methods and various microsensor tools for diagnostics of bonding processes, as well as reliability of micro joints. More recently he has been working on direct bonding of optical glasses and laser joining of biological materials.

Michael discusses the upcoming ECTC conference with Dan. The event will take place May 28 – 31, 2024 in Denver, Colorado. Michael discusses some of the innovation trends such as hybrid bonding presented at ECTC and how these technologies are paving the way for 2.5/3D heterogeneous integration. Michael provides an overview of the broad research in design, packaging and manufacturing that is presented at the conference.

Michael also discusses the trends in university research for advanced materials and packaging and highlights the more than 10 professional development courses available at the upcoming ECTC.


A Webinar with Silicon Catalyst, ST Microelectronics and an Exciting MEMS Development Contest

by Mike Gianfagna on 05-17-2024 at 8:00 am


Most MEMS and sensor companies struggle to find an industrialization partner that can support early-stage research and help develop and transition unique concepts to high-volume production. The wrong partner means delays and increased development costs as the design moves between various facilities. Recently, Silicon Catalyst joined forces with ST Microelectronics and a few other partners in a webinar to discuss these challenges. Silicon Catalyst also announced an exciting contest that helps new entrants to the MEMS market to get off the ground. If your product plans include MEMS devices, you will want to watch this webinar. A link is coming, but most importantly you’ll also want to check out the contest – this could be your big break. Read on to learn about a webinar with Silicon Catalyst, ST Microelectronics and an exciting MEMS development contest.

Webinar Background – the MEMS Development Dilemma

The event contained several parts:

  • A webinar introduction from Paul Pickering, Managing Partner, Silicon Catalyst
  • A useful MEMS industry highlights presentation from Pierre Delbos, Market and Technology Analyst, Yole Group
  • An overview of the unique ST Lab-in-Fab fabrication concept from Dr. Andreja Erbes, Director, STMicroelectronics
  • Details of the contest
  • An informative Q&A session
Webinar Presenters

I highly recommend you watch this webinar; there is a lot of very useful information. A link is coming, but first let’s take a quick look at the key points.

Silicon Catalyst Introduction

Paul began the event by reviewing the incredible ecosystem Silicon Catalyst has built to foster semiconductor-related innovation. The organization does have a focus on advanced materials, so the MEMS topic fits quite well. The organization has a list of high-profile strategic partners, as shown in the graphic below.

Silicon Catalyst’s Strategic Partners

You can learn more about Silicon Catalyst’s impact and the impact of its incubator companies on SemiWiki here.

MEMS Overview from The Yole Group

Pierre provided an eye-opening tour of the MEMS industry. Yole has been tracking this market for 20 years and the diagram below shows the steady growth over that time. Pierre reported that there were 30 billion units in the MEMS market in 2023.

MEMS Industry History

Pierre went on to review the players, markets and growth areas. You will likely learn a few things about this market by listening to Pierre’s overview.

Addressing MEMS Development With “Lab-in-Fab” Approach

Next, Dr. Andreja Erbes discussed the challenges of MEMS development and presented a unique approach being pioneered by ST Microelectronics. The challenge Andreja described is one of too many hand-offs in the MEMS development process as a concept moves from idea to production. Each handoff (e.g., research, low-volume production, high-volume production) introduces delays, new learning curves and opportunities for errors. This flow is depicted in the figure below.

Typical MEMS Development Cycle

ST Microelectronics, in cooperation with the Institute of Microelectronics (IME), has built a unique MEMS development facility in Singapore. By bringing all phases of MEMS product development into one location, ST is delivering leading-edge competence and access to a global ecosystem. The figure below summarizes the elements of this rapid product development strategy.

Rapid Product Development

Andreja went on to describe the substantial physical campus layout, including a virtual connection to a fab in Italy. The development capabilities of the unique Lab-in-Fab are reviewed in detail, along with example applications and third-party collaborations. It’s a very impressive overview. The figure below summarizes the engagement model and accelerated timeline that is enabled.

Lab in Fab Engagement Model

To Enter the Contest and Learn More

And now for the key item – entering the contest. Silicon Catalyst and STMicroelectronics announced the 2024 Lab-in-Fab development contest during the webinar.

The contest affords companies of all sizes an opportunity to align with one of the premier MEMS manufacturing companies and benefit from its development expertise and world-class fabrication. Silicon Catalyst will conduct the screening and selection process, which will include various MEMS experts as judges.

The contest offers the opportunity to engage with the Lab-in-Fab team for a free project evaluation. This includes:

  • Expense paid visit to meet the teams in Singapore (reimbursable expenses up to $10K USD)
  • Work with world-class teams to scope out the manufacturing plan
  • Participate in various PR activities with ST, IME and Silicon Catalyst
  • Receive introductions to investors and VC’s to help fund your project

The deadline for submission to the contest is Monday, June 3, 2024. The winner will be notified by Wednesday, June 12th.

Click here for a page with a link to watch the webinar replay and to access the short contest entry form. Click on it today! And that’s the details on a webinar with Silicon Catalyst, ST Microelectronics and an exciting MEMS development contest.


CEO Interview: Roger Espasa of Semidynamics

by Daniel Nenni on 05-17-2024 at 6:00 am


Roger Espasa is the CEO and founder of Semidynamics, an IP supplier of two RISC-V cores, Avispado (in-order) and Atrevido (out-of-order), supporting the RISC-V vector extension and Gazzillion™ misses, both targeted at HPC and Artificial Intelligence. Prior to founding the company, Roger was Technical Director/Distinguished Engineer at Broadcom, leading a team designing a custom ARMv8/v7 processor on 28nm for the set-top box market. Before his time at Broadcom, from 2002 to 2014, Roger led various x86 projects at Intel as Principal Engineer: the SIMD/vector unit and texture sampler on Knights Ferry (45nm), the L2 cache and texture sampler on Knights Corner (22nm), the out-of-order core on Knights Landing (14nm) and the Knights Hill core (10nm). From 1999 to 2001 he worked for the Alpha Microprocessor Group on a vector extension to the Alpha architecture.

Roger received his PhD in Computer Science from Universitat Politècnica de Catalunya in 1997 and has published over 40 peer-reviewed papers on Vector Architectures, Graphics/3D Architecture, Binary Translation and Optimization, Branch Prediction, and Media ISA Extensions. Roger holds 9 patents with 41 international filings.

Tell us about your company?
Processors are my passion. I’ve worked on major processor architectures such as Alpha, x86, Arm and now RISC-V. When I became aware of the new RISC-V architecture, I realised that it was going to be the future of processors. Rather than being locked into a choice of either Arm or Intel, companies would have a choice of which IP processor vendor they wanted to use. In addition to vendor choice, the fact that RISC-V is an open standard means that both customers and vendors can extend the ISA with whatever features they need. This flexibility and freedom to change is something you simply can’t have if you are using Arm or Intel.

So, in 2016, I founded the company and we did a multi-core RISC-V chip design for Esperanto Technologies. This financed the company as it started up. We had some other design projects that provided cash flow while we developed our own range of 64-bit RISC-V IP cores, such as Atrevido, which we announced last year. I am proud to say that we are entirely self-funded through sales and a few European grants, which has enabled us to build a dynamic, highly knowledgeable team of over 70 and growing. This means that we are totally in control of our destiny and the pace at which we build the business.

What problems are you solving?
The key problem is that customers have a limited choice when it comes to IP cores, even if you include ARM as a supplier. Furthermore, those IP cores tend to come in a “fixed menu” format, i.e., you can’t add custom features to them. Granted, they all come with some configuration options (cache size, for example), but they can hardly ever be expanded with the customer’s special features needed for their application. We made the decision to accept any change request made by the customer, even if it implied deep “surgery” inside the core. Hence came our motto, “Open Core Surgery”. With us, the customer has total control over the specification, be it new instructions, separate address spaces, new memory accessing capabilities, etc.

This means that Semidynamics can precisely tailor a core to meet each project’s needs so there are no unnecessary overheads or compromises. Even more importantly, Semidynamics can implement a customer’s ‘secret sauce’ instructions and features into the core in a matter of weeks, which is something that no-one else offers.

Semidynamics also enables customers to achieve a fast time to market for their customised core as a first drop can be delivered that will run on an FPGA. This enables the customer to check functionality and run software on it while Semidynamics does the core verification. By doing these actions in parallel, the product can be brought to market faster and with reduced risk.

What application areas are your strongest?
We target any application that needs to move massive amounts of data around very fast such as AI and ML. Semidynamics has the fastest cores on the market for moving large amounts of data even when the data does not fit in the cache. Thanks to our “Gazzillion™ technology”, we can sustain a bandwidth of a “cache-line per clock cycle”, i.e., 64 Bytes every clock. And this can be done at frequencies up to 2.4 GHz on the right node. The rest of the market averages about a cache line every many, many cycles; that is nowhere near Semidynamics’ one every cycle. This makes the core perfect for applications that stream a lot of data and/or the application touches very large data that does not fit in cache. This unique capability is thanks to the fact that our cores can support up to 128 simultaneous requests for data and track them back to the correct place in whatever order they are returned. This is nearly 20 times more requests than competitors.
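
The headline numbers above can be sanity-checked with simple arithmetic: one 64-byte cache line per clock at the quoted 2.4 GHz works out as follows.

```python
# Back-of-the-envelope check of the sustained-bandwidth claim:
# one 64-byte cache line per clock cycle at 2.4 GHz.
bytes_per_cycle = 64   # one cache line, as quoted in the interview
freq_hz = 2.4e9        # quoted maximum clock "on the right node"
bandwidth_gb_s = bytes_per_cycle * freq_hz / 1e9
print(bandwidth_gb_s)  # 153.6 (GB/s)
```

That is the sustained figure the "cache-line per clock cycle" claim implies; a core that instead delivers a line every N cycles sustains only 1/N of it.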

This ability to move large amounts of data is required by Semidynamics’ Vector Unit, which is the largest fully customisable Vector Unit in the RISC-V market, delivering up to 2048b of computation per cycle for unprecedented data handling. The Vector Unit is composed of several ‘vector cores’, roughly equivalent to a GPU core, that perform multiple calculations in parallel. Each vector core has arithmetic units capable of performing addition, subtraction, fused multiply-add, division, square root, and logic operations. Semidynamics’ vector core can be tailored to support different data types: FP64, FP32, FP16, BF16, INT64, INT32, INT16, INT8, or INT4, depending on the customer’s target application domain. The largest data type size in bits defines the vector core width, or ELEN. Customers then select the number of vector cores to be implemented within the Vector Unit, either 4, 8, 16 or 32 cores, catering for a very wide range of power-performance-area trade-off options. Once these choices are made, the total Vector Unit data path width, or DLEN, is ELEN x number of vector cores. Semidynamics supports DLEN configurations from 128b to 2048b.
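
The configuration rule just described is easy to mechanise: the helper below simply encodes DLEN = ELEN × number of vector cores, using the core counts quoted in the interview.

```python
# DLEN = ELEN * number of vector cores, as described in the interview.
# ELEN is set by the widest supported data type (e.g. 64 bits for FP64/INT64).

def dlen(elen_bits, num_vector_cores):
    # 4/8/16/32 are the configurations quoted in the text.
    assert num_vector_cores in (4, 8, 16, 32)
    return elen_bits * num_vector_cores

print(dlen(64, 32))  # 2048 -> the widest configuration mentioned
print(dlen(32, 4))   # 128  -> the narrowest
```

The two printed values bracket the stated 128b-to-2048b DLEN range.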

Last but not least, our Tensor Unit is built on top of the Semidynamics RVV1.0 Vector Processing Unit and leverages the existing vector registers to store matrices. This enables the Tensor Unit to be used for layers that require matrix multiply capabilities, such as Fully Connected and Convolution, and use the Vector Unit for the activation function layers (ReLU, Sigmoid, Softmax, etc), which is a big improvement over stand-alone NPUs that usually have trouble dealing with activation layers.

The Tensor Unit leverages both the Vector Unit capabilities as well as the Atrevido-423 Gazzillion™ capabilities to fetch the data it needs from memory. Tensor Units consume data at an astounding rate and, without Gazzillion, a normal core would not keep up with the Tensor Unit’s demands. Other solutions rely on difficult-to-program DMAs to solve this problem. Instead, Semidynamics seamlessly integrates the Tensor Unit into its cache-coherent subsystem, opening a new era of programming simplicity for AI software.

Every designer using RISC-V wants the perfect combination of power, performance and area along with unique differentiating features, and now, for the first time, they can have just that. This makes our cores ideal for next-generation applications in AI, Machine Learning (ML) and High-Performance Computing, especially where big data, such as ChatGPT’s 14GB, just won’t fit into L1, L2 or L3 cache.

What keeps your customers up at night?
Finding that their data is too big to be handled with standard core offerings that also struggle to cope with the flow of data. There is a huge demand for AI hardware where this is a major problem. Our solution is the new All-In-One AI IP. This brings together all our innovations to create a unified IP solution that combines RISC-V, Vector, Tensor and Gazzillion technology so that AI chips are now easy to program and scale to whatever processing power is required.

The problem that we address is that the data volume and processing demand of AI are constantly increasing, and the current solution is, essentially, to integrate more individual functional blocks. The CPU distributes dedicated partial workloads to GPGPUs (general-purpose graphics processing units) and NPUs (neural processing units), and manages the communication between these units. But this has a major issue: moving the data between the blocks creates high latency. The current AI chip configuration is inelegant, typically involving three different IP vendors and three software tool chains, with poor PPA (Power, Performance, Area), and it is increasingly hard to adapt to new algorithms. For example, such designs have difficulty handling an AI algorithm called a transformer.

We have created a completely new approach that is easy to program as there is just the RISC-V instruction set and a single software development environment. Integrating the various blocks into one RISC-V AI processing element means that new AI algorithms can easily be deployed without worrying about where to distribute which workload. The data sits in the vector registers and can be used by the Vector Unit or the Tensor Unit, with each part simply waiting in turn to access the same location as needed. Thus, there is zero communication latency, and the minimized caches lead to optimized PPA; most importantly, the design easily scales to meet greater processing and data handling requirements.

In our solution there is just one IP supplier, one RISC-V instruction set and one tool chain making implementation significantly easier and faster with reduced risk. As many of these new processing elements as required to meet the application’s needs can be put together on a single chip to create a next generation, ultra-powerful AI chip.

The RISC-V core inside our All-In-One AI IP provides the ‘intelligence’ to adapt to today’s most complex AI algorithms and even to algorithms that have not been invented yet. The Tensor Unit provides the sheer matrix multiply capability for convolutions, while the Vector Unit, with its fully general programmability, can tackle any of today’s activation layers as well as anything the AI software community can dream of in the future. Having an All-In-One processing element that is simple and yet repeatable solves the scalability problem so our customers can scale from one TOPS to hundreds of TOPS by using as many processing elements as needed on the chip. In addition, our IP remains fully customisable to enable companies to create unique solutions rather than using standard off-the-shelf chips.
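As a hedged illustration of the scaling claim, peak throughput grows linearly with the number of processing elements; the MACs-per-cycle and clock figures below are assumptions for the example, not published Semidynamics numbers:

```python
# Illustrative scaling arithmetic for the "one TOPS to hundreds of TOPS"
# claim above. The 256 INT8 MACs/cycle and 2.0 GHz clock are assumed
# figures for the sake of the example.
def peak_tops(macs_per_cycle, freq_ghz, num_elements, ops_per_mac=2):
    """Peak throughput in TOPS, counting each MAC as 2 ops (mul + add)."""
    return macs_per_cycle * ops_per_mac * freq_ghz * num_elements / 1e3

# One hypothetical element with 256 INT8 MACs/cycle at 2.0 GHz:
print(peak_tops(256, 2.0, 1))    # ~1 TOPS
# Tiling 128 identical elements on one chip:
print(peak_tops(256, 2.0, 128))  # ~131 TOPS
```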

What does the competitive landscape look like and how do you differentiate?
There are a lot of competitors, including a small handful of big ones, but essentially they fall into two camps: either they offer a core and maybe a Vector Unit, or they offer a not-so-flexible NPU. We are unique in providing a fully customisable all-in-one solution comprising a core with our Open Core Surgery, a Tensor Unit, a Vector Unit and Gazzillion, giving customers the differentiation to create the high-performance, custom core that they need.

What new features/technology are you working on?
One of the many delights of the RISC-V community is that there are always new great ideas being brought into RISC-V. For example, we will be announcing Crypto and Hypervisor in the near future. Plus, of course, a steady stream of new, even more powerful cores.

How do customers normally engage with your company? 
For a number of years it was word of mouth, as the processor community is relatively small. I have been in it for years, so customers sought us out as RISC-V processor experts who could think outside the box and create exactly the core that they wanted. More recently, we have moved from stealth mode to actively promoting our cores, and now we have a growing number of customers from around the world.

Also Read:

Semidynamics Shakes Up Embedded World 2024 with All-In-One AI IP to Power Nextgen AI Chips

RISC-V Summit Buzz – Semidynamics Founder and CEO Roger Espasa Introduces Extreme Customization

Deeper RISC-V pipeline plows through vector-scalar loops


Sondrel’s Drive in the Automotive Industry

by Daniel Nenni on 05-16-2024 at 10:00 am


Ollie Jones, Vice President of Strategic Sales at Sondrel, has worked extensively across Europe, North America and Asia and has held a variety of commercial leadership roles in FTSE 100, private equity owned and start-up companies.

Most recently Ollie was Chief Commercial Officer for an EV battery start up where he led the acquisition of new customer partnerships with some of the world’s leading car brands.

Prior to that, roles held include VP Commercial and Business Development for a market leading global automotive engineering firm with responsibility for driving the sales growth of its electrification business unit, and VP Customer Business Group where he was responsible for leading multiple large and complex key accounts across Europe and Asia with over $1B cumulative revenues.

I read you have a lot of experience in the automotive industry?
Yes, for the past 20 years I have been working within the automotive industry for such players as Britishvolt, GKN, Williams Advanced Engineering and Prodrive.

Is that why Sondrel has just announced a drive into automotive in the recent press release on Software Defined Vehicles?
Actually, Sondrel has focused on automotive chips for a while now with several successful chips either created or in progress. Just last week, we taped out a custom ASIC for a Tier 1 OEM automotive manufacturer.

It’s a great track record and that was one of the reasons that I joined the company as, with my knowledge and network, I think there is a sizeable opportunity for Sondrel to develop further in the automotive market.

Why should a company come to Sondrel for a solution in the automotive space?
Differentiation has always been key for car manufacturers. With software, electronics and connectivity becoming the major sources of innovation in cars, each car manufacturer is looking to create their own platform of bespoke chips and software that can be scaled to suit all the various models of vehicle in their range. Such a Software Defined Vehicle (SDV) platform will provide cost savings from economies of scale and prevent rivals from copying their innovations as significantly fewer off-the-shelf chips are used.

We have a head start in providing such bespoke platform solutions for customers as we already have our innovative family of Architecting the Future frameworks with the SFA 250A and the SFA 350A that are specially designed for automotive use and are ready for the ISO 26262 compliance process. They have already been successfully used to fast-track automotive projects by up to 30%. These powerful platforms are modularly designed to support scalability based on requirements and to be easily configured with the processing capability and power depending on the end use case and the demands of the customer’s software.

The challenge is that SDV chips will need several processors with billions of transistors to deliver the processing performance to run all the functions such as infotainment, ADAS, vehicle sensing and connectivity. Such ultra-complex custom chips have been our speciality for over two decades.

The key to Sondrel’s success with delivering such ultra-complex chips is that our team works closely with customers at the architectural design stage to ensure that the right balance of power, performance and cost is achieved right from the start. We use the most advanced semiconductor nodes and our turnkey service can take projects all the way from initial architectural design through to the exhaustive testing needed for automotive components to supplying chips. This turnkey service frees the Automotive OEMs and, of course, other customers, from the risks of a multi-stage, multi-supplier, supply chain that have become all too problematic over the past couple of years. By partnering with Sondrel, they can now own their chip destiny and, with our global footprint, we are ideally positioned to deliver it for customers around the world.

Sondrel’s experience in automotive electronics has been developed from many chip projects that built up the company expertise in functional safety (FuSa), ISO 26262 and silicon-level security. The latter is particularly important for SDV cars as the 10- to 15-year lifetime of cars means that updates and patches will have to be implemented without exposing vulnerabilities to hackers so security needs to be integral to the chip design. This has to be right from the start when the chip is being specified and continued through all the stages of the chip design and on into silicon production.

So, that’s why we think that we are the solution provider of choice as we are one of the few companies with the experience and expertise to design such ultra-complex chip projects that fully integrate hardware and software that will be the enablers of SDVs.

Sondrel is known for being a leading design house and now you are extending that to include the supply chain. Why do that?
The industry is changing. The old model of companies, especially systems houses, buying in chips from third parties is being replaced by them wanting to have their own custom chips. This gives them total control of their IP and differentiation. In the process of designing these ultra-complex chips for them we start with the architectural specification stage where we work intimately with the customer to determine exactly what they want to achieve with the chip. This enables us to use our many years of experience to precisely specify the chip in terms of performance, power and area. The classic PPA. Integral to this is our holistic view of the whole supply chain that means that right at this initial stage we consider, for example, the best node to use to deliver the price and performance target required. And then there is the choice of packaging. What testing to do, etc. All of these decisions ripple up and down the supply chain. As we have a complete understanding and viewpoint on all the interactions as a turnkey provider, we can ensure everything dovetails together smoothly. And, should an issue arise, it is solved by us all under one roof. A classic one-stop-shop or, as we say, the buck never leaves the building.

As you can see, the architectural specification stage is the foundation for a successful custom chip project. We are one of the few companies to provide such a service and that is a key differentiator from some of our rivals who seem to often take a design specification from the customer and just get on with it with little if any interaction along the way to a final design. There is no value add in that approach. No opportunity to bring a wealth of skills and experience to the design to enhance and improve it as we do. It also means that we develop a deep relationship of trust with the customer that is a foundation for further engagement as a turnkey supplier.

There are so many horror stories of supply chains where disputes have occurred between the various sub-contractors, with each blaming the other for problems at hand-over stages. This is why we are seeing more and more demand from customers for a turnkey partner that can manage all aspects of the value chain, from initial concept and architecture, where tough decisions need to be made, all the way through implementation and NPI to managing the supply chain. Plus, Sondrel has very close relationships with the major Foundries and OSATs as well as market-leading design expertise, earned over more than two decades.

What if a company just needs help with a part of the supply chain?
That often happens and, with our large team of highly experienced, senior engineers, we can step in and help wherever needed along the supply chain. We have just done that with one company where the design demands were too much for their team and we helped get them back on schedule. And, in another instance, our team found and fixed a deep-rooted bug that had been in a couple of iterations before we came on board.

Finally, any other hot topics to cover such as exciting new areas for custom chips?
We have hundreds of Arm-based custom chip designs under our belt and recently we became one of the founding members of Arm® Total Design, an ecosystem that is committed to bringing Arm Neoverse™ CSS-based designs to market. Arm Neoverse CSS features high-end Arm Neoverse cores that are designed for infrastructure and datacentre applications and play to our strength in the design and supply of high-performance, ultra-complex ASICs that will be crucial for customers wanting to rapidly bring such products to market.

These new high-performance cores will enable us to design next generation chips for demanding, compute-intensive applications in automotive, machine vision, AI, security, HPC, and networking. Crucially, we are one of the few partners to offer a full, turnkey service from concept through every stage to final chips, which provides customers with peace of mind that their bet-the-farm, multi-million-dollar investment in a new chip project will progress smoothly, with every stage being handled by our in-house experts.

Also Read:

Transformative Year for Sondrel

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US

Integrating High Speed IP at 5nm


A Recipe for Performance Optimization in Arm-Based Systems

by Bernard Murphy on 05-16-2024 at 6:00 am


Around the mid-2000s the performance component of Moore’s Law started to tail off. That slack was nicely picked up by architecture improvements, which continue to march forward but add a new layer of complexity in performance optimization and verification. Nick Heaton (Distinguished Engineer and Verification Architect at Cadence) and Colin Osbourne (Senior Principal System Performance Architect and Distinguished Engineer at Arm) have co-written an excellent book, Performance Cookbook for Arm®, explaining the origins of this complexity and how best to attack performance optimization/verification in modern SoC and multi-die designs. This is my takeaway based on a discussion with Nick and Colin, complemented by reading the book.

Who needs this book?

It might seem that system performance is a problem for architects, not designers or verification engineers. Apparently this is not entirely true; after an architecture is delivered to the design team, those architects move on to the next design. From that point on, or so I thought, the design team’s responsibilities are to assemble all the necessary IPs and connectivity as required by the architecture spec, to verify correctness, and to tune primarily for area, power, and timing closure.

There are a couple of fallacies in this viewpoint. The first is the assumption that the architecture spec alone locks down most of the performance, and the design team need not worry about performance optimizations beyond implementation details defined in the spec. The second is that real performance measurement, and whatever optimization is still possible at that stage, must be driven by real workloads – perhaps applications running on an OS, running on firmware, running on the full hardware model.

But an initial architecture is not necessarily perfect, and there are still many degrees of freedom left to optimize (or get wrong) in implementation. Yet many of us, certainly junior engineers, have insufficient understanding of how microarchitecture choices can affect performance and how to find such problems. Worse still, otherwise comprehensive verification flows lack structured methods to regress performance metrics as a design evolves, which can lead to nasty late surprises.

The book aims to inform design teams on the background and methods in design and verification for high performance Arm-based SoCs, especially around components that can dramatically impact performance: the memory hierarchy, CPU cores, system connectivity, and the DRAM interface.

A sampling of architecture choices affecting performance

A Modern SoC Architecture (Courtesy Cadence Design Systems)

I can’t do justice to the detail in the book, but I’d like to give a sense of the big topics. First, the memory hierarchy has a huge impact on performance. Anything that isn’t idling needs access to memory all the time. Off-chip/chiplet DRAM can store lots of data but has very slow access times. On-chip memory is much faster but is expensive in area. Cache memory, relying on typical locality of reference in memory addresses, provides a fast on-chip proxy for recently sampled addresses, needing to update from DRAM only on cache misses or a main memory update. All this is generally understood; however, sizing and tuning these memory options is a big factor in performance management.
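One standard way to quantify that tuning is average memory access time (AMAT); this sketch uses illustrative latencies and miss rates, not figures from the book:

```python
# Average memory access time (AMAT) for a simple two-level hierarchy:
# AMAT = hit time + miss rate * miss penalty. The numbers below are
# illustrative, not from the book.
def amat(hit_ns, miss_rate, miss_penalty_ns):
    """Average memory access time in nanoseconds."""
    return hit_ns + miss_rate * miss_penalty_ns

# A 1 ns L1 hit with 5% of accesses missing to a 30 ns next level:
print(amat(1.0, 0.05, 30.0))  # 2.5 ns
```

Even a few percent change in miss rate, from better cache sizing or tuning, moves the average access time substantially, which is why the hierarchy is such a large performance lever.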

Processors are running faster, outpacing even fast memories. To hide latencies in fetching they take advantage of tricks like pre-fetch and branch prediction to request more instructions ahead of execution. In a multi-core system this creates more memory bandwidth demand. Virtual memory support also adds to latency and bandwidth overhead. Each can impact performance.

On-chip connectivity is the highway for all inter-IP traffic in the system and should handle target workloads with acceptable performance through a minimum of connections. This is a delicate performance/area tradeoff. For a target workload, some paths must support high bandwidth, some low latency, while others can allow some compromise. Yet these requirements are at most guidelines in the architecture spec. Topology will probably be defined at this stage: crossbar, distributed NoC, or mesh, for example. But other important parameters can also be configured, say FIFO depths in bridges and regulator options to prioritize different classes of traffic. Equally, endpoint IPs connected to the network often support configurable buffer depths for read/write traffic. All these factors affect performance, making connectivity a prime area where implementation is closely intertwined with architecture optimization.
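A quick way to see why buffer and FIFO depths matter is Little's Law: the number of transactions that must be in flight equals the bandwidth-latency product divided by the transfer size. The numbers below are illustrative, not from the book:

```python
import math

# Little's Law sizing sketch: to sustain a bandwidth target across a
# round-trip latency, enough transactions must be outstanding to cover
# the bytes in flight. Figures are illustrative.
def outstanding_needed(bandwidth_gbs, latency_ns, beat_bytes=64):
    """Outstanding transactions needed: N = BW * latency / transfer size.
    GB/s times ns conveniently yields bytes in flight."""
    return math.ceil(bandwidth_gbs * latency_ns / beat_bytes)

# Sustaining 32 GB/s through a 100 ns round trip with 64-byte transfers:
print(outstanding_needed(32, 100))  # 50 outstanding transactions
```

If a bridge FIFO or endpoint buffer caps the design below that figure, bandwidth drops no matter how wide the links are, which is exactly the kind of implementation detail the spec leaves open.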

Taking just one more example, interaction between DRAM and the system is also a rich area for performance optimization. Intrinsic DRAM performance has changed little over many years, but there have been significant advances in distributed read-write access to different banks/bank groups allowing for parallel controller accesses, and prefetch methods where the memory controller guesses what range of addresses may be needed next. Both techniques are supported by continually advancing memory interface standards (eg. in DDR) and continually more intelligent memory controllers. Again, these optimizations have proven critical to continued advances in performance.
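The bank/bank-group parallelism can be sketched with a toy address map: consecutive cache lines land in different banks, so the controller can overlap their accesses. The mapping below is illustrative, not any specific DDR standard's:

```python
# Toy DRAM bank-interleaving sketch: map each 64-byte line address to a
# bank so sequential traffic spreads across banks and the controller can
# access them in parallel. Illustrative address map only.
def bank_of(addr, line_bytes=64, num_banks=16):
    """Bank index for a byte address under simple line interleaving."""
    return (addr // line_bytes) % num_banks

# Four consecutive cache lines hit four different banks:
lines = [n * 64 for n in range(4)]
print([bank_of(a) for a in lines])  # [0, 1, 2, 3]
```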

A spec will suggest IP choices of course, and initial suggestions for configurable parameters but based on high-level sims; it can’t forecast detailed consequences emerging in implementation. Performance testing on the implementation is essential to check performance remains within spec, and quite likely tuning may at times be needed to stay within that window. Which requires that you have some way to figure out if you have created a problem, then have some way to isolate a root cause, and finally understand how to correct the problem.

Finding, fixing, and regressing performance problems

Key performance metrics

First, both authors stress that performance checking should be run bottom-up. This should be a no-brainer, but the obvious challenge is what you use for test cases in IP or subsystem testing, even perhaps as a baseline for full system testing. Real workloads are too difficult to map to low-level functions, come with too much OS and boot overhead, and lack any promise of coverage, however coverage might be defined. Synthetic tests are a better starting point.

Also you need a reference TLM model, developed and refined by the architect. This will be tuned especially to drive architecture optimization on the connectivity and DDR models.

Then bottom-up testing can start, say with a UVM testbench driving the interconnect IP connected to multiple endpoint VIPs. Single path tests (one initiator, one target) provide a starting point for regression-ready checks on bandwidth and latencies. Also important is a metric I hadn’t considered, but which makes total sense: Outstanding Transactions (OT). This measures the amount of backed up traffic. Cadence provides their System Testbench Generator to automate building these tests, together with Max Bandwidth, Min Latency and Outstanding Transaction Sweep tests, more fully characterizing performance than might be possible through hand-crafted tests.
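A minimal sketch of the OT metric: given a trace of (issue, completion) timestamps, report the peak number of requests in flight. This is illustrative only, not how the Cadence tooling computes it:

```python
# Outstanding Transactions (OT) from a transaction trace: sweep over
# issue/completion events and track the peak number in flight.
def peak_outstanding(trace):
    """trace: iterable of (issue_time, completion_time) pairs."""
    events = []
    for issue, complete in trace:
        events.append((issue, +1))      # request leaves the initiator
        events.append((complete, -1))   # response returns
    events.sort()                       # ties complete before they issue
    in_flight = peak = 0
    for _, delta in events:
        in_flight += delta
        peak = max(peak, in_flight)
    return peak

# Three overlapping transactions, then one isolated one:
print(peak_outstanding([(0, 30), (5, 25), (10, 40), (50, 60)]))  # 3
```

A rising OT figure in regression is an early warning that traffic is backing up somewhere before bandwidth or latency numbers visibly degrade.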

The next level up is subsystem testing. Here the authors suggest using Cadence System VIP and their Rapid Adoption Kits (RAKs). These are built around the Cadence Perspec System Verifier augmented by the System Traffic Library, AMBA Adaptive Test Profile (ATP) support and much more. Perspec enables bare metal testing (without need for drivers etc.), with easy system-level scenario development. Very importantly, this approach makes extensive test reuse possible (as can be seen in available libraries). RAKs leverage these capabilities for out-of-the-box test solutions and flows, for an easy running start.

The book ends with a chapter on a worked performance debug walkthrough. I won’t go into the details other than to mention that it is based on an Arm CMN mesh design, for which a performance regression test exhibits a failure because of an over-demanding requester forcing unnecessary retries on a cache manager.

My final takeaway

This is a very valuable book, and also very readable. These days I have a more theoretical than hands-on perspective, yet it opened my eyes to both the complexity of performance optimization and verification and, for the same reasons, made it seem more tractable. Equally important, this book charts a structured way forward to make performance a first-class component in any comprehensive verification/regression plan, with all the required elements: traffic generation, checks, debug, scoreboarding and the beginnings of coverage.

You can buy the book on Amazon – definitely worth it!


Synopsys Accelerates Innovation on TSMC Advanced Processes

by Mike Gianfagna on 05-15-2024 at 10:00 am


We all know that making advanced semiconductors is a team sport. TSMC can innovate the best processes, but without the right design flows, communication schemes and verified IP it becomes difficult to access those processes. Synopsys recently announced some details on this topic, and the announcement covers a lot of ground. The graphic at the top of this post will give you a feeling for the breadth of what was discussed. I’ll examine the announcement and provide a bit more information from a conversation with a couple of Synopsys executives. Let’s see how Synopsys accelerates innovation on TSMC advanced processes.

The Big Picture

Advanced EDA tools, silicon photonics, cutting edge IP and ecosystem collaboration were all touched on in this announcement. Methods for creating new designs as well as migrating existing designs were also discussed.

Sanjay Bali, vice president of strategy and product management for the EDA Group at Synopsys had this to say:

“The advancements in Synopsys’ production-ready EDA flows and photonics integration with our 3DIC Compiler, which supports the 3Dblox standard, combined with a broad IP portfolio enable Synopsys and TSMC to help designers achieve the next level of innovation for their chip designs on TSMC’s advanced processes. The deep trust we’ve built over decades of collaboration with TSMC has provided the industry with mission-critical EDA and IP solutions that deliver compelling quality-of-results and productivity gains with faster migration from node to node.”

And Dan Kochpatcharin, head of Design Infrastructure Management Division at TSMC said:

“Our close collaboration with Open Innovation Platform (OIP)® ecosystem partners like Synopsys has enabled customers to address the most challenging design requirements, all at the leading edge of innovation from angstrom-scale devices to complex multi-die systems across a range of high-performance computing designs. Together, TSMC and Synopsys will help engineering teams create the next generation of differentiated designs on TSMC’s most advanced process nodes with faster time to results.”

Digital and Analog Design Flows

It was reported that Synopsys’ production-ready digital and analog design flows for TSMC N3P and N2 process technologies have been deployed across a range of AI, high-performance computing, and mobile designs.

To get access to new processes faster, the AI-driven analog design migration flow enables rapid migration from one process node to another. Also discussed was a new flow for TSMC N5 to N3E migration. This adds to the established flows from Synopsys for TSMC N4P to N3E and N3E to N2 processes.

Interoperable process design kits (iPDKs) and Synopsys IC Validator™ physical verification run sets were also presented. These capabilities allow efficient transition of designs to TSMC advanced process technologies. Using Synopsys IC Validator, full-chip physical signoff can be accomplished. This helps deal with the increasing complexity of physical verification rules. It was announced that Synopsys IC Validator is now certified on TSMC N2 and N3P process technologies.

Photonic ICs

AI training requires low-latency, power-efficient, and high-bandwidth interconnects for massive data sets. This is driving the adoption of optical transceivers and near-/co-packaged optics using silicon photonics technology.  Delivering these capabilities requires ecosystem collaboration.

Synopsys and TSMC are developing an end-to-end multi-die electronic and photonic flow solution for TSMC’s Compact Universal Photonic Engine (COUPE) technology to enhance system performance and functionality. This flow spans photonic IC design with Synopsys OptoCompiler™ and integration with electrical ICs utilizing Synopsys 3DIC Compiler and Ansys multiphysics analysis technologies.

Broad IP Portfolio for N2 and N2P

Design flows and communication strategies are critical for a successful design, but the entire process is really enabled by verified IP for the target process. Synopsys announced the development of a broad portfolio of Foundation and Interface IP for the TSMC N2 and N2P process technologies to enable faster silicon success for complex AI, high-performance computing, and mobile SoCs.

Getting into some of the details, high-quality PHY IP on N2 and N2P, including UCIe, HBM4/3e, 3DIO, PCIe 7.x/6.x, MIPI C/D-PHY and M-PHY, USB, DDR5 MR-DIMM, and LPDDR6/5x, allows designers to benefit from the PPA improvements of TSMC’s most advanced process nodes. Synopsys also provides a silicon-proven Foundation and Interface IP portfolio for TSMC N3P, including 224G Ethernet, UCIe, MIPI C/D-PHY and M-PHY, USB/DisplayPort and eUSB2, LPDDR5x, DDR5, and PCIe 6.x, with DDR5 MR-DIMM in development.

Synopsys reported this IP has been adopted by dozens of leading companies to accelerate their development time. The figure below illustrates the breadth and performance of this IP portfolio for the TSMC N3E process. 

The Backstory

I was able to speak with two Synopsys experts: Arvind Narayanan, Executive Director, Product Management, and Mick Posner, Vice President, Product Management, High Performance Computing IP Solutions.


I know both Arvind and Mick from my time working at Synopsys and I can tell you together they have a very deep understanding of Synopsys design technology and IP.

Arvind began by explaining how seamlessly Synopsys 3DIC Compiler, OptoCompiler, and the Ansys Multiphysics technology work together. This tightly integrated tool chain does an excellent job of supporting the TSMC COUPE technology, and it is a well-integrated flow that solves substantial data communication challenges.

It’s difficult to talk about communication challenges without discussing the growing deployment of multi-die strategies.  In this area, Mick explained that there is now an integration of 3DIC Compiler with the popular UCIe standard. This creates a complete reference flow for die-to-die interface connectivity.


Arvind touched on the roles DSO.ai plays in the design migration process. For the digital portion, the models and knowledge built in DSO.ai for a design allow re-targeting of that design to a new process node with far less learning, simulation and analysis. For the analog portion, the circuit and layout optimization capabilities of DSO.ai become quite useful.

Mick said he believes that Synopsys has the largest analog design team in the world. After thinking about it a bit, I believe he’s right. It is a very large team across the world working in many areas. Mick went on to point out that the significant design work going on at advanced nodes across that team becomes a substantial proving ground for new technology and flows. This is part of the reason why Synopsys tools are so well integrated.

To Learn More

You can access the full content of the Synopsys announcement here. In that announcement, you will find additional links to dig deeper on the various Synopsys technologies mentioned. And that’s how Synopsys accelerates innovation on TSMC advanced processes.


Podcast EP223: The Impact Advanced Packaging Will Have on the Worldwide Semiconductor Industry with Bob Patti

by Daniel Nenni on 05-15-2024 at 8:00 am

Dan is joined by Bob Patti, the owner and President of NHanced Semiconductors. Previously, Bob founded ASIC Designs Inc., an R&D company specializing in high-performance systems and ASICs. During his 12 years with ASIC Designs he participated in more than 100 tapeouts. Tezzaron Semiconductor grew from that company, with Bob as its CTO, and became a leading force in 3DIC technology. Tezzaron built its first working 3DICs in 2004. NHanced Semiconductors was spun out of Tezzaron to further advance and develop 2.5D/3D technologies, chiplets, die and wafer stacking, and other advanced packaging. Bob holds 21 US patents, numerous foreign patents, and many more pending patent applications in deep sub-micron semiconductor chip technologies.

In this broad analysis of the semiconductor industry, Bob discusses the significant impact advanced packaging is having and will have on innovation. The investments being made to bolster US capability in semiconductors are discussed, with an evaluation of which areas of advanced packaging are opportunities for US growth.

Bob examines the various 2.5D/3D and mixed material assembly technologies on the horizon. He talks about a future semiconductor industry where “super OSATs” will play a major role in innovation and advanced technology sourcing.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.