
Joseph Sawicki of Siemens EDA at User2User
by Daniel Payne on 05-19-2022 at 10:00 am


I attended the annual User2User user group meeting in Santa Clara this year, hosted by Siemens EDA, which featured 51 customer presentations across 11 tracks and keynotes from semiconductor executives during each lunch hour. Joseph Sawicki, Executive VP, IC Segment, at Siemens EDA presented on Tuesday, along with Prashant Varshney of Microsoft and Mahesh Tirupattur of Analog Bits. This blog focuses on what I heard from Mr. Sawicki.

The host, Harry Foster, said that each keynote was like a TED Talk, and they certainly lived up to that. Joseph's topic was From ICs to Systems – New Opportunities for the Semiconductor Industry. Digitalization is driving change across all industries: aerospace, automotive, consumer, electronics and semiconductor, energy, heavy industry, medical, marine, and industrial.

AI enablement is now pervasive in sensors, edge computing, 5G/wireless communications, and cloud and data centers. Semiconductors' share of electronic system value hovered around 16% from 1992 to 2014 but has now grown to about 24%, and predictions show electronic systems reaching $3.2T in revenue by 2025, so it is quite the growth market.

Systems companies are becoming IC designers, and there are many examples: Apple, Amazon, Google, ZTE, Tesla, Bosch, Huawei, Facebook. Foundries have seen systems companies grow from just 1% of revenue in 2011 to 21% in 2021, and Apple has become TSMC's number one customer. At the Hot Chips conference in 2006 just 16% of the accepted papers came from systems companies; by 2021 that number had grown to 33%. The systems companies are clearly driving innovation in chip design.

Consider Apple's history: its first 64-bit application processor was introduced back in 2013, but why do that? Even in 2022 you still don't need 64 bits for a larger RAM address space. The answer was performance: an Arm core can run in either 32-bit or 64-bit mode, and running in 64-bit mode delivers 31% better performance.

Apple A7, Source: Chipworks

There’s some new trends in automotive, Cars as a Service, where Volvo plans to reach 50% of their revenue through services by 2025. Tesla provides OTA (Over The Air) update services and new feature upgrades, adding revenue after the initial sales. Gartner Group reports that half of the top 10 auto OEMS will be designing some of their own chips by 2025, with 7 of 10 already announced by 2021. Ford and Globalfoundries will partner in IC design in order to smooth out the supply chain issues that have hurt the industry since the COVID pandemic started in 2020.

The gradual electrification of vehicles is a major driver of new IC design starts, and semiconductor revenue per vehicle should reach $500 by 2028, so that's a $24B market. 5G communication will be important to automotive for OTA updates and services, and is growing 3X per year.

The total number of sensors connected to the Internet was 1.6B in 2015 and is expected to explode to 29.6B by 2025, driving big growth in video, data, and data centers.

Source: IC Insights

Semiconductor content inside of Data Centers is growing to $242B by 2030, which is a 14% CAGR, per IBS, Sept. 2021.

With all of these demands on semiconductors in growth markets, how is our industry going to meet them? Joseph summarized three trends that will meet the demand:

  1. Technology scaling – new nodes and 3D IC
  2. Design scaling – silicon integration
  3. System scaling – digital twin, verifying a device against spec and the full SW stack with apps

For technology scaling we can look at Moore's Law; it's not quite dead, as the Apple A-series shows over time. From 2013 to 2021, transistor counts grew from 1 billion to 15 billion, so it's still scaling pretty well at 15X. Dennard scaling has died, so clock rates are not improving at anything like that rate over the same 8-year time frame. Looking at single-core CPU performance, Geekbench scores have ranged from 269 to 1734, so it's growing on track. Even the foundries have another 8 years of process technology growth in their roadmaps.

Monolithic integration continues to grow, but 3D design is coming along too, combined with innovative packaging; System-in-Package is a new trend. System and design technology co-optimization is needed to be successful.

On design scaling there are charts claiming that 7nm designs cost $280M, but is that reality? That number sounds too big, yet the trend line is real, as smaller nodes drive up design and verification costs. One method to counter that cost increase is to move from RTL up to C-level design for system designs. Consider the example of NVIDIA, where a small team of just 10 engineers taped out a new deep learning inference accelerator chip in 6 months by using C++ with an HLS (High-Level Synthesis) methodology, as reported at Hot Chips in August 2019. Google is another systems company using HLS to help manage SoC design costs.
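
To give a flavor of what C-level design means, here is a minimal sketch of an HLS-style kernel, a simple multiply-accumulate of the kind found in inference accelerators. It is purely illustrative, not NVIDIA's or Google's code; in a real flow, tool-specific directives would control pipelining, unrolling, and interface synthesis.

```cpp
// Illustrative only: a multiply-accumulate kernel written in a synthesizable
// C++ style. Fixed loop bounds and static array sizes let an HLS tool infer
// a hardware datapath from ordinary C++ code.
#include <cstdint>

constexpr int N = 16;  // hypothetical vector length

int32_t dot_product(const int8_t weights[N], const int8_t activations[N]) {
    int32_t acc = 0;
    for (int i = 0; i < N; ++i) {   // bounded loop: unrollable/pipelinable
        acc += static_cast<int32_t>(weights[i]) * activations[i];
    }
    return acc;
}
```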

For system scaling the idea is to create a true digital twin. One example of a digital twin is PAVE360, a way to validate automotive designs in a virtual environment with traffic, people, and vehicles. You can run this digital twin to validate virtual models, run software for your ADAS system, model power, and model the vehicle itself (powertrain, chassis, seating, the effect of the road on occupants). It's a way to validate safely before production starts.

The final topic was lifecycle management. Consider a data center with hundreds of thousands of blades: you can monitor all of the blades in real time, debug any reliability issues in that data center, and analyze all of the embedded sensor data, literally tracking the health of the data center.

Summary

A trillion-dollar semiconductor industry is fast approaching, so this is an exciting time to be part of the EDA industry that enables all of this growth, as systems designers take on new design starts, AI is everywhere, electrification of vehicles continues, and digital twins are being adopted. The mood of the presentation was quite upbeat, and it was well received by the audience.



TSMC N3 will be a Record Setting Node!
by Daniel Nenni on 05-19-2022 at 6:00 am


With the TSMC Technical Symposium coming next month there is quite a bit of excitement inside the fabless semiconductor ecosystem. Not only will TSMC give an update on N3, we should also hear details of the upcoming N2 process.

Hopefully TSMC will again share the number of tape-outs confirmed for their latest process node. Given what I have heard inside the ecosystem, N3 tape-outs will be at a record setting number. Not only has Intel joined TSMC for multi-product high volume N3 production, it has been reported that Qualcomm and Nvidia will also use N3 for their leading edge SoCs and GPUs. In fact, it would be easier to list the companies that will not use TSMC for 3nm but at this point I don’t know of any. It is very clear that TSMC has won the FinFET battle by a very large margin, absolutely.

“The most outstanding news of TSMC in 2021 was the success in attracting more new business from Intel, while being able to maintain great relationships with existing customers such as AMD, Qualcomm and Apple. With its 3 nm N3 entering volume production later in 2022 with good yield and 2 nm N2 development being on track for volume production in 2025, TSMC is expected to continue its technology leadership to support its customer innovation and growth. Thanks to the demand of bleeding-edge technologies, TSMC’s foundry leadership position seems to have become even more concrete in recent years.”

That is from Samuel Wang, analyst with Gartner, in the recent report "Market Share Analysis: Semiconductor Foundry Services, Worldwide, 2021," published May 9, 2022.

Now that live events have started up again in Silicon Valley, the information flow inside the semiconductor ecosystem has returned to pre-pandemic levels. I have attended six live semiconductor events thus far in 2022 and have several more to go before the biggest foundry event, the 2022 TSMC Technology Symposium, which kicks off at the Santa Clara Convention Center on June 16, followed by events in Europe (6/20), China (6/30), and Taiwan (6/30).

SemiWiki has covered the TSMC events for the past 11 years and we will have bloggers attending this year as well. This is the number one networking event for TSMC customers, partners, and suppliers so I can assure you there will be a lot to write about.

Here is the most recent update on technology development from TSMC:

  • TSMC’s 3nm technology development is on track with good progress, and the company has developed complete platform support for HPC and smartphone applications. TSMC N3 is entering volume production in the second half of 2022, with good yield.
  • TSMC N3E will further extend the 3nm family, with enhanced performance, power, and yield. The company has also observed a high level of customer engagement on N3E, and volume production for N3E is scheduled for one year after N3.
  • Faced with the continuous challenge to significantly scale up semiconductor computing power, TSMC has focused its R&D efforts on contributing to customers’ product success by offering leading-edge technologies and design solutions. In 2021, the company started risk production of 3nm technology, the 6th generation platform to make use of 3D transistors, while continuing the development of 2nm, the leading-edge technology in the semiconductor industry today. Furthermore, the company’s research efforts pushed forward with exploratory studies for nodes beyond 2nm. TSMC’s 2nm technology has entered the technology development phase in 2021, with development being on track for volume production in 2025.

TSMC also introduced the N4P process in October 2021, a performance-focused enhancement of the 5nm technology platform. N4P delivers an 11% performance boost over the original N5 technology and a 6% boost over N4. Compared to N5, N4P will also deliver a 22% improvement in power efficiency as well as a 6% improvement in transistor density.

TSMC introduced N4X process technology at the end of 2021, which offers a performance boost of up to 15% over N5, or up to 4% over the even faster N4P at 1.2 volt. N4X can achieve drive voltages beyond 1.2 volt and deliver additional performance. TSMC expects N4X to enter risk production by the first half of 2023. With N5, N4X, N4P, and N3/N3E, TSMC customers will have multiple and compelling choices for power, performance, area, and cost for their products.

Bottom line: This will be one of the more exciting TSMC Technical Symposiums. TSMC N2 has been under NDA while the IDM foundries have been leaking details about their upcoming 2nm processes. This is a classic marketing move: when you don't have a competing product today, talk about tomorrow.

One thing we should all remember is that all of the leading semiconductor companies, with the exception of Samsung, are collaborating with TSMC. The TSMC ecosystem consists of hundreds of customers, partners, and suppliers, and it is like no other. If you think another foundry will have a higher-yielding 2nm process that can support a wide range of products, you would be wrong.

Also read:

Can Intel Catch TSMC in 2025?

TSMC’s Reliability Ecosystem

Self-Aligned Via Process Development for Beyond the 3nm Node


Webinar – 112 Gbps PAM4 Implementation with Real-World Case Studies
by Mike Gianfagna on 05-18-2022 at 10:00 am


Are 112G PAM4 channels in one of your current or future designs? If you're focusing on advanced products, the answer is likely YES. Design of these channels is quite challenging. Silicon design, SerDes, PCB traces, and interconnect all need to be balanced to achieve success. As they say, getting there is half the fun. An upcoming webinar tackles these challenges head-on, with no fewer than six real-world case studies to show how it's done. The speakers are world-class, as are their companies. This is where you find the tricks of the trade that save you time, and perhaps save your project. The webinar is in early June, so there's plenty of time to register. Read on if you want to learn more about 112 Gbps PAM4 implementation with real-world case studies.

The presenters, and their companies

Clint Walker

Clint Walker, VP of marketing at Alphawave IP, has over 24 years of semiconductor experience. Before moving to Alphawave IP, he was a principal engineer and senior director at Intel, where he worked for 22 years focused on high-speed I/O systems and circuit architecture. Clint has participated in and contributed to JEDEC DDR, PCI-SIG, and IEEE 802.3 standards development and is the former chair of the USB 3.0 Electrical Working Group.

Alphawave IP is a global leader in high-speed connectivity for the world’s technology infrastructure. Its IP solutions meet the needs of global tier-one customers in data centers, compute, networking, AI, 5G, autonomous vehicles, and storage. Its mission is to focus on the hardest-to-solve connectivity challenges.  You can learn more about Alphawave IP on SemiWiki here.

Matt Burns

Matthew Burns, technical marketing manager at Samtec. Matt develops go-to-market strategies for Samtec’s Silicon to Silicon solutions. Over the course of 20+ years, he has been a leader in design, technical sales and marketing in the telecommunications, medical and electronic components industries. He holds a B.S. in Electrical Engineering from Penn State University.

Founded in 1976, Samtec is a privately held, global manufacturer of a broad line of electronic interconnect solutions, including High-Speed Board-to-Board, High-Speed Cables, Mid-Board and Panel Optics, Precision RF, Flexible Stacking, and Micro/Rugged components and cables. Samtec Technology Centers are dedicated to developing and advancing solutions to optimize both the performance and cost of a system from the bare die to an interface 100 meters away. You can learn more about Samtec on SemiWiki here.

What you’ll see

The six real-world test cases include Samtec’s 112 Gbps PAM4 connector systems. They include board-to-board connector sets, as well as two Samtec Flyover® cable systems. One is a mid-board to cable backplane configuration, and the other is mid-board to front panel. These cable systems are emulating real-world data center system architectures.

Each board also has four Alphawave AlphaCORE Multi-Standard SerDes. For each connector set, the Alphawave SerDes is transmitting 112 Gbps PAM4 PRBS data, and Samtec is receiving and analyzing the signal performance on the other board.

The six connector systems are on two boards. The two boards are communicating bidirectionally at 112 Gbps PAM4. All of the cable assemblies use Samtec Eye Speed® ultra-low skew twinax. The tight coupling between signal conductors in this co-extruded cable, made by Samtec, improves signal integrity performance, bandwidth, and reach.
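
For readers unfamiliar with PRBS testing, the sketch below models a generic PRBS31 generator (polynomial x^31 + x^28 + 1, a common choice for SerDes stress tests) with bit pairs gray-mapped onto the four PAM4 levels. It is only an illustration; the actual pattern and mapping used in the webinar's test setup are not specified here.

```cpp
// Minimal software model of a PRBS31 generator with pairs of bits mapped onto
// the four PAM4 levels. Illustrative only; not tied to any specific test setup.
#include <cstdint>
#include <cstdio>

class Prbs31 {
public:
    explicit Prbs31(uint32_t seed = 0x7FFFFFFF) : state_(seed) {}
    int next_bit() {
        int bit = ((state_ >> 30) ^ (state_ >> 27)) & 1;  // taps for x^31 + x^28 + 1
        state_ = ((state_ << 1) | bit) & 0x7FFFFFFF;
        return bit;
    }
    // Two PRBS bits per unit interval, gray-mapped to levels {-3, -1, +1, +3}.
    int next_pam4_symbol() {
        static const int gray_levels[4] = {-3, -1, +1, +3};
        int msb = next_bit();
        int lsb = next_bit();
        return gray_levels[(msb << 1) | (msb ^ lsb)];
    }
private:
    uint32_t state_;  // 31-bit shift register state
};

int main() {
    Prbs31 gen;
    for (int i = 0; i < 8; ++i) printf("%+d ", gen.next_pam4_symbol());
    printf("\n");
    return 0;
}
```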

These configurations are using cutting-edge, 112 Gbps PAM4 data. The results presented in the webinar are spectacular. If high-performance channels are in your future, you really need to see this. Precision, high-performance SerDes, advanced channel design and aggressive signal integrity methods all play a role here.

To learn more

The results presented in this webinar will wow you. These are real, live physical systems, not simulations. The implementations presented will give you courage to embark on your next close-to-impossible design project. Those are the best kind.

The webinar will be broadcast on Thursday, June 2, 2022, from 11:00 AM – 12:00 PM EDT. You can view the replay here. Now you know how to become an expert on 112 Gbps PAM4 implementation with real-world case studies.


Podcast EP80: The Future of Silicon Photonics with Dr. Anthony J. Yu
by Daniel Nenni on 05-18-2022 at 8:00 am

Dan is joined by Dr. Anthony J. Yu, vice president of the Computing and Wired Infrastructure (CWI) Business Unit at GlobalFoundries (GF), where he is responsible for providing differentiated photonic manufacturing services and solutions to clients across multiple industries. Prior to being named VP of CWI, Dr. Yu was vice president of GF’s Aerospace and Defense Business Unit. Before joining GF, he held multiple executive positions at IBM, including vice president of Semiconductor Technology for Engineering and Technology Services.

Dan and Anthony explore the applications of silicon photonics, both today and what will be coming soon. The applications, both current and future, as well as the players, both established and emerging, are all discussed. The impact of this technology is substantial.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Why Traceability Now? Blame Custom SoC Demand
by Bernard Murphy on 05-18-2022 at 6:00 am


In the SoC world, we can’t believe our good luck. Every product maker now wants bespoke silicon solutions with the most advanced AI, communications, SLAM, etc. Which is fantastic for business, but this level of demand also drags us into a new level of accountability, especially in requirements traceability. Time was that only software teams had to worry about the headaches of traceability. Hardware platforms used branded devices such as microprocessors with proven track records; variability at most was in board design. However, in a scramble for functional differentiation at acceptable power and cost, product builders now demand both custom software and hardware stacks. Inevitably, traceability then crosses into hardware. Which is why SoC developers are now hearing about this topic.

A review

Traceability is a concept which mandates tracking between initial requirements and implementation as represented in logic design, verification and documentation. This method is used widely in logistics, food processing and many other domains. In electronic systems development, system builders first pushed traceability to control compliance in software development and verification, and more recently into SoC design as well. This level of tracking is active already in safety-critical automotive, aerospace and medical markets. Expect to see similar moves around security-sensitive products also.

An enforceable system must be based on structured requirements rather than unstructured natural language specifications. In support of this objective, systems designers have adopted tools like IBM DOORS and Jama Connect. Using such tools, they can describe hierarchically and concisely what they expect will satisfy their objectives for the complete system. A supplier architect will typically further elaborate a subset of these requirements applicable to the supplier's deliverable, leveraging prior understanding of their own organization's products and strengths. Further elaboration is possible inside subsystems.

Top-level requirements are decomposed hierarchically and kept concise at each level, simplifying locating and checking correspondence between a requirement and its implementation. This structure also makes it easy to isolate where and why something went wrong. Was it a problem in software, a sensor, or one of multiple SoCs? Maybe there was a bug in the memory map? Requirements systems give a system designer the means to pin down a possible cause without having to read through countless manuals written by multiple suppliers, with the imprecision of natural language, and without having to depend on appeals to trust from suppliers.

Entry-level traceability support

The simplest solution for traceability is a traceability matrix. This is typically a spreadsheet, listing one requirement per row with collateral information (e.g., related files and line numbers) in columns. Spreadsheets can be organized hierarchically and can become quite large and sophisticated.
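
As a purely illustrative sketch (not any particular tool's schema), here is the kind of record a row in such a matrix holds. Note that the links are just file paths and line numbers, which is exactly what becomes brittle as the design evolves, as discussed below.

```cpp
// Hypothetical representation of one traceability-matrix row. The links are
// plain file paths and line numbers, so they go stale whenever a design,
// testbench, or document is edited.
#include <string>
#include <vector>

struct TraceLink {
    std::string artifact_path;  // RTL file, testbench, or spec document
    int line_number;            // brittle: shifts when the file changes
};

struct RequirementRow {
    std::string id;         // e.g. "REQ-MEM-042" (made-up identifier)
    std::string parent_id;  // hierarchical decomposition
    std::string text;       // the requirement statement itself
    std::vector<TraceLink> design_links;
    std::vector<TraceLink> verification_links;
    bool verified = false;
};
```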

Manual solutions like this are an obvious starting point but quickly run out of gas. Tracking 100 requirements isn’t a problem. Tracking 10,000 across multiple design, verification and documentation teams would be a huge problem without some level of semantic connection between the spreadsheet, design, test and documentation data. From system developers to SoC architects, translating higher-level requirements to lower-level requirements is a manual step; spreadsheets won’t catch mistakes at these transitions. What happens if a designer makes what seems a harmless change, without being aware of some dependency in another spreadsheet? Can the product maker see the disconnect or will they only trip over it when they run their application software and find it doesn’t work?

There isn’t an infallible answer to these questions, but there is a better answer than spreadsheets. That’s where Harmony Trace™ from Arteris IP comes in.

Professional traceability support

Clearly, a professional solution must support semantically aware connections between requirements and all aspects of the design implementation. Some of these will be derivable connections, given sufficient understanding of the design. In modern SoC design, that generally means down to the IP level. Others will require expert design understanding of system objectives. Generally, the majority will be derivable from a sufficiently structured starting point; think of memory map and register map detail as examples.

Semantically Aware SoC Traceability

These data and traceability connections should remain current as the design evolves, which requires that they should be semantically aware. Traceability can’t simply be to design files and line numbers. If an IP or an address offset changes, that detail should track automatically in traceability without need for designer updates to keep links current.

The link should be bidirectional. Requirements evolve through improved understanding or through necessary updates at the OEM level. Such changes should trickle down, automatically flagging affected design teams. Conversely, if a designer, verifier or documentation writer makes a change to an artifact within the scope of a requirement but on which the requirement is silent, perhaps the system designer should be notified anyway. Maybe that change is safe, maybe not?

There is great opportunity for SoC design organizations, but that opportunity comes with more accountability: being able to prove they built what they were told to build, down to quite a fine level of detail. Arteris IP has a solution for those design teams.


Meet Kandou’s Frank Lavety, Behind the Scenes Point Person for Timely Product Delivery
by Lauro Rizzatti on 05-17-2022 at 10:00 am

Frank Lavety

I learned about Kandou a year ago and liked what I heard, as should anyone who wants higher res displays and faster downloads from their electronic devices. I’ve been tracking Kandou since and believe it’s living up to its goal to be the undisputed innovator in high-speed, energy-efficient chip-to-chip link solutions to improve the way the world connects and communicates.

My blog last year looked at Kandou's Matterhorn USB-C multiprotocol retimers with USB4 support. Two variations of Matterhorn, named for the famous Alpine peak, have already been adopted and successfully implemented by OEMs and ODMs developing mobile and desktop PCs. Matterhorn often overshadows Kandou's equally impressive Chord signaling technologies, optimized for high-speed, ultra-low-power chiplet and chip-to-chip interconnects and delivered as blocks of IP.

Recently, I chatted with Frank Lavety, Kandou's General Manager and the point person behind the scenes ensuring timely delivery of its entire product line. He described to me how USB4, the latest USB standard, will do even more to help quench our thirst for information. As an aside, Frank noted that, with a worldwide workforce, it's nothing short of remarkable that Kandou's teams worked efficiently and with little productivity impact through a pandemic scenario few of us imagined.

Frank joined Kandou in 2018 as VP Operations initially responsible for building Kandou’s fabless manufacturing operations and in 2019 assumed the role of General Manager, leading engineering, manufacturing and the commercialization of Kandou’s Matterhorn product family.

Here is a condensed version of our conversation.

Q: Frank, tell us about your background.

A: After graduating from University, I spent seven years with the medical device manufacturer Haemonetics Corporation, where I learned so much about high-quality, cost-efficient manufacturing. After finishing my MBA, I returned to Scotland from Boston, joining Wolfson Microelectronics in 2001, a fabless semiconductor start-up based in Edinburgh. Wolfson successfully IPO’d on the FTSE in 2003, eventually reaching a market cap of more than $1.5 billion.

During my 10 years at Wolfson, I established global manufacturing operations to support annual revenues in excess of $230 million. As VP Operations, I shipped billions of mixed-signal ICs to the world's largest consumer electronics brands, establishing Wolfson as a world leader in audio technologies.

In 2012, I was looking for a new technology challenge and joined Adlens Ltd., an Oxford University (Oxford, U.K.) spin out developing adaptive lens technologies used in eyewear and AR/VR applications, where I was COO and CEO before joining Kandou two years ago.

Q: What attracted you to Kandou?

A: It reminded me of my early days at Wolfson and the satisfaction I had building a very successful global business from the ground up. I also saw similarities to my time at Oxford and the fact that Kandou is an EPFL spin out.

Furthermore, when I met with CEO and founder Amin Shokrollahi and the Board, I was inspired by the energy behind the vision. I felt strongly that I could add a depth of experience and a broad range of skills to help Kandou navigate from start-up to market leader.

Q: What surprised you most about Kandou?

A: The passion and ideas around revolutionizing wired connectivity, the talent pool we built and the genuine interest from customers in our technologies.

Q: What differentiates Kandou?

A: The vision around disrupting wired connectivity and challenging the status quo. Also, the charisma of Amin Shokrollahi and the culture he and the team have developed.

Q: How have the semiconductor supply chain challenges affected Kandou's ability to deliver products?

A: We correctly anticipated strong demand for Matterhorn and secured foundry capacity and backend IC packaging and testing services to meet all 2022 customer requirements. We worked hard to secure capacity commitments to support our customers’ volume requirements through 2022 and began engaging with our tier 1 supply chain early.

Q: What keeps you up at night?

A: I’m constantly thinking about what our customers really want. I then reflect on our decisions and their significance to our business. I strongly believe that being vigilant and nimble drives innovation which in turn creates radically successful businesses. Flawless execution is key!

Thanks for your time, Frank.

About Kandou
Kandou, an innovative leader in high-speed, energy-efficient chip-to-chip link solutions to improve the way the world connects and communicates, is revolutionizing wired connectivity with greater speed and efficiency. It enables a better-connected world by offering disruptive technology through licensing and standard products for smaller, more energy efficient and cost-effective electronic devices. Kandou has a strong IP portfolio that includes Chord™ signaling, adopted by the OIF and JEDEC standards organizations. Kandou offers fundamental advances in interconnect technology that lower power consumption and improve the performance of chip links, unlocking new capabilities for customer devices and systems. Kandou is a fabless semiconductor company founded in 2011 and headquartered in Lausanne, Switzerland, with offices in Europe, North America and Asia.

Also Read:

CEO Interview: Chuck Gershman of Owl AI

CEO Interviews: Dr Ali El Kaafarani of PQShield

CEO Interview: Dr. Robert Giterman of RAAAM Memory Technologies


224G Serial Links are Next
by Daniel Nenni on 05-17-2022 at 6:00 am


The tremendous increase in global data traffic over the past decade shows no sign of abating.  Indeed, the applications for all facets of data communications are expanding, from 5G (and soon, 6G) wireless communications to metropolitan area networks serving autonomous vehicles to broader deployment of machine learning algorithms.  As a result, there is a continuing need for faster signaling bandwidth among data centers – i.e., within racks, through the switch hardware, and between distant sites across a network.

There is also a greater diversity in the physical interfaces and signaling media to support communication between nodes, from long reach (LR) to ultra-short reach (USR), and from copper traces on an advanced package or printed circuit board to electro-optical conversion and optical cables.

The foundation for supporting these increasing signal data rates is the SerDes IP available for integration in SoCs and switch hardware (excluding the unique short-reach cases where clock-forwarded, data-parallel interfaces may apply).

At the recent DesignCon conference at the Santa Clara Convention Center, Wendy Wu, Product Marketing Director in the IP Group at Cadence, provided both a historical perspective and a future roadmap outlook for serial data communications in her presentation “The Future of 224G Serial Links”.  This article summarizes the highlights of her talk.

224G links are critical in the next-generation datacenter

The first diagram below depicts a spine-leaf data center network architecture – the server to top-of-rack switch interface is in blue, and the rack-to-spine bandwidth interface is in orange.  The spine-leaf architecture is emerging in importance, due to the increasing “east-west” data traffic between servers for distributed applications.

The second figure below provides a roadmap for the switch hardware, highlighting the upcoming 224G SerDes generation, in support of a 102.4T switch bandwidth. This switch design incorporates 512 lanes of 224G SerDes to provide 102.4Tbps, or 800G/1.6T Ethernet utilizing 4/8 transceiver lanes.
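
The arithmetic behind those figures is straightforward if each 224G SerDes lane is assumed to carry roughly 200 Gbps of Ethernet payload once overhead is accounted for; the short sketch below simply checks the lane counts quoted above under that assumption.

```cpp
// Sanity check of the lane math quoted above, assuming ~200 Gbps of Ethernet
// payload per 224G SerDes lane after overhead (an assumption, not a spec value).
#include <cstdio>

int main() {
    const double payload_per_lane_gbps = 200.0;
    printf("512 lanes -> %.1f Tbps switch bandwidth\n",
           512 * payload_per_lane_gbps / 1000.0);                  // 102.4 Tbps
    printf("800G port -> %.0f lanes\n", 800.0 / payload_per_lane_gbps);   // 4
    printf("1.6T port -> %.0f lanes\n", 1600.0 / payload_per_lane_gbps);  // 8
    return 0;
}
```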

Switch Hardware Implementation Options

Recent “More than Moore” silicon and advanced packaging technology enhancements have enabled sophisticated, heterogeneous 2.5D implementations for the next generation of switch hardware.  Wendy presented potential design examples, as shown below.

The 224G long-reach differential SerDes lanes are illustrated in blue.  Extra/ultra short-reach die-to-die interfaces are shown in pink.  The implementation on the right in the figure above adopts a “chiplet” approach, where the silicon technology solution for the core SoC and the 224G SerDes tiles could use different process nodes.

224G Standards Activity

Two organizations are collaborating on the standards activity for the 224G datarate: the IEEE 802.3df working group and the Optical Internetworking Forum (OIF) consortium are both working on the definition of the 224G interface.

The goal of their efforts is to enable interoperability across the SerDes interface, encompassing:  the physical media layer (both copper and fiber cable), data link layer, and link management parameters.

The OIF table below indicates some of the key targets for the 224G standard, and how it compares to previous datarate definitions.

Note that there is still active discussion on several parameters – specifically, the signal modulation format and the allowable electrical channel insertion loss for the physical medium and reach target.  The transition to higher datarates introduced pulse-amplitude modulation with 2 bits per unit interval – i.e., PAM4 – using four signal levels replacing the 1 bit per UI NRZ signaling used previously.  The PAM definition for 224G is still being assessed, with difficult tradeoffs between:

  • more bits per UI and more signal levels (e.g., PAM-6, PAM-8, or a quadrature phase and amplitude modulation)
  • signal-to-noise ratios
  • channel bandwidth required
  • power dissipation per bit
  • target bit error rate (pre-forward error correction)
  • link latency

More bits per UI reduces the channel bandwidth requirements, but increases the SNR sensitivity.  Continuing with PAM-4 leverages compatibility with the 112G interface, but would then require 2X the channel bandwidth (as the UI duration will be halved).
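
A quick back-of-the-envelope calculation makes the tradeoff concrete. Treating bits per UI as log2 of the number of levels (the theoretical maximum; practical PAM-6 schemes typically encode slightly less), the symbol rate and first Nyquist frequency for a 224 Gbps lane work out as shown below. This is illustrative arithmetic only, ignoring FEC and coding overhead.

```cpp
// Symbol (baud) rate and approximate Nyquist frequency for a 224 Gbps lane
// under different modulation orders. Illustrative only; ignores FEC and
// line-coding overhead.
#include <cmath>
#include <cstdio>

int main() {
    const double lane_gbps = 224.0;
    const int levels[] = {2, 4, 6, 8};  // NRZ, PAM-4, PAM-6, PAM-8
    for (int m : levels) {
        double bits_per_ui = std::log2(static_cast<double>(m));
        double baud_gbd = lane_gbps / bits_per_ui;  // symbols per second
        double nyquist_ghz = baud_gbd / 2.0;        // first Nyquist frequency
        printf("%d levels: %.2f bits/UI, %6.1f GBd, Nyquist ~%.1f GHz\n",
               m, bits_per_ui, baud_gbd, nyquist_ghz);
    }
    return 0;
}
```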

Note that the “raw” bit error rate for the high datarate PAM signaling interface is unacceptably high.  A forward error correction (FEC) symbol encoding method is required, extending “k bits” in a message to “n bits” at the transmitter, with error detection and correction decoding applied at the receiver.
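
For a sense of the overhead and protection FEC adds, the Reed-Solomon RS(544,514) "KP4" code used with today's 100G-class PAM4 Ethernet lanes is a useful reference point; the FEC choice for 224G is not yet finalized, so this is only an example.

```cpp
// Code-rate arithmetic for RS(544,514) "KP4" FEC, shown only as a reference
// point; the FEC for 224G links has not been finalized.
#include <cstdio>

int main() {
    const double n = 544.0;  // encoded symbols per codeword
    const double k = 514.0;  // message symbols per codeword
    printf("code rate       : %.4f\n", k / n);                  // ~0.9449
    printf("overhead        : %.2f %%\n", (n / k - 1.0) * 100); // ~5.84 %
    printf("correctable syms: %d per codeword\n",
           static_cast<int>((n - k) / 2));                      // 15 symbols
    return 0;
}
```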

The table above includes a forecast for when the interoperability targets for 224G will be finalized.

Physical Topologies for 224G Links

Wendy presented several physical topologies for the 224G link (in orange), including some unique approaches, as shown below.

The top part of the figure above illustrates how 224G SerDes would be used in more traditional single PCB-level implementations – e.g., chip-to-chip (possibly with a mezzanine card for an accelerator, medium reach), and chip-to-optoelectronic conversion plug-in module (very short reach).

The bottom part of the figure shows two emerging designs for serial communications:

  • a die-to-die, extra short reach integration of the optoelectronic circuitry in a novel 2.5D package accommodating optical fiber coupling
  • the use of an “overpass cable” on the PCB itself to reach the connector, enabling a long(er) reach

The signal insertion loss for the combination of the SoC-to-overpass connector plus the overpass cable is significantly improved compared to conventional PCB traces (at the frequency of 224G PAM-4 signaling).  The overpass connector is mounted on the PCB adjacent to the SoC, or potentially, integrated into an expanded package.  The twin-axial overpass cable carries the differential SerDes data signal.

DSP-based Receiver Equalization

In order to double the SerDes speed from 112G to 224G, the analog front-end bandwidth needs to increase by 2X for PAM4, or by 1.5X for PAM6, to allow higher frequency components to pass through. Improved accuracy and reduced noise also need to be considered for the analog-to-digital converter (ADC). To compensate for the additional loss due to the higher data rate (i.e., higher Nyquist frequency), stronger equalization might be required, which could mean more taps in the FFE and DFE. In addition, advanced algorithms such as maximum likelihood sequence detection will become more important at 224G. Last but not least, because the UI is reduced by 50%, ideally the PLL clock jitter should be reduced by 50% as well.
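
As a simple illustration of what "more taps in the FFE" means, here is a minimal software model of a feed-forward equalizer: a FIR filter run over the received samples to cancel inter-symbol interference. The tap values shown are arbitrary placeholders; a real DSP-based receiver adapts them continuously (for example with an LMS loop) and combines the FFE with a DFE.

```cpp
// Minimal software model of a feed-forward equalizer (FFE): a FIR filter
// applied to received samples. Tap weights here are placeholders; a real
// receiver adapts them and pairs the FFE with a decision-feedback equalizer.
#include <cstdio>
#include <vector>

std::vector<double> ffe_equalize(const std::vector<double>& samples,
                                 const std::vector<double>& taps) {
    std::vector<double> out(samples.size(), 0.0);
    for (size_t n = 0; n < samples.size(); ++n) {
        double acc = 0.0;
        for (size_t k = 0; k < taps.size() && k <= n; ++k) {
            acc += taps[k] * samples[n - k];  // discrete convolution
        }
        out[n] = acc;
    }
    return out;
}

int main() {
    // One precursor-style tap, a main cursor, and two postcursor taps.
    std::vector<double> taps = {-0.10, 1.00, -0.25, 0.05};
    std::vector<double> rx = {0.1, 0.9, 1.1, 0.2, -0.8, -1.0, 0.0, 0.7};
    for (double y : ffe_equalize(rx, taps)) printf("%+.3f ", y);
    printf("\n");
    return 0;
}
```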

Wendy indicated, “Cadence is an industry leader in DSP-based SerDes.  There is extensive expertise in 112G SerDes IP development, spanning 7nm, 6nm, and 5nm process nodes.  Customer adoption has been wide-spread.  That expertise will extend to the upcoming 224G SerDes IP, as well.” 

The drive for increased SerDes lane bandwidth continues at a torrid pace, with the 224G standard soon to emerge, enabling higher data traffic within and between servers.

For more information on Cadence high-speed SerDes IP offerings, please follow this link.

Also Read:

Tensilica Edge Advances at Linley

ML-Based Coverage Refinement. Innovation in Verification

Cadence and DesignCon – Workflows and SI/PI Analysis


Cybersecurity Threat Detection and Mitigation
by Daniel Payne on 05-16-2022 at 10:00 am


Every week in the technology trade press I read about cybersecurity attacks against web sites, apps, IoT devices, vehicles, and even ICs. At the recent IP SoC Silicon Valley 2022 event in April I watched a cybersecurity presentation from Robert Rand, Solution Architect for Tessent Embedded Analytics at Siemens EDA. Common jargon in cybersecurity discussions includes terms like vulnerability, threat, attack, risk, asset, and attack surface.

Regulated industries with safety-critical usage like automotive, aerospace and medical all have a keen awareness of cybersecurity threats, so there’s a culture of detection and mitigation. There’s even a national vulnerability database maintained by NIST (National Institute of Standards and Technology), and they show a growing trend of vulnerabilities over time.

Embedded Analytics

Hardware security allows an electronic system to detect a threat quickly, understand it, and then respond, avoiding a dangerous outcome by placing the system into a safe mode and keeping everyone safer. The specific technology that Siemens EDA brings to cybersecurity is called Embedded Analytics: extra hardware that gives developers and operators of embedded systems the visibility needed to see what's happening inside their SoC, and the ability to quickly respond to a cyber threat. This added analytics IP inside your SoC subsystem connects to all of the IP, giving it system-level visibility:

Tessent Embedded Analytics

This added IP is customized for your specific SoC, and the data it analyzes can be communicated over existing I/O (USB 2, USB 3, JTAG, Aurora SerDes, PCI Express, Ethernet). There's an Eclipse-based IDE tool that provides SW/HW debug capabilities for the complete system, or you can use a Python API to get the visibility and control desired.

SystemInsight IDE

Adding embedded analytics gives your SoC team detailed insight into how their subsystem is being used in real time, provides visibility of the whole system from boot, allows monitoring of the interconnect at a transaction level, and lets you control and trace each processor. All of the common ISAs and interconnect protocols are supported, so you're not wasting any time. You get to decide which subsystems are connected to the embedded analytics.

Sentry IP

To detect and respond to cybersecurity threats in just a few clock periods there's the Sentry IP, a hardware block placed as a bus manager interface. The Sentry IP filters all transactions, passing through only safe transactions, blocking illegal transactions, and redirecting when an attack is detected.

Sentry IP

There are 1,024 possible matches to identify vulnerabilities, so this system can adapt to new threats as they emerge.
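
To make the filtering idea concrete, here is a minimal, hypothetical sketch of rule-based transaction filtering at a bus interface. It is not the actual Sentry implementation or its configuration format; it only illustrates matching transactions against a table of rules and allowing, blocking, or redirecting them.

```cpp
// Hypothetical sketch of rule-based transaction filtering at a bus interface.
// Not the Tessent Sentry implementation; it only illustrates the allow/block/
// redirect idea described above.
#include <cstdint>
#include <vector>

enum class Action { Allow, Block, Redirect };

struct MatchRule {
    uint64_t addr_lo, addr_hi;  // address range this rule covers
    bool write;                 // applies to writes (true) or reads (false)
    Action action;
};

struct Transaction {
    uint64_t address;
    bool write;
};

// First matching rule wins; unmatched transactions are blocked by default.
Action filter(const Transaction& t, const std::vector<MatchRule>& rules) {
    for (const auto& r : rules) {
        if (t.write == r.write && t.address >= r.addr_lo && t.address <= r.addr_hi)
            return r.action;
    }
    return Action::Block;
}
```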

Embedded Security Subsystem

Robert shared an example system that had several CPUs, RAM, Flash, CAN and interconnect components shown in light blue:

Embedded Security System

The embedded analytics IP modules are shown in teal, and the Bus Sentry blocks are outlined in bright green. The subsystem can be run in either the application domain or the security domain. The sentry blocks enforce your specific filters for each connected IP block. When your subsystem runs in the application domain you have complete visibility of what's going on, provided by embedded analytics:

Application Domain

In the security domain all of the bus sentries are activated, providing hardware-level threat detection and mitigation:

Security domain

Summary

Cybersecurity threats are a growing concern for companies providing electronic systems, but there's good news: adding Embedded Analytics hardware to detect, filter, and mitigate threats is now possible. The Siemens EDA Tessent Embedded Analytics approach provides analytic tools for building signatures of normal operation, so that signatures of threats can be quickly detected. Monitors can continuously collect fine-grained data to detect new threats and enable your security team to develop mitigations.

The Bus Sentry IP blocks allow your chip to mitigate any threats at the hardware level. Over time you can always add new filters and mappers to provide improved security hardening. On-chip monitors and sentries provide new promise against cybersecurity attacks.

There’s also a 19 minute YouTube video of this presentation.



Bringing Prototyping to the Desktop
by Kalar Rajendiran on 05-16-2022 at 6:00 am

Corigine MimicTurbo GT Card

A few months ago, I wrote about Corigine and their MimicPro FPGA prototyping system and MimicTurbo GT card solutions. That article went into the various features and benefits of the two solutions, with the requirements for next-generation prototyping solutions as the backdrop. You can access that article here. With 250+ employees spread over 10+ sites around the world, the startup is well established. It is on a mission to make prototyping solutions accessible to a wider audience of engineers, right on their desktop. Customers have also been providing feedback to Corigine based on their hands-on experience since the launch of these solutions last year.

I followed up with Corigine to identify some key differentiating aspects of their solutions compared to traditional competing solutions. This post is based on my conversation with Mike Shei, Head of R&D, and Ali Khan, VP of Business & Product Development.

Competitive Comparison

Corigine has incorporated a lot of runtime capabilities into their prototyping solutions compared to traditional prototyping solutions such as Synopsys HAPS. For example, the "Local Debug and System Scope" functionality provides high visibility into data right on the desktop, for quickly identifying and resolving bugs. The memory analyzer capability provides backdoor runtime memory access to both user design memories and external DDR memories. HAPS does not offer these kinds of capabilities.

Corigine's solutions use the ASIC's clock scheme, compared to Cadence's Protium, which uses a cycle-based fast-clock scheme when running the design. Because the real user clock will always be half of the fast clock, performance is degraded by 50% from a system performance point of view. Protium also relies on the Palladium platform for its debug capability, whereas Mimic does not rely on any emulator for debug.

Corigine's auto-partitioning tool takes a system-level point of view to minimize the hops when routing between FPGAs in a multi-FPGA design. Partitioning tools of competing solutions treat logical partitioning, physical partitioning, and system routing as three independent steps, whereas Corigine considers these three aspects in an integrated fashion, thereby yielding better system performance.

Customer Feedback

Software Development

Competitive prototyping and emulation systems are expensive, so access by engineers is prioritized. Corigine is hearing from its customer base that software engineers are competing with hardware verification engineers for access to these resources, which puts a crimp on the bigger goal of hardware/software co-development and co-verification. Hardware/software co-verification is a very important aspect of product development, allowing integration of software with hardware well before final chips and boards become available. A good prototyping system should enable not only hardware verification but also hardware validation, software development, debugging, and hardware/software integration.

With Corigine's MimicTurbo GT card solution, software engineers now have access right on their desktop, wherever in the world they may be based. They can do their software co-development right from their desktop without having to wait for access to expensive emulation platforms.

Customers also want their software developers to have access in a distributed environment and the MimicTurbo GT card solution makes that possible.

Scalability

While each MimicTurbo GT card can handle a design up to 48 million gates, multiple cards can be used to handle larger designs. Corigine uses QSFP cables to interconnect multiple servers to deliver this expanded capability. Customers are able to tap just the right amount of hardware resources as called for.

Time to Market

There are many scenarios where customers have multiple versions of cores that need to be verified. For instance, multiple customized versions of RISC-V cores. These multiple verification runs can be done simultaneously on a number of desktops with MimicTurbo GT cards, thereby cutting down on time.

Opportunity For IP Vendors

IP vendors could verify their IP cores using the MimicTurbo GT card solution and then provide the card and the IP cores to their customers for quick verification by them. This should be attractive to customers as it would save them time and effort from having to set up a new verification system.

Also read:

A Next-Generation Prototyping System for ASIC and Pre-Silicon Software Development

Scaling is Failing with Moore’s Law and Dennard

High-speed, low-power, Hybrid ADC at IP-SoC

Why Software Rules AI Success at the Edge


Taxis Don’t Have a Prayer
by Roger C. Lanctot on 05-15-2022 at 6:00 am


Ever since the emergence of Uber and DiDi and Gett and Yandex and all the rest of the app-based ride hailing operators, I have been worried about taxis. I had a sneaking suspicion that the ride hail operators were exploiting a loophole that put taxis at a disadvantage, creating a mortal threat.

My suspicions were borne out by taxi driver protests throughout the world and by individual driver suicides. In this context, I was heartened when map-maker HERE introduced the world to HERE Mobility – an Israeli-based operation dedicated to the creation of a fleet management platform – which had all the earmarks of a global taxi aggregation solution. A way for taxi Davids around the world to take on ride hailing Goliaths maybe?

HERE Mobility looked to me like a global taxi-based alternative to ride hailing – a way for taxi operators to fight back against the ride hailing phenomenon. While HERE Mobility’s mission was broader than this, the ultimately doomed initiative (HERE Mobility shuttered its operations almost two years ago) highlighted the basic challenges facing the taxi industry in its efforts to take on app-based ride hailers.

That challenge – competing with an app – was further highlighted for me upon my arrival in Israel Saturday, to attend Ecomotion. While local taxi regulations in Israel supposedly require all taxis to accept credit card payments, the reality is something quite different and it highlights the fatal flaw in the taxi eco-system.

As I was entering my taxi at Ben Gurion airport I asked the driver if he would accept credit card payment, as I was shekel-less. We were well on our way to Tel Aviv before it became clear that the driver did not have a means to accept credit card payment. He suggested we could stop at one of the ATMs that are ubiquitous in the city.

There are few things sketchier than a taxi driver driving a customer to an ATM to obtain cash to pay for the ride. It has all the earmarks of a hostage situation – particularly considering the need to leave the vehicle along with one’s possessions – to obtain that essential cash.

Everything worked out fine and there were no hard feelings, but the experience was a head scratcher. Didn’t this taxi driver understand that cashless payment was one of the core value propositions of app-based transportation from operators such as Gett and Yango in Israel?

The overwhelming convenience of a line of waiting taxis at the airport – or any other similar taxi “rank” – is vastly diminished if those taxis are insisting on cash payment. But the challenges facing taxi operators are even greater than that.

The cards are indeed stacked against individual taxi operators because the sector is both highly regulated and fragmented. The appeal of HERE Mobility lay in its ability to attract hundreds of taxi operators around the world to its aggregation platform.

Aggregation alone, though, was not enough to overcome the structural impediments to successful competition with ride hailing operators. Taxi app operator Curb has learned this reality the hard way.

In the U.S., where Curb operates, taxi regulators impose two burdens that are nearly impossible for drivers to overcome. First, taxis are generally not allowed to quote a fixed fare for a ride – with the exception of zoned fares which are predetermined in many markets. And, second, taxi operators are usually confined to operating within or out of a specific geographic area.

Curb has sought to overcome the first restriction by obtaining waivers – one market at a time in the U.S. – to allow participating drivers to quote fixed fares for a particular trip. Quoting a fixed fare is simply taken for granted for ride hail operators – but generally forbidden for taxis. The only thing working in favor of taxis, in respect to fares, is the inability of taxis to use surge pricing – but, if the traffic is heavy, the regular taxi passenger will pay more anyway.

As for the second issue, geographic constraints, taxi drivers' hands are simply tied. A taxi operating out of Washington, DC, for example, can take a fare to the suburbs (which include three different states), but cannot turn around and bring a fare from the suburbs back into the city – the Washington-based taxi must dead-head back to DC.

Ubers and Lyfts can pick up fares from almost anywhere and drop them almost anywhere with few, if any, constraints. We won’t even bother to dwell on the fact that these drivers are using their own or a rented car with limited regulatory oversight of the vehicle’s condition or operation.

All of these impediments – nearly unlimited geographic operating area, fixed pricing, personal/rental vehicle operation, non-existent regulatory oversight, app-based cashless convenience – mean that taxis simply do not have a prayer. Without significant regulatory review – less regulations for taxis or more regulations for ride hail operators – the days of taxis are limited as is their scope of operation.

The one policy strategy that seems to make sense and to have some enduring credibility is the option – taken in countries such as Germany and Turkey – to require ride hail companies to cooperate with existing taxi operators. This seems to be an effective working solution – but not one that has been universally adopted.

Taxi operators – highly regulated and fragmented – with taxi drivers clinging to their cash-centric ways of doing business, will need help to survive. If regulators are unwilling to impose a co-operative approach, then it’s time to lighten the regulatory burden to give the sector a fighting chance. This is especially true if existing regulations are either not enforced or routinely ignored.

We need taxis. We should not forget that ride hailing is a luxury proposition and transportation a societal necessity. The ride hailers should not be allowed to skim off the cream and leave the less profitable and less favorable fares to taxis. That’s not fair.

Also read:

The Jig is Up for Car Data Brokers

ITSA – Not So Intelligent Transportation

OnStar: Getting Connectivity Wrong