
Is your career at RISK without RISC-V?

by Sivakumar PR on 12-05-2022 at 6:00 am


I am delighted to share my technical insights into RISC-V in this article, to inspire and prepare the next generation of chip designers for the open era of computing. If you understand how complex electronic devices like desktops and smartphones are built around processors, you will be even more interested in learning about and exploring instruction set architectures (ISAs).

Usually, we prefer Complex Instruction Set Computer (CISC) architectures for desktops/laptops and Reduced Instruction Set Computer (RISC) architectures for smartphones. OEMs like Dell and Apple have been using x86 CISC processors in their laptops. Let me explain the laptop design approach. The motherboard has a multicore CISC processor as its main component, connected to GPUs, RAM, storage memory, and other subsystems and I/O interfaces. The operating system runs multiple applications in parallel on the multicore processor, managing memory allocation and I/O operations.

This is how we can realize any electronic system using a processor. For smartphones, however, we prefer a System-on-a-Chip (SoC) built around RISC processors, as it helps us reduce the motherboard’s size and power consumption. Almost the entire system, with multicore RISC CPUs, GPUs, DSPs, wireless and interface subsystems, SRAMs, flash memories, and IPs, is implemented on an SoC. As an OEM trendsetter, Apple follows this smartphone SoC design approach even for its MacBooks: all the latest MacBooks use its M-series SoCs, which are based on Arm’s RISC processors.

So, it’s evident that proprietary ISAs, Intel’s x86 and Arm’s RISC processors, have been the choice of OEMs like Apple, Dell, Samsung, and others. Why, then, do we need an open ISA like RISC-V beyond these well-proven proprietary ISAs?

Today, everyone uses SoCs in their laptops and smartphones. Such complex SoCs demand both general-purpose and specialized processors. To realize chips like Apple’s M-series SoCs, we need different kinds of processors: RISC CPUs, GPUs, DSPs, image processors, machine learning accelerators, and security and neural engines, based on various general-purpose and specialized ISAs from multiple IP vendors, as shown in figure 1.

Figure 1: Apple M1 SoC. Ref: AnandTech

In this scenario, the major challenges would be:

  1. Choosing and working with multiple IP vendors is difficult.
  2. Different IP vendors have different licensing schemes, and engineers do not have the freedom to customize the ISAs and designs as they prefer to meet their design goals.
  3. Not all specialized ISAs will survive for long, which affects long-term product support plans and roadmaps.
  4. Software/application development and updates involving multiple ISAs and toolchains are challenging.

RISC-V is a general-purpose, license-free, open ISA with multiple extensions. The ISA is separated into a small base integer ISA, usable as a base for customized accelerators, and optional standard extensions to support general-purpose software development.

You can add your own extensions to realize a specialized processor, or customize the base ISA if needed, because it is open, with no license restrictions. So, in the future, we could create all general-purpose and specialized processors using the single RISC-V ISA and realize any complex SoC.

1. What’s RISC-V, and how is it different from other ISAs?

RISC-V is the fifth major ISA design from UC Berkeley. It is an open ISA maintained by a non-profit organization, RISC-V International, which involves the whole stakeholder community in implementing and maintaining the ISA specifications, golden reference models, and compliance test suites.

RISC-V is not a CPU implementation. It is an open ISA for both general-purpose and specialized processors, completely open and freely available to academia and industry.

The RISC-V ISA is separated into a small base integer ISA, usable by itself as a base for customized accelerators or for educational purposes, and optional standard extensions to support general-purpose software development.

RISC-V supports both 32-bit and 64-bit address space variants for applications, operating system kernels, and hardware implementations. So, it is suitable for all computing systems, from embedded microcontrollers to cloud servers, as mentioned below.

Simple embedded microcontrollers

Secure embedded systems that run RTOS

Desktops/Laptops/Smartphones that run operating systems

Cloud Servers that run multiple operating systems

2. RISC-V Base ISA

RISC-V is a family of related ISAs: RV32I, RV32E, RV64I, RV128I

What RV32I/ RV32E/ RV64I/RV128I means:

RV – RISC-V

32/64/128 – Defines the register width (XLEN) and address space

I – Integer Base ISA

32 Registers for all base ISAs

E – Embedded: Base ISA with only 16 registers

2.1 RISC-V Registers:

All the base ISAs have 32 registers, as shown in figure 2, except RV32E. The RV32E base ISA has only 16 registers, for simple embedded microcontrollers, but the register width is still 32 bits.

The register X0 is hardwired to zero. A special register called the Program Counter (PC) holds the address of the current instruction to be fetched from memory.

As shown in figure 2, the RISC-V Application Binary Interface (ABI) defines standard functions for the registers. Software development tools usually use the ABI names for simplicity and consistency. Per the ABI, saved registers, function arguments, and temporaries are also allocated in the range X0 to X15, mainly so that the RV32E base ISA, which has only the first 16 registers, can still realize simple embedded microcontrollers. The RV32I base ISA has all 32 registers, X0 to X31.

Figure 2: RISC-V Registers and ABI Names. Ref: RISC-V Specification
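To make the mapping concrete, here is a small Python sketch of the register-to-ABI-name table from figure 2; the `abi_name` helper is my own illustration, not part of any RISC-V toolchain:

```python
# ABI names for the 32 integer registers, per the RISC-V calling convention.
ABI_NAMES = (
    ["zero", "ra", "sp", "gp", "tp", "t0", "t1", "t2", "s0", "s1"]
    + [f"a{i}" for i in range(8)]      # x10-x17: function arguments/results
    + [f"s{i}" for i in range(2, 12)]  # x18-x27: saved registers
    + [f"t{i}" for i in range(3, 7)]   # x28-x31: temporaries
)

def abi_name(reg: int) -> str:
    """Return the ABI name for integer register x0..x31."""
    if not 0 <= reg <= 31:
        raise ValueError("RISC-V base ISAs define registers x0..x31")
    return ABI_NAMES[reg]
```

Note how everything RV32E needs (return address, stack pointer, the first arguments, and a few temporaries) sits in x0–x15.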

2.2 RISC-V Memory:

A RISC-V hart (hardware thread / core) has a single byte-addressable address space of 2^XLEN bytes for all memory accesses, where XLEN refers to the width of an integer register in bits: 32, 64, or 128.

A word of memory is defined as 32 bits (4 bytes). Correspondingly, a halfword is 16 bits (2 bytes), a doubleword is 64 bits (8 bytes), and a quadword is 128 bits (16 bytes).

The memory address space is circular, so that the byte at address 2^XLEN −1 is adjacent to the byte at address zero. Accordingly, memory address computations done by the hardware ignore overflow and instead wrap around modulo 2^XLEN.
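This wraparound behavior can be sketched in a few lines of Python; the `effective_address` helper is hypothetical, shown here for RV32 with XLEN = 32:

```python
XLEN = 32
MASK = (1 << XLEN) - 1  # addresses wrap modulo 2**XLEN

def effective_address(base: int, offset: int) -> int:
    """Address arithmetic ignores overflow and wraps modulo 2**XLEN."""
    return (base + offset) & MASK

# The byte at address 2**XLEN - 1 is adjacent to the byte at address 0:
top = (1 << XLEN) - 1
assert effective_address(top, 1) == 0
assert effective_address(0, -1) == top
```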

RISC-V base ISAs have either little-endian or big-endian memory systems, with the privileged architecture further defining bi-endian operation. Instructions are stored in memory as a sequence of 16-bit little-endian parcels, regardless of memory system endianness.

2.3 RISC-V Load-Store Architecture

You can visualize the RISC-V load-store architecture that is based on RISC-V registers and memory, as shown below in figure3.

The RISC-V processor fetches the instruction from main memory at the address in the PC, decodes the 32-bit instruction, and then the ALU performs arithmetic, logic, or memory read/write operations. The ALU results are stored back into the registers or memory.

Figure 3: RISC-V Load-Store Architecture
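As a rough illustration of this fetch-decode-execute flow, here is a toy Python sketch (not a real core) that implements only the ADDI instruction; the `run` and `addi` helpers are my own invention:

```python
def run(memory: bytes, steps: int):
    """Toy fetch-decode-execute loop: ADDI only, for illustration."""
    regs = [0] * 32
    pc = 0
    for _ in range(steps):
        # Fetch: 32-bit little-endian instruction at PC
        inst = int.from_bytes(memory[pc:pc + 4], "little")
        # Decode: fields sit at fixed positions in every format
        opcode = inst & 0x7F
        rd = (inst >> 7) & 0x1F
        rs1 = (inst >> 15) & 0x1F
        imm = inst >> 20
        if imm & 0x800:            # sign-extend the 12-bit immediate
            imm -= 1 << 12
        # Execute + write back (x0 stays hardwired to zero)
        if opcode == 0x13:         # ADDI rd, rs1, imm
            if rd != 0:
                regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
        pc += 4
    return regs

def addi(rd, rs1, imm):
    """Encode ADDI rd, rs1, imm (I-type)."""
    return ((imm & 0xFFF) << 20) | (rs1 << 15) | (rd << 7) | 0x13

prog = addi(5, 0, 7).to_bytes(4, "little") + addi(5, 5, -2).to_bytes(4, "little")
assert run(prog, 2)[5] == 5   # 7 + (-2)
```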

2.4 RISC-V RV32 I Base ISA:

The RV32I base ISA has only 40 unique instructions, and a simple hardware implementation needs only 38 of them. The RV32I instructions can be classified as:

R-Type:  Register to Register instructions

I-Type:   Register-Immediate, Load, JALR, ECALL & EBREAK

S-Type:  Store

B-Type:  Branch

J-Type:   Jump & Link

U-Type:  Load/Add upper Immediate

Figure 4: RV32I Base ISA Instruction Formats

2.5 RISC-V ISA for an optimized RTL Design:

Here I would like to explain how RISC-V ISA enables us to realize an optimized Register Transfer Level design to meet the low-power and high-performance goals.

As shown in figure4, the RISC-V ISA keeps the source (rs1 and rs2) and destination (rd) registers at the same position in all formats to simplify decoding.

Immediates are always sign-extended and are generally packed towards the leftmost available bits in the instruction; they have been allocated to reduce hardware complexity. In particular, sign-extension is one of the most critical operations on immediates (particularly for XLEN > 32), and in RISC-V the sign bit for all immediates is always held in bit 31 of the instruction, so that sign-extension can proceed in parallel with instruction decoding.

To speed up decoding, the base RISC-V ISA puts the most important fields in the same place in every instruction. As you can see in the instruction formats table,

  • The major opcode is always in bits 0-6.
  • The destination register, when present, is always in bits 7-11.
  • The first source register, when present, is always in bits 15-19.
  • The second source register, when present, is always in bits 20-24.

But why are the immediate bits shuffled? Think about the physical circuit which decodes the immediate field. Since it’s a hardware implementation, the bits will be decoded in parallel; each bit in the output immediate will have a multiplexer to select which input bit it comes from. The bigger the multiplexer, the costlier and slower it is.

It’s also interesting to note that only the major opcode (bits 0-6) is needed to know how to decode the immediate, so immediate decoding can be done in parallel with decoding the rest of the instruction.
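The fixed field positions and the bit-31 sign convention can be sketched in Python; the helper names are my own, and the bit positions follow the RV32I formats described above:

```python
def decode_fields(inst: int):
    """Extract the fields that sit at fixed positions in every format."""
    return {
        "opcode": inst & 0x7F,          # bits 0-6
        "rd":     (inst >> 7) & 0x1F,   # bits 7-11
        "rs1":    (inst >> 15) & 0x1F,  # bits 15-19
        "rs2":    (inst >> 20) & 0x1F,  # bits 20-24 (when present)
    }

def imm_i(inst: int) -> int:
    """I-type immediate: inst[31:20], sign-extended from bit 31."""
    imm = (inst >> 20) & 0xFFF
    return imm - (1 << 12) if inst & (1 << 31) else imm

# ADDI x1, x2, -1 encodes as 0xFFF10093
f = decode_fields(0xFFF10093)
assert f["opcode"] == 0x13 and f["rd"] == 1 and f["rs1"] == 2
assert imm_i(0xFFF10093) == -1
```

Because the sign bit is always bit 31, `imm_i` never has to look at the opcode before starting sign-extension, which is exactly the parallelism the ISA designers had in mind.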

2.6 RV32I Base ISA Instructions

3. RISC-V ISA Extensions

All the RISC-V ISA extensions are listed here:

Figure 5: RISC-V ISA Extensions

We follow the naming convention for RISC-V processors as explained below:

RISC-V Processors:  RV32I, RV32IMAC, RV64GC

RV32I:           Integer Base ISA implementation

RV32IMAC:  Integer Base ISA + Extensions: [Multiply + Atomic + Compressed]

RV64GC:       64-bit IMAFDC [G – General Purpose: IMAFD]

64-bit Integer Base ISA + Extensions: [Multiply + Atomic + SP Floating + DP Floating + Compressed]
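This naming convention is mechanical enough to parse; here is a hedged Python sketch (the parser and its return shape are my own illustration):

```python
import re

EXTENSIONS = {"M": "Multiply", "A": "Atomic", "F": "SP Floating",
              "D": "DP Floating", "C": "Compressed"}

def parse_isa_string(name: str):
    """Parse an RV<xlen><base><extensions> ISA string like 'RV64GC'."""
    m = re.fullmatch(r"RV(32|64|128)([IEG])([MAFDC]*)", name)
    if not m:
        raise ValueError(f"not a recognised ISA string: {name}")
    xlen, base, exts = m.groups()
    if base == "G":                 # G = general purpose = IMAFD
        base, exts = "I", "MAFD" + exts
    return {"xlen": int(xlen), "base": base,
            "extensions": [EXTENSIONS[e] for e in exts]}

assert parse_isa_string("RV32IMAC")["extensions"] == ["Multiply", "Atomic", "Compressed"]
assert parse_isa_string("RV64GC")["xlen"] == 64
```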

4. RISC-V Privileged Architecture

The RISC-V privileged architecture covers all aspects of RISC-V systems beyond the unprivileged ISA explained so far. It includes privileged instructions as well as additional functionality required for running operating systems and attaching external devices.

As per the RISC-V privileged specification, we can realize different kinds of systems from simple embedded controllers to complex cloud servers, as explained below.

Application Execution Environment – AEE: “Bare metal” hardware platforms where harts are directly implemented by physical processor threads and instructions have full access to the physical address space. The hardware platform defines an execution environment that begins at power-on reset.  Example: Simple and secure embedded microcontrollers

Supervisor Execution Environment – SEE: RISC-V operating systems that provide multiple user-level execution environments by multiplexing user-level harts onto available physical processor threads and by controlling access to memory via virtual memory.

Example: Systems like Desktops running Unix-like operating systems

Hypervisor Execution Environment – HEE: RISC-V hypervisors that provide multiple supervisor-level execution environments for guest operating systems.

Example: Cloud Servers running multiple guest operating systems

Figure 6: RISC-V Privileged Software Stack Ref: RISC-V Specification

Also, the RISC-V privileged specification defines various Control and Status Registers [CSR] to implement various features like interrupts, debugging, and memory management facilities for any system. You may want to refer to the specification to explore more.

As explained in this article, we can efficiently realize any system, from simple IoT devices to complex smartphones and cloud servers, using the common open RISC-V ISA. With monolithic semiconductor scaling failing, specialization is the only way to increase computational performance. The open RISC-V ISA is modular and supports custom instructions, making it ideal for creating a wide range of specialized processors and accelerators.

Just as chip verification saw great success with the emergence of the IEEE-standard Universal Verification Methodology, the open RISC-V ISA will emerge as an industry-standard ISA, inheriting the best features of various proprietary ISAs and leading us to the future of an open era of computing. Are you ready with RISC-V expertise for this amazing future?

Author: P R Sivakumar, Founder and CEO, Maven Silicon

LinkedIn Profile: https://www.linkedin.com/in/sivapr/

Also Read:

Experts Talk: RISC-V CEO Calista Redmond and Maven Silicon CEO Sivakumar P R on RISC-V Open Era of Computing

Verification IP vs Testbench

CEO Interview: Sivakumar P R of Maven Silicon


High-End Interconnect IP Forecast 2022 to 2026

by Eric Esteve on 12-04-2022 at 10:00 am


The interface IP market has grown at a 21% CAGR from 2017 to 2021. Here we review the part of this market restricted to the high end of PCIe, DDR, Ethernet, and D2D IP, made of PHY and controller IP targeting the most advanced technology nodes and the latest protocol releases. We will show that an IP vendor focusing investment on high-end interconnect IP can benefit from a very healthy ROI, based on a business growing at a 75% CAGR from 2022 to 2026.

To keep a leading market share in this very demanding IP business, such a vendor will have to show the best time to market (TTM) and demonstrate 100% perfect execution. Taking Alphawave as an example: if the company can perfectly execute this strategy, it can grow its IP business from $90 million in 2021 to $600 million or more in 2026.

We have used market information extracted from the “Interface IP Survey & Forecast” and selected the following interconnect protocols, focusing on the technology nodes showing the highest ROI: 7nm, 5nm, 3nm, and even lower:

  • PCIe 4 and above (PCIe 5, 6…)
  • CXL 1, CXL 2, CXL 3
  • UCIe
  • Ethernet based on 56G SerDes, 112G SerDes, 224G…
  • LPDDR5 memory controller
  • HBM3 memory controller

We can see that all these protocols support HPC, which became the leading segment (in size and growth) for TSMC in 2022.

Revenue and Growth Rate by Platform – TSMC 2022

Selected Protocol Forecast 2022-2026

The IP outsourcing rate and IP market size are growing year over year (by about 20% per year for the total interface IP market over the last five years) and, as we will see, even faster for high-end protocol-based IP.

For PCIe 5 and 6, mature technology nodes don’t make sense, so we split the PHY IP forecast between mainstream and advanced technology nodes, as the ASPs differ; the controller IP ASP is expected to be technology independent. We assume that a combo PCIe/CXL PHY will be proposed.

Advanced memory controller includes:

  • DDR5 and LPDDR5 memory controllers, according to the above technology split
  • GDDR6, GDDR7 targeting 7nm, 5nm and below
  • HBM3 targeting 7nm, 5nm and below

The picture below shows the number of VHS SerDes IP commercial design starts for the three data rates, 56Gbps, 112Gbps, and 224Gbps, over the next five years.

SerDes PHY IP Number of Sales Forecast 2021-2026

The semiconductor industry is undertaking a major strategy shift towards multi-die systems.

Multi-die systems are driving the need for standardized die-to-die interconnects. Several industry alliances have come together to define such standards, the most promising being Unified Chiplet Interconnect Express (UCIe).

D2D Design Start IP Forecast 2021-2026

Top4 High-End Interface IP by Revenue 2021-2026

The group made of the top 4 interface IP categories (PCIe, DDR, Ethernet, and D2D) is forecast to grow at a 27% CAGR from 2022 to 2026. If we consider the high end only, the global CAGR for the HE Top 4 interfaces will be 75% from 2021 to 2026:

The Top 4 Interface IP by Revenue 2021-2026

The weight of the HE Top 4 protocols IP was $370 million in 2021; the value forecast for 2026 is $2,115 million, a CAGR of 75%, compared with 27% for the overall IP sales for these Top 4 protocols.

Let’s look at Alphawave as an IP vendor case study. Created in 2017, the company has focused on high-end IP from the beginning, with PAM4 DSP-based 112G SerDes supporting the Ethernet and PCIe segments. The strategy was successful, as 2021 IP revenues were $89.9 million. In the meantime, Alphawave has acquired an Ethernet controller IP vendor and proposes a PCIe controller validated by the PCI-SIG, for a complete PCIe 5 solution.

Moreover, in June 2022 they acquired OpenFive. The first result was to enlarge Alphawave’s IP portfolio and address two more segments: high-end DDR memory controllers (HBM3 and LPDDR5) and D2D. The second was to bring design-services capability to create a potential emerging chiplet business.

That’s why Alphawave is a good candidate for a case study: what is the potential business development for an IP vendor focusing primarily on high-end interface IP (PCIe, DDR memory controller, Ethernet, and D2D)?

2022-2026 Market Share Evolution When Focusing on

High-End Interface IP

Starting with a 24% market share of high-end interface IP ($370 million in 2021), we evaluate the revenues generated by continuing to deploy this strategy, under three scenarios: flat market share, market share growing by 1% per year, and by 10% per year. The real case will likely fall between the last two scenarios, putting Alphawave’s revenues from high-end interface IP alone between $500 million and $800 million in 2026.

 

In 2020, we saw the emergence of Alphawave, building a strong position in the high-end interface IP segment (thanks to PAM4 DSP SerDes) and creating a “Stop-for-Top” strategy, in opposition to Synopsys’ “One-Stop-Shop”. Considering that this high-end segment, strongly driven by HPC (including datacenter, AI, storage, etc.), is expected to grow considerably over the 2020s, Alphawave could enjoy a major share of this $2 billion interface IP sub-segment by 2026, with revenue between $500 and $800 million being realistic.

** This white paper has been sponsored by Alphawave IP; nevertheless, the content reflects the author’s own positioning on the IP market and the way it is expected to evolve during the 2020s.

By Eric Esteve from IPnest

The white paper can be downloaded here:

https://www.awaveip.com/en/news-views/high-end-interconnect-ip-forecast-2022-to-2026/


Predicting EUV Stochastic Defect Density

by Fred Chen on 12-04-2022 at 6:00 am


Extreme ultraviolet (EUV) lithography targets patterning pitches below 50 nm, which is beyond the resolution of an immersion lithography system without multiple patterning. In the process of exposing smaller pitches, stochastic patterning effects, i.e., random local pattern errors from unwanted resist removal or lack of exposure, have been uncovered due to the smaller effective pixel size and the smaller number of photons absorbed per pixel. In this article, I present a way to visualize the defective pixel rate and how it may be tied to stochastic defect density.

Here, for the most straightforward analysis, we consider an idealized image: a 1:1 duty cycle line grating with binary amplitude. We also focus on the pitch range of 50 nm and below for a 0.33 NA EUV system. The normalized image can then be represented mathematically as 0.25 + (1/pi)^2 + (1/pi)*cos(2*pi*x/pitch). The absorbed dose profile in the resist is therefore proportional to this expression, multiplied by the average absorbed dose. Again, to keep things simple, we ignore the polarization- and angle-based 3D mask effects that are actually present, as well as electron blur, which would become much more significant for the 0.55 NA EUV systems [1].

This absorbed dose profile is plotted on a preset grid. I used a 99 x 101 pixel grid, where the pixel size is normalized to 1/100th of the pitch. Poisson statistics are used to obtain the random absorbed dose at each pixel. A pixel is considered defective if, in a nominally exposed region, its dose falls below the exposure threshold, producing an unexposed defect, or if, in a nominally unexposed region, its dose exceeds that threshold, producing a potential bridge defect. By changing the dose, improperly unexposed or exposed pixels can be visualized (Figure 1).

Figure 1. Stochastic defects at lower doses (left) tend to be unexposed pixels (blue in central orange area), while at higher doses (right) tend to be improperly exposed pixels (orange in top/bottom blue area).
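The per-pixel model described above can be sketched in Python. Rather than Monte Carlo sampling, this version evaluates the Poisson probability that a pixel lands on the wrong side of the threshold directly; the function names and the example photon counts are my own assumptions, not values from the article:

```python
import math

def normalized_image(x: float, pitch: float) -> float:
    """Idealized 1:1 binary-amplitude grating image (two-beam form)."""
    return 0.25 + (1 / math.pi) ** 2 + (1 / math.pi) * math.cos(2 * math.pi * x / pitch)

def poisson_cdf(k: int, lam: float) -> float:
    """P(N <= k) for N ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam ** n / math.factorial(n) for n in range(k + 1))

def p_defect(mean_photons: float, threshold: int, exposed: bool) -> float:
    """Probability a pixel is defective: an exposed pixel falling below
    the threshold, or an unexposed pixel reaching it."""
    p_below = poisson_cdf(threshold - 1, mean_photons)
    return p_below if exposed else 1.0 - p_below

# A bright-region pixel with a generous photon budget rarely fails...
assert p_defect(mean_photons=100.0, threshold=30, exposed=True) < 1e-6
# ...while starving the same pixel of photons makes failure likely.
assert p_defect(mean_photons=20.0, threshold=30, exposed=True) > 0.5
```

The image peaks where cos(2πx/pitch) = 1, so mean photon counts scale with `normalized_image` times the average absorbed dose, which is what the dose scan in the article varies.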

By scanning the dose, the defective pixel rate may be plotted as a function of absorbed dose. Unexposed pixels decrease with increasing dose, while beyond some dose, improperly exposed pixels leading to bridging start increasing (Figures 2,3). The smallest defective pixel rate that can be detected for this small grid is 1e-4. The defective pixel rate is not a direct measure of predicted defect density. Instead, we rely on a formula from de Bisschop [2] used for inspection image pixels: defects/cm2 = 1e14 pixNOK/(NPR), where pixNOK is the defective pixel rate, N is the average number of pixels per defect, P is the pitch, and R is the pixel size in nm. For the 50 nm pitch case, a 3e-10 defective pixel rate with 0.5 nm/pixel and 100 pixels/defect gives 12 defects/cm2. For the 40 nm pitch case, a 1e-9 defective pixel rate with 0.4 nm/pixel and 125 pixels/defect gives 50 defects/cm2. These values are comparable to recently published values [3].

Figure 2. Defective pixel rate (out of 99 x 101 0.5 nm pixels) for 25 nm half-pitch vs. absorbed dose.

Figure 3. Defective pixel rate (out of 99 x 101 0.4 nm pixels) for 20 nm half-pitch vs. absorbed dose. Optimum absorbed dose and minimum defective rate are higher for the reduced pitch.
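The de Bisschop conversion from defective pixel rate to defect density is easy to reproduce; this sketch plugs in the two worked cases from the text (the helper name is my own):

```python
def defect_density(pix_nok: float, n_pixels: int, pitch_nm: float, pixel_nm: float) -> float:
    """de Bisschop estimate: defects/cm^2 = 1e14 * pixNOK / (N * P * R),
    with pixNOK the defective pixel rate, N pixels per defect,
    P the pitch in nm, and R the pixel size in nm."""
    return 1e14 * pix_nok / (n_pixels * pitch_nm * pixel_nm)

# The two worked cases from the text:
assert round(defect_density(3e-10, 100, 50, 0.5)) == 12   # 50 nm pitch
assert round(defect_density(1e-9, 125, 40, 0.4)) == 50    # 40 nm pitch
```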

At the same average absorbed dose, the smaller pitch shows larger variations due to the smaller pixel size. It is therefore to be expected that larger doses are needed to maintain a given defective pixel rate. The relative dose variation is also larger for the smaller pitch (due to fewer photons within the grid area) within a given dose range, which also leads to a higher minimum defect rate.

The immensely greater photon density of ArF immersion systems has allowed them to avoid stochastic effects down to 80 nm pitch (Figure 4), even with relatively low absorbed doses in mJ/cm2.

Figure 4. Negligible stochastic effects show at 80 nm pitch for ArF immersion lithography, even with only 3 mJ/cm2 absorbed.

References

[1] T. Allenet et al., “EUV resist screening update: progress towards High-NA lithography,” Proc. SPIE 12055, 120550F (2022).

[2] P. de Bisschop, “Stochastic printing failures in extreme ultraviolet lithography,” J. Microlith/Nanolith. MEMS MOEMS 17, 041011 (2018).

[3] S. Kang et al., “Massive e-beam metrology and inspection for analysis of EUV stochastic defect,” Proc. SPIE 11611, 1161129 (2021).

This article originally appeared in LinkedIn Pulse: Predicting EUV Stochastic Defect Density

Also Read:

Electron Blur Impact in EUV Resist Films from Interface Reflection

Where Are EUV Doses Headed?

Application-Specific Lithography: 5nm Node Gate Patterning

Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

EUV’s Pupil Fill and Resist Limitations at 3nm


Podcast EP128: Secure-IC’s Vision For Cybersecurity

by Daniel Nenni on 12-02-2022 at 10:00 am

Dan is joined by Hassan Triqui, who has over 20 years of experience in the technology sector. Prior to spearheading Secure-IC’s development into a major player in embedded cybersecurity solutions, Hassan was a senior executive at Thales and Thomson.

Dan explores Secure-IC’s vision and strategy to deploy integrated cybersecurity capability across many products and markets. The Company’s chip to cloud vision is discussed as well as its recent acquisition of Silex Insight. The impact of the complete portfolio is examined.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Samsung Versus TSMC Update 2022

by Daniel Nenni on 12-02-2022 at 6:00 am


After attending the TSMC and Samsung foundry conferences, I wanted to share some quick opinions about the foundry business. Nothing earth-shattering, but interesting just the same. Both conferences were well attended; if we are not back to pre-pandemic numbers, we are very close.

TSMC and Samsung both acknowledged that there could be a correction in the first half of 2023 but over the next 5 years semiconductors and the foundry business will see very healthy growth rates. Very good news and I agree completely. The strength and criticality of semiconductors has never been more defined and the foundry ecosystem has never been stronger, absolutely.

At its recent Foundry Forum, Samsung forecast (citing Gartner) that by 2027 the semiconductor industry will approach $800B at a 9% compound annual growth rate (CAGR) and the foundry industry will experience a 12% CAGR. Samsung Foundry expects advanced nodes (<7nm) to outgrow the foundry industry at a 21% CAGR over the next five years and predicts its own business will grow to approximately $26B by 2027 at a 20% CAGR.

It will be interesting to see what TSMC guides for 2023 during the Q4 2022 earnings call. Any guesses? Double digits (10-20%) growth is my guess. N3 will be in full production and it will be the biggest node in the history of TSMC, my opinion.

According to the World Semiconductor Trade Statistics (WSTS) the semiconductor industry is now expected to grow 4% in 2022 and drop 4% in 2023. After a 26% gain in 2021 this should not be a surprise. TSMC still expects 35% growth in 2022 and based on their monthly numbers that sounds reasonable.

For Samsung, the FinFET era has come to an end, with the R&D focus now on GAA. Samsung had a good run at 14nm, even winning a piece of the Apple iPhone 6s business. And let’s not forget that GlobalFoundries licensed Samsung’s 14nm process, so that success belongs to Samsung as well as GF.

Unfortunately, Samsung 10nm was an utter failure in both yield and PPAC (performance, power, area, cost). TSMC 10nm did not fare well either, with the exception of Apple. The ROI between 14/16nm and 10nm was just not enough for most customers, and the promise of 7nm was worth the wait.

7nm did much better, and Samsung came back to the competitive table. Samsung 14nm was still a stronger node, but 8/7nm is doing very well. This can be seen in the current TSMC 7nm slump, as Samsung is a cheaper alternative. Unfortunately, Samsung 5/4nm had serious PDK and yield problems, so the lion’s share of the leading-edge FinFET market went back to TSMC and will stay there, in my opinion.

This leaves the door wide open for Intel Foundry Services to get back in the foundry game. IFS will be spending time with us at IEDM this coming week so we can talk more after that. If Intel executes on their process roadmap down to 18A this could get really interesting.

All three foundries are talking about GAA, and Samsung is even in very limited production at 3nm GAA, but personally I think the FinFET era will continue for a few more years while we get the kinks worked out of GAA. In talking to the ecosystem at the conferences, HVM GAA is still years away, and the PPAC (power, performance, area, cost) is still a big question. Based on the papers I have seen, we should get a pretty good GAA update next week at IEDM. Scott Jones and I will be there amongst the media masses.

One of the more interesting battles between Samsung and TSMC that became clear at the conferences is RF. I fully expect IFS to hit this market hard as well. Based on the talk inside the ecosystem, Samsung 8nm RF is a cheaper, non-EUV alternative to TSMC N6RF, and it seems to be experiencing a surge in popularity. TSMC N6RF, however, is set to fill the N7 fabs, so we should see a big push from TSMC in that direction. At the recent TSMC OIP, analog automation, optimization, and migration were popular topics (TSMC OIP – Enabling System Innovation, TSMC Expands the OIP Ecosystem!). But again, RF chips are very price sensitive, so if the design specs can be met at Samsung 8RF and the ecosystem is willing, that is where the chips will go, in my opinion.

Source: Samsung

Capacity plans were discussed in detail at both conferences. If you look at the TSMC, Samsung, and Intel fab plans, you will wonder how they will be filled. TSMC builds fabs based on customer demand, which now includes prepayments, so I have no worries there. Samsung and Intel, however, seem to be following the Field of Dreams strategy: “build it and they will come.” I have no worries there either. If all of the fab expansion and build plans I have seen announced actually happen, we will have oversupply in the next five years, which is a good thing for the ecosystem and customers. TSMC, Samsung, and IFS can certainly weather a pricing storm, but the second-, third-, and fourth-tier foundries may be in for rougher times.

Just my opinion of course but since I actively work inside the semiconductor ecosystem I am more than just a pretty face.

Also Read:

TSMC OIP – Enabling System Innovation

TSMC Expands the OIP Ecosystem!

A Memorable Samsung Event

Intel Foundry Services Forms Alliance to Enable National Security, Government Applications


Hyundai’s Hybrid Radio a First

by Roger C. Lanctot on 12-01-2022 at 10:00 am


Current owners of a wide range of Hyundai connected cars, spanning multiple model years, are receiving or have received notification of their eligibility for a software upgrade that will connect their in-vehicle radio to the Internet. Owners who have this update installed may not realize they are having a first-of-its-kind experience: a connected, or hybrid, radio experience in a mass-market vehicle.

Notably, the experience is enabled in the Hyundai Ioniq 5, an electric vehicle. I write “notably” because we just learned this week that Ford Motor Company’s F-150 Lightning comes with no AM radio. The Hyundai Ioniq 5 proudly preserves AM radio, along with Internet access that allows the system to display the station ID and logo, streaming promos for the station and its website, music track and artist, and streaming elements of broadcast advertising.

All of this extra information related to the broadcast is referred to as “metadata” and broadcasters have struggled to deliver the information in a consistent manner – while automakers have struggled to render the information consistently. Working with Xperi, Hyundai is delivering all of it – now – in millions of dashboard systems.

The integration of this metadata – what Xperi calls DTS AutoStage – is only the first step in the transformation of broadcast radio. The availability of the Internet connection and the data ultimately means that future software updates could add content search capability to the radio experience or even alerts and links to further information, Internet content, or e-commerce opportunities.

It was just three years ago that NextRadio gave up on trying to deliver a hybrid radio experience via activated FM chips in Android phones with cellular service from Sprint – now part of T-Mobile. It was a valiant effort and a clever solution leveraging HD Radio technology to create a searchable broadcast solution. The complexity of the NextRadio solution and Apple’s refusal to activate the FM chips in its own phones doomed this ambitious effort.

Audi was next, with its own hybrid radio concept: a system developed entirely in-house and originally deployed only in the most expensive Audi, the A8, and only in Europe. Parent Volkswagen has since indicated plans to bring its hybrid radio platform to all Volkswagen brands, and that rollout is steadily proceeding.

The focal point of the Audi hybrid solution – connecting the radio to the Internet – was the ability of the in-car radio to grab a radio station’s Internet stream more or less seamlessly when the terrestrial signal was lost due to a car driving out of range. The idea was as clever as the NextRadio solution, but it was not scalable beyond Volkswagen vehicles and it, too, was and is a bit of a Rube Goldberg proposition with inconsistent execution across platforms.
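The handoff behavior described above can be sketched as a small decision rule: fall back to the Internet stream when the terrestrial signal weakens, and return when it recovers. This is a minimal, hypothetical Python sketch; the thresholds and hysteresis are assumptions, not Audi's actual implementation.

```python
# Hypothetical sketch of hybrid-radio source selection. Thresholds are
# illustrative; a real tuner would use richer signal-quality metrics.

FALLBACK_THRESHOLD = 0.2   # assumed normalized level below which we stream
RECOVER_THRESHOLD = 0.5    # assumed level (with hysteresis) to rejoin broadcast

def select_source(signal_strength: float, current_source: str) -> str:
    """Return 'broadcast' or 'stream' given the tuner's signal estimate."""
    if current_source == "broadcast" and signal_strength < FALLBACK_THRESHOLD:
        return "stream"       # signal lost: grab the station's Internet stream
    if current_source == "stream" and signal_strength > RECOVER_THRESHOLD:
        return "broadcast"    # signal recovered: prefer the terrestrial feed
    return current_source     # in the hysteresis band, stay put to avoid flapping
```

The gap between the two thresholds prevents rapid flip-flopping at the edge of coverage, one reason seamless handoff is harder than it sounds.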

Mercedes-Benz arrived on the hybrid radio scene last year with its own hybrid radio enabled by Xperi’s DTS AutoStage technology. The Mercedes offering added unique HMI elements, such as a carousel of radio station logos that made manually searching for a station in a car a truly unique experience and potentially less distracting than turning a knob – though maybe more distracting than pressing a button.

To be clear, Mercedes was first to deploy the DTS AutoStage solution. Hyundai is the first to bring DTS AutoStage to the masses. In fact, Hyundai is simultaneously bringing DTS AutoStage to multiple Kia and Genesis models as well. Tesla owners may also soon discover their vehicles infused with DTS AutoStage.

The onset of electric vehicles has thrust the in-car experience of radio into the spotlight. Lucid Motors announced this week that its full line-up of vehicles will be equipped with SiriusXM satellite radio technology. The announcement cut through the growing suspicion that emerging EV makers – like Tesla – were somewhat ambivalent about including SiriusXM reception in all their vehicles.

TuneIn has lately been pointing the way to an exclusively Internet-based in-vehicle radio experience. TuneIn recently added Rivian to its existing roster of automotive partners which already includes Tesla, Mercedes, Polestar, and Jaguar Land Rover. TuneIn is also available via Amazon’s Alexa digital assistant wherever it is available in an embedded system.

Mercedes, Hyundai, and Tesla are all pointing the way toward a digital, searchable, delightful embedded radio experience that includes FM AND AM. Hyundai is first to bring the DTS AutoStage hybrid radio technology to the masses, but it won’t be the last.

Also Read:

Mobility is Dead; Long Live Mobility

Configurable Processors. The Why and How

Requiem for a Self-Driving Prophet


INNOVA PDM, a New Era for Planning and Tracking Chip Design Resources is Born

INNOVA PDM, a New Era for Planning and Tracking Chip Design Resources is Born
by Daniel Nenni on 12-01-2022 at 6:00 am

Innova PDM

There is no doubt that the design success of today’s systems on chip (SoCs) is directly linked to successful cost control. Less expensive SoCs and electronic systems open up more market opportunities.

Both design cost prediction and resource tracking during the design process are key to that success.

Predicting design cost needs to cover all aspects: design (EDA) tools, computing servers, human resources, external IP cores, etc. All these aspects need to be tracked automatically, reporting any problem, such as a resource that is no longer available, and measuring the impact on subsequent design steps. Otherwise, the financial impact can be significant, especially against tight tape-out schedules.

INNOVA Advanced Technologies, through its PDM (Project & Design Management) tool, offers the first software solution on the market that considers all of the above aspects simultaneously and automatically.

INNOVA Advanced Technologies was founded in 2020 by seasoned professionals from the semiconductor industry. Its solution is intended for designers as well as design managers of complex, multi-domain projects, ranging from microelectronics to computer science, helping them manage projects and resources in one place.

The INNOVA Project and Design Management (PDM) software platform offers a single portal that links areas that were, until now, managed separately. This includes all resource management for a complex design project: design flows and tools, computing servers, and human resources.

Fully compatible with the design and IT systems already in place, this disruptive yet non-intrusive solution serves as a single portal, reducing the complexity of using design tools and dedicated design environments. Thanks to its rich reports, alarms, and dashboards, optimal decisions can be made about design resource planning, monitoring, and adjustment throughout the complete design life cycle.

Each design step traditionally requires dozens of software tools, and several hundred design engineers are often involved throughout a project such as a communication chip or a microprocessor.

There are also significant intangible resources involved: predesigned blocks, various software and design flows, computing resources (server farms, etc.), and libraries from the companies manufacturing electronic components.

As an open and secure platform, PDM correlates design projects directly with the design resources involved. The tool is fully customizable, and both graphical and script-based APIs are open to users.

Thanks to the INNOVA PDM platform, it is possible to consult information related to current projects: progress, human resource utilization, anticipation of possible delays, the effects those delays may have on the rest of the design chain, and more. This multidimensional tracking of EDA tool licenses and servers, whether local or cloud-based, happens in real time.

Capitalizing on past experience is made possible through consultation and deep reporting of past projects. PDM’s ML-based prediction and tracking answer the fundamental questions of how many design resources are needed to start a project, how to track design task execution in real time, and how to report any problem. In addition to easy tracking, PDM provides scheduling capabilities to automatically manage design tasks and jobs based on resource availability.

Compared to traditional, ad-hoc internal solutions, INNOVA claims up to a 30% cost reduction with PDM in place within a corporation.

INNOVA has planned a webinar in which its experts will present typical cases of how to reduce the cost of EDA licenses and computing servers, and how to plan the most optimal and cost-effective package of tool licenses for a design project. You can register for the webinar here: Reduce design cost by better managing EDA tool licenses and servers

For more information about INNOVA Advanced Technologies you can visit their website here: https://www.innova-advancedtech.com/

Also Read:

IDEAS Online Technical Conference Features Intel, Qualcomm, Nvidia, IBM, Samsung, and More Discussing Chip Design Experiences

TSMC OIP – Enabling System Innovation

2023: Welcome to the Danger Zone


Podcast EP127: MITRE Engenuity – Reshaping the Future of Semiconductors and Innovation

Podcast EP127: MITRE Engenuity – Reshaping the Future of Semiconductors and Innovation
by Daniel Nenni on 11-30-2022 at 10:00 am

Dan is joined by Dr. Raj Jammy of MITRE Engenuity. As Chief Technologist, Raj is responsible for incubating and accelerating technologies in partnership with the private sector, and for developing strategic frameworks that promote technologies for the public good. A seasoned semiconductor/electronics industry executive, Dr. Jammy brings 25 years of experience to his role at MITRE Engenuity.

Raj explains the role and vision of MITRE Engenuity. As a hub for transformative innovation, the organization partners with research and technology organizations in the US and their partners around the world. A part of MITRE Corporation, MITRE Engenuity focuses on bringing the entire ecosystem together to build world-changing innovation in America. Dan explores the strategies and goals of the organization with Raj, including an assessment of how the CHIPS Act fits into the overall strategy.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


WEBINAR: FPGAs for Real-Time Machine Learning Inference

WEBINAR: FPGAs for Real-Time Machine Learning Inference
by Don Dingee on 11-30-2022 at 6:00 am

A server plus an FPGA-based accelerator for real-time machine learning inference reduces costs and energy consumption by up to 90 percent

With AI applications proliferating, many designers are looking for ways to reduce server footprints in data centers – and turning to FPGA-based accelerator cards for the job. In a 20-minute session, Salvador Alvarez, Sr. Manager of Product Planning at Achronix, provides insight on the potential of FPGAs for real-time machine learning inference, illustrating how an automatic speech recognition (ASR) application might work with acceleration.

High-level requirements for ASR

Speech recognition is a computationally intensive task and an excellent fit for machine learning (ML). Language differences aside, speakers have different inflections and accents and vary in their use of vocabulary and grammar. Still, sophisticated ML models can produce accurate speech-to-text results using cloud-based resources. Popular models include connectionist temporal classification, listen-attend-spell, and recurrent neural network transducer.

A deterministic, low-latency response is essential. Transit time from an edge device to the cloud and back is low enough on fast 5G or fiber networks to make speech processing the dominant term in response time. Interactive systems add natural language processing and text-to-speech features. Users expect a normal conversation flow and will accept short delays.

Accuracy is also a must, with a low word error rate. Correct speech interpretation depends on what words are present in the conversational vocabulary. Research continues into ASR improvements, and flexibility to adopt new algorithms with a better response in speed or accuracy is a must-have for an ASR system.

While cloud-based resources offer the potential for more processing power than most edge devices, they are not infinitely scalable without tradeoffs. Capital expenditure (CapEx) costs and energy consumption can be substantial in scaled-up, high-throughput configurations that simultaneously take speech input from many users.

FPGA-based acceleration meets the challenge

Multiply-accumulate workloads with high parallelization, typical of most ML algorithms, don’t fit CPUs well, requiring some acceleration to hit performance, cost, and power consumption goals. Three primary ML acceleration vehicles exist: GPUs, ASICs, and FPGAs. GPUs offer flexibility but tend to drive power consumption through the roof with efficiency challenges. ASICs offer tuned performance for specific workloads but can limit flexibility as new models come into play.
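The multiply-accumulate pattern at the heart of these workloads is easy to illustrate. The plain-Python sketch below shows the arithmetic that FPGA logic blocks parallelize in hardware; it is a sequential illustration of the pattern, not FPGA code.

```python
# Sketch of the multiply-accumulate (MAC) pattern that dominates ML
# inference: each output neuron is a dot product of inputs and weights.
# On an FPGA these MACs run in parallel across dedicated logic blocks;
# this sequential Python version only shows the arithmetic involved.

def dense_layer(inputs, weights, biases):
    """Compute one fully connected layer as rows of multiply-accumulates."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w          # one multiply-accumulate operation
        outputs.append(acc)
    return outputs
```

Every one of these inner-loop MACs is independent of the others, which is exactly why spatial architectures like FPGAs can execute thousands of them per clock cycle while a CPU works through them a few at a time.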

FPGA-based acceleration checks all the boxes. By consolidating acceleration in one server with high-performance FPGA accelerator cards, server counts drop drastically while determinism and latency improve. Flexibility for algorithm changes is excellent, requiring only a new FPGA bitstream for new model implementations. Eliminating servers reduces up-front CapEx, helps with space and power consumption, and simplifies maintenance and OpEx.

High-performance FPGAs like the Achronix Speedster7t family have four features suited for real-time ML inference. Logic blocks provide multiply-accumulate resources. High bandwidth memory keeps data and weighting coefficients flowing, and high-speed interfaces provide the connection to the host server platform. FPGA logic also supports various computational precision needs, yielding ML inference accuracy and lowering ML training requirements.
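The point about flexible computational precision can be illustrated with a simple quantization sketch: weights trained in floating point are mapped to 8-bit integers, trading a little accuracy for much cheaper MAC hardware. This is a hypothetical illustration of the general technique, not Speedster7t-specific code.

```python
# Illustration of reduced-precision inference: quantize float values to
# int8 with a symmetric per-tensor scale. FPGAs can size their arithmetic
# to the chosen precision; this sketch only shows the numeric mapping.

def quantize_int8(values):
    """Map floats to int8-range integers with a per-tensor scale factor."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard all-zero input
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the quantized integers."""
    return [q * scale for q in quantized]
```

Narrower operands mean smaller multipliers and less memory bandwidth per weight, which is how precision flexibility translates into inference throughput on programmable logic.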

Overlays help non-FPGA designers

Some ML developers may be less familiar with FPGA design tactics. “An overlay can optimally configure the hardware on an FPGA to create a highly-efficient engine, yet leave it software programmable,” says Alvarez. He expands on how accelerator IP from Myrtle.ai can be configured into the FPGA, abstracting the user interface, upping the clock rate, and utilizing hardware better.

Alvarez wraps up this webinar on FPGAs for real-time machine learning with a case study describing how an accelerated ASR appliance might work. With the proper ML training, simultaneously transcribing thousands of voice streams with dynamic language allocation becomes possible. According to Achronix:

  • One server with a 250W PCIe Speedster7t-based accelerator card can replace 20 servers without acceleration
  • Each accelerated server delivers as many as 4000 streaming speech channels
  • Costs and energy consumption both drop by up to 90% by using an accelerated server
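
The consolidation claim above can be sanity-checked with simple arithmetic. The per-server power figure below is an assumption for illustration; only the 20:1 replacement ratio and the 250W card come from Achronix.

```python
# Back-of-the-envelope check of the consolidation claim: one accelerated
# server replacing 20 unaccelerated ones. SERVER_POWER_W is an assumed
# figure, not from the webinar.

SERVER_POWER_W = 500          # assumed draw of one unaccelerated server
ACCEL_CARD_POWER_W = 250      # Speedster7t-based PCIe card (from the webinar)

baseline_w = 20 * SERVER_POWER_W                     # 20 servers, no acceleration
accelerated_w = SERVER_POWER_W + ACCEL_CARD_POWER_W  # 1 server + 1 card

savings = 1 - accelerated_w / baseline_w
print(f"Energy savings: {savings:.0%}")   # in line with the "up to 90%" claim
```

Under these assumed numbers the energy reduction lands above 90%, consistent with the figure Achronix quotes.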

Although the example in this webinar is specific to ASR, the principles apply to other machine learning applications where FPGA hardware and IP accelerate inference models. When time-to-market and flexibility matter and high performance is required, FPGAs for real-time machine learning inference are a great fit. Follow the link below to see the entire webinar, including the enlightening case study discussion.

Achronix Webinar: Unlocking the Full Potential of FPGAs for Real-Time Machine Learning Inference

Also Read:

WEBINAR The Rise of the SmartNIC

A clear VectorPath when AI inference models are uncertain

Time is of the Essence for High-Frequency Traders


IDEAS Online Technical Conference Features Intel, Qualcomm, Nvidia, IBM, Samsung, and More Discussing Chip Design Experiences

IDEAS Online Technical Conference Features Intel, Qualcomm, Nvidia, IBM, Samsung, and More Discussing Chip Design Experiences
by Daniel Nenni on 11-29-2022 at 10:00 am

IDEAS 2022 Just Topics Icon

Ansys is hosting IDEAS Digital Forum 2022, a no-cost virtual event that brings together industry executives and technical design experts to discuss the latest in EDA for Semiconductors, Electronics, and Photonics.

See the full online conference agenda and list of speakers at www.ansys.com/IDEAS. The free registration will allow you to attend the event on December 6th or on-demand any time after that.

IDEAS will start with keynote addresses from Raja Koduri of Intel and Pankaj Kukkal of Qualcomm, plus insights into the metaverse from DP Prakash of start-up Youtopian.

Keynote Speakers and Panelists at IDEAS on December 6th, 2022

You can also attend the IDEAS panel discussion in the afternoon on the topic of “Thermal Management: How to Keep Your Cool When Chips Get Hot.” The moderated panel discussion will include Jean-Philippe Fricker from Cerebras, Roopashree HM from Texas Instruments, and Bill Mullen, senior director of R&D at Ansys.

Following the keynotes, there are 8 technical tracks on topics covering Thermal Integrity, Power Integrity, Timing Closure, Electromagnetics, Machine Learning, Hardware Security, and Photonics. Over 20 companies are participating in IDEAS to present case studies of their production designs, including:

Intel, Qualcomm, Nvidia, Samsung, MediaTek, IBM, GUC, HP Enterprise, and NXP

Select authors will be available for Q&A chat with the event attendees after their presentations – don’t miss this opportunity to interact with industry experts.

To see the full agenda, Register now for IDEAS and add this premier event to your calendar.

For more information, contact Marc Swinnen

Ansys is at the forefront of electronic design enablement in partnership with the world’s leading companies for 2.5D/3D-IC, AI and machine learning, high-performance computing, 5G, telecommunications, aerospace and autonomous vehicles.

Join us for the IDEAS Digital Forum — a place to catch up on industry best practices and the latest advances in semiconductor, electronic, and photonic design. IDEAS will explore future trends with keynotes from industry leaders and offer technical insights from expert chip designers at many of the world’s largest electronic and semiconductor companies. IDEAS will give you a close-up view of some of the most advanced electronic design projects at the world’s leading companies.

Meet your industry peers and fellow designers from around the world at this premier virtual event for networking, sharing and learning the latest in multiphysics technology for electronic, photonic, and semiconductor design.

This free event will be hosted online.

About Ansys

When visionary companies need to know how their world-changing ideas will perform, they close the gap between design and reality with Ansys simulation. For more than 50 years, Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. From sustainable transportation to advanced semiconductors, from satellite systems to life-saving medical devices, the next great leaps in human advancement will be powered by Ansys.

Take a leap of certainty … with Ansys.

Also Read:

Whatever Happened to the Big 5G Airport Controversy? Plus A Look To The Future

Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC

What Quantum Means for Electronic Design Automation