
CEO Interview: Anders Storm of Sivers Semiconductors

by Daniel Nenni on 08-16-2024 at 6:00 am


Anders Storm is the CEO of Sivers Semiconductors. Under his almost decade-long leadership, the company has experienced significant growth, solidifying its position as a key player in the global semiconductor industry. With expertise in Wi-Fi communications, 5G, and photonics, he drives the company’s corporate strategy, product innovation, organisational excellence, and shareholder engagement.

Tell us about your company
Sivers is at the forefront of SATCOM, 5G, 6G, Photonics, and Silicon Photonics, pushing the boundaries of global communications and sensor technology. We have two business units – Photonics and Wireless – which provide advanced chips and modules essential for best-in-class gigabit wireless and optical networks. We serve a wide range of industries, from data and telecommunications to aerospace and defense, meeting the growing need for faster computation and better AI performance. By switching from electrical to optical connections, we’re also helping to create a more sustainable future.

What problems are you solving?
We address several critical issues across various industries. Sivers enhances global connectivity and communication by advancing SATCOM, 5G, and 6G technologies, ensuring faster and more reliable networks. Our chips and modules for gigabit wireless and optical networks tackle the need for high-speed data transmission, crucial for applications such as video streaming and cloud computing. Additionally, Sivers supports the growing demand for high-performance computing and AI applications, enabling faster data processing and efficient machine learning models. Serving industries from telecommunications to aerospace, Sivers facilitates innovation and improvement in products and services, while helping companies future-proof their infrastructure and scale with technological advancements.

What application areas are your strongest?
Our newest Radio Frequency Module is a high-performance, high-power, wide-bandwidth component designed for gigabit communication in applications such as Fixed Wireless Access. It enables internet and data connections without the need for physical cables, making it ideal for providing wireless broadband to homes and businesses.

What keeps your customers up at night?
We find that our customers are most anxious about keeping up with the rapid pace of technological advancements in communications and sensor technologies, fearing that their current infrastructure might become obsolete and not sustainable long term. Similarly, the need to maintain a competitive edge in their respective industries by continuously innovating and integrating the latest technologies is a source of stress for our customers. In addition, the growing demand for higher data transmission speeds and computational power to support emerging applications like AI and machine learning may also weigh heavily on their minds.

What does the competitive landscape look like and how do you differentiate?
The growing demand for high-speed, reliable, and sustainable communication solutions across various industries puts pressure on companies to continuously innovate. Strategic partnerships and collaborations with tech companies, research institutions, and industry organizations are essential for staying competitive and expanding market reach in this rapidly evolving sector. Of course, the integration of AI into communication networks is increasingly crucial, as competitors use AI to optimize performance and enhance analytics.

What new features/technology are you working on?
Under a new contract with Blu Wireless, we are designing and developing advanced 5G long-range antenna modules that operate within the 57-71 GHz license-exempt band, providing high-speed broadband communication links for track-to-train applications. This is a new and exciting area for us as we seek to transform the way passengers and operators experience connectivity on the move. Another area is optical interconnect for AI clusters, connecting GPUs at 16 terabits per second using photons rather than electrons and reducing power consumption by up to 90 percent.

How do customers normally engage with your company?
For customers requiring tailored solutions, Sivers engages in custom development contracts. These agreements outline the specific requirements and specifications for the custom product or solution, including performance metrics, timelines, and milestones. Such contracts often involve close collaboration between Sivers’ engineering teams and the customer’s technical staff. After that, we deliver the chips or modules in volume, entering into long-term supply agreements to ensure a steady and reliable source of integrated chips, modules, and other critical components. We are now transitioning into this phase, in which product sales will grow from 39% of revenue in the second quarter of 2024 to over 80% in 2026. This is where we will really leverage our custom development contracts into the next phase of high growth.

Also Read:

CEO Interview: Zeev Collin of Semitech Semiconductor

CEO Interview: Yogish Kode of Glide Systems

CEO Interview: Pim Donkers of ARMA Instruments


Emerging Memories Overview

by Daniel Nenni on 08-15-2024 at 10:00 am

ReRAM History 2024

This year’s Future of Memory and Storage Conference (formerly the Flash Memory Summit) was again very well attended. The Santa Clara Convention Center is definitely the place to be for a Silicon Valley conference.

This post is about the Emerging Memories session organized by Dave Eggleston. We will be covering other sessions, but this was my #1. Having been in the embedded memory space for a good part of my career, I know there is a serious Moore’s Law scaling problem, and it will be interesting to see which new technology comes out on top.

Here is the session abstract:

In this session, we discuss emerging memories. Ultra-High Speed Photonic NAND FLASH technology revolutionizes memory operations by achieving ultra-high speeds with lower voltages and power consumption. This technology combines vertical NAND FLASH transistors with lasers/LEDs and photon sensors for efficient READ operations. ReRAM is now mainstream in applications such as automotive and edge AI due to its low power, scalability, and resilience to environmental conditions. We will explore the technology enhancements needed for wider adoption and the latest developments in advanced processes. ULTRARAM boasts exceptional properties like energy efficiency and extreme temperature tolerance, making it ideal for space and high-performance computing applications. We will highlight progress in fabrication processes and potential applications. Finally, we will discuss life beyond flash and the future of memory technologies like MRAM, ReRAM, PCM, and FRAM. Analysts will explore the impact on computer architectures, AI, and the memory market in the next 20 years, emphasizing the inevitability of transitioning to emerging memory types.

For me, ReRAM is a top contender. Amir Regev, VP of Quality and Reliability at Weebit Nano, presented “ReRAM: Emerging Memory Goes Mainstream,” which was very interesting. Here is the abstract. Weebit also has a lot of information and instructional videos on its website, which is quite good.

ReRAM today is being integrated as an embedded non-volatile memory (NVM) in a growing range of processes from 130nm down to 22nm and below for a range of applications: automotive, edge AI, MCUs, PMICs and others. It is low-power, low-cost, byte-addressable, scales to advanced nodes, and is highly resilient to a range of environmental conditions including extreme temperatures, ionizing radiation and electromagnetic fields. In this session, Weebit will discuss what technology enhancements are needed to proliferate ReRAM even further into applications with extended requirements. We will discuss the latest technical and commercial developments including data in advanced processes.

For those of you who don’t know, Resistive Random Access Memory (RRAM or ReRAM) is a type of non-volatile memory that stores data by changing the resistance of a material, unlike traditional memory technologies such as DRAM or flash, which store data as electrical charge. Amir reminded us that ReRAM is not new, and I do remember past RRAM discussions at some of the top semiconductor companies and certainly at TSMC.
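The idea of storing a bit as a resistance state rather than a charge can be sketched in a few lines of Python. The resistance values and read threshold below are illustrative assumptions, not real device parameters:

```python
# Minimal sketch of a resistive memory cell. Values are illustrative only.
LOW_RES = 1e3         # ohms: low-resistance ("set") state, logic 1 (assumed)
HIGH_RES = 1e6        # ohms: high-resistance ("reset") state, logic 0 (assumed)
READ_THRESHOLD = 1e4  # ohms: boundary the read circuit compares against

class ReRamCell:
    def __init__(self):
        self.resistance = HIGH_RES  # assume cells start in the reset state

    def write(self, bit: int) -> None:
        # In a real device a voltage pulse forms or ruptures a conductive
        # filament; here we simply set the resistance.
        self.resistance = LOW_RES if bit else HIGH_RES

    def read(self) -> int:
        # Reading compares resistance to a threshold. Because no charge is
        # stored, the state persists without power (non-volatile).
        return 1 if self.resistance < READ_THRESHOLD else 0

cell = ReRamCell()
cell.write(1)
print(cell.read())  # → 1
```

The key contrast with DRAM or flash is in the read: nothing leaks away, so no refresh is needed and the bit survives power loss.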

Jim Handy also presented. Jim is one of the most respected embedded memory analysts and a blogger like myself. You can find him at https://thememoryguy.com/.

Here is Jim’s presentation abstract:

Flash memory has scaled beyond what was thought possible 20 years ago. Can this continue, or will an emerging memory technology like MRAM, ReRAM, PCM, or FRAM move in to replace it? Are there other memory technologies threatened with similar fates? What will the memory market look like in another 20 years? This talk will explain emerging memory technologies, the applications that have already adopted them in the marketplace, their impact on computer architectures and AI, the outlook for important near-term changes, and how economics dictate success or failure. Noted Analyst Jim Handy, and IEEE President Tom Coughlin will present the findings of their latest report as they discuss where emerging memories complement CXL, Chiplets, Processing In Memory, Endpoint AI, and wearables, and they explain the inevitability of a conversion from established technologies to new memory types.

I invited Jim to be a guest on our Semiconductor Insiders podcast to discuss this in more detail, so stay tuned.

Bottom line: If you are currently researching embedded memory my advice is to look to the foundries. Foundry silicon reports do not lie.

Also Read:

Weebit Nano at the 2024 Design Automation Conference

Weebit Nano Brings ReRAM Benefits to the Automotive Market

ReRAM Integration in BCD Process Revolutionizes Power Management Semiconductor Design


The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

by Kalar Rajendiran on 08-15-2024 at 6:00 am

Comparative Analysis of Chiplet Interconnect Standards (Physical Layer)

The semiconductor industry is experiencing a significant transformation with the advent of chiplet design, a modular approach that breaks down complex chips into smaller, functional blocks called chiplets. A chiplet-based design approach offers numerous advantages, such as improved performance, reduced development costs, and faster time-to-market. This approach improves yield by isolating defects to individual modules, optimizes transistor costs by allowing different manufacturing nodes for different components, and leverages advanced packaging technologies for enhanced performance. The modularity of chiplets supports scalable, customizable designs that accelerate time-to-market and enable targeted optimization for performance, power, and cost.

However, one of the most substantial barriers to widespread adoption has been the lack of standardization in how these chiplets communicate with each other. The Universal Chiplet Interconnect Express (UCIe) standard is poised to change that, making chiplet design more accessible and opening up new opportunities for innovation across the industry. Mayank Bhatnagar, a Product Marketing Director at Cadence, gave a talk on this subject at the FMS 2024 Conference in early August.

Standardization and Interoperability

Before standardized chiplet interfaces, custom designs for each chiplet were needed, leading to higher costs, longer development times, and limited interoperability. Companies had to develop proprietary interfaces for their chiplets, making it difficult to integrate components from different suppliers. This lack of interoperability increased development costs and limited the pool of available chiplets.

The adoption of standards simplifies this process, allowing designers to focus on core innovations while using pre-validated interfaces for communication. This reduces custom design efforts, accelerates development, and ensures seamless integration. Companies can now leverage proven chiplets, cutting costs and improving quality. Overall, standardization streamlines design, reduces resource use, and speeds up time-to-market. In recent years, a number of chiplet-to-chiplet interface standards have been developed.

A comparative analysis of these various standards indicates that, in terms of bandwidth efficiency, energy usage efficiency and latency, UCIe excels.

The Role of UCIe in Chiplet Design

UCIe, or Universal Chiplet Interconnect Express, is an open industry standard that defines a high-bandwidth, low-latency interconnect protocol for connecting chiplets. UCIe provides a common interface for chiplets to communicate, much like how USB standardized peripheral connections in the PC industry.

With UCIe, companies can mix and match chiplets from various vendors, fostering a more competitive market and driving innovation. It lets designers focus on highly customized cores while leveraging standardized interfaces for the periphery. By surrounding highly customized cores with standard periphery, designers can maximize their market reach and efficiency.

Enabling Specialized and Customized Solutions

One of the most exciting possibilities enabled by UCIe is the potential for highly specialized and customized solutions. In the past, companies had to rely on expensive monolithic SoCs or resort to general-purpose SoCs that might not be perfectly suited for their specific application. With chiplets and UCIe, companies can build custom systems tailored to their exact needs, selecting the best components from a variety of suppliers. For example, a company developing an AI accelerator could choose a high-performance CPU chiplet from one vendor, a specialized neural processing unit (NPU) from another, and memory from a third. UCIe ensures that these components can communicate effectively, allowing the company to create a highly optimized solution without the need for an expensive monolithic custom SoC.

Custom Silicon for AI Applications

The demand for custom silicon is rapidly increasing, driven by the need to optimize hardware for specific AI applications such as training, inferencing, data mining, and graph analytics. AI training requires high-performance, parallel processing capabilities to manage large datasets and complex models, while AI inferencing demands low-latency, high-throughput processing for real-time predictions and decisions. Data mining benefits from custom silicon tailored for specific data processing and extraction tasks, and graph analytics requires chips designed to handle the complexity of graph processing and large-scale parallelism. A chiplet-based approach leveraging UCIe offers significant advantages for these applications in terms of performance, power efficiency, and scalability.

Fostering Innovation and Collaboration

As an open industry standard, UCIe not only reduces barriers to entry but also encourages collaboration and innovation within the semiconductor industry. By establishing a common platform for chiplet communication, UCIe enables companies to focus on their core competencies, whether that’s developing cutting-edge processors, advanced memory technologies, or specialized accelerators. This collaborative environment can lead to the development of new, innovative products that might not have been possible within the constraints of traditional SoC design. As more companies adopt UCIe and contribute to the ecosystem, the variety and quality of available chiplets will continue to grow, further driving innovation.

Summary

UCIe represents a significant step forward in the evolution of chiplet design, lowering the barriers to entry for companies of all sizes. By standardizing the communication between chiplets, UCIe makes it easier for companies to develop custom, high-performance systems without the need for costly and complex SoC designs. As a result, UCIe is expected to democratize the semiconductor industry, fostering greater innovation and competition while enabling a new wave of specialized and customized solutions. The future of chip design is modular, and with UCIe, that future is more accessible than ever. The growing demand for custom silicon for AI applications will drive further advancements and opportunities around UCIe technology.

For more details, visit Cadence’s UCIe product page.

Also Read:

The Future of Logic Equivalence Checking

Theorem Proving for Multipliers. Innovation in Verification

Empowering AI, Hyperscale and Data Center Connectivity with PAM4 SerDes Technology


CEO Interview: Zeev Collin of Semitech Semiconductor

by Daniel Nenni on 08-14-2024 at 10:00 am


Zeev Collin is a seasoned technology executive and serial entrepreneur with over 25 years of experience in executive management across international semiconductor companies and startups. Prior to co-founding Semitech, Zeev co-founded and successfully exited two ventures focused on vehicle and trailer tracking devices. Earlier in his career, he played a key role in developing seminal soft modem technology, which was acquired by Conexant Systems, where he subsequently held VP positions in product development and business management. Today, Zeev continues to leverage his expertise as a board member and advisor for various startups. He holds a BSc in Computer Engineering and an MSc in Computer Science from the Technion – Israel Institute of Technology.

Tell us about your company?

Semitech Semiconductor is a dynamic fabless semiconductor company specializing in the development of cutting-edge communication technology. Our flagship products provide reliable, cost-effective communication solutions for a wide range of machine-to-machine applications (Internet of Things) in industrial and automotive environments. We focus on narrowband powerline communication (PLC) and wireless mesh technologies, enabling existing infrastructures to become “smart” with seamless communication without the need for additional wiring.

We are committed to a successful long-tail business model around niche applications by addressing a wide range of IoT communication needs with a versatile, multi-modal solution for both power lines and wireless mesh networks. Our motto is: “Connect Everything, Everywhere!”

What problems are you solving?

Our multi-modal devices offer the most adaptable communication solutions, effectively addressing the diverse needs of the Industrial IoT market for “monitor and control” applications, all while avoiding the cost and complexity of additional wiring.

We tackle several key challenges:
  • Infrastructure limitations: We eliminate the need for installing new communication network wiring in existing buildings or industrial facilities.
  • Reliability: Our solutions ensure robust communication even during wireless network failures by providing dependable connectivity over power lines, even in noisy and electrically challenging environments. We also offer hybrid mesh networks that combine PLC and wireless technologies.
  • Diverse requirements: The Industrial IoT encompasses a wide range of applications and geographies with varying needs. Our flexible, customization-focused approach delivers high-quality solutions, whether they are standard-based or proprietary, tailored to meet specific application and customer requirements.
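The hybrid mesh idea above, combining PLC and wireless paths for reliability, can be illustrated with a rough failover sketch. The link model, success rates, and retry policy below are hypothetical assumptions for illustration, not Semitech's actual protocol:

```python
# Hypothetical sketch of a hybrid PLC/wireless transport: try each link in
# preference order until one delivers the frame. The success rates and
# ordering are invented for illustration only.
import random

def send_over(link: str, frame: bytes) -> bool:
    # Stand-in for a real link driver; wireless is assumed less reliable
    # here than the powerline path (purely illustrative numbers).
    success_rate = {"wireless": 0.7, "plc": 0.95}[link]
    return random.random() < success_rate

def send_hybrid(frame: bytes, preference=("wireless", "plc"), retries=3) -> str:
    for link in preference:
        for _ in range(retries):
            if send_over(link, frame):
                return link   # delivered on this link
    return "failed"           # both paths exhausted: alarm upstream

random.seed(0)
print(send_hybrid(b"sensor-reading"))
```

The point of the sketch is the fallback structure: a frame that cannot traverse a degraded wireless channel still reaches the controller over the power line.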

What application areas are your strongest?

Our primary domain is Industrial IoT, where we have key customer engagements and successfully deployed solutions across various applications, such as:

  • Tractor/Trailer Communication: We are the only solution supporting the PLC4TRUCKS protocol, used in North America for trailer ABS communication. We are collaborating with our customers to expand the use of PLC to other control and communication functions between tractors and trailers.
  • Point-to-Point Communication for Mining and Drilling: Our proprietary SpeedLink protocol is optimized for connecting subterranean sensors to a control center, enabling constant data streaming without the need for repeaters.
  • Smart Lighting: Our solutions allow customers to remotely control lighting systems in locations such as sporting venues, pools, and airfields.
  • Smart Metering: Our combination of wireless and PLC technologies is utilized by one of the largest metering companies worldwide.

What keeps your customers up at night?

Beyond the universal concerns of competitiveness and cost efficiency, our customers are specifically worried about the following aspects:

  • Reliability: Reliable communication is paramount. While data speed is often secondary, having a robust solution that operates effectively in noisy and changing channel conditions is essential. Our products are designed for long-term use and must maintain reliable operation throughout their lifespan.
  • Security: Since many of our target applications relate to critical infrastructure, security is a major concern. Ensuring the safety and integrity of data is crucial for our customers.
  • No New Wires: Retrofitting existing systems with sensors and remote-control capabilities requires communication solutions that do not need dedicated wiring and can utilize existing infrastructure, making this a significant consideration.
  • Automotive Qualification: With semiconductors becoming integral to the automotive industry, there is an increasing expectation that any semiconductor component used must meet stringent automotive-grade qualification requirements.

What does the competitive landscape look like and how do you differentiate?

When it comes to technology choices, our PLC solution competes with various wireless technologies like 5G and LoRa. However, we view these as complementary approaches rather than direct competition. The optimal approach typically depends on environmental conditions and the specific application requirements. Often, the best solution involves a combination of two or more communication technologies to ensure the best coverage and reliability.

In terms of actual competitors, Semitech often competes with large international companies. We differentiate ourselves by offering high-quality solutions that perform better in challenging, noisy channel conditions. More importantly, we focus on niche applications with long lifespans. Instead of pursuing “cookie-cutter” solutions for mass-market applications, we embrace market fragmentation and the diversity of application needs. We provide our customers, even the smaller ones, with boutique-quality support and a level of customization that large companies cannot or will not offer.

What new features/technology are you working on?

We are continually enhancing our existing solution with new features, including:

  • Faster data rates via our SpeedLink protocol
  • PLC-BLE bridging for trailer telematics applications
  • Hybrid mesh networking

Next year, we will introduce the first and only automotive-grade PLC4TRUCKS solution. Additionally, we are developing new technologies, such as GreenPHY for the EV market and a WiSUN/PLC combo solution.

How do customers normally engage with your company?

Due to the nature of our business, we embrace direct interaction with our customers to better understand their requirements and needs. Our engineering team is very adept at providing tailored solutions, and we encourage direct, open communication between our engineers and our customers. Our website serves as a key source of customer leads, allowing potential clients to find us and contact us directly. Additionally, we employ a network of reps who help channel customers to us and provide a first line of support.

We regularly engage in collaborative engineering projects with our customers to develop specialized features or advanced solutions tailored to their specific needs.

Also Read:

CEO Interview: Pim Donkers of ARMA Instruments

CEO Interview: Dr. Babak Taheri of Silvaco

CEO Interview: Orr Danon of Hailo


WEBINAR: Silicon Area Matters!

by Daniel Nenni on 08-14-2024 at 8:00 am


When designing IP for system-on-chip (SoC) and application-specific integrated circuit (ASIC) implementations, IP designers strive for perfection. Optimal engineering often yields the smallest die area, thereby reducing both cost and power consumption while maximizing performance.

Similarly, when incorporating embedded FPGA (eFPGA) IP into an SoC, designers prioritize these critical factors. eFPGA IP is inherently scalable, enabling it to be tailored to each customer’s specific requirements. However, the necessary FPGA logic is determined not only by the programmed design, but also by the compiler and FPGA architecture used.

Embedded FPGA provides crucial flexibility, allowing SoCs to adapt to changing standards, protocols, customer requirements, and post-quantum cryptography algorithms, while also enabling software acceleration and deterministic processing. Flex Logix’s EFLX eFPGA architecture delivers industry-leading performance, power, and area (PPA) metrics. It features a familiar 6-input lookup table (LUT) along with a highly efficient, patented routing switch matrix that sets it apart from competitors. This switch matrix reduces the number of metal stack layers, enabling EFLX to meet the stringent requirements of edge IoT devices.
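To see why the 6-input LUT is the workhorse of FPGA logic: a LUT-6 is just a 64-entry truth table, so any Boolean function of 6 inputs reduces to a table lookup. A minimal sketch of that idea (illustrative only; it says nothing about EFLX internals):

```python
# Model a 6-input LUT as a precomputed 64-entry truth table.

def make_lut6(func):
    """Precompute the truth table for an arbitrary 6-input Boolean function."""
    return [func(*((i >> b) & 1 for b in range(6))) for i in range(64)]

def eval_lut6(table, inputs):
    """Pack the 6 input bits into an index and look it up, as hardware does."""
    idx = sum(bit << b for b, bit in enumerate(inputs))
    return table[idx]

# Example: a 6-input AND implemented purely as a table lookup.
and6 = make_lut6(lambda *bits: int(all(bits)))
print(eval_lut6(and6, [1, 1, 1, 1, 1, 1]))  # → 1
print(eval_lut6(and6, [1, 0, 1, 1, 1, 1]))  # → 0
```

The compiler's job, then, is deciding how to pack a design's logic into such tables and route signals between them, which is why LUT-packing density and routing directly set the silicon area.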

Recently Flex Logix announced the availability of eXpreso, its powerful 2nd-generation EFLX eFPGA compiler and successor to the first-generation compiler, EC1.0. eXpreso, which has been in development for years, is now shipping to alpha customers for evaluation. The new compiler delivers up to 1.5x higher frequency, 2x denser LUT packing, and 10x faster compile times for all existing EFLX tiles and arrays. IC designers can now reduce the area of their eFPGA IP implementations to levels never seen before.

REPLAY: Reconfigurability is now achievable with significantly reduced silicon area, thanks to new Flex Logix eFPGA compiler
Abstract:

Many IC architects value the adaptability and reconfigurability of embedded FPGA (eFPGA) technology, but often dismiss it due to the implementation cost; smaller die area and lower power consumption are their primary drivers. Flex Logix has addressed this challenge with its new, game-changing eFPGA compiler tool, eXpreso, which can dramatically decrease the die-area impact of adding eFPGA. eXpreso’s innovative routing optimizations and packing ability can cut the area of design implementations in half.

This webinar will provide an opportunity to learn more about Flex Logix’s embedded FPGA IP and problem-solving applications, and to see a live demonstration of eXpreso and how it can significantly reduce the area of embedded FPGA IP.

Presenters:

Jayson Bethurem – VP, Marketing & Business Development
Jayson is responsible for marketing and business development at Flex Logix. Jayson spent six years at Xilinx as Senior Product Line Manager, where he was responsible for about a third of revenues. Before that he spent eight years at Avnet as an FAE, showing customers how to use FPGAs to improve their products. Earlier, he worked at start-ups using FPGAs to design products.

Brian Philofsky – Sr Director of Solutions Architecture
Brian is Sr Director of Solutions Architecture supporting customers in their technical evaluation and implementation of Flex Logix Hardware and Software. Brian spent more than 25 years at Xilinx/AMD in various roles including Director of Technical Marketing, Principal Engineer for Power Solutions, and managing applications, design services and support roles. Brian has been awarded 13 US Patents.

About Flex Logix

Flex Logix is a reconfigurable computing company providing leading edge eFPGA, DSP/SDR and AI Inference solutions for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density which is critical for communications, networking, data centers, microcontrollers and others. Its scalable DSP/SDR/AI is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm, 3nm and 18A in development. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.

Also Read:

Flex Logix at the 2024 Design Automation Conference

Elevating Your SoC for Reconfigurable Computing – EFLX® eFPGA and InferX™ DSP and AI

WEBINAR: Enabling Long Lasting Security for Semiconductors


First third-party ISO/SAE 21434-certified IP product for automotive cybersecurity

by Don Dingee on 08-14-2024 at 6:00 am

ISO/SAE 21434 and UNECE WP.29 R155

Increased processing and connectivity in automobiles are cranking up the priority for advanced cybersecurity steps to keep roads safe. Electronic vehicle interfaces, including 5G/6G, Bluetooth, Wi-Fi, GPS, USB, CAN, and others, offer convenience features for drivers and passengers, but open numerous attack vectors for hackers. Many vehicles now provide over-the-air (OTA) update capability for infotainment systems and mission-critical vehicle software, adding to security concerns. Synopsys has taken a bold step in achieving third-party ISO/SAE 21434 certification for its ARC HS4xFS Processor IP, with more 21434-certified IP in the pipeline.

Second certification effort for ARC processor IP in automotive

Automotive industry observers are likely familiar with ISO 26262, the functional safety (FuSa) standard that assesses system behavior and any potential degradation in the face of hardware and software faults. ISO 26262 was the initial automotive standard certification focus for the Synopsys ARC Processor IP family, and many ARC core variants, including the ARC HS4xFS Processor IP, now contain FuSa features certified to ASIL D levels.

Cybersecurity presents similar concerns for automotive designers but requires defining a substantially different framework. “For starters, a comprehensive cybersecurity approach for automotive needs much tighter communication between automakers and suppliers,” says Ron DiGiuseppe, Automotive IP Segment Manager for Synopsys. “It also requires commitments at the executive level in creating dedicated cybersecurity assurance teams that partner with the product development teams, overseeing due diligence and working hand-in-hand with product development teams to enforce processes for creating and maintaining ISO 21434-certified IP.”

ISO/SAE 21434 “Road vehicles – cybersecurity engineering” defines such a framework, standardizing roles and responsibilities for various groups during different stages of automotive product development. It comprehensively addresses policies, processes, and procedures in a Secure Development Lifecycle (SDL) with specific criteria that each stage of development must meet before proceeding.

Initial support for ISO 21434 from European stakeholders

A broader European effort is piggybacking on ISO 21434, seeking to harmonize vehicle regulations. UNECE WP.29 extends into cybersecurity and software updates with two recent additions, R155 and R156. R155 sets up a path with uniform provisions for approval of vehicles designed to ISO 21434 and its cybersecurity risk management system.

“Manufacturers and car owners have a vital self-interest in protecting vehicles against cyberattacks,” says Meike Goelder, Product Management Cybersecurity at Bosch (see the full Bosch video “100 Seconds: The Importance of Cybersecurity“). “Attacks aim at manipulating safety-critical parameters, violating privacy by stealing customer data, or even hijacking a car, and new ways of attacking whole fleets only multiply the danger.” She sees both ISO 21434 and UNECE WP.29 R155 helping ensure the cyber compliance of cars.

image courtesy Bosch

DiGiuseppe points out that although UNECE WP.29 R155 is a European effort, taking the lead in defining an approval process, it sets a de facto standard for auto manufacturers selling in global markets. “To help ensure automakers and their suppliers comply with cybersecurity risk management, we selected an appropriate product, the ARC HS4xFS Processor IP, aligned our organization and processes, assessed for compliance, and obtained ISO 21434 third-party certification by SGS-TÜV Saar expediently.”

Installing cybersecurity risk management engineering processes

Achieving ISO 21434 certification involves two distinct phases: assessing the potential vulnerability of a product and providing an organizational structure for ongoing incident response should any occur. Vulnerability is approached by a Threat Analysis and Risk Assessment (TARA), creating a risk score based on four factors. Having a risk score helps drive informed decisions about treating risks, either through the development process or with specific modifications to a product.
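
As an illustration of how a TARA-style score might combine factor ratings, here is a hypothetical sketch; the factor names, scales, and averaging scheme below are simplifications for illustration, not taken from the ISO/SAE 21434 text:

```python
# Illustrative TARA-style risk scoring. Factor names, rating scales, and
# the averaging scheme are hypothetical simplifications, NOT the actual
# method defined in ISO/SAE 21434.

def risk_score(impact: int, feasibility_factors: dict) -> int:
    """Combine an impact rating (1-4) with attack-feasibility factors
    (each rated 1-4, higher = easier to attack) into a 1-16 risk score."""
    for name, rating in feasibility_factors.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"{name} rating out of range: {rating}")
    # Average the feasibility factors, then scale by impact severity.
    feasibility = sum(feasibility_factors.values()) / len(feasibility_factors)
    return round(impact * feasibility)

score = risk_score(
    impact=4,  # safety-critical consequence
    feasibility_factors={
        "elapsed_time": 2,    # weeks of attacker effort needed
        "expertise": 2,       # proficient attacker
        "item_knowledge": 3,  # some public documentation available
        "equipment": 3,       # standard tools suffice
    },
)
print(score)  # 10 -> high enough to warrant risk treatment
```

A score like this is what drives the treatment decision the paragraph above describes: accept, mitigate in the development process, or modify the product.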

image courtesy Synopsys

We asked DiGiuseppe the obvious question: did the ARC HS4xFS Processor IP require any modifications to become ISO 21434-certified IP? The assessment found no security vulnerabilities, which SGS-TÜV Saar confirmed in their audit, so no changes were needed. He indicates other products, including ARC NPU and ARC-V processor and interface IP, are in the certification pipeline, and they hope for similar outcomes but are ready to make any necessary product modifications.

The Synopsys cybersecurity risk management engineering processes include the Security Development Lifecycle, Security Risk Assessment, and IP Security Incident Response Team. As is often the case with systems standards certification, vulnerability management expertise is valuable to help identify, diagnose, and communicate vulnerabilities and the best mitigation approach. “This is the first IP product third-party certified to ISO 21434 in the industry – and we now have processes and teams in place to certify more of our IP products,” DiGiuseppe concludes. This breakthrough is welcome news for SoC designers, whether at automakers or third-party suppliers, who must get ISO 21434 certification with their products. More background on the ISO/SAE 21434 standard and the Synopsys ISO/SAE 21434-certified IP details are online.

Technical bulletin on ISO/SAE 21434 by Ron DiGiuseppe:
The Promise of ISO/SAE 21434 for Automotive Cybersecurity

Details on Synopsys’ industry-first third-party ISO/SAE 21434-certified IP product:
Synopsys Advances Automotive Security with Industry’s First IP Product to Achieve Third-Party Certification for ISO/SAE 21434 Cybersecurity Compliance


Circuit Simulation Update from Empyrean at #61DAC

by Daniel Payne on 08-13-2024 at 10:00 am

Empyrean SPICE min

A familiar face in EDA, Greg Lebsack met with me in the Empyrean booth at DAC this year on opening day to provide an update on what’s new. I first met Greg when he was at Tanner EDA, then Mentor and Siemens EDA, so he knows our industry quite well. The company was a Silver-level sponsor of DAC this year, and Empyrean offers tools for circuit verification covering aging, electrical over-stress (EOS), Monte Carlo, cell characterization, RF simulation, co-simulation, GPU-powered simulation, channel simulation, and SPICE simulation. They also have EDA tools for RF, digital, flat panel design, foundry, and advanced packaging design.

SPICE simulators

I learned that their SPICE simulator running on GPUs is popular, and whenever NVIDIA releases a new GPU, the ALPS-GT tool is quickly updated and released. In fact, NVIDIA is a customer of Empyrean and uses ALPS-GT for transient analysis and ALPS-RF for harmonic balance simulations. A poster session on Tuesday was presented jointly by NVIDIA and Empyrean about using ALPS-GT:

  • GPU ACCELERATED HARMONIC BALANCE SPICE SIMULATION
    Qikun Xue, NVIDIA, San Jose, CA
    Chen Zhao, Empyrean Technology, Santa Clara, CA
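
Numerically, the transient analysis that ALPS-GT accelerates boils down to solving the circuit equations over and over, once per time step. A minimal, purely illustrative backward-Euler sketch for a single RC node follows; nothing Empyrean-specific is assumed here:

```python
# Backward-Euler transient simulation of an RC low-pass driven by a 1 V step.
# Purely illustrative of what a SPICE engine does numerically; real simulators
# solve large sparse nonlinear systems at every step, on CPU or GPU.

R, C = 1e3, 1e-6          # 1 kOhm, 1 uF -> time constant tau = 1 ms
dt, t_end = 1e-5, 5e-3    # 10 us step, simulate 5 ms
v, vin = 0.0, 1.0         # node voltage, step input

steps = round(t_end / dt)  # 500 steps
for _ in range(steps):
    # Backward Euler: C*(v_new - v)/dt = (vin - v_new)/R, solved for v_new
    v = (v + dt / (R * C) * vin) / (1 + dt / (R * C))

# After 5 time constants the output has settled near the input voltage.
print(f"v(5 ms) = {v:.3f} V")  # 0.993 V (analytic: 1 - exp(-5))
```

A GPU earns its keep when the single node above becomes millions of nodes whose linear system must be factored and solved at each of these steps.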

Booth presentations showed ALPS and ALPS-RF on both Monday and Tuesday. Other customers mentioned by Greg were MPS, designing CMOS power supply chips; Diodes, doing EM/IR analysis on power management ICs; and WillSemi, running reliability analysis. PMIC designs typically use 180nm process nodes, while the ALPS circuit simulator is also certified for use with the leading-edge Samsung 3nm node.

To support the six product lines at the company, they have grown to 1,200 people, up from 700 just two years ago. Tools in their digital SoC design space cover five areas:

  • Qualib – process library analysis and validation
  • Liberal – standard cell, memory and IP characterization
  • XTop – timing closure and ECO tool
  • XTime – timing and design reliability analysis
  • Skipper – layout integration and analysis
Empyrean DAC Booth

Summary

Every time I meet with contacts at Empyrean the company has grown, and I learn about new customers and market segments being served. Their booth at DAC looked larger this year and included more staff than ever before. Having a tier-one customer like NVIDIA certainly grabbed my attention, and it cements Empyrean’s position in the IC circuit simulation marketplace as a trusted EDA vendor. It’s a bit poetic that NVIDIA GPUs are being used to simulate new NVIDIA ICs, accurately and faster than ever before, for both transient and RF analysis.

Stay tuned on SemiWiki for updated news from this rising EDA vendor in blogs to come.

Related Blogs


Why Glass Substrates?

by Sharada Yeluri on 08-13-2024 at 6:00 am

Intel Glass Substrates
Borrowed from Intel’s presentation on glass substrates

The demand for high-performance and sustainable computing and networking silicon for AI has undoubtedly increased R&D dollars and the pace of innovation in semiconductor technology. With Moore’s Law slowing down at the chip level, there is a desire to pack as many chiplets as possible inside ASIC packages and get the benefits of Moore’s Law at the package level.

The ASIC package hosting multiple chiplets typically consists of an organic substrate. This is made from resins (mostly glass-reinforced epoxy laminates) or plastics. Depending on packaging technology, either the chips are mounted directly on the substrate, or there is another layer of silicon interposer between them for high-speed connectivity between the chiplets. Interconnect bridges instead of interposers are sometimes embedded inside the substrate to provide this high-speed connectivity.

The problem with organic substrates is that they are prone to warpage issues, especially in larger package sizes with high chip densities. This limits the number of dies that can be packed inside a package. That is where substrates made of glass could be game changers!

Advantages of Glass Substrates

✔ They can be made super flat, allowing for finer patterning and higher (10x) interconnect densities. During photolithography, the entire substrate receives uniform exposure, reducing defects.

✔ Glass has a thermal expansion coefficient similar to that of the silicon dies above it, reducing thermal stress.

✔ They don’t warp and can handle much higher chip density in a single package. Initial prototypes could handle 50% more chip density than organic substrates.

✔ Could seamlessly integrate optical interconnects, giving rise to more efficient co-packaged optics.

✔ They are typically processed as rectangular panels rather than round wafers, which increases the number of substrates per panel, improving yield and reducing costs.

Glass substrates could potentially replace organic substrates, silicon interposers, and other high-speed embedded interconnects inside the package.

However, there are some challenges. Glass is brittle and prone to fractures during manufacturing, so this fragility requires careful handling and specialized equipment to prevent damage during processing. Ensuring proper adhesion between glass substrates and the other materials in the semiconductor stack, such as metals and dielectrics, is also challenging; the differences in material properties can lead to stresses at the interfaces, potentially causing delamination or other reliability issues. Finally, while glass has a thermal expansion coefficient similar to silicon’s, it can differ significantly from that of PCB materials and bumps, and this mismatch can cause thermal stresses during temperature cycling, impacting reliability and performance.

Lack of established industry standards for glass substrates leads to variability in performance across different vendors. As the technology is new, there is not enough long-term reliability data. More accelerated life testing is needed to gain confidence in using these packages for high-reliability applications.

Despite the disadvantages, glass substrates hold great promise for HPC/AI and DC networking silicon, where the focus is on packing as much throughput as possible inside ASIC packages to increase the overall scale, performance, and efficiency of the systems.

Major players like Intel, TSMC, Samsung, and SKC are investing heavily in this technology. Intel is leading the pack, with test chips introduced late last year. However, it will be another 3-4 years before this inevitable transition to glass substrates happens for high-end silicon.

I can’t wait to see more innovations that push the boundaries of technology!

Also Read:

Intel Ushers a New Era of Advanced Packaging with Glass Substrates

Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC

TSMC Advanced Packaging Overcomes the Complexities of Multi-Die Design


PieceMakers HBLL RAM: The Future of AI DRAM

by Joe Ting on 08-12-2024 at 6:00 am

PieceMaker Memory

PieceMakers, a fabless DRAM product company, is making waves in the AI industry with the introduction of a new DRAM family that promises to outperform traditional High Bandwidth Memory (HBM). The launch event featured industry experts, including a representative from Samsung, highlighting the significance of this innovation.

Today, customers are already exploring the use of low-density HBLL RAM for large language models. According to Dr. Charlie Su, President and CTO of Andes Technology, a leading RISC-V vector processor IP provider, “High-bandwidth RAM, such as HBLL RAM, is widely discussed among AI chip makers. When paired with Andes vector processors and customer accelerators, it creates great synergy to balance compute-bound and memory-bound issues.” Eight HBLL RAM chips can deliver 4 GB of density for smaller language models, with a staggering bandwidth of 1 TB per second, at a low cost.

The Need for Advanced DRAM

Since last year, large language models (LLMs) have grown in size and complexity. These models require varying amounts of memory to store their parameters, but one constant remains: the need for high bandwidth. Currently, the landscape of DRAM includes low-power DDR, GDDR, and HBM. However, there is a notable gap in high bandwidth but lower density options, which is where PieceMakers’ new HBLL RAM comes into play.

The name “HBLL RAM” stands for High Bandwidth, Low Latency, and Random Access. Compared to HBM, HBLL RAM offers two additional characteristics that make it superior: low latency and random access capabilities. This innovation addresses the needs of AI applications by providing lower density with high bandwidth.

The current generation of HBLL RAM, now in production, offers a low density of 0.5 GB and a bandwidth of 128 GB per second. Future generations are being designed with stacking techniques to further enhance performance. The strategy involves increasing data rate vertically and expanding IO width horizontally. Similar to HBM, HBLL RAM uses 512 IO and data on 1K IO, with future generations set to boost the frequency.

When comparing HBLL RAM to HBM, the advantages are clear. At the same density, HBLL RAM provides much higher bandwidth. Conversely, at the same bandwidth, it offers lower density. This improvement is quantified by the bandwidth density index, which measures the maximum bandwidth per unit density (GB). HBLL RAM significantly outperforms HBM, low-power DDR, and GDDR in this regard.
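
The bandwidth density index described above is simply peak bandwidth divided by density. A quick sketch using the article’s HBLL RAM figures; the HBM numbers are rough, hypothetical placeholders for comparison only:

```python
# Bandwidth density index: peak bandwidth (GB/s) per GB of density.
# HBLL RAM figures come from the article (128 GB/s at 0.5 GB); the HBM
# stack figures are hypothetical placeholders, not vendor data.

def bandwidth_density(bandwidth_gbps: float, density_gb: float) -> float:
    """Return (GB/s of bandwidth) per GB of capacity."""
    return bandwidth_gbps / density_gb

hbll = bandwidth_density(128, 0.5)   # current-gen HBLL RAM -> 256 (GB/s)/GB
hbm = bandwidth_density(819, 24)     # hypothetical HBM stack -> ~34 (GB/s)/GB
print(hbll, round(hbm, 1))
```

The index makes the trade-off explicit: a part optimized for bandwidth at low density scores far higher than a capacity-optimized stack, even if the stack’s absolute bandwidth is larger.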

Bandwidth and Energy Efficiency

Typically, discussions about bandwidth focus on sequential bandwidth. However, the granularity of random access is equally important. HBLL RAM excels in random access performance, outperforming HBM, which has good sequential bandwidth but poor random access capabilities.

In terms of energy efficiency, HBLL RAM is more power-efficient because it delivers the same bandwidth with a smaller array density or page size. This efficiency stems from its innovative low-density architecture, first introduced at ISSCC in 2017. A single HBLL RAM chip provides 128 GB per second bandwidth across eight channels, with all signal bumps located on one side of the chip. This design results in latency that is approximately half of traditional DRAM, with superior random access bandwidth.

Real-World Applications and Simplified Interfaces

Jim Handy, a respected industry analyst, highlighted HBLL RAM’s potential in an article where he illustrated its placement between level three cache and DRAM. In fact, simulations using HBLL RAM as level four cache yielded impressive results: latency was halved, and average bandwidth increased significantly compared to systems without HBLL RAM.
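
The cache-hierarchy effect Handy describes can be illustrated with a standard average-memory-access-time (AMAT) calculation. All hit rates and latencies below are invented round numbers chosen to show the mechanics; they are not PieceMakers’ simulation data:

```python
# Average memory access time (AMAT) with and without an HBLL-RAM-style L4.
# Every hit rate and latency here is an invented round number for
# illustration; the "latency halved" result above came from real simulations.

def amat(levels):
    """levels: list of (hit_rate, latency_ns); last level must be (1.0, ...)."""
    total, reach = 0.0, 1.0      # reach = fraction of accesses arriving here
    for hit_rate, latency in levels:
        total += reach * hit_rate * latency
        reach *= (1.0 - hit_rate)
    return total

without_l4 = amat([(0.95, 10), (1.0, 100)])             # L3 then DRAM
with_l4 = amat([(0.95, 10), (0.8, 40), (1.0, 100)])     # L3, L4, DRAM
print(without_l4, with_l4)  # 14.5 ns vs 12.1 ns
```

Even with these toy numbers, inserting a fast intermediate level cuts the average; with a level as fast and random-access-friendly as HBLL RAM is claimed to be, the gain would be correspondingly larger.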

The simplicity of the memory controller is another advantage, as PieceMakers provides it directly to customers. The interface for HBLL RAM is simple and SRAM-like, involving only read and write operations, plus refresh and mode register set.

One of PieceMakers’ demo boards and a customer’s board exemplify this innovation, utilizing an ABF-only design without CoWoS (Chip-on-Wafer-on-Substrate), an advanced packaging technology that can be 2-3 times more expensive than traditional flip-chip packaging. Looking ahead, PieceMakers plans to stack HBLL RAM similarly to HBM but without the need for CoWoS. This 2D stacking approach, as opposed to 2.5D, promises further cost reductions.

In conclusion, PieceMakers’ HBLL RAM represents a significant leap forward in DRAM technology for AI applications. It offers superior bandwidth, lower latency, and enhanced energy efficiency, making it a compelling choice for future large language models. With the potential to scale from 128 GB per second to 16 TB per second, HBLL RAM is set to revolutionize the AI industry.

Joe Ting is the Chairman and CTO, PieceMakers

Also Read:

Unlocking the Future: Join Us at RISC-V Con 2024 Panel Discussion!

Andes Technology: Pioneering the Future of RISC-V CPU IP

A Rare Offer from The SHD Group – A Complimentary Look at the RISC-V Market


A Post-AI-ROI-Panic Overview of the Data Center Processing Market

by Claus Aasholm on 08-11-2024 at 8:00 am

Datacenter Supply Chain 2024

With all the Q2-24 results delivered, it is time to remove the clouds of euphoria and panic, ignore the performance claims and the bugs, and analyse the Data Center business, including examining the supply chain up and downstream. It is time to find out if the AI boom in semiconductors is still alive.

We begin the analysis with the two main categories, processing and network, and the top 5 semiconductor companies supplying the data center.

The top 5 Semiconductor companies that supply the data center account for nearly 100% of networking and processing. Once again, the overall growth for Q2-24 was a healthy 15.3%, all coming from processing. Networking contracted slightly by -2.5%, while processing grew by 20.3%. As Nvidia stated that the company’s decline in networking business was due to shipment adjustments, the growth numbers likely do not represent a major shift in the underlying business.

From a Year-over-year perspective, the overall growth was massive, 167%, with processing growing by 211% and networking by 66%.

As can be seen from the Operating Profit graph, the operating profit growth was much more aggressive, highlighting the massive demand from Data centers for Nvidia in particular.

The combined annual Operating profit growth was 522%, with processing accounting for a whopping 859% and networking growing 211%.

The quarterly operating profit growth rates aligned with the revenue growth rates, indicating that operating profits have stabilised and favour Processing slightly, as seen below.

Companies and Market shares

Even though Nvidia is so far ahead that market share is irrelevant for the GPU giant, it is vital for the other suitors. Every % is important.

The combined Datacenter processing revenue and market shares can be seen below:

While Nvidia has a strong revenue share of the total market, it has a complete stranglehold on the profits. The ability to command a higher premium is an important indicator of the width of Nvidia’s moat. Nvidia’s competitors are pushing performance/price metrics to convince customers to switch, but by benchmarking against Nvidia’s AI GPUs they implicitly endorse them, even as Nvidia runs with a higher margin.

The shift in Market share can be seen below:

While this is “Suddenly, Nothing Happened,” the key takeaway is that despite the huff and puff from the other AI suitors, Nvidia stands firm and has slightly tightened its grip on profits.

The noise around Blackwell’s delay has not yet impacted the numbers, and it is doubtful that it will hurt Nvidia’s numbers, as the H100 is still the de facto choice in the data center.

The Datacenter Supply Chain

The shift in the Semiconductor market towards AI GPUs has significantly changed the Semiconductor supply chain. AI companies are now transforming into systems companies that control other parts of the supply chain, such as memory supply.

The supply situation is mostly unchanged from last quarter, with high demand from cloud companies and supply limited by CoWoS packaging and HBM memory. The memory situation is improving, although not all suppliers are approved by Nvidia.

As can be seen, the memory companies have been the clear winners in revenue growth since the last low point in the cycle.

Undoubtedly, SK Hynix has been the prime supplier to Nvidia, as Samsung has had approval problems. The latest operating profit data for Samsung suggest that the company is now delivering HBM to Nvidia or other companies, and the HBM supply situation is likely more relaxed.

GPU/CPU Supply

TSMC manufactures almost all of the processing and networking dies. The company recently reported record revenue for Q2-24 but is not yet at maximum capacity. CoWoS is the only area that is still limited, but TSMC is adding significant capacity every quarter, so it should not constrain the key players in the Data Center supply chain.

Also, the monthly revenue for July was a new record.

While nothing has been revealed about the July revenue, it is likely still driven by TSMC’s High-Performance Computing business, which supplies mainly to the data center.

The HPC business added $3B without TSMC revealing which customer was behind it. As Apple used to be the only 3nm customer and normally buys less in Q2, this looks like a new 3nm customer, most likely a data center supplier.

It could be one of the cloud companies, which are all trying to leverage their own architectures. Amazon is very active with Trainium, Inferentia, and Graviton, while Google has the TPU.

Also, Lunar Lake from Intel and the MI series from AMD could be candidates. With Nvidia’s Blackwell issues, the company stays on 4nm (5nm) until Rubin is ready to launch.

Apple could also start using M-series processors in its own data centers.

The TSMC revenue increase is undoubtedly good news for the Data center market, which will continue growing in Q3, no matter what opinions investment banks have on the ROI of AI.

The Demand Side of the Equation

The AI revolution has caused the explosive growth in data center computing. Analysing Nvidia’s current customer base gives an idea of the different demand channels driving the growth.

Two-thirds of the demand is driven by the large tech companies in cloud and consumer, while the last third is more fragmented across enterprise, sovereign, and supercomputing. The last two are not really driven by a short-term ROI perspective and will not suddenly disappear.

A number of banks and financial institutions have recently questioned the large tech companies’ investment in AI, which has caused the recent bear run in the stock market.

I am not one to run with conspiracy theories, but it is well known that volatility is good for the banking business. I also know that the banks have no clue about the long-term return on AI, just like me, so I will continue to follow the facts while the markets go up and down.

The primary source of funding for the AI boom will continue to be the large tech companies.

Tech CapEx

5 companies represent the bulk of the CapEx that flows to the Data Center Processing market.

It is almost as if the financial community treats the entire CapEx of the large cloud customers as a brand-new investment in a doubtful AI business model. The reality is that the data center investment is not new, and it is creating tangible revenue streams while doubling as an AI investment.

From a growth perspective, using a starting point from before the AI boom, it becomes clear that data center investment growth actually tracks the growth of cloud revenue.

While I will let other people decide if that is a good return on investment, the CapEx growth compared to Cloud revenue growth does not look insane. That might happen later but right now it can certainly be defended.

The next question is: how much processing candy can the large cloud companies get for their CapEx?

The processing share of total CapEx is certainly increasing, although CapEx itself has also increased significantly since the AI boom started. It is worth noting that the new AI servers deliver significantly more performance than the CPU-only servers that have traditionally been used in data centers.

The Q2 increase in CapEx is a good sign for the data center processing companies. It represents an $8.3B increase in CapEx for the top 5, which can be compared with a $4.3B increase in processing and networking revenue for the semiconductor companies.
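
As a back-of-the-envelope check on those two figures (this just restates the article’s numbers, nothing more):

```python
# Rough read of the figures above: of the $8.3B quarter-over-quarter CapEx
# increase at the top-5 cloud companies, $4.3B showed up as incremental
# processing-and-networking revenue at the semiconductor suppliers.

capex_increase_b = 8.3      # $B, top-5 cloud CapEx increase in Q2
semi_rev_increase_b = 4.3   # $B, processing + networking revenue increase

share = semi_rev_increase_b / capex_increase_b
print(f"{share:.0%}")  # 52% of the incremental CapEx flowed to silicon
```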

What is even better is that the CapEx commitment from the large cloud companies will continue for the foreseeable future. Alphabet, Meta, and Amazon will have higher CapEx budgets in the second half, and Meta will have significantly higher CapEx in 2025.

Microsoft revealed that even though almost all of its CapEx is AI and data center related, around half of the current CapEx is used for land and buildings. These are boxes that need to be filled with loads of expensive AI GPU servers later, which represents a strong long-term CapEx commitment.

Conclusion

While the current valuations and share price fluctuations might be insane, the Semiconductor side of the equation is high growth but not crazy. It is alive and vibrant.

Nvidia might have issues with Blackwell but can keep selling H100 instead. AMD and Intel will start to chip away at Nvidia but it has not happened yet. Cloud companies will also start to sneak in their architectures.

The supply chain looks better aligned to serve the new AI driven business with improved supply of memory although advanced packaging might be tight still.

TSMC’s rapidly increasing HPC revenue is a good sign for the next earnings season.

The CapEx from the large cloud companies is growing in line with their cloud revenue and all have committed to strong CapEx budgets for the next 2 to 6 quarters.

In a few weeks, Nvidia will start the Data Center Processing earnings circus once again. I will have my popcorn ready.

In the Meta call, the ROI on AI was addressed with two buckets: Core AI, where an ROI view is relevant, and Gen AI, a long-term bet where it does not yet make sense to talk about ROI.

Also Read:

TSMC’s Business Update and Launch of a New Strategy

Has ASML Reached the Great Wall of China

Will Semiconductor earnings live up to the Investor hype?