
CEO Interview: BRAM DE MUER of ICsense

by Daniel Nenni on 08-23-2024 at 6:00 am


Bram co-founded ICsense in 2004 as a spin-off of the University of Leuven. He has been CEO since 2004 and helped grow the company from 4 to over 100 people in 20 years while remaining profitable every year. He managed the acquisition by TDK in 2017. He is an experienced entrepreneur in the microelectronics field with a strong interest in efficiently managing design teams and delivering projects with high quality.

Bram is a board member of Flanders Semiconductor, a non-profit organization that represents the Belgian semiconductor ecosystem. He is also a member of the Crown Counsel of SOKwadraat, a non-profit organization that aims to boost the number of spin-offs in Belgium. He holds an MSc degree in microelectronics and a Ph.D. from the Katholieke Universiteit Leuven, Belgium. Bram has been a research and postdoctoral assistant with the ESAT-MICAS laboratories under Prof. M. Steyaert.

Tell us about your company?
At ICsense, we specialize in analog, mixed-signal and digital ASIC (Application-Specific Integrated Circuit) development. We handle the complete chain from architectural definition, design and in-house test development up to mass production of the custom components. Today, we are one of the largest fabless European companies active in this domain.

I co-founded ICsense with three of my PhD colleagues back in 2004. Our focus has always been on analog, digital, mixed-signal, and high-voltage ICs, serving diverse industries including automotive, medical, industrial, and consumer electronics. ICsense is headquartered in Leuven, Belgium and has a design center in Ghent, Belgium. The semiconductor ecosystem in Belgium is quite lively, with imec as a renowned research center, world-class universities and many industrial players in different parts of the semiconductor value chain, represented by Flanders Semiconductor.

In 2017, we became part of the Japanese TDK Group (www.tdk.com), a leading supplier of electronic components. This enabled us to continue our strategy and serve customers worldwide as before. What many people don’t realize is that the majority of ICsense’s business today is outside the TDK Group!

Joining TDK has allowed us to grow faster and broaden our activities. We have invested in ATE (Automated Test Equipment, mass production testers and wafer probers) to do test program developments in-house. This makes ICsense unique in the market of ASIC suppliers, capable of building some of the highest-performance ASICs and bringing them into production for our customers.

What problems are you solving?
Many industries require specialized ICs tailored to specific applications that off-the-shelf solutions often cannot adequately serve. To meet this need, we design custom ASICs for the automotive, medical, industrial, and consumer electronics sectors, ensuring optimal performance and functionality.

Designing high-performance analog and mixed-signal ICs is inherently complex and requires specialized expertise. This expertise is the reason our customers knock on our door. Leveraging our extensive experience in analog, digital, mixed-signal, and high-voltage ICs, we deliver robust and reliable solutions. We develop advanced sensor interfaces, power management solutions, high-voltage actuation and sensing circuits, ultra-low-power circuitry and communication chips.

Every chip is uniquely built for a single customer and supplied only to that customer. The customer’s IP is fully protected to preserve their competitive edge in the market.

What application areas are your strongest?
In our 20 years of existence, we have built up a strong track record in complex ASIC developments in different technology nodes and for many different applications. We often push the boundaries to reach the highest performance or squeeze the last µA out of a circuit. We are definitely not an “IP-gluer” (i.e., a company that simply combines existing IP blocks without modifications). Our design work is mostly custom, to meet the challenging requirements our customers are faced with.

Over the past 10 years, we have seen strong growth in industries such as automotive and medical that require ICs meeting stringent quality and reliability standards. To address this, we employ rigorous design techniques. ICsense works according to the ISO 13485 (medical) and ISO 26262 (automotive) compliance standards. To give you one example, all the automotive ASICs we developed in the last 5 years are at least ASIL-B(D) Functional Safety level.

What keeps your customers up at night?
It really depends on the specific customer. We don’t have a typical client profile; our customers range from startups to large multinationals, from semiconductor companies to OEMs, each with their own unique concerns and expectations. In the medical market, for example, we collaborate with industry leaders in implants, such as Cochlear, as well as with brand-new startups aiming to bring novel ideas to new markets. The common ground among all our clients is their need for a partner who can build innovative, state-of-the-art ASICs with low risk and who supports sustainable production. They appreciate that ICsense combines the flexibility and dynamic team of a startup company, with the rigour, stability and sustainability of a large company.

In recent years, another major concern for our customers has been de-risking their supply chains. Discussions now frequently revolve around second sourcing and geopolitical issues. In response, we have been exploring more technology and partner options across the supply chain. Today, we are one of the few companies worldwide that can offer IC design in over 50 technology flavors, with fabrication facilities in the US, Europe, and Taiwan. Our specific design methodology allows us to efficiently work across various technology nodes, ensuring we can select the best match for our customers.

What does the competitive landscape look like and how do you differentiate?
Lately, there has been a lot of consolidation in the semiconductor value chain in Europe. As a result, ICsense remains one of the few companies of its size and capabilities that can serve external customers. Thanks to our mother company TDK, we can provide ASICs to Fortune 500 companies and to smaller companies and startups at the same time. With a team of over 100 skilled designers and in-house ATE and product engineering, we have a unique position in ASIC design and supply to the medical, industrial, consumer and automotive markets.

What new features/technology are you working on?
All our ASIC developments are customer specific. Some will hit the market as an ASSP sold by our customer, most as part of a single product. Therefore, all the technology and features we are developing are confidential. We do see some trends in the market, such as a shift towards smaller technology nodes (although not deep submicron) and a shift towards more differentiation in the supply chain. Our technology-agnostic design approach is quite powerful for capturing this trend.

Another trend is the push towards higher integration and more functionality in many applications, from medical implants to industrial devices, which pushes the boundaries of the state of the art. Again, this is one of our core strengths.

How do customers normally engage with your company?
We work with customers in two models. The first is a pure design support model, where we act as a virtual team for our customer. We perform the full design and hand over the design files, so our customer can integrate it further or handle the manufacturing themselves. Our second and most popular model is the turnkey supply model or, as we call it, ASIC design and supply. We handle the complete development from study up to mass production for our customer and we supply the ASICs to them throughout the lifetime of their product.

An ASIC design can start with just a back-of-the-envelope idea or a full product requirement. Whatever the starting point, our first step is always a feasibility and architectural study in which we pin down all the details of the design to be made, define boundary conditions and prove with calculations and preliminary simulations that the requirements can be met.

We then proceed to the actual implementation, the design and layout work, which is the bulk of the work in the project. Through the design cycle, we continuously perform in-depth verification from transistor to chip top level to make sure all use cases are covered prior to the actual manufacturing of the wafers. In parallel to the manufacturing of the engineering silicon, we develop the ATE test hardware and software so that when the silicon returns from the fab, we can immediately start testing.

We have a good track record in first-time-functional designs, meaning that the ASIC is fully functional and can be used to build prototypes at the customer side. We typically only need a respin to fix small items and to optimise the yield. This is a result of our proprietary, systematic design flow based on commercially available EDA tools from Cadence, Synopsys and Siemens.

The last stage is industrialisation, which includes qualification of the chips and additional statistical analysis to prove robustness over the lifetime of the product. Our product engineering team supports our customer with the ramp-up, start of production and yield monitoring during production. The supply model, direct or through partners, depends on the volume and the type of customer.

Also Read:

CEO Interview: Anders Storm of Sivers Semiconductors

CEO Interview: Zeev Collin of Semitech Semiconductor

CEO Interview: Yogish Kode of Glide Systems


Overcoming Verification Challenges of SPI NAND Flash Octal DDR

by Kalar Rajendiran on 08-22-2024 at 10:00 am

Typical Octal Serial NAND Device

As the automotive industry continues to evolve, the demands for high-capacity, high-speed storage solutions are intensifying. Autonomous vehicles and V2X (Vehicle-to-Everything) communication systems generate and process massive amounts of data, necessitating advanced storage technologies capable of meeting these demands. NAND Flash memory, particularly in its Serial NAND form, has emerged as a critical component in this space, offering higher memory density compared to alternatives like NOR Flash. However, the adoption of new architectures, especially those involving SPI Octal DDR interfaces, presents unique challenges in the verification of these storage solutions.

Durlov Khan, a Product Engineering Lead at Cadence, gave a talk at the FMS 2024 Conference on how his company helped overcome these verification challenges.

Challenges in Verifying SPI NAND Flash Octal DDR

One of the significant hurdles in integrating SPI Octal DDR NAND Flash into automotive applications is the difficulty in accurately verifying these advanced storage devices. Traditional verification models for NOR Flash memory cannot adequately model the architecture and addressing schemes of Serial NAND Flash memory, especially when it comes to the Command-Address-Data (C-A-D) instruction sequences.

Existing models for x1, x2, or x4 SPI Quad NAND devices fall short in simulating Octal SPI NAND devices due to key differences in architecture. Octal SPI NAND uses an 8-bit-wide data bus, requiring more complex C-A-D sequences and additional signal pins (SIO3-SIO7), which aren’t supported by Quad SPI models.

Additionally, Octal devices operate at higher frequencies with stricter timing parameters, including the use of a Data Strobe (DS) signal for data synchronization. These factors make existing Quad SPI models inadequate for accurately simulating the behavior of Octal SPI NAND devices.

Attempting to replicate an Octal device by combining multiple SPI or SPI Quad NAND devices is not feasible due to signaling incompatibilities and significant discrepancies in AC/Timing parameters, leading to inaccurate verification results. This gap in verification capabilities poses a substantial risk, as it limits developers’ ability to ensure that their automotive storage solutions will perform reliably in real-world scenarios.

Collaborative Solution: SPI NAND Flash Memory Model Enhancement

To address these challenges, a collaborative effort was undertaken by Cadence, in partnership with Winbond, leading to the development of a robust solution for SPI Octal DDR verification. This solution centers around the enhancement of the Cadence SPI NAND Flash Memory Model, which now supports the new SPI Octal DDR capabilities.

This enhanced Memory Model can be activated through a configuration parameter and includes additional support for a Volatile Configuration Register. This register allows users to program the correct Octal transfer mode, enabling accurate simulation of the SPI Octal DDR interface. In this mode, legacy SI and SO pins are repurposed, and new SIO3-SIO7 pins are introduced, along with a Data Strobe (DS) output pin that works with read data to signal the host controller at maximum DDR frequencies.

The model is fully backward compatible and can operate in multiple modes, including 1-bit SPI Single Data Rate (SDR), 1-bit SPI Double Data Rate (DDR), 8-bit Octal SPI SDR, and 8-bit Octal SPI DDR, depending on user configuration. This flexibility ensures that developers can accurately simulate a wide range of operational scenarios, crucial for the varying demands of automotive applications.
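The bandwidth gap between these modes follows directly from the bus width and the data rate. A small, purely illustrative Python model (the mode names mirror the list above; the function itself is hypothetical and is not part of the Cadence model) makes the arithmetic explicit:

```python
# Hypothetical throughput model for the SPI modes described above.
# Data bits transferred per clock cycle = bus width x edges per cycle;
# DDR clocks data on both the rising and the falling edge.

def bits_per_cycle(bus_width: int, ddr: bool) -> int:
    """Data bits moved per clock cycle for a given SPI mode."""
    return bus_width * (2 if ddr else 1)

MODES = {
    "1-bit SPI SDR":   bits_per_cycle(1, ddr=False),  # 1 bit/cycle
    "1-bit SPI DDR":   bits_per_cycle(1, ddr=True),   # 2 bits/cycle
    "8-bit Octal SDR": bits_per_cycle(8, ddr=False),  # 8 bits/cycle
    "8-bit Octal DDR": bits_per_cycle(8, ddr=True),   # 16 bits/cycle
}

# Octal DDR moves 16x the data per cycle of classic single SPI SDR,
# which is why it needs the extra SIO3-SIO7 pins and a DS strobe.
```

That 16x per-cycle factor, on top of higher clock frequencies, is what makes the timing behavior of Octal DDR impossible to approximate with Quad SPI models.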

Real-World Application and Results at NXP

The integration of the Cadence VIP into NXP’s test environment demonstrated the effectiveness of this solution. The VIP seamlessly supported various density grades of SPI NAND Flash, with commands automatically adapting to the specific density grade in use. This adaptability and the ability to accurately model the SPI Octal DDR interface provided NXP with a reliable verification tool, ensuring that their storage solutions met the stringent performance and reliability standards required in the automotive sector.

Summary

The challenges in verifying SPI NAND Flash Octal DDR devices highlight the complexities of developing advanced storage solutions for the automotive industry. However, through collaborative efforts and innovative solutions like the enhanced SPI NAND Flash Memory Model from Cadence, developers can overcome these challenges. This advancement not only supports the current needs of automotive applications but also lays the groundwork for future innovations in storage technology, ensuring that the next generation of vehicles can handle the ever-increasing demands of data processing and storage with efficiency, reliability, and security.

For more details, visit Cadence’s SPI NAND solutions page.

Also Read:

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

The Future of Logic Equivalence Checking

Theorem Proving for Multipliers. Innovation in Verification


CAST Advances Lossless Data Compression Speed with a New IP Core

by Mike Gianfagna on 08-22-2024 at 6:00 am


Data compression is a critical element of many systems. Thanks to trends such as AI and highly connected systems, there is more data to be stored and processed every day. Data growth is staggering. Statista recently estimated that 90% of the world’s data was generated in the last two years. Storing and processing all that data demands ways to reduce the space it takes.

Data compression takes two basic forms – lossless and lossy. Lossy data compression can result in significantly smaller file sizes, but with the potential loss of some degree of quality. JPEG and MP3 are examples. Lossless compression also produces smaller files, but with complete fidelity to the original. Some data cannot tolerate loss during compression — such as text, code, and binaries — while for other data the maximum, original quality is essential — think medical imaging or financial information. GIF, PNG and ZIP are lossless formats.

So lossless data compression is quite prevalent. That’s why a new IP core from CAST has such significance. Let’s look at how CAST advances lossless data compression speed with a new IP core.

Lossless Compression Basics

As discussed, lossless compression doesn’t degrade the data, and so the decompressed data is identical to the original, just with a somewhat smaller file size. Lossless compression typically works by identifying and eliminating statistical redundancy in the information. This can require additional computing time, so ways to speed up the process are important.

There are many algorithms that can be applied to this problem. Two popular ones are:

  • LZ4 – features an extremely fast decoder. The LZ4 library is provided as open-source software under a BSD license.
  • Snappy – a compression/decompression library from Google, used extensively by the company. The focus of this approach is very high speed with reasonable compression.
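The lossless round-trip guarantee is easy to see in code. This sketch uses Python’s standard `zlib` module, which implements Deflate rather than LZ4 or Snappy, but the property being demonstrated — bit-exact reconstruction of redundant data — is the same:

```python
import zlib

# Redundant data compresses well, and the round trip is bit-exact.
original = b"semiconductor " * 1000          # highly redundant payload
compressed = zlib.compress(original, level=9)

assert zlib.decompress(compressed) == original   # lossless: identical
assert len(compressed) < len(original)           # and much smaller
print(len(original), "->", len(compressed), "bytes")
```

Doing this in software costs CPU cycles, which is exactly the work a hardware decompression core offloads.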

The CAST Announcement

With that primer, the recent CAST announcement will have better context.

New LZ4 & Snappy IP Core from CAST Enables Fast Lossless Data Decompression.

This announcement focuses on a new IP core from CAST that accelerates the popular LZ4 and Snappy algorithms. The core can be used in both ASIC and FPGA implementations and delivers a hardware decompression engine. Average throughput is a peppy 7.8 bytes of decompressed data per clock cycle in its default configuration. And since it’s an IP core, decompression throughput can be scaled to 100Gbps or greater by instantiating the core multiple times.
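To put the 7.8 bytes-per-cycle figure in perspective, the line rate of a single core depends on the clock frequency, which the announcement does not specify. Assuming, purely for illustration, a 500 MHz clock:

```python
import math

# Back-of-the-envelope line-rate check. The 7.8 B/cycle figure is from
# the announcement; the 500 MHz clock is an illustrative assumption.
BYTES_PER_CYCLE = 7.8      # default-configuration average throughput
CLOCK_HZ = 500e6           # assumed clock frequency

gbps_per_core = BYTES_PER_CYCLE * 8 * CLOCK_HZ / 1e9   # ~31.2 Gbps
cores_for_100g = math.ceil(100 / gbps_per_core)        # 4 instances

print(f"{gbps_per_core:.1f} Gbps/core, {cores_for_100g} cores for 100 Gbps")
```

Under that assumption, a handful of instances comfortably clears the 100Gbps mark, consistent with the scaling claim.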

CAST believes the core, called LZ4SNP-D, is the first available RTL-designed IP core to implement LZ4 and Snappy lossless data decompression in ASICs or FPGAs from all popular providers. Systems using the core benefit from standalone operation, which offloads the host CPU from the significant task of decompressing data.

The core handles the parsing of the incoming compressed data files with no special preprocessing. Its extensive error tracking and reporting capabilities ensure smooth system operation, enabling automatic recovery from CRC-32, file size, coding, and non-correctable ECC errors.

My Conversation with the CEO

At DAC, I was able to meet with Nikos Zervas, the CEO of CAST. I found Nikos to be a great source of information on the company. You can read about that conversation here. So, when I saw this press release, I reached out to him to get some more details.

It turns out lossless data compression isn’t new for CAST. The company has been offering GZIP/ZLIB/Deflate lossless data compression and decompression engines since 2014. These engines have scalable throughput, and there are customers using them to compress and decompress at rates exceeding 400Gbps.

Applications include optimization for storage and/or communication bandwidth for data centers (e.g., SSDs, NICs) and automotive (for recording sensor data).  For other applications, the delay and power for moving large amounts of data between SoCs or within SoCs is optimized. I can remember several challenging chip designs during my ASIC days where this kind of problem can quickly become a real nightmare.

An example to consider is a flash device’s inline decompression of boot images. Flash memories are slow and power-hungry. Both the latency (i.e., boot time) and energy consumption for loading a boot image can be significantly reduced by storing a compressed file for that boot image and decompressing it on the fly during boot. Other use cases involve chipsets exchanging large amounts of data over Gbps-capable connections, or parallel processing platforms moving data from one processing element to the next.
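The boot-time argument can be sketched numerically. All of the figures below (image size, flash bandwidth, compression ratio) are illustrative assumptions, not measurements:

```python
# Illustrative boot-time estimate: reading a compressed image from slow
# flash with inline decompression vs. reading the raw image.
IMAGE_BYTES = 8 * 1024 * 1024   # assumed 8 MB boot image
FLASH_BPS   = 50e6              # assumed 50 MB/s flash read bandwidth
RATIO       = 2.0               # assumed 2:1 lossless compression

t_raw        = IMAGE_BYTES / FLASH_BPS              # ~0.168 s
t_compressed = (IMAGE_BYTES / RATIO) / FLASH_BPS    # ~0.084 s

# If the hardware decompressor keeps up with the flash line rate, the
# flash read dominates, so boot time (and flash energy) roughly halves.
print(f"raw: {t_raw:.3f}s, compressed: {t_compressed:.3f}s")
```

The same reasoning applies to energy: fewer bytes read from flash means less time spent in the power-hungry read path.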

It turns out CAST has dozens of customers using its GZIP/ZLIB/Deflate lossless cores, ranging from Tier 1 OEMs in the networking and telecom equipment area to augmented reality startups.

I asked Nikos, why introduce new lossless compression cores now? He explained that LZ4 and Snappy compression may not be as efficient as GZIP, but they are less computationally complex, and this attribute makes LZ4 and Snappy attractive in cases where compression is at times (or always) performed in software. The lower computational complexity also translates to a smaller and faster hardware implementation, which is also important, especially in the case of high processing rates (e.g., 100Gbps or 400Gbps) where the size of the compression or decompression engines is significant (in the order of millions to tens of millions of gates).

CAST had received multiple requests for these faster compression algorithms over the past couple of years. Nikos explained that the company listened and responded with the hardware LZ4/Snappy decompressor. He went on to say that a compressor core will follow. This technology appears to be quite popular. CAST had its first lead customer signed up before announcing the core.

To Learn More

The new LZ4SNP-D LZ4/Snappy Data Decompressor IP core is available now. It can be delivered in synthesizable HDL (System Verilog) or targeted FPGA netlist forms and includes everything required for successful implementation. Its deliverables include:

  • Sophisticated test environment
  • Simulation scripts, test vectors, and expected results
  • Synthesis script
  • Comprehensive user documentation

If your next design requires lossless compression, you should check out this new IP from CAST here.  And that’s how CAST advances lossless data compression speed with a new IP core.


Robust Semiconductor Market in 2024

by Bill Jewell on 08-21-2024 at 1:30 pm

Semiconductor Market Change 2024

The global semiconductor market reached $149.9 billion in the second quarter of 2024, according to WSTS. 2Q 2024 was up 6.5% from 1Q 2024 and up 18.3% from a year ago. WSTS revised 1Q 2024 up by $3 billion, making 1Q 2024 up 17.8% from a year ago instead of the previous 15.3%.

The major semiconductor companies posted generally strong 2Q 2024 revenue gains versus 1Q 2024. Of the top fifteen companies, only two – MediaTek and STMicroelectronics – saw revenue declines in 2Q 2024. The strongest growth was from the memory companies, with SK Hynix and Kioxia each up over 30%, Samsung Semiconductor up 23% and Micron Technology up 17%. The weighted average growth of the top fifteen companies in 2Q 2024 versus 1Q 2024 was 8%, with the memory companies up 22% and the non-memory companies up 3%.

Nvidia remained the largest semiconductor company, based on its 1Q 2024 guidance of $28 billion in 2Q 2024 revenue. Samsung was number two at $20.7 billion. Broadcom has not yet reported its 2Q 2024 results, but we estimate revenues at $13.0 billion, passing Intel at $12.8 billion. Intel slipped to fourth, after many years of being number one or number two.

Revenue guidance for 3Q 2024 versus 2Q 2024 is positive, but with a wide range of outlooks. AMD expects 3Q 2024 revenue to increase 15% based on strong growth in data center and client computing. Micron indicated the memory boom will continue, with supply below demand, and guided for 12% growth. Samsung Semiconductor and SK Hynix did not provide revenue guidance, but both companies expect continuing strong demand from server AI.

A few companies project low 3Q 2024 revenue growth of about 1%: Intel, MediaTek and STMicroelectronics. Intel blamed excess inventory for the weak outlook. The other five companies providing revenue guidance are in the 4% to 8% range. STMicroelectronics and NXP Semiconductors expect automotive to improve in 3Q 2024, but inventory issues remain in the industrial sector. Texas Instruments projects strength in personal electronics. The 3Q 2024 weighted average revenue growth of the nine non-memory companies providing guidance was 5%.

The substantial increase in the semiconductor market in the first half of 2024 (up 18% from the first half of 2023) will drive robust growth for the full year 2024. 2024 forecasts from the last few months range from 14.4% from the Cowan LRA Model to 20.7% from Statista Market Insights. Our Semiconductor Intelligence (SC-IQ) projection of a 17.0% increase in 2024 is in line with Gartner at 17.4% and WSTS at 16.0%.

The four estimates for 2025 show similar trends – slower but still strong growth ranging from our Semiconductor Intelligence’s 11.0% to Statista’s 15.6%. The growth deceleration from 2024 to 2025 ranges from minus 3.5 percentage points from WSTS (16% to 12.5%) to our minus 6 percentage points (17% to 11%). Our initial projections for 2026 are in the mid-single digits. The momentum from AI and a recovering memory market should taper off by then. The other major end markets (smartphones, PCs and automotive) will probably see flat to low growth in the next couple of years. Barring any significant new growth drivers to boost the market or an economic downturn to depress the market, the outlook for the semiconductor market should remain in the mid-single digits through the end of the decade.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductor CapEx Down in 2024, Up Strongly in 2025

Automotive Semiconductor Market Slowing

2024 Starts Slow, But Primed for Growth


Introducing XSim: Achieving high-quality Photonic IC tape-outs

by jrobcary on 08-21-2024 at 10:00 am

txcorp XSim Example Photonics

Similar to analog circuits, which use EM waves at communications frequencies, components in photonic integrated circuits (PICs), which use EM waves at optical frequencies, are sensitive to layout and manufacturing variations—arguably more so. Like their semiconductor counterparts, which transmit information using electrons, PICs transmit information using photons, and they also have to resolve signal integrity and parasitic losses. Designers of PIC components, therefore, have had to pioneer new design workflows. These workflows typically rely on FDTD EM engines that are computationally demanding, even for smaller components.

Currently, component designers have limited awareness of physical effects since they are limited to simulating extruded (as-drawn) GDS geometries, which under-represent important details of the as-manufactured PIC. As a result, they learn of the impact of physical effects like backscattering and parasitic interaction with other components much later, after prototype fabrication and during testing. This results in excessive design-of-experiments iterations and long loops of prototyping shuttle runs with their foundry partner, resulting in unpredictable timelines to market and a lack of flexibility to react to evolving requirements.

The solution to the problem requires industry cooperation and improvements in both computational hardware and software features and algorithms. For computational hardware, Nvidia GPUs and the CUDA™ programming language provide powerful new capabilities for solving large, difficult problems. On the simulation side, Tech-X has been doing very large physical simulations for national labs and large commercial customers on supercomputers for over two decades. But we have also recognized that most photonic IC design groups don’t have access to these resources, and modeling large, manufacturing-realistic photonic component designs requires distributed computing across hundreds of CPU cores or tens of CUDA GPUs, now accessible at larger companies or through Amazon Web Services. In addition, we have been working with customers to understand the problems they face in achieving more accurate simulations and have built those capabilities into XSim, designed specifically to tackle computationally challenging manufacturing-aware photonic component design.
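The FDTD method behind these engines is conceptually simple — leapfrog updates of electric and magnetic fields on a staggered grid — but its cost scales with grid cells times time steps, which is what makes 3D simulation at optical wavelengths so demanding. A minimal 1D toy version in normalized units (illustrative only; real photonic solvers are 3D, dispersive, and far larger):

```python
import math

# Minimal 1-D FDTD (Yee-style leapfrog) sketch in normalized units.
nz, nsteps = 200, 300          # grid cells, time steps (toy sizes)
ez = [0.0] * nz                # electric field on the grid
hy = [0.0] * nz                # magnetic field, staggered half a cell

for t in range(nsteps):
    # update H from the spatial difference of E (Courant factor 0.5)
    for k in range(nz - 1):
        hy[k] += 0.5 * (ez[k + 1] - ez[k])
    # update E from the spatial difference of H
    for k in range(1, nz):
        ez[k] += 0.5 * (hy[k] - hy[k - 1])
    # soft Gaussian source injected at the grid centre
    ez[nz // 2] += math.exp(-((t - 40) / 12.0) ** 2)
```

Even this toy does nz × nsteps field updates per field component; a realistic 3D component at sub-wavelength resolution multiplies that by many orders of magnitude, hence the need for GPU and cluster-scale computing.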

To highlight XSim capabilities:

  • STL or STEP input files, or extruded GDS with optional roughness modeling, to accurately describe 3D manufacturing features and sidewall roughness. The built-in Composer™ tool can also create 3D geometry.
  • User-defined or built-in material properties, constant or variable-grid meshing, highly accurate dielectric algorithms, mode solver/launcher, and accurate S-parameter calculations.
  • Efficient distributed computing across a large number of CPU cores or GPUs to resolve the critical dimensions required for the simulation volume.
  • Efficient cloud hardware configuration set-up and access to AWS’s wide range of x86 and GPU cluster instances for design groups that do not have access to on-premises workstations or clusters.

Come see how XSim can reduce prototyping cycles at our webinar on September 5th.  More information and webinar sign-up can be found at www.txcorp.com/xsimwebinar.

John Cary
Founder, CTO
Tech-X Corporation

Also Read:

CTO Interview: John R. Cary of Tech-X Corporation

Understanding Sheath Behavior Key to Plasma Etch


Podcast EP242: A View of the Dynamics and Future of the Memory Market with Jim Handy

by Daniel Nenni on 08-21-2024 at 8:00 am

Dan is joined by Jim Handy of Objective Analysis. Jim is a 35-year semiconductor industry executive and like Dan, a leading industry analyst, speaker, and blogger. Following marketing and design positions at Intel, National Semiconductor, and Infineon, Jim became highly respected as an analyst for his technical depth, accurate forecasts, industry presence, and numerous market reports, articles, white papers, and quotes.

In this far-reaching and informative discussion, Jim explores the memory market with Dan. He explains the forces that define how new memory technologies displace existing approaches. Volume availability and cost drive a lot of those decisions. He points out that better technical approaches will see low adoption if the cost is much higher than existing technology for high volume applications such as cell phones.

Regarding new memory technologies, MRAM and ReRAM are discussed. Jim explains why he believes there will be room for only one new memory technology. He also discusses the challenges of using new materials and explores the specific reasons why embedded flash doesn’t scale.

Jim also discusses specific challenges in some markets, such as automotive, military and space.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys IP Processor Summit 2024

by Daniel Nenni on 08-21-2024 at 6:00 am

Synopsys Processor Summit

Now that live events are back in full swing, there are more of them than ever, especially here in Silicon Valley. Synopsys, the #1 full IP provider, will host a processor summit here in Santa Clara next month. Given the popularity of anything RISC-V, I would expect this event to be very well attended, so be sure to register in advance.

The networking opportunities are always worth the time spent at live events, but looking at the agenda I would say the content will be the big draw. Not to mention the excellent food provided by the Marriott: Synopsys events are very well catered, absolutely.

The keynote sets the tone for the summit, which is a full-day event. The speaker is Alexander Kocher, a 30+ year semiconductor professional with deep experience in the automotive industry. Currently, Alexander is the CEO of newcomer Quintauris, which has an impressive list of investors: Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor ASA, NXP® Semiconductors, and Qualcomm Technologies, Inc.

Keynote:

Enabling RISC-V Solutions for IoT & Automotive Applications
RISC-V is generating high interest across multiple industries. In the software-defined era, system integrators are intrigued by the prospects of increased innovation, agility and customization, while cutting cost and reducing supply chain risks. Those opportunities led to the creation of Quintauris, founded to accelerate the adoption of RISC-V globally by enabling solutions for the Automotive and IoT industries. Quintauris will describe the benefits RISC-V offers to the automotive and IoT market segments, and how the organization will overcome perceived challenges around open standard semiconductor design by nurturing the broad RISC-V ecosystem and facilitating commercial adoption.

Synopsys IP Processor Summit 2024 Registration
September 5, 2024, 9:00 a.m. – 6:30 p.m. @ the Santa Clara Marriott

Conference Introduction:

As electronic systems continue to become more complex and integrate greater functionality, SoC developers are faced with the challenge of developing more powerful, yet more energy-efficient devices. The processors used in these applications must be efficient to deliver high levels of performance within limited power and silicon area budgets.

Why attend?

Join us for the Processor IP Summit to get in-depth information from industry leaders on the latest in ARC-V™ RISC-V processor IP, ARC® VPX DSP IP and ARC NPX NPU IP along with related hardware/software technologies that enable you to achieve PPA differentiation in your chip or system design. Synopsys experts, partners, and our processor IP user community will discuss electronic market trends and present on a range of topics including artificial intelligence, automotive safety, software development and more. Sessions will be followed by a networking reception where you can see live demos.

Who Should Attend?

Whether you are a developer of chips, systems or software, the Synopsys Processor IP Summit will give you practical information to help you create more differentiated products in the shortest amount of time.

Synopsys IP Processor Summit 2024 Registration
September 5, 2024, 9:00 a.m. – 6:30 p.m. @ the Santa Clara Marriott

About Synopsys

Catalyzing the era of pervasive intelligence, Synopsys, Inc. (Nasdaq: SNPS) delivers trusted and comprehensive silicon to systems design solutions, from electronic design automation to silicon IP and system verification and validation. We partner closely with semiconductor and systems customers across a wide range of industries to maximize their R&D capability and productivity, powering innovation today that ignites the ingenuity of tomorrow. Learn more at www.synopsys.com.

Also Read:

Mitigating AI Data Bottlenecks with PCIe 7.0

The Immensity of Software Development the Challenges of Debugging (Part 1 of 4)

Synopsys’ Strategic Advancement with PCIe 7.0: Early Access and Complete Solution for AI and Data Center Infrastructure


The State of The Foundry Market Insights from the Q2-24 Results

by admin on 08-20-2024 at 10:00 am

TSMC Leads but Challengers Follow 2024

If you work in the semiconductor or related industries, you know that industry cycles can profoundly impact your business. It is crucial for strategic development to invest at the appropriate time and to reef the sails when necessary.

As a semiconductor investor, you’re accustomed to the ebb and flow of industry cycles. It’s a reality that even the most stable long-term growth stocks must adapt to. But with the right understanding and preparation, you can anticipate these cycles and make informed investment decisions.

My work aims to extract insights from the analysis of public data, insights that can be used to predict where the Semiconductor business is heading. While Semiconductor companies are different, these are the waters all semiconductor companies are navigating. Predicting a company’s business is different from predicting the stock price. The stock market has its logic, but eventually, it will align with the company’s underlying business.

One of the pivotal areas of the Semiconductor industry is the foundry market, which is dominated by TSMC. Born out of frustration with the American ability to compete effectively with the Japanese, the Taiwanese giant is unrivalled from an advanced logic perspective. TSMC always deserves a particular analysis in my research, but researching the entire foundry industry for insights is often valuable.

The development of the Foundry market impacts many companies. A survey from last week confirmed this:

Q2-24 Status of the foundry market.

It was another growth quarter for the foundry companies. Collectively, the industry grew by 10.2% QoQ and 19.6% YoY to $33.5B, which is still some distance from the last peak of $35.4B.
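As a quick sanity check on these figures (an illustrative back-calculation from the quoted growth rates, not the article's underlying data), the QoQ and YoY numbers imply the prior revenue levels:

```python
# Back-calculate prior-period foundry revenue from the quoted growth rates.
q2_24 = 33.5  # collective foundry revenue in Q2-24, $B
qoq = 0.102   # 10.2% quarter-over-quarter growth
yoy = 0.196   # 19.6% year-over-year growth

q1_24 = q2_24 / (1 + qoq)   # implied Q1-24 revenue
q2_23 = q2_24 / (1 + yoy)   # implied Q2-23 revenue

print(f"Implied Q1-24: ${q1_24:.1f}B")  # ~$30.4B
print(f"Implied Q2-23: ${q2_23:.1f}B")  # ~$28.0B
```

Both implied levels sit well below the $35.4B peak, consistent with the article's point that the industry is still recovering rather than overheating.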

TSMC’s dominance is undeniable, and as the current market situation is driven by AI’s need for leading-edge GPUs, it is no surprise that TSMC gained additional market share in Q2-24, although only marginally. The rest, led by the Chinese foundries, are matching the pace.

After some wobbly quarters, TSMC is now consistently increasing market share. TSMC’s market share has passed 62%, up from its latest low of 52% ten quarters ago. This is likely to continue for the next few quarters as Intel and Samsung struggle to challenge TSMC’s leading-edge leadership.

Operating profit took a significant jump upwards and is completely dominated by TSMC. The AI-driven leading-edge boom is likely to benefit TSMC even more, and it is obvious why TSMC is content even though Nvidia captures most of the value.

The increasing operating profit could indicate that we are entering shortage territory again soon, as it is only a couple of billion short of the record.

However, the collective inventory position of the foundries shows that, in all likelihood, the industry is in balance and is not near maximum capacity. A deeper dive into the numbers can uncover more insights.

The deeper insights

The revenue mix from a technology perspective is moving at a rapid pace. More than half of the top-5 foundry revenue (SMIC excluded from this analysis) is at 7nm-3nm, up from just over a third 2.5 years ago.

While not a problem yet, we are not far from a situation where it will become challenging to get mature technologies from Western companies.

Wafer fab capacity has increased significantly over the last cycle. For the top 6, combined capacity has grown at an 11% CAGR, with SMIC leading the pack. With CapEx investments higher than revenue, the Chinese foundry was able to grow capacity at a 29% CAGR since the beginning of 2022.
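CAGR figures like these follow from the standard compound-growth formula. A minimal sketch (the capacity endpoints below are hypothetical, chosen only to illustrate the calculation, not actual SMIC data):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two capacity levels."""
    return (end / start) ** (1 / years) - 1

# Hypothetical example: capacity growing from 600k to 1,290k
# wafers/month over 3 years.
growth = cagr(600, 1290, 3)
print(f"{growth:.1%}")  # roughly 29%, the SMIC-like pace cited above
```

The same function applied to the top-6 aggregate over the period would yield the ~11% figure quoted in the article.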

Capacity utilisation is on the rise for all major foundries, first and foremost the Chinese operations, while TSMC’s increase is more modest, in the 80% range. In all likelihood, TSMC’s loading differs considerably across factories by technology, with pressure on 3nm.

Brick owners of the Semiconductor Industry

It is no secret that semiconductor manufacturing is expensive, which is why most semiconductor companies have chosen the fabless model, leaving the investments to the foundries.

While capacity levels give a good idea of the online capacity at a given time, they do not indicate future capacity. It is valuable to look at the companies’ balance sheets and the financial value of the manufacturing assets. For semiconductor companies, Property, Plant, and Equipment (PPE) consists almost entirely of manufacturing assets.

It is worth noting that manufacturing assets are allocated to the country of incorporation, so all of Intel’s PPE registers as US. Reality is, of course, more complex, so this cannot be used to evaluate the impact of the US CHIPS Act.

The PPE view shows that the financial value of the manufacturing assets is closing in on $550B. This includes not only operational manufacturing facilities but also land and construction in progress.

We divide the PPE into three categories to get a feel for the strength of the different manufacturing models:

IDM – Integrated Device Manufacturers manufacturing their own Chips

Foundry – Manufacturing for other IDMs and fabless companies

Mixed Manufacturing – Speciality fab owners that buy advanced logic from Foundries

The chart shows healthy growth, indicating more capacity will come online in the near term, but also a decline in growth in the last quarter. This signals that the investments from the last peak are coming to an end, and PPE growth will be slower.

The CapEx spend can be analysed to gain insight into longer-term future capacity.

It may seem a little counterintuitive, given all the noise about the CHIPS Act, but CapEx investments are actually declining. It would be easy to interpret this as the CHIPS Act not working. However, a more likely explanation is that the CHIPS Act changed the investment strategy of most of the large manufacturers to align with it, and investments will accelerate later.

The investment levels in each of the manufacturing types are well above replacement CapEx, the spend needed to maintain the same level of capacity.

Conclusion

It was a good quarter for the Foundry companies, particularly TSMC and the Chinese foundries. TSMC’s strong profitability shows that the company does not have to make concessions to keep and win business and is still miles ahead of Samsung.

While TSMC is still far from full capacity, the Chinese foundries are getting closer. As they have lost a significant proportion of their Western business, this is a sign that the Chinese electronics manufacturers are increasing their purchases.

The short-term capacity increase is likely to tail off, as indicated in the PPE development for both IDMs and foundries. This is a result of the last investment peak and the following CapEx pause.

The large gap between current CapEx and replacement CapEx will benefit the longer-term capacity. This has been somewhat delayed but will accelerate once TSMC starts filling its new factories. There will be sufficient capacity, but maybe not the right kind.

The dominance of TSMC will continue, but the Chinese Foundries are punching above their weight and will soon own a significant part of the mature technologies. TSMC will only make limited investments into these technologies and Western fabless companies will have to find a way back to the Chinese foundries without alienating the US government.


Also Read:

A Post-AI-ROI-Panic Overview of the Data Center Processing Market

TSMC’s Business Update and Launch of a New Strategy

Has ASML Reached the Great Wall of China

Will Semiconductor earnings live up to the Investor hype?


Weebit Nano is at the Epicenter of the ReRAM Revolution

by Mike Gianfagna on 08-20-2024 at 6:00 am


It’s well known that flash is the embedded non-volatile memory (NVM) incumbent technology. As with many technologies, flash is bumping into limits such as power consumption, speed, endurance and cost. It is also not scalable below 28nm. This presents problems for applications such as AI inference engines that require embedded NVM, typically on a single SoC below 28nm. Resistive random-access memory, referred to as ReRAM or RRAM, is emerging as the preferred alternative to address these shortcomings. Weebit Nano is paving the way for this change. Dan Nenni provides a good overview of the company’s ReRAM technology in this DAC report.  The movement is gaining speed. In this post, I’ll review how things are coming together and how Weebit Nano is at the epicenter of the ReRAM revolution.

The Times Are Changing

At the recent TSMC Technology Symposium, there was a lot of discussion about ReRAM. Indeed, the foundry market leader (which refers to it as RRAM) talked up the technology in its presentation on embedded NVM. The company shared that RRAM, a non-volatile memory formed between back-end metal layers, is an excellent flash replacement with good scalability.

According to the TSMC website, TSMC continues to explore novel RRAM material stacks and their density-driven integration, along with variability-aware circuit design and programming constructs, to realize high-density embedded RRAM-based solution options for AIoT applications.

Tech Insights recently reported that Nordic Semi’s new Bluetooth 5.4 SoC has 12 Mb of embedded ReRAM on board. The piece went on to say that with multiple resistive states, which can correspond to multiple memory states, ReRAM is a leading contender for machine-learning designs. Nordic’s chip is fabricated in TSMC’s 22nm ultra-low-leakage (22ULL) process with embedded resistive random-access memory (eReRAM).

According to the Yole Group, the total embedded emerging non-volatile memory market is expected to be worth ~$2.7 billion by 2028. Yole cited Infineon’s first microcontroller product for automotive applications employing embedded RRAM as an example of RRAM momentum. The Infineon AURIX TC4 MCU includes 20 MB of non-volatile resistive memory and is manufactured by TSMC in a 28nm process.

Another market research firm, Objective Analysis in its EMERGING MEMORIES BRANCH OUT report, states that over time, the NOR embedded in most SoCs will be almost entirely replaced by either MRAM, ReRAM, FRAM, or PCM.

This momentum isn’t limited to TSMC. On July 31 of this year, Weebit Nano and DB HiTek announced the tape-out of a ReRAM module in DB HiTek’s 130nm BCD process. It was reported that the demo chips will be used for testing and qualification ahead of customer production and will demonstrate the performance and robustness of Weebit Nano’s ReRAM technology. And there’s more. Read on…

More From a Recent Memory Conference

Amir Regev

Embedded non-volatile memory technology isn’t the only thing experiencing change. Conference names are evolving as well. The Design Automation Conference is now DAC: The Chips to Systems Conference.  The Flash Memory Summit is now FMS: the Future of Memory and Storage. At that conference, which was held in Santa Clara from August 6-8, 2024, Weebit Nano’s VP of Quality and Reliability Amir Regev gave a presentation on emerging memories. During that presentation, Amir presented more evidence of the ReRAM revolution.

Amir presented test results for Weebit’s ReRAM technology implemented on GlobalFoundries 22FDX wafers. This is significant, as it is the first time any public data has been shared about NVM performance at nodes such as 22nm. You can read the press release announcing this milestone here. In that release, it was reported that Mr. Regev would also highlight the performance of Weebit ReRAM on GlobalFoundries 22FDX® wafers, including endurance and retention data – the first such ReRAM results.

Here are some details that Amir presented:

  • Earlier this year Weebit received GF 22FDX wafers incorporating our ReRAM module prototype
    • 8Mb, 128-bit wide, targeting 10K cycles and 10yr retention at 105°C (automotive to follow)
    • Characterization and qualification activities are ongoing
  • Pre-qualification results show:
    • Weebit’s ReRAM stack is stable at 105°C, with cycling endurance up to 10K cycles
    • Very good data retention, pre- and post-cycling, is maintained for a long time at high temperatures (150°C), as shown in the figure below.
Hi Temp Cycling Results

Amir also provided a broad summary of Weebit’s qualification work:

  • Qualified modules at 85°C and 125°C
    • Temperatures specified for industrial and automotive grade 1 ICs
    • Qualified for endurance and 10yr retention per JEDEC industry standards
  • AEC-Q100 qualification (150°C and 100K cycles) in progress
    • Good results achieved, collecting statistical data for full qualification
  • Technology demonstrated on multiple process nodes
    • From 130nm to 22nm, Al / Cu, 200mm / 300mm
    • Successfully simulated on FinFET nodes

Amir also described the work underway to qualify Weebit’s ReRAM under extended automotive conditions. He mentioned that the excellent temperature-cycling results so far provide a strong foundation for automotive qualification.

Weebit Nano’s ReRAM is finding application in a wide range of processes, foundries and applications. Beyond those mentioned so far, the company also recently announced work with Efabless on SkyWater’s 130nm CMOS (S130) process. This work enables fast and easy prototyping of intelligent devices using Weebit’s technology. Weebit Nano is creating a wide footprint in the market.

To Learn More

ReRAM technology is poised to change the game for many applications. If your next project includes embedded non-volatile memory, you should see how Weebit Nano can help. You can learn more about the company’s technology here. If you want to chat with the Weebit team, you can start here. Weebit Nano is at the epicenter of the ReRAM revolution. Join in.


What are Cloud Flight Plans? Cost-effective use of cloud resources for leading-edge semiconductor design

by Christopher Clee on 08-19-2024 at 10:00 am


Embracing cloud computing is highly attractive for users of electronic design automation (EDA) tools and flows because of the productivity gains and time to market advantages that it can offer. For Siemens EDA customers engaged in designing large, cutting-edge chips at advanced nanometer scales, running Calibre® design stage and signoff verification in the cloud proves advantageous, as evidenced by the benchmark results discussed in this article. Calibre flows not only facilitate swift design iterations with modest compute resources but also consistently improve with each release. Cloud deployment offers a dual benefit: design teams avoid waiting for local resources and gain the flexibility to scale up during peak demand and leverage Calibre’s scalability for increased design throughput.

But to be cost-effective, cloud resources and infrastructure must be tailored to meet the individual and diverse demands of the many tools that constitute the semiconductor design flow. So what is the optimal configuration for running Calibre applications in the cloud? Which of the dozens of classes of virtual machines are best for running Calibre applications? How do cloud users know they are spending their money wisely and getting the best results? We set out to answer all these questions with a collaboration between Amazon Web Services (AWS), Amazon Annapurna Labs (Annapurna) and Siemens EDA to evaluate results from a series of Calibre flow benchmarks run inside Annapurna’s production virtual private cloud (VPC), which is hosted on AWS. After this evaluation, we developed a set of best known methods (BKMs) based on our experiences and the results.

Environment setup

The cloud experience works best when it is configured to be seamless from an end-user’s perspective. The setup that is probably most familiar to semiconductor designers from their on-premises systems is one where the user has an exclusive small head node assigned, used to submit all their jobs to other machines via some kind of queue manager. The head node is also useful for housekeeping purposes, like editing files, moving data, capturing metrics, etc.

The Calibre nmDRC benchmarks detailed in this paper took advantage of the Calibre MTFlex distributed processing mode running on a primary machine with a series of attached remote machines. In these cases, we used the same machine type for both the remote hosts and the primary. Other tests simply used multithreading on a single machine. A virtual private cloud setup is illustrated in figure 1.

Figure 1: VPN access from a VNC client to a dedicated head node, and then to primary and remote machines inside the cloud environment

Calibre nmDRC benchmark results

Figure 2 shows results for Calibre nmDRC runtime and peak remote memory usage for an Annapurna 7nm design when using an increasing number of remote cores. Runtime is shown in hours, and peak remote memory usage in GB. All runs used the Calibre MTflex distributed processing mode and a homogeneous cluster of machine types for the primary and remote nodes (AWS r6i.32xlarge instances). The horizontal axis shows the number of remote cores, which were doubled with each subsequent run. Each run used a consistent 64-core primary machine.

Figure 2. Calibre nmDRC runtime and peak remote memory with an increasing number of remote cores for the 7nm Annapurna design

The dark blue line is the baseline run using the same Calibre nmDRC version that Annapurna originally used in production on these designs with stock rules from their foundry partner. The light green line shows results using a more recent Calibre nmDRC version with optimized rules and instead of reading the data in from an OASIS file, the design data was restored from a previously saved reusable hierarchical database (RHDB) which in this case saved about 20 minutes per run. The light blue dotted line shows the percentage time saving between these two sets of runs. The purple line is the Calibre nmDRC Recon run, which automatically selects a subset of the foundry rules to run to find and resolve early design iteration systematic errors. Siemens EDA always recommends that customers run the Calibre nmDRC Recon tool on dirty designs before committing to a full Calibre run. This helps find the gross errors in the design very quickly, so they can be eliminated with short cycle times.

Determining how many remote cores to use in the cloud is dependent on the size and complexity of the design, the process technology, and the complexity of the rule deck. The optimal spot is found around the “knee” in the curve on these charts (for the design in figure 2, around 500 remote cores). The peak memory plots show that there was plenty of headroom for physical memory – each remote had 1TB RAM. The cost of these runs is typically in the range of a couple hundred dollars. Calibre customers typically use 500 remote cores as a minimum for full-chip Calibre nmDRC runs at advanced nodes. The data supports the Calibre value proposition of maintaining fast turnaround based on a modest amount of compute resource. However, the data also shows that scalability continues to even greater numbers of cores, giving Calibre users headroom to further compress cycle time if needed.

Figure 3 shows similar results for a Calibre nmDRC run on a 5nm Annapurna design. Here again, the optimal spot is around 500 remote cores, with fewer than 5 hours of runtime.

Figure 3. Calibre nmDRC runtime and peak remote memory with an increasing number of remote cores for a 5nm Annapurna design

Here again, the data demonstrates that the Calibre nmDRC tool is very resource-efficient, so it is not necessary to use thousands of remote CPUs to get reasonable design cycle times. Design teams can readily perform multiple turns per day using a modest number of cores, with a correspondingly modest associated cost. If it is helpful or necessary to squeeze in one or two more design turns per day, they can increase the number to 1,000 remote cores. The advantage of operating in the cloud is that more machines are always available, and it is highly likely that they will spin up very quickly.

Calibre interactive run management results

Both Annapurna designs were opened in the Calibre DESIGNrev interface, and the Calibre Interactive invocation GUI was used to initiate Calibre nmDRC and Calibre nmLVS runs. In addition, the Calibre Interactive integration with the Altair Accelerator (NC) queue manager was assessed, and Calibre RVE invocation and cross-probing to the design in the Calibre DESIGNrev interface was evaluated. All Calibre Interactive operations were fast and responsive.

Best known methods for EDA cloud computing

Following the benchmarks, we evaluated the results to encapsulate learnings and observations into suggested BKMs for running Calibre applications in the cloud. We generated general BKMs for optimizing spend in the cloud, improving cloud performance, and optimizing the overall experience when using cloud-based resources, as well as specific BKMs for running Calibre flows in the cloud. These BKMs are encapsulated as Cloud Flight Plans, which are instructions for flow-specific optimizations that will allow Siemens EDA customers to leverage the availability, scalability and flexibility of cloud compute in a way that is very cost effective. Many of these resources are available through cloud landing pages on our Siemens EDA website. Our Cloud Reference Environment and other flow-specific assets and resources are available to our customers through our SupportCenter resource site.

Download our newest technical paper that describes BKMs for Amazon Web Services (AWS), using Annapurna Labs’ experience as a benchmark. Running Calibre solutions in the cloud: Best known methods.

Conclusion

In the realm of electronic design automation (EDA), cloud computing offers a compelling solution to the problem of constantly burgeoning design size and complexity. Design teams, armed with cloud-tuned machines, can work at an accelerated pace, launching jobs when needed, utilizing resources efficiently, and even running multiple tasks in parallel. Whether it’s accelerating design cycles or optimizing costs, the cloud provides flexibility. To navigate this landscape effectively, understanding and optimizing cloud resources and configurations is key. Siemens EDA, in collaboration with major cloud providers like AWS, has crafted Cloud Flight Plans to guide mutual design customers on their cloud journey. With Cloud Flight Plans as their compass, semiconductor designers can chart a steady course toward efficient, cloud-powered success.

Also Read:

Solido Siemens and the University of Saskatchewan

Three New Circuit Simulators from Siemens EDA

Siemens Provides a Complete 3D IC Solution with Innovator3D IC