Is Intel cornering the market in ASML High NA tools? Not repeating EUV mistake
by Robert Maire on 12-24-2023 at 9:00 am

  • Reports suggest Intel will get 6 of 10 ASML High NA tools in 2024
  • Would give Intel a huge head start over TSMC & Samsung
  • A big gamble but a potentially huge payoff
  • Does this mean $4B in High NA tool sales for ASML in 2024?

News suggests Intel will get 6 of first 10 High NA tools made by ASML in 2024

An industry news source, Trendforce, reports that Intel will get up to 6 of the 10 High NA ASML tools likely to be shipped in 2024.

The article also quotes Samsung Vice Chairman Kyung Kye-hyun as saying, "Samsung has secured a priority over the High-NA equipment technology."

This seems to imply that Intel is getting the most High NA tools, followed by Samsung due to its recently announced $755M investment with ASML in Korea.

This would put TSMC in an unusual third place, which is the opposite of its current dominance of EUV, with 70 percent of the EUV tools in the world.

Trendforce article on High NA tools

$3.5B to $4B in High NA sales for ASML in 2024??

If we assume a tool cost of $350M to $400M and ten tools, that would be between $3.5B and $4B in High NA sales in 2024, even though not all the revenue would likely be recognized in the calendar year.
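
For readers who want to check the arithmetic, here is a minimal sketch of the back-of-the-envelope math (the $350M to $400M per-tool price and the tool counts are the assumptions discussed in this article, not ASML guidance):

```python
# Back-of-the-envelope check on the High NA numbers in this article.
# Assumption (not ASML guidance): each High NA tool sells for $350M to $400M.
asp_low, asp_high = 0.350, 0.400   # price per tool, in $B

def revenue(tools):
    """Return the (low, high) revenue range in $B for a given tool count."""
    return tools * asp_low, tools * asp_high

print("10 tools shipped in 2024: ${:.1f}B to ${:.1f}B for ASML".format(*revenue(10)))  # $3.5B to $4.0B
print("Intel's reported 6 tools: ${:.1f}B to ${:.1f}B of capex".format(*revenue(6)))   # $2.1B to $2.4B
```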

We think the potential upside from High NA for ASML is not built into the stock price yet, as ten tools sounds like a stretch by most accounts, but if that could be done it would suggest strong upside.

We had suggested in past articles that any weakness due to Chinese sanctions would be more than made up by other sales, and this is just one example.

15 tools in 2025?…..20 tools in 2026? Zeiss is the gatekeeper….

If ASML does indeed get 10 tools shipped in 2024, that pace would imply a ramp to 15 or more tools in 2025 and likely over 20 in 2026.

As with regular EUV tools, lens availability from Zeiss will again be the gating factor, with High NA lenses being far more complex than current, already difficult, EUV lenses.

Given that the EUV source will remain largely unchanged and the stage changes are incremental, it is really all about the lenses.

Who will get them? Where and When?

If we assume Intel does get the first 6 High NA tools as suggested, we would imagine that Intel Portland will get at least the first two, with the other tools following in 2024 and going to Arizona and/or Ohio.

Of the other four tools, perhaps two or three go to Samsung, with TSMC getting one or two.

It's our guess that TSMC will likely push multi-patterning of current EUV harder rather than jump to High NA right away.

There are numerous industry suggestions that High NA EUV tools are going to be difficult to cost justify versus multi-patterning on existing EUV tools. This could either be a smart move by TSMC or a mistake…time will tell. From Intel’s perspective, it has no choice but to push hard, as its slowness to adopt EUV in the past was one of the reasons TSMC raced past it on Moore’s Law.

It’s clear that Intel does not want a repeat of what happened with the original EUV tools and thus committed early to ASML to get the first copies of the High NA tools, as announced over a year ago.

If High NA does indeed pan out, this would be the “leap frog” that Intel needs to catch up.

If our guess of 15 tools in 2025 is close, we would expect the split between Intel, Samsung and TSMC to be a bit more even, with IMEC in Europe, which works closely with ASML, likely getting one as well for R&D.

In 2026, the 20 tools are more of a toss-up between the big three foundry/logic makers, likely with the addition of a High NA tool for New York for R&D, as recently announced.

We don’t expect memory makers to engage with High NA for the first several years, as they are only now starting to engage with regular EUV and remain 4 to 6 years behind foundry/logic in their need for EUV lithography technology.

We would not be surprised if, after 3 years of High NA shipments, Intel has the majority of High NA tools, much as TSMC has the lion’s share of current EUV technology.

A big but necessary bet for Intel

If Intel buys 6 High NA tools in 2024, that implies $2.1B to $2.4B in High NA tool capex alone. That’s a pretty big bet….but if it helps pay off in catching up to TSMC it will be money well spent. There is no choice, as not doing so would relegate Intel to a trailing, or at best, if lucky, equal position with TSMC.

High NA will get into production faster than standard EUV did

Although there are significant differences between EUV and High NA EUV tools, the basic concepts are the same. It’s not the quantum leap that EUV was over ArF immersion.

At $350M to $400M a copy, chip makers are going to need to put those new tools to work as quickly as possible.

They are also going to have to focus (pardon the pun) on getting an economic return on single-pattern High NA versus multi-pattern standard EUV, which is probably not a slam dunk in the beginning, just as EUV was hard to prove over multi-pattern ArF immersion (there is still controversy as to the cost advantage).

Does High NA finally leave China behind??

As we have seen at 7nm, China has been able to get EUV-like fidelity using multi-patterned ArF immersion.

This workaround seems extendable down to 5nm.

We have our doubts about extending ArF immersion down to 3nm in any sort of usable/economically feasible way. This would tend to suggest that those implementing High NA EUV will be beyond the reach of China’s fabs….at least for the foreseeable future.

The Stocks

We view this news, if true or even close to true, as a significant positive for both ASML & Intel. We would not, however, view it as a negative, just yet, for TSMC, as they likely remain in a strong position in EUV overall.

We think that ASML and its stock could ride the High NA EUV wave throughout 2024 assuming no hiccups in production.

We would further suggest that the recently announced retirements of ASML’s CEO and CTO would not likely have happened unless and until High NA EUV was safely on its way and over any major technology humps or other issues, as we doubt either would have retired on the weak note of a poor High NA rollout.

High NA is going out on a high note!

Although it may take a while for Intel to see the fruits of its High NA bet we think investors would be willing to overlook the negative impact of the spend required for High NA if the reward is regaining technology leadership…..

Have a great Holiday!!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

About Semiwatch

Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch

Also Read:

AMAT- Facing Criminal Charges for China Exports – Overshadows OK Quarter

The Coming China Chipocalypse – Trade Sanctions Backfire – Chips versus Equipment

KLAC- OK quarter in ugly environment- Big China $ – Little Process $ – Legacy good


Podcast EP199: How Rambus is Helping to Counter the Security Threats of Quantum Computing with Scott Best
by Daniel Nenni on 12-22-2023 at 10:00 am

Dan is joined by Scott Best, technical director at Rambus. His research areas are memory architectures, 3D packaging, and security processors. Scott joined Rambus in 1998 and has served in many and varied technical roles. He has become one of the most prolific inventors in the company’s history. Over the course of his career at Rambus, he is a named inventor on over 200 patents worldwide.

Dan explores the Rambus Quantum Safe Engine (QSE) with Scott. The architecture is discussed, along with key markets and applications and how Rambus will help with the massive worldwide infrastructure update required to achieve quantum safe cryptography.

In this informative discussion, Scott explains the significant security implications presented by quantum computing – the timeline for when the threat will be real, what is being done across many industries to ensure security is maintained and how Rambus technology fits into the plans.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Giving Back – The Story of One Silicon Valley Veteran’s Journey
by Mike Gianfagna on 12-22-2023 at 6:00 am

The concept of giving back is something many of us have contemplated. Giving back to the community, or supporting a particular cause. How to respond to those inquiries from our alma mater is another example. These conversations typically focus on giving money to provide needed support. As engineers, we are surrounded by a massive problem that needs attention, and money by itself won’t fix it. According to the US Bureau of Labor Statistics, economic projections point to a need for approximately 1 million more STEM professionals than the U.S. will produce at the current rate over the next decade if the country is to retain its historical preeminence in science and technology. That should get your attention; it’s a big problem that money alone can’t fix. This post is about someone who is helping to address this crisis in a personal way. Read on to learn the story of one Silicon Valley veteran’s journey to giving back.

The Changing Landscape

We have all witnessed the shift in innovation that has occurred over the past decade or so. Software is now the central focus for innovation. It defines new products, new user experiences, expanded deployment of AI and essentially sets the stage for what’s next. It’s been said that all companies are software companies. I tend to agree. Underpinning this shift is the fact that custom chips are required to bring new software ideas to life.

The demand for custom silicon is at an all-time high. Having grown up in the ASIC business, I smile at this shift. Software and silicon now take center stage around the world. Governments are getting involved as well. The CHIPS Act is fueling some big demands for growth. According to the Semiconductor Industry Association, “robust federal incentives for domestic chip manufacturing would create an average of nearly 200,000 American jobs annually as fabs are built and add nearly $25 billion annually to the U.S. economy.”

Filling all these jobs is a critical part of success, and that means encouraging more enrollment in STEM curricula to increase the flow of new engineering graduates. A well-known Silicon Valley veteran has decided to work with his alma mater to help address this challenge. This is his story.

Rick Carlson Teams Up with Illinois Tech

Rick Carlson

Rick Carlson is vice president of sales at Verific Design Automation. He has been there for 20 years, after joining Verific from AccelChip. Prior to AccelChip, he held positions as vice president of sales for Averant, Synplicity (now Synopsys), Escalade (now Siemens), and EDA Systems. He also co-founded the EDA Consortium (now the ESD Alliance) in 1987 and is currently a Lanza techVentures Investment Partner. To say that Rick is well connected in the technology community is an understatement.

What can a person with all that experience and all those connections do, beyond making companies like Verific successful? Rick discovered an opportunity to give back using those skills. It all began with the traditional email from the Illinois Institute of Technology (Illinois Tech) in Chicago, asking for a donation. Rick called the alumni office and asked how he could give back to the school that helped launch his career in high tech so many years ago.

The person speaking to Rick was, I believe, a truly creative thinker. After learning about Rick’s background, the director of the Alumni Giving Office suggested that instead of writing a check, Rick should volunteer his 40+ years of experience in EDA and business development to help the school. Not knowing what that would lead to, Rick said yes.

So, he started doing what he does best: networking and making connections. This led to an introduction to the Dean of the College of Computing, Dr. Lance Fortnow, who authored “The Golden Ticket: P, NP and the Pursuit of the Impossible.” The book re-ignited Rick’s interest in mathematics (he was a math major), and that led to additional introductions to other colleges, including the College of Electrical and Computer Engineering (ECE).

Rick’s angel investment in a Colorado-based company called Mountain Flow, which makes a plant-based ski wax, led to an introduction to the chemical engineering department, now doing a paid research project on a next-generation ski wax. His interest in learning about technology transfers led him to the Executive Director of Illinois Tech’s Kaplan Institute of Technology and Entrepreneurship. Kaplan teaches students about what it takes to start a company.

That led to deep engagement in networking, volunteering, and mentoring at Illinois Tech. Rick is now on three boards at Illinois Tech, with a fourth board seat imminent. He is also producing a tech documentary series about a Chicago company called Influit Energy, which has developed a new form of energy that uses nanoelectric particles. The three founders are all affiliated with Illinois Tech as researchers and professors.

The school’s leadership is delighted with Rick’s involvement and impact. Rick takes pride in highlighting the significance of alumni giving back to their schools. In the case of Illinois Tech, he is following in the footsteps of, and is inspired by, Illinois Tech alumni including Chris Gladwin, founder of CleverSafe; Ed Kaplan, inventor of the bar code; Rohit Prasad, creator of Amazon’s Alexa; Marty Cooper, who led the team that built the first mobile phone; and Victor Tsao, who founded Linksys and enabled high-speed home internet.

Work/Life Balance

Verific is the leading provider of SystemVerilog, VHDL, and UPF front-ends. Its software is used worldwide in synthesis, simulation, formal verification, emulation, debugging, virtual prototyping and design-for-test applications, which combined have shipped over 60,000 copies. The company has a well-deserved reputation for excellent customer support and flexibility, and this culture is what helps its 20-year veteran focus on giving back.

Rob Dekker, Verific’s founder, and Michiel Ligthart, its president, have been supportive and are embracing Rick’s efforts. Naturally, Illinois Tech has Verific tools for use in the ECE department.

What’s Next – It’s Up to You

Rick hopes to inspire individuals to get involved with alumni groups at their colleges and universities, not only through monetary donations but also by becoming active participants in student activities. It’s a great way for students to get a feel for our industry and look more closely at semiconductor careers. It’s also a great way for us to give back to the place that gave us a start.

And that’s the story of one Silicon Valley veteran’s journey to giving back.

Also Read:

Bespoke Silicon Requires Bespoke EDA

Verific Sharpening the Saw

COO Interview: Michiel Ligthart of Verific


Agile Analog Partners with sureCore for Quantum Computing
by Daniel Nenni on 12-21-2023 at 10:00 am

Quantum computing is the next big thing for the computing world. The semiconductor industry has been talking about it for years. It’s shiny, mysterious, and capable of some incredible things. Instead of using classical bits to represent information (which can be either a 0 or a 1), quantum computers use quantum bits or qubits. What’s special about qubits is their ability to exist in multiple states simultaneously, thanks to a phenomenon called superposition. This allows quantum computers to perform complex calculations at speeds that would make traditional computers melt.
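
To make “multiple states simultaneously” slightly more concrete, here is a minimal, purely illustrative Python sketch (not tied to any product discussed here) of a single qubit: two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1, with measurement collapsing the superposition to a classical bit.

```python
import math
import random

# A single qubit is described by two complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, |b|^2 the probability of measuring 1.
a = complex(1 / math.sqrt(2), 0)   # equal superposition of 0 and 1
b = complex(1 / math.sqrt(2), 0)

p0, p1 = abs(a) ** 2, abs(b) ** 2
assert math.isclose(p0 + p1, 1.0)

# Measurement collapses the superposition: each shot yields a classical 0 or 1.
shots = [0 if random.random() < p0 else 1 for _ in range(1000)]
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}, observed fraction of 1s = {sum(shots) / len(shots):.2f}")
```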

Quantum computing isn’t here to replace your laptop or iPhone; it’s more like a sledgehammer for specific compute-intensive tasks: cybersecurity, meteorology, medical diagnostics, finance, cryptography, AI, etc. Imagine solving complex problems in seconds that would take today’s computers lifetimes.

To get there, however, we need a quantum-capable semiconductor ecosystem, and part of that ecosystem is Cryo-CMOS IP:

***

Agile Analog, the analog IP innovators, is collaborating with sureCore, the ultra-low power embedded memory specialist, to implement a cryogenic control ASIC on the GlobalFoundries 22FDX process, as part of the Innovate UK funded project: “Development Of Cryogenic CMOS To Enable The Next Generation Of Scalable Quantum Computers.”

The consortium members created cryogenic SPICE models for the GF 22FDX process technology, and sureCore has used these to recharacterize standard cell and IO cell libraries, as well as to develop low-power SRAM, ROM and register file compilers. These cryogenic IP libraries are being used to enable the development of a test chip that will allow measurement of performance at cryogenic temperatures. Agile Analog is working closely with sureCore to implement and verify its solution.

According to Barry Paterson, CEO of Agile Analog:

“We were delighted when sureCore approached us to undertake the physical design required to create a test chip for this cutting-edge project. Integrating control and measurement electronics capable of operation down to 4 Kelvin is critical to enabling quantum computer scaling. The UK is leading innovation in the quantum technology space and I am pleased that Agile Analog can participate in the development of this technology.”

Agile Analog completed the synthesis, floor planning, place and route, and design closure steps to ensure that the cryogenic test chip would be able to act as a qualification test vehicle, in order to prove that the approach adopted by this project could be a viable solution for cryogenic control ASICs.

Semiconductor process technologies are typically characterized for operation from -40°C to 125°C. However, in the world of quantum computing, where operational qubits demand temperatures even lower than 4K, co-locating the control electronics close to the qubits within the cryostat is crucial for quantum computer scaling. To achieve their true potential, quantum computers need a dramatic increase in the number of qubits, from the several hundred possible today to millions. These qubits have to be controlled, and currently this is done using external control electronics housed outside of the cryostat at room temperature. By generating semiconductor IP that can operate at cryogenic temperatures, quantum computing developers can quickly design their own control ASICs that can be co-located with the qubits in the cryostat.

Paul Wells, CEO of sureCore, commented on this collaboration:

“Agile Analog have a unique blend of skillsets that make them the ideal partner. Their expertise and sheer professionalism ensured that we were able to work extremely closely and identify critical issues early in the project, meaning that the physical design flow proceeded in a smooth and predictable manner. Having been on both sides of the fence I can say that they are one of the best design services teams I have come across in my 35 year career.”

Barry Paterson concluded:

“The Agile Analog team is pleased to be able to support sureCore, and the other consortium members, with the implementation of this platform. We have gained invaluable experience in working at challenging temperatures. The pathway to advanced quantum computers with millions of qubits relies on integrating the control and measurement within the cryostat. We can use the knowledge acquired during this project to make a range of our analog IP, including our data converters, available with support for these cryogenic temperatures. We are already having initial discussions with potential partners about delivering these solutions.”

***

Agile Analog – Analog IP the way you want it

Agile Analog is transforming the world of analog IP with Composa™, its innovative, highly configurable, multi-process analog IP technology. Headquartered in Cambridge, UK, with a growing number of customers across the globe, Agile Analog has developed a unique way to automatically generate analog IP that meets the customer’s exact specifications, for any foundry and on any process, from legacy nodes right up to the leading edge. The company provides a wide range of novel analog IP and subsystems for data conversion, power management, IC monitoring, security and always-on domains, with applications including data centers/HPC, IoT, AI, quantum computing, automotive, aerospace and defense. The digitally wrapped and verified solutions can be seamlessly integrated into any SoC, significantly reducing complexity, time and costs, helping to accelerate innovation in semiconductor design. www.agileanalog.com

sureCore – When low power is paramount

sureCore, the ultra-low power, embedded memory specialist, is the low-power innovator who empowers the IC design community to meet aggressive power budgets through a portfolio of ultra-low power memory design services and standard IP products. sureCore’s low-power engineering methodologies and design flows meet the most exacting memory requirements with a comprehensive product and design services portfolio that creates clear market differentiation for customers. The company’s low-power product line encompasses a range of close to near-threshold, silicon proven, process-independent SRAM IP. www.sure-core.com

Also Read:

Agile Analog Visit at #60DAC

Counter-Measures for Voltage Side-Channel Attacks

Slashing Power in Wearables. The Next Step


Seven Silicon Catalyst Companies to Exhibit at CES, the Most Powerful Tech Event in the World
by Mike Gianfagna on 12-21-2023 at 6:00 am

According to its website, CES® is the global stage for innovation, delivering the most powerful tech event in the world — the proving ground for breakthrough technologies and global innovators. Owned and produced by the Consumer Technology Association (CTA)®, it is the only trade show that showcases the entire tech landscape at one event. This is indeed a prestigious show. If you’ve never gone, you should. It takes over essentially all of Las Vegas for a few days in early January. There will be more information about the show later, but first I’d like to explore a rather impressive statistic. An extensive list of Silicon Catalyst incubator companies will be exhibiting this year at CES, presenting significant advances in AI, sensing and vision. Let’s examine the seven Silicon Catalyst companies to exhibit at CES, the most powerful tech event in the world.

Kura Technologies – https://www.kura.tech

Kura’s mission is to build the best personal AI assistant to empower people 100x in working, learning, and communicating. To do that, the company started by building its first product – the world’s best-performing direct-see-through augmented reality glasses and SaaS collaboration + AI platform, which solves today’s biggest adoption challenges in augmented reality, AI data and AI tool deployment. The goal is to expand the market 100x or more.

Today’s AR glasses are bottlenecked by a small field-of-view, low transparency (less than 25%, similar to dark sunglasses), low resolution, poor brightness (unsuited for outdoor or bright indoor environments), and bulky form factor, all of which prevent widespread adoption.

Kura’s AR glasses and telepresence+collaboration platform feature a 150° field-of-view (9x that of existing AR), 95% transparency (4x existing), 8K resolution (4x existing), high brightness, wide range of depth for variable focus, and a compact form factor, solving the bottlenecks which prevent the adoption of AR.

The company has achieved a successful tapeout of its custom backplane at TSMC on a production CMOS node; this backplane uses its proprietary drive algorithms and is the world’s fastest micro-display driver. Kura has paid orders from 400+ companies with demand of 500K+ units, and over $30M in PO requests in 2022 and 2023.

Kura is not new to CES. Last year, they won the CES Innovation Award and Best of CES.

Oculi – https://www.oculi.ai

Oculi® is an alumnus of the Silicon Catalyst incubator.  It is a fabless semiconductor company that produces the OCULI SPU™ (Sensing & Processing Unit), a novel architecture and product in the world of AI and vision technology. The OCULI SPU is the only single chip Software-Defined Vision Sensor™ that delivers Real-Time Vision Intelligence (VI) at the edge. Oculi makes computer/machine vision faster and more efficient by embedding intelligence in the sensor starting at the pixel, the true edge!

The OCULI SPU is the product of over 18 years of R&D led by Dr. Charbel Rizk, founder, CEO and CTO of Oculi. The work started at Johns Hopkins University and was specifically focused on developing efficient vision intelligence on a single chip that delivers fast response and low bandwidth, power, size, and weight.

With Oculi vision, its partners can deliver a natural and immersive user experience under any lighting conditions, indoors and outdoors, with up to 30X reduction in power consumption and latency with a lower bill of materials.

Owl Autonomous Imaging – https://www.owlai.us

Owl is also an alumnus of the Silicon Catalyst incubator.  The company is all about safety, especially pedestrian safety. It is developing HD thermal image sensors and computer vision software. Headquartered in the heart of image sensor innovation, Rochester NY, it is singularly focused on improving visibility in no light, bright light, and degraded visual environments.

As a fabless semiconductor company, Owl supplies a patented HD thermal digital image sensor, auto-qualified camera cores and a thermal camera reference design. Its computer vision algorithms are uniquely designed for thermal applications. For example:

  • Classification CNNs
  • Ranging CNNs
  • 3D fusion & object segmentation

The current markets served include automotive ADAS L2+, L3/4 autonomy, industrial, construction, agriculture, and smart infrastructure. Its thermal computer vision platform is in active trials today.

Quadric – https://quadric.io

Another alumnus of the Silicon Catalyst incubator, Quadric has created the industry’s first GPNPU (general purpose Neural Processing Unit). ML models are created using known labeled datasets (training phase) and are subsequently used to make predictions when presented with new, unknown data in a live deployment scenario (inference). Because an enormous amount of computing resources is required both for training and inference, an ML accelerator is now vital in handling computational workloads.

For most high-volume consumer products, chips are designed with tight cost, power, and size limitations. This is the market Quadric serves with innovative semiconductor intellectual property (IP) building blocks. Its ML-optimized Chimera processors allow companies to rapidly build leading-edge SoCs and more easily write application code for those chips.

After proving the innovative Chimera architecture in 2021 by producing a test chip, Quadric introduced its first licensable IP product – the industry’s first GPNPU (general purpose Neural Processing Unit) in November of 2022, and began product deliveries in Q2 2023. You can learn more about Quadric on SemiWiki here.

SigmaSense – https://sigmasense.com

SigmaSense® is pioneering a programmable continuous DSP-based sensing technology for use in touch displays, automotive surfaces, and battery impedance sensing applications. SigmaSense has created a radical technology transformation of how digital systems interact with the physical analog world, enabling reconfigurable mixed signal solutions that can be continuously optimized.

The new platform measures current direct-to-digital, eliminating the signal preconditioning of traditional voltage-mode ADCs. This enables simplified designs, better quality data capture, and systems that can continuously improve. The Society for Information Display recognized the company’s first touch controller (SDC100) as the “Display Component of the Year” for its innovation in leading the transition away from high-voltage, time-based sensing to current- and frequency-based systems.

SigmaSense has substantial investments in its patent filings. The enabling IP is protected with over 250 patents already issued and 180 more pending. The patent filings extend to a range of applications including traditional touch screens, power regulation, batteries, electric motors, data communications, medical, automotive, and IoT applications.

Sonical – https://www.sonical.ai

Sonical is another alumnus of the Silicon Catalyst incubator.  The company is enabling Headphone 3.0, the next industry standard for ear worn products. Sonical is building a platform for Headphone 3.0 that unlocks the secret potential of your ears using more effective wearable products.  The team at Sonical is developing its own operating system, CosmOS, along with a dedicated chip specifically designed for hearables running downloadable plugins. 

This will unlock the large number of app developers that have created advanced AI-based algorithms. The company’s mission is to empower hearables manufacturers, as well as individual users, to select which features and combinations of apps they want to include in their new hearables products, in the same way one currently chooses apps for a laptop, tablet, or smartphone. Sonical enables app developers to have direct access to consumers to deliver a differentiated experience, exactly what they need, when they need it.

Sonical and XMOS, a semiconductor company at the leading edge of the intelligent IoT, will be providing early demonstrations of a collaboration for Headphone 3.0 at CES.

SPARK Microsystems – https://www.sparkmicro.com

Also a Silicon Catalyst alumnus, SPARK Microsystems is a fabless semiconductor company focused on enabling a new generation of wireless extended reality, audio, gaming, human interface and IoT devices.

The company is building next generation short-range wireless communication devices. SPARK UWB provides high data rate and very low latency wireless communication links at an ultra-low power profile, making it ideal for personal area networks (PANs) used in mobile, consumer and IoT-connected products. Leveraging patented technologies, SPARK Microsystems strives to minimize and ultimately eliminate wires and batteries from a wide range of applications.

The company offers wireless transceivers, software development kits, hardware kits, and a SPARK headset reference platform.

About CES

CES will be held in Las Vegas, NV from January 9 – 12, 2024. You can learn more about the show and register to attend here. CES is a trade-only event for individuals affiliated with the consumer technology industry. I expect most of SemiWiki’s readers qualify for attendance. It is a show unlike any other; if you haven’t been there, consider registering. It will be an unforgettable trip and you’ll get to see the seven Silicon Catalyst companies to exhibit at CES, the most powerful tech event in the world.


ReRAM Integration in BCD Process Revolutionizes Power Management Semiconductor Design
by Kalar Rajendiran on 12-20-2023 at 10:00 am

Weebit Nano, a leading developer of advanced memory technologies, recently announced a significant collaboration with DB HiTek, one of the top ten foundries in the world. The collaboration is designed to enable integration of Weebit’s Resistive Random-Access Memory (ReRAM) into DB HiTek’s 130nm Bipolar-CMOS-DMOS (BCD) process. It marks a significant milestone in semiconductors for analog, mixed-signal and power designs, setting the stage for higher levels of system integration and more competitive solutions.

Monolithic Integration and Streamlined Design

A key highlight of this announcement is the goal of monolithic integration, allowing analog, digital, and power components to coexist on the same chip seamlessly. This consolidation not only simplifies the design process but also enhances overall system performance. The synergy between ReRAM and BCD technology enables the creation of highly integrated circuits, reducing the complexity associated with external components and multiple dies.

Market Adoption and Industry Implications

The adoption of Weebit ReRAM by DB HiTek could be an early indicator of widespread industry adoption in the future. This collaboration establishes a precedent for other foundries and semiconductor manufacturers to explore the integration of Weebit ReRAM in their processes, further accelerating the industry’s trajectory towards advanced semiconductor solutions.

Versatility and Application Diversity

The announcement underscores the versatility of ReRAM, positioning it as a solution across diverse applications. From consumer electronics to industrial systems and Internet of Things (IoT) devices, ReRAM’s adaptability makes it a go-to memory technology for emerging technologies and next-generation electronic devices. This versatility opens up new possibilities for innovation and customization across various industries.

ReRAM is a BEOL Technology

ReRAM, as a back-end-of-line (BEOL) technology, integrates in the later stages of semiconductor manufacturing. This simplifies the integration process, avoiding interference with front-end components and allowing for straightforward incorporation into existing BCD processes. The simple stack design of ReRAM facilitates its seamless integration in the BEOL. This simplicity allows for the addition of ReRAM to existing layers of metal and insulating materials without extensive modifications.

ReRAM Advantages Over Flash Memory

ReRAM’s ease of integration, combined with high endurance, reliability at high temperatures, and low-power characteristics, positions it as an advantageous choice over Flash memory in BCD processes. The simplified integration process and reduced need for process tuning enhance the overall efficiency and performance of semiconductor designs.

Benefits to the Semiconductor Industry

The integration of ReRAM in DB HiTek’s BCD process brings forth several key benefits for the semiconductor ecosystem.

Cost-Effective Manufacturing: Weebit’s ReRAM technology requires only 2 added masks in the manufacturing process, compared to more than 10 such masks for flash. This has a direct effect on manufacturing cost and cycle time. In addition, the monolithic integration reduces the need for external components, streamlining manufacturing and lowering production costs. This cost-effectiveness aligns with industry demands for efficient and economical solutions.

Enhanced System Integration: ReRAM’s integration in the BCD process enables the seamless coexistence of analog, digital, and power components on a single chip. This not only simplifies the design process but also contributes to more compact and efficient electronic systems.

Energy Efficiency: Leveraging ReRAM’s low-power characteristics in conjunction with the power management capabilities of the BCD process enhances overall energy efficiency. This is particularly crucial in mobile applications where power consumption is a critical consideration.

Versatility and Customization: The collaboration empowers semiconductor designers with the flexibility to customize integrated circuits based on specific application requirements. The adaptability of ReRAM ensures that the technology can be tailored to meet the diverse needs of different industries.

Applications and Future Potential

The integration of ReRAM in the BCD process holds immense promise for various applications. Consumer devices like smartphones, tablets, and wearables stand to benefit from the enhanced energy efficiency and space-saving attributes of ReRAM in BCD processes. ReRAM’s robustness at high temperatures and integration with BCD’s power management make it well-suited for industrial control systems, automation, and robotics. IoT devices, with their compact size and low power requirements, could significantly benefit from the integration of ReRAM in the BCD process. ReRAM integration into BCD process also offers a powerful and efficient memory solution for the growing landscape of IoT applications.

Summary

By licensing Weebit’s ReRAM for integration in DB HiTek’s BCD process, the two companies are pioneering a path to more efficient, cost-effective, and versatile semiconductor designs. The benefits of cost-effective manufacturing, enhanced system integration, energy efficiency, and application versatility position ReRAM in BCD as a transformative solution for the semiconductor industry. As the integration gains momentum, we can anticipate a wave of innovation across various applications, ultimately shaping the future of semiconductor design.

You can access the Weebit Nano – DB HiTek joint press release here.

For more details about Weebit’s technology and solution offerings, visit www.weebit-nano.com

Also Read:

A preview of Weebit Nano at DAC – with commentary from ChatGPT

Weebit ReRAM: NVM that’s better for the planet

How an Embedded Non-Volatile Memory Can Be a Differentiator


Reasoning and Planning as the Next Big Thing in AI
by Bernard Murphy on 12-20-2023 at 6:00 am

When I search for ‘what is the next big thing in AI?’ I find a variety of suggestions around refining and better productizing what we already know. Very understandable for any venture aiming to monetize innovation in the near term, but I am more interested in where AI can move outside the box, to solve problems well outside the purview of today’s deep learning and LLM technologies. One example is in tackling math problems, a known area of weakness for the biggest LLMs, even more so for GPT-4. OpenAI Q* and Google Gemini both have claims in this space.

I like this example because it illustrates an active area of research in reasoning with interesting ideas while also clarifying the scale of the mountain that must be climbed on the way to anything resembling artificial general intelligence (AGI).

Math word problems

Popular accounts like to illustrate LLM struggles with math through grade school word problems, for example (credit to Timothy Lee for this example):

John gave Susan five apples and then gave her six more. Susan then ate three apples and gave three to Charlie. She gave her remaining apples to Bob, who ate one. Bob then gave half his apples to Charlie. John gave seven apples to Charlie, who gave Susan two-thirds of his apples. Susan then gave four apples to Charlie. How many apples does Charlie have now?

Language recognition is obviously valuable in some aspects of understanding this problem, say in translating from a word-based problem statement to an equation-based equivalent. But I feel this step is incidental to LLM problems with math. The real problem is in evaluating the equations, which requires a level of reasoning beyond LLM statistical prompt/response matching.

Working a problem in steps and positional arithmetic

The nature of an LLM is to respond in one shot to a prompt; this works well for language-centric questions. Language variability is highly bounded by semantic constraints, so a reasonable match with the prompt is likely to be found with high confidence in the model (more than one match), triggering an appropriate response. Math problems can have much more variability in values and operations; therefore, any given string of operations is much less likely to be found in a general training pile, no matter how large the pile.

We humans learn early that you don’t try to solve such a problem in one shot. You solve one step at a time. This decomposition, called chain-of-thought reasoning, is something that must be added to a model. In the example above, first calculate how many apples Susan has after John hands over his apples. Then move to the next step. Obvious to anyone with skill in arithmetic.
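
To make the granularity of that decomposition explicit, here is the apple problem worked as plain step-by-step bookkeeping (an illustrative sketch of the steps a chain-of-thought trace would need to produce, not an LLM):

```python
# Step-by-step bookkeeping for the apple word problem above.
# Each line mirrors one chain-of-thought step a model would need to produce.
susan, charlie, bob = 0, 0, 0

susan += 5 + 6                        # John gives Susan 5 apples, then 6 more -> Susan 11
susan -= 3                            # Susan eats 3 -> Susan 8
susan -= 3; charlie += 3              # Susan gives 3 to Charlie -> Susan 5, Charlie 3
bob += susan; susan = 0               # Susan gives her remaining apples to Bob -> Bob 5
bob -= 1                              # Bob eats 1 -> Bob 4
half = bob // 2                       # Bob gives half his apples to Charlie
bob -= half; charlie += half          # Bob 2, Charlie 5
charlie += 7                          # John gives 7 to Charlie -> Charlie 12
given = (2 * charlie) // 3            # Charlie gives Susan two-thirds of his apples
charlie -= given; susan += given      # Charlie 4, Susan 8
susan -= 4; charlie += 4              # Susan gives 4 to Charlie -> Charlie 8

print(charlie)                        # 8
```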

Zooming further in, suppose you want to solve 5847+15326 (probably not apples). It is overwhelmingly likely that this calculation will not be found anywhere in the training dataset. Instead, the model must learn how to do arithmetic on positional notation numbers. First compute 7+6 = 13, put the 3 in the 1s position for the result and carry 1. And so on. Easy as an explicit algorithm but that’s cheating; here the model must learn how to do long addition. That requires training examples for adding two numbers, each between 0 and 9, plus multiple training examples which demonstrate the process of long addition in chain-of-thought reasoning. This training will in effect build a set of rules in the model, but captured in the usual tangle of model parameters rather than as discernible rules. Once training is finished against whatever you decided was a sufficient set of examples it is ready to run against addition tests not seen in the training set.
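
Written as an explicit algorithm (which, as noted above, is the “cheating” a model is not allowed to do; it must learn the equivalent behavior from training examples), the procedure looks like this:

```python
def long_add(x: str, y: str) -> str:
    """Add two non-negative decimal strings the way a chain-of-thought trace would:
    one column at a time, least-significant digit first, tracking the carry."""
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        total = int(a) + int(b) + carry      # e.g. 7 + 6 -> 13
        digits.append(str(total % 10))       # write the 3
        carry = total // 10                  # carry the 1
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(long_add("5847", "15326"))  # 21173
```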

This approach, which you might consider meta-pattern recognition, works quite well, up to a point. Remember that this is training to infer rules by example rather than by mathematical proof. We humans know that the long addition algorithm works no matter how big the numbers are. A model trained on examples should behave similarly for a while, but as the numbers get bigger it will at some point run beyond the scope of its training and will likely start to hallucinate. One paper shows such a model delivering 86% accuracy on calculations using 5-digit numbers – much better than the 5-6% of native GPT methods – but dropping to 41% for 12-digit numbers.

Progress is being made but clearly this is still a research topic. Also a truly robust system would need to move up another level, to learning absolute and abstract mathematical facts, for example true induction on long addition.

Beyond basic arithmetic

So much for basic arithmetic, especially as expressed in word problems. UC Berkeley has developed an extensive set of math problems, called MATH, together with AMPS, a pretraining dataset. MATH is drawn from high school math competitions covering prealgebra, algebra, number theory, counting and probability, geometry, intermediate algebra, and precalculus. AMPS, the far larger training dataset, is drawn from Khan Academy and Mathematica script examples and runs to 23GB, versus 570GB for GPT-3 training.

In a not too rigorous search, I have been unable to find research papers on learning models for any of these areas outside of arithmetic. Research in these domains would be especially interesting since solutions to such problems are likely to become more complex. There is also a question of how to decompose solution attempts into sufficiently granular chains-of-thought reasoning to guide effective training for LLMs rather than human learners. I expect that could be an eye-opener, not just for AI but also for neuroscience and education.

What seems likely to me (and others) is that each such domain will require, as basic arithmetic requires, its own fine-tuning training dataset. Then we can imagine similar sets of training for pre-college physics, chemistry, etc. At least enough to cover commonsense know-how. In compounding all these fine-tuning subsets, at some point “fine-tuning” a core LLM will no longer make sense. We will need to switch to new types of foundation model. So watch out for that.

Beyond math

While example-based intuition won’t fly in math, there are many domains outside the hard sciences where best guess answers are just fine. I think this is where we will ultimately see the payoff of this moonshot research. One interesting direction here further elaborates chain-of-thought from linear reasoning to exploration methods with branching searches along multiple different paths. Again a well-known technique in algorithm circles but quite new to machine learning methods I believe.

Yann LeCun has written more generally and certainly much more knowledgeably on this area as the big goal in machine learning, combining what I read as recognition (something we sort of have a handle on), reasoning (a very, very simple example covered in this blog), and planning (hinted at in the branching searches paragraph above). Here’s a temporary link: A Path Towards Autonomous Machine Intelligence. If the link expires, try searching for the title, or more generally “Yann LeCun planning”.

Very cool stuff. Opportunity for new foundation models and no doubt new hardware accelerators 😀

Also Read:

Building Reliability into Advanced Automotive Electronics

Synopsys.ai Ups the AI Ante with Copilot

Accelerating Development for Audio and Vision AI Pipelines


CHIPS Act and U.S. Fabs
by Bill Jewell on 12-19-2023 at 10:00 am

In August 2022, U.S. President Biden signed into law the CHIPS and Science Act of 2022 to provide incentives for semiconductor manufacturing in the United States. In a case of creating the acronym first and then finding a name to fit, CHIPS stands for Creating Helpful Incentives to Produce Semiconductors. The act provides a total of $52.7 billion for the U.S. semiconductor industry, including $39 billion in manufacturing incentives.

The basics of the CHIPS Act began in November 2019 with a bipartisan proposal from Democratic Senator Chuck Schumer of New York and Republican Senator Todd Young of Indiana. In 2020, officials from the State and Commerce departments under President Trump negotiated with TSMC to build a wafer fab in the U.S. At the time, the U.S. government promised to work to provide subsidies to the project.

Has the CHIPS Act been successful in increasing investment in semiconductor wafer fabs in the U.S.? Below is a table of major U.S. fab projects announced in the last few years. The total near term investment is $142 billion. Most of these projects were announced before the passage of the CHIPS Act. However, these companies likely expected future U.S. government subsidies. The subsidies listed in the table are from state and local governments. The organization Good Jobs First tracks government financial assistance to business.

Would these fabs have been built in the U.S. without the expectation of U.S. government assistance? Let’s look at each company.

TSMC – the largest semiconductor wafer foundry, based in Taiwan. TSMC has six current 300mm wafer fabs. All are based in Taiwan except for one in Nanjing, China. TSMC’s third quarter 2023 report shows 69% of its revenue came from companies based in North America (primarily the U.S.). TSMC has been facing pressure from both the U.S. government and its U.S. customers to build a fab in the U.S. This pressure combined with the hope of U.S. government funding most likely drove its decision to build a fab in Arizona. TSMC is reportedly seeking about $15 billion in funding through the CHIPS Act. TSMC is also planning an $11 billion wafer fab in Dresden, Germany in a joint venture with Bosch, Infineon and NXP. The German government planned to contribute about 5 billion Euros ($5.4 billion) towards the fab. However, a recent court ruling places the German subsidies in doubt.

Texas Instruments – the largest analog IC company, based in Dallas, Texas. TI currently has 300mm wafer fabs located in Dallas, Texas; Richardson, Texas; and Lehi, Utah. The Lehi fab was purchased from Micron Technology and converted by TI to produce analog ICs. TI in the past has located fabs in Europe and Asia, but in the last several years has only built fabs in the U.S. TI’s proposed fabs in Sherman, Texas, are about a one-hour drive from TI’s headquarters. TI has had operations in Sherman for over 50 years. The city, school district, and county will provide about $2.4 billion in subsidies for the Sherman fabs, primarily through tax breaks. Any money TI receives through the CHIPS Act will be a bonus. However, it is likely TI would have built its new fabs in Sherman without the CHIPS Act.

Samsung – the largest memory IC producer, based in South Korea. Most of Samsung’s fabs are in South Korea. Samsung built a fab in Austin, Texas which opened in 1996. The Austin fab operates as a wafer foundry. Samsung’s announced fab in Taylor, Texas – about 45 minutes from Austin – will also be a wafer foundry. The company will continue to make major fab investments in South Korea with $230 billion planned over the next 20 years, primarily for memory fabs. Samsung will receive about $1.2 billion in local subsidies from Taylor area governments. The proximity to its Austin fab and the local incentives were most likely the primary drivers for Samsung’s Taylor fab. Funds from the CHIPS Act were probably also a factor, but Samsung presumably would have built the fab in Taylor without CHIPS money.

Intel – the largest microprocessor supplier, based in Santa Clara, California. Intel has major U.S. fabs in Chandler, Arizona; Hillsboro, Oregon; and Rio Rancho, New Mexico. It also has fabs in Leixlip, Ireland; Jerusalem, Israel; and Kiryat Gat, Israel. Intel is building a new fab in Kiryat Gat with about $3 billion in Israeli government subsidies. The company also plans a fab in Magdeburg, Germany with about $11 billion in German government aid. However, as with TSMC, the German funding is uncertain. Intel will receive about $2.4 billion in local aid for its fab in New Albany, Ohio. Intel announced the Ohio fab in January 2022 – before the CHIPS Act was passed but when passage appeared likely. Intel has shown a willingness to locate fabs outside of the U.S. for the right incentives. The CHIPS funds were certainly a major factor in deciding on the Ohio location.

Micron Technology – the largest U.S. memory producer and third largest worldwide, headquartered in Boise, Idaho. Micron has fabs in Boise, Idaho; Taichung, Taiwan; Hiroshima, Japan; and Singapore. The foreign fabs were all procured through Micron business acquisitions: Rexchip Electronics in Japan, Inotera Memories in Taiwan, and Texas Instruments’ memory business in Singapore. Micron plans to expand its fabs in Taiwan and Japan. The Japanese government will subsidize Micron’s new Hiroshima fab with about $1.3 billion. Micron will build new fabs in Boise, Idaho and Clay, New York over the next several years. Micron will receive about $6.4 billion in state and local incentives for its New York fabs. The new fabs will produce DRAMs, which Micron currently makes only in Taiwan and Japan. Micron’s strategy is to eventually produce 40% of its DRAMs in the U.S. The new U.S. fabs were announced in September and October of 2022, well after the passage of the CHIPS Act. Since Micron has shown a willingness to expand its overseas fabs, the CHIPS funds were undoubtedly a major factor in deciding on its Idaho and New York fabs.

In summary, was the CHIPS Act a deciding factor in locating these new fabs in the U.S.? We say yes for TSMC and probably for Micron and Intel. TI and Samsung would likely have made their fab location decisions without the CHIPS Act. It remains to be seen how the CHIPS Act will affect future fab decisions. Companies decide to build new fabs based on their anticipated capacity needs. Fab locations are based on many factors including proximity to company headquarters, infrastructure, workforce, political stability, customer proximity, and logistics. Government subsidies may influence the country for the fab and the location within the country but are generally not the primary driving factor.

CapEx update

In our Semiconductor Intelligence June newsletter, we estimated 2023 semiconductor capital expenditures would be about $156 billion, down 14% from 2022. Most companies appear to be holding to their plans. One exception is Intel, which we had estimated at $20 billion in 2023 capex. Through the third quarter of 2023, Intel capex was $19.1 billion, which means the full year will likely be around $24 billion. The largest spender, TSMC, confirmed in October its 2023 capex target of $32 billion, down 12% from 2022.

Few companies have indicated their capex plans for 2024. Micron Technology ended its 2023 fiscal year in August with $7.0 billion in capex. Their guidance for fiscal 2024 capex was “slightly above” fiscal 2023. Infineon Technologies fiscal year 2023 ended in September with 3 billion euro ($3.2 billion) in capex. Infineon plans to increase capex to 3.3 billion euro ($3.6 billion). Our preliminary estimate for 2024 total capex is a 10% to 20% increase from 2023, in the range of $172 billion to $187 billion.
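
For reference, the arithmetic behind that preliminary 2024 range, assuming the roughly $156 billion 2023 base estimated above:

```python
# Preliminary 2024 capex range: 10% to 20% growth over the ~$156B 2023 estimate.
base_2023 = 156  # estimated 2023 semiconductor capex, in $B
low, high = base_2023 * 1.10, base_2023 * 1.20
print(f"2024 capex estimate: ${low:.0f}B to ${high:.0f}B")  # ~$172B to ~$187B
```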

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Headed Toward Strong 2024

Electronics Production Trending Up

Nvidia Number One in 2023

Turnaround in Semiconductor Market


RISC-V Summit Buzz – Launchpad Showcase Highlights Smaller Company Innovation
by Mike Gianfagna on 12-19-2023 at 8:00 am

One of the goals of the recent RISC-V Summit was to demonstrate that the RISC-V movement is real – major programs by large organizations committing to development around the RISC-V ISA. I would say this goal was achieved. Many high-profile announcements and aggressive, new architectures based on RISC-V were presented. On day one, compelling keynotes from companies such as Ventana Micro Systems, Meta, Microchip, Qualcomm and Synopsys were on the agenda. But what about the smaller companies? At the end of day one, a group of these companies got a chance to tell conference attendees about the great work they were doing. Read on to get a feeling for the magnitude of the RISC-V movement as the Launchpad Showcase highlights smaller company innovation.

Three Minutes of Fame

The famous quote, “In the future, everyone will be world-famous for 15 minutes,” has been attributed to Andy Warhol from the 1960s. Who actually said it first is the subject of some debate. Regardless, the concept is quite relevant today thanks to ubiquitous social media platforms. At the end of the first day of the RISC-V Summit, Tiffany Sparks, director of marketing at RISC-V International, kicked off a session that aimed to highlight a broad range of innovations being shown at the conference. Eight companies were chosen, and the 15 minutes of fame was reduced to three minutes to manage session length. Can eight individuals deliver a compelling and memorable message to a large audience at the end of a long day of presentations? Let’s find out…

Andes D25F-SE: The Superhero of RISC-V CPUs

Marvin Chao, director of solution architecture, presented for Andes Technology. Marvin explained that the D25F-SE core is fully ASIL-B compliant and contains several extensions, making it useful in many automotive applications, as shown in the figure below.

ASIL B Applications for ANDES D25F SE

The core has a five-stage, in-order, single-issue architecture. Based on the AndesStar V5 32-bit architecture, it has the RV32 GCBP ISA with Andes extensions. The memory subsystem supports instruction and data caches up to 32KB each and instruction and data local memories up to 16MB each. The part has AXI or AHB ports and a local memory direct access port. For functional safety, there is support for features like a core trap status bus interface, ECC protection, StackSafe, and PMP. The part will be ASIL-B certified in Q4 2023.

Marvin explained that speed-ups in the range of 3X or more can be achieved. The performance and flexibility of this design do put it in the superhero category, I believe.

Beagleboard.org: Technology Access Without Barriers

Jason Kridner, co-founder of beagleboard.org, presented a couple of new products from this non-profit organization. Its mission is to provide education in and collaboration around the design and use of open-source software and hardware in embedded computing. Jason presented two new boards.

The BeagleV-Ahead is aimed at high-performance mobile and edge applications. It contains a quad-core C910 running at 2GHz with optimized video, graphics, neural, and audio processors. It supports out-of-order execution as well as H.264 and H.265. Jason went on to describe many more capabilities of the product, making it applicable in a wide range of applications. The best part is that it’s available for $149 in quantity, worldwide.

He also presented a product that was just announced, the BeagleV-Fire. This part uses a Microchip PolarFire SoC with an FPGA fabric, making experimentation easy. This one covers a lot of application space as well and is also available for $149. Below is more detail on it.

Beagle V Fire Overview

Jason ended his presentation with an offer to set up a meeting with him via the beagleboard.org website. He is there to help the community succeed, so he is personally committed to the mission of technology access without barriers.

Codasip: Meet Mr. Custom Compute

Mike Eftimakis is the VP of strategy and ecosystem at Codasip. He’s known around the company for his passion regarding custom compute. Mike pointed out that custom compute is the differentiator for Codasip, so his passion aligns well with company goals.  He explained that Codasip decided from the beginning to take a different approach that would allow the flexibility needed to make custom compute a reality.

Mike explained that efficiency improvements of 1,000 to 10,000 percent are possible if one embraces the notion of tuning and customizing the processor to the application. He went on to explain that doing this by hand is very difficult, and that’s why Codasip invested in three areas: the tools, the methodology, and a range of IP cores that are ready for customization. On that last point, Mike talked about the new 700 family that was announced at the show. It brings great flexibility across many applications and even includes technology that will thwart up to 70 percent of possible cyber-attacks. He ended by telling everyone to stop wasting time trying to optimize designs with an architecture that wasn’t meant to be modified. Call Codasip.
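
To put those numbers in perspective, here is a minimal Amdahl-style sketch. It is my own illustration, not Codasip’s methodology: if a custom instruction accelerates the dominant kernel of a workload, the overall gain depends on how much of the runtime that kernel covers.

```python
def overall_speedup(accelerated_fraction: float, kernel_speedup: float) -> float:
    """Amdahl's law: overall gain when only part of the workload is accelerated."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / kernel_speedup)

# A 100x faster custom datapath only pays off if it covers most of the runtime.
for frac in (0.50, 0.90, 0.99):
    print(f"{frac:.0%} of runtime accelerated 100x -> {overall_speedup(frac, 100):.1f}x overall")
# 50% -> ~2x, 90% -> ~9.2x, 99% -> ~50x. The 10x to 100x (1,000 to 10,000
# percent) figures quoted above therefore imply customizing the parts of
# the processor that dominate the application's runtime.
```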

Mike’s passion and commitment were quite clear. He did all this with one slide that had, well, very little content.

Deep Computing: The First Phone Call from a RISC-V Pad

Yuning Liang, CEO of Deep Computing, explained that his company develops applications across many markets, including personal computing, laptops, workstations, and consumer electronics products, some of which could be seen around the show floor. The figure below summarizes the breadth of the company’s products.

Deep Computing Overview

With Tiffany’s help, Yuning placed a phone call from a RISC-V pad live on stage, a first. Doing that in front of a large, live audience definitely shows the level of confidence Yuning has in his company and its products.

Esperanto Technologies: Generative AI Meets RISC-V

Craig Cochran is vice president of marketing and business development at Esperanto Technologies. He explained that the needs of high-performance computing and machine learning are actually converging. He went on to say that RISC-V was in a unique position to build the best converged HPC and ML systems.

Craig introduced Esperanto’s new RISC-V supercomputer on a chip, the ET-SoC-1. The device contains over 1,000 64-bit RISC-V CPUs per chip. The chip is very energy efficient, and multiple chips can be combined to make large systems. An architectural overview is shown below.

ET SoC 1 Overview

Craig ended with big news about a new application for this architecture. Esperanto is applying it to a new generative AI appliance for AI inferencing. The performance and power efficiency are substantial, and the application space appears broad. This is one to watch for sure.

Semidynamics: Delivering Unfair Advantages with Tensor and Vector Unit Integration

Roger Espasa, CEO and founder of Semidynamics explained the details of the company’s new design that directly connects a tensor unit with an existing vector unit. Roger gave one of the more detailed and technical presentations. The punch line is really one of efficiency and software compatibility.

AI processes will typically multiply matrices of data and perform operations based on the results. While Linux supports these concepts directly, a typical hardware implementation requires multiple operations and data movement to get it done. It’s far from elegant. Now, with the integrated architecture delivered by Semidynamics, matrix operations are executed in the tensor unit and the results are immediately available to the vector unit to support subsequent actions. It’s quite an elegant solution. The diagram below shows some details of the architecture.

New Semidynamics Architecture
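
As a software analogy, the sketch below contrasts the two flows. This is my own illustration, not Semidynamics’ API: the difference is between materializing the matrix product to memory before a separate vector stage consumes it, versus consuming the product directly where it is produced.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))

# Conventional flow: the matrix product is written out, then read back
# by a separate vector engine for the follow-on operation (e.g. ReLU).
def conventional(A, B):
    product = A @ B          # tensor/matrix engine
    staged = product.copy()  # models the extra data movement to memory
    return np.maximum(staged, 0.0)  # vector engine consumes it later

# Integrated flow: the vector stage consumes the tensor result directly,
# with no intermediate staging, which is the efficiency argument above.
def integrated(A, B):
    return np.maximum(A @ B, 0.0)

assert np.allclose(conventional(A, B), integrated(A, B))
```

The results are identical; the win is in eliminating the staging step, not in changing the math.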

This approach holds promise for great impact. Another one to watch.

SiFive: Cores and Development Boards for All

Drew Barbier is senior director of product management at SiFive. He’s been with the company quite a while. Drew talked about the need for a complete solution, not just a RISC-V core. He explained that SiFive understands this need and is hard at work delivering complete solutions. Some of the items he discussed include:

  • Coherent, heterogeneous system architecture
  • RISC-V vector crypto extensions
  • Hypervisor and IOMMU
  • Advanced power management
  • System security

The actual list is quite long; there’s a lot to deliver here. Drew then discussed the extensive development board program SiFive has underway. The company is working with its partners to deliver a wide variety of development boards that cover a lot of markets. Below is a summary of the program.

SiFive Development Board Program

SiFive has clearly listened to its customers. Product support is quite extensive.

TetraMem: Welcome to a Revolution in the Physics of Computing

David George is head of global operations at TetraMem Inc. David talked about a new circuit element beyond the traditional capacitor, resistor, and inductor. The memristor is the novel circuit element at the core of TetraMem’s new product, the MX 100, which is the first commercial implementation of a memristor. The figure below summarizes some of its capabilities.

MX 100 Overview

This device performs analog, in-memory compute on a RISC-V SoC. David explained that the analog devices are organized into neural processing units and use the memristor’s power in a unique way, eliminating the need for thousands of clock cycles. The result is a huge improvement in latency, power, and throughput. David went on to point out that this innovation required the application of skills across materials science, semiconductor processes and devices, circuit architecture, algorithms, and applications. This is a technological virtuoso performance, delivering a complete hardware and software solution for neural network inference for AI at the edge.
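
A common way to picture analog in-memory compute is a crossbar of conductances: applying input voltages to the rows produces output currents on the columns that equal a matrix-vector product in a single analog step rather than over many clocked multiply-accumulate cycles. The sketch below is my generic model of that idea, not TetraMem’s specific MX 100 design.

```python
import numpy as np

# Weights stored as memristor conductances G (siemens); inputs applied as
# row voltages V. By Ohm's and Kirchhoff's laws the column currents are
# I = G.T @ V, i.e. a full matrix-vector multiply in one analog step.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # 128 inputs x 64 outputs
V = rng.uniform(0.0, 0.2, size=128)          # input voltages

I_analog = G.T @ V  # what the crossbar computes physically

# Digital reference: the same result via explicit multiply-accumulates,
# which is what would otherwise take many clock cycles.
I_digital = np.array([sum(G[i, j] * V[i] for i in range(128)) for j in range(64)])
assert np.allclose(I_analog, I_digital)
```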

TetraMem is indeed orchestrating a revolution in the physics of computing. And the impact is growing with announced partnerships with Andes and Synopsys. Exciting stuff.

To Learn More

In a relatively short session, eight innovators presented potentially game-changing technology. This body of work could occupy a whole track at a conference like this. In this case, it was all covered in one short session. If you’d like to see the event, you can access a replay of it here. You can also learn more about RISC-V on SemiWiki here. The Launchpad Showcase highlights smaller company innovation – a great result from a growing RISC-V community.


Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure

Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure
by Kalar Rajendiran on 12-19-2023 at 6:00 am

Alphawave Semi 224G SerDes 1st TestChip

In the rapidly evolving landscape of artificial intelligence (AI) and data-intensive applications, the demand for high-performance interconnect technologies has never been more critical. Even 100G interconnects are no longer fast enough for infrastructure applications. AI applications, with their massive datasets and complex algorithms, are driving the need for unprecedented data transfer speeds. The 224G Serializer/Deserializer (SerDes) stands at the forefront of the high-speed data communication revolution, ushering in a new era of performance and adaptability.

Alphawave recognizes this market need and addresses it head-on with its cutting-edge 200G interconnect technologies. It is a testament to the company’s commitment to staying ahead of the data curve, empowering industries with the speed and efficiency needed to propel AI and high-performance computing into the future.

Recently, the company hosted a webinar on this topic and shared results from their AthenaCORE 224G SerDes TestChip. This post takes a look at Alphawave’s efforts toward unleashing the 1.6T ecosystem with its comprehensive offerings including its 200G interconnect technology.

Leveraging Alphawave’s 112G SerDes Success to Deliver Robust 224G SerDes

By extending its proven 112G SerDes to support a remarkable 224Gbps, Alphawave has not only doubled the data rate but also unlocked new possibilities for data-intensive applications, particularly in AI and advanced computing. Overcoming the associated challenges and complexities of 200G interconnect called for a combination of advanced technologies, innovative design approaches, and collaborative efforts within the industry. Alphawave has built upon its 112G SerDes success to meet the even more stringent requirements of a 224G SerDes.

The AlphaCORE DSP-based Serializer/Deserializer (SerDes) architecture is engineered to deliver versatile high-speed data communication, built around a configurable 112G Digital Signal Processor (DSP). The configurability of the DSP allows the architecture to be adapted to diverse applications and performance demands, while its plug-and-play modular design supports interchangeability, easy integration, and compatibility with different components and functionalities. Operating at a data rate of 112 gigabits per second, it aligns with the requirements of modern data communication in fields such as data centers, networking, and high-performance computing, and it can be tailored to specific use cases to optimize performance for varying applications and environments. The DSP itself handles tasks such as equalization, error correction, and signal conditioning. Because the architecture keeps pace with evolving data rates and advancing communication standards, it is well suited to dynamic, future-oriented communication environments.

AthenaCORE 224G SerDes TestChip Results

Alphawave’s Innovative Development Efforts

Alphawave’s 200G interconnect technologies are not only about speed but also about efficiency and reliability. The 200G interconnect challenges include signal integrity issues, crosstalk, and dispersion. The company invests in advanced modulation schemes such as PAM4 (4-level Pulse Amplitude Modulation), which encodes two bits in a single symbol, effectively doubling the data rate at a given symbol rate. Alphawave also deploys advanced DSP techniques and adaptive error correction schemes to enhance the reliability and performance of data transmission at 200G speeds.
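
For reference, here is a minimal sketch of the PAM4 idea in general terms, not Alphawave’s implementation: two bits map to one of four amplitude levels, so the same symbol rate carries twice the bits of two-level NRZ signaling.

```python
# Gray-coded PAM4 mapping: 2 bits per symbol, four amplitude levels.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
LEVELS_TO_BITS = {v: k for k, v in PAM4_LEVELS.items()}

def pam4_modulate(bits):
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_demodulate(symbols):
    return [b for s in symbols for b in LEVELS_TO_BITS[s]]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_modulate(bits)          # 8 bits -> 4 symbols
assert pam4_demodulate(symbols) == bits
# The baud rate that carries 1 bit/symbol with NRZ carries 2 bits/symbol with
# PAM4, which is how the data rate doubles without doubling channel bandwidth.
```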

Advanced DSP Techniques

Maximum Likelihood Sequence Detectors (MLSD) represent a sophisticated Digital Signal Processing (DSP) technique employed in communication systems, notably effective in scenarios featuring intersymbol interference (ISI). Unlike conventional methods that aim to eliminate ISI, MLSD uniquely capitalizes on the energy within interference to boost signal power, optimizing symbol sequence detection. Its mathematically optimal approach involves an exhaustive search over all possible symbol sequences, minimizing mean square error to identify the transmitted sequence. Recognized for its capacity to significantly enhance system performance, MLSD is particularly applied in high-speed data communication and optical communication, addressing concerns related to signal distortion due to ISI. While MLSD’s computational demands raise complexity considerations, the technique’s adaptability to varying channel conditions underscores its efficacy in dynamic communication environments.
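
The sketch below illustrates that exhaustive-search idea on a toy two-tap ISI channel. It is my own simplified model, and production MLSD implementations typically use Viterbi-style dynamic programming rather than literal enumeration: every candidate symbol sequence is passed through the channel model, and the one closest to the received waveform in squared error is selected.

```python
import itertools
import numpy as np

def isi_channel(symbols, taps):
    """Model intersymbol interference as a convolution with the channel taps."""
    return np.convolve(symbols, taps)[: len(symbols)]

def mlsd_detect(received, taps, alphabet=(-1.0, +1.0)):
    """Brute-force maximum likelihood sequence detection:
    search all candidate sequences, minimize squared error."""
    n = len(received)
    best_seq, best_err = None, np.inf
    for candidate in itertools.product(alphabet, repeat=n):
        err = np.sum((isi_channel(np.array(candidate), taps) - received) ** 2)
        if err < best_err:
            best_seq, best_err = candidate, err
    return np.array(best_seq)

taps = np.array([1.0, 0.5])                  # main tap plus one ISI tap
tx = np.array([1, -1, -1, 1, 1, -1], float)  # transmitted NRZ symbols
rx = isi_channel(tx, taps) + 0.1 * np.random.default_rng(2).standard_normal(len(tx))
assert np.array_equal(mlsd_detect(rx, taps), tx)  # recovered despite ISI and noise
```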

Forward Error Correction (FEC) Strategies

Alphawave embraces adaptive Forward Error Correction (FEC) strategies, allowing error correction strength to be adjusted dynamically based on real-time channel conditions. This flexibility ensures optimal performance without compromising bandwidth efficiency. FEC lets systems tolerate higher Bit Error Rate (BER) targets on electrical links by providing a threshold for detecting and correcting errors, while adaptive FEC continually balances correction strength against bandwidth overhead. Combined with advanced modulation schemes, these adaptive and dynamic FEC strategies enhance system adaptability and optimize performance, particularly in high-speed and optical communication systems.
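
As a simple illustration of the adaptive idea, a link controller can monitor the pre-FEC bit error rate and step the coding overhead up or down so the post-FEC target is met without spending more bandwidth on parity than the channel requires. The sketch below is generic, with made-up profile thresholds, and is not Alphawave’s actual scheme.

```python
# Hypothetical FEC profiles: (name, overhead fraction, worst pre-FEC BER handled).
# The thresholds are illustrative numbers, not figures from any standard.
FEC_PROFILES = [
    ("light",  0.03, 1e-6),
    ("medium", 0.07, 1e-5),
    ("strong", 0.15, 1e-4),
]

def select_fec(pre_fec_ber: float):
    """Pick the lowest-overhead profile whose correction capability covers
    the measured pre-FEC bit error rate."""
    for name, overhead, max_ber in FEC_PROFILES:
        if pre_fec_ber <= max_ber:
            return name, overhead
    raise ValueError("channel too noisy for available FEC profiles")

for measured in (3e-7, 4e-6, 8e-5):
    name, overhead = select_fec(measured)
    print(f"pre-FEC BER {measured:.0e} -> {name} FEC, {overhead:.0%} overhead")
# Better channel conditions free up bandwidth (less parity); worse conditions
# trade bandwidth for stronger correction, which is the balance described above.
```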

Versatile Options to Support the 1.6T Ecosystem

Alphawave provides versatile options for switch ASICs (Application-Specific Integrated Circuits) in the 1.6T ecosystem. This includes the ability to stick with 512 x 100G links or leverage 256 x 200G links in a 1RU, 32-port switch configuration, offering scalability and flexibility for different deployment scenarios. The company’s UCIe-enabled chiplets open up new possibilities for chip-level modularity and scalability to address high-speed memory and compute requirements for infrastructure applications. With its 2.5D/3D packaging and application-optimized IP, the company navigates the delicate balance between complexity and performance to deliver advanced solutions.
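
The arithmetic behind those options is straightforward; the sketch below simply makes it explicit. A 1.6T port is 16 lanes of 100G or 8 lanes of 200G, and a 32-port 1RU switch totals 51.2 Tbps either way, so the trade-off is largely about SerDes count, power, and reach rather than raw bandwidth.

```python
# Lane math for a 1RU, 32-port switch built from 1.6T ports.
PORT_RATE_G = 1600  # 1.6 Tbps per port
PORTS = 32

for lane_rate_g in (100, 200):
    lanes_per_port = PORT_RATE_G // lane_rate_g
    total_lanes = lanes_per_port * PORTS
    total_tbps = total_lanes * lane_rate_g / 1000
    print(f"{lane_rate_g}G lanes: {lanes_per_port}/port, {total_lanes} total, {total_tbps} Tbps switch")
# 100G lanes -> 512 links, 200G lanes -> 256 links; both deliver the same
# 51.2 Tbps of switch capacity.
```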

Multi-Vendor Interoperability

Multi-vendor interoperability is a critical factor in the adoption and success of new technologies, and it spans various dimensions, including form factors, SerDes interfaces, and management software, with the ultimate goal of achieving system compatibility. Early adopters benefit from a broader range of compatible products, while downstream implementers leverage interoperability to streamline development, reducing time and costs. By setting performance standards, interoperability ensures users can anticipate how different components will function together in a system. It also encourages innovation and fosters quicker access to lower-cost technology, driven by competition in a diverse ecosystem of interoperable solutions.

Working with Standards Bodies

Alphawave understands the importance of multi-vendor interoperability and actively engages with industry standards bodies such as OIF (Optical Internetworking Forum) and IEEE 802.3 to contribute to the development of 200G signaling standards. This collaboration ensures interoperability and sets the stage for the seamless integration of Alphawave’s technologies into the broader ecosystem. Alphawave’s robust specifications and adherence to industry standards ensure that its 200G interconnect technologies seamlessly integrate with a variety of systems.

Summary

By actively contributing to industry standards, investing in advanced technologies, and providing versatile solutions, Alphawave is an important player in making the 1.6T ecosystem mainstream for the era of artificial intelligence. Alphawave offers a comprehensive suite of solutions designed for high-performance connectivity. Its High-Performance Connectivity IP spans crucial areas like PCIe/CXL, Ethernet, and HBM/DDR, catering to the demands of high-speed data communications. The incorporation of chiplet technology, notably leveraging UCIe, indicates a commitment to seamless chiplet interconnectivity. The specific chiplet types (IO, Memory, and Compute) underscore a modular approach, allowing different chiplets to function together harmoniously.

As data-intensive applications continue to evolve, Alphawave’s commitment to innovation positions it as a key enabler of the high-speed, reliable, and scalable AI data infrastructure of tomorrow. In essence, Alphawave is a key player in enabling flexibility, scalability and innovation for the upcoming 1.6T ecosystem.

To listen to the webinar, visit here.

Also Read:

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Interface IP in 2022: 22% YoY growth still data-centric driven

Alphawave Semi Visit at #60DAC