
Reasoning and Planning as the Next Big Thing in AI
by Bernard Murphy on 12-20-2023 at 6:00 am


When I search for ‘what is the next big thing in AI?’ I find a variety of suggestions around refining and better productizing what we already know. Very understandable for any venture aiming to monetize innovation in the near term, but I am more interested in where AI can move outside the box, to solve problems well outside the purview of today’s deep learning and LLM technologies. One example is tackling math problems, a known area of weakness even for the biggest LLMs, GPT-4 included. OpenAI Q* and Google Gemini both have claims in this space.

I like this example because it illustrates an active area of research in reasoning with interesting ideas while also clarifying the scale of the mountain that must be climbed on the way to anything resembling artificial general intelligence (AGI).

Math word problems

Popular accounts like to illustrate LLM struggles with math through grade school word problems, for example (credit to Timothy Lee for this example):

John gave Susan five apples and then gave her six more. Susan then ate three apples and gave three to Charlie. She gave her remaining apples to Bob, who ate one. Bob then gave half his apples to Charlie. John gave seven apples to Charlie, who gave Susan two-thirds of his apples. Susan then gave four apples to Charlie. How many apples does Charlie have now?

Language recognition is obviously valuable in some aspects of understanding this problem, say in translating from a word-based problem statement to an equation-based equivalent. But I feel this step is incidental to LLM problems with math. The real problem is in evaluating the equations, which requires a level of reasoning beyond LLM statistical prompt/response matching.

Working a problem in steps and positional arithmetic

The nature of an LLM is to respond in one shot to a prompt; this works well for language-centric questions. Language variability is tightly bounded by semantic constraints, so a reasonable match for the prompt is likely to be found with high confidence in the model (more than one match), triggering an appropriate response. Math problems allow much more variability in values and operations, so any given string of operations is far less likely to appear in a general training pile, no matter how large the pile.

We humans learn early that you don’t try to solve such a problem in one shot. You solve one step at a time. This decomposition, called chain-of-thought reasoning, is something that must be added to a model. In the example above, first calculate how many apples Susan has after John hands over his apples. Then move to the next step. Obvious to anyone with skill in arithmetic.
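To make the decomposition concrete, here is the apple problem worked as an explicit chain of small state updates (the step sequence below is my own reading of the word problem, not any model's output):

```python
# Chain-of-thought decomposition of the apple word problem:
# one small, checkable state update per sentence.
susan, charlie, bob = 0, 0, 0

susan += 5 + 6                # John gives Susan 5 apples, then 6 more
susan -= 3                    # Susan eats 3
susan -= 3; charlie += 3      # Susan gives 3 to Charlie
bob += susan; susan = 0       # Susan gives her remaining apples to Bob
bob -= 1                      # Bob eats 1
half = bob // 2               # Bob gives half his apples to Charlie
bob -= half; charlie += half
charlie += 7                  # John gives 7 to Charlie
given = charlie * 2 // 3      # Charlie gives Susan two-thirds of his apples
charlie -= given; susan += given
susan -= 4; charlie += 4      # Susan gives 4 to Charlie

print(charlie)  # -> 8
```

No single step needs more than trivial arithmetic; the hard part for an LLM is producing this sequence of steps reliably in the first place.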

Zooming further in, suppose you want to solve 5847+15326 (probably not apples). It is overwhelmingly likely that this calculation will not be found anywhere in the training dataset. Instead, the model must learn how to do arithmetic on positional notation numbers. First compute 7+6 = 13, put the 3 in the 1s position for the result and carry 1. And so on. Easy as an explicit algorithm but that’s cheating; here the model must learn how to do long addition. That requires training examples for adding two numbers, each between 0 and 9, plus multiple training examples which demonstrate the process of long addition in chain-of-thought reasoning. This training will in effect build a set of rules in the model, but captured in the usual tangle of model parameters rather than as discernible rules. Once training is finished against whatever you decided was a sufficient set of examples it is ready to run against addition tests not seen in the training set.
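As an explicit algorithm – the "cheating" version – long addition is only a few lines; the research challenge is getting a model to internalize the same carry procedure purely from worked examples:

```python
def long_add(a: str, b: str) -> str:
    # Right-align the operands, then add column by column with a carry,
    # exactly the procedure described in the text.
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))   # digit for this position
        carry = s // 10              # carry into the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(long_add("5847", "15326"))  # -> 21173
```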

This approach, which you might consider meta-pattern recognition, works quite well, up to a point. Remember that this is training to infer rules by example rather than by mathematical proof. We humans know that the long addition algorithm works no matter how big the numbers are. A model trained on examples should behave similarly for a while, but as the numbers get bigger it will at some point run beyond the scope of its training and will likely start to hallucinate. One paper shows such a model delivering 86% accuracy on calculations using 5-digit numbers – much better than the 5-6% of native GPT methods – but dropping to 41% for 12-digit numbers.

Progress is being made but clearly this is still a research topic. Also a truly robust system would need to move up another level, to learning absolute and abstract mathematical facts, for example true induction on long addition.

Beyond basic arithmetic

So much for basic arithmetic, especially as expressed in word problems. UC Berkeley has developed an extensive set of math problems, called MATH, together with AMPS, a pretraining dataset. MATH is drawn from high school math competitions covering prealgebra, algebra, number theory, counting and probability, geometry, intermediate algebra, and precalculus. AMPS, the far larger training dataset, is drawn from Khan Academy and Mathematica script examples and runs to 23GB versus 570GB for GPT3 training.

In a not too rigorous search, I have been unable to find research papers on learning models for any of these areas outside of arithmetic. Research in these domains would be especially interesting since solutions to such problems are likely to become more complex. There is also a question of how to decompose solution attempts into sufficiently granular chains-of-thought reasoning to guide effective training for LLMs rather than human learners. I expect that could be an eye-opener, not just for AI but also for neuroscience and education.

What seems likely to me (and others) is that each such domain will require, as basic arithmetic requires, its own fine-tuning training dataset. Then we can imagine similar sets of training for pre-college physics, chemistry, etc. At least enough to cover commonsense know-how. In compounding all these fine-tuning subsets, at some point “fine-tuning” a core LLM will no longer make sense. We will need to switch to new types of foundation model. So watch out for that.

Beyond math

While example-based intuition won’t fly in math, there are many domains outside the hard sciences where best guess answers are just fine. I think this is where we will ultimately see the payoff of this moonshot research. One interesting direction here further elaborates chain-of-thought from linear reasoning to exploration methods with branching searches along multiple different paths. Again a well-known technique in algorithm circles but quite new to machine learning methods I believe.
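To illustrate the branching idea, here is a toy beam search over multi-step "reasoning" paths. The `expand` and `score` functions are stand-ins for LLM-proposed next steps and an LLM- or heuristic-based evaluator; this is a generic sketch of branching exploration, not any specific published method:

```python
import heapq

def branching_search(start, expand, score, beam_width=6, depth=5):
    # Branch every surviving partial path, score the extensions,
    # and keep only the best beam_width paths at each depth.
    beam = [(score([start]), [start])]
    for _ in range(depth):
        candidates = [(score(path + [nxt]), path + [nxt])
                      for _, path in beam for nxt in expand(path)]
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return max(beam, key=lambda c: c[0])[1]

# Toy task: starting from 1, reach 11 using steps "+1" or "*2".
target = 11
best = branching_search(
    1,
    expand=lambda path: [path[-1] + 1, path[-1] * 2],
    score=lambda path: -abs(target - path[-1]))
print(best)  # -> [1, 2, 4, 5, 10, 11]
```

A purely linear (greedy) chain would have committed to doubling past the target; keeping several branches alive lets a weaker intermediate path win in the end.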

Yann LeCun has written more generally and certainly much more knowledgeably on this area as the big goal in machine learning, combining what I read as recognition (something we sort of have a handle on), reasoning (a very, very simple example covered in this blog), and planning (hinted at in the branching searches paragraph above). Here’s a temporary link: A Path Towards Autonomous Machine Intelligence. If the link expires, try searching for the title, or more generally “Yann LeCun planning”.

Very cool stuff. Opportunity for new foundation models and no doubt new hardware accelerators 😀



CHIPS Act and U.S. Fabs
by Bill Jewell on 12-19-2023 at 10:00 am

Major Future US Fabs

In August 2022, U.S. President Biden signed into law the CHIPS and Science Act of 2022 to provide incentives for semiconductor manufacturing in the United States. In a case of creating the acronym first and then finding a name to fit, CHIPS stands for Creating Helpful Incentives to Produce Semiconductors. The act provides a total of $52.7 billion for the U.S. semiconductor industry, including $39 billion in manufacturing incentives.

The basics of the CHIPS Act began in November 2019 with a bipartisan proposal from Democratic Senator Chuck Schumer of New York and Republican Senator Todd Young of Indiana. In 2020, officials from the State and Commerce departments under President Trump negotiated with TSMC to build a wafer fab in the U.S. At the time, the U.S. government promised to work to provide subsidies to the project.

Has the CHIPS Act been successful in increasing investment in semiconductor wafer fabs in the U.S.? Below is a table of major U.S. fab projects announced in the last few years. The total near term investment is $142 billion. Most of these projects were announced before the passage of the CHIPS Act. However, these companies likely expected future U.S. government subsidies. The subsidies listed in the table are from state and local governments. The organization Good Jobs First tracks government financial assistance to business.

Would these fabs have been built in the U.S. without the expectation of U.S. government assistance? Let’s look at each company.

TSMC – the largest semiconductor wafer foundry, based in Taiwan. TSMC has six current 300mm wafer fabs. All are based in Taiwan except for one in Nanjing, China. TSMC’s third quarter 2023 report shows 69% of its revenue came from companies based in North America (primarily the U.S.). TSMC has been facing pressure from both the U.S. government and its U.S. customers to build a fab in the U.S. This pressure combined with the hope of U.S. government funding most likely drove its decision to build a fab in Arizona. TSMC is reportedly seeking about $15 billion in funding through the CHIPS Act. TSMC is also planning an $11 billion wafer fab in Dresden, Germany in a joint venture with Bosch, Infineon and NXP. The German government planned to contribute about 5 billion Euros ($5.4 billion) towards the fab. However, a recent court ruling places the German subsidies in doubt.

Texas Instruments – the largest analog IC company, based in Dallas, Texas. TI currently has 300mm wafer fabs located in Dallas, Texas; Richardson, Texas; and Lehi, Utah. The Lehi fab was purchased from Micron Technology and converted by TI to produce analog ICs. TI in the past has located fabs in Europe and Asia, but in the last several years has only built fabs in the U.S. TI’s proposed fabs in Sherman, Texas are about a one-hour drive from TI’s headquarters. TI has had operations in Sherman for over 50 years. The city, school district, and county will provide about $2.4 billion in subsidies for the Sherman fabs, primarily through tax breaks. Any money TI receives through the CHIPS Act will be a bonus. However, it is likely TI would have built its new fabs in Sherman without the CHIPS Act.

Samsung – the largest memory IC producer, based in South Korea. Most of Samsung’s fabs are in South Korea. Samsung built a fab in Austin, Texas which opened in 1996. The Austin fab operates as a wafer foundry. Samsung’s announced fab in Taylor, Texas – about 45 minutes from Austin – will also be a wafer foundry. The company will continue to make major fab investments in South Korea with $230 billion planned over the next 20 years, primarily for memory fabs. Samsung will receive about $1.2 billion in local subsidies from Taylor area governments. The proximity to its Austin fab and the local incentives were most likely the primary drivers for Samsung’s Taylor fab. Funds from the CHIPS Act were probably also a factor, but Samsung presumably would have built the fab in Taylor without CHIPS money.

Intel – the largest microprocessor supplier, based in Santa Clara, California. Intel has major U.S. fabs in Chandler, Arizona; Hillsboro, Oregon; and Rio Rancho, New Mexico. It also has fabs in Leixlip, Ireland; Jerusalem, Israel; and Kiryat Gat, Israel. Intel is building a new fab in Kiryat Gat with about $3 billion in Israeli government subsidies. The company also plans a fab in Magdeburg, Germany with about $11 billion in German government aid. However, as with TSMC, the German funding is uncertain. Intel will receive about $2.4 billion in local aid for its fab in New Albany, Ohio. Intel announced the Ohio fab in January 2022 – before the CHIPS Act was passed but when passage appeared likely. Intel has shown a willingness to locate fabs outside of the U.S. for the right incentives. The CHIPS funds were certainly a major factor in deciding on the Ohio location.

Micron Technology – the largest U.S. memory producer and third largest worldwide, headquartered in Boise, Idaho. Micron has fabs in Boise, Idaho; Taichung, Taiwan; Hiroshima, Japan; and Singapore. The foreign fabs all came through business acquisitions: Rexchip Electronics in Japan, Inotera Memories in Taiwan, and Texas Instruments’ memory business in Singapore. Micron plans to expand its fabs in Taiwan and Japan. The Japanese government will subsidize Micron’s new Hiroshima fab with about $1.3 billion. Micron will build new fabs in Boise, Idaho and Clay, New York over the next several years. Micron will receive about $6.4 billion in state and local incentives for its New York fabs. The new fabs will produce DRAMs, which Micron currently makes only in Taiwan and Japan. Micron’s strategy is to eventually produce 40% of its DRAMs in the U.S. The new U.S. fabs were announced in September and October of 2022, well after the passage of the CHIPS Act. Since Micron has shown a willingness to expand its overseas fabs, the CHIPS funds were undoubtedly a major factor in deciding on its Idaho and New York fabs.

In summary, was the CHIPS Act a deciding factor in locating these new fabs in the U.S.? We say yes for TSMC and probably for Micron and Intel. TI and Samsung would likely have made their fab location decisions without the CHIPS Act. It remains to be seen how the CHIPS Act will affect future fab decisions. Companies decide to build new fabs based on their anticipated capacity needs. Fab locations are based on many factors including proximity to company headquarters, infrastructure, workforce, political stability, customer proximity, and logistics. Government subsidies may influence the country for the fab and the location within the country but are generally not the primary driving factor.

CapEx update

In our Semiconductor Intelligence June newsletter, we estimated 2023 semiconductor capital expenditures would be about $156 billion, down 14% from 2022. Most companies appear to be holding to their plans. One exception is Intel, which we had estimated at $20 billion in 2023 CapEx. Through the third quarter of 2023, Intel capex was $19.1 billion, which means the full year will likely be around $24 billion. The largest spender, TSMC, confirmed in October their 2023 capex target of $32 billion, down 12% from 2022.

Few companies have indicated their capex plans for 2024. Micron Technology ended its 2023 fiscal year in August with $7.0 billion in capex. Their guidance for fiscal 2024 capex was “slightly above” fiscal 2023. Infineon Technologies fiscal year 2023 ended in September with 3 billion euro ($3.2 billion) in capex. Infineon plans to increase capex to 3.3 billion euro ($3.6 billion). Our preliminary estimate for 2024 total capex is a 10% to 20% increase from 2023, in the range of $172 billion to $187 billion.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com



RISC-V Summit Buzz – Launchpad Showcase Highlights Smaller Company Innovation
by Mike Gianfagna on 12-19-2023 at 8:00 am


One of the goals of the recent RISC-V Summit was to demonstrate that the RISC-V movement is real – major programs by large organizations committing to development around the RISC-V ISA. I would say this goal was achieved. Many high-profile announcements and aggressive new architectures based on RISC-V were presented. On day one, compelling keynotes from companies such as Ventana Micro Systems, Meta, Microchip, Qualcomm and Synopsys were on the agenda. But what about the smaller companies? At the end of day one, a group of these companies got a chance to tell conference attendees about the great work they were doing. Read on to get a feeling for the magnitude of the RISC-V movement as the Launchpad Showcase highlights smaller company innovation.

Three Minutes of Fame

The famous quote, “In the future, everyone will be world-famous for 15 minutes” has been attributed to Andy Warhol from the 1960s. Who actually said it first is the subject of some debate. Regardless, the concept is quite relevant today thanks to ubiquitous social media platforms. At the end of the first day of the RISC-V Summit, Tiffany Sparks, director of marketing at RISC-V International, kicked off a session that aimed to highlight a broad range of innovations being shown at the conference. Eight companies were chosen, and the 15 minutes of fame was reduced to three minutes to manage session length. Can eight individuals deliver a compelling and memorable message to a large audience at the end of a long day of presentations? Let’s find out…

Andes D25F-SE: The Superhero of RISC-V CPUs

Marvin Chao, director of solution architecture, presented for Andes Technology. Marvin explained that the D25F-SE core is fully ASIL-B compliant and contains several extensions, making it useful in many automotive applications, as shown in the figure below.

ASIL B Applications for ANDES D25F SE

The core has a five-stage, in-order, single-issue architecture. Based on the AndesStar V5 32-bit architecture, it implements the RV32GCBP ISA with Andes extensions. The memory subsystem supports instruction and data caches up to 32KB each and instruction and data local memories up to 16MB each. The part has AXI or AHB ports and a local memory direct access port. For functional safety, there is support for features such as a core trap status bus interface, ECC protection, StackSafe, and PMP. The part will be ASIL-B certified in Q4 2023.

Marvin explained that speed-ups in the range of 3X or more can be achieved. The performance and flexibility of this design does put it in the superhero category I believe.

Beagleboard.org: Technology Access Without Barriers

Jason Kridner, co-founder of beagleboard.org, presented a couple of new products from this non-profit organization. Its mission is to provide education in and collaboration around the design and use of open-source software and hardware in embedded computing. Jason presented two new boards.

The BeagleV-Ahead is aimed at high-performance mobile and edge applications. It contains a quad-core C910 running at 2GHz with optimized video, graphics, neural, and audio processors. It supports out-of-order execution as well as H.264 and H.265. Jason went on to describe many more capabilities of the product, making it applicable in a wide range of applications. The best part is that it’s available for $149 in quantity, worldwide.

He also presented a product that was just announced, the BeagleV-Fire. This part uses a Microchip PolarFire SoC with an FPGA fabric, making experimentation easy. This one covers a lot of application space as well and is also available for $149. Below is more detail on it.

Beagle V Fire Overview

Jason ended his presentation with an offer to set up a meeting with him via the beagleboard.org website. He is there to help the community succeed, so he is personally committed to the mission of technology access without barriers.

Codasip: Meet Mr. Custom Compute

Mike Eftimakis is the VP of strategy and ecosystem at Codasip. He’s known around the company for his passion regarding custom compute. Mike pointed out that custom compute is the differentiator for Codasip, so his passion aligns well with company goals.  He explained that Codasip decided from the beginning to take a different approach that would allow the flexibility needed to make custom compute a reality.

Mike explained that efficiency improvements of 1,000 – 10,000 percent are possible if one embraces the notion of tuning and customizing the processor to the application.  He went on to explain that doing this by hand is very difficult and that’s why Codasip invested in three areas: The tools, the methodology and a range of IP cores that are ready for customization. On that last point, Mike talked about the new 700 family that was announced at the show. This brings great flexibility across many applications and even has technology in it that will thwart up to 70 percent of possible cyber-attacks. He ended by telling everyone to stop wasting time trying to optimize your design with an architecture that wasn’t meant to be modified. Call Codasip.

Mike’s passion and commitment were quite clear. He did all this with one slide that had, well, very little content.

Deep Computing: The First Phone Call from a RISC-V Pad

Yuning Liang, CEO of Deep Computing, explained that his company develops products across many markets, including personal computing, laptops, workstations, and consumer electronics, some of which could be seen around the show floor. The figure below summarizes the breadth of the company’s products.

Deep Computing Overview

With Tiffany’s help, Yuning placed a phone call from a RISC-V pad live on stage – a first. Doing that in front of a large, live audience definitely shows the level of confidence Yuning has in his company and its products.

Esperanto Technologies: Generative AI Meets RISC-V

Craig Cochran is vice president of marketing and business development at Esperanto Technologies. He explained that the needs of high-performance computing and machine learning are actually converging. He went on to say that RISC-V was in a unique position to build the best converged HPC and ML systems.

Craig introduced Esperanto’s new RISC-V supercomputer on a chip, the ET-SoC-1. The device contains over 1,000 64-bit RISC-V CPUs. The chip is very energy efficient, and multiple chips can be combined to make large systems. An architectural overview is shown below.

ET-SoC-1 Overview

Craig ended with big news about a new application for this architecture. Esperanto is applying it to a new generative AI appliance for AI inferencing. The performance and power efficiency are substantial, and the application space appears broad. This is one to watch for sure.

Semidynamics: Delivering Unfair Advantages with Tensor and Vector Unit Integration

Roger Espasa, CEO and founder of Semidynamics explained the details of the company’s new design that directly connects a tensor unit with an existing vector unit. Roger gave one of the more detailed and technical presentations. The punch line is really one of efficiency and software compatibility.

AI processes will typically multiply matrices of data and perform operations based on the results. While Linux supports these concepts directly, a typical hardware implementation requires multiple operations and data movement to get it done. It’s far from elegant. Now, with the integrated architecture delivered by Semidynamics, vector operations are done in the tensor unit and the results are immediately available in the vector unit to support subsequent actions. It’s quite an elegant solution. The diagram below shows some details of the architecture.

New Semidynamics Architecture

This approach holds promise for great impact. Another one to watch.

SiFive: Cores and Development Boards for All

Drew Barbier is senior director of product management at SiFive. He’s been with the company quite a while. Drew talked about the need for a complete solution, not just a RISC-V core. He explained that SiFive understands this need and is hard at work delivering complete solutions. Some of the items he discussed include:

  • Coherent, heterogeneous system architecture
  • RISC-V vector crypto extensions
  • Hypervisor and IOMMU
  • Advanced power management
  • System security

The actual list is quite long; there’s a lot to deliver here. Drew then discussed the extensive development board program SiFive has underway. The company is working with its partners to deliver a wide variety of development boards that cover a lot of markets. Below is a summary of the program.

SiFive Development Board Program

SiFive has clearly listened to its customers. Product support is quite extensive.

TetraMem: Welcome to a Revolution in the Physics of Computing

David George is head of global operations at TetraMem Inc. David talked about a new circuit element beyond the traditional capacitor, resistor and inductor. The memristor is the novel circuit element at the core of TetraMem’s new product, the MX 100, the first commercial implementation of a memristor. The figure below summarizes some of its capabilities.

MX 100 Overview

This device performs analog, in-memory compute on a RISC-V SoC. David explained that the analog devices are organized into neural processing units and use the memristor’s power in a unique way, eliminating the need for thousands of clock cycles. The result is a huge improvement in latency, power, and throughput. David went on to point out that this innovation required the application of skills across materials science, semiconductor processes and devices, circuit architecture, algorithms, and applications. This is a technological virtuoso performance, delivering a complete hardware and software solution for neural network inference for AI at the edge.

TetraMem is indeed orchestrating a revolution in the physics of computing. And the impact is growing with announced partnerships with Andes and Synopsys. Exciting stuff.

To Learn More

In a relatively short session, eight innovators presented potentially game-changing technology. This body of work could occupy a whole track at a conference like this. In this case, it was all covered in one short session. If you’d like to see the event, you can access a replay of it here. You can also learn more about RISC-V on SemiWiki here. The Launchpad Showcase highlights smaller company innovation – a great result from a growing RISC-V community.


Unleashing the 1.6T Ecosystem: Alphawave Semi’s 200G Interconnect Technologies for Powering AI Data Infrastructure
by Kalar Rajendiran on 12-19-2023 at 6:00 am

Alphawave Semi 224G SerDes 1st TestChip

In the rapidly evolving landscape of artificial intelligence (AI) and data-intensive applications, the demand for high-performance interconnect technologies has never been more critical. Even 100G interconnects are no longer fast enough for infrastructure applications. AI applications, with their massive datasets and complex algorithms, are driving the need for unprecedented data transfer speeds. The 224G Serializer/Deserializer (SerDes) stands at the forefront of the high-speed data communication revolution, ushering in a new era of performance and adaptability.

Alphawave recognizes this market need and addresses it head-on with its cutting-edge 200G interconnect technologies. It is a testament to the company’s commitment to staying ahead of the data curve, empowering industries with the speed and efficiency needed to propel AI and high-performance computing into the future.

Recently, the company hosted a webinar on this topic and shared results from their AthenaCORE 224G SerDes TestChip. This post takes a look at Alphawave’s efforts toward unleashing the 1.6T ecosystem with its comprehensive offerings including its 200G interconnect technology.

Leveraging Alphawave’s 112G SerDes Success to Deliver Robust 224G SerDes

By extending its proven 112G SerDes to support a remarkable 224Gbps, Alphawave has not only doubled the data rate but has also unlocked new possibilities for data-intensive applications, particularly in the realm of Artificial Intelligence (AI) and advanced computing. Overcoming the associated challenges and complexities of 200G interconnect called for a combination of advanced technologies, innovative design approaches, and collaborative efforts within the industry. Alphawave has built upon this 112G SerDes success to meet the even more stringent requirements of a 224G SerDes.

The AlphaCORE DSP-based Serializer/Deserializer (SerDes) architecture is engineered to deliver versatile high-speed data communication, built around a configurable 112G Digital Signal Processor (DSP). The configurable DSP can be adapted to diverse applications and performance demands, with a plug-and-play modular design for interchangeability and easy integration. Operating at 112 gigabits per second, the architecture meets the requirements of modern data communication in fields such as data centers, networking, and high-performance computing. The DSP handles tasks like equalization, error correction, and signal conditioning, while the modular design ensures seamless compatibility with different components and functionalities. Tailored to specific use cases and aligned with evolving data rates and communication standards, the architecture is well suited to dynamic, future-oriented communication environments.

AthenaCORE 224G SerDes TestChip Results

Alphawave’s Innovative Development Efforts

Alphawave’s 200G interconnect technologies are not only about speed but also about efficiency and reliability. The 200G interconnect challenges include signal integrity issues, crosstalk, and dispersion. The company invests in advanced modulation schemes such as PAM4 (4-level Pulse Amplitude Modulation), which encodes two bits in each symbol, effectively doubling the data rate at a given symbol rate. Alphawave also deploys advanced DSP techniques and adaptive error correction schemes to enhance the reliability and performance of data transmission at 200G speeds.
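As a quick illustration of the PAM4 idea: each pair of bits maps to one of four amplitude levels, so the symbol rate is half the bit rate. The Gray-coded level mapping below is the conventional choice (adjacent levels differ by one bit), though the exact level labels here are illustrative:

```python
# PAM4 maps each 2-bit pair to one of four amplitude levels.
# Gray coding keeps adjacent levels one bit apart, so the most likely
# symbol error (to a neighboring level) costs only a single bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 1, 1, 1, 0, 0, 1]))  # -> [-3, 1, 3, -1]
```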

Advanced DSP Techniques

Maximum Likelihood Sequence Detectors (MLSD) represent a sophisticated Digital Signal Processing (DSP) technique employed in communication systems, notably effective in scenarios featuring intersymbol interference (ISI). Unlike conventional methods that aim to eliminate ISI, MLSD uniquely capitalizes on the energy within interference to boost signal power, optimizing symbol sequence detection. Its mathematically optimal approach involves an exhaustive search over all possible symbol sequences, minimizing mean square error to identify the transmitted sequence. Recognized for its capacity to significantly enhance system performance, MLSD is particularly applied in high-speed data communication and optical communication, addressing concerns related to signal distortion due to ISI. While MLSD’s computational demands raise complexity considerations, the technique’s adaptability to varying channel conditions underscores its efficacy in dynamic communication environments.
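The MLSD criterion can be sketched in a few lines. The toy below does the exhaustive search over candidate symbol sequences for a known 2-tap ISI channel, minimizing squared error against the received samples; practical receivers implement the same criterion with Viterbi-style dynamic programming rather than brute force, and the channel model here is invented for illustration:

```python
from itertools import product

def mlsd(received, taps, alphabet=(-1, 1)):
    """Brute-force maximum likelihood sequence detection: try every
    candidate symbol sequence, run it through the known ISI channel
    model, and keep the one with minimum squared error against the
    received samples."""
    n = len(received)

    def channel(seq):
        # FIR model of the ISI channel: y[k] = sum_j taps[j] * seq[k-j]
        return [sum(t * (seq[k - j] if k - j >= 0 else 0)
                    for j, t in enumerate(taps)) for k in range(n)]

    best, best_err = None, float("inf")
    for seq in product(alphabet, repeat=n):
        err = sum((r - y) ** 2 for r, y in zip(received, channel(seq)))
        if err < best_err:
            best, best_err = seq, err
    return list(best)

sent = [1, -1, 1, 1, -1]
taps = [1.0, 0.5]            # each symbol smears into the next (ISI)
received = [1.0, -0.5, 0.5, 1.5, -0.5]   # noiseless channel output
print(mlsd(received, taps))  # -> [1, -1, 1, 1, -1]
```

Note that the detector never tries to cancel the 0.5 tap; it searches for the sequence whose *smeared* version best matches what was received, which is exactly how MLSD exploits the interference energy rather than discarding it.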

Forward Error Correction (FEC) Strategies

Alphawave embraces adaptive Forward Error Correction (FEC) strategies, allowing for dynamic adjustments based on real-time channel conditions. This flexibility ensures optimal performance without compromising on bandwidth efficiency. FEC empowers systems to establish higher Bit Error Rate (BER) targets on electrical links, providing a threshold for tolerating and correcting errors. Adaptive FEC dynamically adjusts error correction strength, balancing correction and bandwidth efficiency based on real-time channel conditions. The ascent of adaptive and dynamic FEC strategies enhances system adaptability, while integration with advanced modulation schemes optimizes performance, particularly in high-speed and optical communication systems.
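The adaptation principle – spend FEC overhead only when the measured channel demands it – can be sketched with a deliberately simple repetition code. Real 200G links use far more efficient codes (e.g. RS-FEC), and the profiles and numbers below are illustrative only:

```python
from math import comb

def repetition_post_ber(n, p):
    # Majority vote over n repeats fails only when more than half
    # of the copies are flipped by the channel.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def adapt_repetition(raw_ber, target_ber, max_n=11):
    """Toy adaptive FEC: pick the smallest odd repetition factor whose
    post-correction BER estimate meets the target at the measured raw
    BER; fall back to the strongest profile if none suffices."""
    for n in range(1, max_n + 1, 2):
        if repetition_post_ber(n, raw_ber) <= target_ber:
            return n
    return max_n

print(adapt_repetition(1e-4, 1e-9))  # -> 5   (clean link, light overhead)
print(adapt_repetition(1e-2, 1e-9))  # -> 11  (noisy link, heavy overhead)
```

A cleaner channel earns a lighter code, preserving bandwidth efficiency; a degraded channel triggers stronger correction, which is the trade-off the adaptive FEC strategies above manage continuously.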

Versatile Options to Support the 1.6T Ecosystem

Alphawave provides versatile options for switch ASICs (Application-Specific Integrated Circuits) in the 1.6T ecosystem. This includes the ability to stick with 512 x 100G links or leverage 256 x 200G links in a 1RU, 32-port switch configuration, offering scalability and flexibility for different deployment scenarios. The company’s UCIe-enabled chiplets open up new possibilities for chip-level modularity and scalability to address high-speed memory and compute requirements for infrastructure applications. With its 2.5D/3D packaging and application-optimized IP, the company navigates the delicate balance between complexity and performance to deliver advanced solutions.
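A quick sanity check (my arithmetic, not from the webinar) shows why the two lane options are equivalent for a 32-port switch at 1.6T per port:

```python
# Both lane configurations deliver 1.6T per port on a 32-port switch
PORTS = 32

def port_rate_gbps(total_lanes, gbps_per_lane):
    """Per-port rate when lanes are split evenly across ports."""
    return total_lanes // PORTS * gbps_per_lane

print(port_rate_gbps(512, 100))   # 1600 (16 x 100G lanes per port)
print(port_rate_gbps(256, 200))   # 1600 (8 x 200G lanes per port)
```

Either way the aggregate is 51.2 Tbps; the 200G option simply halves the lane count per port.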

Multi-Vendor Interoperability

Multi-vendor interoperability is a critical factor in the adoption and success of new technologies. It spans various dimensions, including form factors, SerDes interfaces, and management software, with the ultimate goal of system-level compatibility. Early adopters benefit from a broader range of compatible products, while downstream implementers leverage interoperability to streamline development, reducing time and costs. By setting performance standards, interoperability ensures users can anticipate how different components will function together in a system. This fosters quicker access to lower-cost technology, driven by competition in a diverse ecosystem of interoperable solutions.

Working with Standards Bodies

Alphawave understands the importance of multi-vendor interoperability and actively engages with industry standards bodies such as OIF (Optical Internetworking Forum) and IEEE 802.3 to contribute to the development of 200G signaling standards. This collaboration ensures interoperability and sets the stage for the seamless integration of Alphawave’s technologies into the broader ecosystem. Alphawave’s robust specifications and adherence to industry standards ensure that its 200G interconnect technologies seamlessly integrate with a variety of systems.

Summary

By actively contributing to industry standards, investing in advanced technologies, and providing versatile solutions, Alphawave is an important player in making the 1.6T ecosystem mainstream for the era of artificial intelligence. Alphawave offers a comprehensive suite of solutions designed for high-performance connectivity. Their High-Performance Connectivity IP spans crucial areas like PCIe/CXL, Ethernet, and HBM/DDR, catering to the demands of high-speed data communications. The incorporation of chiplet technology, notably leveraging UCIe, indicates a commitment to seamless chiplet interconnectivity. The specific chiplet types (IO, Memory, and Compute) underscore a modular approach, allowing different chiplets to function together harmoniously.

As data-intensive applications continue to evolve, Alphawave’s commitment to innovation positions it as a key enabler of the high-speed, reliable, and scalable AI data infrastructure of tomorrow. In essence, Alphawave is a key player in enabling flexibility, scalability and innovation for the upcoming 1.6T ecosystem.

To listen to the webinar, visit here.

Also Read:

Disaggregated Systems: Enabling Computing with UCIe Interconnect and Chiplets-Based Design

Interface IP in 2022: 22% YoY growth still data-centric driven

Alphawave Semi Visit at #60DAC


Ceva Launches New Brand Identity Reflecting its Focus on Smart Edge IP Innovation

Ceva Launches New Brand Identity Reflecting its Focus on Smart Edge IP Innovation
by Daniel Nenni on 12-18-2023 at 10:00 am

New CEVA Logo
Company sharpens its strategy of delivering silicon and software IP that makes it possible for low power Edge AI devices to connect, sense and infer data, reliably and efficiently, across multiple high-growth end markets

Ceva Inc. is an interesting company, founded in November 2002 through the combination of the DSP IP licensing division of DSP Group and Parthus Technologies plc. Ceva is publicly traded on NASDAQ under the CEVA symbol. We have been working with Ceva since 2012, publishing 149 articles that have garnered more than 1.5M views, wow!

So why the pivot? I would say this one is more of a communications pivot rather than a product or strategy pivot. Ceva has 20+ years of experience and knows the semiconductor IP business intimately. Ceva is also a well-run, no-nonsense company, so this is a great opportunity to tell it like it is, absolutely.

Here is a quick Q&A with Moshe Sheier, Vice President of Marketing at Ceva. Moshe brings with him more than 20 years of experience in the semiconductor IP and chip industry in both development and managerial roles. Prior to this position, Mr. Sheier was the Director of Product Marketing at Ceva, and before that he managed Xilinx Inc.’s partner program in EMEA. Before joining Ceva, he had been with DSP Group since 1993, holding different engineering and R&D management positions, including VLSI Department Manager. We will be writing more about this in 2024 so stay tuned.

Why did Ceva rebrand?
Ceva has transformed its business to focus on the smart edge by developing innovative silicon and software IP solutions that enable devices to connect, sense and infer data more reliably and efficiently.

Today, Ceva delivers the industry’s only portfolio of comprehensive wireless communications and scalable edge AI IP, and we are a trusted partner to over 400 of the industry’s leading semiconductor and OEM brands. With this in mind, we have refreshed our logo and visual identity to accurately reflect our forward looking brand reality.

Why did Ceva change its logo? What does the new logo mean?
At Ceva, we are focused on smart edge innovation. We envision a connected world where intelligent devices seamlessly interpret and enhance how we work, play, learn, protect, and care for each other. Our new logo and visual identity reflect our vision and respect our heritage.

We have refreshed our primary color palette to blue and orange. Blue signifies our continued dedication to our customers and our mission, and the orange to bold innovation and bright future. The interconnection of the letters “c” and “e” is a visual representation of our collaborative approach while the ascending “v” in the shape of a check mark signifies commitment to being a reliable and dependable industry partner. Last, but not least, the orange “sun” completing the upper case “A” signifies the rising sun and a new day dawning – a day full of opportunity and growth for Ceva, our customers and the industry.

How does this rebranding align with the company’s mission and vision?
Our refreshed brand reflects our commitment to developing innovative silicon and software IP solutions that enable products to connect, sense, and infer data more reliably and efficiently. We envision a smarter, safer, and more interconnected Ceva-powered future, where intelligent devices seamlessly interpret and enhance how we work, play, learn, protect, and care for each other. Our new brand reflects our commitment to this mission and vision.

Is Ceva changing its business model?
No. Ceva continues to develop and license IP that powers connectivity, data sensing, and inference in today’s most advanced smart edge mobile, consumer IoT, automotive, data center and infrastructure, industrial, and personal computing products.

Will Ceva’s products change?
Ceva continues to expand its portfolio of IP solutions. We offer a comprehensive silicon IP portfolio including platforms for 5G cellular, Wi-Fi, Bluetooth, UWB, and NB-IoT, processors for digital signal processing (DSP), TinyML processing, and neural processing (NPU), as well as embedded application software IP for ambient sensing and immersive audio that makes it easy for our customers to deploy application ready end user solutions that utilize audio, voice and sensors.

Is Ceva deprioritizing its DSP portfolio?
No. Ceva continues to be committed to our DSP portfolio now and into the future. We continue to offer full support and a roadmap for our DSP IP platforms.

Is Ceva changing its market focus?
Ceva has expanded its focus to include high growth smart edge markets with focus on consumer IoT, automotive, infrastructure and industrial. Ceva will continue supporting the PC and mobile markets as well.

Why was the Ceva website URL changed?
The URL change from ceva-dsp.com to ceva-ip.com signifies our commitment to developing and licensing IP that powers wireless connectivity, sensing, and inference in today’s most advanced smart edge devices. It also signifies the expansion of our IP portfolio to include a broader set of silicon and software IP solutions beyond our industry leading DSP solutions.

About Ceva, Inc.

At Ceva, we are passionate about bringing new levels of innovation to the smart edge. Our wireless communications, sensing and Edge AI technologies are at the heart of some of today’s most advanced smart edge products. From Bluetooth connectivity, Wi-Fi, UWB and 5G platform IP for ubiquitous, robust communications, to scalable Edge AI NPU IPs, sensor fusion processors and embedded application software that make devices smarter, we have the broadest portfolio of IP to connect, sense and infer data more reliably and efficiently. We deliver differentiated solutions that combine outstanding performance at ultra-low power within a very small silicon footprint. Our goal is simple – to deliver the silicon and software IP to enable a smarter, safer, and more interconnected world. This philosophy is in practice today, with Ceva powering more than 17 billion of the world’s most innovative smart edge products from AI-infused smartwatches, IoT devices and wearables to autonomous vehicles and 5G mobile networks.

Our headquarters are in Rockville, Maryland with a global customer base supported by operations worldwide. Our employees are among the leading experts in their areas of specialty, consistently solving the most complex design challenges, enabling our customers to bring innovative smart edge products to market.

Ceva: Powering the Smart Edge™

Visit us at www.ceva-ip.com and follow us on LinkedIn, X, YouTube, Facebook, and Instagram.

Also Read:

5G Aim at LEO Satellites Will Stimulate Growth and Competition

Navigating Edge AI Architectures: Power Efficiency, Performance, and Future-Proofing

Fitting GPT into Edge Devices, Why and How


Samtec Welcomes You to the Future with Proven 224G PAM4 Interconnect Solutions

Samtec Welcomes You to the Future with Proven 224G PAM4 Interconnect Solutions
by Mike Gianfagna on 12-18-2023 at 8:00 am

Samtec Welcomes You to the Future with Proven 224G PAM4 Interconnect Solutions

We all know about the relentless march of Moore’s Law: denser, faster, and cheaper semiconductor devices that fuel the innovations that surround us. For this discussion, I will lump the significant movement from single-chip advances to multi-chip strategies into a single thread, as devices continue to get smaller, faster, and more cost and power efficient. Across the data center, the 5G wireless network, and the plethora of AI applications that surround us, the exponential increase in processing is easy to see. There is another aspect of this technology revolution that is just as important: the ability to transmit ever-increasing data volumes at ever-increasing speed. That is the subject of this post. Let’s explore how Samtec welcomes you to the future with proven 224G PAM4 interconnect solutions.

What is 224G PAM4 and Why Does it Matter?

The purpose of any signal modulation scheme is to transmit bits of data over coax, fiber, or a PCB trace. There is more than one approach to signal modulation, but pulse amplitude modulation 4-level (PAM4) is the format that has become the leader. This approach uses four voltage levels to represent the four combinations of two bits (00, 01, 10, and 11). Because each symbol carries two bits, PAM4 delivers twice the throughput of two-level (NRZ) signaling at the same symbol rate.
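The bits-to-levels mapping can be sketched in a few lines. This is a generic illustration, not Samtec-specific; the Gray-coded mapping shown is the conventional choice because adjacent voltage levels then differ by only one bit, limiting the damage from a one-level slicing error:

```python
# Minimal PAM4 encoder sketch: two bits per symbol, Gray-coded so
# adjacent voltage levels differ by exactly one bit
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    """Map a flat bit list (even length) to PAM4 symbol levels."""
    assert len(bits) % 2 == 0
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
print(pam4_encode(bits))   # [-3, -1, 1, 3]
# Four symbols carry eight bits; NRZ would need eight symbols.
```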

There is no free lunch, however. Compared to other approaches, PAM4 has a worse signal-to-noise ratio (SNR), a higher incidence of signal reflections, and requires more expensive equipment to implement. So, the goal is to build signal channels that support PAM4 in a way that capitalizes on its benefits and neutralizes its weaknesses. 224 gigabit per second PAM4 channels will enable the next generation of communication protocols in data centers, 5G networks and other massive, networked device configurations.
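The SNR penalty is easy to quantify. Squeezing four levels into the same voltage swing that NRZ uses for two means adjacent levels are only one third as far apart, which gives the textbook figure of roughly 9.5 dB:

```python
import math

# PAM4 packs 4 levels into the swing NRZ uses for 2, so the spacing
# between adjacent levels shrinks to 1/3 of the NRZ eye opening.
# The resulting SNR penalty is the classic 20*log10(3) figure.
penalty_db = 20 * math.log10(3)
print(round(penalty_db, 1))   # 9.5
```

That ~9.5 dB is exactly why PAM4 links lean so heavily on the equalization, MLSD-style DSP, and FEC discussed elsewhere on SemiWiki.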

Samtec’s Approach to New Communication Channel Performance Demands

Samtec has developed a series of next-generation interconnect solutions that deliver the flexibility and performance required for 224 Gbps PAM4 channels. Areas such as data centers, artificial intelligence (AI), machine learning (ML), and quantum computing all need this capability, and Samtec’s products and associated roadmaps are designed to satisfy these high-performance needs.

Let’s look at the family of products serving this area from Samtec and how they work together to deliver a complete solution.

Si-Fly™ HD (on package or ASIC adjacent cable system)

  • Industry’s highest-density on-package or ASIC-adjacent cable system
  • 207 differential pairs per square inch
  • Eye Speed® AIR™ hyper low skew 33 AWG twinax cable
  • Samtec Flyover® Cable Technology
  • Designed for HDI & package substrates

Si-Fly™ BP (extreme performance backplane system)

  • Eye Speed® AIR™ hyper low skew 33 AWG twinax cable
  • 146 differential pairs per square inch
  • Samtec Flyover® Cable Technology
  • End 2 design flexibility
  • Cable-to-cable connectivity

Si-Fly™ MZ (high-density mezzanine system)

  • Industry’s highest-density board-to-board and on-package mezzanine system
  • 64 pairs in a 14 x 14 mm footprint
  • Low profile design
  • Up to 6.4 Tbps aggregate data rate 

FLYOVER® (OSFP 224 Gbps PAM4 Panel Assembly)

  • Up to 1.6 Tbps aggregate data rate
  • Eye Speed® AIR™ hyper low skew 33 AWG twinax cable
  • Samtec Flyover® Cable Technology
  • Excellent thermal performance
  • Direct attach contacts for optimized signal integrity

If you’d like to learn more about Samtec’s 224G and 112G PAM4 products, you can find plenty of material here.

Live Proof of the Technology

Samtec tends to steal the show on the exhibit floor. The company works with its partners to bring the very latest technology proof points to any show the company attends. You can read about examples of this on SemiWiki here and here. 

This past November, Samtec exhibited at Supercomputing 2023 (SC23) in Denver, Colorado. Ralph Page, Systems Architect at Samtec, provides an overview of what was shown at the conference here. It is possibly the world’s first public demonstration of an asynchronous 224G system. Here is a summary provided by Ralph of what was shown.

The demo starts with a Synopsys 224G Ethernet PHY IP that’s transmitting PRBS-31 data at 224 Gbps PAM4. The 224G data travels to a Samtec Bulls Eye® test point connector, then through low-loss coax to Samtec 1.85 mm compression-mount RF connectors on a Samtec evaluation board.

The signal then goes to a Samtec Flyover® cable system, the high-density Si-Fly HD. From there, it travels through 14 inches of 34 AWG Eye Speed® ultra-low skew twinax cable to end two of the Si-Fly HD connector system. From there, it goes to a second Si-Fly™ HD evaluation board, through Samtec 1.85 mm compression-mount RF connectors, to another Samtec Bulls Eye test point system on a second Synopsys 224G PHY for analysis. The total loss exceeds 34 dB at 56 GHz, quite impressive.

Ralph concludes by explaining that Samtec Si-Fly HD is the industry’s highest-density on-package or ASIC-adjacent cable system. High-density means it has 207 differential pairs per square inch. The figure below shows just a small portion of the complete demo layout, which takes up quite a bit of space.

Demo layout (partial)

Advanced system design requires high-performance communications, and Samtec delivers the complete solution to implement the required communication channels. If there is high-speed data comms in your future, you really need to check out what Samtec offers. And that’s how Samtec welcomes you to the future with proven 224G PAM4 interconnect solutions.


IEDM: TSMC Ongoing Research on a CFET Process

IEDM: TSMC Ongoing Research on a CFET Process
by Paul McLellan on 12-18-2023 at 6:00 am

Screen Shot 2023 12 16 at 12.16.14 PM

I attended the recent International Electron Devices Meeting (IEDM) last week. Many of the sessions are too technical and too far away from high volume manufacturing to make good topics for a blog post. As a Fellow from IBM said about 5nm at an earlier IEDM, “none of these ideas will impact 5nm. It takes ten years for a solution to go from an IEDM paper to HVM. So 5nm will be something like FinFET with some sort of copper interconnect.” And so it turned out to be.

Often there are late submissions that IEDM accepts, usually from important manufacturers such as Intel or TSMC giving the first details of their next generation process. Unfortunately, over the years, these papers have become less informative and rarely include key measures such as the pitches on the most critical layers.

This year there was a late paper from TSMC titled Complementary Field-Effect Transistor (CFET) Demonstration at 48nm Gate Pitch for Future Logic Technology Scaling. A CFET is a CMOS process where the transistors are stacked vertically, rather than being in the same plane as with all previous logic processes: planar, FinFET, nanosheet field effect transistors (NSFET, also known as gate-all-around or GAA). The paper was by about 50 different authors that I’m not going to list, and was presented by Sandy Liao. She said it was “late news” since it is very recent work.

Presumably, TSMC will have a CFET process in the future, but this paper described early research on manufacturability. The process stacks the n-transistor on top of the p-transistor. In the Q&A Sandy was asked what motivated this decision. She said that it wasn’t cast in stone and could get changed in the future, but putting PMOS on the bottom makes handling strain easier. TSMC calls this monolithic CFET or mCFET.

CFET can deliver an area reduction of 1.5X to 2X, she said. There still has to be space for some vertical routing, so you don’t usually get the full 2X you might expect from stacking the transistors. Previous studies of CFET manufacturing have used relaxed gate pitches and have not achieved gate pitches around 50nm. So this TSMC study is the first that uses a gate pitch of 48nm, which Sandy said is “pertinent to industry-level advanced node scaling.”

To accomplish this, there is a middle dielectric isolation, inner spacer, and n/p source-drain isolation. This process provides a robust foundation for future mCFET advancement which will require further innovation and additional architectural features.

Here is a TEM demonstration of the mCFET. As I already said, the nFETs are on top and the pFETs on the bottom. Both types of transistors have the channel surrounded by a single metal gate.

Sandy said she would provide some details of the fabrication process “but not too much”. It is a 20-step flow, although obviously there are many sub-steps inside each step. For now, the process is expensive to manufacture, but she said that in time engineers will solve that, and so the process will have value. Below are the 20 steps.

By introducing a middle dielectric isolation, inner spacer, and n/p source-drain isolation, the vertically stacked transistors have a survival rate of over 90% with high on-state current and low leakage. There is a six-orders-of-magnitude Ion/Ioff current ratio.

Sandy’s conclusion:
  • This is just the beginning, and there is a long way to go. But transistors in high volume cannot be worse than this. We need to work hard to generate process features for real yielding circuits with better characteristics.
  • This is just a study to pave the way for a practical process architecture that can fuel future logic technology, scaling, and PPAC advancement.
Also Read:

Analog Bits Leads the Way at TSMC OIP with High-Accuracy Sensors

TSMC N3E is ready for designs, thanks to IP from Synopsys

The True Power of the TSMC Ecosystem!


Podcast EP198: How Lightmatter Creates the Foundation for the Next Moore’s Law with Ritesh Jain

Podcast EP198: How Lightmatter Creates the Foundation for the Next Moore’s Law with Ritesh Jain
by Daniel Nenni on 12-15-2023 at 10:00 am

Dan is joined by Ritesh Jain. Ritesh is the senior vice president of engineering and operations for Lightmatter. Prior to joining Lightmatter, Ritesh was a vice president in Intel’s Data Center and AI group where he directed the hardware development across silicon packaging, power integrity, signal integrity, mechanical & thermal engineering solutions essential for the data center product roadmap.

Over two decades at Intel, Ritesh built and led several cross-functional engineering teams globally and was part of several major technology transitions and initiatives for datacenter products.

Ritesh discusses the technology and products delivered by Lightmatter and how silicon photonics-based computing and interconnect can decrease power and increase performance for advanced AI designs. Ritesh discusses the details of how Lightmatter technology can deliver substantial performance improvements and power reduction, allowing a continuation of Moore’s Law.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Sridhar Joshi of TenXer

CEO Interview: Sridhar Joshi of TenXer
by Daniel Nenni on 12-15-2023 at 6:00 am

Sreedhar Pic

Sridhar Joshi has more than two and a half decades of experience in the semiconductor industry. He spent a large part of his career working in various engineering and leadership roles at National Semiconductor and Texas Instruments, including leading a company-wide initiative for system design and technology: an ARM microcontroller-based platform for Analog Front End (AFE) designs.

Tell us about your company?

TenXer was founded in 2016 and is backed by Silicon Valley Quad (SV Quad). We have developed into a market leader, enabling remote evaluation of semiconductor ICs and solutions with our unique “Lab-on-Cloud” offering.

With this Lab-on-Cloud platform, TenXer helps system designers gain access to an IC evaluation kit or a subsystem over the internet from anywhere in the world.

What this means is that our customers can make their hardware solutions or evaluation kits (“EVKs”) available on the TenXer platform as a Device Under Test (DUT). System designers can control the system setup, including instruments, stimulus, and probes, using a browser-based application and measure the performance, all in real time!

We are currently working with leading semiconductor companies globally, with more than 80 labs hosted from Renesas and SiTime. More than 10K system engineers globally use the TenXer platform regularly.

What problems are you solving?

Choosing the right IC for use in a system design involves ordering and testing numerous EVKs. This can go on for a few months, often due to shipment delays, slow responses to application support requests, or even supply-chain issues.

The TenXer platform is designed to solve this issue, helping both semiconductor suppliers and system designers.

Don’t get me wrong, we are NOT saying that system designers will never buy EVK hardware again. In fact, they may still need to purchase EVKs as they move along the design process. We allow system designers to explore various available IC solutions, evaluate their performance in a system setup, and even co-develop a PoC to choose the right solution with instant online access. This saves time and accelerates GTM.

This also helps Semiconductor companies increase reach, improve velocity and reduce costs, resulting in significant competitive advantage, increased sales and profitability.

What application areas are your strongest?

We are quite diverse in the solutions and products we have hosted, including Motor Control, Timing & Clocks, Automotive Electronics & EV, Image Processing, and MCUs, with a focus on IoT, Sensor Connectivity, and Analog Front Ends.

What does the competitive landscape look like and how do you differentiate?

Our offering is unique; pretty much no other company offers anything comparable to the TenXer platform. Very often it is a matter of build-vs-buy for our customers. During the Covid years, companies developed some kind of remote testing capability that allows their employees to access test platforms over VPN.

Due to security reasons, these internally built solutions cannot be made available to third parties, including customers. Also, TenXer has developed hardware and software IP which can accelerate onboarding of EVKs and solutions to as little as two weeks.

Reliability, security, platform accelerator IPs, and our strong team of 40 engineers specialized in accelerating the cloudification process are our primary differentiators.

What new features/technology are you working on?

We have built the capability to develop and execute embedded software code, such as AI algorithms, on difficult-to-find hardware boards such as the Google Coral board and Nvidia Jetson Nano.

On the hardware side, we have a focus on evaluation of FPGA boards; since these usually cost upwards of $1,000, system engineers really want to ensure they buy the right boards.

We have some really cool features, such as an AI-enabled Knowledge Assistant to explore and compare solutions, and a Copilot for application support, both of which have been extensively tested by TenXer and our customers. You will be seeing these soon on the platform at www.tenxerlabs.com

How do customers normally engage with your company?

We usually engage with the Product Management, Applications Engineering and Sales teams of semiconductor companies.

The EVKs and solutions are identified by our customers and hosted on the TenXer platform within 2-8 weeks, depending on the complexity and maturity of the solution. These EVKs and solutions are interfaced to the TenXer platform and can be physically hosted at TenXer’s or the customer’s labs.

For system engineers, access is 100% free; they can sign up at www.tenxerlabs.com and start evaluating the 80 designs we are hosting as of December 2023.

Visit us at the DesignCon 2024 event in Santa Clara, Booth #607, to see in real time how TenXer Labs works and interact with our experts.

Also Read:

CEO Interview: Suresh Sugumar of Mastiska AI

CEO Interview: David Moore of Pragmatic

CEO Interview: Dr. Meghali Chopra of Sandbox Semiconductor


An Insider’s View of the 2023 Global Semiconductor Alliance’s (GSA) Annual Awards

An Insider’s View of the 2023 Global Semiconductor Alliance’s (GSA) Annual Awards
by Daniel Nenni on 12-14-2023 at 10:00 am

IMG DEF60C112922 1

My beautiful wife and I attended the annual Global Semiconductor Alliance (GSA) Awards event last week. Usually this is a solo event but since my wife is CFO of SemiWiki I was able to get her an invite. I go every year and she wanted to see what all of the excitement was about. She also knows quite a few industry people from attending the Design Automation Conference with me. Her first DAC was 1985 in Las Vegas.

Surprisingly, she had a good time. They must have broken an attendance record this year; the place was packed. It really was the Who’s Who of the semiconductor industry. I would guess that most of the GSA events were sold out this year; the ones I attended certainly were. We have used the term Rock Star in the past, but today semiconductor professionals really are Rock Stars.

The GSA was established in 1994. It was founded to address the challenges and opportunities within the semiconductor industry, fostering collaboration and innovation among its members. It really is an incredible organization and a credit to the semiconductor ecosystem. You can read more about them on their website or ask Chat-GPT.

For me the GSA is known for the events, conferences, and forums where professionals from the semiconductor industry can come together to discuss emerging trends, share insights, and collaborate on the challenges facing the industry. These events have been invaluable for me over the years, absolutely.

The theme of the networking reception was digital twins, sponsored by Cadence. They did an excellent job, even my wife said so. She actually knows what a digital twin is now, so bravo to Cadence. She was less impressed by the Tesla truck, however, and I agree completely. As my kids would say, “a serious dumpster fire”. Its digital twin must be fraternal.

The ceremony generally starts with a comedian or celebrity of some sort. I remember one year it was Jay Leno and like the rest he was not funny. They always try and tell semiconductor jokes but they have no idea what they are talking about so it just does not work. The most memorable GSA opening act for me was Steve Forbes, son of Malcolm Forbes. That man had vision (flat tax and term limits) and was very passionate and articulate.

My wife was all about the food and it did not disappoint, especially the dessert!

Then came the awards:

Dr. Morris Chang Exemplary Leadership Award

The GSA’s most prestigious award recognizes individuals, such as its namesake, Dr. Morris Chang, for their exceptional contributions to drive the development, innovation, growth, and long-term opportunities for the semiconductor industry. This year’s recipient is Dr. Rick (Lih Shyng) Tsai, CEO and Vice Chairman of MediaTek.

Back in the day Morris Chang used to personally present this award and that was worth the price of admission, even though it is free for me since I am famous. Morris now appears via video recording and did not disappoint. Morris does not pull punches. He praised Rick’s work at TSMC (1989-2014) and his time as CEO (2005-2009), but he did mention TSMC’s diversion into LED and solar under Rick. What he did not mention was Rick’s downfall: TSMC’s first and last layoff in 2009. I was in Taiwan at the time and let me tell you it was something to see. There were protests in front of TSMC; it was that shocking for TSMC and Taiwan in general. I believe it was something like 5% of the workforce, and that was after Rick said in 2008 that no layoffs were planned.

Shortly thereafter Morris Chang took over as CEO and asked all laid-off employees to return to work to make amends for what he described as “regrettable actions” to dismiss them amid the economic downturn.

I spent most of my 40-year semiconductor career orbiting TSMC, so this is from memory; go ahead and correct me if I am wrong here.

Rick then reinvented himself and joined SoC powerhouse MediaTek in 2017. Under his leadership MediaTek went from a trailing-edge to a leading-edge semiconductor company, so this award is well deserved. I saw this transformation firsthand; it was exceptional.

Rising Women of Influence Award

This award recognizes and profiles the next generation of women leaders in the semiconductor industry who are believed to be rising to top executive roles within their organizations. This year’s award was presented to Thy Tran, Vice President of Global Frontend Procurement at Micron Technology, Inc.

I do not know Thy but her story was certainly inspirational. I had to look her up and found this article which mirrored her speech. It is definitely worth a read:

From Refugee to Micron VP IEEE Spectrum

Company Awards

Most Respected Semiconductor Companies

GSA members identified the winners in this category by casting ballots for the industry’s most respected companies, judged for their vision, technology, and market leadership. This year’s recipients include:

Most Respected Public Semiconductor Company Achieving Greater than $5 Billion in Annual Sales

• NVIDIA

Most Respected Public Semiconductor Company Achieving $1 Billion to $5 Billion in Annual Sales

• Silicon Labs

Most Respected Public Semiconductor Company Achieving $500 Million to $1 Billion in Annual Sales

• Lattice Semiconductor

Most Respected Emerging Public Semiconductor Company Achieving $100 Million to $500 Million in Annual Sales

• Rambus

Most Respected Private Company

• Astera Labs

Best Financially Managed Semiconductor Companies

These awards are derived from a broad evaluation of the financial health and performance of public semiconductor companies. This year’s recipients are:

Best Financially Managed Semiconductor Company Achieving up to $1 Billion in Annual Sales

• Lattice Semiconductor

Best Financially Managed Semiconductor Company Achieving Greater than $1 Billion in Annual Sales

• NVIDIA

Start-Up to Watch

GSA's Private Awards Committee, composed of successful executives, entrepreneurs and venture capitalists, chose the winner by identifying a promising startup that has demonstrated the potential to positively change its market or the industry through innovation and market application. This year's winner is SiMa.ai.

As a global organization, the GSA recognizes outstanding companies headquartered in the Europe/Middle East/Africa (EMEA) and Asia-Pacific regions having a global impact and demonstrating a strong vision, portfolio and market leadership. Two awards were presented in this category:

Outstanding Asia-Pacific Semiconductor Company
• MediaTek

Outstanding EMEA Semiconductor Company
• Robert Bosch GmbH

Analyst Favorite Semiconductor Company

Two analyst pick awards were presented based on technology and financial performance, as well as future projections:

• Credo Technology Group was chosen by Needham & Company, LLC
• MACOM Technology Solutions, Inc. was chosen by Jefferies, LLC
This year's in-person ceremony was attended by 1,500 global executives in the semiconductor and technology industries.

All in all, a very good experience. Hopefully my wife and I will see you there next year!

Also Read:

IEDM Buzz – Intel Previews New Vertical Transistor Scaling Innovation

Webinar: “Navigating our AI Wonderland” … with humans-in-the-Loop?

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®