During the recent COVID pandemic it was common to read about automobile companies unable to deliver new vehicles, caused by the shortage of specific automotive chips. Even bad weather has shut down the supply of semiconductor parts to certain customers. This disruption of the IC supply chain has caused many companies that buy and use semiconductors to consider moving some of their chip designs to in-house programs. A recent white paper from Methodics addressed the challenges of in-house chip design.
Success stories of systems companies doing their own chip designs include Apple, Tesla, NVIDIA, Qualcomm and Broadcom. Designing ICs along with their embedded software requires experience and best practices. Many components of ICs are building blocks called IP, and there's a whole ecosystem of pre-built, common functions to choose from: radios (Bluetooth, WiFi), RAM, ROM, processors, USB, etc.
Semiconductor design also carries risks and high costs, from both the EDA software used for design and the photomasks involved in fabrication. Calculating the ROI at the very start of an IC project is required, and using an IP-based approach is a best practice. Modeling your new IC as IP blocks allows your engineering group to compare in-house development versus purchasing IP from a third party. Planning and tracking all IP is foundational to modeling costs and tracking development progress.
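As a toy illustration of that planning-stage comparison, the sketch below models build-versus-buy for a single IP block. All figures and parameter names are invented placeholders for illustration, not Methodics data or tooling:

```python
# Hedged sketch: compare in-house development vs. third-party purchase
# for one IP block. All dollar amounts and schedules are hypothetical.

def ip_cost(nre_usd, license_usd, integration_weeks, eng_cost_per_week):
    """Total cost of acquiring one IP block, however it is sourced."""
    return nre_usd + license_usd + integration_weeks * eng_cost_per_week

# In-house: high NRE (design + verification effort), no license fee.
build = ip_cost(nre_usd=1_200_000, license_usd=0,
                integration_weeks=4, eng_cost_per_week=5_000)

# Third party: license fee, little NRE, shorter integration.
buy = ip_cost(nre_usd=100_000, license_usd=400_000,
              integration_weeks=2, eng_cost_per_week=5_000)

print(f"build: ${build:,}  buy: ${buy:,}")
print("cheaper option:", "build" if build < buy else "buy")
```

Running such a model per IP block in the planning BoM is what lets the hierarchy roll up into a total-cost comparison for the whole chip.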
The Methodics recommended methodology is to start with a planning Bill of Material (BoM) to evaluate tradeoffs and perform analysis. Using a general-purpose Product Lifecycle Management (PLM) tool for 3rd party semiconductor components, and a semiconductor planning BoM tool for the in-house chip design, enables you to compare each approach. The combination of PLM and Methodics IP Lifecycle Management (IPLM) tools is shown below, as part of the build or buy decision process to improve your supply chain.
Perforce Methodics IPLM Planning BoM
The Planning BoM has all of the details for each hierarchical IP being used, along with version history as each IP goes through a release cycle. Your team transitions from using the Planning BoM to an execution BoM while using the Perforce IPLM.
Execution BoM
The IP hierarchy is defined in the Execution BoM, which should support popular data management tools like Git or Perforce Helix Core. Any version conflicts across hardware and software IPs need to be identified automatically to maintain compatibility.
Multiple DMs
Both hardware and software IPs are managed in this approach, making sure that requirements can be traced for every component. You will know which software component is delivered for each hardware component, ensuring transparency.
Release engineers are tasked with tracking each release candidate and managing the integration process, and using Methodics IPLM automates the manual integration and curation processes.
Meta-data can be used during the design process to account for ISO 26262 functional safety compliance requirements per IP. Traceability of requirements to IP blocks is captured with Methodics IPLM. Even security meta-data can be added to the IP or component hierarchy to help assess the security threats for each project, as you reference an internal IP Security Assurance (IPSA) catalog of issues for a circuit.
Summary
Systems companies are gradually adopting in-house IC design projects as a means to reduce supply chain bottlenecks. Using an IP-centric methodology is a best practice to control your ROI and start building IP re-use. The Methodics IPLM platform has been around for many years, helping to manage the challenges of IP-based design across entire corporations.
Benefits of this approach include traceability of requirements, managed IP re-use, a centralized catalog of all IP (hardware and software), both planning and execution BoMs, and analytics that show where you stand.
Chiplets are a hot topic in the semiconductor world these days. So much so that if one hasn't heard the term, that person must be living on a very isolated islet. Humor aside, products built with a chiplet-based methodology have existed for several years now. Companies such as Intel, AMD, Apple and others have integrated in-house chiplets to build these products. But the bigger opportunity lies in building products from heterogeneous chiplets, meaning chiplets from multiple vendors. Heterogeneous chiplet integration poses many technical and business challenges to overcome.
The Open Compute Project Foundation has a subgroup called Chiplet Design Exchange (CDX) that is now focused on tackling the technical challenges. The effort is a collaborative one with Palo Alto Electron, Siemens EDA and many other companies and individuals participating and contributing. Jawad Nasrullah, CEO of Palo Alto Electron gave a talk at Siemens EDA’s User2User conference in Santa Clara, CA. The following are excerpts from that presentation.
Design management of chiplet projects can be broadly divided into four stages, namely architecture, design execution, verification and signoff.
The architecture stage needs to consider multiphysics including thermal, warpage, structure, etc., on top of the conventional power, performance and area (PPA) metrics. The goal is to generate a top level golden netlist based on all of the above considerations. Even a simple design could have tens of thousands of nets and typical designs could have nets in the millions. The Open Compute Project/ODSA subgroup is happy with the capabilities of Siemens EDA’s XSI tool for managing the golden netlist.
Design automation and management become very critical in a multi-vendor tools environment. In chiplet-based designs, substrates, bridges, interposers, and the like make things more complicated. A standardized workflow is needed to tackle the many challenges. Siemens EDA's XPD solution does a good job, although there is room for improvement. Current tools in the market fall somewhat short, as they are being repurposed from their PCB-oriented origins to support packaging for chiplet-based designs. Participants and contributors to the CDX subgroup project are using Siemens Calibre 3D for signoff-related R&D, making it easier to use the Siemens XSI-generated golden netlist.
The above is the foundation for the work being done in CDX, with the goal of design automation standardization. In order for EDA tools from multiple vendors to be able to exchange design information, models need to be standardized and described in machine readable format.
The goal of CDXML is to provide a standardized format for describing chiplets, which will enable chiplets from different vendors to be easily integrated into a single system-on-chip (SoC) design. CDXML is designed to be compatible with the existing Electronic Design Automation (EDA) tools and workflows used to create and verify chiplet-based designs. Once the chiplets are defined, they need to be modeled to capture their thermal, physical, mechanical, I/O, behavioral, power dissipation, signal integrity, power integrity and testability aspects. A chiplet design kit (CDK) is a collection of tools, models and documentation that enables designers to create and verify complex chiplet-based SoCs. CDKs are to be provided by chiplet vendors in a heterogeneous chiplet marketplace.
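To illustrate what a machine-readable chiplet description might look like, here is a hedged sketch that builds a small XML fragment in Python. The element and attribute names below are invented for illustration; the actual CDXML schema is defined by the CDX working group:

```python
# Illustrative only: a made-up XML description of a chiplet, showing
# the kind of physical/thermal/interface data a CDXML-style model
# would carry. Names here are NOT the real CDXML schema.
import xml.etree.ElementTree as ET

chiplet = ET.Element("chiplet", name="example_io_die", vendor="acme")
ET.SubElement(chiplet, "physical", width_um="4000", height_um="3000")
ET.SubElement(chiplet, "thermal", max_tj_c="105")
ET.SubElement(chiplet, "interface", protocol="UCIe",
              lanes="16", data_rate_gtps="16")

print(ET.tostring(chiplet, encoding="unicode"))
```

The value of a standard like CDXML is exactly that EDA tools from different vendors can parse the same description and agree on what each chiplet is.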
CDX subgroup participants including Palo Alto Electron and Siemens EDA have contributed to a proposed standardization effort for chiplets modeling for heterogeneous integration.
You can download the whitepaper here. Those currently involved in chiplet-based designs, or who will be in the future, will find it very useful.
Dan is joined by Paul Wells, CEO of sureCore. Paul has worked in the semiconductor industry for over 25 years. His experience includes Director of Engineering for Pace Networks, where he led a product development team creating broadcast-quality video and data silicon. He worked for Jennic Ltd as VP of Operations, successfully building the team from scratch as the company transitioned to a fabless model, having earlier been responsible for the engineering team. Paul also led a team at Fujitsu Microelectronics supporting ASIC customers in Europe and Israel.
Dan explores the inner workings of sureCore’s new PowerMiser memory IP with Paul. Paul explains how sureCore can achieve ultra-low power and small footprint memory architectures. He explains the importance of customer collaboration to achieve these results and discusses critical applications both today and tomorrow.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
Every country now realizes the importance of producing skilled chip designers, who could decide its success in the Chip War by creating advanced AI chips. Every country is also gearing up to build a strong semiconductor manufacturing ecosystem, rebalancing the global semiconductor supply chain so it can withstand disruptions like those experienced during the pandemic. But advanced foundries without next-gen chip designers would be like 'Fabs without Chips'. So, in this article, I want to inspire the electrical and electronics engineers who can lead the semiconductor industry as the next generation of chip designers, sharing my insights and explaining how 'Semiconductors rule the world'. I also hope this article inspires VLSI design companies to upskill and support their chip design workforce for long-term VLSI career development, prioritizing and investing in Learning and Development initiatives.
1. Chip War
The geopolitical AI cold war between the US and China is perceived as a Chip War. Artificial Intelligence is one of the essential technologies for any country seeking to emerge as a leading power with next-generation defense and space technologies. The Chip War is all about fighting for dominance in AI.
AI systems are trained in vast data centers built with cutting-edge AI chips. So, every country is funding and collaborating with global chip manufacturing companies to build advanced fabs for its future.
The advancement and implementation of AI technology rely on semiconductors, and they demand semiconductor architectural improvements. The improvements in chip design for AI will be less about raw compute performance and more about speeding the movement of data in and out of faster, more power-efficient memory systems.
Only with advanced technology nodes can we fabricate complex SoCs as cutting-edge AI chips that integrate complex neural engines and memories using new technologies like chiplets. So, every country is gearing up with advanced fabs.
2. Chip Industry
Our journey in the semiconductor industry started with making an IC with four transistors, and it continues today with complex SoCs containing hundreds of billions of transistors.
2.1 History:
Defense has been a key driver for the evolution of the semiconductor industry. Defense departments & institutes, especially DARPA in the US, funded inventing chip technologies and introduced the concept of using chips to create advanced defense technologies like next-gen missile guiding systems.
The first monolithic IC was invented by Robert Noyce at Fairchild Semiconductor, and in 1960 the first operational IC was created there. Later, the chip manufacturing ecosystem was built with a supply chain (machines and materials) spread across Taiwan, the Netherlands, South Korea, Japan, and other countries beyond the US. The chip industry emerged from Texas and Silicon Valley and expanded into other countries like Taiwan, becoming a global industry. Today, more than 90% of the most advanced chips are manufactured in Taiwan, mainly by top foundries like TSMC. Chris Miller's book 'Chip War' can walk you through the incredible journey and evolution of the chip industry.
2.2 Semiconductor Ecosystem:
The global semiconductor industry is generating yearly revenue of 500+ billion USD and is growing toward 1 trillion USD by 2030. In my view, this reflects the evolution of VLSI technology: we are making SoCs with 100+ billion transistors and evolving toward chips with 1 trillion transistors by 2030.
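As a quick sanity check on that trajectory (taking roughly $500B in 2022 and $1T in 2030 as assumed endpoints, not an official forecast), the implied compound annual growth rate is modest rather than literally exponential:

```python
# Back-of-the-envelope: doubling revenue over the 8 years from 2022
# to 2030 implies the following compound annual growth rate (CAGR).
cagr = (1_000e9 / 500e9) ** (1 / 8) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 9% per year
```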
2.3 Inventions and Innovations:
The semiconductor ecosystem incorporates various players like OEMs, IDMs, Fabless-IP, EDA, and Foundries. The ecosystem is evolving with all the inventions and innovations by its players/stakeholders. Every stakeholder contributes equally to the growth of the global semiconductor industry, and some of the highlights I am listing here for your reference.
OEM: Apple is adopting a smartphone-design approach for its MacBooks. Its M-series SoCs are now built with ARM RISC processors using 5 nm technology, replacing the popular desktop CISC processors, and their complexity ranges from 16 billion to 100 billion transistors.
IDM: Intel is growing its foundry business with the vision of creating an open system foundry that empowers chipmakers to create systems of chips using technologies such as 2.5D/3D advanced packaging, chiplets with the Universal Chiplet Interconnect Express (UCIe) standard, and system software solutions.
Fabless-IP/Chip: ARM still rules the semiconductor industry as the most successful IP company, credited with hundreds of billions of electronic devices shipped with ARM cores. Fugaku, among the world's fastest supercomputers and built with more than 7 million ARM cores, is one highlight beyond ARM's dominance in the smartphone, IoT, and automotive market segments.
RISC-V, an open ISA, began its journey toward an open era of computing from the CPU graveyard, where many proprietary, specialized ISAs lie buried. Now RISC-V is emerging as an industry-standard open ISA for all kinds of processors, both general-purpose and specialized. Esperanto, SiFive, Western Digital, Alibaba, and many more have been creating powerful new processors and cutting-edge AI chips with the RISC-V open ISA, empowering the semiconductor ecosystem.
Nvidia and AMD continue to rule the global semiconductor industry as fabless chipmakers with their next-gen state-of-the-art GPUs and CPUs.
EDA: EDA is acquiring intelligence using AI technologies to help designers achieve optimal PPA results. Synopsys has recently released Synopsys.ai as the first full-stack AI-driven EDA suite for chipmakers, following its successful DSO.ai for Design Space Optimization. EDA on the cloud similar to SaaS and Multi-die-system solutions are some of the other innovative EDA solutions from the EDA industry.
Foundry: Intel is expanding its foundry business with its open system foundry vision for advanced fabrication. TSMC is also expanding its foundry business in the US beyond Taiwan. Other players like Samsung and Micron are also investing heavily in building new foundries with advanced fabrication technology. Following the US Chips Act funding, every other country well known for semiconductor design, including India, is investing heavily in this fabrication space towards becoming self-reliant.
2.4 Investments:
The US introduced the CHIPS Act, with USD 280 billion in funding to boost semiconductor R&D and manufacturing in the US. Currently, the US produces only 10% of the chips made globally and aims to reach 30% by 2030.
India introduced a USD 10 billion incentive package for building a semiconductor chip manufacturing ecosystem in India. Foxconn and Vedanta seek to bring European chipmaker STMicroelectronics on board as their technology partner in their proposed India manufacturing unit. ISMC (a US$3 billion investment) and Singapore-based IGSS (an investment worth INR 256 billion) will set up semiconductor plants in Karnataka and Tamil Nadu, respectively.
2.5 Opportunities:
As per the Deloitte report, 2 million direct employees run the global semiconductor industry, and this fast-growing industry needs an additional 1 million workers by 2030.
India has a very fast-growing electronics system design manufacturing (ESDM) industry. According to the Department of Electronics and Information Technology, nearly 2,000 chips are being designed annually in India. More than 20,000 VLSI engineers are working on various chip design and verification aspects.
Semiconductor fab units require huge investments, vast quantities of water, uninterrupted electricity supply, high operating costs, and frequent technology replacement. This is why India's contribution to the global semiconductor industry has focused on its technical competencies in R&D and design, drawing on its talent pool of IT, design, and R&D engineers.
Academia, with 2,500+ engineering colleges, produces 1.5 million engineers in India, but only about 250,000 of them succeed in getting jobs in the IT sector. Excluding IT software services, only about 50,000 engineers enter core industries across all domains. The number of engineers directly entering the Indian semiconductor industry could be in the range of 5,000 to 10,000, which is insufficient. So we at Maven Silicon bridge the gap between academia and the semiconductor industry, collaborating with our industry partners and academia in India. Maven Silicon has also emerged as a VLSI Centre of Excellence (VLSI-COE) for the global semiconductor industry, upskilling experienced VLSI engineers worldwide through our corporate VLSI training solutions.
3. Summary
Software giants Amazon, Microsoft, Google, and Facebook are now building their own chip design centers and making chips for their data centers. Emerging technologies such as AI, 5G, cloud, IoT, and automotive demand more advanced chips and further accelerate the semiconductor industry's growth. The global semiconductor industry therefore needs millions of skilled VLSI engineers to meet this growing demand and emerge as a trillion-dollar industry by 2030.
It's very evident from history that the semiconductor industry rules the world, beyond wars. Robert Noyce, Jack Kilby, Gordon Moore, Morris Chang, and many more great engineers and entrepreneurs shaped this industry with their inventions and innovations and built the semiconductor ecosystem as its architects. As a curious electrical engineer reading this article, you could be the next to continue this incredible journey.
Spring wouldn't be the same without an opportunity to hear from some of the most visible executives of the electronic system design (ESD) market segment. The in-person CEO Outlook sponsored by Keysight and hosted by the ESD Alliance, a SEMI Technology Community, will be held Thursday, May 18, in Santa Clara, Calif.
Attendees can expect a lively and far-ranging discussion moderated by Ed Sperling, editor in chief of Semiconductor Engineering. Panelists will be asked for their perspectives about the current state of the market and insights into the future of chip design and verification and semiconductors. Attendees are encouraged to bring questions.
Joe Sawicki, Executive Vice President of Siemens EDA
Simon Segars, ESD Alliance Governing Council
The ESD Alliance's Annual Membership Meeting will precede the CEO Outlook. It will begin at 5 p.m. and offer an overview of the past year's activities and a preview of what's to come in 2023. Non-members with tickets to the CEO Outlook are invited to attend.
A welcome reception for networking, food and beverages follows the membership meeting and begins at 5:30 p.m. The hour-long panel starts at 6:30 p.m. and is open to ESD Alliance and SEMI members at no cost. Pricing for non-members is $50 per person. Details and registration can be found at: https://bit.ly/3UbF1Lf
The location is Agilent's Building 5 at 5301 Stevens Creek Blvd. in Santa Clara.
About the ESD Alliance
The ESD Alliance, a SEMI Technology Community, has a range of programs and represents members in the electronic system and semiconductor design ecosystem that address technical, marketing, economic and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry.
Given popular fascination, it seems impossible these days to talk about anything other than AI. At CadenceLIVE, it was refreshing to be reminded that design of any type, in all aspects of engineering, remains and will always be dominated by deep, precise, and scalable math, physics, computer science and chemistry. AI complements design technologies, allowing engineers to explore more options and optimizations. But it will continue to stand on the shoulders of 200+ years of accumulated STEM expertise and computational methods, wrapping around those methods rather than displacing them.
Granting this observation, where is AI useful in electronic design systems methods, and more generally, how do AI and other technologies affect business shifts in the semiconductor and electronic systems industries? That’s the subject of the rest of this blog.
AI in Cadence Products
Cadence clearly intends to be a front-runner in AI applications. Over the last few years, they have announced several AI-powered products—Cadence Cerebrus for physical synthesis, Verisium for verification, Joint Enterprise Data and AI (JedAI) for unifying massive data sets, and Optimality for multi-physics optimization. Recently, they added Virtuoso extensions for analog design, Allegro X AI for advanced PCB, and Integrity for 3D-IC designs.
As a physical synthesis product, I expect Cadence Cerebrus to be primarily aimed at block design for the same reasons I mentioned in an earlier blog. Here, I expect that reinforcement learning around multiple full physical synthesis runs drives wider exploration of options and better ultimate PPA.
Verisium has a quite broad objective in verification, spanning debug and test suite optimization, for example, in addition to block-level coverage optimization. Aside from block level coverage, I expect other aspects to offer value across the design spectrum, again based on reinforcement learning over multiple runs (and perhaps even between products in the same family).
Optimality is intrinsically a system-level analysis and optimization suite. Here, also, reinforcement learning across multiple runs can help complex multi-physics analyses—electromagnetics, thermal, signal and power integrity—to converge over more samples than would be feasible to consider in traditional manual iteration.
Virtuoso Studio for analog is intrinsically a block-level design tool because no one, to my knowledge, is building full-chip analog designs at the SoC scale (with the exception of memories and perhaps neuromorphic designs). Automation in analog design has been a hoped-for but unreached goal for decades. Virtuoso is now offering learning-based methods for placement and routing, which sounds intriguing.
Allegro X AI aims for similar goals in PCB design, offering automated PCB placement and routing. The website suggests they are using generative techniques here, right on the leading edge of AI today. The Integrity platform builds upon the large database capacity of the Innovus Implementation System and leverages both Virtuoso and Allegro for analog, RF and package co-design, providing a comprehensive and unified solution for 3D-IC designs.
Three Perspectives on Adapting to Change
It’s no secret that markets are changing rapidly in response to multiple emerging technologies (including AI) and faster moving changes in systems markets as well as economic and geopolitical stresses. One very apparent change in our world is the rapid growth of chip design in-house among systems companies. Why is that happening and how are semiconductor and EDA companies adapting?
A Systems Perspective from Google Cloud
Thomas Kurian, CEO of Google Cloud talked with Anirudh on trends in the cloud and chip design needs. He walked through the evolution of demand for cloud computing, starting with Software-as-a-Service (SaaS), driven by applications from Intuit and Salesforce. From there, the landscape progressed to Infrastructure-as-a-Service (IaaS) allowing us to buy elastic access to compute hardware without the need to manage that hardware.
Now Thomas sees digitalization as the principal driver: in cars, cell phones, home appliances, and industrial machines. As digitalization advances, digital twins have become popular for modeling and optimizing virtualized processes, applying deep learning to explore a wider range of possibilities.
To support this objective at scale, Google wants to be able to treat worldwide networked data centers as a unified compute resource, connecting through super low latency network fabrics for predictable performance and latency no matter how workloads are distributed. Meeting that goal demands a lot of custom semiconductor design for networking, for storage, for AI engines, and for other accelerators. Thomas believes that in certain critical areas they can build differentiated solutions meeting their CAPEX and OPEX goals better than through externally sourced semiconductors.
Why? It's not always practical for an external supplier to test at true systems scale. Who can reproduce streaming video traffic at the scale of a Google or AWS or Microsoft? Also, in building system process differentiation, optimizing components helps, but not as much as full-process optimization: say, from Kubernetes, to containers, to provisioning, to a compute function. That scope is difficult for a mainstream semiconductor supplier to manage.
A Semiconductor Perspective from Marvell
Chris Koopmans, COO at Marvell, talked about how they are adapting to evolving systems-company needs. Marvell is squarely focused on data infrastructure technology in datacenters and across wireless and wired networks. AI training nodes and other nodes must be able to communicate reliably at high bandwidth and low latency, at terabytes per second, across data-center-scale distances. Think of ChatGPT, which is rumored to need ~10K GPUs for training.
That level of connectivity requires super-efficient data infrastructure, yet cloud service providers (CSPs) need all the differentiation they can get and want to avoid one-size-fits-all solutions. Marvell partners with CSPs to architect what they call cloud-optimized silicon. This starts with a general-purpose component, serving a superset of needs, containing some of the right ingredients for a given CSP but over-built therefore insufficiently efficient as-is. A cloud-optimized solution is tailored from this platform to a CSP’s target workloads and applications, dropping what is not needed and optimizing for special purpose accelerators and interfaces as necessary. This approach allows Marvell to deliver customer-specific designs from a reference design using Marvell-differentiated infrastructure components.
An EDA Perspective from Cadence
Tom Beckley, senior VP and GM for the Cadence Custom IC & PCB group at Cadence, wrapped up with an EDA perspective on adapting to change. You might think that, with customers in systems and semiconductor design, EDA has it easy. However, to serve this range of needs a comprehensive “EDA” solution must span the spectrum—from IC design (digital, analog and RF) to 3D-IC and package design, to PCB design and then up to electro-mechanical design (Dassault Systèmes collaboration).
Add analytics and optimization to the mix, ensuring electromagnetic, thermal, signal and power integrity, so that customers can model and optimize complete systems (not just chips) before the hardware is ready, all while those customers work on tight schedules with increasingly constrained staffing. Together, that's a tall order. More collaboration, more automation, and more AI-guided design will be essential.
With the solutions outlined here, Cadence seems to be on a good path. My takeaway: CadenceLIVE 2023 provided a good update on how Cadence is addressing industry needs (with a healthy dose of AI), plus novel insights into systems, semiconductor, and design industry directions.
I don't know the story behind the name Alchip. I've been asking this question ever since its founding in 2003 and still haven't found the answer. Wikipedia sometimes provides insights and stories behind the names of companies, products and services, but I couldn't find any regarding the name Alchip. One thing is for sure: after its consistent record-breaking financial results for many years in a row, no one is going to confuse the "Al" in the name with what "Al" stands for in the periodic table of chemical elements.
Alchip just announced financial results for 2022, breaking records on revenue, operating income, net income and earnings per share (EPS). It achieved this in spite of lower-than-expected performance caused by a substrate shortage affecting inference chip shipments to North America. NRE revenue accounted for 40% to 45% of total 2022 revenue, with ASIC sales accounting for 55% to 60%. That's upwards of $184 million in NRE revenue and is significant in itself. This bodes well for Alchip's future production revenue. Artificial Intelligence (AI) is becoming a major driver in the projected growth of the semiconductor market. System companies are getting directly involved in SoCs and working with companies such as Alchip to ensure differentiation and profitability of their products. The number of design starts is projected to continue to grow, driven by many growth applications. This also bodes well for Alchip's future.
Success Requires Focus
In the ASIC industry, those who are consistently successful have to judiciously overcome the many challenges thrown at them. Consistent success doesn't arrive by happenstance or luck. It requires focused dedication to the ASIC model and ongoing strategic investments to stay on top. Alchip has always focused on delivering leading-edge services to its customer base, with high performance computing (HPC), AI, networking and storage as key markets to pursue. While high-end markets and customers can offer high rewards, they also demand high investments. Without laser-like focus, players try to be everything for everybody, spreading their investments too thin. Alchip, on the other hand, has shown significant growth in design wins in its target markets through its focus and business acumen.
Design Technology and Infrastructure
Alchip has stayed with market trends and developed the design technology, infrastructure and methodologies to serve its focus markets. It has consistently stayed on top of the latest process nodes from TSMC, the leading foundry. Not only has it developed capability to support 2.5D/3D packaging in general, it has also been qualified to support TSMC's CoWoS packaging technology. The company has developed and continues to enhance the following:
Robust yet flexible design methodology
Flexible engagement model (both commercial and technical)
Best-in-class IP portfolio (access to third-party IP and in-house IP/customization)
Heterogeneous chiplet integration capability
Advanced packaging and test capabilities
Results Speak for Themselves
Over the last four years (2019 revenue not in the above graphic), Alchip’s revenue derived from the two leading-edge processes has grown from 60% to 88% in 2022. Over the same period, its revenue derived from the HPC market segment has grown from 59% to 82% in 2022. When Networking and Niche markets are added in, the share reaches a whopping 94%.
You can read the entire press announcement of Alchip’s 2022 financial results here.[Link once announcement goes public on May 1st]
About Alchip
Alchip Technologies Ltd., founded in 2003 and headquartered in Taipei, Taiwan, is a leading global provider of silicon design and production services for system companies developing complex, high-volume ASICs and SoCs. Alchip provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced nodes, including 7nm, 6nm, 5nm and 4nm processes. Alchip has built its reputation as a high-performance ASIC leader through its advanced 2.5D/3D package services and its CoWoS/chiplet design and manufacturing experience. Customers include global leaders in AI, HPC/supercomputing, mobile phones, entertainment devices, networking equipment and other electronic product categories. Alchip is listed on the Taiwan Stock Exchange (TWSE: 3661).
As I sift through mounds of semiconductor press releases trying to figure out their relevance (with mixed results), I consider it a learning experience even when they don’t really tell me anything. This one, however, tells me two very important things:
1) Arm is a much more competitive company under its new leadership. I saw a noticeable change in its press releases after SoftBank bought Arm back in 2016, and it is great to see the company back in the game. We can expect more of this, maybe even at a higher level, once the Arm IPO, which I am highly anticipating, goes through this year.
2) Silicon Catalyst continues to be a positive disruptive influence in the semiconductor industry, even more so than I imagined when I first spoke to the founders back in 2015. I have been involved with dozens of start-up companies during my 40-year semiconductor career and know firsthand how important they are. Anything that helps the start-up ecosystem is greatly appreciated, but let me tell you, Silicon Catalyst has by far exceeded even my extremely high expectations.
We first reported the Silicon Catalyst-Arm partnership in 2020, the first of many Silicon Catalyst announcements and events we have covered. For Arm to choose Silicon Catalyst for this event is very high praise indeed. Rather than summarize this historic event, here is today’s press release in its entirety:
Silicon Catalyst announces “Silicon Startups Contest” in partnership with Arm
Worldwide call for applicants to qualify and win significant commercial and technical support from Arm
Silicon Valley, California and Cambridge, UK – May 10, 2023 – Silicon Catalyst, the world’s only incubator focused exclusively on accelerating semiconductor solutions, is pleased to announce a “Silicon Startups Contest” in partnership with Arm. The contest, launching today, is organized and administered by Silicon Catalyst and is directed towards early-stage entrepreneurial teams developing a system-on-chip (SoC) design using Arm® processor IP (intellectual property), proven in more than 250 billion chips shipped worldwide.
The contest offers an opportunity for silicon startups to win valuable commercial, technical and marketing support from Arm and Silicon Catalyst. The winner will receive Arm credit worth $150,000, which could cover IP fees for a complete embedded system, or significantly contribute to the cost of a higher performance application. In addition, both the winner and two runners-up will receive:
No cost, easy access to an extensive SoC design portfolio including a wide range of Cortex processors, Mali graphics, Corstone reference systems, CoreLink and CoreSight system IP.
Free tools, training, and support to enhance your team
$0 license fee to produce prototypes
Cost-free Arm Design Check-in Review with Arm’s experienced support team
Entry to an invitation-only Arm ecosystem event with a chance to be featured and connect with Arm’s broad portfolio of silicon, OEM and software partners
Investor pitch review and preparation support by Silicon Catalyst, with an opportunity to present to the Silicon Catalyst Angels group and their investment syndication network.
“We believe that Arm technology is for everyone, and early-stage silicon startups trust Arm to deliver proven, validated computing platforms that enable them to innovate with freedom and confidence,” said Paul Williamson, senior vice president and general manager, IoT Line of Business at Arm. “Since its launch, Arm Flexible Access for Startups has enabled around 100 startups with access to our wide portfolio of IP, extensive ecosystem and broad developer base, and we look forward to seeing what creativity this prize inspires in the exciting new startups that enter this contest.”
The contest is open to startup companies at the pre-seed, seed and Series A stages that have raised a maximum of $20M in funding. All contest applicants will be considered for acceptance into the Silicon Catalyst Incubator/Accelerator. Judges include senior executives from both Arm and Silicon Catalyst.
“Arm was the first member of our ecosystem to join as both a Strategic Partner and an In-Kind Partner. Their Flexible Access program is a game-changer for startups. Through this program, silicon startups can move fast, experiment with ease, and design with confidence – so it’s a highly valuable part of the contest prize,” stated Pete Rodriguez, Silicon Catalyst CEO. “Entrepreneurial teams entering the contest will also automatically be applying to our Incubator, with the winning company receiving credit with Arm that could give them a significant head start in the commercialization of their product, as well as the opportunity to present to the Silicon Catalyst Angel investment group and their syndication network of investment partners.”
The contest will run from May 10, 2023 through to June 23, 2023. The contest winner and two runner-up companies will be announced in early July 2023. Contest rules and application details can be found at https://siliconcatalyst.com/arm-sic-contest-2023
About Silicon Catalyst
“It’s about what’s next”
Silicon Catalyst is the world’s only incubator focused exclusively on accelerating semiconductor solutions, built on a comprehensive coalition of in-kind and strategic partners to dramatically reduce the cost and complexity of development. More than 900 startup companies worldwide have engaged with Silicon Catalyst and the company has admitted 97 exciting companies. With a world-class network of mentors to advise startups, Silicon Catalyst is helping new semiconductor companies address the challenges in moving from idea to realization. The incubator/accelerator supplies startups with access to design tools, silicon devices, networking, and a path to funding, banking and marketing acumen to successfully launch and grow their companies’ novel technology solutions. Over the past seven plus years, the Silicon Catalyst model has proven to dramatically accelerate a startup’s trajectory while at the same time de-risking the equation for investors. Silicon Catalyst has been named the Semiconductor Review’s 2021 Top-10 Solutions Company award winner. More information is available at www.siliconcatalyst.com
About Silicon Catalyst Angels
Silicon Catalyst Angels was established in July 2019 as a separate organization to provide access to seed and Series A funding for Silicon Catalyst portfolio companies. What makes Silicon Catalyst Angels unique is not only the investment group’s visibility into a semiconductor-focused deal flow pipeline, but also a membership comprised of seasoned semiconductor veterans who bring a wealth of knowledge along with their ability to invest. Driven by passion and a desire to ‘give back’, our members understand the semiconductor market thanks to a lifetime of engagement in the industry. When you couple our members’ enthusiasm, knowledge, and broad network of connections with companies that have been vetted and admitted to Silicon Catalyst, you have a formula that is, to date, nonexistent within the investment community. More information about membership can be found at www.siliconcatalystangels.com
Masks have always been an essential part of the lithography process in the semiconductor industry. With the smallest printed features already being subwavelength for both DUV and EUV cases at the bleeding edge, mask patterns play a more crucial role than ever. Moreover, in the case of EUV lithography, throughput is a concern, so the efficiency of projecting light from the mask to the wafer needs to be maximized.
Conventional Manhattan features (named after the Manhattan street blocks, or the lit building windows in the evening) are known for their sharp corners, which naturally scatter light outside the numerical aperture of the optical system. To minimize such scattering, one may turn to Inverse Lithography Technology (ILT), which allows curvilinear feature edges on the mask to replace sharp corners. For the simplest example of where this is useful, consider the target optical image (or aerial image) at the wafer in Figure 1, which is expected from a dense contact array with quadrupole or QUASAR illumination, resulting in a 4-beam interference pattern.
Figure 1. A dense contact image from quadrupole or QUASAR illumination, resulting in a four-beam interference pattern.
Four interfering beams cannot produce sharp corners at the wafer, only somewhat rounded ones (the image is built from sinusoidal terms). A sharp feature corner on the mask would produce the same roundness, but with less light arriving at the wafer, since a good portion of the light is scattered out. Light can be transferred to the wafer more efficiently if the mask feature has a curvilinear edge with the same roundness, as in Figure 2.
Figure 2. Mask feature showing curvilinear edge similar to the image at the wafer shown in Figure 1. The edge roundness ideally should be the same.
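The rounded image described above follows directly from the interference math. As a sketch (not from the article; the pitch value and grid are illustrative assumptions), the aerial image of four plane waves arriving at ±kx and ±ky is a sum of two cosines, squared, which can only produce rounded contours:

```python
import numpy as np

# Illustrative sketch: aerial image of a dense contact array formed by
# four interfering plane waves (quadrupole-style illumination).
# The pitch value is an assumption for demonstration only.
pitch = 80e-9                      # contact pitch (assumed)
k = 2 * np.pi / pitch              # spatial frequency of the interference

x = np.linspace(-pitch, pitch, 201)
X, Y = np.meshgrid(x, x)

# Sum of four plane waves: exp(+ikx) + exp(-ikx) + exp(+iky) + exp(-iky)
field = 2 * np.cos(k * X) + 2 * np.cos(k * Y)
intensity = field ** 2             # aerial image intensity (arbitrary units)

# Bright spots (contacts) repeat at the pitch, and their contours are
# rounded: no combination of four sinusoids can form a sharp corner.
print(f"peak intensity (arb. units): {intensity.max():.1f}")
```

Plotting a contour of `intensity` at a fixed threshold traces the rounded contact shapes of Figure 1; there is no threshold at which a square corner appears.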
With curvilinear edges, the amount of light scattered out can ideally be reduced to zero. Yet despite this advantage, it has been difficult to make masks with curvilinear features: they require more mask writer information to be stored than Manhattan features, and the extra processing time reduces writer throughput. The data volume required to represent curvilinear shapes can be an order of magnitude greater than for the corresponding Manhattan shapes. Multi-beam mask writers, which have only recently become available, compensate for the loss of throughput.
Mask synthesis (designing the features on the mask) and mask data prep (converting those features to the data directly used by the mask writer) also need to be updated to accommodate curvilinear features. Synopsys recently described the results of its curvilinear upgrade. Two highlighted features for mask synthesis are machine learning and Parametric Curve OPC. Machine learning is used to train a continuous deep learning model on selected clips. Parametric Curve OPC represents the curvilinear layer output as a sequence of parametric curve shapes in order to minimize data volume. Mask data prep comprises four parts: Mask Error Correction (MEC), Pattern Matching, Mask Rule Check (MRC), and Fracture. MEC compensates for errors in the mask writing process, such as electron scattering from the EUV multilayer. Pattern matching operations search for matching shapes and become more complicated without the restriction to 90-degree and 45-degree edges. Likewise, MRC needs new rules to detect violations involving curved shapes. Finally, fracture needs to not only preserve curved edges but also support multi-beam mask writers.
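The data-volume argument behind Parametric Curve OPC can be made concrete with a toy comparison. The sketch below (my own illustration, not Synopsys' actual data format; the radius and tolerance values are assumptions) counts how many polygon vertices a piecewise-linear approximation of a circular contact edge needs to stay within a given edge-placement tolerance, versus the fixed handful of control points a parametric (cubic Bezier) representation needs:

```python
import math

def polygon_vertices(radius, max_error):
    """Vertices needed so a regular polygon approximating a circle keeps
    its chord sagitta (max deviation from the arc) below max_error."""
    # Sagitta of a chord subtending angle theta: s = r * (1 - cos(theta/2))
    theta = 2 * math.acos(1 - max_error / radius)
    return math.ceil(2 * math.pi / theta)

radius = 40.0   # nm, assumed contact-edge radius
tol = 0.01      # nm, assumed edge-placement tolerance

n_poly = polygon_vertices(radius, tol)
# A circle is covered by 4 cubic Bezier segments: 4 * 4 = 16 control points.
n_bezier = 16

print(f"piecewise-linear vertices: {n_poly}, Bezier control points: {n_bezier}")
```

At tight tolerances the vertex count runs roughly an order of magnitude above the parametric count, which is the gap the article attributes to curvilinear data and which Parametric Curve OPC is designed to close.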
“Strategy” is a word sometimes used loosely to lend an aura of visionary thinking, but in this context it has a very concrete meaning. Without a strategy, you may be stuck with decisions you made on a first-generation design when implementing follow-on designs, or face major rework to correct issues you hadn’t foreseen. Making optimum architecture decisions for the series at the outset is key. Will it support replicating a major subsystem, allowing more channels in premium versions for more sensors or more video streams? Can the memory subsystem scale to support increased demand? Careful planning and modeling, checking target bandwidths and latencies, is a necessary starting point. However, architectural feasibility alone may not be sufficient to ensure scalability for one critical component: the interconnect between the function blocks in the design.
Strategies and risks for interconnect
The startup strategy. Starting with no design infrastructure, part of your funding must be committed to design tools and essential IP. Some CPU cores come with low-cost access to an interconnect generator based on a crossbar technology. Or perhaps you decide to build your own generator – how hard can that be?
This strategy may work well on the first-generation design. Crossbar-based interconnect is well-established for entry-level designs but exhibits a glaring scalability weakness as systems become more complex. Area consumed by interconnect grows rapidly as the number of initiators and targets grows, creating more challenges for bandwidth, latencies and layout congestion. Problems become acute in follow-on designs as target and initiator counts increase to merge multiple market demands into a common product. Designs must also be as robust as possible to IP changes. A home-grown bus fabric may have worked well with the IP portfolio for the launch design, but what if one IP fails to measure up in the next product? A workaround may be possible but would kill your margins. A better IP is available but only with an interface you don’t yet support. Designing and fully verifying a new protocol will take more time than you have in the critical path to product release.
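The scaling gap described above can be sketched with a back-of-the-envelope count (an illustrative model of my own, not vendor data): a full crossbar needs a dedicated path for every initiator/target pair, so its cost grows with the product of the two counts, while a mesh-style NoC adds links roughly linearly with node count:

```python
import math

def crossbar_links(n_initiators, n_targets):
    # Full crossbar: one switch path per initiator/target pair.
    return n_initiators * n_targets

def mesh_noc_links(n_nodes):
    # Square-ish 2D mesh NoC: link count grows roughly linearly with nodes.
    rows = math.ceil(math.sqrt(n_nodes))
    cols = math.ceil(n_nodes / rows)
    return rows * (cols - 1) + cols * (rows - 1)

for n in (8, 16, 32, 64):
    print(f"{n:3d} endpoints: crossbar {crossbar_links(n, n):5d} paths, "
          f"mesh NoC {mesh_noc_links(n):4d} links")
```

The quadratic-versus-linear divergence is why a crossbar that was fine at launch becomes a bandwidth, latency and congestion problem as follow-on designs add initiators and targets.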
If you are going to use a crossbar interconnect in your first-pass design, set clear expectations that this will be a proof-of-concept build. It is already widely accepted that scalable interconnect must be based on NoC technology; to transition to a scalable market product, it is almost certain you will have to redesign around that technology. Commercial NoC IP generators already support the full range of AMBA and other protocol standards, limiting risk if you need to change IP. Then again, you could just start with a NoC and avoid the later risks.
The “What we have works and change adds risk” strategy.
Risk in change is an understandable concern but must be balanced against other risks. If it was tough to close timing on your last design and your next design will be more complex, you may be able to battle through and make it work, but at what cost? Pride in surviving the challenge will dissipate quickly if PPA is compromised.
This is not a hypothetical concern. One large company planned to reduce total system cost by designing out a chip they were buying externally. They already had all the tooling and expertise needed to make this happen. The plan seemed like such a no-brainer that they built the expectation into forward projections to analysts: improved margins at more competitive pricing. But they couldn’t close timing at target PPA on their in-house replacement. To continue to deliver the larger system, they were forced to extend their contract with the existing external supplier, missing their projections and getting a black eye. For the next generation, they switched to a commercial NoC solution and were able to complete the design-out successfully.
The “Our interconnect is differentiating” strategy.
There are a few system architectures for which the interconnect architecture must be quite special, commonly mesh networks or more exotic topologies like a torus. Applications demanding such topologies are typically high-premium multi-core server systems, GPUs and AI training systems. Even here, commercial NoC generators have caught up, to the point that market-leading AI systems companies now routinely use these NoCs, suggesting that, fundamentally, differentiation even in these high-end designs is not in the NoC. Just as for other IP, the trend is toward commercial solutions for all the usual reasons: perhaps initially only comparable to the in-house option, but proven across an industry-wide range of SoCs, continually enhanced to remain competitive, with lower total cost of ownership, always-on support and resilience to expert staff turnover.
In a challenging economic climate, it has become even more important for us to pick our strategic battles carefully. People who work on NoC design are often among the best designers in the company. Where is the best place to use those designers? In further securing your lead in truly differentiating features, or in continuing to support NoC technology you can buy off-the-shelf?
If these arguments pique your interest, take a look at Arteris’ FlexNoC and Ncore cache-coherent interconnect IP. Over 3 billion Arteris-based SoCs have shipped to date across a wide range of applications.