This past week's overreaction to Canon echoes the Sculpta Scare
Nanoimprint has made huge strides but is still not at all competitive
Shows basic lack of understanding of technology by some pundits
Chip industry has been searching for alternatives that don’t exist
Much ado about nothing much…..
This past week we saw a huge negative knee-jerk reaction in ASML's stock due to Canon's announcement of a nanoimprint tool. Somehow the market and many so-called "analysts" got hot and bothered, suggesting this would be the end of ASML as we know it. Not many people seemed to do any serious fact checking, or even a brief analysis, prior to writing ASML's obituary.
Perhaps there is just a natural schadenfreude in the market over companies that have a monopoly along with the associated high valuation. Maybe everyone just wants to see the top dog knocked off their pedestal, just a little bit.
The problem is that it's not the case: ASML is as rock solid as ever, and Canon will have, in essence, zero impact on ASML's business.
Echoes of the AMAT “Sculpta Scare Stampede Stupidity”
The Canon news was a carbon copy of the overreaction to the Applied Materials Sculpta tool, which was inappropriately introduced at the SPIE lithography conference even though it's nothing more than an etch tool. Applied called it an imaging tool even though it clearly is not. People with zero technical understanding suggested it was the end of double patterning and that ASML's tool sales would be cut in half.
Obviously this is the furthest thing from the truth and Applied was clearly trying to steal some of ASML’s value in the lithography world.
Now, more than six months after the Sculpta scare, it seems most investors have finally figured out it will have zero impact on ASML. Sculpta has not taken the market by storm.
Back when Sculpta was announced, many pundits called it an "existential threat" to ASML….this past week we heard the same exaggerated "existential threat" claim….NOT!
Much like the Applied Sculpta technology, the Canon technology has also been around for decades and has been struggling as a developing technology.
Nanoimprint has made huge strides but has very basic limitations
Canon got into the nanoimprint business by buying Molecular Imprints of Texas in 2014. Molecular Imprints had been struggling for quite a while and never got significant traction. One early direction was using nanoimprint for surface modification of disk drive platters with micro patterns. Use in the semiconductor industry back then was a far-off fantasy, limited to the repetitive patterns of memory devices.
Defectivity and alignment have been perpetual problems for nanoimprint. We applaud Canon for making excellent progress in these and other areas, through the relentless engineering Japanese firms are known for, but basic technical limitations remain.
There could be some potential applications for nanoimprint in memory, which is more tolerant of defectivity than logic and runs at lower resolution, but it is still quite a ways off from being a "real world" HVM (high volume manufacturing) solution.
DSA & multibeam are other “boogeymen” to be aware of
If, six months from now, some company announces a breakthrough in DSA (directed self-assembly) or multi-beam electron beam direct write systems that is touted as an "existential threat" to ASML, just go out and buy ASML's stock in the face of the herd mentality……
DSA has also been around for decades as a lithography alternative, with its own set of limitations comparable to nanoimprint's; it has long been the wished-for alternative to standard lithography.
There is also direct write electron beam technology, which offers much higher resolution than EUV but is millions of times slower: like copying a newspaper with a pencil rather than a printing press. There are attempts to use massively parallel "pencils," but it is obviously still incredibly slow.
Lots of litho ASML wannabes exist but nothing is real
As lithography costs go exponential the hope for alternatives grows
Part of the overreaction to non-viable litho alternatives is that the cost of litho is growing exponentially, and so is ASML's monopoly.
We attend many industry conferences and keep up to date on the latest trends. We go out of our way to attend conferences that no industry analyst would ever attend, let alone know about, such as the recent SPIE Photomask & EUV conference. DSA, nanoimprint and other technologies are always discussed at such conferences, but anyone serious in the industry knows there are no viable alternatives anywhere near the horizon that would impact ASML.
There are still hopes and dreams of alternatives that intensify as current litho costs grow faster than any other semiconductor equipment segment.
We are also sure that China is trying harder than anyone else to come up with an alternative to current sanctioned litho tools. If DSA, nano imprint or direct write were viable, they would be doing it.
The Stock
Despite all the uneducated, hysterical overreaction this past week over a "nothing burger" product announcement, nothing has changed at all for ASML due to the Canon announcement.
Far bigger, more real and relevant issues are the global macroeconomic outlook, oversupply in chips, the China sanctions, etc.
ASML’s monopoly and market position haven’t changed, the only significant variable remains the market itself.
ASML remains the most dominant player in the semiconductor equipment space by far and is appropriately valued as such.
Canon’s imprint threat is no more real than the monster under the bed….
About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.
Unreachability (UNR) analysis, finding and definitively proving that certain states in a design cannot possibly be covered in testing, should be a wildly popular component in all verification plans. When the coverage needle stubbornly refuses to move, where should you focus testing creativity while avoiding provably untestable logic? Seems like a no-brainer to figure this out if you want to reduce effort and schedule, yet UNR still is not as mainstream as you might imagine. A talk by Luv Sampat (Senior Engineer at Qualcomm) showed where the simple UNR premise falls short and shared a path forward at a recent Synopsys VC Formal SIG event.
Context
Unreachability analysis is based on formal technologies and, as usual, is best applied to IP-level tasks, in this case Qualcomm Hexagon DSP cores. In different configurations these cores are used in products from Bluetooth earbuds all the way up to datacenter AI platforms. Coverage analysis requires UNR setup, run, and evaluation to scale effectively across that range.
Qualcomm assesses progress to closure through toggle, line, condition, and FSM coverage. Luv said that on one of their larger configurations they have over 200 million coverage goals. That is important not only because checking that many goals is a huge task but also because the number of claimed unreachable goals determined in analysis may also run to millions. Any given claim may result from an over-constraint or an unsupported use-case; only a designer can decide between these options and a true unreachable state.
Bounded proofs compound the problem. If a state was unreachable within proof bounds, can the bounds be increased to increase confidence? Taken together, these challenges are familiar enough in formal property checking but here potentially millions of claims may need manual review by design experts, an impractical expectation only manageable through engineer-defined blanket exceptions which undermine the integrity of the analysis. Worse yet, exceptions may not be portable to other configurations, or even between successive RTL drops for one configuration.
What about divide-and-conquer?
If you know your way around property checking this may still not seem like a real problem. There are multiple techniques to divide a big problem into smaller sub-problems – black-boxing, case analysis, assume-guarantee, etc. Why not use one or more of those methods?
The first and most obvious problem is that UNR is supposed to be a transparent complement to simulation coverage analysis. It should just automatically adjust simulation coverage metrics, so you don't have to worry about what is not reachable. Requiring partitioning, setup, run and reconciliation through divide-and-conquer analyses is hardly transparent. Second, it is unclear how you would divide and then recombine results for, say, a toggle coverage analysis without understanding whether coverage for the sum of the parts really adds up to coverage for the whole. Third, even if such a method could be automated, would it be easily portable to other design configurations? Probably not.
Divide-and-conquer must be a part of a practical solution to conquer scaling, but not through standard methods. Luv and Qualcomm have been working together with the Synopsys VC Formal group to drive a better solution in their FCA (Formal Coverage Analyzer) unreachability app.
Auto-Scale
The method is called Auto-Scale. Consider toggle coverage, where you really need to look at the whole design but the formal model for a large IP is too big. Instead of building this full model, break each coverage metric into sub-tasks and for each build a formal model only around the cone of influence for that sub-task, dramatically reducing the size of a proof. In effect this method handles sub-task partitioning automatically, in a way that preserves proof integrity.
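The cone-of-influence idea can be sketched as a simple backward traversal over a signal fan-in graph: only signals that can affect a coverage target need to appear in that target's formal model. This is a toy illustration, not the VC Formal implementation; the netlist and signal names are invented:

```python
from collections import deque

def cone_of_influence(fanin, target):
    """Backward traversal from a coverage target over a signal fan-in
    graph, returning every signal that can affect the target. Only
    these signals need to be modeled for the per-target proof."""
    cone, work = {target}, deque([target])
    while work:
        sig = work.popleft()
        for driver in fanin.get(sig, ()):
            if driver not in cone:
                cone.add(driver)
                work.append(driver)
    return cone

# Toy netlist: each signal maps to the signals that drive it.
fanin = {
    "cov_point": ["fsm_state", "enable"],
    "fsm_state": ["fsm_next"],
    "fsm_next":  ["fsm_state", "req"],
    "unrelated": ["other_clk"],
}

print(sorted(cone_of_influence(fanin, "cov_point")))
```

Note that `unrelated` and `other_clk` never enter the model for `cov_point`, which is where the size reduction comes from.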
To balance throughput with completeness, Luv described flexibility in a per-metric grid spec, plus a "memory ladder" that lets you specify a starting memory requirement per sub-task while allowing that allocation to progressively ramp up in retries when a task hits a bound before completing the proof. Contrast that with the standard approach, where you would reserve the maximum memory available at the outset, wasting much of that allocation on quick proofs and perhaps still hitting bound limits on tough proofs.
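A minimal sketch of the memory-ladder idea, assuming a hypothetical `prove(limit_gb)` callback that reports whether a sub-task converged at a given allocation (the ladder values and result strings are invented for illustration):

```python
def run_with_memory_ladder(prove, ladder_gb=(8, 32, 128)):
    """Try a proof sub-task at increasing memory allocations.
    `prove(limit_gb)` returns "proven", "unreachable", or "bounded";
    on a bounded (inconclusive) result we retry with more memory
    instead of reserving the maximum up front for every task."""
    for limit in ladder_gb:
        result = prove(limit)
        if result != "bounded":
            return result, limit
    return "bounded", ladder_gb[-1]

# Hypothetical task that only converges once it gets >= 32 GB.
def needs_32gb(limit_gb):
    return "unreachable" if limit_gb >= 32 else "bounded"

print(run_with_memory_ladder(needs_32gb))  # ('unreachable', 32)
```

Easy tasks finish at the bottom rung and release their small allocation quickly, while only the hard tail of tasks ever claims the large allocations.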
Results are impressive. In one example (5-10 million goals), a standard approach to UNR required 6 partitions, 256GB of memory, and left 800k goals uncovered. The Auto-Scale version required no partitioning, ran in under 100GB and left only 500k goals uncovered. Further, line coverage improved from 96% to 99%, condition coverage from 77% to 78%, toggle coverage from 88% to 89% and FSM coverage from 55% to 99%. This last improvement Luv attributes to being able to see all the FSM logic in one-shot rather than needing to split the FSM into multiple runs.
In a larger test with over 50 million goals, the standard approach required 26 partitions, many hundreds of child processes and terabytes of memory. Even though they were able to complete the task in principle, the engineering team rejected these results, which they considered too noisy. The Auto-Scale approach required only 4 partitions, ran in 100-500GB of memory and left only 900k uncovered goals (versus 2.4 million for the traditional analysis). Line coverage for both approaches came out at 94%, condition coverage climbed a little from 62% to 65% and toggle coverage jumped from 70% to 93%. The engineering team was happy to use these results 😊
Auto-Scale coverage improvements and significant reduction of uncovered items make a big difference to the effort required of the dynamic verification team to close coverage: net 12% reduction in coverage goals in the first example and 9% reduction in coverage goals in the second. This looks like a real step forward to extend the value of unreachability analysis to larger IPs.
Regardless of where you grew up, you probably know the story of Alice in Wonderland. The story is over 100 years old but holds up to the present day, thanks in part to some magic from Disney. It evokes a place of admiration, wonder and magical charm. In the context of AI, it takes on a more relevant meaning as a place of great opportunity and potentially serious concerns. This year's Silicon Catalyst Semiconductor Industry Forum used this theme to illuminate the impact AI will have on our world. It was a very popular event; the venue was even moved to a larger space to accommodate the anticipated interest. The live, in-person event was held Thursday, November 9, 2023 from 5pm – 8pm Pacific Time in Menlo Park, CA. The event sold out and a replay link will be coming. Let's examine what was discussed and who was there to fully understand how Silicon Catalyst welcomes you to our AI wonderland.
About the Event
The Silicon Catalyst Semiconductor Industry Forum was launched in 2018 with a charter to enable a town-hall like event to discuss the broad impact of semiconductors on our world, beyond the traditional focus on technology, financial reviews and industry business forecasts.
Silicon Catalyst has delivered some memorable Semiconductor Industry Forum events since 2018. You can read about last year’s event, Welcome to the Danger Zone here.
I spoke with Richard Curtin, Managing Partner at Silicon Catalyst recently about the latest event – why the topic, and where he expected the discussion to go. What follows are some of his thoughts.
“The past few decades of semiconductor innovation have spanned many links in the value chain, covering manufacturing, design automation, global supply chain development and scaling. These breakthroughs have resulted in unprecedented growth areas for semiconductor applications, further enabling new business creation and delivering great economic returns for stakeholders and societal benefits for the world’s population. The impact of these innovations on our society is truly remarkable, especially now with the widespread application of AI to all aspects of our daily life and the world’s industries.
But as we've seen and experienced in 2023, in the context of AI, it takes on a more relevant meaning as a place of great opportunity but also potentially serious concerns. To this point, check out the coverage of the developing AI-angst in recent broadcasts: a 60 Minutes episode and Real Time with Bill Maher.
In my personal opinion, the real question to be addressed: are we the proverbial frog in the pot of water?”
Event Details
This year, the topic was AI – its impact on industry, our world and overall innovation. The potential risks of AI deployment and government intervention are relevant to the discussion as well. Here is a summary of the items that were discussed. This is just a start, there will be more.
What are the AI technologies that will create new business models and industries?
What are the implications to semiconductor industry success for incumbents & startups?
How do we address the power-hungry AI hyper-scalers’ impact on our energy resources?
What impact will potential government and industry regulations have on innovation?
Who Presented?
The main event was a spirited panel discussion on the topics above with a group of high-profile executives. The panelists shared their thoughts on how best to address some key questions that arise as we look to navigate the years ahead in our new AI wonderland. There was also a live Q&A with the audience.
The panel was moderated by David French – CEO of SigmaSense and a Silicon Catalyst Board Member. Mr. French's career spans a broad set of experiences in virtually all aspects of research, design, manufacturing, marketing, and business management within the semiconductor industry, most recently as CEO of SigmaSense, a company developing breakthrough software-defined sensing technology.
The panelists were:
Deirdre Hanford – Chief Security Officer, Corp Staff, Synopsys; CHIPS Act Department of Commerce Industrial Advisory Committee. Deirdre leads efforts to drive industry awareness and enablement for secure design from software to silicon to support business in EDA, IP, and Software Integrity. Ms. Hanford previously served as co-general manager of Synopsys’ Design Group. She has held a number of positions at Synopsys since joining the company in 1987, including leadership roles in general management, customer engagement, applications engineering, sales, and marketing.
Moshe Gavrielov – Former CEO of Xilinx; Board member of TSMC and NXP. Mr. Gavrielov served as President and CEO of Xilinx, Inc. from January 2008 to January 2018. Prior to that, he served at Cadence Design Systems as Executive Vice President and General Manager of the verification division. He also held a variety of executive management positions at LSI Logic and engineering management positions in National Semiconductor and Digital Equipment Corporation. Since 2019, Mr. Gavrielov has served on the board of TSMC, and as of May 2023 he joined the NXP board of directors.
Ivo Bolsens – Senior Vice President, Head of Corporate Research and Advanced Development, AMD. Previously he was Senior Vice President and Chief Technology Officer (CTO) at Xilinx. The research of his team led to the industry-leading adoption of 2.5D advanced packaging technology in Xilinx products. Bolsens came to Xilinx in June 2001 from the Belgium-based research center IMEC, where he was Vice President of information and communication systems. His research included the development of knowledge-based verification for VLSI circuits, design of digital signal processing applications, and wireless communication terminals.
To Learn More
The live event was held at the SRI Conference Center in Menlo Park, CA on Thursday, November 9, 2023, from 5pm – 8pm Pacific Time. The agenda included a reception, networking, and Q&A with the panelists. If you missed the event, a replay link is available here. And that’s how Silicon Catalyst welcomes you to our AI wonderland.
In the ever-evolving world of Conversational AI and Automatic Speech Recognition (ASR), an upcoming LinkedIn Live webinar is set to redefine the speech-to-text industry. Achronix Semiconductor Corporation is teaming up with Myrtle.ai to bring you a webinar on October 24, 2023, at 8:30am PST.
Moderated by EE Times’ Sr. Reporter, Sally Ward-Foxton, the webinar will explore a revolutionary ASR solution that promises to change the acceleration game in Conversational AI. Achronix, a leader in high-performance FPGAs and embedded FPGA (eFPGA) IP, and Myrtle.ai, a company known for optimizing low-latency machine learning (ML) inference for real-time applications, are teaming up to present a technology that’s highly relevant in today’s tech landscape.
At the core of this event lies a real-time streaming speech-to-text solution based on Achronix’s Speedster7t FPGA. Imagine the power to convert spoken language into text in over 1,000 concurrent real-time streams with remarkable accuracy and speed. This isn’t just about innovative technology; it’s about the practical applications and the potential impact on your business. If you’re part of a team that relies on fast, accurate speech-to-text conversion, this webinar is tailor-made for you.
One of the key takeaways from this event is understanding how this ASR solution can significantly reduce operational expenses (OpEx) and capital expenses (CapEx) while maintaining top-tier performance. Bill Jenkins, the Director of AI Product Marketing at Achronix, highlights that it can reduce costs by up to 90% compared to traditional CPU/GPU-based server solutions. In times where efficiency and cost-effectiveness are paramount, this is knowledge that can transform your decision-making processes.
Beyond the impressive cost savings, the webinar is your opportunity to explore the fascinating capabilities of FPGAs. The Achronix Speedster7t FPGA has unique features like a 2D network on chip (NoC) and ML processor (MLP) arrays. These features have been leveraged to create an ASR product significantly more optimized than anything available today. The extremely low latency of these FPGAs makes them ideal for real-time workloads, and this event will unveil how this low-latency technology can supercharge your business’s operations.
Moreover, this ASR solution is not just about performance; it’s also about flexibility. It’s compatible with major deep learning frameworks like PyTorch and offers re-trainability for multiple languages and specialties. If your business has specific needs or requirements, this solution can be customized to suit your objectives, making it a perfect fit for a wide range of industry-specific applications.
So why should you attend this webinar? In addition to unveiling the technology itself, it's an opportunity to hear from experts in the CAI field. Bill Jenkins, an expert in Achronix-FPGA-powered ASR solutions, and Julian Mack, a Senior Machine Learning Scientist at Myrtle.ai, will guide you through this groundbreaking ASR solution, with Sally Ward-Foxton moderating the conversation and offering her take on the current CAI landscape.
It’s a unique opportunity to discover the technology that will reshape how industries process speech data. Mark your calendars and attend this webinar on October 24th at 8:30am PST; it’s your ticket to a future where technology meets innovation.
By Philippe Flatresse, Bich-Yen Nguyen, Rainer Lutz of SOITEC
I. Introduction
Automotive radar is a key enabler for the development of advanced driver assistance systems (ADAS) and autonomous vehicles. The use of radar allows vehicles to sense their environment and make decisions based on that information, enhancing safety and driving performance. Automotive radars are considered very robust against disturbing atmospheric and environmental factors, being able to instantaneously measure distance, angle and velocity and produce detailed images of the surroundings.
Initially developed for high-end vehicles, radar has gained considerable momentum over the past two decades. The first mass-produced 77 GHz radar was implemented in a Mercedes-Benz S-Class in 1998. Eight years later it was followed by a more advanced system combining 77 GHz long range radar (LRR) with two 24 GHz short range radar (SRR) sensors to address urban traffic [1]. In 2011, the democratization of automotive radar clearly began with the adoption of standard series products in middle-class vehicles.
Ten years later, the worldwide automotive radar market is primarily driven by rising demand for advanced driver assistance systems and further accelerated by the requirement for active safety systems mandated by government laws or new car assessment programs such as NCAP. The global automotive radar market has grown in response to the increased use of radar equipment per vehicle. Several carmakers have announced models with up to 10 radar sensors per vehicle starting in 2025, which will enable the creation of a radar-based 360° surround view necessary for advanced driver assistance and semi-autonomous operation. As a result, the automotive industry is currently experiencing high demand for high-precision, multi-functional radar systems, which has led to increased research and development activity in the field of automotive radar systems.
To meet the demands of the next generation of radar systems, the move to advanced CMOS technology is considered a necessary transition. Adopting CMOS technology allows for a significant increase in integration density, making it possible to create a radar transceiver that is entirely integrated onto a single chip, known as a radar system on chip (SoC). This type of design typically includes the millimeter wave frontend, analog baseband, and digital processing all on the same chip. It may also include MCUs, DSPs, memory, and machine learning engines, allowing the radar to operate independently with very few external components, thus reducing the overall BOM cost. The main nodes of choice today are 40/45nm, 28nm, and 22nm, with some even going to 16nm.
One of the most promising silicon technologies for automotive radar, already identified by several module makers, is fully depleted silicon-on-insulator (FD-SOI) [2, 3]. FD-SOI technology enables the integration of high-frequency radar components on a single chip. The technology not only improves the performance of the radar system but also allows for low-power operation, which is critical for automotive applications where power efficiency is a concern.
II. Automotive Radar trends
The use of radar technology in vehicles is expected to continue growing, driven both by an increase in the number of cars adopting radar and by the amount of radar content per vehicle. These trends are driven by the growing adoption of advanced driver assistance systems (ADAS) and will be further sustained and strengthened in the long term by the development of highly automated or autonomous driving. According to several market reports [4,5], the global automotive radar market accounted for USD 6 billion in 2021 and is estimated to reach USD 22 billion by 2030, growing at a CAGR of 20% from 2022 to 2030. In other words, sales of automotive radar (SRR, MRR, and LRR) for level 2 and above are projected to grow significantly in the coming years, from 100 million units in 2021 to 400 million in 2030, a 4X increase in less than one decade. It is worth mentioning that 50% of automotive radars will be manufactured in CMOS technologies by 2025.
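As a quick sanity check of the compound growth implied by those figures (the exact CAGR depends on the base year and convention a given report uses, so the computed rate need not match a report's headline number):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value,
    an end value, and the number of years between them."""
    return (end / start) ** (1 / years) - 1

# Figures from the market reports cited above:
# revenue USD 6B (2021) to USD 22B (2030),
# unit shipments 100M (2021) to 400M (2030).
implied_revenue = cagr(6, 22, 2030 - 2021)
implied_units = cagr(100, 400, 2030 - 2021)
print(f"Implied revenue CAGR 2021-2030: {implied_revenue:.1%}")
print(f"Implied unit CAGR 2021-2030: {implied_units:.1%}")
```

Both rates land in the mid-teens over the full 2021-2030 span, consistent with very strong sustained growth.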
The technology of automotive radar sensors has evolved over the years since its first introduction in 2000. Previously, automotive radars mainly used the 24 GHz frequency band for short-range detection and the 76-77 GHz range for longer range or more complex applications. It is important to note that the European Telecommunications Standards Institute (ETSI) and the Federal Communications Commission (FCC) have allocated a specific frequency band between 76 GHz and 81 GHz for automotive radar applications. However, due to difficulties in designing efficient and cost-effective integrated circuits at such high frequencies, a temporary frequency band around 24 GHz was made available to manufacturers while they developed 77 GHz radar transceivers. With advancements in technology, 77 GHz radar products are now well-developed, and the temporary 24 GHz frequency band has not been available since 2022. Nowadays, the trend has shifted towards using the 76-81 GHz frequency band for the development of new sensors. While research into even higher frequency bands above 100 GHz is ongoing, the integration of these technologies into vehicles and the challenges related to semiconductor technology performance are still being studied.
III. Automotive Radar challenges
Automotive radar technology is a crucial component in the development of advanced driver assistance systems (ADAS) and autonomous vehicles. However, the development and implementation of automotive radar systems come with several challenges, including the need to save lives, energy, and costs (Fig. 3).
Save Lives: Automotive radar improves vehicle safety by providing advanced warning of potential hazards on the road. To save lives, automotive radar must have high resolution to accurately detect and identify objects, low latency to provide timely warnings to the driver, and real-time classification capabilities to distinguish between different types of objects, all while dealing with environmental conditions (fog, rain, dust and other factors) that can affect the accuracy of the radar. These features allow the radar to detect and track objects such as other vehicles, pedestrians, and animals, and provide the driver with the information needed to make safe decisions on the road. Improving the resolution, latency, and classification capabilities of automotive radar technology can help to reduce accidents and save lives.
Save Energy: Automotive radar technology can also play a role in saving energy by optimizing the size, weight, and power (SWaP) of the system. This can be achieved by using efficient processing techniques and reducing the size and weight of the radar’s package. Optimizing SWaP is particularly important in electric vehicles (EVs) and hybrid electric vehicles (HEVs) where energy storage is limited. Reducing the size and weight of the radar system can also help to reduce the overall vehicle weight and improve fuel efficiency.
Save Costs: Transitioning to complementary CMOS technology is one way to reduce the cost of automotive radar systems. CMOS is a widely used manufacturing process for integrated circuits that allows for the integration of multiple functions onto a single chip. Single-chip solutions can further reduce costs by integrating all the necessary components and functions of the radar system onto a single chip. This can help to reduce the size and weight of the radar system, as well as simplify the manufacturing process. Reliability and yield are also important factors to consider when mass-producing automotive radar systems.
The success of advanced driver assistance systems (ADAS) and autonomous vehicles depends on the ability to address the challenges related to safety, energy efficiency and cost-effectiveness. A critical aspect in this regard is the choice of silicon technology, with a clear shift nowadays towards the use of CMOS technology. The technology that will dominate this field will be the one that can provide high resolution, low latency, and accurate classification, reduce energy consumption, simplify the manufacturing process and reduce costs by offering fully integrated radar systems.
IV. Automotive radar key metrics
The radar sensor is tasked with identifying and spatially locating mass-based obstacles. These can include other vehicles, bicyclists, pedestrians, animals, and even fixed obstacles. The key metrics when analyzing the performance of a radar sensor are how far it can detect objects (range), how reliably it can resolve closely spaced objects (range resolution), how finely it can resolve object velocity (velocity resolution) and how accurately it can determine object position and trajectory (angle resolution). The table below gives the technical requirements of an automotive radar sensor.
Table 1: Typical performance parameters of advanced LRR radar [6]
The latest radars are able to achieve long range and a high level of accuracy and resolution. High accuracy and resolution enable not just object detection but also object classification. However, the price you pay for more accuracy and resolution is more data. As accuracy and resolution increase, the volume of data goes up accordingly, resulting in the need for more computing power. The choice of architecture and the use of efficient CMOS technology are crucial in managing the large volume of data generated by high accuracy and resolution radar systems while keeping power consumption low, and are essential for the future of radar technology.
Let's now look at the requirements to fulfill the key metrics of automotive radars:
Range: One requirement for long range is a high-power transmitter, as it allows the radar to detect objects at a greater distance. A high sampling rate is also necessary to accurately determine the location and velocity of objects. Solutions include stacking multiple devices to increase output power, and using low-power, highly linear ADCs to efficiently process the radar signals.
Range and velocity resolution: Resolution refers to the ability of the radar to differentiate between objects that are close together. One approach to improving it is to move to higher frequency bands: the wavelength of the radar signal decreases with increasing frequency and more bandwidth becomes available, allowing for finer resolution. Achieving this requires higher fT/fmax technologies, which can generate and process the higher-frequency signal components needed for improved resolution.
Angle resolution: Angle resolution improves with the number of antenna channels, but every added channel dissipates power, so one requirement is to limit thermal issues, as high temperatures degrade the performance of the radar's electronic components. One solution is to improve the power amplifier (PA) and digital efficiency of the radar system: with a more efficient PA and lower digital power, less heat is generated, which helps reduce thermal issues.
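The requirements above follow from standard first-order radar formulas. The sketch below, with illustrative parameter values that are assumed rather than taken from the article, shows how transmit power sets range and how sweep bandwidth and observation time set range and velocity resolution:

```python
import math

C = 3e8  # speed of light, m/s

def max_range_m(p_tx_w, gain_db, freq_hz, rcs_m2, p_min_w):
    """Classic radar-equation estimate:
    R_max = [Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min)]^(1/4)
    """
    lam = C / freq_hz
    g = 10 ** (gain_db / 10)  # antenna gain, linear
    return (p_tx_w * g ** 2 * lam ** 2 * rcs_m2
            / ((4 * math.pi) ** 3 * p_min_w)) ** 0.25

def range_resolution_m(bandwidth_hz):
    # Finer range bins need wider sweep bandwidth: dR = c / (2B)
    return C / (2 * bandwidth_hz)

def velocity_resolution_mps(freq_hz, frame_time_s):
    # Doppler resolution improves with carrier frequency and
    # observation (frame) time: dv = lambda / (2 * T_frame)
    return (C / freq_hz) / (2 * frame_time_s)

# Illustrative (assumed) 77 GHz LRR numbers: 12 dBm TX power,
# 25 dBi antennas, 10 m^2 vehicle RCS, -120 dBm receiver sensitivity.
print(max_range_m(10 ** (12 / 10) * 1e-3, 25, 77e9, 10.0,
                  10 ** (-120 / 10) * 1e-3))

# 4 GHz of sweep at 76-81 GHz gives 3.75 cm range bins, versus 75 cm
# for the ~200 MHz available to legacy 24 GHz sensors.
print(range_resolution_m(4e9), range_resolution_m(200e6))

# A 20 ms observation frame at 77 GHz resolves ~0.1 m/s velocity steps.
print(velocity_resolution_mps(77e9, 20e-3))
```

Note that range scales only with the fourth root of transmit power, which is why device stacking and receiver sensitivity both matter so much.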
This section highlights the importance of having top-notch silicon technology with cutting-edge analog and millimeter wave RF capabilities in order to tackle the technological challenges posed by future automotive radars. Such technology must be able to handle a large amount of data while maintaining low power consumption.
V. FD-SOI technology, do more with less energy
The current major technology shift in the radar segment is the adoption of CMOS technologies. Current CMOS technologies available for automotive, such as 40nm, 28nm, 22nm and 16nm, provide a high level of integration for digital circuits and exhibit very good performance in RF applications. Using CMOS technology to design automotive radar systems brings a number of advantages over traditional analog radar transceivers. One of the main benefits is that CMOS allows the integration of multiple components, such as the radar transceiver, signal-processing circuits and control logic, into a single chip. This improves the resolution and density of the radar system, allowing for more accurate and reliable detection of objects. Additionally, CMOS technology is generally less expensive than traditional analog radar transceivers, which helps lower the overall cost of the radar system. By having the radar system on a single chip, the SoC can be more compact and power efficient, which matters in automotive applications where both space and power consumption are at a premium.
One way to evaluate the suitability of CMOS technology for automotive radar operating at 77 GHz is to look at the speed of the transistors. Table 2 below compares the fT and fmax of various state-of-the-art CMOS technologies. These values indicate how fast the transistors can operate and therefore how well they can handle high-frequency signals, making them a good indicator of the feasibility of CMOS for automotive radar. The transit frequencies achieved today allow CMOS technologies to penetrate a radar market traditionally dominated by BiCMOS processes, which still represent more than two thirds of the overall radar market. As shown in Table 2, among the CMOS processes currently used for radars, 22nm FD-SOI clearly outperforms both FinFET and bulk technologies and is on par with state-of-the-art SiGe technologies. Several radar chip makers, such as Bosch and Arbe, regard it as the state-of-the-art CMOS technology for radars, with transistors offering fT > 350 GHz and fmax > 390 GHz, plus several additional unique benefits described in the next sections.
Table 2: Transit Frequency comparison of various Silicon technologies used in radar applications [6], [7]
a. Unique features of FD-SOI technology
FD-SOI is a well-known technology in the field of semiconductors allowing for improved performance and lower power consumption compared to traditional bulk silicon technology. FD-SOI technology allows for more flexibility in the design and manufacturing process, making it a popular choice for a wide range of applications, including automotive radar systems. FD-SOI transistors offer several unique features, such as the ability to operate at low voltage, to cancel PVTA variations, to be quasi-insensitive to radiation and to exhibit a very high intrinsic transistor speed, which make it an ideal choice over other RF-CMOS technology alternatives (Figure 8).
i. FD-SOI and Ultra Low Voltage
Thanks to its intrinsic low-variability characteristics and body bias techniques, FD-SOI is able to operate at very low supply voltages down to 0.4V or below, making it an ideal technology for applications where power consumption is a critical concern. Lowering the supply voltage reduces the dynamic power consumption offering a unique advantage over other technologies, as it allows for more efficient power usage in applications where power is more of a challenge than performance.
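The benefit of low-voltage operation follows directly from the quadratic dependence of CMOS switching power on supply voltage. A minimal sketch, with illustrative (assumed) capacitance, activity-factor and clock numbers:

```python
def dynamic_power_w(activity, cap_f, vdd_v, freq_hz):
    # Switching power of CMOS logic: P = alpha * C * Vdd^2 * f
    return activity * cap_f * vdd_v ** 2 * freq_hz

# Illustrative numbers (assumed): 100 pF total switched capacitance,
# 20% activity factor, 500 MHz clock.
p_nominal = dynamic_power_w(0.2, 100e-12, 0.8, 500e6)  # 0.8 V supply
p_low_vdd = dynamic_power_w(0.2, 100e-12, 0.4, 500e6)  # 0.4 V supply
# Halving Vdd quarters the dynamic power at the same clock rate.
print(p_nominal, p_low_vdd, p_nominal / p_low_vdd)
```

This quadratic lever is what makes supply voltages down to 0.4 V so attractive when power, not raw speed, is the binding constraint.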
ii. FD-SOI and Soft Error Rate
FD-SOI is known for its high resistance to high-energy particles, which can cause soft errors in electronic devices. In FD-SOI, the active device region is separated from the substrate by a thin insulating layer, the buried oxide (BOX). This buried oxide reduces the device's susceptibility to charges generated in the substrate, making soft errors less likely. This makes FD-SOI a suitable technology for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving (AD).
iii. FD-SOI and Body Biasing
Another key advantage of FD-SOI is body biasing, which allows the threshold voltage of the device to be controlled post-fabrication. Body bias is a very powerful knob in automotive applications and has already been widely deployed for PVTA compensation in many consumer and automotive products. By implementing body biasing in a product, significant reductions in process, voltage, temperature and aging variation can be attained, simplifying the task of product engineers in ensuring product specifications at the 1-ppm level [10].
Body biasing is a must-have in the next radar generation, as a key technique for improving digital performance, analog performance and reliability alike.
On the digital side, adaptive body biasing (ABB) techniques have recently been developed that allow a design to maintain a target operating frequency over a wide range of operating conditions such as temperature, manufacturing variability and supply voltage [12]. In 22nm FD-SOI, this architecture reduces processor energy consumption by up to 30% and boosts the operating frequency by up to 450% compared to a design without body biasing [20].
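A toy model can illustrate how such an ABB loop behaves: a monitor (here a crude ring-oscillator frequency model) is compared against a target, and the body-bias voltage is nudged until the target is met despite temperature-induced threshold-voltage drift. All device coefficients below are invented for illustration and do not describe any real 22nm FD-SOI process:

```python
def ring_osc_freq_mhz(vdd, vth):
    # Toy delay model (assumed): frequency grows with overdrive Vdd - Vth.
    return 2000.0 * max(vdd - vth, 0.0)

def vth_v(temp_c, vbb):
    # Toy Vth model (assumed): +0.5 mV/degC drift with temperature;
    # forward back bias lowers Vth (body factor assumed -85 mV/V).
    return 0.35 + 0.0005 * (temp_c - 25.0) - 0.085 * vbb

def regulate_vbb(target_mhz, temp_c, vbb=0.0, gain=0.01, steps=200):
    # Simple integral controller, as an ABB regulator might behave:
    # more forward bias when the monitor runs slow, less when it runs fast.
    for _ in range(steps):
        f = ring_osc_freq_mhz(1.0, vth_v(temp_c, vbb))
        vbb += gain * (target_mhz - f)
        vbb = min(max(vbb, -2.0), 2.0)  # stay within the allowed bias range
    return vbb

# The loop settles on more forward bias at 125 C than at -40 C
# to hold the same 1300 MHz target.
print(regulate_vbb(1300.0, temp_c=125.0))
print(regulate_vbb(1300.0, temp_c=-40.0))
```

The real regulators in [12]/[20] are far more sophisticated, but the principle is the same: the body terminal gives a post-fabrication knob that a feedback loop can use to cancel PVT-induced frequency drift.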
In analog circuits, body bias brings several benefits. The most significant is improved accuracy, achieved by fine-tuning the operating point of the circuit. Body bias can also reduce power consumption by controlling the operating point and reducing leakage currents, and it increases the voltage headroom of the circuit, allowing operation over a wider range of supply voltages. It improves noise immunity by reducing the threshold voltage and increasing the drain current, and it can be used to optimize a circuit for specific targets such as low power, high speed, or high linearity. These benefits make body bias a valuable knob for trading off power consumption, speed, and linearity in analog circuits [17].
On the reliability side, by controlling the operating point of transistors, body bias can reduce stress on the devices, leading to improved reliability. Additionally, body bias can compensate for temperature-related changes in the threshold voltage, making the circuit more temperature stable. Body bias can also reduce the threshold voltage and increase the drain current, improving the immunity to single event effects such as cosmic rays and alpha particles. Dynamic reliability drift compensation is also a promising area of research that holds the potential to produce fully resilient automotive systems [11]. Furthermore, body bias can reduce the degradation of the threshold voltage over time, leading to improved aging performance. Finally, body bias can compensate for variations in the threshold voltage due to manufacturing processes, making the circuit more robust. These benefits make body bias a valuable tool for improving the reliability of analog circuits.
iv. FD-SOI and Analog/RF
As speed, noise, power, leakage and variability targets become ever harder to meet, FD-SOI offers a solution by providing improved matching, gain and parasitics in transistors, thus simplifying the design of analog and RF blocks. Combining as many analog/RF functions as possible on a single RF-CMOS silicon platform is becoming vital for cost and power efficiency, but RF-CMOS platforms struggle as frequency increases, particularly in the mmWave spectrum. FinFET architectures have even more limitations at these frequencies, so SiGe-bipolar platforms are often used in this range. FD-SOI, being a planar technology, does not share the limitations of 3D devices: fT/fmax in the range of 350 GHz to 410 GHz has been reported, enabling full utilization of the mmWave spectrum and making FD-SOI RF-CMOS platforms a promising option for applications such as automotive radar.
v. FD-SOI and performance booster
The Smart Cut technique has been used to transfer a biaxially strained silicon film, grown pseudomorphically on a fully relaxed SiGe buffer on a bulk-Si donor wafer, to form a strained-silicon-on-insulator (SSOI) wafer. SSOI is a natural extension of SOI, combining the advantages of FD-SOI with the carrier-mobility enhancement of tensile-strained silicon. Biaxially tensile-strained Si and compressive SiGe, the latter formed by partially relaxing the tensile strain and then locally condensing Ge [18], boost the performance of n-channel and p-channel transistors respectively [19], for both logic and RF, as shown in Figure 15. The saturation current (Ion) of the n-channel FD-SOI device gains 28% with a 20% tensile-strained Si channel (Fig. 15.a), while the p-channel gains 16% in Ion for 35% cSiGe formed by Ge condensation without relaxing the tensile strain through Ar or Si implantation prior to condensation (Fig. 15.b). Partially relaxing the tensile strain before forming the cSiGe channel yields a higher gain, greater than 20% with 25% cSiGe. Figures 15.c and 15.d show further gains from segmenting the channel width (W) into multiple narrow fingers, which converts the biaxial strain into uniaxial strain and improves performance by up to 50% for the same strain level. Figure 16 shows the transconductance gain as a function of gate length: longer gate lengths are less limited by parasitic resistance, so their gain approaches the mobility gain. The geometry dependence of the strained-channel performance gain should therefore be considered to maximize the benefit of the strain materials.
Strained Si on silicon-on-insulator (SSOI) is a natural extension of SOI, combining the advantages of SOI and the carrier mobility enhancement of tensile-strained Si for high-performance low-power applications [4].
b. FD-SOI benefits for radar system-on-chips
A system-on-chip (SoC) that integrates the analog front-end and digital signal processing is a logical choice for the next generation of radar technology, as it allows for the efficient and extensive monitoring of multiple parameters and real-time evaluation during sensor operation, which is mandatory for being eligible for safety-critical applications. As shown in the block diagram in Figure 7, a fully integrated mmWave radar includes transmit (TX) and receive (RX) radio frequency (RF) components; analog components such as clocking; and digital components such as analog-to-digital converters (ADCs), microcontrollers (MCUs) and digital signal processors (DSPs). Traditionally, the radar systems were implemented with discrete components, which increased power consumption and overall system cost.
FD-SOI appears as an ideal technology and a natural evolution for automotive radars. It combines the high mobility of an undoped channel, the smallest total capacitance for a given design rule, and low-power digital capability, with the option of Si-based mobility boosters. Together these greatly enhance both digital and RF/mmWave performance, providing an ideal platform for a fully integrated radar device.
i. Unrivalled energy efficiency
Power efficiency is a critical consideration for every automotive sensor application. Whether the vehicle is powered by fossil fuel, electricity or a combination of the two, energy consumption and thermal-management constraints compound the power-efficiency challenge. Radar sensor processing requires the utmost attention to performance-per-watt metrics, and in this domain FD-SOI exhibits a clear advantage over competing approaches. FD-SOI offers significant energy-efficiency improvements for MCUs and DSPs over other CMOS technologies: at a given technology node, FD-SOI can consume up to 40% less power or operate 30% faster than equivalent transistors, and it allows performance to be boosted at constant power, making it ideal for low-power or thermally constrained applications. Additionally, sensors with a 20% smaller form factor can be designed in FD-SOI, reducing the cost of the overall system and package. Overall, FD-SOI technology is a cost-effective and power-efficient solution for sensor applications, well suited to low-power mmWave radar systems. The benefits can translate into improved imaging-radar resolution, more features in the same footprint and reduced BoM cost.
ii. State of the Art CMOS Power Amplifier
Power amplifiers (PAs) play a critical role in automotive radar systems, as they are responsible for amplifying the weak radar signals that are transmitted and received by the radar antenna. When designing a power amplifier (PA) for radar systems, several key factors must be taken into consideration to ensure optimal performance.
High efficiency in saturation: The PA should be able to operate in saturation mode, which is commonly used in Frequency Modulated Continuous Wave (FMCW) radar systems, to achieve high efficiency while still providing a strong radar signal. For Pulse Modulated Continuous Wave (PMCW) radar systems, the PA should be able to achieve high efficiency even when operating with some back off, as linearity is required in the PA to ensure accurate radar measurements.
High output power: The PA must be able to provide a high output power to ensure a strong radar signal and a long detection range.
Stability of performance over temperature: The PA should be able to maintain stable performance over a wide range of temperatures, as automotive radar systems are subjected to harsh environments.
FD-SOI technology brings several advantages when designing a power amplifier (PA) for radar systems. One of the most significant is its high efficiency. Intrinsically, a PA in FD-SOI provides high output power thanks to the high breakdown voltage of the fully depleted transistors. In CMOS technologies, PA design is based on a cascode architecture to increase power-handling capability: the cascode stage acts as a buffer and provides additional gain, improving the overall performance of the PA. One of the main limitations of the cascode architecture in bulk or FinFET technologies is the high drain-to-substrate voltage it sees due to the biasing of the P-substrate. In FD-SOI, each transistor is fully isolated, or "floating": there is no direct contact between the substrate and the active devices, as shown in Figure 9. This eliminates the need for substrate biasing, which reduces the drain-to-substrate voltage and improves the overall performance of the PA. Device stacking in FD-SOI can reach up to 50% higher PAE than other technologies, as it distributes the input power among multiple devices, reducing the power lost as heat. In the radar context, higher performance can be achieved for both long-range radar (LRR) and short-range radar (SRR) using FD-SOI. Another important factor is stability of performance over temperature. When assisted with back-gate bias, the high thermal stability of FD-SOI can easily be maintained, enabling tight output-power control over a wide range of temperatures [13]; a variation of +/- 1dB over a 145°C temperature change is reported in [14], a crucial result for automotive radar systems subjected to harsh environments. Finally, in terms of reliability, FD-SOI PAs also exhibit better reliability figures thanks to the absence of a lateral bipolar and a higher breakdown voltage allowing a larger swing.
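For reference, power-added efficiency (PAE) relates the RF power added by the amplifier to the DC power it burns. A small sketch, using output-power and PAE figures in the ballpark of the published FD-SOI mmWave PAs cited here (e.g. [24]); the input power and DC consumption below are assumed for illustration:

```python
def dbm_to_w(p_dbm):
    # Convert dBm to watts: 0 dBm = 1 mW.
    return 10 ** (p_dbm / 10) * 1e-3

def pae_percent(pout_dbm, pin_dbm, pdc_w):
    # Power-added efficiency: the fraction of the DC budget that
    # becomes *added* RF power, PAE = (Pout - Pin) / Pdc.
    return 100.0 * (dbm_to_w(pout_dbm) - dbm_to_w(pin_dbm)) / pdc_w

# Illustrative: 17.4 dBm output, 5 dBm drive, 290 mW DC consumption.
print(pae_percent(pout_dbm=17.4, pin_dbm=5.0, pdc_w=0.29))  # ~18%
```

Every percentage point of PAE recovered is power that does not have to be removed as heat, which is exactly the thermal argument made above for angle resolution.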
Overall, FD-SOI technology provides an optimal solution for automotive radar power amplifiers: high efficiency, high linearity, high output power, high reliability, and stable performance over a wide range of temperatures.
iii. Low Power ADCs
Automotive radar systems prioritize performance at mmWave frequencies over the specific requirements of the ADC (Analog-to-Digital Converter). However, the ADC still plays an important role in the overall system performance. The ADC should have low power consumption, particularly at the high sampling rates used in Pulse-Modulated Continuous-Wave (PMCW) radar systems. Additionally, the ADC should have high linearity to accurately convert the analog signal to digital, and should be able to compensate for variations in process, voltage, and temperature (PVT). These requirements are important to ensure that the radar system operates correctly and accurately in real-world automotive environments.
The automotive radar ADC in FD-SOI (Fully-Depleted Silicon-on-Insulator) technology has several advantages for low power consumption. The smaller switches in FD-SOI have lower Ron (resistance) and lower parasitics, which results in higher SNDR (Signal-to-Noise-and-Distortion Ratio) and better switch linearity. Additionally, the FD-SOI technology allows for the use of lower power supply voltages, which further reduces power consumption.
In [15], comparing ADC performance in 28nm bulk and 28nm FD-SOI technology, the FD-SOI design runs from a single 1.1V supply instead of 1.0V/1.8V supplies, which leads to a significant reduction in power consumption, from 76.4mW down to 19.8mW. The FD-SOI ADC also achieves a higher SNDR of 60.7 dB at the Nyquist frequency, compared to 57.2 dB for the bulk design. Overall, the use of FD-SOI technology in an automotive radar ADC results in lower power consumption and improved performance.
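One way to compare the two designs on a single axis is the Walden figure of merit, the energy spent per effective conversion step. The FD-SOI numbers are from [15]; for the comparison it is assumed that the bulk part also runs at 600 MS/s:

```python
def walden_fom_fj(power_w, sndr_db, fs_hz):
    # Walden figure of merit: FoM = P / (2^ENOB * fs),
    # with ENOB = (SNDR - 1.76) / 6.02; reported in fJ/conversion-step.
    enob = (sndr_db - 1.76) / 6.02
    return power_w / (2 ** enob * fs_hz) * 1e15

fdsoi = walden_fom_fj(19.8e-3, 60.7, 600e6)  # [15], 28nm FD-SOI
bulk = walden_fom_fj(76.4e-3, 57.2, 600e6)   # 28nm bulk, fs assumed equal
print(f"FD-SOI: {fdsoi:.0f} fJ/step, bulk: {bulk:.0f} fJ/step")
```

On these numbers the FD-SOI converter is several times more energy-efficient per effective bit, not merely lower-power in absolute terms.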
c. FD-SOI ideally positioned for the radar applications
FD-SOI technology is ideally positioned for radar applications due to its advantages in system cost, power efficiency, and radar performance. Compared to alternative technologies such as SiGe or planar bulk CMOS, as shown in Figure 13, FD-SOI offers higher integration capability, a better receiver noise figure, and built-in aging and temperature compensation that can be implemented easily with body-bias techniques. On top of that, the technology exceeds mmWave automotive radar requirements for output power and power-amplifier efficiency. In short, FD-SOI is well suited to cost-sensitive automotive radar applications that require significant processing and RF mmWave capability at lower power consumption.
VI. FD-SOI and High Resistivity substrate
The CMOS mmWave era opens the door to revolutionary applications in the fields of ADAS and AD. FD-SOI is entering these fields as a versatile and flexible solution: a digital technology with state-of-the-art RF/mmWave capabilities. It provides significant benefits in the resolution, velocity, power consumption and cost requirements set by the radar market.
A clear trend is the increasing differentiation between standard cost-effective sensors for driver-assistance applications and high-performance sensors for autonomous driving. Future radar applications, such as 4D imaging, impose new levels of performance on both RF and logic devices. To fulfill all the requirements, additional improvements at the substrate level are mandatory. The high-resistivity option is seen as a new lever to further enhance the performance of RF devices (Fig. 4). High-resistivity SOI substrates have been widely adopted in the smartphone market, being currently present in 100% of them, and they are clearly a valuable option for next-generation automotive radar: FD-SOI on a high-resistivity substrate (FD+HR) yields best-in-class passive losses, increasing efficiency and reducing floorplan area. The high-resistivity option in FD-SOI is seen as a major booster on the way to ultimate mmWave performance, enabling a higher level of SoC integration with unrivalled RF-mmWave characteristics. It is important to highlight that SOITEC has designed a new substrate that avoids device-integration changes on existing 28nm and 22nm nodes and on future 18nm and 12/10nm nodes (Fig. 5) [16].
VII. Conclusion
In conclusion, FD-SOI is playing a crucial role in shaping the future of automotive radar. Its intrinsic key features make it a valuable technology for the automotive radar industry. FD-SOI enables the development of single-chip, high-performance and cost-effective radars, making it a crucial enabler of vehicle safety and autonomous driving. Its ability to increase computing power while maintaining energy efficiency opens the door to disruptive innovations and SWaP optimizations. FD-SOI technology can definitively help drive a safe transition to CMOS technologies in the automotive radar industry.
VIII. Graphs
Figure 1: Automotive Radar market per frequency
Figure 2: Automotive Radar RFIC volume by technology
Figure 4: High resistivity substrate
Figure 5: FD-SOI substrate roadmap
Figure 6: FD-SOI substrate and radar architecture
Figure 7: Single Chip radar architecture
Figure 8: FD-SOI key features
Figure 9: PA in FD-SOI
Figure 10: ADC in FD-SOI
Figure 11: Body Bias to improve digital, analog and reliability
Figure 12: FD-SOI 50% faster than Bulk, 10 times faster with ABB
Figure 13: FD-SOI ideally positioned for radar applications
Figure 14: FD-SOI value for automotive radar
Figure 15: Ion versus Ioff of (a) SOI vs. sSOI for n-FET, (b) SOI vs. cSiGeOI p-FET, (c) 35% cSiGeOI p-FET with different W, (d) 35% cSiGeOI p-FET Ion and µ gain versus W [Ref.a]
Fig. 16: Peak transconductance enhancement of SSOI versus SOI
IX. References
[1] Holger H. Meinel and Juergen Dickman, “Automotive Radar: From its origin to future directions”, Microwave Journal, 2013
[2] ee-News Automotive, “Arbe moves to production of 4D radar chipset”, Business News, 2022
[3] David Manners, “Bosch to use FD-SOI for automotive radar SoCs”, Electronics Weekly, 2021
[XX] K. Ramasubramanian, “Moving from legacy 24 GHz to state-of-the-art 77 GHz radar.” Texas Instrument, 2017.
[4] Acumen Research and Consulting, “Automotive Radar Market Report and Region Forecast, 2022–2030”, 2023
[5] Yole Développement, “Status of the Radar Industry: Players, Applications and Technology Trends”, Market and Technology Report, 2020
[6] Christian Waldschmidt et al., “Automotive Radar—From First Efforts to Future Systems” , IEEE Journal of Microwave, 2021
[7] Philipp Ritter, “Toward a fully integrated automotive radar system-on-chip in 22 nm FD-SOI CMOS” , International Journal of Microwave and Wireless Technologies, 2021
[8] Nobuyuki Sugii, “Ultralow-Power SOTB CMOS Technology Operating Down to 0.4 V”, Journal of Low Power Electronics and Applications
[9] Nobuyuki Sugii, “Ultralow-Power SOTB CMOS Technology Operating Down to 0.4 V”, Journal of Low Power Electronics and Applications
[10] P. Flatresse, “Process and design solutions for exploiting FD-SOI technology towards energy efficient SOCs”, International Symposium on Low Power Electronics and Design (ISLPED), 2014.
[11] P. Flatresse, “RBB & FBB in FD-SOI”, SOI Forum, Shanghai, 2017
[12] A. Bonzo et al. “A 0.021 mm² PVT-Aware Digital-Flow-Compatible Adaptive Back-Biasing Regulator with Scalable Drivers Achieving 450% Frequency Boosting and 30% Power Reduction in 22nm FD-SOI Technology.”, ISSCC 2021
[13] Florent Torres , Magali De Matos , Andreia Cathelin , Eric Kerhervé “A 31GHz 2-Stage Reconfigurable Balanced Power Amplifier with 32.6dB Power Gain, 25.5% PAEmax and 17.9dBm Psat in 28nm FD-SOI CMOS,” RFIC 2018.
[14] Venkat Ramasubramanian, “22FDX: An Optimal Technology for Automotive and mmWave Designs”, Solid State Technology Webinar, Dec. 13, 2018
[15] Ashish Kumar, Chandrajit Debnath, Pratap Narayan Singh, Vivek Bhatia, Shivani Chaudhary, Vigyan Jain, Stephane Le Tual, Rakesh Malik, “A 0.065mm2 19.8mW single channel calibration-free 12b 600MS/s ADC in 28nm UTBB FDSOI using FBB”, ESSCIRC Conference 2016.
[16] Bertrand et al, “Development Of High Resistivity FD-SOI Substrates for mmWave Applications” ECS Transactions, 2022
[17] Ragonese, E. Design Techniques for Low-Voltage RF/mm-Wave Circuits in Nanometer CMOS Technologies. Appl. Sci. 2022, 12, 2013.
[18] C, Sun et al, “Enabling UTBB Strained SOI Platform for Co-integration of Logic and RF: Implant-Induced Strain Relaxation and Comb-like Device Architecture”, VLSI 2020
[19] B. De Salvo et al, “A mobility enhancement strategy for sub 14nm power-efficient FDSOI technologies” IEDM 2014
[20] Y. Mousry et al, “A 0.021mm2 PVT-Aware Digital-Flow-Compatible Adaptive Back-Biasing Regulator with Scalable Drivers Achieving 450% Frequency Boosting and 30% Power Reduction in 22nm FDSOI Technology”, ISSCC 2021
[21] Ned Cahoon, Alvin Joseph, Chaojiang Li, Anirban Bandyopadhyay “WS-01 – Recent advances in SiGe BiCMOS: technologies, modelling and circuits for 5G, radar and imaging”, European Microwave Week (EuMW) 2019
[22] Skyworks white paper “5G Millimeter Wave Frequencies And Mobile Networks -A Technology Whitepaper on Key Features and Challenges” 2019
[23] Farzad Inanlou, Sudipto Bose “mmWave Foundry of Choice: Accelerated and Simple Automotive Radar Design” 2020
[24] S. Li, M. Cui, X. Xu, L. Szilagyi, C. Carta, W. Finger, F. Ellinger, “An 80 GHz Power Amplifier with 17.4 dBm Output Power and 18 % PAE in 22 nm FD-SOI CMOS for Binary-Phase Modulated Radars,” Asia-Pacific Microwave Conference, Dec. 2020, Hong Kong
[25] L. Gao, E. Wagner and G. M. Rebeiz, “Design of E-and W-Band Low-Noise Amplifiers in 22-nm CMOS FD-SOI,” in IEEE Transactions on Microwave Theory and Techniques, vol. 68, no. 1, pp. 132-143, Jan. 2020.
[26] M. Sadegh Dadash, S. Bonen, U. Alakusu, D. Harame and S. P. Voinigescu, “DC-170 GHz Characterization of 22nm FDSOI Technology for Radar Sensor Applications”, 2018 13th European Microwave Integrated Circuits Conference (EuMIC), pp. 158-161, 2018.
[27] Vadim Budnyaev and Valeriy Vertegel, “A SiGe 3-stage LNA for automotive radar applications from 76GHz to 81GHz”, ITM Web of Conferences 30, 2019
Dan is joined by Richard Barnett, chief marketing officer and SaaS sales leader at Supplyframe. With more than 25 years of leadership experience in strategic marketing, sales and product management, Richard is recognized as a thought leader on supply chain and strategic sourcing transformation as well as digital marketing engagement with design engineers.
Richard discusses the current state of the worldwide semiconductor supply chain and the drivers for change with Dan. He reviews the forces at play that shape the supply chain, their complex interrelationships and the impact and risks associated with shifting supply to different regions.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
Eng. Islam Nashaat received his B.Sc. and M.Sc. degrees from Ain Shams University, Cairo, Egypt, in 2010 and 2017, respectively. He joined Si-Vision as an Analog Physical Design Engineer in 2010, where he initiated the company’s CAD team in 2013, and became CAD and Physical Design Team Lead in 2016 after the company’s flagship product was acquired by Synopsys. In 2020, he joined Goodix Egypt as Physical Design Manager. He co-founded Master Micro in 2020 and joined it as full-time CEO in 2021. During his professional career he has participated in and managed the delivery of tens of silicon-verified IC chips and IPs. In addition, he has developed and managed the development of many automation scripts covering the analog front-end and back-end flows, with several publications.
Tell us about your company
Master Micro is a disruptive EDA startup in the field of analog/mixed-signal design automation. Founded in 2020 by Eng. Islam Nashaat (CEO) and Dr. Hesham Omran (CTO), our mission is to revolutionize the full-custom chip design methodology to keep up with the rapid advancements in technology.
Despite being a relatively young company, we have made significant strides. We successfully launched our first product, the Analog Designer’s Toolbox (ADT) in 2022, which is the culmination of several years of research. Since then, we have conducted product demonstrations for numerous companies worldwide. Many of these companies have become paying customers, while others are currently evaluating our offerings. Achieving this level of interest and adoption within such a short time frame is truly inspiring and motivates us to continue pushing boundaries in the EDA industry. In 2023, we launched our second product, the Sizing Assistant (SA), which is seamlessly integrated in the schematic editors to make the device sizing process fast, intuitive, and optimized.
What problems are you solving?
The rapid advancement of technology has led to increasingly complex and costly chip designs. This coincides with an expected shortage of talent in the semiconductor industry in the coming years, presenting a significant challenge. The existing analog design process, which is outdated and iterative, is struggling to keep up with the complexities of new technologies. Its lack of a systematic design approach results in ad-hoc methodologies that heavily rely on experts and lead to suboptimal designs.
That’s where our role comes into play. We are dedicated to developing the next generation of analog design automation tools that address the challenges of full-custom design productivity and the scarcity of analog design expertise. By leveraging innovative circuit-solving techniques and a designer-oriented, user-friendly interface, we aim to make the analog design process fast, optimized, and intuitive.
What application areas are your strongest?
With a combined experience of over 50 years in front-end and back-end analog/mixed-signal design, software, and CAD engineering, our team possesses a deep understanding of the intricacies of the field. Currently, our primary focus lies in the analog design front-end flow, specifically emphasizing the crucial analog building blocks utilized in any analog subsystem.
We are proud to offer two innovative products that cater to the needs of analog designers. The first is the Analog Designer’s Toolbox (ADT), a powerful tool that revolutionizes circuit-level design, visualization, and optimization. ADT enables users to paint the design space using millions of correct-by-construction design points generated within seconds, resulting in an up to 100x increase in design productivity.
Our second product, the Sizing Assistant (SA), operates at the device level. SA gives designers the power to define the properties of transistors using their electrical parameters (e.g., gm/ID, fT, mismatch, Ron, etc.), and then receive valid device sizing interactively. This tool streamlines the device-level sizing process and facilitates efficient decision-making. Both of our products are designed to integrate seamlessly within existing design environments, ensuring a smooth and hassle-free experience for analog designers.
What keeps your customers up at night?
Full-custom design is a bottleneck that dominates the time and cost of many chip design projects. In addition, design quality is highly dependent on the designer’s expertise, and there is often significant untapped room for improving power, performance, and area.
Our tools provide a significant productivity boost to analog design teams, resulting in 10x-100x time savings. Designers can quickly visualize the design space and pick globally optimal design points in a systematic and intuitive way that is independent of their level of expertise.
Our flagship product, ADT, has garnered tremendous excitement among visionary analog design leaders who are eager to explore and integrate our tools into their design flows. The feedback we got from our customers is that ADT is indeed taking analog circuit design to a new level, as it provides analog designers with profound insights into the analog design process and guides them to understand, optimize and improve their designs. The level of enthusiasm and ownership our visionary customers feel towards our tools is remarkable. Their passion for our tools is evident as they actively contribute suggestions for new features and functionalities that deepen their engagement and productivity with the tools.
What does the competitive landscape look like and how do you differentiate?
As you’re aware, the EDA market is highly specialized, with only a handful of established players. It’s a formidable challenge to stand out in this space. Traditional analog optimization tools have been around for many years, but they are not widely accepted in the analog design community. We differentiate ourselves by offering a designer-oriented design flow that leverages the powerful combination of the gm/ID methodology, precomputed lookup tables, and custom vectorized solvers. This provides distinct advantages to our tools in terms of speed, accuracy, and designer-oriented visualization.
Our approach not only empowers designers with unique capabilities but also ensures that their intuition remains intact through exceptional user interface and visualization features. This makes our tools a complementary and seamless integration within the familiar flow used by analog designers. We are proud to offer a solution that combines cutting-edge techniques with a user-friendly experience, addressing the specific needs of analog designers.
What new features/technology are you working on?
Before delving into our new features, it’s crucial to emphasize that our team comprises experienced designers who possess a thorough understanding of the existing gaps in the design process. Moreover, we maintain regular communication with our customers, allowing us to gain valuable insights into their challenges and pain points. With that in mind, we are excited that we will soon introduce a cutting-edge tool specifically designed to address the cumbersome analog/mixed-signal design porting flow. This tool is particularly beneficial for companies that frequently port designs across different technologies. Furthermore, we are actively harnessing the power of emerging technologies such as Artificial Intelligence and Machine Learning to enhance our tools further. By leveraging these advancements, we aim to provide not only functional solutions but also an exceptional customer experience that leaves a lasting impression.
How do customers normally engage with your company?
Our customer engagement spans across multiple channels, allowing us to connect with a wide range of clients. We actively reach out to customers through our extensive network in the industry. In addition, we engage with analog designers at renowned conferences and trade shows that we attend, and we also collaborate with our trusted distributors. Another important avenue for us is our strong presence on LinkedIn, where we have amassed over 12,000 followers, making it a powerful channel for communication.
Furthermore, our website https://adt.master-micro.com, serves as a hub for customers to interact with us directly. Here, we offer a comprehensive range of services, starting with personalized demos for design teams, and a support portal for customers. We offer a free evaluation period for prospective customers, allowing design teams to fully explore and familiarize themselves with the capabilities of our tools.
We also reach out to universities and IEEE societies to educate the next generation of designers about new analog design methodologies, and to empower professors, researchers, and students to adopt our tools in their research projects and teaching activities.
TSMC has been offering foundry services since 1987. Its first 3nm node, N3, debuted in 2022, and an enhanced 3nm node dubbed N3E has now launched. Every new node requires IP that is carefully designed, characterized and validated in silicon to ensure that the IP specifications are met and the IP can be safely used in SoC or multi-die system designs. This new IP must cover a wide range of functions, like interface, memory and logic. Synopsys has a large IP team that has risen to the challenge by creating new IP for the TSMC N3E node and achieving first-pass silicon success.
Chiplet Interconnect
Systems made up of heterogeneous chiplets require die-to-die communication, and that’s where the UCIe standard comes into play. Synopsys is a Contributor member of the UCIe Consortium, and they offer IP for both a UCIe Controller and a UCIe PHY in the TSMC N3E node.
The UCIe PHY IP had first silicon results in August 2023, showing data rates of 16Gbps, scalable to 24Gbps per channel. Earlier this year, Intel unveiled the world’s first Intel-Synopsys UCIe interoperability test chip demo at Intel Innovation. The interoperability was between Synopsys UCIe PHY IP on the TSMC N3E process and Intel PHY IP on Intel 3 technology.
Industry’s Broadest Interface IP Portfolio on TSMC N3E
The IEEE approved the original 802.3 Ethernet standard back in 1983, and it has been extended many times since; the Synopsys 224G Ethernet PHY IP achieved first silicon success in August 2023. Network engineers examine the eye diagram to verify the 224Gbps PAM-4 signaling, and jitter levels surpassed both the IEEE 802.3 and OIF standard specifications.
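For readers unfamiliar with PAM-4: it carries two bits per symbol across four amplitude levels, which is how a 224Gbps lane can run at a 112 GBd symbol rate. The minimal sketch below illustrates the idea with a Gray-coded level mapping (the mapping and names are illustrative; real transceivers add FEC, precoding, and equalization on top of this):

```python
# PAM-4 carries 2 bits per symbol on 4 amplitude levels, so a 224 Gbps
# lane runs at 112 GBd. Gray coding keeps adjacent levels one bit apart,
# so confusing neighboring levels costs only a single bit error.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit sequence (even length) to PAM-4 symbol levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [GRAY_MAP[p] for p in pairs]

symbol_rate = 224e9 / 2                          # 2 bits/symbol -> 112 GBd
levels = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])   # -> [-3, -1, 1, 3]
```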
The Synopsys Multi-Protocol 112G PHY IP supports standards like PCI Express 6.0, 400G/800G Ethernet, CCIX, CXL 2.0/3.0, JESD204 and CPRI. Engineers can combine this PHY IP with a MAC and PCS to build a 200G/400G/800G Ethernet block.
SDRAM and memory modules can use the Synopsys DDR5 PHY IP on TSMC N3E to achieve transfer rates up to 8400Mbps. You can see the wide open eye and clear margins for this IP operating at speed.
The PCI Express standard started out in 2003 and has been continually updated to meet the growing demands of cloud computing, storage, and AI. PCIe 5.0 is now supported using the Synopsys PCIe 5.0 PHY IP. First silicon on TSMC N3E showed operating speeds of 32 GT/s, and the Synopsys PCIe 5.0 PHY IP is listed on the PCI-SIG Integrators list.
I’ve been using USB-C on my MacBook Pro, iPad Pro and Android phone for years now. Synopsys now supports USB-C 3.2 and DisplayPort 1.4 PHY IP in the latest TSMC process. With this IP users can connect up to 8K Ultra High-Definition displays.
Smartphone companies standardized on the MIPI protocol years ago as an efficient way to connect cameras, and the Synopsys MIPI C-PHY/D-PHY IP can operate at 6.5Gb/s per lane and 6.5Gsps per trio. The C-PHY IP supports v2.0, and the D-PHY IP supports v2.1.
The latest Synchronous DRAM controller spec is LPDDR5X, supporting data transfer speeds up to 8533Mbps, a 33% improvement over LPDDR5 memory. The Synopsys LPDDR5X/5/4X Controller is silicon-proven, and ready to be designed with.
Logic Libraries and Memories
Up to half the area of an SoC can be memories, so the good news is that the Synopsys Foundation IP allows you to add memories and logic library cells quickly into a new design. Here are the test chip diagrams from Synopsys on the TSMC N3E node for memories and logic libraries.
Summary
TSMC and Synopsys have collaborated quite well together over the years, and that partnership now extends to the N3E node where SoC designers can find silicon-tested IP for interfaces, memories and logic. Power, performance and yield are looking attractive for N3E, so the technology is ready for your most demanding designs. Starting a design with N3E also provides you a quicker path to migrate to the N3P process.
Instead of creating all of your own IP from scratch, which will lengthen your schedule, require more engineering resources and increase risk, why not take a look at the proven and broad Synopsys IP portfolio for N3E?
Synopsys recently hosted a cross-industry panel on the state of multi-die systems which I found interesting not least for its relevance to the rapid acceleration in AI-centric hardware. More on that below. Panelists, all with significant roles in multi-die systems, were Shekhar Kapoor (Senior Director of Product Management, Synopsys), Cheolmin Park (Corporate VP, Samsung), Lalitha Immaneni (VP Architecture, Design and Technology Solutions, Intel), Michael Schaffert (Senior VP, Bosch), and Murat Becer (VP R&D, Ansys). The panel was moderated by Marco Chiappetta (Co-Founder and Principal Analyst, HotTech Vision and Analysis).
A Big Demand Driver
It is common under this heading to roll out all the usual suspects (HPC, Automotive, etc) but that list sells short what may be the biggest underlying factor: the current scramble for dominance in everything LLM and generative AI. Large language models offer new levels of SaaS services in search, document creation and other capabilities, with major competitive advantages to whoever gets this right first. On mobile devices and in the car, superior natural language-based control and feedback will make existing voice-based options look primitive by comparison. Meanwhile generative methods for creating new images using Diffusion and Poisson flow models can pump out spectacular graphics drawing on text or a photograph complemented by image libraries. As a consumer draw this could prove to be the next big thing for future phone releases.
While transformer-based AI presents a huge $$$ opportunity it comes with challenges. The technologies that make such methods possible are already proven in the cloud and emerging at the edge, yet they are famously memory hungry. Production LLMs run anywhere from billions to trillions of parameters which must be loaded to the transformer. Demand for in-process workspace is equally high; diffusion-based imaging progressively adds noise to a full image then works its way back to a modified image, again through transformer-based platforms.
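A quick back-of-the-envelope calculation shows why parameter counts translate directly into memory pressure (the model size and byte width below are illustrative assumptions, not figures from the panel):

```python
def weight_footprint_gib(params_billions, bytes_per_param=2):
    """Rough memory needed just to hold model weights (FP16 = 2 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# A hypothetical 70B-parameter model needs ~130 GiB for FP16 weights
# alone -- far beyond any on-die SRAM, before counting activations and
# KV caches, which is why in-package memory matters so much.
fp16_70b = weight_footprint_gib(70)
```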
Apart from an initial load, none of these processes can afford the overhead of interacting with external DRAM. Latencies would be unacceptable, and power demand would drain a phone battery or blow the power budget for a datacenter. All the memory needs to be near – very near – the compute. One solution is to stack SRAM on top of the accelerator (as AMD and now Intel have demonstrated for their server chips). High bandwidth memory in-package adds another somewhat slower option but still not as slow as off-chip DRAM.
All of which requires multi-die systems. So where are we at in making that option production-ready?
Views on where we are at
I heard a lot of enthusiasm for growth in this domain, in adoption, applications and tooling. Intel, AMD, Qualcomm, Samsung are all clearly very active in this space. Apple M2 Ultra is known to be a dual die design, and AWS Graviton 3 a multi-die system. I am sure there are plenty of other examples among the big systems and semiconductor houses. I get the impression that die are still sourced predominantly internally (except perhaps for HBM stacks), and assembled in foundry packaging technologies from TSMC, Samsung or Intel. However, Tenstorrent just announced that they have chosen Samsung to manufacture their next generation AI design as a chiplet (a die suitable to be used in a multi-die system), so this space is already inching towards broader die sourcing.
All panelists were naturally enthusiastic about the general direction, and clearly technologies and tools are evolving fast, which accounts for the buzz. Lalitha grounded that enthusiasm by noting that the way multi-die systems are currently architected and designed is still in its infancy, not yet ready to launch an extensive reusable market for die. That doesn’t surprise me. Technology of this complexity seems like it should mature first in tight partnerships between system designers, foundries and EDA companies, maybe over several years, before it can extend to a larger audience.
I’m sure that foundries, system builders and EDA companies aren’t showing all their cards and may be further along than they choose to advertise. I look forward to hearing more. You can watch the panel discussion HERE.
You are probably familiar with the acronym PPA, which stands for Power/Performance/Area. Sometimes it is PPAC, where C is for cost, since there is more to cost than just area. For example, did you know that adding an additional metal layer to a chip dramatically increases the cost, sometimes by millions of dollars? It requires a minimum of two masks (interconnect and vias) plus all the additional associated process steps. And interconnect layers normally come in pairs, vertical and horizontal, so usually it is four masks.
There are many inputs into optimizing PPAC, and a significant one is designing the clock tree. The clock can consume a lot of the power, and a lot of the interconnect, and obviously affects performance. The process of designing the clock tree is usually called Clock Tree Synthesis, usually abbreviated to CTS. Siemens EDA recently published a white paper Placement and CTS Techniques for High-Performance Computing Designs.
One challenge EDA tools face is that you only get the true quality of results when you have finished the design. In practice, this means that tools need to either guard-band the results with pessimism, or increase accuracy by having much better correlation between the tool in use and the final results.
The white paper discusses how to solve the placement and clock tree challenges in HPC designs using the Aprisa digital implementation solution, as these steps are fundamental to achieving the desired performance metrics during place and route. While most other place-and-route tools require waiting until post-route optimization to discover the true quality of results, Aprisa offers users excellent correlation throughout the place-and-route implementation, which allows designers to gain confidence in the results much earlier in the flow, at the placement and clock tree synthesis (CTS) stages. Aprisa is ideally suited to help designers deliver HPC IC innovations faster.
Aprisa is the Siemens digital implementation solution for hierarchical and block-level designs. Under the hood, it has a detail-route-centric architecture that reduces time to design closure, partly by pulling the implications of routing decisions forward early in the design process, as opposed to waiting until the design is completed to find problems that were introduced earlier. A key to a modern implementation flow is to have consistent timing, extraction, DRC, and more across the whole flow.
Aprisa delivers optimal performance, power and area (PPA) for advanced nodes, and it has complete support for design methodologies and optimization to achieve both lowest power and highest performance.
The white paper uses an example design, an Arm Cortex-A76 in 5nm running at 2.75 GHz, and using 12 layers of metal for interconnect. I don’t have space here to go into the design in detail, you’ll have to read the white paper for a deeper dive.
The focus of the exercise was to analyze using 10 layers of metal versus 12 layers of metal (as noted, interconnect layers usually come in pairs). The analysis revealed that, for the 10-layer option, the frequency would have to be lowered by 9 percent to achieve the desired power target. However, it resulted in significant cost savings for the entire project. Obviously, Aprisa cannot make the decision for you as to whether a 9% performance hit is worth it to reduce the cost.
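For concreteness, the 9 percent figure works out as follows against the 2.75 GHz target (a simple arithmetic sketch, not output from Aprisa):

```python
# The A76 test case targets 2.75 GHz with 12 metal layers; dropping to
# 10 layers costs roughly 9% of frequency per the white paper.
f_12_layer = 2.75e9
f_10_layer = f_12_layer * (1 - 0.09)   # ~2.50 GHz with 10 layers
```

That quarter-gigahertz is the performance side of the trade-off the project team must weigh against the saved mask and process costs.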
The focus of the white paper is clock tree synthesis (CTS), one of the big challenges in any HPC design. Aprisa supports useful skew, starting at placement optimization and continuing all the way to route optimization, to make certain that challenging design frequency targets are met. A strength of Aprisa CTS technology is that the push and pull offsets generated during placement optimization are realized during clock tree implementation.
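To illustrate what useful skew buys, the toy calculation below shows a failing setup path rescued by delaying the capture clock, with the borrowed time absorbed by a faster following stage. All delays and the simplified slack model are hypothetical, not Aprisa's actual timing engine:

```python
# Useful skew in one register-to-register stage: intentionally delaying the
# capture flop's clock "lends" time to a failing path, provided the
# downstream stage can afford to give that time up.
T = 364.0          # clock period in ps (~2.75 GHz)

def setup_slack(data_delay, launch_skew, capture_skew, setup=20.0):
    """Slack = period + (capture - launch clock arrival) - path delay - setup."""
    return T + (capture_skew - launch_skew) - data_delay - setup

# A 380 ps path fails at zero skew...
before = setup_slack(380.0, 0.0, 0.0)        # -36 ps: setup violation
# ...but passes after pushing the capture clock 40 ps later...
after = setup_slack(380.0, 0.0, 40.0)        #  +4 ps: now meets timing
# ...as long as the next stage (a 300 ps path, now launched 40 ps late)
# can absorb the borrowed time.
next_stage = setup_slack(300.0, 40.0, 0.0)   #  +4 ps: still meets timing
```

The point of realizing these offsets during clock tree implementation, as the white paper describes, is that the "push" and "pull" promised at placement actually materialize in the built tree rather than evaporating later.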
Clocks generally go to flip-flops, and an optimization that modern cell libraries include are multi-bit flip-flops with a common clock. Aprisa has the capability to merge or demerge multi-bit flip-flops and clone/declone integrated clock gates. Aprisa does this based on the timing, physical location of the cells and criticality of the paths.
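A toy sketch of the merging idea: single-bit flops that share a clock and sit close together are paired into a multi-bit cell, reducing clock pins and hence clock-tree load. The data model, distance threshold, and greedy pairing below are purely illustrative; Aprisa's actual criteria also weigh timing and path criticality, as noted above:

```python
import math

# Illustrative multi-bit flip-flop merging: flops on the same clock
# within a distance budget become candidates for a shared 2-bit cell.
def merge_candidates(flops, max_dist_um=5.0):
    """flops: list of (name, clock, x_um, y_um). Returns merged pairs."""
    pairs, used = [], set()
    for i, (n1, clk1, x1, y1) in enumerate(flops):
        if n1 in used:
            continue
        for n2, clk2, x2, y2 in flops[i + 1:]:
            if n2 in used or clk1 != clk2:
                continue  # never merge across different clocks
            if math.hypot(x2 - x1, y2 - y1) <= max_dist_um:
                pairs.append((n1, n2))
                used.update((n1, n2))
                break
    return pairs

regs = [("q0", "clk", 0.0, 0.0), ("q1", "clk", 3.0, 0.0),
        ("q2", "clk", 40.0, 0.0), ("q3", "clkb", 4.0, 0.0)]
merged = merge_candidates(regs)   # only q0/q1 are close and share a clock
```

Demerging (splitting a multi-bit cell back apart) is the inverse move, useful when one bit of a merged flop lands on a critical path that needs its own placement freedom.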
Post-CTS optimization in Aprisa includes congestion recovery, which resolves congestion introduced during clock tree synthesis. This is a clock-aware approach that does not degrade timing, reducing the iterations back to placement optimization that would otherwise be required.
Aprisa supports different types of clock tree structures such as H-tree, multi-point CTS and custom mesh. Multi-point is the most popular approach for HPC designs and is the one described in the white paper.
There is a lot more to an implementation flow than synthesizing the clock tree, of course! But CTS is a critical stage, especially for demanding HPC designs, because there is so little room for deviation to achieve the desired performance and meet PPA requirements.
Aprisa is certified by the top foundries for the most advanced nodes. It ensures all PPA metrics are carefully balanced for HPC design implementation through high-quality clock trees, along with placement and routing technologies that reduce timing closure friction between the block and top level during assembly.
Once again, the white paper can be downloaded here.