
Breakthrough Gains in RTL Productivity and Quality of Results with Cadence Joules RTL Design Studio
by Kalar Rajendiran on 08-08-2023 at 10:00 am

Joules RTL Design Studio Benefits

Register Transfer Level (RTL) is a crucial and valuable abstraction in digital hardware design. Over the years, it has played a fundamental role in enabling the design of complex digital chips. By abstracting away implementation details and technology-dependent aspects while providing a clear description of digital behavior, RTL offers a manageable, technology-agnostic representation of the design and has contributed significantly to the advancement and widespread adoption of digital design methodologies. RTL also provides a basis for design exploration and optimization: engineers can modify the RTL code to explore design alternatives and identify the most efficient solutions.

While the chip design process benefits tremendously from the use of RTL, designs must be synthesized and taken through the layout process before chips can be manufactured. Synthesis and place-and-route tools rely on RTL as input to generate the physical layout of the chip. This transition comes with several challenges that designers must address to ensure a successful and optimal chip implementation. Physical design constraints such as area, power, and routability must be satisfied during the layout process while accounting for the characteristics and limitations of the target process technology and manufacturing flow. Power integrity, signal integrity, design for manufacturability (DFM), and many other requirements need to be addressed as well.

As designs grow in complexity, productivity and turnaround time become significant challenges during the RTL-to-layout transition. That transition often involves iterative loops in which designers must go back to the RTL to make modifications and then repeat the layout process. Efficient iteration management is crucial to avoid time-consuming and costly iterations. It is in this context that Cadence’s recent announcement of the Joules RTL Design Studio takes on significance. It promises up to 5X faster RTL convergence and up to 25% improved Quality of Results (QoR) compared with traditional RTL design approaches.

Actionable Intelligence

The driving force behind the Joules RTL Design Studio lies in its ability to provide RTL designers with actionable intelligence and rapid insight into physical effects. This capability enables design teams to address potential issues early in the design process, reducing iterations and speeding time to market. Front-end designers can now access digital design analysis and debugging capabilities from a single, unified cockpit, streamlining the design process and ensuring a fully optimized RTL design before implementation handoff. This gives the physical design tools a strong starting point.

Intelligent RTL Debugging Assistant System

Joules RTL Design Studio further distinguishes itself with an intelligent RTL debugging assistant system. It provides early power, performance, area and congestion (PPAC) metrics and actionable debugging information throughout the design cycle, including logical, physical, and production implementation stages. Engineers can thoroughly explore “what-if” scenarios and identify potential resolutions with ease. This not only saves valuable time but also improves the overall design outcomes, leading to more efficient chip designs.

Integrated AI Platform

A key highlight of this solution is its integration with Cadence Cerebrus, an AI-driven solution for design flow optimization, and the Cadence JedAI Platform, which facilitates big data analytics. By leveraging generative artificial intelligence (AI) for RTL design exploration and comprehensive analytics with Cadence’s leading AI portfolio, designers gain new insights into design space scenarios, floorplan optimization, and frequency versus voltage tradeoffs. This opens up new possibilities for creative exploration and significantly enhances design productivity.

The software’s capabilities are based on proven engines, shared with Cadence’s Innovus Implementation System, Genus Synthesis Solution, and Joules RTL Power Solution. This integration allows users to access all analysis and design exploration features from a single intuitive graphical user interface (GUI), ensuring an optimal QoR and a seamless design experience.

Incorporating lint checker integration, Joules RTL Design Studio empowers engineers to run lint checkers incrementally. This capability helps rule out data and setup issues upfront, effectively reducing errors and accelerating the design completion process. The unified cockpit experience offered by the software caters to the specific needs of RTL designers, providing physical design feedback, localization, and categorization of violations, bottleneck analysis, and cross-probing between RTL, schematic, and layout. This user-friendly interface streamlines the design workflow and fosters productivity.

Intelligent System Design

Joules RTL Design Studio plays a vital role in Cadence’s broader digital full flow. This integrated flow offers customers a faster path to design closure, ensuring efficient and successful chip design. The tool aligns well with Cadence’s Intelligent System Design strategy, empowering engineers to achieve excellence in system-on-chip (SoC) design.

Summary

The impact of this innovation extends to all aspects of physical design, from power and performance to area and congestion. By incorporating advanced technologies like machine learning, big data analytics, and generative artificial intelligence, Cadence has engineered a powerful solution that empowers designers to achieve optimized RTL designs faster with improved QoR.

Customers from various industries have endorsed its powerful capabilities and the benefits it brings to their design processes. For details, refer to the Joules RTL Design Studio press release.

For more information, visit the Joules RTL Design Studio product page.

Also Read:

Cadence and AI at #60DAC

Automated Code Review. Innovation in Verification

Xcelium Safety Certification Rounds Out Cadence Safety Solution


DVCon India 2023 | Keynote: “Journeying Beyond AI: Unleashing the Art of Verification”
by Daniel Nenni on 08-08-2023 at 10:00 am

DVCon India 2023 | Keynote: “Journeying Beyond AI: Unleashing the Art of Verification” by Sivakumar P R, Founder & CEO, Maven Silicon

Get Ready for an Epic Tech Odyssey with the keynote, ‘Journeying Beyond AI: Unleashing the Art of Verification’, by P. R. Sivakumar, Founder and CEO, Maven Silicon.

The semiconductor industry is undergoing a transformative shift, embracing novel design methodologies and innovative flows to meet the demands of a rapidly evolving technological landscape. In this keynote address, we will explore how these advancements, such as AI-driven Electronic Design Automation (EDA), System of Chips (SoCs) utilizing Chiplets with UCIe, and cutting-edge 2.5D and 3D advanced packaging techniques, are revolutionizing chip production. This transformative journey positions the semiconductor industry to emerge as a trillion-dollar market by 2030, fueled by the creation of complex chips boasting trillions of transistors.

The rise of disruptive technologies, such as AI, cloud computing, and autonomous vehicles, has sparked a pressing need for sophisticated SoCs and chips specially designed to cater to these domains. These intricate designs incorporate standard CPUs, GPUs, FPGAs, and specialized AI accelerators, providing the foundation for groundbreaking innovation. With AI serving as a key driver for progress, its pervasive influence is permeating every industry sector.

Within the realm of EDA, machine learning has emerged as a vital tool, significantly enhancing the efficiency of the design and verification processes. Leveraging the power of machine learning, we are propelled towards the adoption of AI-driven EDA, facilitating the creation of advanced chips that fuel the growth and proliferation of emerging technologies. During this keynote, we will delve into the uncharted territory of verification challenges stemming from these new designs. Furthermore, we will illustrate how AI-driven EDA empowers verification engineers to efficiently validate these state-of-the-art chips, enabling them to unleash their creative potential and innovate with unprecedented freedom.

To know more, click here

About Maven Silicon
Maven Silicon is a trusted VLSI training partner that helps organizations worldwide build and scale their VLSI teams. We provide outcome-based VLSI training across a variety of learning tracks, i.e., RTL Design, ASIC Verification, DFT, Physical Design, RISC-V, and ARM, delivered through our cloud-based customized training solutions. To know more about us, visit our website.

Also Read:

Upskill Your Smart Soldiers and Conquer the Chip War in Style!

Chip War without Soldiers

Maven Silicon’s RISC-V Processor IP Verification Flow


The Era of Flying Cars is Coming Soon
by Ahmed Banafa on 08-08-2023 at 6:00 am

For decades, the concept of flying cars has captivated our imagination, fueling visions of a future where we can soar above the ground, free from the constraints of traffic and congestion. While once considered purely the stuff of science fiction, recent advancements in technology have brought us closer to turning this fantasy into a reality. Electric vertical takeoff and landing (eVTOL) vehicles, commonly known as flying cars, hold the promise of revolutionizing transportation, offering new levels of efficiency, convenience, and accessibility. It’s important to explore the needs driving the development of flying cars, the challenges they face, the benefits they offer, the risks involved, and what the future holds for this transformative technology.

Needs for Flying Cars

·      Congestion and Traffic Woes: Growing urbanization and population density have led to increasingly congested roads in cities around the world. Commuting times have become longer, and frustration levels have risen. Flying cars could alleviate these problems by utilizing the airspace, bypassing traffic and reducing travel times. This could lead to more efficient transportation and improved overall mobility.

·      Transportation Accessibility: Flying cars have the potential to address accessibility issues by providing transportation options for areas with limited infrastructure. Remote regions, islands, and disaster-stricken areas could benefit greatly from the ability to fly above ground-based obstacles, connecting previously isolated communities. Flying cars could bridge the gap between urban and rural areas, fostering economic development and social integration.

·      Rapid Emergency Response: Flying cars could revolutionize emergency services by enabling faster response times and facilitating the transportation of medical supplies, organs for transplantation, and injured individuals to hospitals. In situations where time is critical, such as during natural disasters or in hard-to-reach locations, flying cars could make a significant difference in saving lives and minimizing the impact of emergencies.

Challenges of Flying Cars

·      Infrastructure Requirements: The widespread implementation of flying cars requires the development of a comprehensive infrastructure framework. This includes establishing designated landing and takeoff zones, creating charging stations for electric vehicles, designing efficient air traffic management systems, and establishing regulations to ensure safe and efficient operations. Building this infrastructure will be a significant challenge that requires careful planning and coordination.

·      Safety and Reliability: Ensuring the safety and reliability of flying cars is of paramount importance. New technologies, such as autonomous flight systems, collision avoidance mechanisms, and fail-safe protocols, must be developed and rigorously tested to minimize the risk of accidents and malfunctions. Safety standards and certifications will need to be established to instill public confidence in this emerging mode of transportation.

·      Noise Pollution: Flying cars introduce the challenge of managing noise pollution in urban areas. The sound of numerous flying vehicles could disrupt the tranquility of residential neighborhoods and potentially cause annoyance or discomfort. Efforts must be made to design quieter propulsion systems and establish regulations to minimize noise emissions, ensuring that the benefits of flying cars do not come at the expense of quality of life for those on the ground.

Benefits of Flying Cars

·      Efficient Urban Mobility: Flying cars have the potential to significantly reduce commuting times by bypassing congested roads. This could lead to increased productivity, improved work-life balance, and enhanced overall quality of life for urban dwellers. Imagine being able to travel across a crowded city in minutes instead of hours, with the freedom to avoid gridlock and traffic congestion.

·      Environmental Sustainability: Electric-powered flying cars have the potential to contribute to environmental sustainability, provided they are powered by renewable energy sources. By shifting transportation from ground-based vehicles to the sky, flying cars could help reduce carbon emissions and mitigate the impacts of climate change. This transition to clean energy-powered transportation could have a positive impact on air quality and the overall health of our planet.

·      Economic Opportunities: The development and deployment of flying cars can stimulate economic growth and create new job opportunities. Manufacturing flying cars, building and maintaining the necessary infrastructure, and managing air traffic control systems all require a skilled workforce. Additionally, new industries and services could emerge around flying car technology, further boosting local economies and fostering innovation.

Risks Associated with Flying Cars

·      Air Traffic Management: The integration of flying cars into existing airspace systems poses significant challenges in terms of air traffic management. Ensuring the safe coexistence of conventional aircraft, drones, and flying cars requires the development of robust communication and navigation systems. Cooperation between aviation authorities, technology providers, and regulators is crucial to establishing effective protocols and infrastructure to manage the complex airspace environment.

·      Cybersecurity: As flying cars become increasingly reliant on software and connectivity, the risk of cybersecurity threats arises. Safeguarding against hacking, system intrusions, and data privacy breaches is crucial to ensure passenger safety and protect against potential malicious activities. Strong cybersecurity measures and protocols must be implemented to ensure the integrity and privacy of the systems controlling flying cars.

·      Regulatory Framework: The development of comprehensive regulations and policies is essential to govern the use of flying cars. Striking a balance between innovation and safety, while addressing concerns related to privacy, noise pollution, and liability, requires careful consideration. Governments and regulatory bodies need to collaborate with industry stakeholders to establish a robust regulatory framework that ensures the safe and responsible deployment of flying car technology.

Future Outlook

·      Technology Advancements: Ongoing advancements in electric propulsion, battery technology, autonomous systems, and materials science will contribute to improving the performance, safety, and affordability of flying cars. Continued research and development will likely lead to more efficient and environmentally friendly flying car models in the future.

·      Urban Air Mobility Ecosystems: The successful integration of flying cars will involve the creation of urban air mobility ecosystems. This will require collaboration between vehicle manufacturers, infrastructure developers, air traffic control authorities, policymakers, and communities. Establishing a robust framework that encompasses infrastructure, regulations, and public acceptance is essential for the widespread adoption and safe operation of flying cars.

·      Public Acceptance: Public acceptance is critical for the successful integration of flying cars into society. Transparency in terms of safety, privacy, and environmental impact will play a vital role in fostering public confidence in this revolutionary mode of transportation. Educating the public about the benefits and addressing concerns through effective communication and public engagement initiatives will be crucial for the widespread acceptance and adoption of flying cars.

Flying cars hold the potential to transform transportation and reshape our urban environments. By addressing the needs for efficient mobility, accessibility, and emergency response, flying cars offer promising solutions to the challenges faced by our current transportation systems. However, significant hurdles related to infrastructure development, safety, and regulation must be overcome. With careful management of risks and continued technological advancements, flying cars could usher in a new era of transportation that is efficient, sustainable, and accessible to all. The future of flying cars depends on collaboration between industry, government, and society as we work together to turn this futuristic vision into a tangible reality.

Ahmed Banafa’s books

Covering: IoT, Blockchain and Quantum Computing

Also Read:

Xcelium Safety Certification Rounds Out Cadence Safety Solution

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US

Automotive IP Certification


Cadence and AI at #60DAC
by Daniel Payne on 08-07-2023 at 10:00 am

Paul Cunningham from Cadence presented at the #60DAC Pavilion and gave one of the most optimistic visions of AI applied to EDA that I’ve witnessed, so hopefully I can convey some of his enthusiasm and outright excitement in my blog report. Mr. Cunningham reviewed the various ages of EDA, each era providing about a 10X productivity improvement: transistor-level design, cell-based design, RTL reuse, and AI-driven system design.

Paul Cunningham, Cadence

Human chip designers are good at intuition, judgement, remembering experiences and understanding context; however, we are limited by our serial thinking patterns. AI, on the other hand, has merits like scalability, parallelization, access to massive data, and the ability to classify data. Reaching the next 10X productivity improvement will take an approach that accelerates design efficiency by keeping the human in the loop, not replacing engineers.

In EDA 1.0, there were all of these separate EDA tools, each with their own silo of data, and most of the time was spent waiting to get tool results back so that an engineer could analyze them. Now, with EDA 2.0, all of the tool results get collected as big data, then cataloged and indexed, creating a more holistic viewpoint on the design process.

Within Cadence the data platform is called JedAI—Joint Enterprise Data and AI Platform—which lets engineering teams visualize workflow and design data across some of their tools, so expect it to grow across all of their tools in the future. Another use of AI at Cadence is in running the combination of logic synthesis and P&R tools to achieve better PPA results, and that’s called Cerebrus. In just a short period of time, Cerebrus has been used on more than 180 tapeouts, and using this methodology allows one engineer to do the work of 10 previous engineers, so that’s a big productivity boost and allows engineers to focus on more strategic projects.

On the PCB tool side, the application of AI is called Allegro X AI, and there, engineers are seeing 30-50X improvements on placement and routing, while achieving better QoR.

Functional verification is another hot area to apply AI, and the basic question remains, “Is verification ever done?” Verification engineers still need to debug why a test just failed and why coverage goals are not being reached. AI technology can help by creating a triage funnel and answering basic questions like, “Who just checked in recent changes?” AI is used to rank bug locations and help pinpoint which change caused the latest failures. Cadence has also found that applying language-processing techniques to waveforms is better done by machine when it comes to finding patterns and signatures of failures. Cadence’s product for AI applied to verification is called Verisium.

In general, AI can be applied to most NP-complete problems in computer science. Using constraint solving in randomization also shows promise with AI techniques, as AI can learn what was randomized before, so you are not starting all over again. The Xcelium logic simulator uses ML to reach coverage up to 5X faster with the same CPU usage as previous approaches.
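As a rough illustration of that idea (this is not Cadence's algorithm, just the underlying principle), a constrained-random generator can bias new picks away from coverage bins that earlier runs already hit:

```python
import random

# Toy sketch of "learn what was randomized before": weight new random picks
# away from coverage bins that have already been hit, so simulation cycles
# are not spent re-covering the same cases. Purely illustrative.
bins = list(range(16))              # pretend coverage bins, e.g. opcode classes
hit_counts = {b: 0 for b in bins}   # persisted across runs in a real flow

def next_stimulus():
    # Each bin's weight is inversely proportional to how often it was covered.
    weights = [1.0 / (1 + hit_counts[b]) for b in bins]
    choice = random.choices(bins, weights=weights, k=1)[0]
    hit_counts[choice] += 1
    return choice

for _ in range(100):
    next_stimulus()
print(sorted(hit_counts.values()))  # counts spread out instead of piling up
```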

For formal logic verification, the Jasper apps have a method where AI guides and helps choose the best proof techniques and can create about 30% more properties versus a manual approach.

Summary

At Cadence, the product groups have been adding AI capabilities to help IC designers, PCB designers, and verification engineers become more productive, explore more alternatives, and even improve the quality of results. Yes, a human engineer still has to direct the EDA tools and choose the best results to meet their specific PPA, DFT and DFM goals. The first application of AI is in digital flows where models are trained at the customer site—they don’t get shipped pre-trained.

Cadence has applied AI techniques successfully across many of their tools, so I look forward to more announcements to come.

Related Blogs

 


How Taiwan Saved the Semiconductor Industry
by Daniel Nenni on 08-07-2023 at 6:00 am

Now that semiconductors are front page news and a political football, I would like to write more about how we got to where we are today to better understand where semiconductors will go tomorrow. I will start this article with a provocative quote that really made me laugh and will put some context to what I am trying to accomplish here:

Trump accuses Taiwan of taking away America’s semiconductor business: “Taiwan, they took our business away. We should have stopped them. We should have taxed them. We should have tariffed them,” said Donald Trump, Fox News, July 2023.

I know, American politics is an acquired taste but it is funny just the same. We already covered the fabless transformation in our book “Fabless: The Transformation of the Semiconductor Industry” so let’s start with a quick look at the history of the semiconductor industry in Taiwan (GPT 4.0):

The semiconductor industry in Taiwan has a rich history that spans several decades, starting from the 1970s to the present day. The industry has played a crucial role in Taiwan’s economic transformation from an agrarian economy to one of the world’s top high-tech industrialized nations.

Here are some of the key milestones:

“Formation of Electronics Research and Service Organization (ERSO): In 1973, the Taiwanese government established ERSO under the Industrial Technology Research Institute (ITRI). Its aim was to develop the electronics industry in Taiwan and provide a platform for semiconductor research and development.

Birth of Taiwan Semiconductor Manufacturing Company (TSMC): In 1987, TSMC, the world’s first dedicated semiconductor foundry, was founded by Dr. Morris Chang. The establishment of TSMC initiated the foundry model in the global semiconductor industry, which separated the manufacturing and design aspects of chip production. This allowed fabless companies to focus on designing innovative chip solutions without the need for significant capital investments in manufacturing facilities.

Hsinchu Science and Industrial Park: The Taiwanese government established this park in 1980 to serve as a hub for high-tech companies. It now houses numerous semiconductor companies and has played a significant role in the growth of Taiwan’s semiconductor industry.

Expansion of other major semiconductor companies: Over the years, several other Taiwanese companies have become key players in the global semiconductor industry. These include United Microelectronics Corporation (UMC), the world’s second-largest pure-play semiconductor foundry, MediaTek, a leading fabless semiconductor company, and ASE Technology Holding Co., the world’s largest provider of independent semiconductor manufacturing services in assembly and test.

Leadership in Advanced Process Technology: In recent years, TSMC, in particular, has emerged as a global leader in advanced process technologies, successfully achieving mass production at ever smaller node sizes (7nm, 5nm, 3nm and planning for 2nm and beyond). This has placed Taiwan at the forefront of the global semiconductor industry.

While this summary provides a brief overview, the history of Taiwan’s semiconductor industry is rich and complex, driven by strategic government initiatives, visionary leadership, strong educational programs, and the rise of the global digital economy. As of 2023, Taiwan is one of the world’s largest and most important centers for semiconductor manufacturing.”

Great summary, here is a little color on what happened. When I joined the semiconductor industry in the 1980s it was a challenging decade. Minicomputer companies such as IBM, Hewlett-Packard, Digital Equipment, Data General, Prime Computer, and Wang all had their own fabs all over the United States. Unfortunately, due to over regulation (especially here in California) and the inability to hire skilled workers (sound familiar?), manufacturing of all types left the US for more friendly countries.

Additionally, in the 1980s, there were quite a few economic ups and downs, including the crash of 1985. Keeping these very expensive fabs running was difficult, which spawned the IDM foundry business, where US and Japanese semiconductor companies accepted designs from outside customers for contract manufacturing to fill their fabs.

One of the first big fabless companies to do this was FPGA vendor Xilinx (founded in 1984, now owned by AMD). Seiko Epson (Japan) was Xilinx’s first IDM foundry partner. Xilinx quickly outgrew the relationship and moved to UMC and then TSMC, which is where they are today.

Clearly IDM foundries were a stop-gap solution back then, since they routinely competed with their customers, and the foundry business had lower margins than the products they manufactured internally, so those internal products always had priority in the fabs.

Also in the 1980s, the ASIC business model was developed by VLSI Technology (founded in 1979) and LSI Logic (founded in 1980). VLSI and LSI accepted designs from fabless companies and manufactured them using internal fabs. But again the cost of the fabs was prohibitive. The ASIC business model is again thriving but it is now populated by fabless ASIC companies who do the design and manage manufacturing through the foundries.

Bottom line: The early IDM foundries and ASIC companies created the perfect storm for the pure-play foundry business model that fully evolved in the 1990s and that is where Dr. Morris Chang comes in.

To be continued… Morris Chang’s journey to Taiwan.

Also Read:

Morris Chang’s Journey to Taiwan and TSMC

Intel Enables the Multi-Die Revolution with Packaging Innovation

TSMC Redefines Foundry to Enable Next-Generation Products


Podcast EP175: The Complexities of Compliance for a Worldwide Supply Chain with Chris Shrope
by Daniel Nenni on 08-04-2023 at 10:00 am

Dan is joined by Chris Shrope. Chris leads high tech product marketing at Model N, a compliance leader for high-tech manufacturers. Chris has deep experience defining product market fit and related new product development activities. He received his MBA and holds certifications in Economics, Law, Product Management and Marketing. For any ocean lovers out there, like Dan, Chris is also an advisory board member of the Inland Ocean Coalition.

Dan explores the evolving government rules associated with semiconductor sales with Chris. The impact geopolitical tensions create is outlined by Chris, along with a discussion of how semiconductor suppliers can ensure compliance. A multi-tier supply chain consisting of distributors and resellers can make it challenging to know exactly where parts are used.

Chris offers several strategies to manage this problem that are based on collaboration and forward visibility, among other approaches.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Harry Peterson of Siloxit
by Daniel Nenni on 08-04-2023 at 6:00 am

Harry Peterson is a mixed-signal chip designer with a BS in Physics from Caltech.  He managed IC design groups within Fairchild, Kodak, Philips, Northern Telecom, Toshiba and Pixelworks.  During sabbaticals he helped fly experiments on NASA’s orbiting satellite observatory (OSO-8) and build telescopes in the Canary Islands.  He is CEO and a co-founder of Siloxit, a startup that has developed an Industrial IoT (IIoT) module that securely monitors and controls operation of the electric grid.  He has produced many publications, patents, presentations, and short courses.  For fun he swims, hikes and thinks about astronomy.

Tell me about your journey to Siloxit.

I co-founded Siloxit which is a startup and a Portfolio Company in the Silicon Catalyst Incubator. Siloxit is in the business of Distributed Online Condition Monitoring (DOCOM) of infrastructure.  The specific infrastructure market we focus on is the energy grid.

Long ago when I was an undergraduate at Caltech, I was lucky enough to get a job building telescopes.  That continued through my grad school days.  Eventually I transitioned into integrated circuit design at the venerable Fairchild Semiconductor, which was a great place to learn device physics and a nice starting point for a lifetime of adventures in mixed signal circuit design.  In 1988 some of my Fairchild colleagues invited me to join them as employee number seven at ACM research, which was the IoT startup founded by Mike Markkula before IoT was a thing.  About three years ago, my friends Nick Tredennick and Eudes Prado Lopes and I decided that IoT for infrastructure was a very compelling concept that nobody had really implemented well, and we seized the opportunity to fix that by founding Siloxit.

Can you talk a little bit more about Fairchild? Because that’s a significant job to have early in your career.

When an opportunity came up to join the team at Fairchild Semiconductor, working at the R&D fab in Palo Alto, I jumped on it. The first assignment that they gave me was working out the device physics of RAM chips to be built in an exotic fabrication process.  IBM loved it, and bought a bunch of the chips.  But the chip consumed too much power to be competitive.

At Siloxit, you are working on technology improvements that can slash the cost of building and operating the grid.  Tell me, what problems are you solving?

The use cases for these kinds of devices come from the challenges of expanding the electric-grid infrastructure.  The grid is not doing so well these days.  The grid needs to run more efficiently and more cost effectively, the distribution network needs to be expanded, and reliability and security need to be better.  Also, here in California, we need to get rid of the failure modes that start wildfires and burn down forests. And big technical challenges remain as we develop solutions that have the agility and stability to deal with low-cost distributed energy resources (DER) such as wind and solar.

The good news is that most of the grid’s problems can be fixed by just using IIoT to properly control and manage the system.  The even better news is that there are some pretty sharp declines in costs on the power-generation side, so the spiraling costs could actually be pushed back down. Going after some of the easy-to-solve problems on the transmission side will likely have a big financial impact.  Technology is going through very fundamental shifts; this is something that has been a hundred years in the making, but now renewable energy and other factors are redefining the grid.

Infrastructure fails way too often. So what do you mean by helping the grid to do its job?

Let’s focus on a key example – power transformers.  At first glance, these are just boring hundred-ton hunks of iron that generally don’t even contain any silicon.  That’s deceptive.  The physics of effectively monitoring and managing their condition with cost-effective IIoT devices turns out to be a sweet problem.

Now, why should this matter to you? Well, the cost implications are significant.   Each major transformer failure leads to extensive disruptions and subsequent downstream consequences. The good news is that when we deploy IIoT that does the obvious diagnostic homework, we can easily identify which ones are likely to fail next.

By doing so, we can transform an expensive catastrophe into a more affordable scheduled maintenance event. This is the ultimate goal. How do we achieve this? By detecting the failure of an insulator. Insulators have a service life of up to half a century or more, enduring high levels of stress, particularly in harsh climates like my home state of Arizona. As an insulator approaches failure, it sends warning signs, indicating its impending demise. By leveraging our understanding of plasma physics, we can recognize these signals.

These signals manifest as electrical impulses that can be captured from the associated power lines. It’s not a complex task – the main thing we need is a reliable analog-to-digital converter capable of sampling at around 100 megasamples per second with a precision of approximately 14 bits.  Also required is an IIoT system that will be described below.
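A quick back-of-the-envelope calculation on that ADC spec (assuming continuous sampling with no compression) shows why the raw stream cannot simply be shipped off to a data center, which is the motivation for the edge processing discussed below:

```python
sample_rate = 100e6    # samples per second, as quoted above
bits_per_sample = 14   # resolution, as quoted above

raw_bps = sample_rate * bits_per_sample          # raw bit rate of the ADC stream
tb_per_hour = raw_bps / 8 * 3600 / 1e12          # terabytes generated per hour
print(f"raw stream: {raw_bps/1e9:.1f} Gb/s, about {tb_per_hour:.2f} TB per hour")
# -> roughly 1.4 Gb/s, ~0.6 TB every hour per sensing point: far too much to
#    backhaul continuously to a distant data center
```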

In a production run of a few hundred thousand units, the cost per unit can be reduced to about a thousand dollars. Considering the financial consequences of transformer failures, which are estimated at approximately $15 million per incident according to ABB, one of the leading transformer manufacturers, the cumulative impact of one in every 200 transformers failing annually is substantial.
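To make the economics concrete, here is the expected-value arithmetic behind the figures quoted above (it deliberately ignores detection rates, installation, and maintenance costs):

```python
failure_cost = 15_000_000       # $ per major transformer failure (ABB estimate cited above)
annual_failure_rate = 1 / 200   # one in every 200 transformers failing per year
monitor_cost = 1_000            # $ per IIoT monitoring unit at volume, as quoted above

expected_annual_loss = failure_cost * annual_failure_rate
print(f"expected annual loss per transformer: ${expected_annual_loss:,.0f}")
print(f"one-time monitor cost as a share of that: {monitor_cost / expected_annual_loss:.1%}")
# -> $75,000 of expected loss per transformer per year versus a ~$1,000 monitor
```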

To address this, we propose investing a small fraction of the cost of a single transformer failure on an insurance policy, or rather an Industrial Internet of Things solution. This IIoT solution is capable of detecting the precursors to such failures and relaying this information to a central control center. With this system in place, we can organize a prompt and efficient response.

In conclusion, by leveraging advanced sensor technology and IIoT solutions, we have the potential to significantly mitigate the financial and operational risks associated with power transformer failures. Through proactive monitoring and early detection of failure precursors, we can transform these potential disasters into manageable maintenance events.

This is kind of old school technology, power grids, meet new school, IoT sensors. What tech trends are you leveraging?

Chiplets, energy harvesting, and the cognitive edge are some of the trends that we leverage. Chiplets facilitate cost-effective heterogeneous integration of the odd mix of technologies required for the needed IIoT. Energy harvesting is often the only practical solution that meets the cost and longevity constraints of our applications.  The service lifetime of much of the infrastructure that needs to be monitored and managed by our IIoT devices is very long.  Commonly it needs to exceed half a century.  The notion of meeting such requirements with batteries that ‘only’ last a decade is a complete non-starter.  Magnetic-field energy harvesting (MFEH) turns out to be a good solution for many of the use cases we address.

So what is the cognitive edge and why is this essential to you?

It is essential to process data at the edge. The need becomes evident when considering the cost of transferring a terabyte of data to a distant data center, which includes significant transportation expenses. Ideally, all calculations should be performed at the sensing point itself, following a hierarchical approach.

You can also think of this as a bio-inspired architecture.  The partition for distributing the processing workload in hardware is quite similar to the body’s partition of processing, which puts, for example, a lot of the processing of vision into biological interconnect between the photoreceptors in your retina and the neurons in your brain. Some processing is done at the sensing point, while additional processing can take place at the gateway and other levels.  Successive layers of processing lead to successive layers of data compression.
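A minimal sketch of that hierarchy, with the stages, thresholds, and reduction ratios invented purely for illustration (this is not Siloxit's actual partitioning):

```python
import numpy as np

def sensor_stage(raw):
    """At the sensing point: keep only impulses above a noise threshold."""
    return raw[np.abs(raw) > 3 * raw.std()]

def gateway_stage(events):
    """At the gateway: compress the event list into a few summary statistics."""
    peak = float(np.max(np.abs(events))) if len(events) else 0.0
    return {"event_count": len(events), "peak": peak}

def control_center_stage(summaries):
    """Centrally: flag assets whose event activity looks anomalous."""
    return [s for s in summaries if s["event_count"] > 100]

raw = np.random.randn(1_000_000)                 # stand-in for one capture window
summary = gateway_stage(sensor_stage(raw))
print(summary)   # megabytes of raw samples reduced to a couple of numbers
```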

Cognitive-edge architecture brings additional benefits.  It allows for better control over communication costs and enhances security. Currently, the security of the grid is a major concern due to frequent hacking attempts. However, existing efforts in this regard have proven inadequate. Therefore, our perspective is that specific applications, such as ensuring the integrity of transformers, would benefit from reevaluating the entire process from scratch. This doesn’t imply discarding the existing infrastructure and legacy grid; rather, it means incorporating new components that are not reliant on the ineffective elements of the old system. For instance, leveraging advanced communication chips or low earth orbit satellites, which are already starting to become available, can greatly enhance the system’s capabilities.

Artificial Intelligence is essential for many of the applications we address.  Most engineers and system architects are starting to realize the huge benefits that accrue when you distribute processing rather than just sending all the data from sensors to some central signal-processing block.

What big infrastructure development projects are affected by these developments?

Let’s begin by emphasizing the significance of these projects, even if they are not big in the sense that they always grab major headlines. An example worth highlighting is the recent press release from Iberdrola, a prominent Spanish company in the grid sector. They announced their commitment to invest half a billion dollars in expanding transmission line capacity in Brazil. While individual projects may not seem monumental, the cumulative effect of such opportunities can be substantial.

These developments involve the installation of thousands of kilometers of wire, alongside various accompanying components. The challenge lies in managing an increasingly complex grid with each new generation. Furthermore, the rise of affordable, renewable energy sources adds another layer of complexity to the business landscape. To achieve cost-efficiency, it is crucial to have an agile grid that can adapt to the fluctuations in power supply from sources like wind, solar, and emerging forms of nuclear energy.

Brazil, in particular, is experiencing substantial growth, constructing thousands of kilometers of new transmission lines. These installations must withstand challenging environmental conditions, ensuring both reliability and cost-effectiveness. Additionally, they need to be safeguarded against potential threats, including acts of vandalism or unintended failures caused by external factors. The key to achieving this lies in implementing more advanced condition monitoring systems. Siloxit aims to specialize in providing highly effective condition monitoring solutions while seamlessly integrating them into the communication networks and management structures of the grid.

Why did you choose to work with Silicon Catalyst?

When we decided to build IIoT that would help the grid do its job, we knew we would have to dance with elephants.  In the early days we were just half a dozen folks in a little startup explaining to billion-dollar customers that they should embrace our out-of-the-box thinking about better solutions for the problems they have been working on for a century.  When you first think about how a dozen folks in a little startup can actually interact with the people and institutions that are driving these big-league developments, the answers are not immediately obvious.  Shortly after we founded Siloxit, we joined Silicon Catalyst.  It has been a wonderful experience to see how Silicon Catalyst has helped us connect the dots, allowing us to partner with TSMC and ST Microelectronics and IMEC and Leti and many others.  The list of partners we’ve had the good fortune to work with thanks to Silicon Catalyst is just overwhelming.

I saw that you have a partnership coming up with YorChip. Is there something there you would like to talk about?

Our partnership with YorChip is very exciting. I have collaborated with Kash Johal, the CEO of YorChip, on various projects for many years. It is clear that chiplets offer significant advantages for specific use cases. Siloxit faces challenges that encompass multiple disciplines, requiring us to bring together various resources like threshold sensors, security widgets, processor widgets, A-to-D conversion solutions, and communication components within a very compact module while ensuring cost-effectiveness.

The contemporary approach is to leverage chiplets, which harness the best attributes of highly efficient technologies but in a size format that avoids unnecessary overprovisioning. For instance, an A-to-D conversion task doesn’t require a square centimeter-sized chip; a millimeter-sized chip may suffice. Furthermore, advancements in packaging technology, particularly in sensor, processor chip, A-to-D converter, and antenna domains, enable seamless integration of these chiplets. It’s not merely about assembling IP components from different catalogs but designing chiplets that effectively harmonize with the entire system.

Security is one critical aspect that demands special attention, requiring careful consideration at the logic design and device design levels.

By successfully integrating these chiplets, while utilizing only a small percentage of the available space, we achieve a significant outcome that is far from trivial but well worth the effort. YorChip’s focus aligns precisely with these objectives, and we anticipate incorporating this technology into more Siloxit products moving forward.

Also Read:

CEO Interview: Rob Gwynne of QPT

CEO Interview: Ashraf Takla of Mixel

CEO Interview: Dr. Sean Wei of Easy-Logic


Alphawave Semi Visit at #60DAC
by Daniel Payne on 08-03-2023 at 10:00 am

On Wednesday at #60DAC I met Sudhir Mallya, Sr. VP Corporate Marketing at Alphawave Semi, to get an update on what’s been happening at their IP company and on industry trends. The company’s tagline is: Accelerating the Connected World; they have IP for connectivity, offer chiplet solutions, and even provide custom silicon optimized per application. The company recently announced a 3nm tape-out on July 10th to prove out chiplets for generative AI applications that use High Bandwidth Memory 3 (HBM3) PHY and Universal Chiplet Interconnect Express (UCIe) PHY IPs. The UCIe PHY IP supports die-to-die data rates of 24Gbps per lane. There’s been tremendous interest in custom silicon and chiplets, driven by the data explosion from LLM and ChatGPT applications that require fast connectivity.
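For a sense of scale, a short calculation using the quoted per-lane rate (the lane count below is hypothetical, chosen only for illustration, not from the article):

```python
lane_rate_gbps = 24    # UCIe die-to-die rate per lane, as quoted above
lanes = 64             # hypothetical interface width, for illustration only

raw_gbps = lane_rate_gbps * lanes
print(f"raw die-to-die bandwidth: {raw_gbps} Gb/s ({raw_gbps / 8:.0f} GB/s)")
# -> 1536 Gb/s, i.e. 192 GB/s of raw bandwidth for a 64-lane interface
```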

Alphawave Semi, 3nm Eye Diagram

All of that data getting into and out of chiplets requires high-speed, low-power data connectivity. There have been a couple of chiplet interconnect approaches, with Bunch of Wires (BOW) and UCIe as the two most predominant, UCIe being the more standardized for chiplets. BOW was there at the start in 2020 as a die-to-die interface.

Praveen Vaidyanathan, VP at Micron, noted that, “The tape-out of Alphawave Semi’s HBM3 solution in TSMC’s most advanced 3nm process is an exciting new milestone. It allows cloud service providers to leverage Alphawave Semi’s HBM3 IP subsystems and custom silicon capabilities to accelerate AI workloads in next-generation data center infrastructure.”

At DAC they were demonstrating four things:

  • Chiplets
  • HBM3
  • 3nm 112Gbps XLR, PAM4 SerDes
  • PCIe Gen 6 with PAM4

In the 112G Ethernet demo they were showing their test chip using a loopback test; the longer the interconnect, the more delay it introduces, and they wanted to max out the simulated length. A Bit Error Rate (BER) of 2e-8 was measured, while the actual spec is 1e-4, so their results are about four orders of magnitude better than the spec. That BER number can even be brought down to 1e-10. The PAM4 eye diagram was prominently displayed in the demo. The test chip is multi-standard, so Ethernet and PCIe are both supported for all generations. Four lanes are possible, although only one was being shown in the demo.
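A quick check of the orders-of-magnitude claim, taking the quoted numbers at face value:

```python
import math

measured_ber = 2e-8   # measured in the loopback demo
spec_ber = 1e-4       # the quoted spec limit

margin = math.log10(spec_ber / measured_ber)
print(f"measured BER is {margin:.1f} orders of magnitude below the spec limit")
# -> about 3.7, i.e. roughly four orders of magnitude better than required
```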

TSMC was the foundry for the 3nm test chip. Alphawave Semi used a DSP-based SerDes and has a rich history of SerDes IP, with sufficient margin designed in. The PCIe 7.0 specification calls for a 128 GT/s raw bit rate, with the final spec targeted for release in 2025.

Generative AI system designs require a complete IP subsystem with connectivity for chiplets, custom silicon and advanced packaging. This is all coming together in the industry now.

Alphawave Semi has about 700 people worldwide now, and just one year ago it was 400 people, so that’s a lot of growth, fueled by connectivity IP demand. The company has sites in Canada, USA, UK, Israel, India, China, Taiwan and South Korea.

Sudhir talked about the trends he sees this year: chiplets continue to be a big trend, and the ecosystem to support them is really coming together. IP vendors, foundries, designers and standards groups are actively involved in making chiplets a reality. The use of chiplets calls for new methodologies that support concurrent design across both the electrical and thermal domains.

Summary

The 3nm demo from Alphawave Semi at DAC was pretty impressive, and the 112G PAM4 eye diagram looked open and clean. Most of DAC is filled with EDA vendors, but it’s always refreshing to witness real silicon IP operating in a booth.

Related Blogs


Accellera and Clock Domain Crossing at #60DAC
by Daniel Payne on 08-02-2023 at 10:00 am

Accellera sponsored a luncheon panel discussion at #60DAC, so I registered and attended to learn more about one of the newest working groups for Clock Domain Crossing (CDC). An overview of Accellera was provided by Lu Dai, then the panel discussion was moderated by Paul McLellan of Cadence, with the following panel members:

Accellera Panel, Clock Domain Crossing

Panel Opening Remarks

Anupam Bakshi – has been with Agnisys since 2007, and before that with Gateway Design Automation – where Verilog was invented. Agnisys has been offering CDC automation tools and is a member of the Accellera working group on CDC standards. He recommended avoiding metastability by using a spec-driven synchronization approach, along with correct-by-construction design automation. This approach uses declarative attributes; engineers then simply run the tool.
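There is no standard CDC intent format yet (drafting one is precisely the working group's charter), but a spec-driven, declarative flow of the kind described here might capture crossings along these lines; every field name below is hypothetical and invented for illustration:

```python
# Hypothetical, illustrative CDC intent description. This is not an Accellera
# or Agnisys format; all field names are invented for the example.
cdc_intent = {
    "clocks": {
        "clk_core": {"period_ns": 1.0},
        "clk_io":   {"period_ns": 4.0, "async_to": ["clk_core"]},
    },
    "crossings": [
        {   # single-bit control signal: declare a two-flop synchronizer
            "signal": "u_io.status_flag",
            "from_clock": "clk_core",
            "to_clock": "clk_io",
            "scheme": "two_flop_sync",
        },
        {   # multi-bit data bus: declare an asynchronous FIFO
            "signal": "u_io.rx_data[31:0]",
            "from_clock": "clk_io",
            "to_clock": "clk_core",
            "scheme": "async_fifo",
        },
    ],
}

# A correct-by-construction tool would read a spec like this, generate the
# declared synchronizers, and emit assertions that the intent is honored.
```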

Frank Schirrmeister – he’s the VP of business development at Arteris, and before that at Cadence. Arteris has a Network-on-Chip (NoC) focus, and they acquired companies like Semifore to gain system IP and Magillem to add ISO 26262 and IP-XACT expertise. Frank recommends the generation of registers from a higher-level specification along with CDC logic. As IP for an SoC is provided by multiple vendors, it makes sense to have a CDC standard to ensure that all integrated IP will work reliably, so there’s a need for a common intent language.

Dammy Olopade – he’s the working group chair for CDC and a principal engineer at Intel. The new working group was proposed in September 2022, then approved in December, and there are now 96 participants from 22 companies so far. The draft LRM for the CDC is due around December 2023.

Ping Yeung – at Nvidia he is a Senior Manager of Formal Verification. Today, bringing up IP blocks with proper CDC checks is very tedious and time-consuming, so a CDC standard with hierarchy is welcomed. It will allow engineers to focus on CDC at the top level only. They really want to mix internal and external IP blocks easily. Assertions will ensure that models are used correctly, to verify interface properties, constraints and assumptions.

Q&A

Q: Paul – why a CDC standard now?

A: Dammy – at one time all lines of code came from one design team; that is no longer the case. The new model has IP blocks from many different vendors. With so many IP vendors and IP users, a CDC standard is required.

A: Frank – Systemic complexity has grown too large, so a CDC standard is required to keep issues in control. The implementation details now impact the architecture choices. A common language and vocabulary become more important now.

A: Anupam – customers are requesting more CDC validation as part of their IP integration challenges, so we have to act now.

A: Ping – also internal IP blocks must be checked to see if they are being reused properly. What can we do if the original designer is gone?

A: Frank – Since 2000 we’ve been doing abstraction at the same levels, but now we can abstract register generation automatically from a high-level. We now have virtual platforms for HW and SW design.

Q: Paul – what about asynchronous input signals to a chip?

A: Dammy – there has to be a specification for design intentions. What is the spec? How do we design to meet the spec? The first spec should come along in the September time frame. We need to make sure that our clock never glitches, to keep it a synchronous design.

A: Frank – we know how to handle that challenge. We see different clock domains and then insert the required logic; however, the validation part is a focus of the new working group. If your PLLs start to jitter, then video and audio can drift out of sync.

A: Ping – we know how to handle asynchronous signals with known solutions. EDA tools can find CDC domain crossings. When the interfaces have been verified per IP block, how do we capture that verification at the top level, instead of re-verifying all over again?

Q: Paul – how many clock domains are being used today?

A: Anupam – 3-10 is a typical range.

A: Frank – hundreds have been seen. Even the number of re-used IP blocks can be in the hundreds now. Formal verification can be used at the full-chip level, but different tools return different results, so some violations are false positives. IP integrators and IP vendors need to have the same understanding of clock domains.

Q: Paul – what’s next for IP integration after CDC?

A: Ping – Reset domain crossing needs to be standardized.

A: Anupam – what about correct by design approaches? The specification has not been rigorous enough.

A: Frank – integration issues with IP is still a tough issue. What about CDC and safety issues interacting together? Can we ever go beyond RTL abstractions?

A: Anupam – what about FSM and datapath standards? Standards are only at the interfaces for now.

A: Frank – what about MBSE using SysML? Can we get to that level?

A: Dammy – if we already have a working system, then let’s keep the working EDA tools and add innovative new tools. Power and performance challenges cannot be easily solved with today’s tools.

Q: Dennis Brophy – what about manufacturing issues and using chiplets?

A: Frank – I asked at UCIe about PHYs working together. Are there any plugfest possibilities? It’s a new layer of complexity.

A: Dammy – there is no interoperable language for CDC now. So we should follow a more correct by construction approach to enable chiplets.

Summary

This panel discussion was fast-paced and the engineers in the audience were actively asking questions and approaching the panelists after the discussion to get their private questions answered. CDC standardization is moving along, and interested engineers are encouraged to join in the working group discussion sessions. If your company is not already an Accellera member, please visit https://accellera.org/about/join for more information on how to join and participate.

Related Blogs


Application-Specific Lithography: Via Separation for 5nm and Beyond
by Fred Chen on 08-02-2023 at 8:00 am

With metal interconnect pitches shrinking in advanced technology nodes, the center-to-center (C2C) separations between vias are also expected to shrink. For a 5/4nm node minimum metal pitch of 28 nm, we should expect vias separated by 40 nm (Figure 1a). Projecting to 3nm, a metal pitch of 24 nm should lead us to expect vias separated by 34 nm (Figure 1b).

Figure 1. (a) Left: 4nm 28 nm pitch M2 and M3 via connections may be expected to have center-to-center distance of 40 nm. (b) Right: 3nm 24 nm pitch M2 and M3 via connections may be expected to have center-to-center distance of 34 nm.
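The quoted separations are consistent with vias placed one minimum pitch apart in each direction, giving a diagonal center-to-center distance of pitch times the square root of two (an interpretation inferred from Figure 1, not stated explicitly in the text):

```python
import math

for pitch_nm in (28, 24):
    c2c = pitch_nm * math.sqrt(2)   # diagonal center-to-center distance
    print(f"{pitch_nm} nm metal pitch -> ~{c2c:.0f} nm via separation")
# -> 28 nm pitch gives ~40 nm and 24 nm pitch gives ~34 nm, matching the text
```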

Is it really straightforward to do this by EUV?

Conventional EUV Patterning

Conventional EUV patterning would use a current 0.33NA EUV system to image spots smaller than 20 nm. However, for such an optical system, the spot is already limited to the image of the point spread function (PSF), which, after resist absorption (e.g., 20%), has a highly stochastic cross-section profile (Figure 2).

Figure 2. Cross-section of point spread function in 20% absorbing resist. The stochastic characteristic is very apparent. The red dotted line indicates the classical non-stochastic image.

The real limitations of the point spread function come from putting two of them together [1]. At close enough distances, the merging behavior is manifested as image slope reduction (degraded contrast) between the two spots (Figure 3). This is accompanied by a change in the distance between the expected spot centers in the image, and by stochastic printing between the two spots.

Figure 3. (a) Left: Absorbed photon number per sq. nm. for two point spread functions placed 36 nm apart. Note that the actual image C2C distance is 40 nm. (b) Right: Absorbed photon number per sq. nm. for two point spread functions placed 34 nm apart. The red dotted lines indicate the classical, non-stochastic images.

This basically means that although two spots will appear, the chance of defects is too high to print them reliably in a high-throughput exposure. It would be safer to treat such placements as forbidden layout.
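A minimal Monte Carlo sketch of the effect, not the author's simulation: it assumes Gaussian point spread functions, an arbitrary peak absorbed dose of 20 photons/nm², and per-pixel Poisson statistics, and only illustrates how shot noise can wash out the dip between two closely spaced spots:

```python
import numpy as np

sigma = 10.0        # nm, assumed Gaussian PSF width (illustrative value)
peak = 20.0         # photons/nm^2 absorbed at each spot center (assumed)
separation = 34.0   # nm, center-to-center placement of the two spots

x = np.arange(-60.0, 60.0, 1.0)   # 1 nm pixels along the cut line
mean = peak * (np.exp(-(x + separation / 2) ** 2 / (2 * sigma ** 2)) +
               np.exp(-(x - separation / 2) ** 2 / (2 * sigma ** 2)))
trial = np.random.poisson(mean)   # one stochastic exposure of the cut line

center = np.argmin(np.abs(x))     # pixel midway between the two spots
print(f"classical peak/valley dose ratio: {mean.max() / mean[center]:.2f}")
print(f"stochastic valley count in this trial: {trial[center]} photons/nm^2")
# With only ~10 photons/nm^2 expected in the valley, the count fluctuates
# strongly from trial to trial, which is the stochastic merging risk above.
```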

Alternative Patterning

Going to 0.55NA EUV worsens the stochastic behavior because of much lower resist absorption (e.g., 10%), due to the requirement for a much thinner resist from the severely limited depth of focus [2]. Such systems are also not yet available, so the only remaining alternative is to print the two spots individually in separate exposures, i.e., double patterning [3]. Moreover, given that the EUV point spread function already has a significant stochastic distortion (see Figure 2), it would be better to print a wider spot in each exposure (even by DUV) and apply a post-litho shrink [4].

References

[1] F. Chen, Stochastic Behavior of the Point Spread Function in EUV Lithography, https://www.youtube.com/watch?v=2tgojJ0QrM8

[2] D. Xu et al., “Feasibility of logic metal scaling with 0.55NA EUV single patterning,” Proc. SPIE 12494, 124940M (2023).

[3] F. Chen, Lithography Resolution Limits: Paired Features, https://www.linkedin.com/pulse/lithography-resolution-limits-paired-features-frederick-chen/

[4] H. Yaegashi et al., “Enabled Scaling Capability with Self-aligned Multiple patterning process,” J. Photopolym. Sci. Tech. 27, 491 (2014), https://www.jstage.jst.go.jp/article/photopolymer/27/4/27_491/_pdf

This article originally appeared in LinkedIn Pulse: Application-Specific Lithography: Via Separation for 5nm and Beyond

Also Read:

NILS Enhancement with Higher Transmission Phase-Shift Masks

Assessing EUV Wafer Output: 2019-2022

Application-Specific Lithography: 28 nm Pitch Two-Dimensional Routing