How to Update Your FPGA Devices with Questa
by Mike Gianfagna on 10-29-2024 at 6:00 am

It’s a fact of life that technology marches on. Older process nodes get replaced by newer ones. As a result, ASSPs and FPGAs become obsolete, leaving behind large system design investments that need to be redone. Since many of these obsolete designs are performing well in the target application, this re-do task can be particularly vexing. Thanks to advanced technology offered by Siemens Digital Industries Software, there is now another way around this problem. Using the Questa™ Equivalent FPGA retargeting flow, the work invested in obsolete designs no longer needs to go to waste. Siemens recently published a white paper that takes you through the entire process. A link is coming, but first let’s look at the big picture and how to update your FPGA devices with Questa.

An Overview of the Problem and the Flow

The goal of the flow presented in the white paper is to extend the design life of obsolete FPGAs by migrating those designs to newer technologies. This way, the design work can be reused with the added benefit of taking advantage of the latest safety, security, and power-saving features of newer FPGAs. The fundamental (and correct) premise here is that retargeting a working design to a newer technology takes far less time and effort than re-designing the application from scratch. Siemens’ Questa Equivalent FPGA retargeting solution is at the center of this approach.

Additional motivation includes process simplification and cost reduction due to end-of-life supply limitations and minimization of counterfeit risks that may be present in older supply chains. The proposed solution takes the final netlist from the original design and generates a functionally equivalent final netlist in a modern target FPGA technology. Months of engineering time can be eliminated because designers do not have to recreate the RTL for reimplementation on a modern FPGA device.

A high-level view of this process is shown in the graphic at the top of this post. Digging a bit deeper, the figure below provides more details of the retargeting methodology presented in the white paper.

Retargeting methodology

Details of Use Cases

Not all design situations are the same. Recognizing that, the Siemens white paper presents three use cases. You will get all the information needed to build your migration strategy in the white paper – again, a link is coming. For now, let’s briefly examine the three scenarios that are discussed.

Use Case 1: Equivalence with RTL: In addition to proving the obsolete netlist against the new netlist, Questa Equivalent FPGA can be used to prove the functional equivalence of the RTL to the obsolete netlist, provided the netlist has not been manually modified to meet the requirements of the original specification. A complete description of how to examine the design to identify any additional inputs needed, and how to set up this flow, is covered.

Use Case 2: RTL Retargeting: If the RTL for the obsolete device netlist is available, and you decide to use that RTL for synthesis and retargeting and want to verify the old device netlist against the new netlist (synthesized from the RTL), this is the flow to use.

A high-level summary of this flow includes the steps below; a minimal script sketch of the sequence follows the list:

  • Synthesize the RTL for the new device
  • Create (if necessary) and apply formal models for the new device netlist
  • Prove functional equivalence for RTL versus the new device netlist using Questa Equivalent FPGA
  • Create (if necessary) and apply formal models for the obsolete device netlist
  • Prove functional equivalence of the obsolete device netlist versus the modern device netlist using Questa Equivalent FPGA
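
To make the sequence concrete, here is a minimal Python sketch of how such a flow might be scripted. Every executable name and flag in it is a placeholder invented for illustration; the actual Questa Equivalent FPGA invocations (and the formal model setup) are described in the white paper and the tool documentation.

```python
# Hypothetical driver for the Use Case 2 flow. The executables and
# flags below ("vendor_synth", "qefpga", "-spec", "-impl") are
# placeholders, NOT the real Questa Equivalent FPGA command line.
import subprocess

def step(name: str, cmd: list[str]) -> None:
    """Run one flow step and stop the flow on the first failure."""
    print(f"== {name} ==")
    subprocess.run(cmd, check=True)

# 1. Synthesize the RTL for the new device.
step("synthesize RTL for new device",
     ["vendor_synth", "-top", "my_design", "rtl/my_design.v"])

# 2/3. Apply formal models for the new device netlist (assumed here to
#      be part of the tool setup), then prove RTL vs. new netlist.
step("prove RTL vs. new netlist",
     ["qefpga", "-spec", "rtl/my_design.v", "-impl", "new_netlist.v"])

# 4/5. Apply formal models for the obsolete netlist, then prove
#      obsolete netlist vs. new netlist.
step("prove obsolete netlist vs. new netlist",
     ["qefpga", "-spec", "old_netlist.v", "-impl", "new_netlist.v"])
```

Use Case 3 below follows the same pattern, simply adding equivalence proofs around the IP replacement step.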

Again, all the details about how to examine the design and identify any needed information and how to set up the overall flow are covered in the white paper.

Use Case 3: RTL-RTL Retargeting: RTL-RTL retargeting can be used if the RTL of the obsolete device netlist has IP that is no longer available for the new device, and the obsolete IP can be replaced with up-to-date IP with similar functionality (or functionally equivalent logic).

A high-level summary of this flow includes:

  • Replace the obsolete IP with similar updated IP or equivalent logic
  • Create (if necessary) and apply formal models for the obsolete device IP
  • Create (if necessary) and apply formal models for the new device IP
  • Prove functional equivalence for RTL with obsolete IP versus RTL with updated IP (or equivalent logic) using Questa Equivalent FPGA
  • Synthesize the RTL for the new device
  • Create (if necessary) and apply formal models for the new device netlist
  • Prove functional equivalence for RTL with updated IP or equivalent logic versus the new device netlist using Questa Equivalent FPGA
  • Create (if necessary) and apply formal models for the obsolete device netlist
  • Prove functional equivalence for RTL with obsolete IP versus the obsolete device netlist using Questa Equivalent FPGA

As before, all the details of how to examine the design to identify missing data, how to create that data, and how to set up the flow are all covered in the white paper.

To Learn More

In most cases, reuse makes more sense than redesign. If you are faced with this decision, I highly recommend you find out how Siemens Digital Industries Software can help. You can download the Questa Equivalent FPGA Retargeting Flow white paper here. And that’s how to update your FPGA devices with Questa.

Also Read:

The RISC-V and Open-Source Functional Verification Challenge

Prioritize Short Isolation for Faster SoC Verification

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs


Overcoming obstacles with mixed-signal and analog design integration
by Chris Morrison on 10-28-2024 at 10:00 am

Mixed-signal and analog design are key aspects of modern electronics. Every chip incorporates some form of analog IP, as even digital logic is dependent on analog signals for critical functions. Many digital design engineers are known to be uncomfortable with the prospect of integrating analog components. However, the current shortage of analog design engineers means that more digital designers are having to take on this daunting task. Here, we address the main integration issues and look at how recent developments in analog IP technology from Agile Analog are helping to make analog design far less complex, costly and time-consuming.

Traditional mixed-signal and analog design integration issues

Integrating analog and digital functions can result in a complicated design. Ensuring that a chip design meets all requirements can be challenging. Digital design engineers have often relied on reusable digital IP blocks, but the opportunity for design reuse has been limited with analog and mixed-signal designs as these usually involve bespoke solutions for each project. Mixed-signal circuits require close attention to physical layout, as well as correct component placement for compactness and peak performance. It’s also crucial to manage voltage levels, signal levels and signal processing between analog and digital components to enable seamless functionality.

Specialized techniques are needed for controlling noise and interference in mixed-signal systems, because of the sensitive nature of analog circuits and potentially noisy digital elements. Balancing power consumption and temperature regulation in mixed-signal systems adds an extra degree of difficulty, as digital and analog components may have different power requirements. Simulation demands a high level of precision to account for the continuous range of possible values. Testing traditional mixed-signal systems can lead to further challenges as this can involve expensive equipment, as well as time-intensive verification processes that may be alien to digital engineers.

Embracing advances in analog IP

Overcoming the obstacles with traditional mixed-signal and analog design integration can be tricky. Fortunately, following new advances in the analog IP sector, there is now an alternative fresh approach. At Agile Analog, we can automatically generate analog IP that meets the customer’s exact specifications, for any process and foundry, from legacy nodes right up to advanced nodes. Parameters such as accuracy, power consumption, die area, sensitivity and speed can be optimized to suit the precise requirements of the application.

The Agile Analog team is fully focused on expanding our analog IP product portfolio and helping chip design engineers by simplifying mixed-signal and analog design integration. Agile Analog is changing the landscape of analog IP and transforming the way analog circuits are developed, disrupting analog design methodologies that have remained the same for decades and removing the hassle, delay and expense associated with conventional custom IP. We can also regenerate analog IP using a foundry PDK, so it is straightforward to make modifications. For example, it is not necessary to manually port all analog circuits when moving to a smaller process node, as Agile Analog IP can simply be regenerated.

Our growing range of customizable analog IP solutions covers data conversion, power management, IC monitoring, security and always-on IP, with a vast array of applications including HPC (High Performance Computing), IoT, AI and security. All Agile Analog IP comes with a comprehensive set of IP deliverables – including test specifications, documentation, simulation outputs and verification models. This digitally wrapped IP can be seamlessly integrated into any SoC, substantially cutting the complexity, constraints, risks and costs of analog design. Speeding up the time-to-design will help to accelerate the time-to-market for new semiconductor devices and encourage further innovation across the global semiconductor industry.

Learn more at www.agileanalog.com

Chris Morrison has over 15 years’ experience delivering innovative analog, digital, power management and audio solutions for international electronics companies, and developing strong relationships with key partners across the semiconductor industry. Currently he is the Director of Product Marketing at Agile Analog, the customizable analog IP company. Previously he held engineering positions, including 10 years at Dialog Semiconductor (now part of Renesas). Chris has an engineering degree in computer and electronic systems from the University of Strathclyde and a master’s degree in system level integration from the University of Edinburgh.

Chris Morrison, Director of Product Marketing, Agile Analog

Also Read:

CEO Interview: Barry Paterson of Agile Analog

International Women’s Day with Christelle Faucon VP Sales Agile Analog

2024 Outlook with Chris Morrison of Agile Analog


Emerging Growth Opportunity for Women in AI
by Bernard Murphy on 10-28-2024 at 6:00 am

I was invited to the Fem.AI conference in Menlo Park, the first sponsored by the Cadence Giving Foundation, with a goal of promoting increased participation of women in the tech sector, especially in AI. Not just for equity, but also to grow the number of people entering the tech/AI workforce. There are countless surveys showing that demand for such talent is fast outstripping supply. Equally interesting, men and women seem to bring complementary skills to tech, especially to AI. More women won’t just add more horsepower, they can also add a new competitive edge.

What follows is based on talks I heard mostly in the first half of the day. Schedule conflicts prevented me from attending the second half.

Role models

This event was so stuffed with content and high-profile speakers that I struggled at first to decide my takeaways. Listing all the speakers and what they had to say would make for a very long and boring blog, so I’m taking a different path based on an analogy the MC (Deborah Norville, anchor of Inside Edition) used to kick off the day. She talked about a pipeline for women entering tech: imagining roles in tech, starting in academia, progressing to first jobs and beyond. She reminded us that the reason we don’t see more women in tech is that this pipeline is very leaky.

Many of us find first inspiration for a career path in a role model, especially a high-profile role model, someone in whom we can imagine ourselves. In engineering this is just as true for boys as for girls, but girls don’t see as many identifiable role models as boys do. More are now appearing but are still not sufficiently championed as role models.

We also need to correct girls’ own perception that engineering and software careers are not for them. If they don’t see fun and inspiration in these areas driven by like-minded activity among their peers, they won’t look for role models with those characteristics. Girls Who Code and similar organizations are making an important dent in this problem. Fem.AI and others are aiming (among other things) to raise the visibility of role models who can inspire girls looking for that initial spark. The speakers at this event were a good indication of the caliber of inspiring examples we should promote more actively.

Fixing the leaky pipeline

Start with school programs. A big barrier to those interested in STEM is the math hurdle. I’m told a belief among girls that “math is hard” starts at age 7. Females don’t lack genetic wiring for math, and they are certainly not alone in this challenge. My father (an English teacher) hated math, but he liked trigonometry because he understood how it can be used to solve real world problems like figuring out the height of a building. Relevance to real world problems is an important clue.

At the college level, as many women as men enter as computer science majors, but 45% of them change majors or leave school before graduation. The consensus here was that they don’t feel they belong. One solution already in place at multiple colleges is mentorship and allyship programs, where an undergrad can turn to a grad student for guidance or support through a rough patch in self-confidence. (These programs are fostered by encouraging grad students to develop their own leadership skills through mentoring, and they are arguably just as valuable and accessible for male undergrads.)

A second solution is blended majors, combining CS and AI with, say, economics, or brain and cognitive sciences, recognizing that women are often drawn to areas where they can see impact. The Dean of Engineering from MIT said she saw female engagement in such programs rising to 50-60%.

Working at the interfaces between disciplines was a recurring theme throughout the Summit. Accuracy, fairness and accountability are huge concerns in AI deployments today, and these can’t be addressed solely within CS/AI. One research program I heard about involved a collaboration between a law school and AI researchers. Another (private discussion) was with a senior at Notre Dame leading an all-female team building a satellite – a very impressive multi-disciplinary project with an easily relatable goal.

The irrepressible Dr Joy Buolamwini talked about her seminal work on inaccuracies in face recognition software, which has led to major (though not yet complete) regulatory actions in the US, Europe and other countries. It is quite shocking to me that we seem to accept levels of inaccuracy in AI that in any other STEM context would earn an automatic fail. While that may be understandable for high school and research programs, we should demand more of any public deployment affecting safety, finances, policing, or legal decisions, everywhere AI claims it can make our lives easier.

The opportunity for women in AI

The theme I have mentioned above several times, that women lean into areas which have a clear impact on real world needs, led to a very interesting VC panel discussion focused on waves in AI venture investment. We’ve seen the AI boom: large language models, spectacular growth in AI hardware and incredible claims. At the same time most would agree we’re headed into a “trough of disillusionment”. The initial thrill is wearing off.

I get it. I work with a lot of tech teams, including several startups. Invariably they are run by guys, fascinated by their tech, and sure that the world will immediately figure out how to apply their innovation to solve an unlimited number of real-world needs. That’s the way it works with us guys: technology first, figure out a real-world application later.

VCs see the next big wave in AI being problem-centric: start with a domain-specific need, in health care, agriculture, education, wherever, then build a technology solution around that need. Adapt as required to find the best fit between the initial concept and experience with prototypes. This sounds like a perfect fit for the way many women like to work, suggesting a wave where women can lead, maybe even helping men find real applications for their cool tech!

Very interesting series of talks. I look forward to learning more, especially digging deeper into those real-world problems. You can learn more about Fem.AI HERE.


LRCX- Coulda been worse but wasn’t so relief rally- Flattish is better than down
by Robert Maire on 10-27-2024 at 8:00 am

  • Lam put in a good quarter with a flattish guide- still a slow recovery
  • This is better than worst case fears of order drop like ASML
  • China spend is slowing but tech spending increase offsets
  • Relief rally as the market was braced for bad news and got OK news
Lam has an OK, slightly better than in-line quarter with an OK guide….

It coulda been way worse but wasn’t.

It wasn’t a blowout quarter, but at least it wasn’t the disaster that many had been bracing for after the ASML news last week.

Lam came in at $4.17B in revenues and $0.86 in EPS versus expected $4.05B and $0.80 so a “normal” slight beat.

Guidance is for $4.3B ± $300M and EPS of $0.87 ± $0.10, flattish with street expectations of $4.24B and $0.84.

The fact that Lam is looking at flattish to slightly up is way better than the greater than 50% cut in orders ASML reported and the street was fearful of.

China moderating as expected but tech spending is improving.

China moderated from 39% of business to 37% of business, with expectations of falling to about 30% in the December quarter. All this is down from highs in the 40s.

There were a ton of China questions on the conference call as analysts probed over their fears of a coming China collapse but there were no clear answers.

We did hear that tech spending, mainly in DRAM (read that as HBM), was improving while NAND is still languishing.

Still a very slow recovery from a very long and deep downturn

We have been covering the semiconductor industry for decades through many cycles and this is perhaps one of the slowest/longest recoveries we have ever seen the industry experience.

Memory obviously built way too much excess capacity, and we are still experiencing the after effects.

Usually in a downcycle, all spending takes a holiday while capacity gets burned off. In this down cycle, China never stopped spending.

This downcycle was a whiplash cycle started by the COVID crisis which is largely responsible for the severity and the overbuild on the bounce back.

2025 WFE to be up from 2024….but how much?

Lam spoke about 2025 being better than 2024 but would not quantify by how much. Is it 1% better or 50% better? It’s anybody’s guess.

Our guess would be slightly up on the order of 10-15% with China slowing as we exit 2024 and technology spending picking up in foundry & DRAM.

The main issue we see is that with Samsung foundry and Intel not spending much, it’s hard for the rest of the industry to offset that weakness.

The Stocks-Expect a short lived “relief rally”/bounce

The market was expecting news that was likely bad coming out of Lam after the ASML debacle last week.

Lam’s news was more or less in line and OK, nowhere near the disaster it could have been, so we saw the aftermarket bounce in the stock and will likely see a bounce in the equipment stocks overall as investors breathe a sigh of relief.

We would caution investors that it’s clearly not off to the races, as Lam’s lukewarm guidance underscores.

We see a quick bounce then settling in to a slightly lower overall valuation of the semi equipment market as the ASML dark cloud will still hang over the market until proven otherwise.

We would again point out that ASML is the canary in the coal mine of equipment orders as litho tools are always ordered well before other tools due to the long lead time.

It’s also important to point out that the deposition and etch tools that Lam and Applied make are useless without litho tools to print the pattern that is then etched, so there is most certainly a relationship between the number of litho tools and the number of dep and etch tools.

The bottom line is that the industry isn’t going to buy a lot of dep and etch tools while not buying litho tools…. It just doesn’t work that way; that divergence does not exist (at least not for very long).

So be aware…..

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies, with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view compared to other sources.

Also Read:

ASML surprise not a surprise (to us)- Bifurcation- Stocks reset- China headfake

SPIE Monterey- ASML, INTC – High NA Readiness- Bigger Masks/Smaller Features

Samsung Adds to Bad Semiconductor News


Podcast EP255: The Growing Proliferation of Semiconductors and AI in Cars with Amol Borkar
by Daniel Nenni on 10-25-2024 at 10:00 am

Dan is joined by Amol Borkar, Product Marketing Director at Cadence. Since joining in 2018 as a senior product manager, he has led the development of many successful hardware and software products, including Tensilica’s latest Vision 331 and Vision 341 DSPs and 4DR accelerator targeted for various vision, automotive and AI edge applications.

Within Tensilica, he has been responsible for product management, marketing, partnerships and ecosystem growth for the Vision, ConnX and MathX families of DSPs. Previously, he was at Intel’s RealSense group, where he held various positions in engineering, product management and marketing and was responsible for the success of a number of RealSense’s 3D cameras.

Before joining Intel, Borkar developed computer vision-based advanced driver assistance algorithms for self-driving vehicles as part of his Ph.D. thesis.

In this informative discussion, Amol explains his passion for technology and working with customers to achieve the required impact. Dan explores AI proliferation in automotive applications with Amol.

Many of the architectural trends in AI are clearly explained, with examples of use models, challenges and benefits. Examples include the merging of vision and radar processing, the benefits and challenges of sensor fusion and domain-based vs. central compute architectures.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Sean Park of Point2 Technology
by Daniel Nenni on 10-25-2024 at 6:00 am

Sean Park is a seasoned executive with over 25 years of experience in the semiconductor, wireless, and networking markets. Throughout his career, Sean has held several leadership positions at prominent technology companies, including IDT, TeraSquare, and Marvell Semiconductor. As CEO, CTO, and Founder of TeraSquare, Sean was responsible for leading the company’s strategic direction and overseeing its day-to-day operations. He also served as a Director at Marvell, where he provided invaluable guidance and expertise to help the company achieve its goals. He holds a Ph.D. in Electrical Engineering from the University of Washington and also attended Seoul National University.

Tell us about your company?

Founded in 2016, Point2 Technology designs and manufactures ultra-low power, low-latency, and scalable interconnect mixed-signal SoC solutions. Headquartered in San Jose, California, Point2 is bringing interconnect technology to extend the reach of copper cabling and introducing an innovative technology to the market – e-Tube – that better addresses the terabit network bandwidth requirements inside the next-generation Artificial Intelligence (AI) / Machine Learning (ML) datacenters.

What problems are you solving?

Today’s datacenters rely on 400 gigabit (400G) Ethernet network devices that are transitioning to 800G. Point2’s UltraWire™ Smart Retimers are purpose-built for active electrical cables (AECs) to extend the copper cable reach required for in-rack and adjacent-rack applications. AECs built with Point2’s UltraWire™ Smart Retimers consume 40% less power and have 75% lower latency. As AI/ML applications drive datacenters to 1.6T speeds and beyond, yesterday’s copper and optical interconnects cannot scale to future terabit requirements. Optical technologies typically offer the speed and bandwidth required but are expensive and power-hungry, with temperature variability and reliability issues. Copper, the most cost-effective option, experiences significant signal loss at higher speeds, limiting cable reach. Finding the optimal solution to overcome the limitations of these two technologies, with scalable bandwidth, low power consumption, low latency, and cost comparable to copper, is a daunting challenge for datacenters. e-Tube is the answer.

What is e-Tube?

e-Tube is a scalable interconnect technology platform using RF data transmission through a plastic dielectric waveguide made of common plastic material, such as Low-Density Polyethylene (LDPE). Active RF cables (ARCs) built with e-Tube technology provide the ultra-low-latency, energy-efficient interconnect solution that is ideal for top-of-rack (ToR) to Smart NIC, ToR to ToR, and accelerator-to-accelerator connectivity in AI/ML data centers. e-Tube eliminates the fundamental limitations of copper at terabit speeds by supporting up to 7m cable reach with 50% of the cable bulk and 20% of the weight at a similar cost structure. Compared to optical cabling, e-Tube consumes 50% less power, with latency that is three orders of magnitude lower, at 50% lower cost, and without temperature variability and reliability issues. This scalable technology is the ideal replacement for copper cabling for AI/ML in-rack and adjacent-rack applications.

What application areas are your strongest?

Point2’s expertise is in mixed-signal interconnect SoC designs with the lowest power and latency, starting with our UltraWire™ Smart Retimers for 400G and 800G AECs, which typically consume 40% less power and deliver 75% lower latency compared to other DSP-based retimer solutions. Our cable partners have used these advantages to design and deploy AECs with hyperscalers and enterprises for switch-to-server and accelerator-to-accelerator connectivity, both in-rack and adjacent-rack. With AI/ML data centers transitioning to terabit speeds, development of 1.6T and 3.2T ARCs is underway to address future AI/ML workloads. e-Tube technology is also expected to expand ‘inside the box’ for chip-to-front-panel and backplane applications, with tighter integration with accelerator and switch ASIC manufacturers. This approach will deliver the higher interconnect bandwidth and port density required to keep up with future AI/ML accelerator speeds.

What keeps your customers up at night?

Over the next few years, AI/ML data centers must overcome three challenges simultaneously: 1) deliver better performance to meet soaring bandwidth demand; 2) contain costs while expanding in performance and complexity; 3) continue improving energy efficiency. It is this trifecta of challenges that keeps network operators up at night.

What is next for Point2 and the development of e-Tube?

The most important interconnect attributes are scalability, energy efficiency, low latency, and affordability. As the AI/ML workload evolves rapidly, pushing the limits on data rates with trillions of calculations processed every second, it is crucial that cabling interconnects support this rapid growth. Network requirements of 1.6T and 3.2T are approaching fast, and the industry must have the proper infrastructure to meet this demand while seamlessly adapting to new data rates. Our commitment is to continue to develop innovative interconnects that scale at the pace of AI accelerators while achieving the best-in-class energy efficiency and affordability required for mass deployment in next-generation AI/ML datacenters.

Also Read:

CEO Interview: Nikhil Balram of Mojo Vision

CEO Interview: Doug Smith of Veevx

CEO Interview: Adam Khan of Diamond Quanta


From Space-Central to Space-Time Balanced – A Perspective for Moore’s Law 2.0 and A Holistic Paradigm for Emergence
by Daniel Nenni on 10-24-2024 at 4:00 pm

A friend of SemiWiki published an article on Moore’s Law in IEEE that I think is worth reading:

IEEE Signal Processing Magazine, Vol. 41, Issue 4.

The topic of Moore’s Law is of paramount importance, reaching almost the entire field of electronics (and the semiconductor industry). For the first time in the law’s six decades, this article proposes a strategic change from “Space-Central” to “Space-Time Balanced”. It also challenges contemporary AI practices by arguing that consciousness cannot arise from the current reductionist paradigm (the first paradigm). By promoting a second paradigm of a holistic nature, the term Moore’s Law 2.0 is coined as its emblem.

The abstract of the article follows.

The history of electronics is studied from physical and evolutionary viewpoints, identifying a crisis of “space overexploitation”. This space-central practice is signified by Moore’s Law, the 1.0 version. Electronics is also examined in philosophical stand, leading to an awareness that a paradigm is formed around late 1940s. It is recognized that this paradigm is of reductionist nature and consciousness is not ready to emerge wherein. A new paradigm is suggested that diverts from the space-central practice to the foresight of putting space and time on equal footing. By better utilizing time, it offers a detour from the space crisis. Moreover, the paradigm is prepared for holism by balancing the roles of space and time. Integrating the entwined narratives of physical, evolutionary, and philosophical, an argument is made that, after decades of adventure, electronics is due for an overhaul. The two foundational pillars, space and time, ought to be used more meticulously to rectify the electronics edifice. This perspective of shifting from space-central to balanced space-time is proposed as Moore’s Law 2.0 and is embodied as second paradigm, a holistic one. The aim is to transcend reductionism to holism, paving the way for the likely emergence of consciousness.

The outline of the article is given below:

First Paradigm: Space-Central Moore’s Law 1.0

The history of electronics is reviewed. It is opined that a paradigm formed circa the late 1940s and that Moore’s Law later became its mark. The quintessence is to make the basic processing unit ever smaller and to assemble ever more of such units into a given space for higher processing capability.

Reflection: Physical, Evolutionary and Philosophical Aspects of Electronics

Electronics is examined under the scopes of physics, evolution, and philosophy. Useful insights are extracted that are used to guide the strategic discussions for future advance.

Edifice of Electronics: Transistor & Signal as Base and Space & Time as Pillars

The field of electronics is virtualized as an edifice. Its foundation is established on transistor and signal. Its two pillars are space and time. The show of signal processing is played by the transistor and signal on the stage sustained by space and time.

Overexploitation of Space: Heading into A Crisis

After decades of ruthless exploitation, the potential from space is exhausted and we are heading into a crisis. Two obstacles are firmly erected by the quantum limit and the second law of thermodynamics, impenetrable however sophisticated our engineering skills may become.

Impotence of Logic Based Computing: Consciousness-less

The current electronics edifice is established on the bedrock of symbolic logic. Consciousness is unfortunately beyond the reach of logic and thus is not expected to emerge from this paradigm. Contemporary AI technologies are unable to address this fundamental deficiency.

More Is Different: Second Paradigm of Space and Time on Equal Stand

A new paradigm is proposed where space and time are valued equally. It offers a detour from the space crisis. With the awareness of space and time as vital notions for cognition, this paradigm is prepared for something deeper than the complexity grown out of the sheer increase in number of transistors.

Next Stage of Moore’s Law: from 1.0 of Space-Central and Reductionist AI to 2.0 of Holism

The space crisis and reductionist AI are ramifications of Moore’s Law, the 1.0 version. After decades of endeavor, it is time for us to ponder electronics more deeply and question the existing paradigm’s efficacy. This is reasoned as the motive for Moore’s Law 2.0, a holistic paradigm made for circumventing the space crisis and seeking true intelligence.

The full article can be found in IEEE Xplore:

From Space-Central to Space-Time Balanced: A Perspective for Moore’s Law 2.0 and a Holistic Paradigm for Emergence [Perspectives] | IEEE Journals & Magazine | IEEE Xplore

The article can also be found here if you cannot access IEEE Xplore

(PDF) From Space-Central to Space-Time Balanced: A Perspective for Moore’s Law 2.0 and a Holistic Paradigm for Emergence [Perspectives] (researchgate.net)

Also Read:

AI Semiconductor Market

The RISC-V and Open-Source Functional Verification Challenge

Sarcina Democratizes 2.5D Package Design with Bump Pitch Transformers


AI Semiconductor Market
by Bill Jewell on 10-24-2024 at 2:00 pm

AI (artificial intelligence) is widely cited as a growth driver for the technology industry, including semiconductors. While AI is in its early stages, opinions vary on whether it will become common in the next few years. A McKinsey study from May 2024 showed 72% of organizations have adopted AI in at least one business function. These organizations saw the risks in AI as inaccuracy (63%), intellectual property infringement (52%) and cybersecurity (51%).

In the U.S., many consumers have concerns about AI. A Pew Research Center survey in 2023 revealed that the majority of people were more concerned than excited about AI (52%), 36% were equally excited and concerned, and only 10% were more excited than concerned. 60% of respondents were uncomfortable with the use of AI in healthcare. An AAA survey in March 2024 showed 66% of people in the U.S. were afraid of self-driving cars, while only 9% would trust them.

Nevertheless, AI is here to stay and will have a major effect on the global economy in the near future, with a significant impact on the semiconductor industry. A Gartner report from May 2024 estimated worldwide AI IC revenue at $54 billion in 2023, $71 billion in 2024 and $92 billion in 2025. Forecasts of the compound annual growth rate (CAGR) of AI ICs over the next several years range from 20% (MarketsandMarkets) to 41% (DataHorizzon Research).

The AI IC market has undergone explosive growth in the last few years. NVIDIA is the dominant AI IC company. The revenue of its Data Center division, which includes most of NVIDIA’s AI business, more than tripled from $15 billion in fiscal year 2023 (calendar year 2022) to $48 billion in fiscal 2024 (calendar 2023). It will likely double in fiscal 2025 (calendar 2024) to over $100 billion. We estimate NVIDIA’s AI IC revenue will be about $96 billion, most of it from the Data Center division. NVIDIA’s latest AI graphics processor will sell for between $30,000 and $40,000, according to CEO Jensen Huang.

NVIDIA’s AI revenue includes its AI processors as well as the memory included on the circuit board. SK Hynix has been NVIDIA’s primary supplier of the high bandwidth memory (HBM) used for AI. Micron Technology and Samsung are also HBM suppliers. The memory cost for NVIDIA will amount to several billion dollars in 2024.

AMD, the next largest supplier, projects $4.5 billion in AI IC revenue in 2024. Intel expects $500 million in AI IC revenue in 2024. AIMultiple Research lists eighteen additional companies which have announced AI ICs. These include cloud service providers (AWS, Alphabet, IBM and Alibaba), mobile AI IC providers (Apple, Huawei, MediaTek, Qualcomm and Samsung) and startups (SambaNova Systems, Cerebras Systems, Graphcore, Groq and Mythic).

We at Semiconductor Intelligence estimate the global AI IC market in 2024 at $110 billion. NVIDIA dominates with $96 billion in revenue and 87% market share. AMD is $4.5 billion, and Intel is $0.5 billion. Apple is estimated at $3 billion based on the AI processors in its iPhone 16 models. Samsung’s AI-enabled Galaxy S24 should generate about $1 billion in AI IC revenue for its primary supplier, Qualcomm. Other companies are just getting started in the market and are estimated at a total of $5 billion.

The $110 billion AI IC market in 2024 will account for 18% of the total semiconductor market based on the WSTS June 2024 forecast. Using a relatively conservative projection of a 20% CAGR for AI ICs over the next five years results in a $273 billion market in 2029. We project the long-term growth rate of the semiconductor market at a 7% CAGR, reaching $856 billion in 2029. AI ICs would then account for 31.9% of the total semiconductor market. AI ICs will not be totally additive to the semiconductor market. AI ICs will replace many of the current processors used in data centers, PCs, smartphones and automobiles.
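
As a quick sanity check on those numbers, here is the arithmetic in a few lines of Python (the roughly $611 billion 2024 total market is implied by the 18% share rather than stated directly):

```python
# Quick check of the projections above (illustrative arithmetic only).
ai_2024 = 110e9                      # estimated 2024 AI IC market, USD
total_2024 = ai_2024 / 0.18          # implied 2024 total market, ~$611B

ai_2029 = ai_2024 * 1.20 ** 5        # 20% CAGR over five years
total_2029 = total_2024 * 1.07 ** 5  # 7% CAGR over five years

print(f"AI ICs in 2029:       ${ai_2029 / 1e9:.0f}B")    # ~$274B
print(f"Total market in 2029: ${total_2029 / 1e9:.0f}B") # ~$857B
print(f"AI IC share in 2029:  {ai_2029 / total_2029:.1%}")  # ~31.9%
```

The results land on the article’s figures to within rounding: roughly $273-274 billion for AI ICs, about $856-857 billion overall, and a 31.9% share.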

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Asia Driving Electronics Growth

Robust Semiconductor Market in 2024

Semiconductor CapEx Down in 2024, Up Strongly in 2025


The RISC-V and Open-Source Functional Verification Challenge
by Daniel Nenni on 10-24-2024 at 10:00 am

Most of the RISC-V action at the end of June was at the RISC-V Summit Europe, but not all. In fact, a group of well-informed and opinionated experts took over the Pavilion stage at the Design Automation Conference to discuss functional verification challenges for RISC-V and open-source IP.

Ron Wilson, technology journalist and Contributing Editor of the Ojo-Yoshida Report, moderated the panel with Jean-Marie Brunet, Vice President and General Manager of Hardware-Assisted Verification at Siemens; Ty Garibay, President of Condor Computing; Darren Jones, Distinguished Engineer and Solution Architect with Andes Technology; and Josh Scheid, Head of Design Verification at Ventana Microsystems.

Their discussion is broken into a three-part blog post series starting with selecting a RISC-V IP block from a third-party vendor and investigating its functional verification process.

Wilson: Assuming a designer is going to use a CPU core in an SoC and not modify the RTL or add custom instructions, is there any difference in the functional verification process for licensing a core from Arm or licensing a RISC-V core from an established vendor? What about downloading an open-source core that seems to work? Do they have the same verification flow or are there differences?

Scheid: A designer will use the same selection criteria and have the same integration experience for RISC-V as Arm and other instruction set architectures. The RISC-V software support is the challenge because everything is open source, mostly coming from upstream and open-source areas, not necessarily proprietary tool chains from the vendors.

Ventana uses standard open specification protocols on how to integrate with other products. Verification IP support is available from multiple vendors to make this experience similar to others.

Garibay: The big difference is using a core from true open source. The expectation for that core would be different. In general, a designer paying for IP can assume some amount of energy and effort put into the verification of the CPU core itself that enables a less painful delivery of the IP and integrated at the SoC level.

I would have a different expectation between any true open-source IP versus something licensed from an established design house. An ISA is an ISA. What matters is the design team and the company that stands behind it more than the ISA itself relative to the user experience.

Jones: I agree. It’s less about RISC-V versus Arm and more about the quality of the licensed IP. In fact, I would remind anyone who’s building an SoC that includes some number of IP, CPU, PCIe, USB: they are paying for verification. A designer can go out and find free RTL for almost anything by looking on Google.

When you base a company’s product (and potentially the company’s future) on a key IP block, it must come from a company that can stand behind it. That means verification as well as support.

Acquiring IP can be separated into three options: Arm versus RISC-V versus something found on Google. Something found on Google is risky. Between Arm and the various RISC-V vendors, it’s more about the company’s reputation and good design flow. It’s less about RISC-V versus Arm.

Wilson: It’s almost a matter of buying a partner who’s been through this versus you’re on your own?

Garibay: Absolutely. You certainly want to have a vendor that has a partnership attitude.

Brunet: Yes. It’s all about verification. As a provider of hardware-assisted verification, RISC-V is a dream come true for us. Arm has large compute subsystems that provide a complex, fully verified environment, broad and sophisticated, and paid for by users. RISC-V-based designers are going to have to verify much more of the interaction between the netlist, the hardware and the software stack.

Software stack verification is the big challenge, as is scaling the RTL of the device. That’s common for designers as soon as they do big chips. Verification is the biggest bottleneck, and the size of the RISC-V software stack and software ecosystem is still not at the same level as Arm’s. That puts even more pressure on the ability to verify not only the processor and the capacity of the processing unit, but also its integration with IP such as PCIe, CXL and so on. That’s a far greater verification challenge.

Wilson: RISC-V has respected vendors and so many extensions that sometimes are not so subtly different. Does that complicate the verification problem, or does it simplify it by narrowing the scope?

Scheid: The number of extensions is the wrong thing to focus on. Arm uses the term features and has many dozens of those within its Arm versions. The number of ratified RISC-V extensions is around 50. It gets scary if there’s a big list. Going forward, designers are going to focus more on what RISC-V is working on in terms of profiles. We’re talking about advanced processors at one level, microcontrollers at another, and then a time-ordered series of improvements in terms of which extensions are supported in each profile. That’s going to be easier to focus on when selecting IP and the support for a profile, while still allowing optionality between different implementations. It won’t be as confusing as a list of extensions.

Wilson: Do you see open-source verification IP converging around those?

Scheid: The value of having fewer combinations supported is going to circle around itself. Everyone involved from the implementers to the verification IP providers to the software system aren’t going to look at that combinatorial explosion favorably. Some who have tight constraints will want to choose arbitrary combinations. The vast majority are going to focus around working with the rest of the ecosystem to focus on those profiles and focus on that.

Garibay: The specification of the profiles is a big leap forward for the RISC-V community to allow for the establishment of software compatibility baselines such that there is at least the promise of vendor interoperability for full stack software. Designers should have a baseline and then accept it as customization or as optional sub-architectures.

The fun part about RISC-V right now is not the number of different features being added to the specification; it is the pace. Over the last two years, and probably for the next year, RISC-V has been rapidly adding features that are needed, the right set of features to expand the viability of the architecture for high-performance computing. We need them, and they have dramatically inflated the design space and the verification space. Really, we’re just getting to the same point where Arm and x86 are now and probably have been for years. It’s just a dramatic rate of change for RISC-V, coming from a much simpler base.

Jones: I’m not sure it’s the right question. If I’m the SoC designer and have an option, I can take a 32-bit floating point or a 64-bit floating point. If I go with RISC-V and it’s 64-bit, it must have 32-bit. By having extensions, RISC-V benefits from the history of x86, MIPS, SPARC and Arm.

What I mean to say is, if I’m the SoC designer, I don’t have to verify the CPU. My CPU vendor has to verify it. That’s fair and I will choose a high-quality vendor. When I talk about verification on my SoC design, I’m talking about connecting the bus properly, assigning address spaces correctly and routing properly throughout my SoC design. The SoC designer has to verify that the software running on the CPU is correct. There again, I benefit from the standard that is RISC-V.

When I started out, MIPS had a problem because each MIPS vendor had a different multiply accumulate instruction (MAC) because MIPS didn’t have a MAC. Software vendors were charging each MIPS vendor $100,000 to port their compiler to it. The first vendor got his money’s worth. Everybody else got that same compiler for $100,000. MIPS figured this out, standardized, and everyone was happy. RISC-V avoids those kinds of problems.

Wilson: Do you see extensions as an advantage?

Jones: Yes, because RISC-V can be implemented as a small microcontroller that omits the features a small core does not require. Arm does this too, though it doesn’t call it an extension. An Arm core is available without a floating-point unit. I would go so far as to say that the number of pages in the Arm ISA and the number of pages in the various extensions of the RISC-V ISA are probably similar. RISC-V’s may be shorter.

Garibay: Aggressive standardization gets ahead of the problem we saw in the past with other architectures, where designers would implement the same function five different ways and then come back with a standard, leaving four of the designers upset. It’s great to see the RISC-V organization leading in this way and paving the road. The challenge is to make sure we fill all the holes.

Brunet: I don’t see software stack interoperability happening for RISC-V yet. It’s a complex challenge and probably the main reason why it is not taking off. A few companies that have a chip that is entirely RISC-V are using it. Most designs are large, complete compute subsystems that are mainly Arm, with some RISC-V cores handling well-defined functionality. Few are completely RISC-V. Is the architecture the main reason, or is it the software stack?

Jones: I’ve done a totally RISC-V chip myself. I also know a number of AI chips are completely RISC-V. The difference for these successes is that, in general, those software stacks are not intended to be exposed to the end user. They’re almost 100% proprietary. Given that, a designer can manage the novelty of the RISC-V software stack. Exporting that as an ecosystem to users is the challenge that RISC-V has and what the profiles are intended to enable going forward. We’re at the beginning.

End of Part I

Also Read:

Prioritize Short Isolation for Faster SoC Verification

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs

SystemVerilog Functional Coverage for Real Datatypes


Sarcina Democratizes 2.5D Package Design with Bump Pitch Transformers
by Mike Gianfagna on 10-24-2024 at 6:00 am

2.5D package design is rapidly finding its stride in a wide variety of applications, including AI. While there are still many challenges to its widespread adoption, the chiplet approach is becoming more popular compared to monolithic design. However, the market required to create a chiplet ecosystem is still under development. As package design complexity continues to rise, system partitioning, verification, and power management remain critical and challenging. Additionally, package design presents secondary problems, including thermal and mechanical stress constraints, which, combined with the sheer cost of the package, make the design of advanced packages very challenging.

Advanced packaging is the domain of Sarcina Technology. Recent announcements from the company have illustrated how these challenges can be overcome. Let’s see how Sarcina democratizes 2.5D package design with something called Bump Pitch Transformers.

What is a Bump Pitch Transformer?

If you are a sci-fi movie buff, the term “transformer” might bring to mind something interesting, but it’s not relevant to the innovation discussed here for 2.5D package design. What Sarcina is addressing is the cost and complexity of the package for 2.5D designs.

Current advanced 2.5D packaging uses a substrate to transpose a chip’s microbump pitch from 40-50 micrometers to the package’s 130 micrometer bump pitch. This is typically done with a silicon TSV (through-silicon via) interposer. While this approach is effective, these substrates are very expensive, in short supply, and complex to design, resulting in lead-time and cost challenges for many advanced designs.
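
To get a feel for why that transposition is needed, consider the bump densities involved. The sketch below is back-of-the-envelope arithmetic assuming an idealized square bump grid (illustrative only, not Sarcina’s actual bump maps):

```python
# Illustrative bump-density arithmetic for the pitch transposition
# described above. Assumes an idealized square bump grid; real bump
# maps are not this regular.
def bumps_per_mm2(pitch_um: float) -> float:
    """Bumps per square millimeter for a square grid at a given pitch."""
    return (1000.0 / pitch_um) ** 2

for pitch_um in (40, 50, 130):
    print(f"{pitch_um:>3} um pitch: ~{bumps_per_mm2(pitch_um):.0f} bumps/mm^2")

# 40 um -> ~625/mm^2, 50 um -> ~400/mm^2, 130 um -> ~59/mm^2:
# the interposer has to fan a roughly 7-10x denser die-side array
# out to the coarser package-side array.
```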

Sarcina’s Bump Pitch Transformer (BPT) approach uses silicon bridge technology, replacing silicon TSV interposers with more cost-effective re-distribution layers (RDL). This architecture is ideal for homogeneous and heterogeneous chiplet integration, targeting high-performance computing (HPC) devices for AI, data center, microprocessor, and networking applications.

By delivering lower costs and faster design times, Sarcina aims to democratize 2.5D package design, making it more readily available to companies to solve a wider range of problems.

Details and Applications

Sarcina’s BPT is effectively a wafer fan-out RDL technology which, thanks to its maturity, delivers lower costs and shorter lead times. This will help system designers optimize AI for new, lower-cost applications, effectively expanding the market. The company is currently engaging customers with two Bump Pitch Transformer options.

The first option (Option 1 above) creates a silicon bridge in high-density applications to connect the I/Os of adjacent dice. Tall copper pillars with a pitch of about 130 micrometers are grown underneath the RDLs. Because the majority of the I/O interconnections are between adjacent dice and have been connected by the silicon bridge, fewer I/Os need to be routed to the next level of interconnect with 130 micrometer bump pitch, which is achieved with tall copper-pillar bumping onto a standard substrate for flip-chip assembly.

The RDLs also merge power and ground micro-bumps at 40-50 micrometer bump pitch, reducing the number of power and ground bumps to tall copper pillars suitable for 130 micrometer bump pitch density. These bumps may also be assembled onto a standard organic or laminate substrate for flip-chip assembly.

The second option (Option 2 above) is a “chip last” service that handles die-to-die interconnects with less interconnection routing density. As the routing density reduces, the die-to-die interconnects no longer require a silicon bridge to pack so many traces onto a small area. This allows the removal of the silicon bridge, using only the RDL traces as the die-to-die interconnects between adjacent silicon dice. With the removal of the silicon bridge from the bump pitch transformer, the interposer cost in this option is lowered compared to the first option.

Sarcina’s BPT service offering includes BPT interposer design, O/S test pattern insertion, fabrication, BPT wafer sort, package substrate design, power/signal integrity, thermal system simulation, and substrate fabrication. A complete WIPO (wafer in, package out) engagement also covers wafer sort, package assembly, final test, qualification, and production services.

To Learn More

You can see the Bump Pitch Transformer announcements from Sarcina here and here. You can also learn more about this unique company on SemiWiki here. And you can get a broad view of Sarcina’s advanced packaging design services here. In a past life, I was involved in several advanced 2.5D package designs. I can tell you it’s nowhere near as easy as it may look. You really need a partner like Sarcina to tip the odds in your favor. I highly recommend you take a look.

Also Read:

How Sarcina Revolutionizes Advanced Packaging #61DAC

Sarcina Teams with Keysight to Deliver Advanced Packages

How Sarcina Technology Makes Advanced Semiconductor Package Design Easier