
sureCore Brings 16nm FinFET to Mainstream Use With a New Memory Compiler
by Mike Gianfagna on 04-04-2024 at 10:00 am


Semiconductor processes can have a rather long and interesting life cycle. At first, a new process defines the leading edge. This is cost-no-object territory, where performance is king. The process is new, the equipment to make it is expensive, and its use is reserved for those with a market (and budget) big enough to justify it. As the technology matures, yields go up, costs come down, access gets easier, and more mainstream applications start to migrate toward the node as newer technologies take the leading-edge spot. I clearly remember when 16nm FinFET was that leading-edge, much-sought-after technology. That is now the past, and 16nm FinFET is finding application in mainstream products, but there is a catch. As I mentioned, 16nm FinFET was all about speed, often at the expense of power, while mainstream applications can be very power sensitive. The good news is that sureCore is fixing the problem. Read on to see how sureCore brings 16nm FinFET to mainstream use with a new memory compiler.

The 16nm Problem

Applications such as wearables, IoT and medical devices can be a good match for what 16nm FinFET has to offer. The combination of performance, density and yield offered by the technology can be quite appealing. Also, cutting the operating voltage generates substantial power savings while still delivering the needed performance. The technology has been in production for over ten years, so the process is quite stable and yields are high. The fabs involved are largely depreciated as well. All this brings the cost of 16nm FinFET within reach of lower-cost, power-sensitive devices.
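To put a rough number on the voltage point: to first order, CMOS dynamic power scales with the square of the supply voltage (P ≈ αCV²f), so even a modest supply reduction pays off. Here is a quick back-of-envelope calculation with purely illustrative numbers, not sureCore or foundry figures:

```python
# First-order CMOS dynamic (switching) power: P = alpha * C * V^2 * f
# All numbers below are illustrative only, not sureCore or foundry data.

def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Switching power of a CMOS block, to first order."""
    return alpha * c_farads * v_volts**2 * f_hertz

nominal = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.80, f_hertz=500e6)
reduced = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.64, f_hertz=500e6)  # 20% lower VDD

print(f"dynamic power saving: {100 * (1 - reduced / nominal):.0f}%")  # ~36%
```

A 20% supply reduction cuts switching power by roughly a third, which is exactly the kind of headroom power-sensitive devices are after.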

Can low-power applications currently implemented in 28nm or 22nm bulk or FD-SOI cut ASPs and deliver better features and lower power by moving to 16nm? Things seem to line up well, except for the key point made earlier: 16nm FinFET was focused on performance first, so much of the IP available for the node was built with performance in mind. Power optimization was not central to its design, so there is a mismatch with newer, power-sensitive applications.

The sureCore Solution

sureCore is a company that tends to change the rules and open new markets. A recent post discussed how the company is enabling AI with low power memory IP. sureCore is even working with Agile Analog to enable quantum computing. So, opening 16nm FinFET to a broad range of applications is certainly in the sureCore wheelhouse.

Recently, the company announced the availability of its PowerMiser ultra-low dynamic power memory compiler in 16nm FinFET. This effectively opens new opportunities for the technology by allowing demanding power budgets to be met.

Paul Wells, sureCore’s CEO, explains the details quite well:

“FinFET was developed to address the increasingly poor leakage characteristics of bulk nodes. In addition, the key driver for the mobile sector was ever greater performance to deliver new features and a better user experience. Power consumption was not deemed a significant issue, as both the radio and the display were the dominant factors in determining battery life. This, in addition to the relatively large form factor of a mobile phone, meant that the batteries had capacities in excess of 3,000-4,000mAh.”

He went on to highlight sureCore’s strategy:

“However, designers of power sensitive applications such as wearables and medical devices with much more constrained form factors and hence smaller batteries need a range of power optimised IP that can exploit the power advantages of FinFET whilst being much less concerned about performance. This has meant a demand for memory solutions that are specifically tailored to deliver much reduced power consumption. By providing the PowerMiser SRAM IP, sureCore is enabling the shift to mature FinFET processes for low power applications and is thus helping to provide clear differentiation for such products based on both cost and battery life. By doing so, the all-important competitive advantage over rivals may be realised.”

You can read the full text of the press release here. And that’s how sureCore brings 16nm FinFET to mainstream use with a new memory compiler.



Are Agentic Workflows the Next Big Thing in AI?
by Bernard Murphy on 04-04-2024 at 6:00 am


AI continues to be a fast-moving space and we’re always looking for the next big thing. There’s a lot of buzz now around something called agentic workflows – ugly name but a good idea. LLMs had a good run as the state-of-the-AI-art, but evidence is building that the foundation model behind LLMs alone has limitations, both in theory and in practical applications. Simply building bigger and bigger models (over a trillion parameters last time I looked) may not deliver any breakthroughs beyond excess cost and power consumption. We need new ideas, and agentic workflows might be an answer.

Image courtesy Mike McKenzie

Limits on transformers/LLMs

First I should acknowledge a Quanta article that started me down this path. A recent paper looked at theoretical limits on transformers based on complexity analysis. The default use model starts with a prompt to the LLM which should then return the result you want. Viewing the transformer as a compute machine, the authors prove that the range of problems that can be addressed is quite limited for these or any comparable model architectures.

A later paper generalizes their work to consider chain of thought architectures, in which reasoning proceeds in a chain of steps. The prompt suggests breaking the task down into a series of simpler intermediate goals which are demonstrated in “show your work” results. The authors prove complexity limits increase slightly with a slowly growing number of steps (with respect to the prompt size), more quickly with linear growth in steps, and faster still with polynomial growth. In the last of these cases they prove the class of problems that can be solved is exactly those solvable in polynomial time.

Complexity-based proofs might seem too abstract to be important. After all, the travelling salesman problem is known to be NP-hard, yet chip design routinely depends on heuristic solutions to such problems and works very well. However, limitations in practical applications of LLMs to math reasoning (see my earlier blog) hint that these theoretical analyses may not be too far off-base. Accuracy certainly grows with more intermediate steps in real chain of thought analyses. Time complexity in running multiple steps also grows, and per the theory it will grow at corresponding rates, suggesting that while higher accuracy may be possible, the price is likely to be longer run times.

Agentic flows

The name derives from the use of “agents” in a flow. There’s a nice description of the concepts in a YouTube video by Andrew Ng, who contrasts the one-shot LLM approach (you provide a prompt, it provides an answer in one shot) with the agentic approach, which looks more like the way a human would approach a task: develop a plan of attack, do some research, write a first pass, consider what areas might need to be improved (perhaps even have another expert review the draft), and iterate until satisfied.

Agentic flows in my understanding provide a framework to generalize chain of thought reasoning. At a first level, following the Andrew Ng video, in a prompt you might ask a coder agent LLM to write a piece of code (step 1), and in the same prompt ask it to review the code it generated for possible errors (step 2). If it finds errors, it can refine the code and you could imagine this process continuing through multiple stages of self-refinement. A next step would be to use a second agent to test the code against some series of tests it might generate based on a specification. Together these steps are called “Reflection” for obvious reasons.
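To make the Reflection idea concrete, here is a minimal sketch of such a loop. The `generate`, `review`, and `run_tests` callables are hypothetical placeholders standing in for prompts to one or more LLM agents; the names do not come from Andrew Ng’s talk or any particular framework.

```python
def reflection_loop(spec, generate, review, run_tests, max_rounds=3):
    """Minimal agentic "Reflection" sketch: draft, critique, refine, test.

    generate(spec, feedback) -> code string     (coder agent)
    review(spec, code)       -> list of issues  (reviewer agent)
    run_tests(spec, code)    -> list of failures (tester agent)
    All three are hypothetical LLM-backed callables supplied by the caller.
    """
    feedback = []
    code = None
    for _ in range(max_rounds):
        code = generate(spec, feedback)       # step 1: write (or rewrite) the code
        issues = review(spec, code)           # step 2: self-review the draft
        failures = run_tests(spec, code)      # step 3: a second agent tests against the spec
        feedback = issues + failures
        if not feedback:                      # nothing left to fix, stop iterating
            break
    return code, feedback
```

The point is not the particular helpers but the shape of the loop: each pass feeds the previous critique back into the next draft, which is how the approach generalizes chain of thought.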

There are additional components in the flow that Andrew suggests: Tool Use, Planning and Multi-Agent Collaboration. However, the Reflection part is most interesting to me.

What does an Agentic flow buy you?

Agentic flows do not fix the time complexity problem; instead, they suggest an architectural concept for extending accuracy on complex problems through a system of collaborating agents. You could imagine this being very flexible, and there are some compelling demonstrations. At the same time, Andrew notes we will have to get used to agentic workflows taking minutes or even hours to return a useful result.

A suggestion

I see long run times as an interesting human engineering challenge. We’re OK waiting seconds to get an OK result (like a web search). Waiting possibly hours for anything less than a very good result would be a tough sell.

I get that VCs and the ventures they fund are aiming for moonshots – artificial general intelligence (AGI) as the only thing that might attract enough attention in a white-hot AI market. I wish them well, especially in the intermediate discoveries they make along the way. The big goal I suspect is still a long way off.

However, the agentic concept might deliver practical and near-term value if we are prepared to allow expert human agents in the flow. Let the LLM do the hard work of getting to a nearby goal, and perhaps suggest a few alternatives for paths it might follow next. This should take minutes at most. An expert human agent then directs the LLM to follow one of those paths. Repeat as necessary.

I’m thinking particularly of verification debug. In the Innovation in Verification series we’ve covered a few research papers on fault localization, all useful but still challenged to accurately locate a root cause. An agentic workflow alternating between an LLM and an expert human agent might help push accurate localization further, and it could progress as quickly as the expert can decide between alternatives.
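A minimal sketch of that alternation might look like the following, assuming hypothetical `propose_suspects`, `investigate`, and `ask_human` helpers (an LLM that ranks candidate root causes, an LLM that gathers evidence for one of them, and the expert picking the next path). This is the shape of the loop, not a tool.

```python
def guided_fault_localization(failure_report, propose_suspects, investigate,
                              ask_human, max_rounds=5):
    """Alternate LLM proposals with expert human choices until a root cause is found.

    propose_suspects(report, history) -> list of candidate root causes (LLM agent)
    investigate(candidate)            -> dict of evidence for that candidate (LLM agent)
    ask_human(candidates)             -> the candidate the expert wants pursued next
    All three callables are hypothetical placeholders.
    """
    history = []
    for _ in range(max_rounds):
        candidates = propose_suspects(failure_report, history)   # LLM does the heavy lifting
        if not candidates:
            break
        choice = ask_human(candidates)                           # expert picks the next path
        evidence = investigate(choice)                           # LLM digs into that path
        history.append((choice, evidence))
        if evidence.get("root_cause_confirmed"):                 # stop once the evidence is conclusive
            return choice, history
    return None, history
```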

Any thoughts?



Navigating the Complexities of Software Asset Management in Modern Enterprises
by Kalar Rajendiran on 04-03-2024 at 10:00 am


In today’s digital age, software has become the backbone of modern enterprises, powering critical operations, driving innovation, and facilitating collaboration. However, with the proliferation of software applications and the complexity of licensing models, organizations are facing significant challenges in managing their software assets effectively.

Altair has published a comprehensive guide that explores the nuances of Software Asset Management (SAM) and uncovers strategies, challenges, and solutions for optimizing software usage, reducing costs, and ensuring compliance in modern enterprises, with a particular focus on the acute challenges in Computer-Aided Design (CAD), Computer-Aided Engineering (CAE) and Electronic Design Automation (EDA) environments. You can access the entire guide here. What follows is a synopsis of the importance of SAM, the challenges faced, and best practices for successful SAM initiatives.

Understanding Software Asset Management

Software Asset Management (SAM) encompasses the processes, policies, and tools used by organizations to manage and optimize their software assets throughout the software lifecycle. From procurement and deployment to usage tracking and retirement, SAM aims to maximize the value of software investments while minimizing risks and costs associated with non-compliance and underutilization.

The Growing Importance of SAM

Enterprise software spending is on the rise, driven by the increasing reliance on digital technologies for business operations and innovation. According to industry reports, enterprise software spending is expected to reach unprecedented levels in the coming years, highlighting the critical importance of effective SAM practices. In today’s competitive landscape, organizations cannot afford to overlook the strategic value of software asset management in driving efficiency, reducing costs, and mitigating risks.

Challenges in Software Asset Management

While SAM presents challenges across various industries, CAD, CAE and EDA environments face unique hurdles due to the specialized nature of their software tools and computing requirements.

These environments rely on specialized software tools tailored for engineering design, simulation, and analysis. These tools often come with complex licensing models and require high levels of expertise for effective management. Engineering simulations and analyses often require significant computational resources, including high-performance computing (HPC) clusters and specialized hardware accelerators. Managing software licenses across distributed computing environments adds another layer of complexity to SAM efforts. In addition, CAD and CAE environments deal with multidisciplinary engineering problems, involving various software tools and domains. Coordinating software usage and licenses across different engineering teams with diverse requirements poses a significant challenge for SAM initiatives.

Best Practices for Successful SAM Initiatives

To overcome these challenges and maximize the benefits, organizations can adopt the following best practices.

  • Establish clear goals and objectives aligned with business objectives and engineering requirements to guide SAM initiatives effectively.
  • Gain support from senior leadership to prioritize SAM efforts, allocate resources, and overcome organizational barriers.
  • Foster collaboration between IT, engineering, procurement, and finance teams to ensure alignment of SAM efforts with business needs and technical requirements.
  • Choose SAM tools specifically designed for CAD and CAE environments, capable of managing specialized software licenses and integrating with HPC workload management systems.

Partnering with Altair

Altair, a leading provider of engineering and HPC solutions, offers specialized SAM solutions tailored for CAD, CAE and EDA environments. With solutions like Altair® Software Asset Optimization (SAO) and Altair® Monitor™, organizations can leverage advanced analytics, predictive modeling, and visualization tools to optimize their software assets effectively. Altair’s industry expertise, proven track record, and commitment to innovation make it a trusted partner for organizations looking to streamline their SAM initiatives in CAD and CAE environments.

Summary

Software Asset Management (SAM) plays a crucial role in optimizing software usage, reducing costs, and ensuring compliance in modern enterprises, especially in engineering and design environments. By understanding the unique challenges and adopting best practices tailored for these environments, organizations can navigate the complexities of SAM with confidence and achieve success in their software asset optimization endeavors. With Altair as a trusted partner, organizations can unlock significant value, enhance productivity, and drive sustainable growth in the digital age.

To access the eGuide, please visit Make the Most of Software License Spending.

Also Read:

2024 Outlook with Jim Cantele of Altair

Altair’s Jim Cantele Predicts the Future of Chip Design

How to Enable High-Performance VLSI Engineering Environments



yieldHUB Improves Semiconductor Product Quality for All
by Mike Gianfagna on 04-03-2024 at 6:00 am


We all know that building advanced semiconductors is a team sport. Many design parameters and processes must come together in a predictable, accurate and well-orchestrated way to achieve success. The players are diverse and cover the globe. Assembling all the information required to optimize the project in one place, with the right level of analysis and insight, is a particularly vexing problem. My first job out of college was building such a system for internal use at the RCA Solid State Division (RIP). If you happen to have a copy of the 15th Design Automation Conference Proceedings handy, thumb to page 117 to check out my early efforts. That was back in the dawn of time, when infrastructure was thin and automation was non-existent. This early work gave me an appreciation for the tools that followed over the decades. Today, optimizing designs is still difficult, but there are some excellent systems that cut the problem down to size. Let’s examine how yieldHUB improves semiconductor product quality for all.

What’s The Problem?

Let me begin by framing the problem. It’s well known that many companies all over the world are involved in the design and manufacture of advanced chips. Design, verification, wafer fab, packaging, final test, qualification, and in-system validation and bring-up are all highly complex processes that involve many tools and companies around the world.

Ensuring harmonious operation between all these entities requires, first and foremost, visibility into the data produced by each step. Here is the first problem: every source of data has its own format and access method. It would be nice if they were all the same, but they are not. So there is a many-to-many challenge in assembling all the needed data in one place that is reliable and accurate: the single source of truth that is the holy grail for many online systems.

Once this is achieved, the next problem is what to do with all the data. What types of analyses are needed to turn data into useful information? The answer to that question depends on what you’re trying to monitor, debug or optimize. Many, many ways of looking at the data to find the needed insight must be supported. And we’re talking about massive amounts of information, making the whole process very challenging.

At the end of the day, product management teams need accessibility (all data in one place), analysis (a holistic view of everything important), coordination (everyone needs to be using the same information around the world), insight (the ability to spot the right trends), and support (to help use what’s available and quickly add something new when it’s needed).

Having been part of an internal team trying to solve this problem, I can tell you it’s too big for any internal team to address adequately. The cost of doing it right is too high for any one company.

An Elegant Solution

For almost twenty years, yieldHUB has been helping people working on yield improvement to enjoy their jobs and be more efficient. The company’s yield management platform and support organization have a worldwide footprint, both on-premise and in the cloud. Its data model is unique and hugely scalable, allowing customers to analyze data without having to download it first. The result is the ability to analyze hundreds of wafers worth of data in seconds. This is game-changing technology.

Kevin Robinson

I recently had the opportunity to get a live tour of some of yieldHUB’s capabilities with Kevin Robinson, yieldHUB’s VP of operations. Based in the UK, Kevin has been building capabilities at yieldHUB to help customers conquer yield challenges for over ten years.

Kevin began by describing the breadth and depth of yieldHUB’s central data server system. Data is automatically consolidated from the foundry (WAT, wafer test and final test, as well as in-line data from the fab line), along with PCB data and module data for actual products shipping to end customers. This is supplemented with genealogy data and manufacturing execution/ERP data to create a complete view of the enterprise.

Using thin client technology, secure access to all this information for targeted analysis is possible either behind the firewall or in the cloud. A knowledge base allows information and comments to be attached to any part of the process and shared efficiently. For example, lot-level or product-level information in the knowledge base is automatically seen by all those working in that area of the enterprise. This saves a lot of time and dramatically improves collaboration.

Kevin next examined yield data for a half million mixed-signal parts with embedded memory. This represents about 100GB of data. Each part is uniquely identified in the system, so there are many ways to explore the information. Kevin was able to quickly display this data and begin to identify possible trends. One example is shown below, where color-coded bar charts present failure mode distributions for various lots.

Failure mode distribution
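As an aside, the per-lot roll-up behind bar charts like these is easy to picture with a small pandas sketch. The column and bin names below are hypothetical, and this is only an illustration of the aggregation, not yieldHUB’s data model or API:

```python
import pandas as pd

# Hypothetical flattened test results: one row per tested part.
# Column and bin names are illustrative, not yieldHUB's schema.
results = pd.DataFrame({
    "lot_id": ["LOT01", "LOT01", "LOT01", "LOT02", "LOT02", "LOT02"],
    "bin":    ["PASS", "FAIL_IDDQ", "FAIL_MEM", "PASS", "PASS", "FAIL_MEM"],
})

# Failure-mode distribution per lot (the color-coded bar chart view).
fails = results[results["bin"] != "PASS"]
distribution = fails.groupby(["lot_id", "bin"]).size().unstack(fill_value=0)
print(distribution)

# Lot-level yield, useful for spotting the lowest-yielding lot to drill into.
yield_by_lot = results.groupby("lot_id")["bin"].apply(lambda b: (b == "PASS").mean())
print(yield_by_lot.sort_values())
```

The value of a platform like yieldHUB is doing this kind of roll-up across hundreds of wafers and many data sources in seconds, without downloading anything first.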

Kevin then began to drill down into this data with many views, generated in real time. In the interest of time, I’ll show one example – analyzing the lowest-yielding lot. Focusing on the highest-failing test for that lot, he examined the behavior across test sites. That created a rich view of data relationships, as shown in the figure below.

Test site behavior
Wafer map view

The data can also be displayed on a per-lot, per-wafer basis to create wafer maps. An example is shown on the right. Kevin provided many more examples of how to explore and analyze this data and other large datasets.

From personal experience, I can testify that the easy-to-use analysis capabilities of this system hide a massive amount of implementation detail. Accurately acquiring data from so many different sources and making the resultant massive data sets accessible and easy to visualize and analyze is no small feat.

The diagram below summarizes the big picture view of yieldHUB and its impact on the enterprise.

YieldHUB functions and impact

To Learn More

If you care about yield and product quality, you need a system like this. If you’re considering building it yourself, let me emphatically suggest you take a different approach. It will take a lot longer than you think to implement, and the ever-changing supply chain dynamics, equipment profiles and analysis requirements will quickly consume way too much time and effort.

I’ve provided a small window into a very detailed demonstration. The good news is that you can set up your own tour of yieldHUB here. You can also view a recent webinar on collaboration and yield improvement here. And that’s the best way to learn how yieldHUB improves semiconductor product quality for all.



Scaling Data Center Infrastructure for the Terabit Era
by Kalar Rajendiran on 04-02-2024 at 10:00 am


Earlier this month, SemiWiki wrote about Synopsys’s complete 1.6T Ethernet IP solution to drive AI and hyperscale data center chips. A technology’s success is all about when, where and how it gets adopted within the ecosystem. In the high-speed Ethernet ecosystem, the swift adoption of 1.6T Ethernet relies on key roles and coordinated actions. Technology developers, such as semiconductor companies and IP providers, drive innovation in Ethernet technologies, while standardization bodies like IEEE set crucial standards for interoperability. Collaboration among industry players ensures seamless integration of components, and interoperability testing validates compatibility. Infrastructure upgrades are essential to support higher speeds, requiring investments in hardware and networking components.

IEEE hasn’t yet ratified a standard based on 224G SerDes for 1.6 Terabit Ethernet. High-speed Ethernet is more than just a SerDes, a PCS and a PMA spec; there are a lot of different pieces to ratifying an Ethernet standard. Will 1.6T Ethernet get standardized soon? How will the standard get rolled out into the industry? How does the technology evolve to handle latency requirements and deliver the massive throughput demands while still keeping power at manageable levels? How do SoC designers prepare to support 1.6T Ethernet? And what will data center technology look like ten years from now?

These are the questions that a thought leadership webinar sponsored by Synopsys explored.

The session was hosted by Karl Freund, founder and principal analyst at Cambrian-AI Research. The panelists included John Swanson, HPC IP product line manager at Synopsys; Kent Lusted, principal engineer and Ethernet PHY standards advisor at Intel; Steve Alleston, director of business development at OpenLight Photonics; and John Calvin, senior wireline IP planner at Keysight Technologies. Those involved in planning for, implementing and supporting high-speed Ethernet solutions will find the webinar very informative.

The following is a synthesis of the key points from the webinar.

At the heart of the matter lies the standardization process. While IEEE has yet to ratify a standard based on 224G SerDes for 1.6T Ethernet, the urgency for adoption is palpable. With the rise of artificial intelligence (AI) and machine learning (ML) applications driving the demand for enhanced data processing capabilities, the industry cannot afford to wait. The first wave of adoption is expected to emanate from data centers housing AI processors, where the need for massive data training and learning capabilities is paramount. Subsequently, switch providers like Broadcom and Marvell are poised to facilitate the second wave of adoption by furnishing the infrastructure necessary to support the burgeoning demands.

Against this backdrop, standardization bodies such as IEEE play a pivotal role in shaping the future of Ethernet technology. The IEEE P802.3dj draft specifications are instrumental in defining the parameters for 1.6T Ethernet, encompassing a range of physical layer types from backplanes to single-mode fiber optics. However, the Ethernet ecosystem extends beyond IEEE and includes various industry bodies that develop specifications for different applications such as InfiniBand and Fibre Channel. Collaboration among these entities is imperative to ensure a robust ecosystem that meets the diverse needs of end-users and operators.

The proliferation of AI and ML applications has accelerated the pace of standardization efforts, compelling bodies like IEEE and OIF to expedite the development of specifications. While the quality of standards necessitates thorough review, the industry’s pressing needs mandate a balance between quality and expediency. This urgency is underscored by the advent of captive interfaces, where AI players who own both ends of the network are forging ahead with proprietary solutions to meet their immediate requirements, necessitating subsequent convergence with industry standards.

As part of this technology evolution, power efficiency emerges as a paramount concern. Because the transition to 1.6T Ethernet entails a doubling of power consumption, innovative solutions are imperative to mitigate energy demands. Strategies such as co-packaged optics and silicon photonics hold promise for reducing power consumption while optimizing performance. However, achieving optimal solutions requires exploring a landscape of competing architectures and approaches.

As industry players gear up for the advent of 1.6T Ethernet, the role of system-on-chip (SoC) designers becomes pivotal. Despite facing monumental challenges, early access to IP facilitates progress amidst evolving standards. Moreover, power efficiency emerges as a cornerstone of data center evolution, with advancements in signaling efficiency poised to redefine the power landscape. As the march towards 1.6T Ethernet continues, collaboration, innovation, and a keen focus on efficiency will pave the way for a new era of connectivity.

Looking ten years ahead, the data center promises a paradigm shift towards optical connectivity and enhanced power efficiency. With optics poised to play an increasingly central role, the industry must adapt to a landscape where latency and power consumption are paramount concerns.

Also Read:

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production

Synopsys SNUG Silicon Valley Conference 2024: Powering Innovation in the Era of Pervasive Intelligence

2024 DVCon US Panel: Overcoming the challenges of multi-die systems verification



TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production
by Daniel Nenni on 04-02-2024 at 6:00 am


NVIDIA cuLitho Accelerates Semiconductor Manufacturing’s Most Compute-Intensive Workload by 40-60x, Opens Industry to New Generative AI Algorithms.

An incredible example of semiconductor industry partnership was revealed during the Synopsys Users Group (SNUG) conference last month. It started with a press release, but there is much more to learn here with regard to semiconductor industry dynamics.

I saw a very energized Jensen Huang, co-founder and CEO of Nvidia, at GTC, which was amazing. It was more like a rock concert than a technology conference. Jensen appeared at SNUG in a much more relaxed mode, chatting about the relationship between Nvidia and Synopsys. Jensen mentioned that in exchange for Synopsys software, Nvidia gave them 250,000 shares of pre-IPO stock, which would now be worth billions of dollars. I was around back then at the beginning of EDA, foundries and fabless, and it was quite a common practice for start-ups to swap stock for tools.

Jensen said quite clearly that without the support of Synopsys, Nvidia would not have gotten off the ground. He has said the same about TSMC. In fact, Jensen and TSMC founder Morris Chang are very close friends as a result of that early partnership.

The new cuLitho product has enabled a 45x speedup of curvilinear flows and a nearly 60x improvement on more traditional Manhattan-style flows. These are incredible cost savings for TSMC and TSMC’s customers, and there will be more to come.

“Computational lithography is a cornerstone of chip manufacturing,” said Jensen Huang, founder and CEO of NVIDIA. “Our work on cuLitho, in partnership with TSMC and Synopsys, applies accelerated computing and generative AI to open new frontiers for semiconductor scaling.”

“Our work with NVIDIA to integrate GPU-accelerated computing in the TSMC workflow has resulted in great leaps in performance, dramatic throughput improvement, shortened cycle time and reduced power requirements,” said Dr. C.C. Wei, CEO of TSMC. “We are moving NVIDIA cuLitho into production at TSMC, leveraging this computational lithography technology to drive a critical component of semiconductor scaling.”

“For more than two decades Synopsys Proteus mask synthesis software products have been the production-proven choice for accelerating computational lithography — the most demanding workload in semiconductor manufacturing,” said Sassine Ghazi, president and CEO of Synopsys. “With the move to advanced nodes, computational lithography has dramatically increased in complexity and compute cost. Our collaboration with TSMC and NVIDIA is critical to enabling angstrom-level scaling as we pioneer advanced technologies to reduce turnaround time by orders of magnitude through the power of accelerated computing.”

“There are great innovations happening in computational lithography at the OPC software layer from Synopsys, at the CPU-GPU hardware layer from NVIDIA with the cuLitho library, and of course, we’re working closely with our common partner TSMC to optimize their OPC recipes. Collectively, we have been able to show some dramatic breakthroughs in terms of performance for one of the most compute-intensive semiconductor manufacturing workloads.” — Shankar Krishnamoorthy, GM of the Synopsys EDA Group

Collaboration and partnerships are still critical for the semiconductor industry; in fact, collaborative partnerships have been a big part of my 40-year semiconductor career. TSMC is an easy example with the massive ecosystem it has built. Synopsys is in a similar position as the #1 EDA company, the #1 IP company, and the #1 TCAD company. All of the foundries collaborate closely with Synopsys, absolutely.

Also Read:

Synopsys SNUG Silicon Valley Conference 2024: Powering Innovation in the Era of Pervasive Intelligence

2024 DVCon US Panel: Overcoming the challenges of multi-die systems verification

Synopsys Enhances PPA with Backside Routing



ASML moving to U.S.- Nvidia to change name to AISi & acquire PSI Quantum
by Robert Maire on 04-01-2024 at 10:00 am

  • Nvidia changing name to AISi (AI silicon) reflecting business focus
  • Nvidia to buy PSI Quantum to combine AI & quantum efforts
  • ASML to move to U.S. to reduce China & employee restrictions
  • New Japanese consortia firms join Rapidus & IBM fab team

Nvidia renaming to reflect AI reality

Nvidia, which is now clearly seen as the poster child and dominant leader in all things AI, will change its name to AISi (pronounced “I See”), reflecting its current position as the dominant source of AI (artificial intelligence) Si (silicon).

In conjunction with the name change, the company will also move its stock trading to the NYSE under the new ticker symbol “AI”.

Jensen Huang, CEO of Nvidia, will announce the changes at a scheduled news conference but his statements regarding the change were released earlier today.

“Nvidia and the world are entering a new era of pervasive, ubiquitous artificial intelligence. As the leader in AI based silicon, our new name, AISi, is more reflective of our business position and our focus going forward,” Jensen continued. “We believe AI will be more impactful on society than the PC, the internet or mobile phones, and we are dedicating the company, and its name, to it.”

Rumored acquisition of PsiQuantum likely to be announced at news conference

It is also speculated that Nvidia, the new “AISi”, will announce the acquisition of PsiQuantum, the leading start-up in quantum computing, at the same news conference. Nvidia (AISi) has been working on a stealth quantum computing program to combine its 200-billion-transistor AI chips with PsiQuantum’s quantum computer systems and their millions of optical qubits. The combination would likely be a superior competitor to Google’s Quantum AI project, called Sycamore, as it would combine the two market leaders in AI and quantum computing.

Google’s Quantum AI project

We would add that the combined entity could easily be called AISiPI (I see pie…) if you added PSI Quantum to Nvidia’s new name…..just saying…..

ASML to move headquarters to US to reduce China and employee restrictions

There have been ongoing reports in the media regarding ASML’s unhappiness with its current restrictive position in the Netherlands. It has been hamstrung by limits on foreign employees and is in obvious conflict over exports to China, as well as other restrictions.

The concerns have grown so strong that the Dutch government launched a previously secret project called “Operation Beethoven” aimed at keeping ASML in the Netherlands. It has been reported that Operation Beethoven has offered 2.5 billion euros in incentives and changes in laws to entice ASML to stay put.

News report on operation Beethoven

Previously unknown and unreported in the press is that there has been a secret effort in the US appropriately nicknamed “Roll Over Beethoven” to convince ASML to move to the U.S.

The secret effort has been spearheaded by the unusual combination of Sam Altman of AI fame and Elon Musk (strange bedfellows, given their opposing views). The target location in the U.S. for ASML appears to be Austin, Texas, not far from Tesla’s headquarters. Altman has been very publicly calling for up to 7 trillion dollars in spending on the semiconductor industry to support the AI industry. Musk has obviously been very vocal about AI in general and especially in Tesla’s products. So having the number one, premier semiconductor equipment company in the world as a neighbor would be quite a coup and a bonus for both AI and semiconductor efforts in the U.S.

An unnamed source we spoke to at ASML commented that part of the “USA move package” would include assurances from the U.S. Department of Commerce to reduce China export restrictions, from ASML’s current more restrictive limits in the Netherlands, which ASML has long complained about, to the lower levels currently enjoyed by other U.S. equipment makers. In addition, ASML would more easily qualify for billions of dollars of CHIPS Act funding as a US-based company. Also, ASML moving to Austin would offset Applied Materials moving manufacturing jobs out of Texas to Singapore.

In a somewhat bizarre, counterintuitive move, Texas would open up restrictions on immigrants in order to supply ASML’s need for talent.

In our view this move makes sense, as most ASML shareholders are in the U.S. anyway, and Austin should prove very attractive with all the incentives…..we also find it somewhat poetic justice that Eindhoven means “last hooves” in English and ASML would be moving to the land of cattle (hooves) in Texas……

Rapidus adds Japanese Optimus & Megatron robotic makers to the team

Rapidus, the Japanese fab consortium that is racing to build a 2nm fab in Japan with the help of U.S.-based IBM and many others in the industry, continues to add to the team of experienced industry players involved in the effort.

With the goal of building one of the most advanced fabs in the world, the need for advanced automation and robotics is clear. In addition to the AMHS (automated material handling systems) which are the overhead fab mini railroads made by Daifuku and Murata the new Rapidus fab will feature robotic wafer handling and tool control automatons made by Optimus and Megatron. The Optimus robot will be called the “Optimus Prime” while the Megatron robots will be a series called “Decepticons” as they imitate human operators.

A spokesperson for Rapidus, Satsu Slayer, welcomed both Optimus and Megatron to the Rapidus consortium, saying, “We are certain that Optimus Prime and the Decepticons will play a key role in ‘transforming’ the dream of Rapidus into the leading fab in the world.”

We hope you enjoyed this April “First” issue of Semiwatch!!!!

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies, with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants and investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view compared to other sources.

Also Read:

SPIE Let there be Light! High NA Kickoff! Samsung Slows? “Rapid” Decline?

AMAT – Flattish QTR Flattish Guide – Improving 2024 – Memory and Logic up, ICAPs Down

KLAC- OK Quarter & flat guide- Hopefully 2025 recovery- Big China % & Backlog



MZ Technologies Enables Multi-Die Design with GENIO
by Mike Gianfagna on 04-01-2024 at 6:00 am


MZ Technologies is a unique company that enables multi-die design by providing critical planning and analysis tools that sit above the traditional EDA design flow. Chip and package design tools are good at what they do. Given a set of constraints, they will deliver a good result. The question is, what is the right set of constraints?  What type of stack (for 3D), what type of interposer (for 2.5D) and what type of placement of blocks and pins will deliver the best result?  These are just some of the questions MZ Technologies addresses. The company’s design tool is called GENIO™. I got an opportunity to see a live demonstration of the tool recently. That illuminated a lot about its impact. Read on to see how MZ Technologies enables multi-die design with GENIO.

If you want some background on MZ Technologies and how its products fit in the design flow, you can get that here.  You can also get an overview of the GENIO product suite here. As they say, a picture is worth 1,000 words. A live demo has similar power to illuminate concepts. Let’s dig in…

GENIO for 2.5D

Francesco Rossi

Francesco Rossi, engineering manager at MZ Technologies, began the demo by developing a 2.5D design consisting of an XPU and four HBM memory stacks. Using simple and intuitive “drag and drop” capabilities and library managers, he configured items such as the four HBM stacks, the XPU, the PHYs for each HBM and a silicon interposer. Bump locations were also defined for the interposer to handle connectivity between components and through the silicon interposer to the package substrate. Connection points on the package were also defined with GENIO in a straightforward manner.

Below is a screen shot of the graphical representation of the completed stack.

2.5D Stack Configuration

Once the complete stack was defined (package, interposer, devices), connectivity was introduced and optimized. The optimization process examined the fly lines implied by the connectivity to minimize overall fly line length. This delivers a more optimal starting point for the downstream implementation flow. Care was also taken to ensure there were no crossovers in the fly lines. The figure below shows the results of this work. All fly lines are displayed. The red items are through-silicon vias (TSVs). These have been either automatically placed or guided by the designer for critical areas.

2.5D Stack with Fly Line Routing
Anna Fontanelli

Anna Fontanelli, founder and CEO of MZ Technologies, also joined the demo. She explained that this demo was developed in conjunction with Synopsys to ensure a good fit between GENIO and the implementation tools and IP that it works with. She said that Synopsys DesignWare IP was used for the demo, which interfaced with Synopsys IC Compiler and Custom Compiler. The key point was a good flow between the high-level planning offered by GENIO and the tools and IP that would ultimately implement the final system.

She went on to explain that this design had over 200,000 nets. The interconnect cockpit provided by GENIO delivers substantial new capabilities to manage and optimize a problem of this size. For example, pin groupings can be defined that cross the entire design hierarchy. Fly lines and groups of fly lines can be analyzed for average, minimum and maximum length. She pointed out that analyzing the design across the full hierarchy, from silicon all the way to the package, provides a unique perspective on system performance that is difficult to achieve with conventional approaches.

Using these and many other capabilities, the aspect ratio of the initial design can be examined to ensure an optimal result. Slight changes in aspect ratio and placement can be quickly assessed to find the best result. Anna also explained that estimated resistances can be extracted from the interconnect to drive early static timing analysis.
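For a feel of the fly-line metrics involved, here is a back-of-envelope sketch that scores a candidate placement by total, average and min/max Manhattan fly-line length. The coordinates and net names are invented for illustration; this is not GENIO’s data model or optimization algorithm, just the kind of figure of merit such a cockpit lets you compare across placements:

```python
# Hypothetical bump coordinates (x, y) in micrometres for one candidate placement.
bumps = {
    "xpu_tx0": (1000, 2400), "hbm0_rx0": (400, 2500),
    "xpu_tx1": (1000, 2300), "hbm0_rx1": (400, 2350),
}

# Fly lines: (source bump, destination bump), e.g. XPU PHY to HBM PHY.
fly_lines = [("xpu_tx0", "hbm0_rx0"), ("xpu_tx1", "hbm0_rx1")]

def manhattan(a, b):
    """Manhattan distance between two bump locations."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

lengths = [manhattan(bumps[src], bumps[dst]) for src, dst in fly_lines]

print(f"total  : {sum(lengths)} um")
print(f"average: {sum(lengths) / len(lengths):.0f} um")
print(f"min/max: {min(lengths)} / {max(lengths)} um")
```

Re-running a metric like this for each candidate aspect ratio or placement is, conceptually, what lets slight floorplan changes be compared quickly before committing to implementation.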

GENIO for 3D

Marco Cignarella

Marco Cignarella, senior software engineer at MZ Technologies, showed how GENIO can be used to define and optimize 3D stacks. A design consisting of multiple chips and memories was used. By changing the stack configuration, the overall interconnect length and number of TSVs can be quickly assessed. Key relationships governing the relative placement of components in the 3D stack can be easily specified before optimization begins. This allows the designer’s global perspective to be considered with minimal intervention.

Using these capabilities, the top two or three stack configurations can be quickly identified for further analysis. Below are screen shots of one candidate 3D stack configuration and the associated fly line routing view. A lot of global perspective can be achieved in a short period of time.

3D Stack Configuration
3D Stack with Fly Line Routing

To Learn More

This demo session provided an incredible amount of design perspective and analysis in a short period of time. I am sure many design teams work to develop the optimal configuration for a 2.5D or 3D design using Microsoft Excel and PowerPoint. The data that drives these analyses is often scattered across multiple directories.

The ability to do this work in one “cockpit” with one, verified data source and automated analytics and visualization tools can take a multi-week project down to a day or so, with far better results. If you are considering multi-die design, you need a tool like GENIO. The ways to contact MZ Technologies can be found here. And that’s how MZ Technologies enables multi-die design with GENIO.

Also Read:

MZ Technologies Enables Multi-Die Design with GENIO

How MZ Technologies is Making Multi-Die Design a Reality

Outlook 2024 with Anna Fontanelli Founder & CEO MZ Technologies

CEO Interview: Anna Fontanelli of MZ Technologies



Podcast EP214: The Broad Impact of proteanTecs with Noam Brousard
by Daniel Nenni on 03-29-2024 at 10:00 am

Dan is joined by Noam Brousard, who has over 20 years of diverse technology experience, spanning systems engineering, software development and hardware design. He currently serves as the vice president of Solutions Engineering at proteanTecs, where he helps customers implement on-chip monitoring solutions to address their biggest quality and reliability challenges and optimize their power and performance. Previously, he served as proteanTecs’ vice president of Product, leading the development and commercialization of the company’s multi-disciplinary product portfolio.

Noam discusses the architecture and operation of proteanTecs’ fine-grained embedded sensing and analysis capabilities and how the technology enhances many aspects of the design, including power, performance, quality and reliability over the device lifetime.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

 



LIVE WEBINAR: RISC-V Instruction Set Architecture: Enhancing Computing Power
by Daniel Nenni on 03-29-2024 at 8:00 am


In the dynamic landscape of chip design, two trends stand out as game-changers: the rise of the RISC-V instruction set architecture (ISA) and the advent of Software Defined products. Today, we delve into why these trends are not just shaping the industry but propelling companies like Andes and Menta to the forefront of innovation. Join us for an enlightening webinar where we explore the intersection of these trends and their impact on the semiconductor industry.

SEE REPLAY

RISC-V, a relatively new player in the field, has managed to disrupt a market long dominated by established ISAs. What sets RISC-V apart? One key factor lies in its ability to empower chip designers like never before. With RISC-V, designers can extend the ISA to unlock enhanced computing power, significant performance improvements, power reduction, and reduced costs. Take, for instance, the groundbreaking Meta Training and Inference Accelerator (MTIA). Leveraging Andes Technology Corp.’s RISC-V CPU with vector extensions IP, MTIA showcases the potential of custom extensions to drive innovation in chip design.

Traditionally, adding functionality to a CPU ISA posed significant challenges, often resulting in lengthy design cycles and delays in time to market. However, Andes has revolutionized the process with tools like ACE (Andes Custom Extension) and CoPilot, streamlining the integration of custom extensions into RISC-V CPUs. Now, designers can implement custom changes more efficiently, paving the way for rapid innovation and product development.

But the evolution of chip design doesn’t stop at RISC-V. Enter the era of Software Defined products, where flexibility and adaptability reign supreme. Whether it’s Software Defined Vehicles or configurable electronics in aerospace applications, the need for dynamic adjustments is more pressing than ever. This is where Menta’s embedded Field-Programmable Gate Array (eFPGA) comes into play.

Menta’s eFPGA technology complements RISC-V CPUs with custom extensions, offering unparalleled flexibility across a myriad of use cases. From software-defined radio in telecom to configurable engine management systems in automotive applications, the possibilities are limitless. With Menta’s eFPGA, chip designers can seamlessly adapt to evolving standards, address security vulnerabilities, and optimize performance in real-time.

The synergy between RISC-V and Software Defined products represents a paradigm shift in chip design. By combining the power of customizable ISAs with the flexibility of embedded FPGA technology, Andes and Menta are empowering designers to push the boundaries of innovation. Whether it’s unlocking new capabilities in telecom infrastructure or enhancing imaging and preprocessing in space applications, the possibilities are as vast as the cosmos.

SEE REPLAY

Join us as we dive deeper into the transformative potential of RISC-V and Software Defined products. Discover how these trends are reshaping the semiconductor industry and paving the way for a future where innovation knows no bounds. Don’t miss out on this opportunity to stay ahead of the curve and unlock the full potential of chip design. Register now and be part of the revolution!

Also Read:

LIVE WEBINAR: Accelerating Compute-Bound Algorithms with Andes Custom Extensions (ACE) and Flex Logix Embedded FPGA Array

CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology

Extendible Processor Architectures for IoT Applications