
Siemens EDA Makes 3D IC Design More Accessible with Early Package Assembly Verification

by Mike Gianfagna on 05-13-2024 at 6:00 am

2.5D and 3D ICs present special challenges since these designs contain multiple chiplets of different materials integrated in all three dimensions. This complexity demands full assembly verification of the entire stack, considering all the subtle electrical and physical interactions of the complete system. Identifying the right stack configuration as early as possible in the design process minimizes re-work and significantly improves the chances of success. Siemens Digital Industries Software recently published a comprehensive white paper on how to address these problems. A link is coming, but first let’s examine the array of challenges that are presented in the white paper to see how Siemens EDA makes 3D IC design more accessible with early package assembly verification.

2.5/3D IC Design Challenges

2.5D and 3D ICs are composed of multiple chiplets, each of which may be fabricated on separate process nodes. The white paper talks about some of the challenges for this kind of design methodology. For example, options for connecting chiplets are reviewed. A partial list:

  • Chiplets connected via interposer with bump connections and through-silicon-vias (TSVs)
  • Chiplets on package
  • Chiplets on packages with discrete and thinned interposers embedded without TSVs
  • Chiplets stacked on chiplets through direct bonding techniques
  • Chiplets stacked on chiplets with TSVs or copper pillars and more bumps

There is a lot to consider.

The 3D IC Assembly Flow

Challenges here include finding a method to disaggregate the components of a design into appropriate chiplets. Each chiplet must then be assigned to an appropriate foundry technology. The specific approach used to assemble the design is critical. Material choices and chiplet placement will induce thermal and mechanical stresses that can impact the intended electrical behavior of the full assembly. This phase may require many iterations.

3D IC Physical Verification

There are a lot of challenges and new methods discussed in this section. For example, the most common approach for checking physical and electrical compliance for a 3D IC requires the use of separate rule decks for design rule checking (DRC), layout versus schematic (LVS) checking, and so on for each interface within the package (chip-to-chip, chip-to-interposer, chip-to-package, interposer-to-package, etc.). These rule decks typically use pseudo-devices, commonly in the form of 0-ohm resistors, to identify the connections across each interface while still preserving the individual chiplet-level net names.
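
To make the pseudo-device idea concrete, here is a minimal, hypothetical sketch (in Python, not any vendor’s rule deck syntax) of how 0-ohm connector devices can stitch chiplet-level nets into assembly-level nets while keeping each chiplet’s local net names available for debug. The die names and nets are invented for illustration.

```python
# Minimal illustration (hypothetical, not any vendor's rule deck format):
# 0-ohm pseudo-devices stitch chiplet-level nets into assembly-level nets
# while the original per-chiplet net names are kept for error reporting.
from collections import defaultdict

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Chiplet-level nets are qualified by die name so they stay unique.
nets = ["cpu0/VDD", "cpu0/D[0]", "interposer/VDD", "interposer/ch0_d0", "hbm0/DQ0"]

# Each pseudo-device (the 0-ohm resistor in the rule deck) ties two interface nets.
pseudo_devices = [
    ("cpu0/VDD", "interposer/VDD"),        # power bump
    ("cpu0/D[0]", "interposer/ch0_d0"),    # signal microbump
    ("interposer/ch0_d0", "hbm0/DQ0"),     # bump/TSV to the memory die
]

uf = UnionFind()
for a, b in pseudo_devices:
    uf.union(a, b)

# Group the chiplet-level nets into assembly-level nets.
assembly_nets = defaultdict(list)
for net in nets:
    assembly_nets[uf.find(net)].append(net)

for members in assembly_nets.values():
    print("assembly-level net:", members)
```

Running it groups cpu0/VDD with interposer/VDD and chains the data nets across all three dies, which is the connectivity an assembly-level check needs while the per-die names remain intact.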

Problems here include the fact that designers must associate the many individual rule decks to the corresponding interfaces within the assembly layout, which may not always be intuitive. As errors are identified, designers must be able to highlight them at the proper interfaces (with proper handling of rotations and magnifications) to help identify the appropriate fixes.

Many more challenges are discussed in this section. For example, without a holistic assembly approach, it is impossible to verify ESD protection when the ESD circuits exist in one chip and the protection devices exist in another.

Shift Left IC Design and Verification

In this section, the benefits of a “shift left” or early verification approach are reviewed. Reduced design time and a higher quality result are some of the benefits. How the Calibre nmPlatform and other Siemens EDA tools can be used to implement a shift left approach are discussed.

Shift Left for 3D IC Physical Verification

This section begins by pointing out that 3D IC verification of physical and electrical constraints requires a holistic assembly-level approach. A holistic approach requires full knowledge of both the 3D IC assembly and the individual chiplet processes.  Many tools must be integrated in the correct flow and emerging standards such as 3Dblox must be used correctly.

Approaches to handle thermal and mechanical stress analysis are also detailed. This is also a complex process requiring many tools used in the correct way. The importance of a holistic approach is again stressed. For example, thermal and mechanical issues cannot be treated in isolation: mechanical stresses induce heat, thermal effects create mechanical stress, and so on. A correct approach here can avoid unwanted surprises by the time the final iteration is performed.

To Learn More

This white paper covers many aspects of package assembly verification for multi-die designs. The benefits of a shift-left, or early, approach are clearly defined, along with a description of how the current flow must be modified to accommodate these techniques.

If you are considering a multi-die design, I highly recommend reading this white paper. It will save you a lot of time and effort. You can get your copy of this important document here. And that’s how Siemens EDA makes 3D IC design more accessible with early package assembly verification.


Podcast EP222: The Importance of Managing and Preserving Ultrapure Water in Semiconductor Fabs with Jim Cannon

by Daniel Nenni on 05-10-2024 at 10:00 am

Dan is joined by Jim Cannon, Head of OEM and Markets at Mettler-Toledo Thornton. Jim has over 35 years of experience managing, designing, and developing ultrapure water treatment and technology. He is currently involved in standards and regulatory organizations, including the Facilities and Liquid Chemicals Committee, the Reclaim/Reuse/Recycle Task Force, and the UPW Task Force.

Jim focuses on a unique challenge the semiconductor industry faces. Rather than the power or performance of the end device, he discusses the substantial challenge of reducing water usage for fabs, both new and existing facilities. It turns out semiconductor fabs use ultra-pure water as the universal solvent for all process steps. When you consider that multiple rinses are required for many steps, the amount of water used by large fabs can literally drain the water table of the surrounding area.

Jim discusses the ways the industry is addressing this problem, both from a regulatory perspective and by employing advanced technology to sense water purity and to develop new methods to reclaim water in the process. The techniques discussed will have a substantial impact on the growth of the semiconductor industry.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Webinar: Fine-grained Memory Protection to Prevent RISC-V Cyber Attacks

by Daniel Nenni on 05-10-2024 at 8:00 am

Most organizations are aware of cybercrime attempts such as phishing, malware installed from dodgy websites, or ransomware attacks, and undertake countermeasures. However, relatively little attention has been given to memory safety vulnerabilities such as buffer overflows or over-reads. For decades, the industry has created billions of lines of C and C++ code, but addressing the resulting memory safety risks has been a tough challenge.

WEBINAR REGISTRATION

Each year, Microsoft lists vulnerabilities (CVE*) reported after analyzing cyberattacks. It appears that ~70% of them are caused by memory safety issues. At Codasip we are convinced that the way CHERI technology revisits fundamental design choices in hardware and software will prevent this problem, and significantly improve system security. Building on our Custom Compute approach, we are implementing CHERI technology into our RISC-V cores to provide our customers with safe and secure solutions. This webinar will take a closer look into how we implement this technology across our product line.

CHERI stands for Capability Hardware Enhanced RISC Instructions. As defined by the University of Cambridge, CHERI extends conventional hardware Instruction-Set Architectures (ISAs) with new architectural features to enable fine-grained memory protection and highly scalable software compartmentalization.

CHERI is implemented through our EDA tool, Codasip Studio. Using Codasip Studio, we have added built-in, fine-grained memory protection by extending the RISC-V ISA with CHERI-based custom instructions. This allows for 100% coverage in checking for memory errors, providing fine-grained protection against software attacks and programming errors. CHERI technology provides built-in CPU logic to check read/write permissions, and unalterable hardware capabilities protect against known and future vulnerabilities. All of this comes at just a small increase in area and a low impact on performance. Our solution also allows just the critical areas of code to be recompiled.
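
As a purely conceptual illustration of the kind of fine-grained checking CHERI performs in hardware, the sketch below models a capability as a bounds-plus-permissions object and rejects any access outside it. This is an illustrative software model only, not Codasip’s implementation or the actual CHERI capability encoding.

```python
# Conceptual model of a CHERI-style capability check (illustrative only; real
# CHERI encodes bounds and permissions in hardware capability registers).

class CapabilityError(Exception):
    pass

class Capability:
    def __init__(self, base, length, perms):
        self.base = base          # lowest address the capability grants
        self.length = length      # size of the granted region in bytes
        self.perms = set(perms)   # e.g. {"load", "store"}

    def check(self, addr, size, access):
        if access not in self.perms:
            raise CapabilityError(f"{access} not permitted")
        if addr < self.base or addr + size > self.base + self.length:
            raise CapabilityError("access outside capability bounds")

memory = bytearray(64)
buf = Capability(base=16, length=8, perms={"load", "store"})

def store(cap, addr, data):
    cap.check(addr, len(data), "store")          # hardware does this per access
    memory[addr:addr + len(data)] = data

store(buf, 16, b"ok")                            # in bounds: allowed
try:
    store(buf, 20, b"overflow!")                 # classic buffer overflow: trapped
except CapabilityError as e:
    print("trapped:", e)
```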

WEBINAR REGISTRATION

Codasip’s CHERI technology recently won the award in the Safety & Security category at the embedded world Exhibition & Conference.

With the rise of cybersecurity threats, the susceptibility of software coded in memory unsafe languages is a significant worry. CHERI, a deterministic hardware-based security method, tackles two crucial areas: a) ensuring memory safety, and b) enabling scalable compartmentalization. Studies suggest that these areas cover approximately 74% of severe CVEs in Linux. Notably, devastating vulnerabilities like Heartbleed could have been averted with CHERI.

Having undergone over a decade of research, CHERI stands as a mature technology. At Codasip we have taken the initiative to develop the first commercially licensed processor IP core featuring CHERI. Leveraging RISC-V serves as an excellent foundation for integrating CHERI instructions into the ISA.

WEBINAR REGISTRATION

Codasip is a processor solutions company which uniquely helps developers to differentiate their products. We are Europe’s leading RISC-V company with a global presence. Billions of chips already use our technology.

We deliver custom compute through the combination of the open RISC-V ISA, Codasip Studio processor design automation and high-quality processor IP. Our innovative approach lets you easily customize and differentiate your designs. You can develop high-performing, and game-changing products that are truly transformational.

Also Read:

How Codasip Unleashed CHERI and Created a Paradigm Shift for Secured Innovation

RISC-V Summit Buzz – Ron Black Unveils Codasip’s Paradigm Shift for Secured Innovation

Extending RISC-V for accelerating FIR and median filters


CEO Interview: Dr. Nasib Naser of ORION VLSI Technologies.

by Daniel Nenni on 05-10-2024 at 6:00 am

Dr. Nasib Naser brings over 35 years of experience in the field. His expertise spans the entire VLSI cycle from conception to chip design, with a strong focus on verification methodologies. During his 17 years at Synopsys, Dr. Naser held senior management positions, leading North American Verification IP, managing Central US Verification Applications Engineering, and driving all memory DDR VIP activities. His experience encompasses diverse verification tools such as VCS, SystemVerilog, UVM, SystemC, and Low Power UPF, along with expertise in Transaction Level Modeling, Virtual Prototyping, System Level Design, and HW/SW co-simulation and co-verification. Prior to Synopsys, Dr. Naser spent 10 years at NASA developing flight simulation software. He also held positions at CoWare and Varian Associates.

Tell us about your company.
Orion VLSI Technologies is the first Palestinian semiconductor house to be specialized in VLSI Design Services. Orion provides it services based on multiple engagement models such as staff augmentation, train-to-hire program, and turnkey solutions. Moreover, Orion utilizing the experience of its senior staff to develop its own chip (IP), which is the first chip ever to be designed in Palestine. This provide opens a new horizon for young VLSI engineers to gain hands-on experience.

Orion was launched in 2023 with 4 engineers, and now we are more than 65 engineers. We are on our path to achieving our mission to establish a Palestinian VLSI industry and carve the path for talented engineers eager for new opportunities.

What problems are you solving?
Orion VLSI Technologies is well-positioned to address a critical need in the global VLSI industry: talent shortages. Deloitte predicts a need for over one million more skilled workers by 2030 in the semiconductor industry worldwide. With a highly motivated engineering staff and expertise across the entire design flow, from idea to chip, Orion offers valuable outsourcing services. This allows companies grappling with the lack of in-house VLSI talent to access a skilled workforce and continue innovating without delays.

What application areas are your strongest?
We pride ourselves on the know-how of taking an idea from specification to chip. With our senior staff’s extensive expertise across the whole VLSI design cycle, we provide our clients with valuable, high-quality services to elevate their chip design efforts.

What keeps your customers up at night?
RTL freeze timelines, verification metrics closure, and meeting the tape-out date. At Orion, we put customers first. We help them overcome any issues or obstacles they may face by providing high-quality services and solutions.

What does the competitive landscape look like and how do you differentiate?
We are aware that there are many outsourcing outlets in the VLSI industry around the world, all trying to fill the gap created by the talent shortage. However, we have many strengths that differentiate us, including cost, quality of service, communication and English skills, and time-zone proximity. But what makes us unique is the extensive expertise that we bring to the table. Our senior staff possess the necessary know-how to drive innovation, meet deadlines, and deliver.

What new features/technology are you working on?
SystemVerilog/SystemC co-simulation, using SystemC models as the golden reference models while using UVM for the testbench and verification.

How do customers normally engage with your company?
We offer a variety of ways for customers to engage with us depending on their needs. Many customers look to work with us under the staff augmentation model, and some others look for turnkey solutions. As to the channels where customers can engage with us, we have a robust online presence, including a comprehensive knowledge base and active social media channels where customers can find helpful resources and connect with our team. We are ready to provide high quality service for our customers. Contact our Business Development team for more information: bd@orionvtech.com.

Also Read:

CEO Interview: Harish Mandadi of AiFA Labs

CEO Interview with Clay Johnson of CacheQ Systems

CEO Interview: Khaled Maalej, VSORA Founder and CEO


Webinar: Samtec and Achronix Expand AI in the Data Center

by Mike Gianfagna on 05-09-2024 at 10:00 am

The performance demands of data centers continue to grow, driven to a large degree by the ubiquitous use of complex AI algorithms. On April 25, Embedded Computing Design held an informative webinar on this topic. Two experts looked at the problem from the standpoints of processor architecture and communication strategies, which covers a lot of ground. It turns out AI can be used to solve some of the challenges of bringing AI algorithms to life. A link for the replay is coming, but first, let’s look at the topics covered as Samtec and Achronix expand AI in the data center.

Webinar Background

The event was part of Embedded Computing Design’s An Engineer’s Guide to AI Integration webinar series. According to the publication:

AI is one of the most complex technologies embedded developers must tackle. Integrating it into your system brings with it so many questions and not so many answers. In this monthly series, Embedded Computing Design will look to simplify the design process, as much as that’s possible.

In this webinar, the differences between data center design and conventional embedded computer design were examined, with a look at what choices a developer needs to make. The presenters were:

Matt Burns
Global Director, Technical Marketing, Samtec

Viswateja Nemani
Product Manager of AI & Software at Achronix

And the moderator was:

Rich Nass
Executive Vice-President, Brand Director, Embedded Franchise, OpenSystems Media

Rich brought a lot of knowledge and perspective to the event and did a great job moderating the webinar and conducting a valuable Q&A session afterwards. The presenters also brought a lot to the event. Samtec’s Matt Burns and Achronix’s Viswa Nemani have extensive knowledge of AI algorithms and the demands they bring to system design.

Samtec covers a broad range of high-performance communication technology and Achronix covers a broad range of high-performance computing technology. This combination makes for a complete view of the challenges data center and system designers face. You can learn more on SemiWiki about Samtec here, and Achronix here.

The Achronix FPGA-Powered View

Viswa kicked off the discussion by explaining how the various layers of AI algorithms relate to each other and how each delivers unique capabilities. The diagram below summarizes this discussion; it is useful to understand how AI builds upon itself.

He then framed how data centers fit in and what challenges exist. Viswa explained that data centers are uniquely positioned to support AI applications thanks to the connectivity and processing power they deliver. He pointed out that vast amounts of data pass through today’s data centers, requiring huge storage and compute capabilities. The AI workloads running in data centers demand high power densities, requiring 100kW+ per cabinet and liquid cooling. This is just the beginning of the design challenges.

He explained that AI/ML algorithms are exceptional at spotting patterns in data. This leads to opportunities to improve efficiency, implement predictive and prescriptive analytics, optimize energy, reduce cost, and enhance security and productivity. Viswa then went into detail on each of these topics. It is a very useful and relevant discussion.

He outlined the many benefits FPGAs bring to data center architectures:

  • Real-time processing
    • Low latency, e.g., Automatic speech recognition
  • Efficiency
    • Increase performance and energy efficiency of existing systems through accelerators
  • React quickly
    • To new market opportunities and competitive threats using programmable accelerators
  • Operational agility
    • Host and scale multiple applications on a single heterogeneous platform
    • Rapid deployment of new applications while minimizing total cost of ownership
  • In-network compute
    • Processing as traffic moves

Viswa concluded by discussing the Achronix Speedster7t FPGA and the ways it delivers enhanced performance, lower cost, and faster time-to-market. Some very compelling information was shared here. He also discussed Achronix embedded FPGA (eFPGA) IP and accelerator cards. Watch the full webinar now.

The Samtec View

Matt also began his discussion with an overview of AI algorithms. In this case, he discussed the evolution of high-profile AI applications. One example he cited was the move from OpenAI’s ChatGPT to its new application, Sora. ChatGPT creates textual answers to textual questions, while Sora creates video from textual descriptions. These new breakthroughs in large language models increase performance and power demands exponentially. Both Viswa and Matt cited some sobering statistics about these increases.

While this all seems like a huge problem, Matt explained how much more is coming. Early AI models from around 2018 had about 100 million parameters (ELMo is the example he cited). Today, just six years later, LLMs such as ChatGPT have about 100 to 200 billion parameters. And the latest models require about 1.6 trillion parameters. The “tip of the iceberg” moment came when Matt pointed out that a human of average intelligence has about 100 trillion synapses. If we’re going after human-like intelligence, we have a long way to go.
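
A quick back-of-the-envelope calculation puts those parameter counts in perspective. The 2 bytes per parameter (FP16) assumption below is mine, not a figure from the webinar, and the synapse line is included only for scale:

```python
# Rough memory footprint of model weights, assuming 2 bytes per parameter (FP16).
models = {
    "ELMo (~2018)":            100e6,
    "ChatGPT-class LLM":       175e9,    # "100-200 billion" from the talk
    "Latest models (~1.6T)":   1.6e12,
    "Human synapses":          100e12,   # for scale only, not directly comparable
}

bytes_per_param = 2  # FP16 assumption
for name, params in models.items():
    gib = params * bytes_per_param / 2**30
    print(f"{name:>22}: {params:.1e} parameters  ~{gib:,.1f} GiB of weights")
```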

With this backdrop of continued exponential growth in processing capability (and associated power consumption), Matt began a discussion of how to route all that data between the huge number of processing elements involved. Solving this problem is a core strength of Samtec, and Matt spent some time explaining the capabilities the company brings to a broad class of high-performance computing designs.

Matt discussed what Samtec offers from the front panel to the backplane for data center designs. The photo below summarizes some of the latest silicon-to-silicon connectivity solutions Samtec offers.

This large catalog of technologies can optimize data routing electrically, optically or via RF. Couple these cutting-edge products with expert design support and highly accurate, verified models and you can begin to see how Samtec is the perfect partner to address the continued exponential demands that AI will create.

Matt discussed a couple of AI-related applications to help put things in perspective. First, he discussed the special requirements of AI accelerators. He touched on the many new specifications that must be addressed (e.g., PCIe® and CXL®). He also discussed the need for domain-specific architectures to support new AI hardware chipsets that create complete end-to-end AI systems. This level of innovation is required because current compute density is insufficient to address the demands of newer AI models. Nvidia Blackwell was used as an example.

Matt described many solutions from Samtec to address these trends, from high-performance connectors and cables to direct chip attach solutions. These solutions deliver superior performance and density compared to traditional PCBs alone.

The breadth of capabilities Matt described is quite complete and compelling. You can explore Samtec’s AI solutions here. You will find a lot of product information there, as well as a very useful white paper called Artificial Intelligence Solutions Guide.

The webinar ended with a very informative Q&A session moderated by Rich Nass.

To Learn More

By now, you should really want to watch this webinar. If AI architectures are part of your future, it’s a must-see event. You can access the webinar replay here. This will help you to really understand how Samtec and Achronix expand AI in the data center.


Don’t Settle for Less Than Optimal – Get the Perfect Inductor Every Time

by Bud Hunter on 05-09-2024 at 6:00 am

The meaning of the word “Veloce” is “blazing fast”. It is the inspiration behind the name of the Ansys VeloceRF electromagnetic (EM) passive device synthesis platform that has been a favorite among RF and high-speed integrated circuit (IC) designers for more than 15 years. VeloceRF is a name that designers automatically connect with speed. Inductors are very often used in analog, RF, and high-speed integrated circuits for tuning, impedance matching, filtering, ESD protection, etc. Wireless and IoT devices, such as cellphones, tablets, laptops, wearables, mobile communications equipment, and automotive radars are powered by analog and RF ICs that rely on several inductors to operate. On-chip inductors generally enhance the reliability and efficiency of integrated circuits; they can offer circuit solutions with superior performance and contribute to a higher level of integration.

The primary capabilities sought by chip design engineers are speed and performance, and these are the key features of VeloceRF. The software offers an easy interface, with unparalleled speed and performance in synthesizing and modeling passive devices. In the IC design world, defining speed and performance can be very subjective. In the VeloceRF tool, designers place down, in layout, a variety of passive electromagnetic (EM) structures that are DRC-clean and manipulate the devices in the form of parametric cells that can be used to quickly generate an accurate EM model. VeloceRF leverages the Ansys RaptorX EM solver, which is known for its accuracy and high capacity and has been certified by all major foundries on multiple fabrication processes. So, for VeloceRF, the performance is a given, but what about the speed?

Fortunately, the RaptorX EM engine has unrivaled speed, but for a designer, speed is not as simple as having the fastest extraction or simulation times. Speed is a holistic measure; it is a metric of how effectively the tool allows designers to adhere to their timelines and be the pioneers in the market. No other tool in the industry comes close to the Magic Wand passive device synthesis feature of VeloceRF in maximizing a designer’s efficiency, and this is where the fun really begins.

Magic Wand is a powerful feature of VeloceRF that allows designers to easily select the type of EM structure that they need to synthesize together with the respective set of physical constraints, design goals, and EM extraction settings. When ready, Magic Wand will start running EM extractions in the background (with the RaptorX EM engine) that are optimized around their constraints and design goals. The end result is an entire library of parametric cell solutions the designer can immediately begin to use and analyze. Figure 1 displays an example of the Magic Wand user interface, a set of optimized solutions, and a parametric cell placed in the layout. The flexibility provided by this feature is unprecedented. Designers have some control over how strenuously the Magic Wand optimization engine looks for solutions that can be used as soon as they are found. This means real work can begin immediately while the optimization continues to run in the background. The designer can either let Magic Wand finish running extractions to completion or stop the run. Either way, the solutions are automatically saved and can easily be recalled in the future.

Figure 1: Magic Wand interface, an example set of solutions, and parametric cell
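
Conceptually, that background search resembles the sketch below: sweep candidate geometries, score each one, and keep every solution that meets the design goals. The geometry parameters and the crude analytic scoring function are placeholders for illustration only; VeloceRF scores candidates with real RaptorX EM extractions.

```python
# Conceptual constraint-driven synthesis sweep (illustrative only; the scoring
# function is a crude stand-in for a real EM extraction).
import itertools

def toy_em_model(turns, outer_um, width_um):
    """Very rough spiral-inductor estimates; a real flow calls an EM solver."""
    d_avg = outer_um - turns * (width_um + 2.0)           # assume 2 um spacing
    l_nh = 0.00294 * turns**2 * max(d_avg, 1.0)           # toy inductance estimate
    q = max(1.0, 25 - 0.5 * turns - 0.02 * outer_um)      # toy quality factor
    return l_nh, q

target_l_nh, tolerance, min_q = 1.0, 0.15, 12
solutions = []
for turns, outer, width in itertools.product(range(2, 7), range(80, 201, 20), (3, 5, 8)):
    l, q = toy_em_model(turns, outer, width)
    if abs(l - target_l_nh) / target_l_nh <= tolerance and q >= min_q:
        solutions.append((turns, outer, width, round(l, 3), round(q, 1)))

for s in sorted(solutions, key=lambda x: -x[4])[:5]:      # top 5 by Q
    print("turns=%d outer=%dum width=%dum -> L=%.3f nH, Q=%.1f" % s)
```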

One of the biggest bottlenecks in design efficiency is falling into the trap of redundant and iterative methodologies. How well can speed be served if the starting point for synthesis isn’t even based on a real electromagnetic extraction? The user takes a big step back if they start with a ballpark estimate only to realize, after the fact, that they still need to verify the performance of the device by running an electromagnetic extraction and simulating it. This is where Magic Wand really hits the mark in speed and efficiency. Since the solutions are inherently found with RaptorX electromagnetic extraction, they are fully ready to go and verified right out of the gate. If the user decides to make modifications to the parametric cell, the changes can quickly be analyzed in VeloceRF with a built-in performance plotting utility.

VeloceRF can easily synthesize a wide range of passive structures, including a variety of single-ended and differential inductors, multi-spiral transformers, T-coils, and transmission lines. One of the newest features of VeloceRF is the Bus Builder. Rather than relying on predefined transmission line topologies and spending weeks designing custom buses for clock and signal propagation under increasingly demanding design constraints, this feature enables customized design and modeling of complex buses tailored to specific user settings, including an arbitrary number of lines, metal layers, flexible geometries of the ground net, and dummy fill. Figure 2 shows the Bus Builder interface with cross-sectional and 3D views.

Figure 2: Bus Builder interface with cross-sectional and 3D views

The feature-rich aspects of VeloceRF, combined with its speed and comprehensive device synthesis portfolio, allow for a broad spectrum of enjoyable user experiences. Regardless of which design synthesis path is taken, there are assurances and features shared across all of them. Whether an EM structure is synthesized with Magic Wand, placed directly from the Parametric Cell Library, or built as bus lines with the Bus Builder, it will have automated support for ground shields and dummy fill and is assured to be DRC clean. VeloceRF is also fully integrated with the Cadence and Synopsys custom IC design platforms and provides a variety of plug-and-play features that enable designers to seamlessly utilize the synthesized parametric cells and EM models with their simulation flows.

One of the more advanced features of VeloceRF is the full coverage of layout dependent effects (LDE). These effects include impacts such as horizontal metal widths and spacing, dielectric damage, metal thickness, and side and bottom dielectric variations. Properly modeling LDE is already a critical design trend in the industry, especially for advanced technology nodes from 7nm down to 3nm. The fact that the RaptorX solver is the only electromagnetic engine in the industry to boast this capability and be fully certified by TSMC down to 5nm with more to come soon, makes VeloceRF a clear standout from the competition. Advanced dummy fill and complex ground shields are a necessary evil for EM solvers to deal with in terms of prolonging extraction times and limiting accuracy, but the performance of VeloceRF truly makes these challenges seem like an afterthought.

VeloceRF is a powerful platform that will take any IC design team in need of passive EM structures to the next level of success. Whether success is measured by more first-pass silicon, faster time to market, cost savings, risk aversion or good old-fashioned engineering entertainment for an IC designer, VeloceRF can provide it all when it comes to passive device synthesis. Its robust feature set with Magic Wand, Bus Builder, Parametric Cell Library, and Performance Plotter provides a broad spectrum of design flexibility while leveraging the RaptorX EM solver’s best-in-class performance, speed, and capacity attributes. Trends in the semiconductor industry are fast-paced and ever-changing, and it is more important than ever that design tools like VeloceRF reflect that. As design groups that require EM modeling begin to entrench themselves in advanced technology nodes, having a tool at their disposal that can capture LDE and be fully certified by leading foundries will be an unrivaled game-changer in the industry.

VeloceRF captures the essence of a famous quote from George S. Patton and his mentality for success, “Don’t tell people how to do things, tell them what to do and let them surprise you with their results.”  Executing a plan quickly and efficiently is far more important than waiting for the perfect plan and being slowed down by analysis paralysis. The speed and optimization attributes of VeloceRF allow plans to be executed as fast as possible while ensuring the user is in the correct design space. Critical time and engineering resources are not wasted upfront due to a designer dwelling on how to micromanage the tool and synthesis settings and how the devices will be modeled later on. One thing is guaranteed: VeloceRF will always impress with its results.

Also Read:

Simulation World 2024 Virtual Event Simulation

2024 Outlook with John Lee, VP and GM Electronics, Semiconductor and Optics Business Unit at Ansys

Unleash the Power: NVIDIA GPUs, Ansys


Synopsys is Paving the Way for Success with 112G SerDes and Beyond

by Mike Gianfagna on 05-08-2024 at 10:00 am

Data communication speeds continue to grow. New encoding schemes, such as PAM-4, are helping achieve faster throughput. Compared to the traditional NRZ scheme, PAM-4 can send twice the data by using four signal levels vs. the two used in NRZ. The diagram at the top of this post shows how data density is increased. With progress comes challenges. PAM-4 has a worse signal-to-noise ratio, and reflections are also much worse. More expensive equipment is required, and even then, there are challenges in establishing a link. There were a couple of high-profile events recently that showcase what Synopsys is doing to address these challenges. Capabilities of its IP were demonstrated, as well as how a reference design from Synopsys is helping with interoperability across the ecosystem. Let’s take a closer look to see how Synopsys is paving the way for success with 112G SerDes and beyond.
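
As a simple illustration of the PAM-4 trade-off, the sketch below (an illustrative model, not Synopsys IP behavior) maps bit pairs onto four amplitude levels. With the same peak swing, adjacent PAM-4 levels are one third as far apart as the two NRZ levels, which is where the signal-to-noise penalty comes from.

```python
# Illustrative PAM-4 mapping: 2 bits per symbol vs 1 bit per symbol for NRZ.
# Gray coding keeps adjacent levels one bit apart to limit error propagation.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def nrz_encode(bits):
    return [+1 if b else -1 for b in bits]

bits = [1, 0, 1, 1, 0, 0, 0, 1]
print("NRZ symbols:  ", nrz_encode(bits))    # 8 symbols for 8 bits
print("PAM-4 symbols:", pam4_encode(bits))   # 4 symbols for the same 8 bits

# With the same peak amplitude, the PAM-4 eye opening is 1/3 of the NRZ eye,
# roughly a 9.5 dB SNR penalty before equalization.
```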

Webinar Presentation

On February 20, a webinar was held to explore how to get the best performance out of a 112G SerDes solution. The challenges of PAM-4 were discussed, along with the importance of auto-negotiation (AN) and link training (LT) in addressing those challenges. Three knowledgeable people participated in this webinar, as shown below.

Madhumita Sanyal from Synopsys began the webinar with a presentation. She discussed the growing use of high-speed ethernet in many applications and how 112G ethernet is enabling 400G and 800G. On this topic, she discussed the role of PAM-4 modulation as an enabler and some of the design challenges PAM-4 presents.

Madhumita went on to discuss the importance of auto-negotiation and link training in addressing the challenges presented by PAM-4. She cited several design examples. The approach is applicable to copper cables and backplanes. She explained that PAM-4 impairments are partly compensated through Tx equalization after auto-negotiation is completed. She also pointed out these techniques are applicable to all rates defined in IEEE 802.3 Clause 73 (ranging from 1G to 200G) and the Ethernet Consortium 400GBASE-CR8/KR8 specification. The figure below summarizes the discussion.

Auto Negotiation & Link Training are Essential
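
To give a feel for what link training does, here is a deliberately simplified sketch of the idea: the receiver measures link quality and asks the partner’s transmitter to nudge its equalizer taps until a target BER is met. The toy channel model and tap names are invented for illustration; the real IEEE 802.3 coefficient-exchange protocol is far richer.

```python
# Deliberately simplified link-training loop: the receiver measures link quality
# and requests transmitter tap adjustments until a target BER is reached.

def measured_ber(tx_taps):
    # Toy channel: quality improves as the taps approach a "good" setting.
    ideal = {"pre": -2, "main": 20, "post": -4}
    error = sum(abs(tx_taps[k] - v) for k, v in ideal.items())
    return 1e-15 * (10 ** error)  # each step away costs an order of magnitude

def link_training(target_ber=1e-9, max_steps=50):
    taps = {"pre": 0, "main": 15, "post": 0}   # transmitter FIR taps
    for step in range(max_steps):
        if measured_ber(taps) <= target_ber:
            return step, taps                  # link is good: training done
        # Request the single increment/decrement that helps the most.
        candidates = [(k, d) for k in taps for d in (+1, -1)]
        k, d = min(candidates,
                   key=lambda kd: measured_ber({**taps, kd[0]: taps[kd[0]] + kd[1]}))
        taps[k] += d
    raise RuntimeError("link training did not converge")

steps, taps = link_training()
print(f"converged in {steps} adjustments with taps {taps}")
```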

The Webinar also presented a very useful interoperability demo. I’ll get to that in a moment, but first some other news from the trade show floor is quite relevant.

News From the Trade Show Floor

 At the 49th European Conference on Optical Communications (ECOC) last October, Madhumita Sanyal presented impressive details of the impact the Synopsys 800G ethernet subsystem link-level interoperability was having across the ecosystem.

The 800G demos at ECOC ’23 were held in the Ethernet Alliance booth. The demos used 8 lanes of Synopsys LR 112G ethernet PHY IP and 800G MAC/PCS interoperating with exercisers, analyzers, and third-party 800G EVBs over DAC channels, showing linkup, packet receive/transfer, FEC histograms, and other performance metrics.

800G ethernet throughput was shown with zero errors across several demonstrations integrating Synopsys products with ecosystem partners. This interoperability success highlights the possibilities for future design collaboration across the ecosystem. The figure below illustrates one HPC data center rack-like demo configuration.

HPC Data Center Rack Like Demo

You can view a complete summary of this work from the show floor at ECOC with this short video.

The Webinar Demo

A detailed demonstration of interoperability based on the Synopsys 800G SS evaluation board was featured during the recent webinar as well.

Martin Qvist Olsen of Teledyne LeCroy set up the first demonstration. The demo configuration is summarized below: a Teledyne LeCroy Xena Z800 Freya tester connected to a Synopsys 800G SS evaluation board, with a Teledyne LeCroy SierraNet M1288 in between as a probe.

The first demo showed how the Xena and Synopsys devices performed auto-negotiation. Details of the operation of each device were shown by examining UI outputs to explain how the results were achieved. The protocols and standards used were also discussed, along with the details of tuning and the associated challenges.

Martin then moved to the next demonstration, which focused on how the Xena and Synopsys devices initiate automatic link training and how the devices perform the link training. The ways the performance of the link is improved were also covered. Details of FEC and BER statistics were shown. The impact of standards and what is covered by those standards were also discussed.

The third demonstration was also presented by Martin. Here, the detailed steps of how link training is achieved were examined. The steps involved and the associated presets were shown. A great amount of detail about how the process works and how optimal results can be achieved was reviewed, with many examples.

The next demonstration was presented by Craig Foster of Teledyne LeCroy. Craig focused on the various ways to implement link training. How the Xena device performs link training on its own was examined. This was followed by how the Synopsys device implements link training on its own. And finally, how the two devices work together to implement link training was reviewed.

Craig covered a lot of detail regarding how each device implements link training and then how they work together. The attendees were able to view, step by step, how each device works.

Martin presented the final demonstration, which showed the Xena and Synopsys devices running a 1x800G channel. The parameters used to implement the link were shown in detail, along with performance statistics.

To Learn More

This masterclass webinar is rich in technical information, backed up with practical demonstrations to show the details. If high-speed communications is important to you, I highly recommend you take a look. You can access the webinar replay here.

Also, as mentioned, you can view a summary of the Synopsys interoperability work from the show floor at ECOC here. Synopsys provides a complete Ethernet IP solution for up to 1.6Tbps, including MAC, PCS, PHY, VIP and security.

And that’s how Synopsys is paving the way for success with 112G SerDes and beyond.

 


Oops, we did it again! Memory Companies Investment Strategy

by Claus Aasholm on 05-08-2024 at 8:00 am

We are in the semiconductor market phase where everybody disagrees on what is going on. The market is up; the market is down. Mobile phones are up… oh no, now they are down. The PC market is up—oh no, we need to wait until we can get an AI PC. Inflation is high—the consumer is not buying.

For us in the industry, the 13-week financial analyst cycle is the entire universe – time did not exist before this quarter, and it will cease to exist after it.

Sell, sell, sell, pull, push, cheat, steal, fake, blame! Anything to make the guidance number. If you are in the hamster wheel, there is no oxygen, and you lose the overview.

The bottom line is that it does not matter, and the quarterly cycles are a (sometimes expensive) distraction from achieving long-term business success. The semiconductor business does not rotate around a quarter or a financial year. It is orientated around a four-year cycle, as can be seen below.

The growth rates are high or negative – rarely moderate. The semiconductor industry is briefly in supply/demand alignment every second year. It is about as briefly aligned as two high-speed trains passing each other.

One of the primary reasons for this cyclical behaviour is the long capital investment cycle for semiconductor manufacturing. A capacity expansion might take 2-3 quarters, while a new fab takes 2-3 years to construct and fill. This leads to the first law regarding semiconductor manufacturing investments:

The first law of Semiconductor Manufacturing Investments:

“You need to invest the most when you have the least.”

The quarterly hamster wheel and the pressure to deliver to analysts and share squatters (there are few owners) make this law incredibly hard to follow. Failing to do so leads to the second law of semiconductor manufacturing investments:

The second law of Semiconductor manufacturing investments:

If you fail to abide by the first law of semiconductor manufacturing investments,
new capacity will come online when you need it the least.

So, a high-level and long-term strategy to create sustainable growth and profitability should be possible. That is, until you learn about the third law of semiconductor manufacturing investments:

The third law of Semiconductor Manufacturing Investments:

All semiconductor cycles are different.
All semiconductor cycles are the same.
Only quantum engineers understand the third law of semiconductor manufacturing investments. I will try to explain it anyway.

The upcycle is easy to explain. Everybody repeat after me: We are doing a great job and taking market share. Everybody is taking market share.

The down cycle is more complex. Every time I have faced a downcycle (I don’t want to reveal my age here, but it is more than a couple), I have heard new arguments about why this cycle is different: the dot-com crash, the financial crisis, the Asia crisis, Covid, and so forth. This makes companies treat this well-established cycle as something new that we might never recover from – so we have to be careful.

Once the cycle is behind us, we see it was the same as the others, plus or minus a brick.

Collectively, we never learn.

But THIS cycle is different!

The memory markets and the three leading companies, Samsung, SK Hynix, and Micron, make the semiconductor industry even more cyclical. The market is more commodity-orientated than the rest of the industry; prices vary greatly depending on the cycle’s timing.

When the market is near a peak, memory prices are so elevated that smartphone and PC prices become prohibitive, and consumers stop buying. This propels the industry into a down-cycle.

At the bottom, where memory is sold at a loss, PCs and smartphones are so cheap that a replacement cycle initiates the next semiconductor upcycle.

Other factors impact the Semiconductor market over time (AI and GPUs are currently pushing the envelope), but the memory cycle has the most potent effect.
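
A toy supply-and-demand model (purely illustrative, with made-up numbers rather than the author’s data) shows how a multi-quarter lag between an investment decision and new capacity coming online is enough, on its own, to reproduce this boom-bust pattern:

```python
# Toy cobweb model: capacity ordered today arrives 10 quarters later, so supply
# chronically overshoots and undershoots demand. Numbers are illustrative only.
LAG_QUARTERS = 10
demand_growth = 0.02                      # steady 2% demand growth per quarter

demand, capacity = 100.0, 100.0
orders = [0.0] * LAG_QUARTERS             # capacity already under construction

for quarter in range(1, 41):              # simulate ten years
    demand *= 1 + demand_growth
    capacity += orders.pop(0)             # fab ordered LAG quarters ago comes online
    shortage = demand - capacity
    # Firms order new capacity in proportion to today's shortage (none in a glut).
    orders.append(max(0.0, 0.6 * shortage))
    if quarter % 4 == 0:
        util = demand / capacity
        print(f"year {quarter // 4}: utilization {util:5.1%}"
              f" ({'shortage' if util > 1 else 'glut'})")
```

With a ten-quarter lag and orders driven by today’s shortage, the run swings from shortage into glut: the capacity ordered during the squeeze arrives after the demand growth has already been served, which is the second law in miniature.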

The combined memory revenue of the three leading companies can be seen below.

This shows the massive difference in memory companies’ profitability depending on the timing of the semiconductor cycle. In particular, the last downcycle was nasty.

This significantly impacted the memory giants’ combined capital expenditure, as seen below.

During the last down cycle, CapEx fell below the level needed to service and sustain the existing capacity (maintenance CapEx), and there have been indications that production capacity has declined since Q3-22.

“The best time to invest is when you have the least money.” The memory companies have failed to comply with the first law of Semiconductor Manufacturing Investment.

The lack of investment also comes at a bad time when the Memory supply chain is changing.

Large data centre processing companies now need High Bandwidth Memory (HBM) for their AI systems. HBM needs 2x the capacity per bit of regular DRAM. As HBM is expected to be 10% of the total DRAM market in 2025, capacity needs to increase by that amount before there is room for the average upcycle growth.
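
A quick sanity check on that claim, using only the 2x-per-bit and 10%-of-bits figures from the text (the normalization is mine):

```python
# If HBM takes 2x the capacity per bit and grows to 10% of DRAM bits, how much
# extra capacity is needed just to keep total bit output flat? (Normalized.)
hbm_share_of_bits = 0.10        # HBM expected at ~10% of DRAM bits in 2025
hbm_capacity_multiplier = 2.0   # HBM needs ~2x the capacity per bit vs standard DRAM

capacity_needed = (1 - hbm_share_of_bits) + hbm_share_of_bits * hbm_capacity_multiplier
print(f"capacity needed vs an all-standard-DRAM mix: {capacity_needed:.2f}x "
      f"(+{capacity_needed - 1:.0%}) before any upcycle bit growth")
```

The result is roughly 10% more capacity needed just to hold bit output flat, before any normal upcycle demand growth.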

As the processing companies negotiate long-term contracts directly with the memory companies, they will get the capacity first. This can already be seen in general DRAM pricing, which is rising.

Two potential AI upgrade cycles for Smartphones and PCs might need extra capacity.

The Memory Capacity Outlook

As there are concerns about the large memory companies that have underinvested, it is worth taking a deeper dive into the capacity situation.

When analysing memory capacity, we investigate the historic Expansion CapEx (investment beyond maintenance) and add our projection of future expansion capex, as seen below.

Adding the semiconductor cycles as the backdrop reveals an investment void in the ’22 upcycle compared to the ’18 upcycle. Companies were likely holding back capital investments so they could take advantage of the Chips Act, signed in August 2022. Then all the memory companies went deep into the red, and only Samsung kept investing above the maintenance level to support its Taylor fab expansion.

The memory market has a lot of inventory, but it looks like it is drying up, which will reveal the severity of the capacity shortage. We will know very soon.

Samsung was the only company investing above the maintenance level after the Chips Act. This supported its Taylor expansion, expected to come online in late 2024. Kudos to Samsung for timing Taylor ideally, opening in the middle of the upcycle. This is the only significant capacity increase the memory market will benefit from during this upcycle.

SK Hynix’s Cheongju expansion will come online precisely at the projected ’26 peak, creating the next downcycle if it is not already underway.

Micron’s Boise expansion will likely go online when the market is deep in the slide, making it difficult to make it profitable.

The semiconductor tool sales confirm that there has been no immediate response to this potential capacity shortage. An uptick in Q4-23 was not followed up in Q1-24, and the market level is generally low.

I am certainly not here to criticise the leaders of Memory companies. It is a crazy business, not for the faint of heart. When Elon Musk talked about eating glass and staring into the abyss, he was indeed speaking about the downcycle in memories.

However, if the logic in this post were applied, there might be less panic.

We have presented the facts and analysis and expect a challenging period for the memory market. However, while we trust our facts, our analysis might be misguided, and other experts can add colour to the discussion.

The Semiconductor industry is too large and complex for anybody to know it all.

Also Read:

Nvidia Sells while Intel Tells

Real men have fabs!


An Enduring Growth Challenge for Formal Verification

by Bernard Murphy on 05-08-2024 at 6:00 am

A high-quality verification campaign including methods able to absolutely prove the correctness of critical design behaviors as a complement to mainstream dynamic verification? At first glance this should be a no-brainer. Formal verification offers that option, and formal adoption has been growing steadily, now used in around 30-35% of designs per the Siemens/Wilson Research Group survey. However, anecdotal evidence from formal verification service companies such as Axiomise suggests that the real benefit extracted from formal methods still falls significantly short of the potential these methods can offer. A discussion with Dr. Ashish Darbari, CEO of Axiomise, prompted this analysis of that gap and how it can be closed.

What’s going on?

About half of reported usage is attributable to support apps which make canned checks more accessible to non-expert users. The balance comes in straight property checking, a direct application of formal engines for which you must define all required assertions, constraints, and other factors in constructing proofs. Apps offer a bounded set of checks; property checking offers unbounded options to test whatever behavior you want to prove.

The complexity of formulating an efficient property check is another matter. Like any other problem in CS, problem complexity can span from relatively simple to difficult to practically insoluble. By way of example, consider a check for deadlocks. In a single finite state machine (FSM), such a check is sufficiently easy to characterize that it is included in standard apps. Checking for possible deadlocks in multiple interacting FSMs is more challenging to package because problem characterization is more complex and domain specific. Checking for deadlocks in a network on chip (NoC) is more challenging still given the span, topology, and size of a typical NoC. Cross-sub-system proofs, or proofs of behavior under software constraints, are, I suspect, beyond the bounds of methods known today (without massive manual abstraction – I’d be happy to hear I’m wrong).
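
To make the deadlock example concrete, here is a minimal sketch that flags reachable states with no outgoing transition in a single, explicitly listed FSM. It is illustrative only: formal tools derive the state graph from RTL and prove the property exhaustively, and the hard part is exactly what this toy ignores, the product state space of many interacting FSMs.

```python
# Minimal reachable-deadlock check on one explicitly listed FSM (illustrative
# only; formal tools work on RTL and prove this exhaustively by model checking).
from collections import deque

def find_deadlocks(transitions, start):
    """transitions maps each state to the set of states reachable in one step."""
    seen, frontier, deadlocks = {start}, deque([start]), []
    while frontier:
        s = frontier.popleft()
        successors = transitions.get(s, set())
        if not successors:                 # reachable state with no way out
            deadlocks.append(s)
        for t in successors - seen:
            seen.add(t)
            frontier.append(t)
    return deadlocks

fsm = {
    "IDLE":  {"REQ"},
    "REQ":   {"GRANT", "RETRY"},
    "RETRY": {"REQ"},
    "GRANT": {"DONE"},
    "DONE":  set(),    # stuck state; a real check would whitelist intended end states
}
print("deadlocked states:", find_deadlocks(fsm, "IDLE"))
```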

Another complication is that while many argue you don’t need to be a mathematician to use these methods, effective formal attacks on complex problems still very much depend on finesse rather than brute force. You may not need a math degree, but you do need something of a mathematical, or at least a puzzle, mindset, constantly reinforced. I think this is why successful formal verification deployments run as separate teams. Dynamic verification teams also face difficult challenges but of a different kind. It is difficult to see how one person could routinely switch between both domains and still excel in each.

In this light, outsourcing for complex formal verification objectives becomes inevitable, to specialists with concentrated and growing experience in that domain. Axiomise certainly seems to be benefiting from that demand, not just from small ventures but from major names among semiconductor and systems enterprises.

Why Axiomise?

Axiomise provide consulting and services, training, and an app they call formalISA for RISC-V formal verification. Apps of this type may ultimately add a high margin revenue component to growth though it appears today that clients prefer a full turnkey solution, for which formalISA underlies a value-added service.

Ashish and his CTO Neil Dunlop have extensive experience in formal methods. The rest of the technical team are mathematicians, physicists, and VLSI experts trained from scratch in Ashish’s approach to formal problem solving. This they have applied across a wide variety of subsystems and test objectives. One very topical application is for RISC-V cores.

Extensibility and multiple sources for cores are key strengths for RISC-V but also come with a weakness I have mentioned before. Arm spends between $100M and $150M per year in verification; Intel and AMD probably spend much more. They have all built up decades of legacy verification assets, spanning many possible CPU variants and optimizations. To rise to comparable verification quality on an unmodified RISC-V core is a major task, given a staggering range of test scenarios which must be covered against a wide range of possible architecture optimizations. Add in a custom instruction or two and the verification task is amplified even further.

Formal methods are the ideal way to prove correctness in such cases, assuming an appropriate level of finesse in proofs. Axiomise use their formalISA app to run push-button proofs on correctness on 32-bit and 64-bit implementations, and they have production-ready implementations for RV32IMC and RV64IMC instruction sets. Examples of problems found include a bug in RISC-V specification v 2.2 and over 70 deadlocks found in the previously verified zeroriscy. The app found new bugs in the ibex core and architectural issues with LOAD instruction in zeroriscy. It found 30 bugs in WARP-V (2-stage, 4-stage, and 6-stage in-order cores) and cv32e40p core from OpenHW. Axiomise has also verified the out-of-order execution CVA6 core using formalISA. Details of these bugs are available on GitHub.

As usual, the development involved in these tests is a one-time effort. Once in place, regressions can be run hands-free. Ashish tells me that with formalISA, diagnosis of any detected problem is also simplified.

Takeaway

I’d like to believe that in time, more of these kinds of tests can be “app-ified”, extending the range of testing that can be performed in-house without service support. Today, building such tests requires a special level of formal expertise often only available in long-established centers of excellence and in organizations such as Axiomise. Since other big semiconductor and systems companies are happy to let Axiomise consult and train their teams to better corral these complex problems, you might want to check them out when you face hard proof problems.

You can learn more about Axiomise HERE, the formalISA studio HERE and the RISC-V studio HERE.

Also Read:

2024 Outlook with Laura Long of Axiomise

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®

WEBINAR: The Power of Formal Verification: From flops to billion-gate designs


Rigid-flex PCB Design Challenges

by Daniel Payne on 05-07-2024 at 10:00 am

From Zion Research I learned that the flexible electronics market was about $13.2B in 2021 and growing at a CAGR of 21%, which is impressive. There are several factors that make rigid-flex circuits so attractive: space efficiency, reduced weight, enhanced reliability, improved signal integrity, streamlined assembly, design flexibility, cost savings, miniaturization, and better durability. I learned more by reading a new eBook online.

Wearable products like fitness trackers, smart watches and AR glasses have very limited space, so they benefit from space efficiency. Traditional PCBs use connectors and cables that add to product weight, so rigid-flex provides weight savings in markets like automotive and aerospace. Connectors and cables contribute to reliability issues, while rigid-flex designs are engineered to withstand bending and movement. With fewer electrical discontinuities, rigid-flex circuits exhibit improved signal integrity and impedance control, a benefit for high-speed and high-frequency products. Using fewer components to make connections gives rigid-flex a simpler assembly process with lower labor and material costs. Engineers can design products with new shapes and configurations using rigid-flex that are not possible with rigid PCBs, enabling new categories of products.

PCB design tools that support rigid-flex need to manage layer stackups in both the rigid and flexible regions. Bend areas should be easily defined and visualized, and the tool must support the bend radius and fold lines. Components ought to be placed quickly in both the rigid and flexible areas, along with 3D visualization. Routing tools are required to support trace routing along the flex areas while maintaining signal integrity and allowing sketch or manual routing. Issues like excessive bending or trace spacing violations need to be found during Design Rule Checking (DRC). Both thermal and signal integrity simulations must be performed to ensure reliable operation. An accurate materials library should include flexible materials and support both rigid and flexible substrates.
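
As a simple illustration of one such rigid-flex rule, a bend-area check might compare each flex region’s bend radius against a minimum derived from the flex stack thickness. The factors below are hypothetical placeholders, not PADS Professional’s actual DRC rules or IPC limits.

```python
# Illustrative rigid-flex bend-radius check (hypothetical factors, not a real
# rule deck; actual minimums come from the fabricator's rules and IPC-2223).
MIN_RADIUS_FACTOR = {"static": 6, "dynamic": 10}   # x flex stack thickness (assumed)

def check_bend_areas(bend_areas):
    violations = []
    for b in bend_areas:
        min_radius = MIN_RADIUS_FACTOR[b["use"]] * b["stack_thickness_mm"]
        if b["bend_radius_mm"] < min_radius:
            violations.append(f"{b['name']}: radius {b['bend_radius_mm']} mm < "
                              f"required {min_radius:.2f} mm for {b['use']} flex")
    return violations

bends = [
    {"name": "hinge1", "use": "dynamic", "stack_thickness_mm": 0.20, "bend_radius_mm": 1.5},
    {"name": "fold2",  "use": "static",  "stack_thickness_mm": 0.20, "bend_radius_mm": 2.0},
]
for message in check_bend_areas(bends) or ["no bend-radius violations"]:
    print(message)
```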

Rigid-flex stackups

PCB designers use 3D visualization, along with bending simulations, to best understand the entire rigid-flex PCB. An ideal EDA tool automatically generates fabrication and assembly drawings, detailing the flexible regions. Mechanical and electrical designers collaborate through import and export features. The best manufacturing yield is ensured when Design For Manufacturing (DFM) checks specific to rigid-flex are run. Using library components designed for flex circuits makes for quicker work.

Designs require trace, plane, cover layers, bend area, bend radius, vias and stiffener support

Rigid-flex technology is used in a wide range of markets as demands are growing for miniaturization, like for wearable smart products, automotive electronics, aerospace and defense, medical devices, consumer electronics, and IoT.

Markets for flexible electronics

PADS Professional

Siemens offers its PADS Professional software as a solution to the rigid-flex PCB design challenges discussed so far. The approach with PADS Professional is to use correct-by-construction technology, which enables your design team to create an optimal form factor with high quality, in the shortest time frame.


With this EDA tool, PCB designers can define unique stack-up types, specify bend areas, and use flex-aware placement and routing to get high-quality results. Both 3D bending and 3D DRC are supported, eliminating surprises in fabrication. Signal integrity and power integrity are validated quickly through simulations. DFM validation understands rigid-flex, and your designs are readied for NPI hand-off.

Summary

So much of our consumer electronics, automobiles and aircraft are already using rigid-flex technology and the market projections show a healthy growth for years to come. There are many challenges to adopting a rigid-flex PCB design flow, so you really want to adopt technology that is well proven over many years, designs and industries. With PADS Professional there is solid technology to address each of the challenges of rigid-flex PCB design.

Read the complete 7 page ebook online at Siemens.

Related Blogs