
Siemens EDA will be returning to DAC this year as a Platinum Sponsor.

by Daniel Nenni on 11-29-2021 at 10:00 am


The 58th Design Automation Conference is next week, and this one is for the record books. After two years of virtual events, next week we will meet live once again. We may have taken live events for granted, but now we know how important they are on both a professional and a human level.

“The Design Automation Conference (DAC) is recognized as the premier conference for design and automation of electronic systems.  DAC offers outstanding training, education, exhibits and superb networking opportunities for designers, researchers, tool developers and vendors.”

“We would like to extend a big thank you to the DAC organizers under the leadership of Siemens EDA’s own Harry Foster, the General Chair of this year’s Design Automation Conference, for organizing a wonderful conference program under challenging circumstances. Kudos, Harry and the DAC program team!” – Siemens EDA Management

Siemens EDA in the Conference Program

You’ll find Siemens EDA experts featured throughout the conference program – delivering five conference papers and four DAC Pavilion presentations, hosting a tutorial and designer track panel, and presenting 11 posters during the Poster Networking Reception. We’ve highlighted some must-see events below, but you can view their full list of conference activities here.

DAC Pavilion Session: Digitalization—the return to outsize growth for the semiconductor industry

10:15am – 11:15am PST | Monday, Dec. 6th

Joe Sawicki, Executive VP of Siemens EDA

In just one short year, a decade of digitalization occurred across all industries fueled by innovation in the semiconductor industry. Dramatic growth occurred in use of the cloud, work from anywhere and telemedicine, while online collaborative tool usage increased a staggering 4000%.

Impressive as this all is, it is just the beginning of a massive reinvigoration of the semiconductor industry. Emerging new compute and telecom infrastructures, with IoT starting to deliver its long-promised value, coupled with new technologies such as artificial intelligence are reshaping the competitive landscape at break-neck speed. Despite valid concerns over trade wars, there is no doubt the semiconductor market is once again on a dramatic growth trajectory.

Designer Track Panel: UVM: Where the Wild Things Are

10:30am – 12:00pm PST | Wednesday, Dec. 8th

Moderator: Dennis Brophy, Siemens EDA

Experts from Cerebras, Marvell Semiconductor, NVIDIA, Paradigm Works, and Synopsys will focus on specific enhancements being planned or considered to be added for the next revision of UVM IEEE 1800.2. Panelists have strong backgrounds in UVM development as current or past members of the UVM-WG in Accellera and/or IEEE and equally strong opinions on what is needed to keep UVM growing and relevant for functional verification.

Tutorial: Design and Consumption of IPs for Fail-Safe Automotive ICs

1:30pm – 5:00pm PST | Monday, Dec. 6th

This tutorial featuring experts from Siemens EDA, NXP Semiconductors, and Arm Ltd. will focus on both the creation and consumption of automotive IP, looking at the various technologies and methodologies that can be used to standardize and automate this process.

Must-See Conference Paper Presentations:

Input Qualification Methodology Helps Achieve System Level Power Numbers 8x Faster

An automated input qualification methodology is proposed that performs various data integrity checks at the design build and prototype stages, ensuring in quick iterations that the input data is high fidelity, leading to well-correlated power numbers. If multiple retries are needed, a checkpoint database method is implemented to bypass the already-clean stages of the tool run.

Various checks pertaining to activity annotation (FSDB/SAIF/STW/QWAVE), technology libraries (.lib) and parasitic (SPEF) mapping are already part of the tool. Defining an input qualification methodology around these checks can save up to 88% of project time in achieving reliable power numbers.
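As an illustration of the kind of gate such a methodology applies, a minimal activity-coverage check might compare the nets annotated by an FSDB/SAIF file against the design's nets before committing to a full power run. This is a hypothetical sketch; the tool's actual checks are far more extensive:

```python
def qualify_activity(design_nets, annotated_nets, threshold=0.98):
    """Fail fast before a long power run if activity annotation coverage
    is too low.  design_nets / annotated_nets are sets of net names;
    returns (coverage, ok) so a failing run can be retried quickly.
    Hypothetical helper, not part of any specific tool."""
    covered = design_nets & annotated_nets
    coverage = len(covered) / len(design_nets)
    return coverage, coverage >= threshold
```

A run that fails this gate can be re-annotated and re-checked in minutes, instead of being discovered hours into power analysis.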

What can chip design learn from the software world?

Many industries have undergone major transformations in recent years, but chip design practice is still largely where it was decades ago, with relatively minor improvements since. Meanwhile, new, bigger projects enabled by the ongoing Moore’s law race pose increasingly hard design and verification challenges that our industry is struggling to keep up with.

Our friends in the software industry also face big challenges, but they have introduced many different approaches, methodologies, and technologies to do things differently, and better. We will discuss the need for, and possibility of, doing things differently, looking at concepts from the software development world that could be adopted more broadly, such as open source, Agile methodology, and leveraging data and machine learning.

Siemens EDA customers will be delivering presentations and posters at DAC on their use of Siemens EDA technologies.

Customer presentations include:

Customer poster sessions include:

DAC Design Infrastructure Alley Presentation: Siemens EDA Cloud Offerings

3:30pm – 4:15pm PST | Monday, December 6th

Watch Craig Johnson’s presentation at the Design on Cloud Theater to learn how Siemens EDA is leading the way in cloud-based EDA.

Siemens EDA on the Exhibit Floor

You can visit Siemens EDA experts on both exhibit floors of Moscone West. The main Siemens EDA booth (#2521) is on the second floor – stop by to grab a free espresso drink and tune in to our informative booth presentations. You can also find them in the booths for OneSpin, A Siemens Business (#1539), Siemens Cloud (#1246), and Siemens at RISC-V Pavilion, booth B7.

Also Read:

Machine Learning Applied to IP Validation, Running on AWS Graviton2

Siemens EDA Automotive Insights, for Analysts

Tessent Streaming Scan Network Brings Hierarchical Scan Test into the Modern Age


Silicon Catalyst Hosts an All-Star Panel December 8th to Discuss What Happens Next?

by Mike Gianfagna on 11-29-2021 at 6:00 am


Each year, Silicon Catalyst assembles a panel of industry luminaries to discuss important questions about the future. The charter of the Silicon Catalyst Industry Forum is to “create a platform for broad-topic dialog among all stakeholders involved in the semiconductor industry value chain. The Forum topics focus on technical and financial aspects of the industry, but more importantly the industry’s societal, geo-political and ecological impact on the world.”

Last year, for Forum 3.0, “A View to the Future” was pondered. You can view coverage of that event here and replay the complete 2020 Forum here.  The fourth annual version of this event is happening on December 8.

Once again for 2021, it’s a seasoned and high-profile cast who will participate. The event promises to be both entertaining and thought-provoking. After the year we’ve all just experienced, the topic seems particularly on-point:

Semi Industry Forum 4.0: What happens next?

 

The panel will be moderated by Don Clark, Contributing Journalist, New York Times. Panel members include Mark Edelstone, Chairman of Global Semiconductor Investment Banking at Morgan Stanley; Janet Collyer, Independent Non-Executive Director, UK Aerospace Technology Institute; John Neuffer, President & CEO, Semiconductor Industry Association; and Dr. Wally Rhines, President & CEO of Cornami and GSA 2021 Morris Chang Exemplary Leadership award recipient. Quite a group of industry luminaries.

Panelists

The event will begin with a Forum 4.0 overview from Richard Curtin, Managing Partner at Silicon Catalyst. Pete Rodriguez, CEO of Silicon Catalyst, will then introduce the panel, and Mark Edelstone will kick things off with a presentation on the ongoing semiconductor industry consolidation. A panel discussion moderated by Don Clark will then follow.

As a backdrop for the panel discussion, the semiconductor industry, and society in general, is now at a major inflection point. The globalization of the supply chain, combined with the on-going geo-political turmoil, layered on top of the pandemic, has created a unique set of challenges for our industry, and most importantly, the world at large.

Topics to be discussed during the panel include:

  • Semiconductor Supply Chain Challenges: The current limited supply situation has impacted all aspects of our lives, industries, and global economies. Is there an end in sight? What are the key lessons learned? What should be done to ensure that the current chip shortage and other supply chain challenges are not repeated in the future?
  • US-China Relationship: The recent trend of punch / counterpunch does not seem to have an end in sight. As viewed by both countries, our industry is “too big to fail”. We’re now well beyond risk-mitigation and squarely in crisis-mode. Can we ever put the “genie back in the bottle?”
  • Public-Private Partnerships vs Free-Market Forces: The response to the pandemic has clearly shown that industry and government can collaborate for the good of society. Can the same be said for the initiatives by the major industrialized nations to establish domestic sources for the vital electronics demanded by their industries and societies? Is it too little, too late? And will the continuing consolidation of semiconductor vendors, combined with local government investments, drive a new type of “territorial bottom line”?
  • Startups: The landscape for startups has changed substantially over the past decade. What are the new challenges chip startups face? What barriers must be overcome? What target markets and applications are most promising?
  • Work-From-Home: The Good, the Bad and the Ugly: WFH / hybrid work environments are here to stay, especially for those in the “knowledge worker” demographic. If you’re an electronic systems vendor, you’re seeing record-setting business results (if you can get the chips…). But isn’t the history of the semiconductor industry’s innovation significantly based on the “randomness” of chance encounters with colleagues in the office? Can we truly be as creative and innovative working individually, dispersed and remote?

Silicon Catalyst’s Semiconductor Industry Forum 4.0 will take place on December 8, 2021, at 9:00 AM Pacific time. You can register for the event here.  You’ll want to attend this event to better understand what happens next.

Also Read:

Silicon Startups, Arm Yourself and Catalyze Your Success…. Spotlight: Semiconductor Conferences

WEBINAR: Maximizing Exit Valuations for Technology Companies

Silicon Catalyst and Cornell University Are Expanding Opportunities for Startups Like Geegah


Big Data Helps Boost PDN Sign Off Coverage

by Tom Simon on 11-28-2021 at 8:00 am


The nearly unavoidable truth about dynamic voltage drop (DVD) signoff for power distribution networks (PDN) is that the quality of results depends on the quality and quantity of the vectors used to activate the circuit switching. As SOCs grow larger and are implemented on smaller nodes, the challenges of achieving sufficient coverage and the increased sensitivity of chips to PDN issues make PDN sign-off increasingly difficult. Often designers are limited to running only a few nanoseconds of vectors due to runtime and capacity issues. Western Digital recently gave a presentation at the Ansys IDEAS Digital Forum on how they used the capabilities of Ansys® RedHawk-SC™ with the SeaScape analysis platform to achieve big improvements in PDN sign-off coverage. The presentation, given by Kushang Bajani, principal engineer at Western Digital, is titled “A Methodology for Steep Increase in PDN Sign-off Coverage Using Big-Data Platform”.

Western Digital switched to RedHawk-SC from Redhawk to take advantage of the native cloud support and big-data techniques it offers. SeaScape allows RedHawk-SC to utilize scalable parallel processors and distributed local memory for running extremely large jobs. Previously, for each vector set and mode, the user needed to create and maintain a separate setup. Thanks to the massive parallelization offered by SeaScape, many vector sets can be run in parallel to find the most comprehensive worst-case switching for power. RedHawk-SC can consolidate the worst power windows from multiple vector sets to provide a realistic worst case for sign-off.


In his experience, Kushang reports runtime going from 60 hours to just 12 hours in a multi-VCD flow. This also allowed increased coverage: in one test case, RedHawk-SC uncovered a better power window with double the power of the one found using Redhawk alone.

To ensure that they felt comfortable moving to Redhawk-SC, Western Digital ran an exhaustive correlation exercise to verify QoR. Kushang shares one example where they started with a 4.034 microsecond VCD. Both tools identified the same 10 ns power window. When they ran each tool to get power figures they matched within a fraction of a percent.
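The notion of a worst-case power window that both tools converged on can be illustrated with a simple sliding-window search over a power trace. This is a hypothetical sketch of the concept, not the RedHawk-SC algorithm:

```python
def worst_power_window(samples, sample_ps, window_ps):
    """Find the window of width window_ps with the highest average power.

    samples:   per-sample power values from a simulated trace
    sample_ps: sample interval in picoseconds
    Returns (window start time in ps, average power over that window).
    Assumes the trace is at least one window long.
    """
    n = max(1, window_ps // sample_ps)  # samples per window
    running = sum(samples[:n])          # rolling sum over the current window
    best_avg, best_start = running / n, 0
    for i in range(1, len(samples) - n + 1):
        # slide the window one sample: add the new tail, drop the old head
        running += samples[i + n - 1] - samples[i - 1]
        if running / n > best_avg:
            best_avg, best_start = running / n, i
    return best_start * sample_ps, best_avg
```

Scanning a multi-microsecond VCD-derived trace for a 10 ns peak window like this is cheap per vector set, which is why running many vector sets in parallel and keeping the overall worst window pays off.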

Kushang feels that they now have significantly improved PDN sign off coverage. This comes with improved runtimes that make it possible to screen using multiple vectors, even on large multi-million node designs. They can uncover more potential PDN weaknesses and have higher confidence when they go to tape-out.

SeaScape is a key enabling technology for RedHawk-SC, giving it the ability to run much larger design problems and explore PVT conditions, modes and vectors. The results can be combined using analytics to provide insights into the design. SeaScape scales linearly to hundreds of CPUs and can operate on-premises or in the cloud. For many companies having the ability to access massive compute resources only as needed means that high costs of ownership can be avoided.

RedHawk-SC is the first application that Ansys ported to the SeaScape platform; others, like PathFinder-SC, are available now, with more to follow. We’ve always known that EDA’s compute requirements are very large, so Ansys’ investment in giving users a pathway to efficiently, reliably, and easily access resources to improve design results is good to see. The full presentation by Western Digital is available on demand by registering at the Ansys IDEAS Digital Forum.

Also Read

Optical I/O Solutions for Next-Generation Computing Systems

Bonds, Wire-bonds: No Time to Mesh Mesh It All with Phi Plus

Neural Network Growth Requires Unprecedented Semiconductor Scaling


Empyrean Technology’s Complete Design Solution for PMIC

by Daniel Nenni on 11-28-2021 at 6:00 am


Power management integrated circuits (PMICs) regulate and distribute power within electronic systems. Driven by strong demand in consumer electronics, IoT, and the automotive industry, PMIC design is becoming more challenging in terms of integration, reliability, and efficiency. The design methodology needs to be updated to handle complex integration within a smaller footprint at higher performance, to provide a better simulation solution for verification across multiple scenarios, and to offer a reliability verification solution that can handle high power density.

A power MOSFET with an area of several square millimeters is the core of a PMIC, and it is important for its parallel transistors to have a very low on-resistance, or Rds(on). Although PMIC design still uses mature process nodes, PMICs are becoming highly integrated with digital techniques and blocks like ADCs and timers, which makes verification and optimization more challenging and time consuming. Traditional RC extraction methods cannot satisfy power IC design requirements because power ICs often use special shapes and have large areas; sometimes a layout satisfies DRC/LVS rules but still does not function correctly. Accurate power and current simulations using traditional RC extraction methods and simulators often take a long time, leading to long analysis and debugging cycles.

Empyrean Technology provides a complete design solution for PMIC that addresses the above requirements. Empyrean’s solution has helped customers worldwide to produce billions of PMICs over 10 years. Empyrean’s solution supports major PMIC processes and has been certified by several major foundries.

  • Empyrean Aether is a design platform with schematic and layout entry. It integrates with Empyrean’s SPICE simulator, physical verification, and RC extraction tools, and supports mature processes from various foundries.
  • Empyrean ALPS is a high-performance true SPICE simulator. It supports up to the latest processes, with an optimized engine that provides better convergence on high-voltage designs. ALPS can greatly improve design verification performance on multi-corner cases and cases with long ramp-up times. Integrated with Aether, ALPS provides a GUI-based simulation environment for PVT simulation, circuit checks, and result debugging.
  • Empyrean Argus is a hierarchical parallel physical verification tool. It provides DRC/LVS/Dummy Fill and DFM. Argus supports voltage-dependent DRC. It supports dynamic checks between nets with different power supply voltages. Argus engine can also handle shapes placed in any angle without compromising accuracy.
  • Empyrean RCExplorer supports transistor-level and gate-level RC extraction. It has built-in field solver that provides high accuracy resistance and capacitance calculation.
  • Empyrean Polas provides reliability analysis such as Rds(on) calculation, EM/IR-drop analysis, power MOSFET timing analysis, and crosstalk analysis. It has a built-in field solver to handle special polygons in the layout for accurate extraction. Rds(on) and power path resistance are calculated accurately by SPICE simulation, and gate delay distribution for MOSFETs is calculated by dynamic simulation. High-performance SPICE simulation also enables efficient current density analysis for EM effects, and facilitates IR-drop analysis that takes into account contacts, vias, and metal layers. You can refer to this article to learn how MPS uses Polas for their power MOSFET devices (https://semiwiki.com/eda/empyrean/286217-automating-the-analysis-of-power-mosfet-designs/)
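The Rds(on) metric mentioned above is, at a DC operating point, simply drain-source voltage over drain current, and paralleling identical transistor fingers divides the single-device resistance. A minimal illustration with hypothetical helper names (not part of any Empyrean tool):

```python
def rdson(v_ds, i_d):
    """Effective on-resistance at a DC operating point: Rds(on) = Vds / Id."""
    return v_ds / i_d

def parallel_rdson(r_single, n_fingers):
    """Ideal Rds(on) of n identical transistor fingers in parallel."""
    return r_single / n_fingers
```

For example, 1000 parallel fingers of 1 ohm each yield an ideal 1 milliohm, which is why the power MOSFET occupies several square millimeters: very low Rds(on) is bought with area. Real extraction must also account for the metal, via, and contact resistance the ideal formula ignores.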

Empyrean Technology will exhibit at the 58th Design Automation Conference (DAC) at Moscone West in San Francisco, CA from December 5-9, 2021. Empyrean Technology kindly invites you to visit their booth 2537 if you have questions or want to learn more about their PMIC solution.

Empyrean Technology, founded in 2009, is an EDA software and services provider to the global semiconductor industry.

In the EDA domain, Empyrean Technology provides complete solutions for analog design, digital SoC design, flat panel display design, and foundry EDA, and provides EDA-related services such as foundry design enablement.

Empyrean is headquartered in Beijing, with major R&D centers in Nanjing, Chengdu, Shanghai and Shenzhen in China. http://www.empyrean-tech.com/

Also Read

High Reliability Power Management Chip Simulation and Verification for Automotive Electronics

Speed Up LEF Generation Times on Huge IC Designs

Analysis of Curvilinear FPDs


Podcast EP50: What happens next in the CPU and GPU wars?

by Daniel Nenni on 11-26-2021 at 10:00 am

Tom is the creator of the Moore’s Law Is Dead YouTube Channel and Broken Silicon podcast. He creates videos and writes articles containing in-depth commentary and analysis of what’s going on in Technology, Gaming, and Computer Hardware; and also recaps the news and interviews people working within the gaming & semiconductor industry on Broken Silicon.

YouTube Channel (https://www.youtube.com/channel/UCRPdsCVuH53rcbTcEkuY4uQ)

Podcast
(https://podcasts.apple.com/us/podcast/broken-silicon/id1467317304)

Website
(https://www.mooreslawisdead.com/).

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Pradeep Vajram of AlphaICs

by Daniel Nenni on 11-26-2021 at 6:00 am


Pradeep Vajram is a successful entrepreneur and a veteran of the semiconductor and embedded industry. He has over 25 years of experience executing, at all levels of responsibility, the design and development of ASIC products.

Pradeep has been an active investor in semiconductor and deep tech start-ups in the USA-India corridor since 2017, and has vast experience building successful businesses in Silicon Valley and India.

Currently, Pradeep is the CEO & Executive Chairman of AlphaICs Corporation. Before AlphaICs, Pradeep founded SmartPlay Technologies in 2008 – the world’s first integrated end-to-end product engineering services company. SmartPlay was acquired by Aricent in 2015.

Prior to SmartPlay, he served as the Vice President of Engineering at Qualcomm, heading the India semiconductor division in Bangalore. Under his leadership, Qualcomm Bangalore Design Center developed into a strong center of excellence and delivered multiple 3G/4G products successfully.

Prior to Qualcomm, Pradeep was the CEO & co-founder of Spike Technologies – a leading chip design services company. Spike was acquired by Qualcomm in 2004.

Pradeep has a Bachelor’s degree in Electronics Engineering from Karnataka University and a Master’s degree in Computer Engineering from Wayne State University, Detroit.

What is the backstory of AlphaICs and what does it do?

AlphaICs Corporation, a 4-year-old startup, designs and develops best-in-class AI co-processors for delivering high-performance AI computing on edge devices. With the growth in popularity of deep neural networks, there has been huge demand for running such networks in real time on edge devices. The AI hardware market is estimated to reach $67 billion by 2025. We have developed a power-efficient, high-throughput AI processor technology called the Real AI Processor (RAP™) for accelerating AI workloads. RAP™ is highly scalable and modular, enabling OEMs to choose the configuration that fits their performance and power requirements.

The RAP™ co-processor can be configured from 0.5 TOPS to 32 TOPS and can scale above 32 TOPS (64 TOPS, 128 TOPS, etc.) by using a multi-core strategy. We have developed the entire software stack for creating and deploying neural networks, developed on standard AI frameworks, on the RAP™. The software tool-chain provides an easy method to port existing neural networks onto our processors. Our software stack currently supports TensorFlow, and we plan to add support for other AI frameworks in the future.

What is your current status and go-to-market strategy?

We are excited to have our first silicon, Gluon, an 8 TOPS AI inference co-processor. We showcased Gluon’s capabilities with our marketing partner CBC at the AI Expo in Tokyo, Japan last month.

The response to our technology was very encouraging, and we are very excited to bring this product to our customers. Competing solutions in the market offer SoC solutions that integrate the host processor and AI accelerator, which necessitates a complete redesign of the system, resulting in large investment and delay. We believe a co-processor strategy will quickly enable our customers to integrate AI capabilities into their current systems, resulting in significant savings. Our initial focus is video analytics. This is a big market, and many verticals like surveillance, retail, automotive, manufacturing, and healthcare will have AI-enabled video analytics applications by 2025.

Our product enables OEMs and system integrators to achieve market cost and power-performance goals for edge solutions. So, in a nutshell, we are developing high-performance, low-power, easy-to-use edge AI co-processors that let our customers integrate AI quickly into their solutions.

How do you differentiate from various AI start-ups and incumbent solutions in this space?

AlphaICs’ differentiation comes from our proprietary architecture. Gluon provides better throughput at lower power than incumbent products as well as other startups’ solutions. We have also developed a software tool-chain that makes it very convenient for users to deploy their trained networks on Gluon.

AlphaICs solutions will enable edge AI compute for both inference and incremental edge learning. Edge learning is the ability of devices to learn from new data and scenarios on which they were not trained, providing additional intelligence to edge devices. In this mode, devices start with a model trained on partial data, and then learn new scenarios as they encounter new data. We have showcased this on our architecture, and it is a unique feature that gives our solution an advantage over the other solutions out there. Edge learning is planned for our next-generation product.

Can you elaborate on your edge learning technology?

Today, edge devices run inference on trained deep neural networks to accomplish tasks such as object recognition, image classification, and image segmentation, to name a few. When edge devices encounter new, unseen data, the accuracy drop of such systems can be substantial. This is a major problem for real-world solutions today, as the nature of the data keeps changing in these applications. With this in mind, at AlphaICs we designed our proprietary Real AI Processor (RAP™) to enable learning when new data becomes available to edge devices, without affecting the already-learned intelligence. We showcased a proof of concept for edge learning based on a research grant from a US government R&D institution. Our results are very promising, and we will continue to develop this technology further.

What is AlphaICs’ future roadmap and direction?

AlphaICs’ core technology, RAP™, supports both edge inference and edge learning. We are working to bring out our next product, which will integrate inference and edge learning. Our current solution is 8 TOPS, and we will scale up to 64 TOPS as well as integrate pre- and post-processing capabilities. We are very bullish on the huge opportunities at the edge, and we have the right technologies to enable edge AI for our customers.

https://alphaics.ai/

Also Read:

CEO Interview: Charbel Rizk of Oculi

CEO Update: Tuomas Hollman, Minima Processor CEO

CEO Interview: Dr. Ashish Darbari of Axiomise


PCIe Gen5 Interface Demo Running on a Speedster7t FPGA

by Kalar Rajendiran on 11-24-2021 at 10:00 am


The major market drivers of today all have one thing in common: the efficient management of data. Whether it is 5G, hyperscale computing, artificial intelligence, autonomous vehicles or IoT, there is data creation, processing, transmission and storage, and all of these aspects of data management need to happen very fast. Fast storage and high-speed networking are ever more critical for today’s applications. Data centers and hyperscale data centers cannot afford data traffic jams anywhere in the data path. They need to process incoming external data very efficiently and get the data to its final destination rapidly. But with Ethernet speeds evolving much faster than PCIe generational speed jumps, the gap is growing.

As network interfaces upgrade from 100GbE to 400GbE, a full-duplex 400GbE link would require 800Gbps of bandwidth, which translates to 100GB/s. A PCIe Gen4 x16 cannot handle that bandwidth, but a PCIe Gen5 x16 can. And as offloading tasks traditionally handled by the host becomes more common, NVMe storage is being used like network-attached storage, with access managed by a SmartNIC. A faster NVMe storage solution can be implemented with PCIe Gen5. In other words, PCIe Gen5 will become very important for data centers, where fast storage and high-speed networking are critical for communications.
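The arithmetic behind that comparison is straightforward. A rough sketch, using nominal data rates, treating GT/s as Gbps, and ignoring 128b/130b encoding and protocol overhead:

```python
# Back-of-the-envelope bandwidth check (nominal rates; real usable PCIe
# bandwidth is a few percent lower due to encoding and protocol overhead).
GBPS_PER_GBYTE = 8

# A full-duplex 400GbE link carries 400 Gbps in each direction.
required_gbytes = (400 * 2) / GBPS_PER_GBYTE          # 100.0 GB/s aggregate

# PCIe x16 aggregate bandwidth, both directions: 2 * lanes * GT/s per lane.
pcie_gen4_x16_gbytes = 2 * 16 * 16 / GBPS_PER_GBYTE   # 64.0 GB/s
pcie_gen5_x16_gbytes = 2 * 16 * 32 / GBPS_PER_GBYTE   # 128.0 GB/s
```

Only Gen5 x16 (roughly 128 GB/s aggregate) clears the 100 GB/s requirement; Gen4 x16 tops out at about 64 GB/s.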

SmartNICs are expected to handle more functionality and offer the flexibility to handle changing data management requirements. An earlier blog discussed how a reconfigurable SmartNIC can benefit from a Speedster7t FPGA-based implementation. The focus of that post was the 2D NoC feature of the Speedster7t FPGA. The blog was based on an Achronix webinar titled “Five Reasons Why a High Performance Reconfigurable SmartNIC Demands a 2D NoC.” You can watch that on-demand webinar by registering here.

This blog focuses on the Speedster7t FPGA’s PCIe Gen5 capability. The Speedster7t family is one of the first FPGAs available now that natively supports the PCIe Gen5 specification. It is in this context that a recent video publication by Achronix is of interest. The video shows a demonstration of a successful PCIe Gen5 link between a Teledyne LeCroy PCIe exerciser and a Speedster7t FPGA. Teledyne LeCroy offers an integrated and automated compliance testing system, approved by the PCI-SIG® as a standard tool for compliance testing of PCIe specifications. The PCI Express exerciser can generate PCI Express transactions, observe behavior, and perform both stress testing and compliance testing.

Steve Mensor, vice president of sales and marketing at Achronix, introduces the Speedster7t FPGA with a high-level overview of its features. He then hands off to Katie Purcell, application engineering manager at Achronix, to present the PCIe Gen5 interface demo on the Speedster7t FPGA. The demo setup includes a Speedster7t FPGA board, the PCIe exerciser, and a connected computer to set up the exerciser.

First, Katie launches the exerciser’s control program graphical user interface (GUI) on the connected computer. The goal of the demo is to show the FPGA successfully linking (achieving the PCIe L0 state) at Gen1 through Gen5 speeds. The demo shows that the L0 state can be achieved between the FPGA and the Gen5-capable LeCroy A58 PCIe exerciser. Although the FPGA can support up to PCIe Gen5 x16, the demo is run in x8 mode, as that is the maximum supported by the exerciser. All eight lanes, downstream and upstream, show the status of having reached the L0 state at a 32GT/s PCIe Gen5 data rate. The exerciser is then cycled through to show that links can be achieved at all five PCIe Gen speeds.

If you are involved in or will be upgrading to a PCIe Gen5 system, you may want to watch the demo. It runs just four minutes but could be useful for your project. You can find out more details about the Speedster7t FPGA family here.


WEBINAR: Using Design Porting as a Method to Access Foundry Capacity

by Tom Simon on 11-24-2021 at 8:00 am


There have always been good reasons to port designs to new foundries or processes. These reasons have included reusing IP in new projects, moving an entire design to a smaller node to improve PPA, or second sourcing manufacturing. While there can be many potential business motivations for any of the above, in today’s environment with semiconductor supply shortages, design porting has taken on a new and compelling importance. With almost every fabless semiconductor company facing reductions in fab allocation, design teams are pressed to move existing designs to alternative fabs.

Webinar: Efficient and User-Friendly Analog IP Migration

Second sourcing SoCs calls for porting both the digital and analog portions of the design. In many SoCs it is enough to find equivalent analog IP for such things as PLLs and I/Os, but mixed-signal designs that feature custom IP blocks need more attention. While porting digital designs is never truly easy, the use of RTL, libraries, synthesis and P&R makes the task tractable. Analog is quite another thing altogether. Fortunately, MunEDA has a comprehensive solution for each stage of the analog design porting process: its Schematic Porting Tool (WiCkeD SPT), plus a suite of analog tools for tuning device parameters and optimizing designs.

InPLAY Inc. is a rapidly growing company focused on RF designs for low-latency wireless (SMULL), Bluetooth, and the Industrial IoT. Their products offer unique features and extremely high performance in terms of range, throughput and battery life. With demand growing rapidly, especially for their new active BLE beacon product, NanoBeacon, they have sought to diversify their manufacturing. I spoke recently with InPLAY’s co-founder and Director of RF/AMS Design, Russell Mohn, about how they are managing the process.

Design Porting the NanoBeacon

Russell told me that once they realized they would need to move production to additional foundries, they chose MunEDA’s SPT – partly because they were already using MunEDA’s WiCkeD analysis and verification tools to optimize their analog designs. WiCkeD offers Circuit & Sensitivity Analysis, PVT & Corner Analysis, Monte Carlo Statistical Analysis, High Sigma & Worst Case Analysis, and a Robustness Verification Flow. Russell has been quite happy with the design results he has achieved with WiCkeD, and it was an easy choice to look at SPT to solve their new challenges.

SPT handles all the details of switching to the devices in the new process PDK. SPT helps the user set up the device, pin and parameter mapping information. Of course, some manual intervention is required, but the SPT user interface makes the task intuitive and straightforward. SPT will even help manage the changes in the drawn schematic symbols so the schematic remains legible.
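As a rough illustration of what such a mapping pass involves, here is a toy sketch in Python. The device names, pin names and parameter rules are invented for illustration; they are not SPT’s actual format or any real PDK:

```python
# Illustrative only: a toy device/pin/parameter mapping pass of the
# kind a schematic porting tool automates. All names below are
# hypothetical, not taken from SPT or any real PDK.

DEVICE_MAP = {
    "pdk_a_nmos": {
        "name": "pdk_b_nch",
        "pins": {"d": "D", "g": "G", "s": "S", "b": "B"},
        "params": {"w": "width", "l": "length"},
    },
}

def port_instance(inst: dict) -> dict:
    """Rewrite one netlist instance from the source PDK to the target PDK."""
    rule = DEVICE_MAP[inst["device"]]
    return {
        "device": rule["name"],
        "pins": {rule["pins"][p]: net for p, net in inst["pins"].items()},
        "params": {rule["params"][k]: v for k, v in inst["params"].items()},
    }

m1 = {"device": "pdk_a_nmos",
      "pins": {"d": "out", "g": "in", "s": "gnd", "b": "gnd"},
      "params": {"w": 1.2e-6, "l": 0.18e-6}}
print(port_instance(m1))
```

The manual intervention the article mentions shows up wherever no one-to-one rule exists, for example when a target device has extra pins or differently constrained parameters.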

Symbol Mapping

In analog designs there is, of course, a lot more to moving to a new PDK than just mapping devices. Every aspect of the circuit behavior is prone to change. MunEDA’s DNO sizing and optimization tools, however, can automate most of the work using designer provided performance targets.
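The underlying loop is easy to picture: adjust a device parameter until a simulated metric meets its target. The toy sketch below uses a stand-in response function and simple bisection; it illustrates only the shape of the problem, not DNO’s actual algorithms:

```python
# Toy version of the re-tuning problem a sizing optimizer solves:
# adjust one device parameter until a simulated metric hits a target.
# The "simulator" here is a hypothetical stand-in, not a circuit model.

def simulate_gain(width_um: float) -> float:
    # Hypothetical monotone response of gain to device width.
    return 20.0 * (1.0 - 1.0 / (1.0 + width_um))

def tune(target: float, lo: float = 0.1, hi: float = 50.0,
         tol: float = 1e-4) -> float:
    """Bisect on width until simulate_gain(width) reaches the target."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if simulate_gain(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

w = tune(target=15.0)
print(f"width = {w:.3f} um, gain = {simulate_gain(w):.3f} dB")
```

A real optimizer works over many parameters, corners and statistical samples at once, which is exactly why designer-provided targets plus automation beat hand-tuning after a port.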

While I am sure that folks like Russell would rather be working 100% on developing new products, it comes as a huge relief to have an effective option for keeping up with growing demand at a time when the extra effort is required. It may be that SPT is a product whose time has come.

If you are interested in learning more about SPT and how it can smooth the move to new PDKs please register for this webinar.

 

Also Read

Numerical Sizing and Tuning Shortens Analog Design Cycles

CEO Interview: Harald Neubauer of MunEDA

Webinar on Methods for Monte Carlo and High Sigma Analysis


Traceability and ISO 26262

by Bernard Murphy on 11-24-2021 at 6:00 am


Since traceability and its relationship to ISO 26262 may be an unfamiliar topic for many of my readers, I thought it might be useful to spend some time on why this area is important. What is the motivation behind a need for traceability in support of automotive systems development? The classic verification and validation V-diagram is a useful starting point for understanding. The left arm of the V decomposes system design from concepts into requirements, architecture, and detailed design. The right arm represents verification and validation steps from unit testing all the way up to full system validation.

System development, verification and validation

Interdependency in system design

First, let’s talk about systems. A system is generally more than just a chip plus the software running on that chip. An example system is a car, in which SoCs (chips plus software) play multiple roles. There are many mechanical components in a car: engine, body structure, braking, airbags, seats and windows. All of these are enabled in various ways by electronic components: sensors to detect possible collisions, actuators to control brakes and steering, seat and window positions, communication and infotainment. These must work together as flawlessly as possible. Accomplishing this goal is managed through mountains of specifications, requirements lists and use-case definitions to ensure everyone is building and testing against the same expectations.

Car design depends heavily on reuse for the same reasons we face in SoC design – cost, schedule, quality, reliability. Plus, of course, safety. Which puts heavy constraints on interfaces between levels in the system. A new SoC must comply with multiple existing requirements in addition to meeting new requirements. The larger system and its software are very expensive to change and re-certify. New components like SoCs must fit the system requirements.

Mind the gaps

Now that we’ve established that everything starts with specifications, requirements and use-cases (all non-negotiable expectations on a supplier), how does an SoC company map those into what it needs to build? Going down the left arm of the V, requirements are managed in tools like IBM DOORS or Jama, specifications might be in PDF, and use-cases perhaps in SysML. This information is very high level and not directly executable by an RTL design team.

An architect will use her expertise to manually map these requirements into a more detailed specification, leveraging available IP and the company’s differentiated skills. She will also optimize the architecture to meet performance, power and cost goals. The architect will use a different set of tools at this level, together with virtual modeling to start early software development. That intent is then translated, usually manually, into the more familiar RTL design and modeling phase, where the full implementation is developed.

In the right arm of the V, verification and validation start with unit testing. These tests are built independently of the development work to maximize the integrity of the testing. Subsystem and system testing follow, also independently developed for the same reason. Finally, full system validation runs against system software in a lab emulation of the full electronic system, perhaps even with some mechanical modeling.

There are gaps between all these stages, some well-intended, but gaps nonetheless. Humans must bridge these gaps; however, we are imperfect. We miss some things, we misinterpret others, and we don’t stay current with spec changes. You might hope for a universal modeling language to design out human fallibility, but that dream seems unattainable. Instead, we bridge the gaps with traceability – links connecting a higher-level requirement to lower-level implementation and tests of that requirement.

How traceability bridges the gaps

Without automation, the way you check correspondence between levels is through painstaking line-by-line checks between requirements and implementation, tying up experts for days. Not so bad the first time, but as the design evolves, if the customer changes the specification, or if multiple customers have conflicting requirements, periodically repeating that detailed level of checking becomes very hard.

Bridging the gaps with Arteris® Harmony Trace™ traceability

A better solution would automate links between requirements and implementation, say a bus width or a register offset. Setup requires some initial effort, but the integrity of that check then persists through the design lifecycle and beyond. In a design review you don’t have to slog through the documents every time; the tool checks automatically. If some parameter slipped out of compliance, you’d know instantly; if the tool hasn’t raised any flags, you know you are still in compliance.
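To make the idea concrete, here is a minimal sketch of such an automated check. The requirement IDs, parameter names and values are invented for illustration and do not reflect Harmony Trace’s actual data model:

```python
# Sketch of an automated requirement-to-implementation check:
# compare values pinned in a requirements database against parameters
# extracted from the design. All names and values here are invented.

requirements = {
    "REQ-101": {"param": "axi_bus_width", "expected": 128},
    "REQ-102": {"param": "ctrl_reg_offset", "expected": 0x40},
}

# Parameters as extracted from the current design database.
design_params = {"axi_bus_width": 128, "ctrl_reg_offset": 0x44}

def check_compliance(reqs: dict, params: dict) -> list:
    """Return the IDs of requirements whose linked parameter drifted."""
    return [rid for rid, r in reqs.items()
            if params.get(r["param"]) != r["expected"]]

print(check_compliance(requirements, design_params))
```

Run on every design change, a check like this flags drift (here the register offset) the moment it happens, instead of waiting for a manual review to catch it.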

Traceability also gives you instant evidence you can show to somebody who’s going to check your work. Maybe your own internal safety team, maybe a customer demanding proof of compliance, maybe a safety process auditor. There’s another benefit. When something goes wrong (because this is engineering; something always goes wrong), you have an audit trail through traceability to help figure out what you should change in your process.

Traceability throughout the lifecycle

Traceability isn’t only important in the design phase. After the chip goes into production, when the auto OEM is running extensive tests, they may find a problem. In diagnosis, they want to trace back through software to hardware components. Can this problem be attributed to a deficiency in the requirements? Being able to trace quickly to a root cause can have a huge impact on corrective action and ultimately on model release. Being able to provide quick turnaround and definitive evidence of compliance, or of an unforeseen problem, can only enhance a provider’s reputation in the supply chain.

Arteris® Harmony Trace™

Is there such a solution available for SoC product design teams? Arteris IP has now released its Harmony Trace product to automate and report on these links. Harmony Trace connects IP-XACT-based SoC assembly and the hardware/software interface to popular requirements management tools and documentation formats. There is now an automated path to ensure compliance with those higher-level requirements and to quickly demonstrate that compliance to customers and ISO 26262 auditors. To learn more, click HERE.

Also Read:

Physically Aware SoC Assembly

More Tales from the NoC Trenches

Smoothing the Path to NoC Adoption


Bonds, Wire-bonds: No Time to Mesh, Mesh It All with Phi Plus

by Matt Commens on 11-23-2021 at 10:00 am

Ansys Phi Plus

Automatic adaptive meshing in HFSS is a critical component of its advanced simulation process. Guided by Maxwell’s Equations, it efficiently refines the mesh to accurately capture both the geometric and electromagnetic detail of a design. The end result is a process that guarantees accurate and reliable simulation results with no user input required.

Before the adaptive meshing process can begin, there must be an initial mesh that faithfully represents the device’s geometry. With today’s highly dense and complex designs, creating this mesh can be a challenging task. A variety of initial meshing approaches are available in HFSS, each with a different scheme for mesh generation and a different set of strengths appropriate to different design types. For example, the TAU meshing technology is well suited to complex 3D CAD, while the Phi meshing technology is highly effective for PCBs. Applying a one-mesh-fits-all paradigm becomes a significant challenge for complex designs containing a mixture of CAD, like a PCB in a shielding enclosure.

Fortunately, today HFSS uses the new breakthrough HFSS Mesh Fusion technology to apply meshing approaches according to local CAD specifications. From there, HFSS proceeds with the same reliable, adaptive refinement process with guaranteed accuracy.

Figure 1. HFSS Mesh Fusion applied to PCB-Connector-Flex System. Mesh-left, Fields-right

Phi is one of the meshing techniques that Mesh Fusion can apply to local CAD for components like PCBs and IC packages. Phi is “geometry-aware”: it works with designs composed of 2D layers swept uniformly along their normal direction. When applied to the right CAD, such as a PCB, Phi is 10, 15, or even 20 times faster than its partner 3D meshing technologies, Classic and TAU.
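The layered-sweep idea can be illustrated in a few lines of Python: a 2D triangulation of a layer is extruded through the stackup’s z-heights into prism elements, rather than meshing the full 3D volume directly. This is only a conceptual sketch, not HFSS’s implementation:

```python
# Conceptual illustration of "2D layers swept in their normal":
# extrude a 2D triangulation through the stackup's z-heights to form
# prism elements. Not HFSS's actual algorithm; a sketch of the idea.

def extrude_layer(triangles_2d, z_levels):
    """Turn each 2D triangle into one prism per z-interval.

    triangles_2d: list of triangles, each a list of three (x, y) vertices.
    z_levels: sorted layer boundaries, e.g. metal/dielectric heights.
    """
    prisms = []
    for tri in triangles_2d:
        for z0, z1 in zip(z_levels, z_levels[1:]):
            bottom = [(x, y, z0) for x, y in tri]
            top = [(x, y, z1) for x, y in tri]
            prisms.append(bottom + top)  # 6-vertex prism element
    return prisms

# Two triangles covering a unit square, extruded through three z-levels.
layer = [[(0, 0), (1, 0), (0, 1)], [(1, 0), (1, 1), (0, 1)]]
prisms = extrude_layer(layer, z_levels=[0.0, 0.035, 0.2])
print(len(prisms))
```

Because only the 2D footprint needs triangulating, the work scales with the layer geometry rather than the full 3D volume, which is the intuition behind Phi’s speed advantage on PCBs.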

After two decades of meshing innovation, there were still a few “rogue” design types that were notoriously difficult to mesh—most notably a common, inexpensive design solution in consumer electronics: wirebond packaging. With tiny, high-aspect-ratio copper wires connecting to an integrated circuit inside the package, its design is both layered and 3D in nature. These two CAD features, when combined in a single design, were challenging for the existing meshing technologies to manage…until now.

Introducing Phi Plus
Ansys created Phi Plus using a ground-up approach for new meshing technology, specifically designed for challenging mixed geometries like wirebond packaging. Like its predecessor, Phi, it’s “geometry-aware.” It was designed specifically to understand how wirebond packages, and other complicated 3D components like PCB connectors, are manufactured and assembled. Phi Plus takes advantage of this design knowledge and accounts for those nuances in the meshing model.

Figure 2. Package on PCB system meshed with Phi Plus and HFSS Mesh Fusion. Z-stretched view

Key features include:

Parallel Meshing Technology
Ansys prioritized parallel meshing to speed up Phi Plus processing time. Phi Plus was designed from the ground up to take advantage of parallel computational strategies; in beta testing, mesh times upwards of 10 times faster were observed when meshing with 12 cores.
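As a sanity check on that figure, Amdahl’s law implies that a roughly 10x speedup on 12 cores requires the meshing work to be almost entirely parallelizable. The sketch below is a back-of-the-envelope estimate; the 98% parallel fraction is an assumption for illustration, not an Ansys number:

```python
# Back-of-the-envelope check of the ~10x-on-12-cores beta result
# using Amdahl's law. The 0.98 parallel fraction is an assumption.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A parallel fraction around 98% on 12 cores lands near the observed 10x.
print(f"{amdahl_speedup(0.98, 12):.1f}x")
```

In other words, the reported result is consistent with a meshing pipeline that has very little serial work remaining, which matches the "designed from the ground up for parallelism" claim.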

Robustness and Reliability
Phi Plus combines the reliability of Phi with component-specific considerations to control for a uniform, high-quality, and water-tight mesh. A higher quality mesh generates downstream benefits in the adaptive meshing and frequency sweep solving steps. Benefits include smaller final mesh count, less solution memory, and the ability to solve more frequency points in parallel for a faster total speed with high performance computing (HPC) resources.

Figure 3. Close-up view of wirebond at top layer of package

More Than “Just” Fast
With no change to the user flow, the entire simulation process can be upwards of 10 times faster, with few to no mesh failures. For now, enabling Phi Plus requires a single settings change in the simulation setup, with no other action needed from the user. Phi Plus has proven so robust in its beta rollout that it is anticipated to become the default meshing approach in a near-future release.

It doesn’t stop with “just” wirebond packages either. Phi Plus addresses system complexity that wasn’t possible before. Its robust meshing capabilities can manage other common 3D effects in complex electronics design, including trace etching or the inclusion of 3D Encrypted Components.

The Bottom Line
However beautifully rendered a 3D design may look on the computer screen, it’s the mesh that dictates what’s simulated. Mesh is foundational to an accurate physics model. HFSS has a long legacy of letting Maxwell’s Equations guide the creation of accurate, efficient mesh, and that legacy grows stronger with the advent of Phi Plus. With Ansys HFSS 2022 R1, Phi Plus meshing will be fully incorporated into our system-capable Mesh Fusion technology. Combined with hyper-scale technologies like Ansys Cloud, there are no limits to what teams can tackle with Ansys HFSS.

Also Read

Optical I/O Solutions for Next-Generation Computing Systems

Neural Network Growth Requires Unprecedented Semiconductor Scaling

SeaScape: EDA Platform for a Distributed Future