
An Automated Method to Ensure Designs Are Failure-Proof in the Field
by Rob vanBlommestein on 06-13-2023 at 6:00 am


I don’t know about you, but when I think of mission-critical applications, I immediately think of space exploration or military operations. But in today’s world, mission-critical applications are all around us. Think about the cloud and how data is managed, analyzed, and shared to execute any number of tasks with safety and security implications. Or think of home IoT applications, where security systems or smoke alarms must reliably operate and send alerts when something goes awry. What about your self-driving car? One failure could cause serious damage or a fatality. If you look, you’ll find that mission-critical applications exist in every aspect of our lives, from travel to medical to energy to manufacturing to connectivity.

SoCs are at the heart of these mission-critical applications, so how do we ensure that these SoCs don’t fail in the field? How do we make sure that these designs are resilient against random hardware failures? Systematic failures are typically detected and fixed during IC development and verification, but random failures in the field are unexpected and difficult to plan against, with potentially serious consequences. Devices need to be not only reliable, functioning as expected, but also resilient against random failures, able to either recover from such events or mitigate them.

Devices in the field also need to be built to last. Aging effects can be factored into the reliability of the design during the development phase using models, DFM, test, and simulation. Random failures, however, must be guarded against at design time. Designing in safety mechanisms or safety measures (SMs) is key to ensuring that mission-critical designs are not affected by random failures such as single-event upsets (SEUs) during the lifespan of the device.

Adding SMs, which generally take the form of redundancy, into a design to protect against SEUs is not a new concept – it has been around for decades. However, the effort has largely been manual. Manually inserting SMs is painstaking and error-prone, as physical placement constraints and routing considerations must be accounted for to ensure that the SMs don’t have adverse cascading effects on elements such as reset, power, or clock network signals.
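To make the redundancy idea concrete, here is a minimal Python sketch (illustrative only, not the Synopsys flow) of the most common SM, triple modular redundancy (TMR): three copies of a state element feed a majority voter, so any single upset is out-voted by the two intact copies.

```python
import random
from typing import Optional

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote across three redundant copies."""
    return (a & b) | (a & c) | (b & c)

def tmr_register(value: int, seu_copy: Optional[int] = None) -> int:
    """Model three redundant copies of an 8-bit register; optionally flip
    one random bit in one copy to emulate a single-event upset (SEU)."""
    copies = [value, value, value]
    if seu_copy is not None:
        copies[seu_copy] ^= 1 << random.randrange(8)  # corrupt one copy
    return majority(*copies)

# Whichever copy is hit, the voter recovers the original value.
assert all(tmr_register(0b1011_0010, seu_copy=i) == 0b1011_0010
           for i in range(3))
```

The physical caveats mentioned above show up even in this toy: if the three copies were placed close enough for a single particle strike to upset two of them, the vote would fail, which is why automated control of placement distance matters.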

Synopsys synthesis and implementation tools provide a fully automated approach to inserting the SMs, making mission-critical designs much more resilient. Synthesis can automatically insert the elements, while the place-and-route (P&R) tools take care of physical implementation challenges such as placement distance and routing independence of signal nets. We have drafted a white paper describing the process of adding these SMs and analyzing and verifying that they meet requirements from RTL to GDSII. Download the white paper “An Automated Method for Adding Resiliency to Mission-Critical SoC Designs” to learn more.

An Automated Method for Adding Resiliency to Mission-Critical SoC Designs

Adding safety measures to system-on-chip (SoC) designs, in the form of radiation-hardened elements or redundancy, is essential to making mission-critical applications in the Aerospace and Defense (A&D), cloud, automotive, robotics, medical, and Internet-of-Things (IoT) industries more resilient against random hardware failures. Designing for reliable and resilient functionality does impact semiconductor development, where these safety measures have generally been inserted manually by SoC designers. Manual approaches can introduce errors that go unaccounted for. Synopsys has created a fully automated implementation flow to insert various types of safety mechanisms, which can result in more reliable and resilient mission-critical SoC designs.

This paper discusses the process of implementing safety mechanisms/measures (SMs) in a design to make it more resilient, and of analyzing their effectiveness from design inception to the final product.

Also Read:

Automotive IP Certification

Why Secure Ethernet Connections?

Chiplet Interconnect Challenges and Standards


WEBINAR: Revolutionizing Chip Design with 2.5D/3D-IC Design Technology
by Daniel Nenni on 06-12-2023 at 10:00 am


In the 3D-IC (three-dimensional integrated circuit) design method, chiplets or wafers are stacked vertically on top of each other and connected using through-silicon vias (TSVs) or hybrid bonding.

The 2.5D-IC design method places multiple chiplets alongside each other on a silicon interposer. Microbumps and interconnect wires establish connections between dies whereas TSVs are used to make connections with the package substrate.

Figure 1: 2.5D IC design block diagram
Why do we need 3D-ICs?

Emerging technologies like Artificial Intelligence, machine learning, and high-speed computing require highly functional, high-speed, and compact ICs. 3D-IC design technology offers ultra-high performance and reduced power consumption, making it suitable for multi-core CPUs, GPUs, high-speed routers, smartphones, and AI/ML applications. As the high-tech industry evolves, the need for smaller size and more functionality grows. The heterogeneous integration capability of 3D-IC design provides more functional density in a smaller area. The vertical architecture of 3D-ICs also reduces the interconnect length, allowing faster data exchange between dies. Overall, this advanced packaging technology is a much-needed IC design method to meet the growing demand for speed, more functionality, and less power consumption.

Benefits of 3D-ICs

One key advantage of 3D-ICs is heterogeneous integration. It allows the integration of chiplets in different technology nodes in the same space. Digital logic, analog circuits, memory, and sensors can be placed within a single package. This enables the creation of highly customized and efficient solutions tailored to specific application requirements.

Higher integration density is another benefit of 3D-IC design. By vertically stacking multiple layers of interconnected chiplets or wafers, the available chip area is utilized more efficiently. This increased integration density allows for the inclusion of more functionality within a smaller footprint, which is particularly beneficial in applications where size and weight constraints are critical, such as mobile devices and IoT devices.

3D-ICs also exhibit higher electrical performance. The reduced interconnect length in vertically stacked chips leads to shorter signal paths and lower resistance, resulting in improved signal integrity and reduced signal delay. This translates to higher data transfer rates, lower power consumption, and enhanced overall system performance.

With the latest configuration methods like TSMC’s CoWoS (Chip On Wafer on Substrate) and WoW (Wafer on Wafer), which utilize hybrid bonding techniques, the interconnect length is further minimized, leading to reduced power losses and improved performance.

3D-IC technology provides a range of exceptional advantages, including heterogeneous integration, higher integration density, smaller size, higher electrical performance, reduced cost, and faster time-to-market. These advantages make 3D-ICs a compelling solution for advanced chip designs in various industries.

Challenges of 3D-IC Design

Although 2.5D/3D-IC design methods have numerous advantages, these new methodologies also introduce new physics-related challenges. The structural, thermal, power, and signal integrity of the entire 3D-IC system is more complicated. 3D-IC designers are at the beginning of the learning curve in mastering these integrity challenges during physical implementation of the system. Accurate simulation methods are a must for any chip designer, especially when dealing with 3D-ICs. Each component in the 3D-IC system should be examined and validated using highly accurate simulation tools.

Learn more about the latest developments in 3D-IC design, challenges, and simulation, and the key to a successful 3D-IC design, by registering for the replay: Design and Analysis of Multi-Die & 3D-IC Systems by Ansys experts. The presenters will also discuss advanced simulation methods to predict possible structural, thermal, power, and signal integrity issues in 3D-ICs.

Also Read:

Chiplet Q&A with John Lee of Ansys

Multiphysics Analysis from Chip to System

Checklist to Ensure Silicon Interposers Don’t Kill Your Design


VLSI Symposium – Intel PowerVia Technology
by Scotten Jones on 06-12-2023 at 6:00 am


At the 2023 VLSI Symposium on Technology and Circuits, Intel presented two papers on their PowerVia technology. We received a pre-conference briefing on the technology, embargoed until the conference began, as well as the papers.

Traditionally, all interconnect has been on the front side of devices, with signal and power sharing the same set of interconnect layers. There is a fundamental trade-off between signal routing, where small cross-sectional-area routing lines are required for scaling, and power delivery, where large cross-sectional-area lines are needed for low resistance and low voltage drop. Moving power delivery to the backside of the wafer with a Backside Power Delivery Network (BS-PDN) enables optimized signal routing layers on the frontside and optimized power delivery layers on the backside with big, thick power interconnects, see figure 1.
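To see why cross-section dominates, consider the textbook wire-resistance formula R = ρL/(W·t). The Python sketch below runs the numbers for a 100 µm run at illustrative dimensions (my own assumed values, not figures from the Intel papers):

```python
# Back-of-the-envelope: wire resistance R = rho * L / (W * t).
# All dimensions below are illustrative assumptions, not from Intel's papers.

RHO_CU = 1.7e-8  # bulk copper resistivity, ohm-meters (scaled wires are worse)

def wire_resistance(length_um: float, width_nm: float, thickness_nm: float) -> float:
    """Resistance in ohms of a rectangular wire of the given dimensions."""
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    return RHO_CU * (length_um * 1e-6) / area_m2

# Same 100 um run: a scaled frontside signal-pitch wire vs. a thick backside line.
r_signal = wire_resistance(100, width_nm=15, thickness_nm=30)    # ~3.8 kilohms
r_power  = wire_resistance(100, width_nm=500, thickness_nm=500)  # ~7 ohms

print(f"signal-pitch wire: {r_signal:.0f} ohms, backside power line: {r_power:.1f} ohms")
```

Orders of magnitude separate the two, which is exactly why routing power on scaled frontside layers is so costly.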

Figure 1. Frontside Versus Backside Power Delivery

As logic technology has advanced the number of interconnect layers required has been steadily growing, see figure 2.

Figure 2. Intel Interconnect Layers

Please note that for recent nodes the number of interconnect layers may vary by a few, depending on the device.

Connections from the outside world to a device are made through the top interconnect layers, which means that power must travel down to the devices through a via chain spanning the entire interconnect stack, see figure 3.

Figure 3. Power Routing Challenges

The example in figure 3, from TSMC’s 3nm technology, shows a via chain resistance of 560 ohms, versus roughly 50 ohms that imec reports for a backside nano-via. One of the key advantages of BS-PDN becomes clear.

Another advantage that Intel is talking about is cost. BS-PDN relaxes the requirements for metal zero, lowering cost for the most expensive interconnect layer at the expense of relatively large-pitch backside metal layers.

There are multiple approaches to BS-PDN. Imec is advocating Buried Power Rails (BPR) as the connection point for BS-PDN. In figure 4, Intel shows a density advantage for PowerVia versus BPR.

Figure 4. Buried Power Rail Versus PowerVia

I have two comments here. First, my sense is that the industry is reluctant to implement BPR because it requires metal buried in the wafer before transistor formation. In my discussions with imec, they admit this reluctance but believe BPR will eventually be needed. Second, imec believes BPR can also connect into the side of the device without going up to metal 0 and achieve the same or better density than PowerVia; this is an area of contention between the two technologies.

To minimize risk, instead of running the first PowerVia test vehicle on Intel’s 20A process, which also introduces RibbonFET (horizontal nanosheets), Intel ran PowerVia on the i4 FinFET process it is currently ramping in production.

Figure 5 summarizes the results seen with PowerVia on i4. PowerVia has demonstrated improved Power, Performance, and Area (PPA).

Figure 5. PowerVia integrated into i4

Figure 6 illustrates the area improvement and figure 7 illustrates the power and performance advantages.

Figure 6. PowerVia Scaling

From figure 6 it can be seen that PowerVia reduces the cell height while also relaxing metal 0 from a 30nm pitch to a 36nm pitch. The pitch relaxation likely allows a single-patterned EUV layer in place of multi-patterned EUV.

Figure 7. IR Droop and Fmax

In figure 7 it can be seen that IR droop is reduced by 30% and Fmax is increased by 6%.

Finally, in figure 8 we can see that i4 + PowerVia yield is tracking i4 yield, offset by two quarters.

Figure 8. i4 + PowerVia Yield

With PowerVia due to be introduced in 2024, on Intel’s 20A process in the first half and on 18A in the second half, it appears that PowerVia should have minimal impact on yield.

It is interesting to note that Intel is planning to introduce PowerVia in 2024. Samsung and TSMC have both announced BS-PDN for their second-generation 2nm nodes due in 2026, giving Intel a two-year lead in this important technology. My belief is twofold: one, Intel is continuing to make progress on the timely introduction of new technologies, and two, Intel likely prioritized BS-PDN because it is more focused on pure performance than the foundries.

Here is the official Intel press release:

https://www.intel.com/content/www/us/en/newsroom/news/powervia-intel-achieves-chipmaking-breakthrough.html

Also Read:

IEDM 2022 – Ann Kelleher of Intel – Plenary Talk

Intel Foundry Services Forms Alliance to Enable National Security, Government Applications

Intel and TSMC do not Slow 3nm Expansion

How TSMC Contributed to the Death of 450mm and Upset Intel in the Process


Podcast EP167: What is Dirty Data and How yieldHUB Helps Fix It With Carl Moore
by Daniel Nenni on 06-09-2023 at 10:00 am

Dan is joined by Carl Moore, a Yield Management Specialist at yieldHUB. Carl is a semiconductor and yield management expert with more than 35 years of experience in the industry. Carl has held technical management positions across product and test engineering, assembly, manufacturing, and design.

Carl explains what “dirty data” is from a semiconductor test and yield management perspective. He explains the sources of dirty data, the negative impact it can have on an organization and its customers, and how yieldHUB partners with its customers to analyze and fix dirty data at the source.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Sean Wei of Easy-Logic
by Daniel Nenni on 06-09-2023 at 6:00 am


Dr. Wei has served as CEO & CTO of Easy-Logic since 2020. Prior to that, he had served as CTO since 2014, during which time he constructed the core algorithm and the tool structure of EasyECO. As CEO, Dr. Wei focuses on building a strong company infrastructure. In his CTO role he interfaces with strategic ASIC design customers and leads the field support efforts to seamlessly align EasyECO’s technology with emerging industry needs. Before pursuing his PhD, Dr. Wei worked at Agate Logic as an FPGA P&R algorithm developer.

Dr. Wei received his PhD in Computer Engineering from the Chinese University of Hong Kong and his MS and BS degrees in Computer Science and Technology from Tsinghua University.

Tell us about Easy-Logic

Easy-Logic was founded in 2014 by a group of PhD graduates and their supervisor from the Chinese University of Hong Kong. While at the university, they analyzed the EDA solutions available to the ASIC design industry and realized that functional ECO demands were growing at an alarming rate, but the EDA industry wasn’t responding.

They entered the CAD contest at the ICCAD International Conference using the functional-ECO algorithms developed in their research and won the world championship three times in a row (2012-2014). Worth mentioning: in 2012 the contest subject, provided by Cadence, was functional ECO, and their algorithm performed twice as well as any other contender’s.

With a strong combination of the required product development expertise, Easy-Logic set its course for empowering the ASIC project teams to quickly react to functional ECOs at a substantially lower overall cost.

After the product EasyECO was first introduced in 2018, the positive response from the design industry surprised the young entrepreneurs. The number of customer evaluation requests overwhelmed the startup, and Easy-Logic quickly became a rising star in the EDA industry. The customer base now extends across Asia and North America and includes many of the world’s top-tier semiconductor providers.

What problems are you solving?

Easy-Logic Technology is a solution provider for functional ECO issues in ASIC design.

A functional ECO requirement occurs when there is a change in the RTL code that fixes or modifies the chip’s function. A functional ECO inserts only a small patch into the existing design (pre-layout, during cell routing, or even post-mask) to make the logic function of the patched circuit consistent with the revised RTL. The purpose is to quickly implement the RTL change without re-spinning the whole design.
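As a toy illustration of the patch idea (hypothetical logic written for this article, not EasylogicECO’s algorithm), suppose the revised RTL adds an enable to an already-implemented function. The ECO goal is to find the smallest patch circuit that makes the frozen netlist match the new spec:

```python
from itertools import product

def netlist(a, b, c):            # existing implemented logic (frozen)
    return (a & b) | c

def revised_rtl(a, b, c, en):    # new spec: the output is now gated by an enable
    return ((a & b) | c) & en

def patched(a, b, c, en):        # netlist + ECO patch: splice in one AND gate
    return netlist(a, b, c) & en

# Exhaustive equivalence check over all input combinations confirms that a
# one-gate patch suffices; the rest of the design stays untouched.
assert all(patched(a, b, c, en) == revised_rtl(a, b, c, en)
           for a, b, c, en in product((0, 1), repeat=4))
print("patched netlist is equivalent to the revised RTL")
```

Real designs are vastly harder because the implemented netlist has been optimized far away from the RTL structure, a difficulty the interview returns to below.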

The design team may receive Functional ECO requests at any stage of the design process.

Depending on the design stage, the required RTL change ripples through design constraints such as multi-clock-domain and low-power design rules, DFT test coverage requirements, physical restrictions on the layout change, eventual metal changes, and timing closure. There is no reliable correlation between the complexity of the RTL change and the success of the layout ECO, even when the RTL change looks simple, and a failed ECO means a project re-spin.

At present, most IC design companies still invest a lot of manual work in functional ECO because market-leading EDA tools are not yet capable of effectively addressing challenging ECO issues. Each design revision mentioned above requires a skilled engineer to crack the problem based on the nature of the RTL change and the characteristics of the ASIC design.

EasylogicECO’s automatic design flow efficiently solves functional ECO problems for design teams.

What application areas are your strongest?

Almost all ASIC designs require functional ECOs; however, each application has its unique ECO challenges. Fortunately, EasylogicECO is structured to handle them all.

For example,

  1. HPC designs undergo deep optimization, which leads to larger differences between the netlist and the RTL structure, posing greater challenges for ECO algorithms.
  2. AI chips comprise a significant amount of arithmetic logic, requiring specialized algorithms for arithmetic-logic ECO.
  3. The automotive area has challenges in scan-chain fixing, as test coverage is critical.
  4. Consumer products, such as panel controllers, face challenges adopting successive functional ECOs, as these products need to be versatile and are revised frequently.

EasylogicECO’s core optimization algorithm lays the foundation for all general optimizations; on top of it, algorithms designed for each specific application scenario identify and handle that application’s challenges automatically.

What keeps your customers up at night?

As mentioned earlier, there is no guarantee that a functional ECO will succeed, and each failed functional ECO job means a project delay of weeks to months. The closer it gets to the tape-out stage, the greater the challenge of achieving success. A re-spin when the design is close to tape-out might even kill the product, so the enormous pressure to complete the ECO task successfully, within the shortest turnaround time, sometimes pushes designers over the edge.

Functional ECO is never a simple job. Its importance has become an industry consensus, and yet, to this day, the major EDA companies still can’t provide a satisfactory solution. The nagging uncertainty over whether an ECO task will succeed is extremely stressful.

What does the competitive landscape look like and how do you differentiate?

Most ASIC design companies still must invest a lot of manpower in complex functional ECO cases, as the solutions provided by the major EDA vendors can’t get the job done efficiently.

Easy-Logic is a newcomer in the functional ECO landscape.  Easy-Logic’s flagship product, EasylogicECO, deploys patented optimization algorithms to create a combination of

  1. The smallest ECO patch
  2. The easiest tool to address complex cases
  3. The most suitable tool flow to address the depth of ECO design changes

That differentiates EasylogicECO from other solutions.

What new features are you working on? 

Functional ECO requires a complete design flow/toolchain. Following the functional ECO itself, DFT ECO, P&R ECO, timing ECO, and metal ECO are also required. Currently, there is no complete solution available for all these needs. Easy-Logic is committed to developing a toolchain for the complete functional ECO process, enabling customers to easily navigate from an RTL change to a GDSII change.

How do customers normally engage with Easy-Logic?

The easiest way is to send an email to the Easy-Logic Customer Response Team through the Contact Us form on the Easy-Logic website.  The Easy-Logic field team will reach out to the sender shortly.

Now that travel is open again, Easy-Logic will appear at many conference events, the next one being DAC 2023 in San Francisco. Please make an appointment before the event, or simply drop by, for a detailed solution discussion.

Also Read:

CEO Interview: Issam Nofal of IROC Technologies

CEO Interview: Ravi Thummarukudy of Mobiveil

Developing the Lowest Power IoT Devices with Russell Mohn


Getting the most out of a shift-left IC physical verification flow with the Calibre nmPlatform
by Peter Bennet on 06-08-2023 at 10:00 am


Who first came up with the term shift-left? I’d assumed Siemens EDA, since they use it so widely. But their latest white paper on the productivity improvements possible with shift-left Calibre IC verification flows sets the record straight: a software engineer named Larry Smith bagged the naming rights in a 2001 paper (leapfrogging hardware engineers, who’ve been doing prototyping for decades).

It’s well known that catching problems earlier in the design process can reduce the rework cost by orders of magnitude.

While the detect, debug, and correct costs might not vary much through the flow, it’s the rework costs that escalate, as fixes require longer fix-and-verify loops, with potential hardware respins.

Not all design errors and violations are created equal – some have higher impact and fixing costs than others – and design checks fall into two categories: strictly functional checks (binary: pass/fail), and attribute checks, which are qualitative (checked against values) and where there may be more scope to over-design earlier or perhaps waive later.

Shift-left strategies assume that early violation detection is as reliable as the 100% level achieved at signoff. Let’s consider what actually happens.

Differences in verification engines or rule interpretation between early and signoff checks may produce false-positive and/or false-negative violations. So Siemens make a strong point that a shift-left strategy hugely benefits from using the same engine for physical design checks throughout the flow.

But the toughest challenge in handling large, dirty (early-stage or incomplete) designs is the sheer volume of violations. Unless we do something smart, the signal-to-noise ratio here can get pretty bleak. Anyone who’s done much verification will know that huge error and warning reports often cluster into similar types with common causes. Figuring out these patterns takes time, even for experienced designers.
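A crude flavor of what such grouping buys you (a hypothetical Python sketch, not how Calibre implements it): bucket raw violations by a signature of check name and cell context, and thousands of results collapse into a short list of likely root causes.

```python
from collections import Counter

# Hypothetical violation-clustering sketch (not Calibre's implementation).
violations = [
    {"check": "M1.S.1",   "cell": "sram_macro", "coord": (12.0, 3.1)},
    {"check": "M1.S.1",   "cell": "sram_macro", "coord": (12.0, 7.4)},
    {"check": "M1.S.1",   "cell": "sram_macro", "coord": (12.0, 9.9)},
    {"check": "VIA2.W.3", "cell": "io_ring",    "coord": (0.2, 88.5)},
]  # ...an early-stage run could produce millions of such rows

# Group by (check, cell) signature and report the biggest buckets first.
clusters = Counter((v["check"], v["cell"]) for v in violations)
for (check, cell), count in clusters.most_common():
    print(f"{count:>6}  {check:<10} in {cell}")

# Three M1 spacing hits inside one macro point at a single placement issue,
# not three independent errors to debug one by one.
```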

Features we’d like in a shift-left flow might include signoff-quality early checking, fast and simple run setup, smart debug aids that expose root causes, and automated correction.

Optimizing shift-left in the Calibre nmPlatform flow

How does a Calibre flow measure up to these challenges?

A Calibre shift-left flow must include the whole range of physical verification checks – LVS, DRC, ERC, PERC, DFM and reliability – as well as design modifications like metal fill and in-design fixing (DRC: Design rule checking; LVS: Layout-vs-schematic; ERC: Electrical rule checking; PERC: Programmable ERC; DFM: Design for manufacturing).

We can’t just optimize the flow without first making sure the tools have the capabilities to do the necessary early design checking. Calibre has added many features here and considers these from four aspects:

Early-stage verification includes equation-based design rule checking, intelligent pattern matching, advanced property extraction and clustering, and embedded machine learning. Reliability is addressed with a set of pre-formatted Calibre PERC™ checks, and the Calibre language infrastructure supports signoff verification capability during design and implementation.

Execution optimization covers run configuration and management, including automated run invocation and simplified setup. The Calibre nmDRC™ Recon and nmLVS™ Recon tools minimize the rules and data needed for early-stage DRC and LVS verification. Automated check selection and design partitioning allow designers to quickly find and fix the real errors while filtering out irrelevant errors in incomplete designs.

Debug includes color mapping to help minimize and group results and identify root causes quickly, efficiently, and accurately. Intelligent debug signals speed up determining optimal corrections. Calibre RealTime Custom and Digital tools give immediate DRC feedback during design and implementation using standard foundry-qualified Calibre rule decks. Smart automated waiver processing avoids repeating already waivered violations.

Correction is improved with automated, Calibre-correct layout enhancements and repairs that are back-annotated to implementation tool design databases. Calibre’s DFM toolsuite provides a wide range of correct-by-construction layout modifications and optimizations that enhance both manufacturing robustness and design quality. Combining fixing and verification in the same tool also saves license usage and run time.

Another recent Calibre white paper, listed below, provides a detailed summary of this flow.

Some of these operations – like smart automation and recognizing patterns in complex result sets – are a natural fit for Artificial Intelligence (AI) techniques, so it’s no surprise to see these widely used. There’s more detail in the paper.

Summary

Design flows often feel like they were built “tools-up”, with usability added as an afterthought. It’s refreshing to see a more “flow-down” approach here, and perhaps no surprise that it comes from Siemens, a historically system-centric EDA company.

Much as we’ve seen flows consolidate around common timing engines, Siemens make a strong case for having signoff-qualified Calibre PV checks available throughout the design flow.

Siemens have made some really interesting progress with these Calibre shift-left capabilities and clearly see this as a continuing journey with plenty more to come.

Find out more in the original white paper here:

Improve IC designer productivity and design quality with Calibre shift-left solutions; published 3 May 2023

https://resources.sw.siemens.com/en-US/white-paper-calibre-shift-left-solutions-optimize-ic-design-flow-productivity-design

Related Blogs and Podcasts

I found these closely-related white papers very useful:

Michael White, “Optimize your productivity and IC design quality with the right shift left strategy,” Siemens Digital Industries Software; published 01 July 2022, updated 10 March 2023.

https://resources.sw.siemens.com/en-US/white-paper-optimize-your-productivity-and-ic-design-quality-with-the-right-shift-left

The four foundational pillars of Calibre shift-left solutions for IC design & implementation flows, published 4 May 2023.

https://resources.sw.siemens.com/en-US/white-paper-the-four-foundational-pillars-of-calibre-shift-left-solutions-for-ic-design

Here’s the original software engineering article introducing the shift-left concept:

Larry Smith, “Shift-Left Testing,” Dr. Dobb’s, Sept 1, 2001.

https://www.drdobbs.com/shiftleft-testing/184404768

Also Read:

Securing PCIe Transaction Layer Packet (TLP) Transfers Against Digital Attacks

Emerging Stronger from the Downturn

Chiplet Modeling and Workflow Standardization Through CDX


Democratizing the Ultimate Audio Experience
by Bernard Murphy on 06-08-2023 at 6:00 am


I enjoy talking with CEVA because they work on such interesting consumer products (among other product lines). My most recent discussion was with Seth Sternberg (Sensors and Audio software at CEVA) on spatial, or 3D, audio. The first steps toward a somewhat immersive audio experience were stereo and surround sound, placing sound sources around the listener. A little better than mono audio, but your brain interprets the sound as coming from inside and fixed to your head, because it’s missing important cues like reverb, reflection, and timing differences at each ear. 3D audio recreates those cues, allowing the brain to feel the sound source is outside your head, but it is still fixed to your head: move your head to the left and the band moves to the left, move to the right and the band moves to the right. Connecting head movements to the audio corrects this last problem, fixing the sound source in place. When you move your head you hear a change the same way you would in the real world. This might seem like a nice-to-have, but it has major implications for user experience and for reducing the fatigue induced by lesser implementations.
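One of those timing cues is easy to quantify. Below is a minimal Python sketch (using the classic Woodworth spherical-head approximation, not CEVA’s algorithm; the head radius and sample rate are illustrative assumptions) of the interaural time difference (ITD) a renderer must apply for a source at a given azimuth, and which it must re-compute as head tracking reports movement:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average human head radius
SPEED_OF_SOUND = 343.0   # m/s in air

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth approximation ITD = (a/c) * (theta + sin(theta)):
    extra travel time to the far ear for a source at this azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# When the head turns, the renderer re-computes the source's relative azimuth
# and shifts the far-ear signal by the corresponding number of samples.
for az in (0, 30, 60, 90):
    delay = itd_seconds(az)
    print(f"azimuth {az:>2} deg: ITD = {delay * 1e6:5.0f} us "
          f"({delay * 48000:4.1f} samples at 48 kHz)")
```

Updating this and the other cues (level differences, reverb, reflections) promptly after a head movement is what keeps the scene convincing rather than fatiguing.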

Why should we care?

Advances in this domain leverage large markets, especially gaming (~$300B), and not just through game sales. If you doubt gaming is important, remember that last year gaming led NVIDIA’s revenues and is still a major contributor. As a further indicator, the headphones/earphones market is already above $34B and expected to grow to $126B by 2030. Apple and Android 13 provide proprietary spatial audio solutions for music and video services and are already attracting significant attention. According to one reviewer, there are already thousands of Apple Music songs encoded for 3D. Samsung calls their equivalent 360 Audio, working with their Galaxy Buds Pro and content encoded for Dolby Atmos (also supported by Apple’s Spatial Audio). Differentiating on the user audio experience is a big deal.

The music option is interesting but I want to pay special attention to gaming. Given an appealing game, the more immersive the experience the more gamers will be drawn to that title. This depends in part on video action of course, but it also depends on audio well synchronized both in time and in player pose with the video. You want to know the difference between footsteps behind you or in front. When you turn your head to confirm, you expect the audio to track with your movement. If you look up at a helicopter flying overhead, the audio should track. Anything less will be unsatisfying.

Though you may not notice at first, poor synchronization in timing and pose can also become tiring. Your brain tries to make sense of what should be correlated visual and audible stimuli. If these don’t correspond, it must work harder to make them align. An immersive experience should enhance excitement, not fatigue, and game makers know it. Incidentally, long latencies and position mismatch between visual and audio stimuli are also thought to be a contributing factor in Zoom fatigue. Hearing aid wearers watch a speaker’s lips for clues to reinforce what they are hearing; they also report fatigue after extended conversation.

In other words, 3D audio is not a nice-to-have. Product makers who get this right will crush those who ignore the differentiation it offers.

To encode or not to encode

In the early days of surround sound, audio from multiple microphones was encoded in separate channels, ultimately decoded to separate speakers in your living room. Then “up-mixing” was introduced, using cues from the audio to infer a reasonable assignment of source directions to support 5.1 or 7.1 surround sound. This turns out to be a pretty decent proxy for pre-encoding and is certainly much cheaper than re-recording and encoding original content in multiple channels. If richer information is available (stereo, true 5.1 or 7.1, or ambisonics), 3D audio should start with that. Otherwise, up-mixing provides a way for 3D audio to deliver a good facsimile of the real thing.

The second consideration is where to render the audio: on the phone/game station or in the headset. This matters for head tracking and latency. Detecting head movements obviously must happen in the headset, but most commonly the audio rendering is handled in the phone/gaming device. Sending head-movement information back from the headset to the renderer adds latency on top of rendering. This roundtrip over Bluetooth can add 200-400 milliseconds, a very noticeable delay between the visual and audible streams. Apple has some proprietary tricks to work around this issue, but these are locked into an Apple-exclusive ecosystem.

The ideal and open solution is to do the audio rendering and motion detection in the headset for minimal total latency.

The RealSpace solution

In May of this year, CEVA acquired the VisiSonics spatial audio business. They have integrated it with the CEVA MotionEngine software for dynamic head tracking, providing precisely the solution defined above. They also provide plugins for game developers who want to go all the way and deliver content fully optimized for 3D audio. The product is already integrated in chips from a couple of Chinese semiconductor companies and in a recently released line of hearables in India. Similar announcements are expected in other regions.

Very cool technology. You can read about the acquisition HERE, and learn more about the RealSpace product HERE.

Also Read:

DSP Innovation Promises to Boost Virtual RAN Efficiency

All-In-One Edge Surveillance Gains Traction

CEVA’s LE Audio/Auracast Solution


Nominations for Phil Kaufman Award, Phil Kaufman Hall of Fame Close June 30
by Paul Cohen on 06-07-2023 at 10:00 am


Plan ahead now because Friday, June 30, is the deadline to submit nominations for the Phil Kaufman Award and the Phil Kaufman Hall of Fame for anyone you think is deserving of these honors. If you haven’t given it any thought, please consider nominating someone.

Before we look at both awards and the nomination requirements, here’s a thumbnail sketch of Phil Kaufman (1942-1992) and the reasons why we continue to honor his memory. Phil Kaufman was an industry pioneer who turned innovative technologies into commercial businesses that have benefited electronic designers. At the time of his death, he was president and CEO of Quickturn Systems, developer of hardware emulators. Quickturn’s products helped designers speed the verification of complex designs. Previously, he headed Silicon Compiler Systems, an early provider of high-level EDA tools that enabled designers to efficiently develop chips.

The annual Phil Kaufman Award for Distinguished Contributions to Electronic System Design was first presented in 1994 to Dr. Herman Gummel (1923-2022) of Bell Labs (now Nokia Bell Labs). Since then, an impressive list of notables from across the spectrum of our ecosystem have received the award.

Sponsored by the Electronic System Design Alliance (ESD Alliance) and the IEEE Council on Electronic Design Automation (CEDA), it honors individuals who have made a visible and lasting impact on electronic design. Their influence could be as a C-level executive, someone setting industry direction or promoting the industry, a technologist or engineering leader, or a professional in education and mentorship. Dr. Gummel, for example, was honored for his fundamental contributions to central EDA areas, including the integral charge control model for bipolar junction transistors known as the Gummel-Poon model.

Per a policy set with the IEEE, only living contributors are eligible to receive awards. Thus, the Phil Kaufman Hall of Fame was introduced in 2021 by the ESD Alliance and the IEEE CEDA to honor deceased individuals who made significant and noteworthy creative, entrepreneurial and innovative contributions and helped our community’s growth. As Bob Smith, executive director of the ESD Alliance, said at the time: “Many contributors to our success died before being recognized for their efforts shaping our community. The Phil Kaufman Hall of Fame changes that.”

Our first recipients in 2021 were Jim Hogan (1951-2021) and Ed McCluskey (1929-2016). Jim Hogan was managing partner of Vista Ventures, LLC, and an experienced senior executive who worked in the semiconductor design and manufacturing industry for more than 40 years. Ed McCluskey, a professor at Stanford University, sustained a relentless pace of fundamental contributions to efficient and robust design, high-quality testing, and reliable operation of digital systems. Mark Templeton (1958-2016) was the 2022 recipient. Artisan Components (now Arm), where he served as CEO, catalyzed the increasing use of IP as major components in chip designs. At the time of his death, he was managing director of investment firm Scientific Ventures, and a Lanza techVentures investment partner and board member.

How to Nominate
Selections for the Phil Kaufman Award and the Phil Kaufman Hall of Fame are determined through a nomination process reviewed by the ESD Alliance and IEEE CEDA Kaufman Award selection committees. To download a nomination form, go to: Phil Kaufman Award or Phil Kaufman Hall of Fame.

About the ESD Alliance
The ESD Alliance, a SEMI Technology Community, acts as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry. With a variety of programs for member companies, it represents the electronic system and semiconductor design ecosystem for technical, marketing, economic and legislative issues affecting the entire industry.

Follow SEMI ESD Alliance

www.esd-alliance.org

ESD Alliance Bridging the Frontier blog

Twitter: @ESDAlliance

LinkedIn

Facebook

Also Read:

SEMI ESD Alliance CEO Outlook Sponsored by Keysight Promises Industry Perspectives, Insights

Cadence Hosts ESD Alliance Seminar on New Export Regulations Affecting EDA and SIP March 28

2022 Phil Kaufman Award Ceremony and Banquet Honoring Dr. Giovanni De Micheli


Applied Materials Announces “EPIC” Development Center
by Scotten Jones on 06-07-2023 at 8:00 am


On May 22nd, Applied Materials announced a new development center, the Equipment and Process Innovation and Commercialization (EPIC) Center.

Applied Materials already operates the Maydan Technology Center (MTC), a billion-dollar development facility in Santa Clara, California, with over 120 advanced process tools and 80 metrology and inspection tools. Applied Materials also has the Materials Engineering Technology Accelerator (META) for materials research and innovation, located at Albany Nanotech in Albany, New York; for META, Applied Materials buys capacity at the Albany Nanotech fab and has its own space.

All of this leads to the question: why build a new development center, and what is unique about EPIC?

The key differentiator around EPIC is the focus on collaboration.

“Goal is to change the way equipment companies work with chipmakers, universities and other partners to optimize time to market, R&D cost and overall success rate. Potentially 30% faster than current baseline”

Currently, between university foundational research, equipment-company equipment and process development, and chipmaker module/integration, piloting, and introduction to high-volume manufacturing, a new process can take 10 to 16 years, see figure 1.

Figure 1. Current Development Path.

Foundational research for new technologies takes place at universities, and Applied Materials has identified this as a key bottleneck. EPIC will provide universities with access to state-of-the-art hardware and labs to accelerate this work. There will be space available at EPIC for universities, and Applied Materials is also going to establish collaborative labs at universities, where it will run and maintain equipment as an extension of the EPIC lab.

Another key bottleneck is the transfer of new processes and hardware from equipment companies to chipmakers.

By overlapping foundational research with equipment and process development, and overlapping equipment and process development with the chipmaker’s module/integration work, the overall development process can be 30% faster, see figure 2.

Figure 2. Accelerated Development Path.

The new EPIC center will provide 180,000 square feet of cleanroom and supporting space to Applied Materials, customers, universities, and partners, with operations due to begin in Q1 2026, see figure 3.

Figure 3. EPIC Facility.

The EPIC center will have a common set of state-of-the-art tools; dedicated private customer space; space for partners, peers, universities, and start-ups; and private space for Applied Materials, see figure 4.

Figure 4. EPIC Center Implementation.

Applied Materials will invest up to $4 billion over the next 7 years to establish the center. There will also be 5 to 7 satellite labs, with 2 up and running and 4 more in the discussion phase.

By locating EPIC in Silicon Valley, Applied Materials will be in close proximity to the leading technology companies, with $10 trillion of market capitalization within a 50-mile radius, as well as nearby world-class universities.

The new EPIC center represents a large investment in rearchitecting how new semiconductor processes are developed to accelerate new developments and the semiconductor roadmap.

Also Read:

SPIE 2023 – imec Preparing for High-NA EUV

TSMC has spent a lot more money on 300mm than you think

SPIE Advanced Lithography Conference 2023 – AMAT Sculpta® Announcement

IEDM 2023 – 2D Materials – Intel and TSMC

 


PCI-SIG DevCon and Where Samtec Fits
by Mike Gianfagna on 06-07-2023 at 6:00 am


PCIe (Peripheral Component Interconnect Express) is an interface standard for connecting high-speed components in PCs, Macs, and other types of computers. Think graphics, storage arrays, Wi-Fi, and the like. This communication standard has become incredibly popular. The first version of the standard was released in the early 2000s, with Intel leading its development. Like many parts of computing architectures, newer generations have delivered ever faster and more efficient performance; Gen 7 of the standard is currently in development. There is a key conference coming up on June 13 for all things PCIe. Just like DesignCon and MemCon, Samtec will be a force at this event. Read on to learn about PCIe, the PCI-SIG DevCon, and where Samtec fits.

PCIe – Where Samtec Fits

Any communication standard requires hardware and software to implement the communication protocol, electronics to drive the communication channel and an interface to the communication channel and the physical medium. These last two parts are critical to completing the channel, and this is where Samtec provides a variety of solutions to get the job done.

Both the electrical and mechanical parts of the specification must be adhered to if the interface is to be robust and reliable. Samtec offers both connector and cable solutions that meet PCI Express® electrical and mechanical specifications. The high-level summary includes:

  • High-speed edge card sockets that support one, four, eight and sixteen PCI Express links and mate with PCI Express cable assemblies
  • PCI Express-Over-FireFly™ copper and optical cable assemblies for low latency, power savings and guaranteed transmission
  • An optical adaptor card with a PCIe x16 edge card connector

For increased design flexibility, additional solutions are available that meet PCI Express electrical specifications with potential cost savings, including mezzanine board-to-board solutions and signal/power routing flexibility.

PCI-SIG DevCon – Where Samtec Fits

First, the vital stats for this important event:

PCI-SIG Developers Conference 2023

June 13-14

Santa Clara Convention Center

​Santa Clara, CA

You can register for the conference here. If you are developing any kind of computing device, PCIe is very likely part of the architecture, so this is a key show to attend. What makes it even more compelling is that registration is free if your company is involved in the standard, and over 900 companies are. Samtec is a Platinum Sponsor of the event. Here are some of the ways they are supporting the technical sessions:

Samtec and its partners will be participating in two technical sessions. Resident PCI Express technology expert Steve Krooswyk will detail excursion compliance for cables and connectors.

Martin Stumpf of Rohde & Schwarz will detail the challenge of PCIe 5.0/PCIe 6.0 compliance in interconnect. He will discuss the partnership between Samtec, Rohde & Schwarz, and Allion Labs.

The details for these two presentations are as follows:

Cable and Connector Compliance with Integrated Return Loss

Steve Krooswyk – Tuesday, June 13 | 3:30 PM – 4:30 PM PT

The upcoming PCIe 5.0 and 6.0 cable specifications and the 6.0 CEM specification are considering Integrated Return Loss (IRL) for excursion compliance. Excursions may occur as compliance further reduces noise requirements and suppliers optimize high-volume manufacturing practices. Excursions up to a limit have minimal system impact. IRL is not new; its history and process are reviewed, followed by simulation and measurement examples.

Connector and Cable Assembly Challenges for PCIe 5.0 and 6.0

Martin Stumpf – Wednesday, June 14 | 9:00 AM – 10:00 AM PT

With 32 GT/s in PCIe 5.0 and 64 GT/s in PCIe 6.0, channel characteristics like loss, reflections, and crosstalk are increasingly critical to overall system performance. We will discuss performance requirements and implementations of PCIe 5.0/6.0 connectors and cable assemblies, and the corresponding test setups and measurement methods used to characterize and verify these interconnects. The new metrics of ICN and IRL are included, as well as the related measurements.

As the PCIe specifications define the performance requirements without the test fixtures, optimized test fixture design and accurate test fixture modeling and de-embedding are key to good measurement results. We will preview modern de-embedding techniques with accurate impedance modeling of lead-ins and lead-outs.
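For readers new to de-embedding, here is the gist in code form: if the measurement cascades the left fixture, the DUT, and the right fixture, the DUT response is recovered by cascading the measurement with the fixture inverses. This is a hypothetical sketch using the open-source scikit-rf package, not the presenters’ tooling; the fixtures and DUT below are synthetic stand-in networks so the example is self-contained.

```python
import numpy as np
import skrf as rf
from skrf.media import DefinedGammaZ0

# Fabricate synthetic two-port networks standing in for real measurements.
freq = rf.Frequency(1, 32, 31, unit="GHz")
media = DefinedGammaZ0(freq, z0=50)

left = media.line(0.001, unit="m")       # stand-in for the lead-in fixture
right = media.line(0.0015, unit="m")     # stand-in for the lead-out fixture
true_dut = media.line(0.010, unit="m")   # stand-in for the device under test

# What the VNA sees: fixtures cascaded around the DUT (** is network cascade).
measured = left ** true_dut ** right

# De-embed with the fixture inverses: dut = left^-1 ** measured ** right^-1.
dut = left.inv ** measured ** right.inv
assert np.allclose(dut.s, true_dut.s, atol=1e-9)
print("recovered DUT response matches the true DUT")
```

In practice the result is only as good as the fixture models, which is exactly why the talk stresses accurate impedance modeling of lead-ins and lead-outs.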

To Learn More

Register for this event now; you won’t want to miss it. And check out Samtec’s PCI Express® Interconnect Solutions brochure to learn more about how to bring a PCIe implementation to life. And that’s the story of PCI-SIG DevCon and where Samtec fits.