
Ansys’ Emergence as a Tier 1 EDA Player— and What That Means for 3D-IC

by Daniel Nenni on 09-20-2022 at 10:00 am

Thermal, mechanical, electrical, and power behavior must be analyzed simultaneously to capture the interdependencies that are accentuated in multi-die 3D-IC designs

Over its 40+ year history, electronic design automation (EDA) has seen many companies rise, fall, and merge. In the beginning, in the 1980s, the industry was dominated by what came to be known as the big three — Daisy Systems, Mentor Graphics, and Valid Logic (the infamous “DMV”). The Big 3 has morphed over the years, eventually settling in for a lengthy run as Cadence, Synopsys, and Mentor. According to my friend Wally Rhines’ always-informative presentations, the Big 3 have traditionally represented over 80% of total EDA revenue. However, EDA is now absolutely led by four major billion-dollar-plus players, plus a collection of much smaller niche suppliers. The “Big 3” is now the “Big 4,” consisting of Ansys, a $2 billion Fortune 500 company, Synopsys, Cadence Design Systems, and Siemens Digital Industries Software (née Mentor Graphics). Much smaller niche players like Silvaco (the next largest) follow at a distance with a cool $50 million in revenue.

Ansys has a 50-year history in engineering simulation, but its involvement in EDA began with the acquisition of Apache in 2011. In the process, Ansys acquired RedHawk — a verification platform specifically for the power integrity and reliability of complex ICs — along with a suite of related products that included Totem, PathFinder, and PowerArtist. This evolution continued with the acquisition of Helic, with its suite of on-chip electromagnetic verification tools, and subsequent acquisitions in domains such as photonics, to address key verification issues in IC development.

To give context to Ansys’ role in the current EDA ecosystem, the chip design flow consists of 40 or more individual steps executed by a wide range of software tools. No single company can hope to cover the entire flow. While every stage is important, there are certain extremely difficult, critical verification steps mandated by semiconductor manufacturers that all chips must pass. These signoff verifications are required before foundries will accept a design for manufacture.

Ansys multiphysics solutions for 3D-IC provide comprehensive analysis along three axes: across multiple physics, across multiple design stages, and across multiple design levels

Referred to as golden signoff, they include:

  • Design Rule Checking (DRC)
  • Layout vs. Schematic (LVS)
  • Timing Signoff (STA: static timing analysis)
  • Power Integrity Signoff (EM/IR: electromigration/voltage drop).

The reliability and confidence in these checks rests on industry reputations and experience built up over decades, including years of close collaboration with foundries, which makes golden signoff tools very difficult and risky to displace. Today, virtually every IC design relies on Calibre from Siemens and PrimeTime from Synopsys. Joining these two longstanding golden tools, Ansys RedHawk-SC™ (for digital design) and Ansys Totem (for analog/mixed signal (AMS) design) are golden tools for power integrity signoff, critical for today’s advanced semiconductor designs.

Foundry Certification

Beyond the signoff certification of RedHawk-SC and Totem for power integrity, other Ansys tools have also been qualified by the foundries for a range of verification steps including on-chip electromagnetic modeling (RaptorX, Exalto, VeloceRF), chip thermal analysis (RedHawk-SC Electrothermal), and package thermal analysis (Icepak).

Due to limited engineering resources and demanding schedules, foundries typically work with just a few select EDA vendors as they develop each new generation of silicon processes. Most of these collaborations now exist within the bubble of the Big 4, hinging on relationships built on a reputation for delivering specific technological capabilities, working relationships forged over many years, and the reliability of those tools established by working with a wide spectrum of customers over many technology generations.

Ansys Brings EDA Into the 3D Multiphysics Workflow

The evolution of semiconductor design is moving beyond scaling down to ever-smaller feature sizes and is now addressing the interdependent system challenges of 2.5D (side-by-side) and 3D (stacked) integrated circuits (3D-IC). These disaggregate traditional monolithic designs into a set of ‘chiplets’ that offer benefits in yield, scale, flexibility, reuse, and heterogeneous process technologies. But in order to access these advantages, 3D-IC designers must grapple with the significant increase in complexity that comes with multi-chip design. Many more physical effects must be analyzed and controlled than in traditional single-chip designs, and a broad suite of physical simulation and analysis tools is critical to manage the added complexity of the multiphysics involved.

Ansys has strategically positioned itself to take on these challenges as an industry leader by leveraging its lengthy, broad multiphysics simulation experience with updated RedHawk-SC and Totem capabilities to support advances in power integrity for 3D-IC. This includes brand-new capabilities like RedHawk-SC Electrothermal that are targeted specifically at 3D-IC design challenges in thermal and high-speed integrity.

Over the past few years, Ansys has been recognized by TSMC for its critical role in the EDA design flow. In 2020, Ansys achieved early certification of its advanced semiconductor design solution for TSMC’s high-speed CoWoS (Chip-on-Wafer-on-Substrate) and InFO (Integrated Fan-Out) 2.5D and 3D packaging technologies. Continued successful collaboration with TSMC has delivered a hierarchical thermal analysis solution for 3D-IC design. In a more recent collaboration, Ansys RedHawk-SC and Totem achieved signoff certification for TSMC’s newest N3E and N4P process technologies. Similar collaborations for advanced processes, multi-die advanced packaging, and high-speed design have led to certifications from Samsung and GlobalFoundries. Ansys is even moving beyond foundry signoff and certification to define reference flows incorporating these tools, such as TSMC’s N6 radio frequency (RF) design reference flow.

TSMC has also recognized Ansys with multiple Partner of the Year Awards in the past 5 years, most recently in:

  • Joint Development of 4nm Design Infrastructure for delivering foundry-certified, state-of-the-art power integrity and reliability signoff verification tools for TSMC N4 process
  • Joint Development of 3DFabric™ Design Solution for providing foundry-certified thermal, power integrity, and reliability solutions for TSMC 3DFabric™, a comprehensive family of 3D silicon stacking and advanced packaging technologies

Achieving Greater Efficiency through Engineering Workflows

As more system companies embark on designing their own bespoke silicon and 3D-IC technology becomes more pervasive, more physics must be analyzed, and they must be analyzed concurrently, not in isolation. Multiphysics is not merely multiple physics. Building a system with several closely integrated chiplets is more complex, so more physical/electrical issues come into play. In response, Keysight, Synopsys and others have chosen to partner with Ansys, recognizing the value of its open and extensible multiphysics platforms. Keysight has integrated Ansys HFSS into their RF flow, while Synopsys has tightly integrated Ansys tools into their IC design flow.

Ansys is well-positioned to accelerate 3D-IC system design, offering essential expertise in different disciplines — in EDA and beyond — for an efficient workflow that spans a range of physics in virtually any field of engineering. For example, Ansys solutions support the complete thermal analysis of a 3D system, including the application of computational fluid dynamics to capture the influence of cooling fans, and mechanical stress/warpage analysis to ensure system reliability despite differential thermal expansion of the multiple chips. Ansys even provides technology to address manufacturing reliability, predicting when a chip will fail in the field. These products enable the understanding of silicon and systems engineering workflows from start to finish.

Ansys’ influence as a leader in physics spans decades. It extends beyond multiple physics to multiphysics-based solutions that simultaneously consider the interactions inherent in 3D-IC systems development: thermal analysis, computational fluid dynamics for cooling, mechanical stress, electromagnetic analysis of high-speed signals, low-frequency power oscillations between components, safety verification, and more, all within the context of the leading EDA flows. And Ansys’ open and extensible analysis ecosystem connects to other EDA tools and the wider world of computer-aided design (CAD), manufacturing, and engineering.

Summary

There’s little doubt that 3D-IC innovation is accelerating. As systems companies expand further into 3D-IC, they will continue to look to, and trust, Ansys solutions in support of their IC designs. Today, the vast majority of the world’s chip designers rely on Ansys products for accurate power integrity analysis. Ansys provides cyber-physical product expertise, with an acute understanding of silicon and system engineering workflows. With one foot in the semiconductor world, and another in the wider system engineering world, Ansys is uniquely positioned to provide broader multiphysics solutions for 2.5D/3D-IC that will continue to grow its footprint in EDA. The EDA Big 3 is now the Big 4, absolutely.

Also Read:

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

What Quantum Means for Electronic Design Automation

The Lines Are Blurring Between System and Silicon. You’re Not Ready.


Finally, A Serious Attack on Debug Productivity

by Bernard Murphy on 09-20-2022 at 6:00 am


Verification technologies have progressed in almost all domains over the years. We’re now substantially more productive in creating tests for block, SoC and hybrid software/hardware verification. These tests provide better coverage through randomization and formal modeling. And verification engines are faster – substantially faster in hardware accelerators – and higher capacity. We’ve even added non-functional testing, for power, safety and security. But one area of the verification task – debug – has stubbornly resisted meaningful improvements beyond improved integration and ease of use.

This is not an incidental problem; debug now accounts for almost half of verification engineer hours on a typical design. Effective debug depends on expertise and creativity, and these tasks are not amenable to conventional algorithmic solutions. Machine learning (ML) seems an obvious answer; capture all that expertise and creativity in training. But you can’t just bolt ML onto a problem and declare victory. ML must be applied intelligently (!) to the debug cycle. There has been some application-specific work in this direction, but no general-purpose solutions of which I am aware. Cadence has made the first attack I have seen on that bigger goal, with their Verisium™ platform.

The big picture

Verisium is Cadence’s name for their new AI-driven verification platform. This subsumes the debug and vManager engines for those tracking product names, but what is most important is that this platform now becomes a multi-run, multi-engine center applying AI and big data methods to learning and debug. Start with the multi-run part. To learn you need historical data; yesterday the simulation was fine, today we have bugs – what changed? There could be clues in intelligent comparison of the two runs. Or in checked-in changes to the RTL, or in changes in the tests. Or in deltas in runs on other engines – formal for example. Maybe even in hints further afield, in synthesis perhaps.

Tapping into that information must start with a data lake repository for run data. Cadence has built a platform for this also, which they call JedAI (the Cadence Joint Enterprise Data and AI Platform). Simulation trace files, log files, even compiled designs go into JedAI. Designs and testbenches can stay where they are normally stored (Perforce or GitHub, for example). From these, Verisium can easily access design revs and check-in data.

Drilling down

Now for the intelligent part: applying ML to all this data in support of much faster debug. Verisium breaks the objective down into four sub-tasks. Bug triage is a time-consuming task for any verification team: grouping bugs with a likely common cause minimizes redundant debug effort. This task is a natural candidate for ML, based on experience from previous runs pointing to similar groupings. AutoTriage provides this analysis.
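AutoTriage’s internals aren’t public, but the grouping idea is easy to sketch. This toy Python version clusters failure-log messages by text similarity; the `triage` function, the threshold, and the log strings are illustrative assumptions, not Cadence’s algorithm:

```python
from difflib import SequenceMatcher

def triage(failure_logs, threshold=0.7):
    """Group failure messages whose text is similar, on the assumption
    that similar signatures often share a common root cause."""
    groups = []  # each group is a list of similar messages
    for log in failure_logs:
        for group in groups:
            # compare against the group's representative (first) message
            if SequenceMatcher(None, log, group[0]).ratio() >= threshold:
                group.append(log)
                break
        else:
            groups.append([log])  # nothing similar yet: start a new group
    return groups

logs = [
    "ERROR: fifo_overflow in uart_tx at cycle 10234",
    "ERROR: fifo_overflow in uart_tx at cycle 10987",
    "ERROR: x-propagation on bus_rdata at cycle 55",
]
groups = triage(logs)  # the two fifo overflows cluster together
```

A real system would learn the similarity metric from prior regressions rather than use a fixed text-ratio threshold, but the payoff is the same: one debug effort per group instead of one per failure.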

SemanticDiff identifies meaningful differences between RTL code checkins, providing another input to ML. WaveMiner performs multi-run bug root-cause analysis based on waveforms. This looks at passing and failing tests across a complete test suite to narrow down which signals and clock cycles are suspect in failures. Verisium Debug then provides a side-by-side comparison between passing and failing tests.
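WaveMiner’s actual analysis is proprietary; as a rough sketch of the pass/fail comparison idea, one can flag signals whose sampled values cleanly separate passing runs from failing runs. `suspect_signals` and the sample run data here are hypothetical:

```python
def suspect_signals(passing_runs, failing_runs):
    """Flag signals whose sampled values separate passing from failing
    runs: no failing run shares a value with any passing run."""
    suspects = []
    for sig in passing_runs[0]:
        pass_vals = {run[sig] for run in passing_runs}
        fail_vals = {run[sig] for run in failing_runs}
        if pass_vals.isdisjoint(fail_vals):
            suspects.append(sig)
    return suspects

# each dict is one run's sampled signal values at the cycle under suspicion
passing = [{"req": 1, "ack": 1, "mode": 0}, {"req": 1, "ack": 1, "mode": 1}]
failing = [{"req": 1, "ack": 0, "mode": 0}]
print(suspect_signals(passing, failing))  # → ['ack']
```

Here `mode` varies in passing runs too, so it is exonerated; only `ack` perfectly separates the populations, which is exactly the kind of narrowing that lets a debugger start from a likely root cause.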

Cadence is already engaging with customers on another component called PinDown, an extension which aims to predict bugs on check-in. This looks at both historical learning and behavioral factors, like check-in times, to assess likely risk in new code changes.

Putting it all together

First a caveat. Any technique based on ML will return answers based on likelihood, not certainties. The verification team will still need to debug, but they can start closer to likely root causes and can get to resolution much faster. Which is a huge advance over the way we have to do debug today. As far as training is concerned, I am told that AutoTriage requires 2-3 regressions worth of data to start to become productive.  PinDown bug prediction needs a significant history in the revision control system, but if that history exists, can train in a few hours. Looks like training is not a big overhead.

There’s a lot more that I could talk about, but I’ll wrap with a few key points. This is the first release of Verisium, and Cadence will be announcing customer endorsements shortly. Further, JedAI is planned to extend to other domains in Cadence. They also plan APIs for customers and other tool vendors to access the same data, acknowledging that mixed vendor solutions will be a reality for a long time 😊

I’m not a fan of overblown product reviews, but I feel more than average enthusiasm is warranted here. If it delivers on half of what it promises, Verisium will be a ground breaker. You should check it out.



WEBINAR: O-RAN Fronthaul Transport Security using MACsec

by Daniel Nenni on 09-19-2022 at 10:00 am


5G provides a range of improvements over existing 4G LTE mobile networks in capacity, speed, latency, and security. One of the main improvements is in the 5G RAN: it is based on a virtualized architecture where functions can be centralized close to the 5G core for economy, or distributed as close to the edge as possible for lower-latency performance.

SEE THE REPLAY HERE

The functional split options for the baseband station processing chain result in a separation between Radio Units (RUs), located at cell sites and implementing lower-layer functions, and Distributed Units (DUs), implementing higher-layer functions.

This offers centralized processing and resource sharing between RUs, simple RU implementation requirements, easy function extendibility, and easy multivendor interoperability. The fronthaul is defined as the connectivity in the RAN infrastructure between the RU and the DU.

The O-RAN Alliance, established in February 2018, is an initiative to standardize the RAN with open interoperable interfaces between the radio and signal processing elements to facilitate innovation and reduce costs by enabling multi-vendor interoperable products and solutions while consistently meeting operators’ requirements.

The O-RAN Alliance defines that the fronthaul has to support Low Level Split 7-2x between the O-RAN Radio Unit (O-RU) and the O-RAN Distributed Unit (O-DU). The O-RAN Fronthaul is divided into data planes over different encapsulation protocols: Control Plane (C-Plane), User Plane (U-Plane), Synchronization Plane (S-Plane), and Management Plane (M-Plane). These planes carry very sensitive data and are constrained by strict performance requirements.


For its ability to mix different traffic types and its ubiquitous application, Ethernet is the preferred packet-based transport technology for next-generation fronthaul. An insecure Ethernet transport network, however, can expose the fronthaul to different types of threats that can compromise the operation of the network.

For example, data can be eavesdropped on due to the packets’ clear-text nature, and the lack of authentication can allow an attacker to impersonate a network component. This can result in manipulation of the data planes, which can be used maliciously to cause a complete network Denial-of-Service, making a security solution for the O-RAN Fronthaul critical.
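MACsec itself encrypts and authenticates Ethernet frames with AES-GCM at layer 2; the following stdlib-only Python sketch illustrates just the integrity-tag idea that defeats such tampering. HMAC-SHA256 stands in for MACsec’s integrity check value here because it ships with Python, and the key and frame contents are invented:

```python
import hashlib
import hmac

KEY = b"shared-secret-between-RU-and-DU"  # hypothetical pre-shared key

def protect(frame: bytes) -> bytes:
    # append an integrity tag, playing the role of MACsec's ICV
    # (real MACsec uses AES-GCM, not HMAC-SHA256)
    return frame + hmac.new(KEY, frame, hashlib.sha256).digest()

def verify(protected: bytes) -> bool:
    # recompute the tag and compare in constant time
    frame, tag = protected[:-32], protected[-32:]
    expected = hmac.new(KEY, frame, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

pkt = protect(b"C-Plane: beam config for O-RU #7")
assert verify(pkt)                # untampered frame passes
tampered = b"X" + pkt[1:]
assert not verify(tampered)       # any modification is detected
```

Without the tag, an attacker on the fronthaul link could silently alter C-Plane messages; with it, a forged or modified frame fails verification and is dropped, which is the property MACsec brings to every plane on the link.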

In this webinar, we will take a look at MACsec as a compelling security solution for the O-RAN Fronthaul. We will examine the very sensitive data that the fronthaul transports, its strict high-performance requirements, and the urgent need to secure it against several threats and attacks.

We will learn about the features MACsec has to protect the fronthaul, together with its implementation challenges. Finally, we will see how Comcores’ MACsec solution can be integrated and customized for Open-RAN projects, accelerating development and reducing risks and costs.

Also read:

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

CEO Interview: John Mortensen of Comcores

LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface


Advanced EM simulations target conducted EMI and transients

by Don Dingee on 09-19-2022 at 6:00 am

Advanced EM simulations yield both conducted and radiated EMI in automotive power integrity analysis

A vital benefit of advanced EM simulations is their ability to take on complicated physical test setups, substituting far easier virtual tests yielding accurate results earlier during design activities. The latest release of Keysight PathWave ADS 2023 continues speeding up engineering workflows. Let’s look at three areas of new capability: conducted EMI analysis, SMPS transient analysis, and cloud-accelerated EM simulation.

Chasing conducted and radiated EMI before finalizing layout

Analyzing EMI has often been an after-the-fact exercise, done late in the game with a complete system hardware prototype. When it fails, there’s either a retrofit or a re-spin coming, followed by another if the problem isn’t fixed the first time.

Guessing the source of EMI and the correct fix is getting much harder. Systems now have many more power rails. For instance, in an automotive environment, 12V battery power is converted to 48V for distribution, 3.3V for modules, and 1.25V or 0.8V for high-speed logic. Noise shows up every time there is DC/DC conversion anywhere in that chain. Using physical measurements, untangling what’s causing conducted EMI versus radiated EMI can be tricky.

Sorting out EMI observations in virtual space requires two EM simulation methods and high-fidelity modeling with real-world effects. Near-field simulation contributes to conducted EMI, and far-field contributes to radiated EMI. In PathWave ADS 2023, one simulation setup easily runs both and automates parameter iterations, quickly providing complete, accurate EMI results.

Adding conducted EMI analysis is a breakthrough. “With new automated differential setup techniques in modeling and simulation, PIPro is now able to assess potential conducted and radiated EMI issues as layout happens in PathWave ADS 2023,” says Heidi Barnes, Power Integrity Product Manager for Keysight EDA.

Simulations yield both time-domain (ripple) and frequency-domain (spikes or bands) results for EMI, and these results help parameterize sweeps to drill down on the root cause in the layout. For example, PIPro now automates setup of ground plate reference ports, populates a generic large signal switching model, and allows users to insert a higher fidelity switching model if needed. Another benefit of simulation is analyzing layouts under various loading conditions. Barnes concludes, “There’s not a lot of manual setups left – designers can focus on finding and correcting the causes of conducted and radiated EMI issues with simulation at layout.”

An SMPS report card and building better models

There are also improvements on the power electronics engineering side in PathWave ADS 2023 and PEPro. The first is looking at figures of merit for switched-mode power supplies (SMPS). “Measurements like transient recovery, efficiency, and voltage ripple are hard to set up in the real world, and every time the prototype changes, it all needs to be done again,” says Steven Lee, Power Electronics Product Manager for Keysight EDA.

Keysight has been building an SMPS report card, using modeling and advanced EM simulations to streamline analysis of common metrics at layout with just a few clicks in PEPro or ADS. Transient recovery is the first metric available in the new SMPS Performance Testbench, with efficiency and more metrics coming in future releases.


Another new capability goes after a nagging problem for designers: how to get more detailed and accurate models for power transistors. SPICE models often leave a lot of context out. Syntaxes differ, and translation doesn’t go well. And with simulation now gaining momentum in the SMPS design community, modeling problems are just being discovered.

Transistor IV and CV response curves don’t lie. However, translating them to a model using equations and polynomial fitting can be a time sink for power engineers. Behind the scenes, Keysight teams have been working on artificial neural networks (ANNs) for automatically creating models. In PathWave ADS 2023 and PEPro, it’s as simple as scanning images of transistor response curves off the datasheet. Cool, right? Support for silicon and silicon carbide power transistors is in this release, with gallium nitride and other technologies coming later.

Lee also says a new EMI curriculum for PathWave ADS 2023 and PEPro is launching, developed by Professor Nicola Femia at the University of Salerno. It focuses on SMPS design with workspaces, labs, reference hardware from Digi-Key, and simulation models.

Here comes the cloud for PathWave ADS 2023

One more area Keysight teams have been working on for some time and are now ready to roll out: high-performance computing support for PathWave ADS 2023. Engineers are accustomed to waiting for simulations to finish, often planning their workflow around the wait. Distributed computing can give back valuable design time by reducing EM simulation run times by up to 80%, using multiple concurrent simulations running in an HPC cluster. Teams with limited hardware access can scale up instantly using turnkey cloud platforms.

For example, in a DIMM-based system with DDR4 memory on a 12-layer board running up to 10 GHz, a single signal integrity simulation takes 3 hours. Parallelizing 12 SIPro jobs within PathWave ADS 2023 on a cloud HPC platform improves simulation time by 84%. It’s not just more powerful processors at work. Keysight has looked at the steps in advanced EM simulations where multi-threading and parallelization can speed up results.
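The mechanics of that speedup are a straightforward fan-out of independent jobs across a pool of workers. A minimal sketch, with `time.sleep` standing in for a long simulation and no relation to the actual SIPro job scheduler:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_sim(job_id):
    """Stand-in for one independent EM simulation job (hypothetical)."""
    time.sleep(0.1)  # pretend this is hours of field solving
    return job_id, "done"

def run_all(jobs, workers):
    # fan the independent jobs out across a pool, as an HPC cluster would
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_sim, jobs))

# 12 independent jobs on 12 workers finish in roughly the time of one job
# rather than 12x it, which is where large percentage reductions come from
results = run_all(range(12), workers=12)
```

The quoted 84% figure also reflects solver-level multi-threading within each job, so real gains depend on how much of each simulation parallelizes, not just on worker count.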

There are two licensing models, both using Rescale as the turnkey cloud provider for PathWave ADS 2023. In Keysight’s Design Cloud user experience, the GUI runs on a local machine with simulations launched on HPC clusters, which can be on-prem or in the cloud. Existing ADS licenses cover the GUI and simulator, and floating HPC licenses enable parallel jobs.

Discover more about PathWave ADS 2023

Whether design teams run entirely on Keysight EDA software or are looking for advanced EM simulations and productivity within other EDA environments, these enhancements can help. Here are a few resources for more info.

Web pages:

What’s New in High-Speed Digital Design and Simulation?

What’s New in Power Electronics Design and Simulation?

Videos:

PathWave ADS: Power Integrity Simulation Workflow with PIPro

Building Your Own Switching Device Models in ADS for SMPS Design

Introduction to High-Performance Computing in PathWave ADS on Rescale

Press release:

Keysight Delivers Design-to-Test Workflow for High-Speed Digital Designs


GM Should BrightDrop-Kick Cruise

by Roger C. Lanctot on 09-18-2022 at 6:00 pm


The GM Authority newsletter informed us last week that General Motors’ BrightDrop commercial vehicle group was planning to adopt autonomous vehicle technology created by GM-owned Cruise Automation for its delivery vans. Days later, Cruise CEO Kyle Vogt announced plans to bring its nascent autonomous taxi service to Phoenix and Austin before year’s end.

Transitioning GM’s autonomous vehicle development activities toward the commercial vehicle and delivery sector makes a lot of sense. It is the one sector that offers the prospect of rapid scaling to target applications that are dependent upon predictable routes.

Were GM to perform a complete pivot of its Cruise development activities toward commercial vehicles it might be seen as the most brilliant move that the company has made yet in AV. Instead, we are left with a teasing suggestion from BrightDrop’s CEO with no formal endorsement from senior GM management.

In fact, the subsequent announcement from Cruise can be seen as a riposte, a brushback, to suggest that GM execs and their “ideas” are not welcome at Cruise’s San Francisco headquarters. Cruise is doubling down on its pointless money-burning pursuit of unscalable autonomous technology intended to solve a non-existent problem.

The main difference between commercial vehicle sector AV applications and the robotaxi path to market is that commercial vehicle operators face real challenges in personnel shortages, safety, and logistics. There is money to be made and saved, and useful operational gains to be had, from automating delivery vehicles.

Robotaxis are nothing more than an expensive replacement for existing human-operated taxis and ride hailing operators. Robotaxis are not solving a problem and they have a too-narrow operational design domain – i.e. they cannot drive passengers from the city to the airport or suburbs.

This is no time for Cruise to spread its twilight driver novelty act (Cruise is currently operating within restricted neighborhoods and timeframes in San Francisco) to multiple other U.S. cities. There is no organic demand for robotaxis – certainly not as currently conceived.

This is no time for GM to play footsie with a wouldn’t-that-be-nice approach to automating commercial vehicles. With Cruise torching hundreds of millions of dollars each quarter in pursuit of a fantasy, it’s time for a massive rethink and refocus.

GM should shift its massive resources, personnel, technical capabilities, and financial backing toward a campaign to automate commercial vehicles piggy-backed on BrightDrop. BrightDrop is sailing into the market on a sound footing of almost limitless demand ideally tuned to finance AV development and expand valuable data gathering. How about it GM? Hit that clutch and pull Cruise out of the ditch.

Also Read:

Ultra-efficient heterogeneous SoCs for Level 5 self-driving

GM Buyouts: Let’s Get Small!

MAB: The Future of Radio is Here


Intellectual Abilities of Artificial Intelligence (AI)

by Ahmed Banafa on 09-18-2022 at 4:00 pm


To understand AI’s capabilities and abilities we need to recognize the different components and subsets of AI. Terms like Neural Networks, Machine Learning (ML), and Deep Learning need to be defined and explained.

In general, Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

There are three types of artificial intelligence (AI):

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

The following chart explains them.

Neural networks

In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory.

Typically, a neural network is initially “trained” or fed large amounts of data and rules about data relationships (for example, “A grandfather is older than a person’s father”). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world).

Deep learning vs. machine learning

To understand what Deep Learning is, it’s first important to distinguish it from other disciplines within the field of AI.

One outgrowth of AI was machine learning, in which the computer extracts knowledge through supervised experience. This typically involved a human operator helping the machine learn by giving it hundreds or thousands of training examples, and manually correcting its mistakes.

While machine learning has become dominant within the field of AI, it does have its problems. For one thing, it’s massively time consuming. For another, it’s still not a true measure of machine intelligence since it relies on human ingenuity to come up with the abstractions that allow a computer to learn.

Unlike machine learning, deep learning is mostly unsupervised. It involves, for example, creating large-scale neural nets that allow the computer to learn and “think” by itself — without the need for direct human intervention.

Deep learning “really doesn’t look like a computer program.” Ordinary computer code is written in very strict logical steps, but what you’ll see in deep learning is something different: you don’t have a lot of instructions that say, “If one thing is true, do this other thing.”

Instead of linear logic, deep learning is based on theories of how the human brain works. The program is made of tangled layers of interconnected nodes. It learns by rearranging connections between nodes after each new experience.
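As a concrete miniature of “learning by rearranging connections,” a single artificial neuron can learn the logical AND function purely by nudging its connection weights after each mistake. This is a deliberately tiny sketch, nothing like a modern deep network:

```python
import random

def train_neuron(samples, epochs=20, lr=0.1):
    """A single artificial neuron: each wrong answer nudges the
    connection weights, i.e. learning rearranges the connections."""
    random.seed(0)  # reproducible starting weights
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1  # strengthen or weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# learn logical AND purely from labeled experiences
net = train_neuron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```

No rule for AND is ever written down; the behavior emerges from repeated weight adjustments, which is the essential contrast with “if one thing is true, do this other thing” programming.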

Deep learning has shown potential as the basis for software that could work out the emotions or events described in text (even if they aren’t explicitly referenced), recognize objects in photos, and make sophisticated predictions about people’s likely future behavior. An example of deep learning in action is voice recognition, like Google Now and Apple’s Siri.

Deep Learning is showing a great deal of promise — and it will make self-driving cars and robotic butlers a real possibility. The ability to analyze massive data sets and use deep learning in computer systems that can adapt to experience, rather than depending on a human programmer, will lead to breakthroughs. These range from drug discovery to the development of new materials to robots with a greater awareness of the world around them.

Deep Learning and Affective Computing 

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science (including deep learning), psychology, and cognitive science. While the origins of the field may be traced as far back as early philosophical inquiries into emotion (“affect” is essentially a synonym for “emotion”), the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy: the machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.

Affective computing technologies using deep learning sense the emotional state of a user (via sensors, microphone, cameras and/or software logic) and respond by performing specific, predefined product/service features, such as changing a quiz or recommending a set of videos to fit the mood of the learner.

The more computers we have in our lives, the more we’re going to want them to behave politely and be socially smart. We don’t want them to bother us with unimportant information. That kind of common-sense reasoning requires an understanding of the person’s emotional state.

One way to look at affective computing is as human-computer interaction in which a device has the ability to detect and appropriately respond to its user’s emotions and other stimuli. A computing device with this capacity could gather cues to user emotion from a variety of sources. Facial expressions, posture, gestures, speech, the force or rhythm of keystrokes, and the temperature changes of the hand on a mouse can all signify changes in the user’s emotional state, and these can all be detected and interpreted by a computer. A built-in camera captures images of the user, and algorithms are used to process the data to yield meaningful information. Speech recognition and gesture recognition are among the other technologies being explored for affective computing applications.

Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using deep learning techniques that process different modalities, such as speech recognition, natural language processing, or facial expression detection.

Emotion in machines

A major area in affective computing is the design of computational devices that either exhibit innate emotional capabilities or can convincingly simulate emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents in order to enrich and facilitate interactivity between human and machine. While human emotions are often associated with surges in hormones and other neuropeptides, emotions in machines might be associated with abstract states tied to progress (or lack of progress) in autonomous learning systems. In this view, affective emotional states correspond to time-derivatives in the learning curve of an arbitrary learning system.
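That “time-derivative of the learning curve” idea can be made concrete with a deliberately simplified sketch (the function name and thresholds are invented for illustration): falling error reads as pleasure, a flat curve as boredom, rising error as frustration.

```python
def affect_from_learning_curve(errors, tol=0.01):
    """Map changes in a learning system's error curve to toy affective
    states: improving -> 'pleased', stalled -> 'bored', regressing ->
    'frustrated'. Purely illustrative, not a published model."""
    states = []
    for prev, cur in zip(errors, errors[1:]):
        delta = cur - prev                 # time-derivative of the curve
        if delta < -tol:
            states.append("pleased")
        elif delta > tol:
            states.append("frustrated")
        else:
            states.append("bored")
    return states

# An error curve that improves, stalls, then regresses
print(affect_from_learning_curve([1.0, 0.8, 0.8, 0.9]))
# -> ['pleased', 'bored', 'frustrated']
```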

Two major categories describe emotions in machines: emotional speech and facial affect detection.

Emotional speech includes:

  • Deep Learning
  • Databases
  • Speech Descriptors

Facial affect detection includes:

  • Body gesture
  • Physiological monitoring

The Future

Affective computing using deep learning tries to address one of the major drawbacks of online learning versus in-classroom learning: the teacher’s capability to immediately adapt the pedagogical situation to the emotional state of the student in the classroom. In e-learning applications, affective computing using deep learning can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services, i.e., counseling, benefit from affective computing applications when determining a client’s emotional state.

Robotic systems capable of processing affective information exhibit higher flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing with deep learning abilities to enhance realism and provide a higher degree of autonomy.

Other potential applications are centered around Social Monitoring. For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry. Affective computing with deep learning at the core has potential applications in human computer interaction, such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood. Companies would then be able to use affective computing to infer whether their products will or will not be well received by the respective market. There are endless applications for affective computing with deep learning in all aspects of life.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing


Also Read:

Synopsys Vision Processor Inside SiMa.ai Edge ML Platform

Ultra-efficient heterogeneous SoCs for Level 5 self-driving

Samtec is Fueling the AI Revolution


Podcast EP107: The Impact of Arteris IP and Its Partnerships on the Automotive Industry and Beyond

by Daniel Nenni on 09-16-2022 at 10:00 am

Dan is joined by Michal Siwinski, Chief Marketing Officer for Arteris IP. Arteris provides network-on-chip interconnect semiconductor IP and deployment technology to accelerate SoC development and integration for a wide range of applications from AI to automobiles, mobile phones, IoT, cameras, SSD controllers, and servers.

Dan explores the impact Arteris is having on high-growth markets such as automotive with Michal. The company’s partnership with Arm is also explored. The impact of Arteris and its partnerships beyond the automotive market are also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Samsung Foundry Forum & SAFE™ Forum 2022

by Daniel Nenni on 09-16-2022 at 6:00 am


It has been an exciting time in the semiconductor industry, and the excitement is far from over. 2022 and 2023 will be challenging in many different ways, and live events have just begun to return. Foundries are the cornerstones of the semiconductor industry, so I am absolutely looking forward to the live foundry events coming up in October.

Samsung Foundry kicks this series of events off with:

“Join our Samsung Foundry Forum and SAFE Forum 2022. These two forums will be held in several locations around the world. After the in-person events, please join our online on-demand event for even more exciting content. You are invited to take part in these forums where the vision and innovative technology of Samsung Semiconductor will be discussed.”

Now that events are live, the ever-important semiconductor networking has resumed. While most of the conferences I have attended have been 50% full, the foundry events have been 100% full. That is how important they are.

REGISTER NOW

Into the gates of innovation

Samsung Foundry invites you to the Samsung Foundry Forum and SAFE Forum 2022. These forums will be the first in-person events held in three years!

At the Samsung Foundry Forum, join us to gain insight on our vision and latest technological innovations. You will hear from our top technical experts as well as an industry guest speaker. There will be opportunities to speak with our partners at their booths and network with Samsung Foundry and your peers throughout the event.

At the SAFE Forum, join us to hear about industry trends and learn from our SAFE partners as they present solutions to your EDA, IP, DSP, and Packaging challenges. There will be partner and customer keynotes, a panel session, and numerous technical sessions. In addition, similar to SFF, there will be opportunities to visit our partners at their booth and network with Samsung Foundry and your peers throughout the day.

All participants will receive a welcoming gift and be entered into a raffle. We hope you enjoy the 2022 Samsung Foundry Forum and SAFE Forum.

* SAFE Forum 2022 will only be held in the U.S., on October 4.

Why should you attend? I generally go for the food, and Samsung has the best food. Samsung is also very generous with the gifts. But most important is the networking. SAFE Forum will be filled with experts from the fabless semiconductor ecosystem. More than 100 Samsung ecosystem partners will attend in addition to the expert speakers from Samsung:

Day 1

Keynote: Siyoung Choi, President and GM of Samsung Foundry Business

Process Technology: Gitae Jeong, Head of Technology Development Team; Jong-Ho Lee, VP of Specialty Technology Team

Advanced Heterogenous Integration: MoonSoo Kang, Head of Business Development Team

Manufacturing Excellence: YK Hong, VP of Yield Enhancement Team; Jonathan Taylor, VP, Head of SAS Fab Engineering

Design Platform: Ryan Lee, EVP Head of Design Platform Foundry Business

Business Outlook and Service: Sang-Pil Sim, EVP Head of World Wide Sales and Marketing, Marco Chisari, EVP Head of Americas Office Foundry Sales and Marketing

REGISTER NOW

Day 2

Welcoming Remarks: Ryan Lee, EVP Head of Design Platform Foundry Business

Business Overview: MoonSoo Kang, Head of Business Development Team

Tech Session 1, GAA and More than Moore: Ansys, Cadence, Samsung, Siemens/Qualcomm, Synopsys

Tech Session 2, Multi-Die Integration: Amkor Technology, Cadence, Samsung, Siemens, Synopsys

Tech Session 3, Advanced IP for HPC: Alphawave, Cadence, Rambus, Samsung, Synopsys

Tech Session 4, Advanced Design Platform: ADT, Samsung, Samsung TSP, SiFive

And this is a worldwide event:

US: 03-04 Oct, 2022 (PDT, UTC-7), 170 S Market St, San Jose, CA 95113, USA

EMEA: 07 Oct, 2022 (CEST, UTC+2), Terminalstraße Mitte 20, 85356 München-Flughafen, Germany

Japan: 18 Oct, 2022 (JST, UTC+9), 1 Chome-9-1 Daiba, Minato City, Tokyo 135-8625, Japan

Korea: 20 Oct, 2022 (KST, UTC+9), 524 Bongeunsa-ro, Gangnam-gu, Seoul, Republic of Korea

REGISTER NOW


Three Ways to Meet Manufacturing Rules in Advanced Package Designs

by Kendall Hiles on 09-15-2022 at 10:00 am


Designers are often amazed at the diversity of requirements that fabricators and manufacturers have for metal-filled areas in advanced package designs. Package fabricators and manufacturers do not like solid metal planes or large metal areas. Their strict metal fill requirements address two main issues. First, the dielectric and metal layers can be very thin, 15 µm or less, and during the build-up and RDL process they can suffer from areas of delamination due to trapped pockets of gas; think of it like adding a screen protector to your smartphone and how hard it is to get the air bubbles out. Second, uneven conductor densities on the same layer or across layer pairs can cause warpage in the package and/or the wafer.

The combination of these issues makes the designer’s job of meeting the manufacturing rules a challenge. Further, the diversity of substrate technologies from numerous vendors means there’s no one-size-fits-all solution. In this article, we will walk through three methodologies that are commonly utilized on advanced package designs to achieve foundry/OSAT requirements for metal areas and planes:

  1. Dynamic hatched filled metal areas
    By far the easiest and fastest methodology. Some additional steps may be needed based on the density requirements.
  2. Outgassing voids in metal areas
    A post process solution that can be customized for about any situation.
  3. Dummy metal fill
    This is the way most silicon designs handle low density areas. Typically used for silicon interposer designs.

Dynamic hatched filled metal areas

One of the simplest ways to solve both outgassing and metal fill coverage is to use dynamic hatched fill. When adding square or diagonal hatch, the package design tool should tell you what the base density will be across the plane, which makes hitting your target density fairly simple.

You will need to make sure to fill any incomplete or partial hatches to prevent acute angle issues, and you should also offset the hatch on adjacent layers to prevent EMI and signal integrity issues. By setting these up in the beginning of the project, you save time over doing it at the last minute just before or during tape-out. Most manufacturers and fabricators have manufacturing sign off design rules that need to be met before manufacturing can begin. These design rules check for manufacturing and yield issues, like spacing, and problem layout items, like acute angles, density, and fill. When violations are found, designers can save time finding and fixing them by cross-probing from the design rule checking tool to the layout tool, if that capability is supported.

Figure 1. 30 µm void, 40 µm pitch, 43% fill
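The arithmetic behind a caption like Figure 1 is straightforward. Assuming square openings on a regular grid (a simplification; the helper name below is invented, not a package-design-tool API), the base density works out as:

```python
def hatch_fill_density(void_um: float, pitch_um: float) -> float:
    """Metal coverage of a hatch pattern with square openings of side
    `void_um` repeated at `pitch_um`. Hypothetical helper for
    illustration only."""
    if not 0.0 <= void_um <= pitch_um:
        raise ValueError("void must fit within the pitch")
    open_fraction = (void_um / pitch_um) ** 2   # area removed per grid cell
    return 1.0 - open_fraction

# Figure 1's numbers: 30 um voids on a 40 um pitch
density = hatch_fill_density(30, 40)   # 0.4375, i.e. ~43% fill
```

With this in hand, hitting a vendor's density target is a matter of picking a void/pitch ratio and letting the tool report confirm it.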

In some technologies the base hatch can be amended to add other features required for a specific vendor.

Figure 2. HDFOWLP with pad voids and additional plane voids

Dedicated outgassing voids

It is common to see designers utilize standalone outgassing voids. Unlike the dynamic hatched fill, this is a post process. Designers use outgassing voids to get void shapes—like circles, rectangles, oblongs, octagons, or hexagons—or to stagger the voids. Once you find your formula, the process is predictable and very easy to update for layout changes. Using a density-aware, multi-pass outgassing routine enables designers to work on signal integrity and power integrity issues while simultaneously considering the manufacturing process requirements — resulting in significant time savings.

Metal balancing can be a density per layer or a layer pair target. Some manufacturers also utilize sub-layer blocks (125 µm–250 µm windows of density), like walking blocks or adjacent blocks.
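A sub-layer block check of this kind can be sketched as a walk over non-overlapping windows of a rasterized layer (a pure-Python sketch with invented names; production sign-off tools work on real geometry, not coarse grids):

```python
def block_densities(grid, block):
    """Metal density of each non-overlapping block x block window of a
    rasterized layer, where grid cells are 1 (metal) or 0 (empty).
    Returns {(row, col) of window origin: density}."""
    rows, cols = len(grid), len(grid[0])
    densities = {}
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            cells = [grid[r][c]
                     for r in range(r0, min(r0 + block, rows))
                     for c in range(c0, min(c0 + block, cols))]
            densities[(r0, c0)] = sum(cells) / len(cells)
    return densities

# A layer whose left half is solid metal and right half is empty
layer = [[1, 1, 0, 0],
         [1, 1, 0, 0]]
print(block_densities(layer, 2))   # -> {(0, 0): 1.0, (0, 2): 0.0}
```

Any window falling outside the vendor's min/max band is a candidate for void insertion or fill.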

Figure 3. 125 µm density sub-block regions

Whatever the rules, make sure the adjacent layer voids are offset, and keep the voids from going over any differential signal routing. Differential signals or pairs can have issues if the voids are unevenly dispersed on the adjacent layers over the pair. You may also see clearance rules from the void to a micro-via/polyimide opening or to a trace.

Figure 4. Multi-pass density aware voids

Figure 5. Adjacent layer void clearance

In high-speed designs or designs with high current draw, designers utilize automation-guided manual void placement. This helps users meet the manufacturing requirements while being fully aware of where each void is placed. 5G packages are a perfect use case for this method, which is recommended over the shotgun approach of fully automated methods, where manual cleanup of unwanted voids is too time-consuming.

Figure 6. Staggered rectangle voids to differential pairs

Figure 7. Degassing void analysis identifies areas requiring void insertion

Figure 8. As voids are added, circles show the effective radius; green areas needing a void and the adjacent layers are shown

Dummy metal fill

Another metal balancing method utilized on interposer designs with high-bandwidth memory (HBM) or RDL interconnects is dummy fill. Dummy fill refers to unconnected metal shapes. It can reduce capacitance and help increase manufacturing yield. It can be multi-pass with multiple shapes that can grow to a set maximum length. It can also be density aware, adding fill to hit a target value.
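A density-aware fill pass can be sketched greedily (an invented helper on a coarse grid; real fill engines also respect spacing, shape, and connectivity rules):

```python
def fill_to_target(grid, target):
    """Add unconnected 'dummy' metal cells in scan order until the
    layer density reaches `target` (a fraction of total area).
    Mutates `grid` in place and returns the number of cells added."""
    cells = [(r, c) for r, row in enumerate(grid) for c, _ in enumerate(row)]
    total = len(cells)
    metal = sum(grid[r][c] for r, c in cells)
    added = 0
    for r, c in cells:
        if metal / total >= target:
            break
        if grid[r][c] == 0:
            grid[r][c] = 1        # dummy metal: no net connection
            metal += 1
            added += 1
    return added

layer = [[1, 0, 0, 0]]            # 25% dense
print(fill_to_target(layer, 0.5)) # -> 1 (layer is now 50% dense)
```

A multi-pass engine would repeat this per density window rather than per layer, shrinking the fill shapes on each pass.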

Utilizing a density analysis tool that allows visualization of the density windows in the host layout tool is paramount to finding and fixing areas and layers that do not meet the vendor rules.

In any of these methodologies you will need to simulate to make sure that the solution meets your performance specification. While foundries and OSATs are focused on manufacturability and yield, it falls to the user to ensure compliance with the performance specification. You must simulate your power delivery before you dismiss an outgassing methodology. At first glance, having signals cross a plane area with hundreds of voids like we saw in the earlier example might sound like a bad idea; however, it can behave similarly to solid fill and may not present any issues. Without PDN simulation, you’re just guessing at its suitability.

Figure 9. Multi-pass density aware dummy fill

Analysis using the appropriate methodology will ensure that the design meets performance specifications. Recommended types of analysis include DC drop (voltage drop, current density, via currents), PDN impedance analysis, and signal integrity analysis, including return path checks.

Figure 10. Tightly coupled return currents flowing on the cross hatched plane layer underneath the trace

Conclusion

In summary, dynamic hatched fill, outgassing voids, and dummy metal fill are the most common methods to achieve foundry/OSAT requirements for metal areas and planes. The key is choosing the methodology that best meets vendor rules, meets your PDN specifications, allows rapid ECO turns, and is repeatable. To expedite verification, make sure you turn on dynamic cross-probing between the vendor sign off tool and the layout tool.


Also Read:

Connecting SystemC to SystemVerilog

Today’s SoC Design Verification and Validation Require Three Types of Hardware-Assisted Engines

Resilient Supply Chains a Must for Electronic Systems


Synopsys Vision Processor Inside SiMa.ai Edge ML Platform

by Bernard Murphy on 09-15-2022 at 6:00 am

Dynamic range min

SiMa.ai just announced that they achieved first silicon success on their new MLSoC, for AI applications at the edge, using Synopsys’ design, verification, IP and design services solutions. Notably this design includes the Synopsys ARC® EV74 processor (among other IP) for vision processing. SiMa.ai claim their platform, now released to customers, is significantly more power efficient than competing options and provides hands-free translation from any trained network to the device. (Confirming an earlier post that software rules ML at the edge.) The company has impressive funding and experienced leadership so this is definitely a company to watch.

Strong Vision ML Starts with Strong Imaging

In modern intelligent designs, AI gets the press but would be worthless if presented with low-quality images. A strong image processing stage ensures that, between the camera and the ML stage, images are optimized to the greatest extent possible, particularly to meet or exceed how the human eye, still the golden reference, sees an image.

A dedicated ISP stage can get pretty sophisticated, up to and including its own elements of machine learning. Note: I don’t know how much, if any, of the Synopsys ML support is included in the SiMa solution, or the range of EV74 ML capabilities they use. You will have to ask SiMa those questions.

ISP functions include de-mosaicing, which compensates for the raw pixel-based image sensor, overlaid by a color filter array, by interpolating a smooth image from that pixelated input. Especially in surveillance cameras, fisheye lenses require compensation for geometric distortion, another ISP function. Add to this list de-noising, color balance, and a host of other options, all essential when matching to similarly compensated training images.

I personally find high dynamic range (HDR) to be one of the most interesting ISP adjustments, especially for AI apps. The opening images for this article illustrate an example HDR application. On the left is an image after other compensations not including HDR. The right image is HDR compensated. Many ISP functions optimize globally; HDR is a local optimization, balancing between bright areas and darker areas in an image. Before compensation, features in low-light areas are almost invisible. After compensation, features are clear across the image despite a wide range of brightness. This is critically important for ML to detect, say, a pedestrian stepping off a sidewalk in a shaded area on a bright day.
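The local-versus-global distinction can be illustrated with a toy tone-mapping pass (window size, target, and exponent are illustrative choices, not Synopsys parameters): each pixel is scaled by a gain derived from its own neighborhood, so dark regions are lifted while bright ones are compressed.

```python
def local_tone_map(img, target=0.5, strength=0.5):
    """Toy HDR-style compensation on a 2D list of luminances in [0, 1]:
    gain = (target / local 3x3 mean) ** strength, applied per pixel.
    Dark neighborhoods get gain > 1, bright ones gain < 1."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # 3x3 neighborhood mean, clamped at the image borders
            patch = [img[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            local_mean = max(sum(patch) / len(patch), 1e-6)
            gain = (target / local_mean) ** strength
            out[r][c] = min(1.0, img[r][c] * gain)
    return out

shadow = local_tone_map([[0.1] * 3 for _ in range(3)])      # dark region lifted
highlight = local_tone_map([[0.8] * 3 for _ in range(3)])   # bright region compressed
```

The shadow pixels rise (0.1 to roughly 0.22) while the highlights fall (0.8 to roughly 0.63), narrowing the brightness range while preserving ordering, which is roughly what HDR compensation buys the ML stage downstream.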

While Synopsys doesn’t directly provide application software, they do offer a set of tools that designers use to create and optimize their own ISP software. The Synopsys MetaWare Development Toolkits support C/C++ and OpenCL C programming as well as vision kernels to ease application development. For those of you who don’t have applications expertise in these areas, there are also open-source solutions 😊

Intelligence in image signal processing

The EV74 processor optionally supports ML processing. I suspect this isn’t relevant to the SiMa application, but it is relevant to image processing, even before you get to object identification. Super-resolution methods aim to construct a higher resolution image from a lower resolution input using one of many possible neural net techniques. Consumer and medical applications often apply super resolution for graphic enhancement, using learning to infer reasonable interpolation pixels between existing pixels.

The EV74 DNN option can handle more than just that application. It supports direct mapping from the Caffe and TensorFlow frameworks, and the ONNX neural network interchange format. Edge AI in many applications demands a single chip solution. EV74 can support a standalone implementation (with appropriate memory and other functions in the SoC). Or integrated together with value-added specialist functionality like that from SiMa.ai.

What is coming next in solutions?

I talked more generally with Stelios Diamantidis (Distinguished Architect, Head of Strategy, Autonomous Design Solutions at Synopsys). He mentioned that edge applications are inherently heterogeneous, as data travels from optics, to sensors, to compute, to memory, to display, etc. Maintaining low end-to-end latency across the system is contributing to the chiplet movement. One example application is drones, which demand fast response times to avoid obstacles. He also sees a big pickup in industrial applications, for example LIDAR sensing in production lines to control grippers. Either case requires strong vision and AI to support the performance requirements of increasingly complex neural network models in SoCs.

Stelios added that between industrial and vehicle applications, such designs must be robust to a lot of environmental variation. Design methods and standards prove this in part, complemented always by an established track record in design, supported across a wide variety of applications, from semiconductor leaders to pioneers.

Very interesting stuff. You can read the SiMa.ai press release HERE.