
Ecosystem: ARM versus Intel
by Daniel Nenni on 09-05-2013 at 2:45 pm

Ecosystem is everything when it comes to modern semiconductor design, especially if it is mobile. The fabless semiconductor industry has been all about ecosystem since the beginning and that is why we hold supercomputers in our hands today, believe it. After the invention of the transistor in 1947, and the invention of the integrated circuit in 1959, the fabless semiconductor ecosystem started to evolve into what it is today, a force of nature.

The semiconductor business transition started with the emergence of the ASIC (Application Specific Integrated Circuit). Electronic systems companies refused to be limited by the general-purpose semiconductors of that era and started doing design work in-house. Back in the day, companies such as VLSI Technology and LSI Logic made billions of dollars making ASICs. In fact, this is how Apple got started as a fabless semiconductor company: it did ASICs with Samsung for the first generations of iProducts.

Next came programmable devices (FPGAs) from the likes of Xilinx and Altera. An FPGA is literally a box of Legos in which you can integrate IP blocks with custom design work for far less time and money. As you can imagine, the Xilinx ecosystem of tools, IP, and design partners is key to their market domination. Xilinx was also one of the first fabless semiconductor companies, which brings us to the next and probably the most disruptive phase of semiconductor history: the fabless semiconductor ecosystem.

TSMC started it with what is now called the Open Innovation Platform, investing hundreds of millions of dollars in silicon-proven IP, reference design flows, and a network of services partners around the world. If you want to know why TSMC commands such a large market share today, it is all about the ecosystem, absolutely. This brings us to the point of this blog: take a close look at the details of the upcoming Intel Developer Forum and the ARM Technical Conference:

IDF is the leading forum for anyone using Intel® Architecture or Intel® technologies to change the world. And this year, it’s more technical than ever. This is where developers, engineers, technology managers, and business leaders from across the industry can meet, share ideas, and learn about Intel’s latest developments.

ARM TechCon™ is one of the fastest growing events in the industry. In 2012, over 4000 hardware and software engineers attended the three-day conference. The event, supported by over 85 Connected Community Partners, provides 140 hours of presentations and tutorials aimed at enabling you to optimize your ARM IP-based design. The show floor features product demonstrations and hands-on workshops fostering the perfect networking environment to Connect, Collaborate and Create future ARM Powered® devices.

I will be attending both events again this year and will do a closer comparison afterwards but based on last year and the current promotional materials, Intel still does not seem to get the whole ecosystem thing. IDF is all about Intel and ARM TechCon is all about the ecosystem, which is why ARM commands such a large market share and will continue to do so in the coming years. Just my opinion of course.



3D: the Backup Plan
by Paul McLellan on 09-05-2013 at 1:20 pm

With the uncertainties around the timing of 450mm wafers, EUV (whether it works at all, and when), and new transistor architectures, it is unclear whether Moore’s law as we know it is going to continue, and in particular whether the cost per transistor is going to remain economically attractive, especially for consumer markets that are very price sensitive.

One of the most important alternative approaches is 3D chips based on through-silicon vias (TSVs). This is one of the focuses of Semicon Taiwan, which is taking place this week. It is also a topic that Karen Savala, the president of SEMI Americas, will be talking about in her keynote at the upcoming 2013 MEPTEC Roadmaps Symposium on September 24 in Santa Clara. MEPTEC is the Microelectronics Packaging and Test Engineering Council.

Although many companies have some sort of interposer or 3D stacking technology on their roadmaps, actual adoption for production manufacturing is slow. Gartner estimates that TSV adoption for memory will be pushed out to 2014 or 2015, with non-memory applications delayed to 2016 or 2017, if then. They currently forecast that TSV devices will account for less than five percent of the units in the total wafer-level packaging market by 2017.

Part of the problem is lack of cooperation across the industry as to what technologies should be introduced when. It looks like a repeat of the 300mm wafer transition where the industry couldn’t agree when to introduce 300mm production and stop advanced development at 200mm, and they couldn’t afford to do both. As a result, there were several false starts and hundreds of millions of dollars were lost. For 450mm there are lots of consortia for collaborative R&D, probably the most important being G450C which is backed by TSMC, Intel, GlobalFoundries, Samsung and IBM and is well enough financed to have its own fab.

For 3D-IC to be widely adopted, meaningful collaboration throughout the value chain still needs to occur. Part of the problem is that it is not even clear which parties in the value chain should be doing which steps in the manufacturing. All the players have an existing business model that must be defended or exploited based on what technical discoveries occur and what customers eventually turn out to want. It is natural that the fabless companies, foundries and OSAT houses should want to make their piece of the pie as big as possible, but without deep collaboration there won’t be a pie to divide up.

As Karen concludes: “We’ll continue to see discoveries, inventions and new products in 3D-IC and progress will continue. Hundreds of patents in the area have already been issued. We’re seeing innovation and invention in wafer bonding, via manufacturing, and other areas. Standards work at JEDEC and SEMI will also contribute to the market’s development, both to enable processes and cost-reduce manufacturing, but without the emergence of a new, robust collaboration model that can deliver meaningful agreements between key constituencies, the promise of 3D innovation will remain distant and elusive.”

Karen’s thoughts on 3D collaboration are online here. Details of the 2013 MEPTEC Roadmaps Symposium are here.


Did you miss Cadence’s MemCon?
by Eric Esteve on 09-05-2013 at 4:42 am

That’s too bad, as you have missed the latest news about the Hybrid Memory Cube (presentation by Micron), the Wide I/O 2 standard, and other standards like LPDDR4, eMMC 5.0, and LRDIMM. The good news is that you can find all these presentations on the MemCon proceedings web site.
I first had a look at Richard Goering’s excellent blog, Wide I/O and Memory Cube, and then at the HMC presentation made by Mike Black from Micron. HMC is an amazing technology, and the comparison table shown by Micron helps explain why:

  • Channel complexity: HMC is 90% simpler than DDR3, using 70 pins instead of… 715 pins.
  • Board Footprint: HMC board space occupies 378 mm² instead of 8,250 mm² for DDR3!
  • Energy efficiency is 75% better
  • Bandwidth: HMC delivers 857 MB/pin compared with 18 MB/pin for DDR3 and 29 MB/pin for DDR4

What is the secret sauce for such amazing performance? Once again, it’s because the protocol uses very high-speed, SerDes-based serial links instead of parallel data transfer, just as PCI Express replaced PCI, SATA replaced PATA, and so on. Except that the link is defined at rates between 15 Gbps and 28 Gbps! At the top rate, that is more than 3X the per-lane bandwidth of PCIe Gen-3 (8 Gbps) and more than 4X that of SATA III (6 Gbps). To be honest, I am not completely surprised to see the emergence of such a high-speed serial link protocol for DRAM; I rather think the memory semiconductor industry has been late compared with the rest of the industry. PCI Express was defined in 2004, as was SATA, and the Ethernet protocols are even older. Nevertheless, I completely applaud, as HMC is expected to be a real revolution in many electronics industries, like computing, networking, and servers. By the way, don’t expect your smartphone or tablet to be HMC equipped… the 3D-IC form factor will prevent these devices from using HMC, see picture below:
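As a sanity check on those ratios, the per-lane arithmetic is easy to reproduce (the link rates below are the ones quoted above):

```python
# Per-lane line rates in Gbps, as quoted in the text.
hmc_min, hmc_max = 15.0, 28.0   # HMC SerDes lane range
pcie_gen3 = 8.0                 # PCIe Gen-3, per lane
sata_iii = 6.0                  # SATA III

# Ratio of HMC's top lane rate to the older serial links.
print(f"vs PCIe Gen-3: {hmc_max / pcie_gen3:.1f}x")   # 3.5x
print(f"vs SATA III:   {hmc_max / sata_iii:.1f}x")    # 4.7x
# Even the bottom of the HMC range beats PCIe Gen-3:
print(f"low end vs PCIe Gen-3: {hmc_min / pcie_gen3:.1f}x")  # 1.9x
```

Note that the 3X-plus figure holds at the top of the HMC rate range; at 15 Gbps the advantage over PCIe Gen-3 is closer to 2X.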

The HMC typically includes a high-speed control logic layer below a vertical stack of four or eight TSV-bonded DRAM dies. The DRAM handles data only, while the logic layer handles all control within the HMC. In the example configuration shown at right, each DRAM die is divided into 16 cores and then stacked. The logic die is on the bottom and has 16 different logic segments, with each segment controlling the DRAMs that sit on top. The architecture uses “vaults” instead of memory arrays (you could think of these as channels).
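To make the vault idea concrete, here is a toy address decode for a 16-vault stack. The block size and the bit mapping are illustrative choices of mine, not the HMCC specification, which defines the real address fields:

```python
# Hypothetical address decode for a 16-vault HMC-style stack.
# The real field positions are defined in the HMCC 1.0 spec; this
# sketch only illustrates the idea of interleaving across vaults.
BLOCK_BYTES = 128        # assumed access granularity
NUM_VAULTS = 16

def vault_of(address: int) -> int:
    # Interleave consecutive 128-byte blocks across the 16 vaults,
    # so sequential traffic spreads over all the vault controllers
    # in the logic layer rather than hammering one of them.
    return (address // BLOCK_BYTES) % NUM_VAULTS

# Sequential blocks land in successive vaults:
print([vault_of(a) for a in range(0, 5 * BLOCK_BYTES, BLOCK_BYTES)])  # [0, 1, 2, 3, 4]
```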

The HMC was originally designed by Micron and is now under development by the Hybrid Memory Cube Consortium (HMCC), which is currently offering its 1.0 specification for public download and review. The HMCC includes eight “developer members” – Altera, ARM, IBM, Micron, Open-Silicon, Samsung, SK-Hynix, and Xilinx – and many “adopter members” including Cadence. I will not reproduce the adopter list, as it’s too long to fit here, as more than 110 companies are part of the consortium so far!

In addition to Wide I/O 2 and HMC, Cadence is announcing memory model support for these emerging standards:

LPDDR4 – Promises 2X the bandwidth of LPDDR3 at similar power and cost points. Lower page size and multiple channels reduce power. This JEDEC standard is in balloting, and mass production is expected in 2014.

eMMC 5.0 – Embedded storage solution with a MMC (MultiMedia Card) interface. eMMC 5.0 offers more performance at the same cost as eMMC 4.5. Samsung announced the industry’s first eMMC 5.0 chips July 27, 2013.

LRDIMM – Supports DDR4 LRDIMMs (load reduced DIMMs) and RDIMMs. This standard is mostly used in computers, especially servers.

Cadence memory models support all leading simulators, verification languages, and methodologies. “We’re involved early on in the standards development,” Jacobson noted. “We are out there developing third-party models early. We work closely with vendors to get the models certified. If you’re looking for a third-party solution for memory models, that’s what we do.”

I have extracted from Martin Lund’s introduction a very interesting picture, as it can help analysts understand the adoption behavior of the new memory standards, and it also shows that the “old” standards (DDR1 or DDR2) are not vanishing so fast. Be careful, this is a log scale!

One last point: IPNEST is taking a close look at the Verification IP market these days, and I had a look at the various memory standards supported by Cadence and the associated memory models that the company provides… that’s also a pretty long list, as you can see:

Eric Esteve from IPNEST



Real Time Concurrent Layout Editing – It’s Possible
by Pawan Fangaria on 09-03-2013 at 2:00 pm

Layout editing is a complex task, traditionally done manually by designers, and layout design productivity largely depends on the designer’s skills and expertise. However, a good tool with features for ease of design is a must. Layout productivity has been an area of focus, and features are constantly being added to layout editing tools so designers can draw layouts quickly. While that continues, there is yet another dimension to layout productivity. With the advent of SoCs, deep-submicron designs, and varying integrated functionalities on a single chip, laying out a chip is no longer a job for a few designers. It needs a substantial team of designers working on different parts of a layout, with frequent synchronization between them. That synchronization is a time-consuming process and needs attention. It becomes extremely critical at tape-out time, when chip finishing is done on the entire top-level layout.

While reviewing Mentor Graphics Pyxis Layout Suite, I came across “Pyxis Concurrent”; what an excellent idea! I was amazed to see the on-line demo (link mentioned at the end). Mentor has rightly and pro-actively identified the need for multiple designers to edit different parts of the same cell and enabled them to do it concurrently, hence accelerating the layout development process and the tape-out time.


[Different parts of the layout being done simultaneously]

Designers can define their work areas by creating fences and work on the same cell in shared mode over the network. The shared session is owned by the layout captain. Any edits to a cell, path, or shape are local to the designer who makes them until he/she saves the design; at save time, a message is broadcast to all designers. Data integrity is maintained because the changes designers make within their fences remain local to them until a design save is done. All the typical editing commands such as edit, move, delete, undo stack, etc. are supported locally for that portion of the layout. At the same time, any designer is free to view other portions of the design and provide feedback on any part of the design to the other designers.
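The edit-locally, broadcast-on-save model is easy to picture with a toy sketch. All class and method names here are mine for illustration, not Pyxis APIs:

```python
# Toy model of fence-based concurrent editing: each designer's edits
# stay local until an explicit save, which then "broadcasts" them to
# the shared, committed view of the cell that everyone sees.
class SharedCell:
    def __init__(self):
        self.committed = {}   # shape id -> geometry, visible to all designers
        self.pending = {}     # designer -> {shape id: geometry}, local edits

    def edit(self, designer, shape_id, geometry):
        # An edit is recorded only in the designer's local pending set.
        self.pending.setdefault(designer, {})[shape_id] = geometry

    def save(self, designer):
        # Saving commits the local edits into the shared view.
        self.committed.update(self.pending.pop(designer, {}))

cell = SharedCell()
cell.edit("alice", "via_1", (10, 20))
cell.edit("bob", "metal_3", (5, 5))
cell.save("alice")
print(sorted(cell.committed))   # only alice's saved edit is visible: ['via_1']
```

Bob’s unsaved edit stays invisible to the team until he saves, which is the data-integrity property the paragraph above describes.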


[A Designer pointing to a layout area and communicating through chat box]

The suite provides a virtual whiteboard and chat service for effective team interaction. Any designer can pinpoint exactly the layout joints, shapes, components, and so on that may need modification and use the chat area to communicate with the other designers.


[Calibre real time interface with Pyxis]

Interestingly, this concurrent layout editing environment is seamlessly integrated with Calibre RealTime verification for on-line DRC checks. This is especially important during chip finishing, when DRC fixes and other corrections are made and the whole team certifies the chip layout together. A designer can check his/her portion by running Calibre, keeping the layout DRC-correct at all times.


[Interoperability with third party layout]

Design data interchange is supported through both GDS and OpenAccess. Any third party layout can be imported into Pyxis layout suite and integrated into the design with ease.

For an exciting on-line demo, just click DEMO

The Pyxis Project Manager provides comprehensive integration with design kits, schematic development, design verification, floorplanning, custom routing etc. for the complete layout flow for block as well as complete chip.


Microsoft Buys Nokia
by Paul McLellan on 09-02-2013 at 11:21 pm

OK. I was wrong. Microsoft did buy Nokia’s handset business. For $7.2B, which for a company that just wrote off nearly $1B on tablets isn’t that much. Nokia had a peak valuation of $110B, although it is not clear how much of that value is in the deal versus out of it.

Details from Reuters here.

Elop is expected to join Microsoft. Omitted from the deal is NSN, which used to be Nokia Siemens Networks, but since Nokia bought out Siemens the S now stands for Solutions. And right now it is one of the profitable bits of Nokia. Despite good numbers for selling Lumia phones, they still ship dollars with each one.

I still don’t see how this is likely to be successful, although Microsoft clearly has much deeper pockets than Nokia. But their success in the hardware business has been very variable. Xbox good. Zune bad. Microsoft mouse good. Kin (a mobile phone) was the fastest-failing phone ever, lasting only six weeks.

More when more is known.


Low-Power Design Webinar – What I Learned
by Daniel Payne on 09-02-2013 at 7:00 pm

You can only design and optimize low-power SoC designs if you can actually simulate the entire chip, package, and system together. The engineers at ANSYS-Apache have figured out how to do that and talked about their design-for-power methodology in a webinar today. I listened to Arvind Shanmugavel present a few dozen slides and answer questions in about 33 minutes. In a week or so you can view and listen to the recorded webinar here.


Arvind Shanmugavel
Continue reading “Low-Power Design Webinar – What I Learned”


Must See SoC IP!
by Daniel Nenni on 09-02-2013 at 5:30 pm


IP is the center of the semiconductor universe and nobody knows this better than Design and Reuse. The D&R website was launched in 1997 targeting the emerging commercial semiconductor IP market. Today, with more than 15,000 IP/SOC product descriptions updated daily, D&R is the #1 IP site matching customer requirements to IP products around the world.

D&R also hosts IP events including Semiconductor IP – SoC 2013, which will be the 22nd edition of the working conference on hot topics in the design world, focusing on IP-based SoC design and held in the French Alps (Grenoble) as well as Beijing, Shanghai, and Israel.

This event is the only worldwide dedicated semiconductor IP event. Attendee satisfaction is high thanks to the focused sessions and seminars. Over the years, semiconductor IP has grown into subsystems and platforms, and a natural extension of IP-SoC will include a strong Embedded Systems track addressing a continuous technical spectrum from IP to SoC to embedded system.

The competitive landscape of the Semiconductor IP Market, 2013 and Beyond!

Ganesh Ramamoorthy, Research Director, Gartner Inc.

Embedded design in the Age of Pervasive Computing

Richard York, Director of Embedded Processor Products, ARM

Open Innovation Platform (OIP): an ecosystem for innovation

Kees Jooss, Business Development Manager, TSMC

The New Tower of Babel – The Languages of Embedded Systems Design

Colin Walls, Mentor Graphics

Morphing Technology and Business Models at 100Gbps and Beyond

Marc Miller, Sr. Director of Marketing, Tabula

The flexible pathway to Flash IP

Christopher Neil Brown, Microchip

The conference is organized as a 2 day event:

  • The first day targets architecture topics, from IP to SoC solution to chip and chip set
  • The second day is devoted to embedded systems (from OS to middleware to application software)

The program of both days is organized into 4 tracks, namely:

  • The well-recognized Panel track on hot topics. These panels will address both IP and Embedded Systems challenges of today
  • Technical papers addressing the issues in the IP-based system design and Embedded System arenas
  • Visionary scientific seminars on key topics organized by gurus in the field, including invited state-of-the-art academic presentations
  • An Exhibitor track offering sponsored speaking opportunities for companies wishing to communicate their technical capabilities in greater depth through one-hour technical presentations or half-day workshops. Such a presentation slot may be a stand-alone demonstration of a development tool or technique

Important Dates
Deadline for submission of paper summary: September 28, 2013
Notification of acceptance: October 4, 2013
Final version of the manuscript: October 19, 2013
Working conference: November 6-7, 2013

Areas of interest:

Business models

  • IP Exchange, reuse practice and design for reuse
  • IP standards & reuse
  • Collaborative IP based design

Design

  • DFM and process variability in IP design
  • IP / SoC physical implementation
  • IP design and IP packaging for Integration

Quality and verification

  • IP / SoC verification and prototyping
  • IP / SoC quality assurance

Architecture and System

  • IP based platform
  • FPGA SoC
  • IP / SOC transaction level modelling
  • HW/SW integration
  • System-level prototyping and virtual prototyping

Embedded Software

  • IP based platform
  • Middleware
  • OS

Reliability, Real-Time and Fault Tolerant Systems

  • IP reliability computation
  • Security IP
  • Real-time or Embedded Computing Platforms
  • Real-time operating systems

Paper Submission Procedure
To present a paper during the conference a summary of at least 3 pages is required for any submission. You may also apply to present a seminar paper on the topics that will be announced shortly. You can submit an electronic version of your extended abstract in a Word or PDF format using the Online Submission Form.



Analog ECOs and Design Reviews: How to Do Them Better
by Paul McLellan on 09-02-2013 at 1:00 am

One of the challenges in doing a complex analog or mixed-signal design is that things get out of step. One designer is tweaking the schematic and re-simulating, another is tweaking the layout of transistors, another is changing the routing. This is not because the design flow is messed up; rather, it reflects reality. If you wait until the schematic is finished to start layout, then you won’t finish in time. And besides, in a modern process, without detailed layout parasitics you can’t simulate the design accurately, and so you don’t have the information needed to finish off the schematic. You need to make smaller and smaller changes and cross your fingers that everything converges on a layout that will give you the performance you require.

But this means that the schematic that goes with the current layout is not necessarily the most up-to-date. What is needed is a tool for comparing schematics and layouts. Obviously, these are stored in text or binary files which generally are not human readable, so a traditional diff program that simply tells you what changed in the file is useless. What is required is a visual diff that displays differences graphically, showing, say, transistors added to the schematic or layout that has been moved. ClioSoft’s VDD (Visual Design Diff) is just such a tool.

VDD detects changes between different versions of schematics or layout including modifications to nets, instances, layers, labels and properties. Differences are highlighted graphically in the Cadence Virtuoso schematic or layout and also presented in a list. Users can select or step through the list. Selected changes are highlighted directly in the editor window and automatically zoomed to the area of interest. VDD has the option to ignore cosmetic changes so mere rearrangement or rerouting of wires will not be flagged. Users also can choose to invoke a hierarchical diff where all differences for the entire design hierarchy below the selected view will be flagged. VDD comes integrated with ClioSoft’s SOS design data management system but also can be deployed standalone with any other design management system or even if no design management system is being used.
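The underlying idea, diffing the design database structurally rather than textually, can be sketched in a few lines. The representation below (instances mapped to their net connections) is a simplification of my own, not ClioSoft’s data model:

```python
# Minimal sketch of a structural (not textual) diff between two
# versions of a design. Each version maps instance names to the
# nets they connect to; cosmetic details never enter the picture,
# so rerouted-but-equivalent wiring produces no difference.
def design_diff(old: dict, new: dict):
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return added, removed, changed

v1 = {"M1": ("in", "out", "vss"), "M2": ("out", "vdd", "vdd")}
v2 = {"M1": ("in", "out", "vss"), "M2": ("out", "net7", "vdd"),
      "M3": ("net7", "out2", "vss")}
added, removed, changed = design_diff(v1, v2)
print(sorted(added))     # ['M3']  -> a new transistor appeared
print(sorted(changed))   # ['M2']  -> its connectivity changed
```

A real tool would then map each entry in these lists back to schematic or layout geometry and highlight it in the editor.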


ClioSoft has a game here called Spot the Diff, where you have to find differences in schematics. Of course it is just a bit of fun, but it has a serious purpose: finding changes by eye is really hard and time-consuming. That is why a tool like VDD is so essential.

Leading semiconductor companies including many in the top 10 are using VDD. You can find out more in an upcoming webinar presented by Srinath Anantharaman of ClioSoft, Managing Design Reviews and ECOs Efficiently. It is on Thursday September 12th at 11am Pacific. Details are here. Registration page is here.

Also Read

ClioSoft at GenApSys

VIA Adopts Cliosoft

Agilent ADS Users, Find Out About Design Data Management


A Brief History of TSMC OIP
by Paul McLellan on 09-01-2013 at 9:00 pm

The history of TSMC and its Open Innovation Platform (OIP) is, like almost everything in semiconductors, driven by the economics of semiconductor manufacturing. Of course ICs started 50 years ago at Fairchild (very close to where Google is headquartered today, these things go in circles). The planar process, whereby a wafer (just 1” originally) went through each process step as a whole, led to mass production. Other companies such as Intel, National, Texas Instruments and AMD soon followed and started the era of the Integrated Device Manufacturer (although we didn’t call them that back then, we just called them semiconductor companies).

The next step was the invention of the ASIC, with LSI Logic and VLSI Technology as the pioneers. This was the first step in separating design from manufacturing. Although the physical design was still done by the semiconductor company, the concept came from the system company. Perhaps the most important aspect of this change was not that part of the design was done at the system company, but rather that the idea for the design, and the responsibility for using it to build a successful business, rested with the system company, whereas IDMs still had the “if we build it they will come” approach, with a catalog of standard parts.

In 1987, TSMC was founded and the separation between manufacture and design was complete. One missing piece of the puzzle was good physical design tools and Cadence was created in 1988 from the merger of SDA and ECAD (and soon after, Tangent). It was now possible for a system company to buy design tools, design their own chip and have TSMC manufacture it. The system company was completely responsible for the concept, the design, and selling the end-product (either the chip itself or a system containing it). TSMC was completely responsible for the manufacturing (usually including test, packaging and logistics too).

This also created a new industry, the fabless semiconductor company, set up in many ways to be like an IDM except for using TSMC as a manufacturer. So a fabless semiconductor company could be much smaller since it didn’t have a whole fab to fill, often the company would be funded to build a single product. Since this was also the era of explosive growth in the PC, many chips were built for various segments of that market.

At this time, the interface between the foundry and the design group was fairly simple. The foundry would produce design rules and SPICE parameters, and the design would be submitted as GDSII and a test program. Basic standard cells were required, and these were available on the open market from companies like Artisan, or some groups would design their own. Eventually TSMC would supply standard cells, either designed in-house or from Artisan or other library vendors (bearing an underlying royalty model transparent to end users). However, as manufacturing complexity grew, the gap between manufacturing and design grew too. This caused a big problem for TSMC: there was a lag between when TSMC wanted to get designs into high-volume manufacturing and when the design groups were ready to tape out. Since a huge part of the cost of a fab is depreciation on the building and the equipment, which is largely fixed, this was a problem that needed to be addressed.


At 65nm TSMC started the OIP program. It began at a relatively small scale, but from 65nm to 40nm to 28nm the amount of manpower involved went up by a factor of 7; by 16nm FinFET, half of the effort is IP qualification and physical design. OIP actively collaborated with EDA and IP vendors early in the life-cycle of each process to ensure that design flows and critical IP were ready early. In this way, designs would tape out just in time as the fab was starting to ramp, so that the demand for wafers was well-matched with the supply.

In some ways the industry has gone a full circle, with the foundry and the design ecosystem together operating as a virtual IDM.

To be continued in part 2


Reliability sign-off has several aspects – One Solution
by Pawan Fangaria on 09-01-2013 at 5:00 pm

Here, I am talking about the reliability of chip design in the context of electrical effects, not external factors like cosmic rays. The electrical factors that can affect chip reliability include excessive power dissipation, noise, EM (electromigration), ESD (electrostatic discharge), substrate noise coupling, and the like. Any of these can become prominent in a chip due to mishandling of certain design aspects, and they can become critical enough in different types of chips to cause failure. Appropriate care must be taken to detect them as early as possible in the design cycle and prevent them.

This week, I attended a free webinar from ANSYS-Apache, presented by Vikram Shamirpeta. Vikram talked in great detail about these effects, their solutions, and how Apache tools can be used to prevent them throughout the RTL-to-GDS stages. It was interesting to learn about the different types of analysis applied to different types of ASICs and SoCs, exemplified through case studies. As we know, nowadays analog-digital mixed-signal circuitry and several integrated IPs are part of almost all SoCs; I was particularly interested in the power management that accommodates all of these and in managing the noise introduced by digital circuitry into analog. Of course there are other important issues to be taken care of as well. I am just going to summarise them here, but it’s worth attending the webinar to learn the actual details. It’s just about 30 minutes, but the gains are considerable.


[RTL Power Optimization with an example to shut down clock when not required]

The above picture shows how power can be saved at the RTL level by setting the clock to be active only when required; such methods are meticulously utilized by PowerArtist, the RTL power optimization tool, which is physically aware.
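The payoff from this kind of clock gating follows from the standard dynamic-power relation P = α·C·V²·f: gating reduces the effective switching activity α of the gated clock network. A back-of-the-envelope sketch, with made-up numbers rather than anything from the webinar:

```python
# Dynamic power P = alpha * C * V^2 * f. Clock gating reduces the
# effective switching activity (alpha) of the gated registers.
# All numbers below are illustrative.
def dynamic_power(alpha, cap_farads, vdd, freq_hz):
    return alpha * cap_farads * vdd**2 * freq_hz

C, V, F = 50e-12, 1.0, 500e6    # 50 pF of clocked capacitance, 1.0 V, 500 MHz
always_on = dynamic_power(1.0, C, V, F)   # clock toggles every cycle
gated     = dynamic_power(0.2, C, V, F)   # enabled only 20% of the time
print(f"{always_on*1e3:.1f} mW -> {gated*1e3:.1f} mW")   # 25.0 mW -> 5.0 mW
```

The same relation explains why reducing page size and splitting channels (as in LPDDR4-style designs) also pays off: less capacitance switches per access.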


[Identifying connectivity failures, e.g. high resistance due to missing stacked via]

Totem can perform extensive checks on the layout to find any violation which can cause connectivity issues leading to electrical abnormalities.


[Power Integrity check, e.g. detecting worst instance not getting enough power]

Integrity of the Power Delivery Network (PDN) has become important due to shrinking noise margins (threshold voltages have remained constant while supply voltages have decreased) and high performance requirements. In the case above, simultaneous switching of neighbouring instances draws maximum current through the same power grid, producing a high voltage drop, so the corresponding PDN needs adjustment.
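A back-of-the-envelope calculation shows why simultaneous switching matters; all values here are illustrative, not from the webinar:

```python
# IR drop on a shared grid: V = I * R, where I scales with the number
# of neighbours switching in the same cycle. Numbers are illustrative.
R_GRID = 0.5            # ohms, effective grid resistance to the instance
I_ALONE = 0.02          # A drawn when a single instance switches
N_SIMULTANEOUS = 8      # neighbours switching in the same cycle

drop = R_GRID * I_ALONE * N_SIMULTANEOUS
vdd = 0.9
print(f"IR drop: {drop*1e3:.0f} mV ({drop/vdd:.0%} of a {vdd} V supply)")
```

With one instance switching the drop is only 10 mV; with eight it is 80 mV, a meaningful slice of a 0.9 V supply once the noise margin has shrunk, which is exactly the scenario the analysis flags.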


[EM Analysis, e.g. detection of uneven current in a power line and its fix]

EM analysis takes care of average, RMS, and peak currents for both power and signal lines. Apache tools take into account all aspects of the EM rules, such as direction, temperature, topology, and via location.
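For reference, the three current figures that EM rules constrain can be computed from a sampled waveform like this (the waveform values are made up):

```python
# Average, RMS, and peak of a current waveform -- the three quantities
# EM rules constrain. Sample values are illustrative, in mA.
import math

samples_mA = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0, 0.0]

avg  = sum(samples_mA) / len(samples_mA)
rms  = math.sqrt(sum(i * i for i in samples_mA) / len(samples_mA))
peak = max(samples_mA)
print(f"avg={avg:.3f} mA  rms={rms:.3f} mA  peak={peak:.1f} mA")
```

Note that RMS sits well above the average for bursty waveforms, which is why a line can pass an average-current rule yet still fail the RMS (Joule-heating) check.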


[ESD Analysis with a case of failure due to ground connection during IP integration]

Excessive electrostatic discharge (ESD) or electrical overstress can cause device or interconnect failure. PathFinder can be used to find the root cause of ESD problems and fix them.


[Substrate noise and its modelling to keep it under control]

As digital and analog circuitry sit on the same substrate, digital (aggressor) noise is injected into analog (victim) through the substrate coupling. A correct modelling of this noise injection must be done to keep it within limits. RedHawk and Totem use a smart extraction engine which can handle complex structures such as wide via arrays and metal structures.

Also, since a substantial portion of an SoC is covered by various IPs, and they consume extensive power, their power integrity and reliability must be checked. The effect of the various modes of operation of the IPs at the top level must be validated. Totem can be used to analyze the layout down to the transistor level.

Apache, in its Power Noise Reliability Platform has powerful tools such as PowerArtist for power analysis and fix at RTL level, RedHawk for system and full-chip level analysis and fix and Totem for AMS designs. These are high performance tools (with multi-threaded and multi-core architecture) which can handle large size flat designs of 100M+ transistors.

The webinar “Power Noise Reliability Sign-off of Custom Analog IPs” is worth going through; it provides good learning about today’s SoC issues and their solution.