
Dassault’s Simulation Lifecycle Management
by Paul McLellan on 09-21-2013 at 4:29 pm

The first thing to realize about Dassault’s Simulation Lifecycle Management platform is that in the non-IC world where Dassault primarily operates, simulation doesn’t just mean functional verification or running Spice. It is anything during the design that produces analytical data. All of that data is important if you are strictly tracking whether a design meets its requirements. So yes, functional coverage and Spice runs, but also early timing data from synthesis, physical verification, timing and power data from design closure, and so on. All of this is what the automotive, aerospace and other worlds call “simulation.” To them, in the mechanical CAD (MCAD) world, anything done on the computer, as opposed to on a machine tool, is simulation. Similarly, with that world view, anything done with a chip design other than taping it out and fabricating it is simulation.

So Simulation Lifecycle Management (SLM) is an integrated process management platform for semiconductor new product introduction. The big idea is to take the concepts and processes used in MCAD to design cars and planes and push them down into the semiconductor design process. In particular, it means keeping track of pretty much everything.

In automotive, for example, there is ISO 26262 (wikipedia), which covers specification, design, implementation, integration, verification, validation, and production release. In practice this means that you need to focus on traceability of requirements:

  • document all requirements
  • document everything that you do
  • document everything that you did to verify it
  • document the environment that you used to do that
  • keep track of requirement changes and document that they are still met

That’s a lot of documenting, and the idea is to make almost all of it happen automatically as a byproduct of the design process. To do that, SLM needs to be the cockpit from which the design process is driven.

There are really two halves to the process. One is primarily used by management to define processes and keep track of the state of the design. The core management environment really has three primary functions:

  • dynamic traceability: the heart of documenting what you did, how you know you did it, and the environment you used to do it
  • process management: knowledge that everything that has been done is, in fact, documented
  • work management: see results, keep track of where you are in metrics like functional coverage, percentage of paths meeting timing during timing closure and so on.

The other half is primarily used by engineers actually doing the design, running tools and scripts, and making judgement calls. This is called the integrated process architecture and also consists of three parts:

  • process capture: a way to standardize and accelerate design processes and flows for specific tasks
  • process execution: integrating the processes and flows into the load-leveling environment, server farms, clouds or whatever the execution environment(s) in use are
  • decision support: automation of what-if analysis for tasks like area-performance tradeoffs where many runs at many different points may need to be created and then the mass of data analyzed to select the best tradeoff

There is obviously a lot more detail depending on which task you dig down into. But to get a flavor of it, the above screen capture shows some of the requirements traceability. A requirement may generate many more sub-requirements that, in turn, generate tasks that are required to demonstrate that the requirement is met (while keeping track of everything needed to reproduce the run too). Again, don’t forget that where the above diagram says “simulation” it might mean keeping track of how many DRC violations remain to be fixed.
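To make the traceability idea concrete, here is a minimal sketch of how requirements, sub-requirements and verification tasks might be linked so that status rolls up automatically. The class and field names are hypothetical; this illustrates the concept, not Dassault’s actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VerificationTask:
    """A run (simulation, STA, DRC, ...) that demonstrates a requirement is met."""
    name: str
    tool: str            # environment used, so the run can be reproduced
    passed: bool = False

@dataclass
class Requirement:
    """A requirement that may decompose into sub-requirements and tasks."""
    text: str
    children: List["Requirement"] = field(default_factory=list)
    tasks: List[VerificationTask] = field(default_factory=list)

    def is_met(self) -> bool:
        # A requirement is met only if all of its tasks pass and
        # all of its sub-requirements are themselves met.
        return all(t.passed for t in self.tasks) and all(c.is_met() for c in self.children)

# Example: a timing requirement traced down to a concrete, reproducible run
top = Requirement("Core meets 500 MHz at SS corner")
top.tasks.append(VerificationTask("post-route STA", tool="STA tool, 2013.06 release", passed=True))
print(top.is_met())  # True once every linked run has passed
```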

Subsequent blogs will go into more detail of just how SLM works in practice.


Designing Power Management ICs
by Paul McLellan on 09-20-2013 at 5:49 pm

With all the focus in design on SoCs in the latest sexy process (Hi-K Metal Gate! FinFETs!) it is easy to forget all the other chips that go into a system. When we say “system on a chip” there are actually very few systems that really get everything onto a single chip. One of the big areas that usually cannot go on the latest sexy process is the power management ICs that deliver very precise voltages to those SoCs, starting from the typically noisy power coming out of whatever plugs into the wall outlet, or from battery power that isn’t so noisy but that changes its characteristics as the battery runs down. One of the big design requirements for power management ICs is to do their work without wasting much of the power. In your smartphone, for example, wasted power shows up as a hotter phone to hold and shorter battery life, neither of which the end-user wants. It is especially important that the power management ICs consume only tiny amounts of power when the associated SoC is largely shut down, as your smartphone is for much of the time it sits in your pocket.

These power management ICs are usually built in processes like 0.13um or 0.18um, which sound really outdated to the SoC designer but are actually the state-of-the-art processes for a lot of analog, mixed-signal and power designs.

Not surprisingly, the design process for a power IC is very different from that of an SoC. It is an important market: higher growth than the overall semiconductor market, very competitive, and with an increasing focus on power efficiency, delivering almost all the power taken in as input in whatever form is required as output, and consuming almost none in the power management IC itself.
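As a rough back-of-the-envelope illustration (the load and efficiency numbers are assumed for this sketch, not taken from Richtek or the webinar), here is what a few points of converter efficiency mean in terms of heat:

```python
# Power dissipated in the PMIC itself for a hypothetical 5 W load
p_out = 5.0  # watts delivered to the SoC (assumed)

for efficiency in (0.85, 0.90, 0.95):
    p_in = p_out / efficiency   # power drawn from the battery or adapter
    p_wasted = p_in - p_out     # shows up as heat in the phone
    print(f"{efficiency:.0%} efficient: {p_wasted * 1000:.0f} mW wasted as heat")
# 85% -> ~882 mW, 90% -> ~556 mW, 95% -> ~263 mW
```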

One of the leaders in power management ICs is Richtek Technology. They have a large portfolio of parts that deliver innovative power management solutions that improve the performance of consumer electronics, computers, and communications equipment. Founded in 1998, the Company is headquartered in Taiwan with additional offices in Asia, the U.S., and Europe.

K C Chang is the VP of Technology Development at Richtek. On October 3rd he will present a webinar, along with Andy Biddle of Synopsys, on some aspects of their design flow, their EDA tool selection criteria and some recent results. Andy will discuss the Galaxy Implementation Platform, highlighting some of the recent capabilities that help power management IC designers bring highly efficient products to market earlier. They will present the key challenges and trends with the latest power management integrated circuits and discuss recent EDA tool innovations to shorten development time and maximize quality of results.

If you are involved with the design of power management ICs then you should attend this webinar Power Management ICs – Efficient Design: A Richtek and Synopsys Perspective. The live webinar is on October 3rd at 10am Pacific Time. For more details and to register go here. The same link will work to view it after the event. It is scheduled to last 50 minutes plus Q&A.


Using OTP Memories To Keep SoC Power Down
by Paul McLellan on 09-20-2013 at 1:43 pm

Virtually all SoCs require one-time programmable (OTP) memory. Each SoC is different, of course, but two main uses are large memories for holding boot and programming code and small memories for holding encryption keys and trimming parameters, such as radio tuning information and so on.

There are alternatives to putting an OTP on-chip. The data can be held off-chip in some sort of programmable memory (or, perhaps, ROM). But this obviously has the disadvantage of requiring the cost of an extra chip. In smartphones it is not just the cost of another chip that is a problem, but the additional volume taken up by two chips. There is just not a lot of room inside a smartphone to fit everything.

Another alternative to OTP memory is flash memory. This has a big advantage: a flash memory can be reprogrammed many times. However, it comes with a big disadvantage in terms of added process complexity and, thus, the cost of the silicon. Even when off-chip flash memory already exists, security concerns may make using it to hold critical data impractical, and running code out of flash memory may, in fact, require data from the flash to be copied to SRAM on the chip, which is both an added cost and yet another increase in unwanted power.

OTP memory has the advantage that code can be executed in place and does not need to be copied from external memory into on-chip SRAM. It is fast enough, and low enough power, that copying data out to SRAM is unnecessary.

The Sidense one-transistor OTP (1T-OTP) architecture is especially area efficient since it uses a single transistor per bit cell. Furthermore, it does not depend on charge storage and so once programmed, it cannot be un-programmed by environmental or electrical upsets. The patented Sidense 1T-Fuse™ antifuse technology works by permanently rupturing the gate-oxide under the bit-cell’s storage transistor in a controlled fashion, obviously something irreversible.

Another big advantage of the Sidense antifuse approach is that it uses an unmodified digital process. No additional masks or process steps are required, so nothing is added to the wafer manufacturing cost. The per-chip cost rises due to the area occupied by the OTP, but since the 1T-OTP macros are very area-efficient this increase is usually very small. Additionally, if the 1T-OTP is programmed at the tester, the increase in test time will also add some cost.

The Sidense 1T-OTP memory uses a low read voltage, which further keeps the power of the memory down. The Sidense memory does require some non-standard voltages internally, especially during programming, but these are created using embedded charge pumps and are hidden from the user. The OTP memory can simply be hooked up to the chip’s power supply network just like any other memory block.

Another option with the Sidense solution, to lower the power even more, is to use differential bit storage. This technique represents each bit of information using two transistors: one storing a 0 and one storing a 1. This makes sensing the state simpler, and as a result the voltage required for the memory can be lower still, along with the associated power. Obviously this comes at the cost of an increase in area, since the number of transistors required to represent a given amount of data within the memory macro is doubled.
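A minimal sketch of the differential idea (my illustration of the concept, not Sidense’s circuit): each logical bit is stored as a complementary pair of cells, and the sense operation only has to decide which of the two was programmed, rather than compare a single cell against an absolute reference:

```python
def program_differential(bit: int) -> tuple[int, int]:
    """Store one logical bit as a complementary pair of antifuse cells."""
    # Exactly one cell of the pair is programmed (ruptured).
    return (1, 0) if bit else (0, 1)

def sense_differential(cell_true: float, cell_comp: float) -> int:
    """Differential sensing: compare the two cells against each other.
    No absolute reference level is needed, so a lower read voltage suffices."""
    return 1 if cell_true > cell_comp else 0

pair = program_differential(1)
print(sense_differential(*pair))  # 1, at the cost of two transistors per bit
```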

Read the white paper Using Sidense 1T-OTP in Power-sensitive Applications here.


Who is Blogging at Cadence?
by Daniel Payne on 09-20-2013 at 1:31 pm

As a blogger in the EDA industry I get to write every week; however, I also end up reading every blog on SemiWiki plus multiple other sites to keep current on what’s happening in our business. I thought that it would be informative to look at Cadence Design Systems and how they are using blogging to talk not just about their own EDA tools but about our industry as well.


Continue reading “Who is Blogging at Cadence?”


Process Variation is a Yield Killer!
by Daniel Nenni on 09-20-2013 at 11:00 am

With the insatiable wafer appetites of the fabless semiconductor companies in the mobile space, yield has never been more critical. The result is better EDA tools every year, and this blog highlights one of the many examples. It has been a pleasure writing about Solido Design Automation and seeing them succeed amongst the foundries and their top customers. Here is a Q&A with Amit Gupta, president & CEO of Solido, to get more details on the new Solido Variation Designer 3.0 release:

Q: What is Solido Variation Designer used for?

Solido Variation Designer is variation analysis and design software for custom ICs. Our users run Variation Designer to achieve maximum yield and performance on their designs. It boosts SPICE simulator efficiency while increasing design coverage.

Q: Who are the customers of Solido Variation Designer?

Variation Designer is being used by the world’s top semiconductor companies and foundries to design memory, standard cell, analog/RF and custom digital designs at leading process nodes from TSMC, GLOBALFOUNDRIES and Samsung, including 130nm, 90nm, 65nm, 40nm, 28nm, 20nm, 16nm and 14nm.

Q: What specific customer challenges does Solido Variation Designer 3.0 address?

Variation Designer 3.0 is based on user input from a wide range of semiconductor companies designing anywhere from 130nm to the most advanced process nodes. In general, we are seeing our customers increasingly being hit by variation issues resulting in sub-optimal performance and yield compared to what the manufacturing process allows for. Variation Designer 3.0 gives our users the ability to address the following:

  • PVT corner design. PVT variation includes process (e.g. FF, SS, FS, SF, TT model corners that can be device specific), voltage, temperature, load and parasitic based variation. Taking all the combinations of these parameters, our customers end up with thousands or tens of thousands of corner combinations to simulate. The challenge is that simulating all the corner combinations is accurate but very slow, while guessing which corners to simulate is faster but inaccurate.

Our customers use Solido Variation Designer Fast PVT to automatically figure out which are the worst-case corners while simulating only a fraction of the corner combinations. This results in far fewer simulations than brute-force PVT corner analysis without compromising accuracy.
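To see how quickly the corner count explodes, here is a minimal sketch; the specific parameter values are invented for illustration, not taken from Solido or any PDK:

```python
from itertools import product

# Hypothetical PVT space for one design
process = ["TT", "FF", "SS", "FS", "SF"]
voltage = [0.81, 0.90, 0.99]                         # volts
temperature = [-40, 25, 125]                         # degrees C
load = ["min", "typ", "max"]
parasitics = ["cbest", "typical", "cworst", "rcbest", "rcworst"]

corners = list(product(process, voltage, temperature, load, parasitics))
print(len(corners))  # 5 * 3 * 3 * 3 * 5 = 675 combinations for just these axes
```

Add device-specific model corners or more supply rails and the count quickly reaches the thousands or tens of thousands quoted above, each one a full SPICE run if done by brute force.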

  • 3-sigma Monte Carlo design. The process model corners that foundries like TSMC, GLOBALFOUNDRIES and Samsung release in their PDKs are not well-suited to individual designs. They are either overly conservative, leading to overdesign, or overly optimistic, leading to yield loss. Consequently, foundries are now releasing local and global statistical variation models so that designers can run Monte Carlo analysis on their designs. However, brute-force Monte Carlo SPICE simulation is slow, inefficient and time consuming.

Our customers use Solido Variation Designer Fast Monte Carlo to cut down the number of simulations to achieve 3-sigma design without compromising accuracy, and to extract design specific 3-sigma corners to design to.
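For contrast, a brute-force Monte Carlo check of the kind Fast Monte Carlo is designed to replace looks roughly like the sketch below, where a made-up normal delay model stands in for the SPICE runs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3000                                  # brute-force sample count

# Stand-in for "netlist + statistical models + SPICE": a delay that varies
# with process variation (hypothetical mean and sigma).
delay = rng.normal(loc=100e-12, scale=4e-12, size=N)   # seconds
spec = 112e-12                                          # pass/fail limit (3 sigma out)

fails = np.sum(delay > spec)
yield_est = 1 - fails / N
print(f"estimated yield: {yield_est:.4%} ({fails} failures in {N} runs)")
# With only 3000 samples, an estimate of a ~3-sigma (99.87%) yield is noisy,
# and each sample is a full SPICE run, which is what makes brute force painful.
```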

  • High-sigma Monte Carlo design. To design to 6-sigma, 5 billion Monte Carlo sample simulations would be needed, which would take years and is therefore impractical. Alternatively, designers are designing to 3-sigma and extrapolating to 6-sigma, but this methodology is inaccurate. Some companies have developed internal importance sampling techniques, but these don’t scale and suffer from accuracy issues.

Our customers use Solido Variation Designer High-Sigma Monte Carlo to get 5-billion-sample Monte Carlo accuracy in only a few thousand simulations. This is a dramatic reduction in SPICE simulations and improvement in design coverage. Solido High-Sigma Monte Carlo is fast, accurate, scalable and verifiable. Example designs being run include memory bit cells, memory sense amps, memory columns/sub-arrays, analog designs (e.g. SerDes, data converters), and standard cell library designs (e.g. flip-flops).
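The 5 billion figure follows from the normal tail probability at 6 sigma. A small sketch of that arithmetic (standard statistics, not Solido’s high-sigma algorithm):

```python
from scipy.stats import norm

p_fail = norm.sf(6)              # one-sided tail beyond 6 sigma, ~9.9e-10
samples_per_fail = 1 / p_fail    # ~1.0e9 samples per expected failure
print(f"P(fail) ~ {p_fail:.2e}, ~{samples_per_fail:.1e} samples per observed failure")

# To see a handful of failures (needed for any confidence in the estimate),
# brute-force Monte Carlo needs several billion samples:
print(f"~{5 * samples_per_fail:.1e} samples for ~5 expected failures")
```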

  • Variation debug. If the design is failing PVT corners, 3-sigma or 6-sigma Monte Carlo verification steps, designers need to identify the design sensitivities to variation and figure out how to fix the design, making it robust to variation. Manually changing the device sizes and running PVT or Monte Carlo analysis to check whether the changes fix the design is tedious and time consuming.

Our customers use Solido Variation Designer DesignSense to automatically identify design sensitivities to variation, which enables them to quickly make necessary design changes and verify that it’s meeting specifications.
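Conceptually, sensitivity to variation can be estimated by perturbing one parameter at a time and re-measuring the output. The toy sketch below illustrates that idea only; it is not DesignSense’s actual algorithm, and the comparator model and device names are invented:

```python
import numpy as np

def measure(widths):
    """Stand-in for a SPICE run: offset voltage of a hypothetical comparator
    as an arbitrary function of device widths (illustrative only)."""
    w1, w2, w3 = widths
    return 1.0 / w1 - 0.8 / w2 + 0.05 * w3

nominal = np.array([1.0, 1.2, 2.0])    # um, hypothetical sizes
base = measure(nominal)

for i, name in enumerate(["M1.W", "M2.W", "M3.W"]):
    perturbed = nominal.copy()
    perturbed[i] *= 1.01               # +1% perturbation of one device
    sens = (measure(perturbed) - base) / (0.01 * nominal[i])
    print(f"sensitivity to {name}: {sens:+.3f} per um")
```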

  • Cell optimization. As with variation debug, when the design is failing PVT corner, 3-sigma or 6-sigma Monte Carlo verification steps, or is simply not optimized against its spec, manually changing device sizes and re-running PVT or Monte Carlo analysis to check whether the design is optimal is tedious and time consuming.

Our customers use Solido Variation Designer Cell Optimizer to automatically vary device sizes within any design and PDK sizing constraints, to optimize the design against PVT and 3-sigma to 6-sigma Monte Carlo variation.


Q: How have your customers deployed Solido Variation Designer in their production flows?

Solido Variation Designer has been established in the signoff flow of most of the world’s leading semiconductor companies and foundries.

Users input designs into Solido Variation Designer through the integration we have with Cadence Virtuoso Analog Design Environment or simply by feeding it a netlist. Variation Designer then automatically iterates with the user’s SPICE simulator (we integrate with Cadence Spectre/APS, Synopsys HSPICE/XA/HSIM/FineSim, BDA AFS, Mentor Graphics Eldo and Agilent GoldenGate) to run Fast PVT, Fast Monte Carlo, High-Sigma Monte Carlo, DesignSense and Cell Optimizer tasks. We also support all PDKs that contain process corner or Monte Carlo variation data, and we are qualified by various foundries like TSMC and GLOBALFOUNDRIES.

Some example benefits our customers have seen after adopting Solido Variation Designer:

| Solido Variation Designer App | Customer Design | Customer Challenge | Benefit of adopting Solido Variation Designer |
|---|---|---|---|
| Fast PVT | 28nm DAC | 1215 corners take too long to run; guessing which are worst-case is error prone; no standardized methodology. | Correctly found worst-case corners for all outputs in only 296 simulations (4.1x simulation reduction); standardized on Solido Fast PVT methodology. |
| Fast Monte Carlo | 20nm folded cascode amplifier | 3000 Monte Carlo simulations take too long to run; running only 100 Monte Carlo simulations doesn’t verify to 3 sigma; no standardized methodology. | Verified to 3 sigma in only 300 simulations (10x simulation reduction); standardized on Solido Fast Monte Carlo methodology. |
| High-Sigma Monte Carlo | 16nm memory column | Verifying to 6 sigma would take 5 billion simulations, which is impractical; extrapolating to 6 sigma is inaccurate. | Verified to 6 sigma in only 4500 simulations; the run was fast, accurate, scalable and verifiable; standardized on Solido High-Sigma Monte Carlo methodology. |
| DesignSense | 40nm comparator | Determining device sensitivities to PVT corner and statistical variation is difficult; no standardized methodology. | Automatically determined device sensitivities to variation, making the design robust to variation; standardized on Solido DesignSense methodology. |
| Cell Optimizer | 28nm flip-flop | Optimizing specifications across PVT and statistical variation is time consuming and uses too many simulations. | 24.1% improvement in flip-flop setup time in only 2.75 minutes; standardized on Solido Cell Optimizer methodology. |

Q: What’s new in Solido Variation Designer 3.0?
Lots; this is Solido’s biggest release ever. Highlights include:

  • Significantly increased capacity
  • New features, enhancements and performance improvements in every application
  • Re-engineered GUI and full command-line interface for all apps
  • Expanded simulator support and third-party tool integration

Q: What detailed features did you add to Solido Variation Designer 3.0?
Solido Variation Designer 3.0 Fast PVT enhancements:

  • Increased capacity by 10x
  • Support for custom string-based variables in the netlist
  • 2D scatterplots
  • Interactive impacts

Solido Variation Designer 3.0 Fast Monte Carlo enhancements:

  • Faster 3-sigma verification with density-based stopping
  • “Simulate-and-predict” mode for up to 20x faster 3-sigma runtimes
  • Improved accuracy and robustness of density estimates
  • Enhanced results visualization when running multiple corners
  • Verified capacity increased by 10x for both number of devices and number of samples

Solido Variation Designer 3.0 High-Sigma Monte Carlo enhancements:

  • 20x faster algorithms for large designs
  • 10x increase in variable capacity
  • Process variable impacts
  • Support for binary and multi-modal output measurements
  • Support for high-sigma global+local analysis

Solido Variation Designer 3.0 Cell Optimizer enhancements:

  • Improved, faster cell optimization algorithm
  • Support for Spectre netlists

Solido Variation Designer 3.0 integration enhancements:

  • Mixed-language netlist support
  • Spectre netlist-in support
  • Support for Agilent GoldenGate
  • Native Mentor Graphics Eldo support
  • Runtime Design Automation NetworkComputer support
  • Faster, more scalable, and more robust Cadence Virtuoso ADE integration

Solido Variation Designer 3.0 general enhancements:

  • Up to 100x faster load times on large circuits/netlists
  • Much better performance with large Cadence designs, especially extracted views
  • Re-engineered, even more responsive GUI
  • New command-line interface for all apps
  • New report generation system with customizable templates
  • Re-designed, more robust netlist parser
  • Updated and more comprehensive documentation
  • TSMC 16nm / TMI2 support
  • Hundreds of minor quality, reliability, usability and performance improvements

Q: How can our readers get more information?
You can visit our website at www.solidodesign.com for more information. You can also contact us at info@solidodesign.com for an in-person or WebEx demo.



Apple’s 64 Bit Plan to Finish Off Android
by Ed McKernan on 09-20-2013 at 10:00 am

Many people are underestimating the speed and the magnitude of the transition that is about to take place with the tandem rollout of iOS 7 and the 64 bit A7 processor. While the former provides a nice visual upgrade to the entire ecosystem, the latter will be used to collect accolades and drive application development that will result in a complete, robust 64 bit environment for all Apple users by next Labor Day at the latest. When this transition completes, what happens to mobile, 32 bit computing? The likely guess is that it withers, taking many players with it. At this moment, roadmaps across the globe are being torn up as development teams must aim for a more aggressive marketplace with not much time to execute. The clock has started ticking toward Apple going fully 64 bit by this time next year.

As a measure of comparison, the 286 to 386 and the 32 bit Pentium class to 64 bit x86 Xeon server processor transitions took roughly 5 years in hardware terms alone. When Andy Grove unleashed the 386 Red X advertising campaign in 1990 there were only a handful of apps that ran in 32 bit mode. The folks in Redmond didn’t get around to a full blown 32-bit operating system until the launch of Windows NT in July 1996. It makes one appreciate what Apple is trying to accomplish over this coming year with a user base and software community that is over an order of magnitude larger. How will Google, Samsung, Microsoft, Intel and others respond to this coordinated drive to leave behind all that is 32 bit? All of the above named companies certainly have the money to stay in the game for the long run. It is the smaller ARM mobile chip vendors that are most at risk. It is possible to envision a scenario where all the players scatter to different corners of the market. Microsoft, for one, will likely get closer to Intel in order to save its corporate business, but in doing so may underfund Nokia for a successful consumer push. Google could decide to make peace with Apple on smartphones and concentrate on its wearables while letting Android lag in forked 32-bit land, which may be fine for Amazon, but what about Samsung and the other China players?

The common threat that Apple imposes with its 64-bit processor and iOS platform has to have the disparate mobile players considering alliances so that they can close the technology and capability gap by the Fall of 2014. Apple’s rollout was intended to shock its competitors, with the primary goal of testing how fragile the Android market is when its future is merely incremental. The mobile TAM could very likely consolidate around Apple at the high end and China clones at the low end, with Samsung stretched trying to serve all. Without a concerted alliance with Google, can Samsung really force Apple into the single-digit market share that Microsoft imposed on it in the 1990s? It is doubtful.

One could paint a scenario in the late summer of 2014 where Android phones are relegated to the sub-$100 space along with 7” tablets. Larger screens and improved cameras would not be able to overcome the “32 bit” processor or allow pricing to even approach whatever becomes the equivalent of the iPhone 5C next year. This is partially subjective, but it is based on what I observed in the PC market in the 1990s. Intel consistently obsoleted its processors within a matter of months so that competitors could not gain a profitable foothold by offering something equivalent. The one caveat was during periods of allocation. Thus AMD and the cloners were stuck selling processors for an average price of $60-$70 while Intel enjoyed prices that were on average 3-4 times higher.

The sucking sound you will hear is Apple leveraging technology with a branding campaign that will create separation in the marketplace. Expect to see Apple impose a price floor that is much higher than the ceiling of competitors. You will know when capitulation begins when the subject invariably turns to mobile companies spending more time focusing on the future promise of tens of billions of IoT devices.


Interface PHY IP supporting Mobile Application on TSMC 20nm? Available!
by Eric Esteve on 09-20-2013 at 8:42 am

If we look at the many articles published daily on SemiWiki, I am sure that Moore’s Law has been mentioned every single day. There is a good reason why we constantly write about new technologies and advanced features like FinFET, FD-SOI, 450 mm wafers or double patterning: all of these are new challenges that the semiconductor industry will have to take up. As designs migrate to smaller process nodes, such as 20-nm and 16-nm FinFET, the technology challenges to extending Moore’s Law become increasingly complex. TSMC has implemented double patterning mask technology on its 20SoC process, utilizing two photo masks, each with half of a pattern, to enable printing of images below the node’s minimum spacing design rules. We know in 2013 that the most wonderful technology would be useless if it is not supported by IP vendors developing the “LEGO” blocks you need to successfully design an SoC. Those who read my articles know how crucial it is for the semiconductor industry to benefit from high-quality PHY IP supporting high-speed serial interface protocols like USB, DDR, PCI Express®, and MIPI®.

The above picture comes from Synopsys’ TSMC 20 nm test chip characterization of the PHY (here a USB 2.0 PHY), and is representative of the quality of the design. We call it an eye diagram: if the signal generated by the PHY is well built by the on-chip circuitry, then the eye will be wide open, so you can fit the red mask inside it, which is the guarantee that the interface will work as specified (at 480 Mbps in this case). Porting an existing PHY design validated on an older technology node (larger gate length and different design rules) is absolutely not straightforward; it may turn out that a complete redesign is the shorter path. Synopsys’ development of DesignWare IP and interface PHY at 20-nm focused on minimizing yield and manufacturability issues while adhering to the standards’ specifications, as well as TSMC’s advanced layout and design rules for manufacturability with double patterning technology. The result of these efforts can be seen in this eye diagram for the PCIe 2.0 PHY IP:

A very interesting point is made by John Koeter: “As the leading provider of physical IP with more than 80 test chip tape-outs in 20- and 28-nm, Synopsys is focused on developing IP in the most advanced process nodes to help designers take full advantage of the processes speed and power characteristics while implementing high-quality, proven IP,” said John Koeter, vice president of marketing for IP and systems at Synopsys. “By offering a broad portfolio of IP for the 20-nm process, Synopsys enables designers to more easily meet their goals of creating differentiated products with less risk and faster time to volume production, while also reducing the risks associated with moving to the 16-nm FinFET process.”
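For readers less familiar with how an eye diagram like the ones above is built: an eye is simply many unit intervals of the measured (or simulated) waveform overlaid on top of one another. The sketch below illustrates the idea with synthetic data; it has nothing to do with the Synopsys test-chip measurements:

```python
import numpy as np
import matplotlib.pyplot as plt

ui = 1 / 480e6                       # one unit interval at USB 2.0 high speed
t = np.linspace(0, 2 * ui, 200)      # two UIs per overlaid trace

rng = np.random.default_rng(1)
for _ in range(200):
    bits = rng.integers(0, 2, size=3)                    # random symbol pattern
    ideal = np.repeat(bits, len(t) // 3 + 1)[:len(t)]    # piecewise-constant levels
    noisy = ideal + 0.05 * rng.standard_normal(len(t))   # crude noise/jitter proxy
    plt.plot(t * 1e9, noisy, color="b", alpha=0.05)      # overlay all traces

plt.xlabel("time (ns)")
plt.ylabel("level")
plt.title("Overlaying many unit intervals forms the eye; the mask must fit in the opening")
plt.show()
```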

What type of applications will be targeted by SoCs designed in TSMC 20 nm, and later 16-nm FinFET, processes? Most probably mobile, smartphone or media tablet, for three main reasons: chip cost (area), performance and power consumption. Synopsys is claiming that TSMC’s 20SoC process enables designers to reduce power consumption by up to 25 percent or increase performance by 30 percent. Mobile applications are known to be MIPI-friendly, which is why the last eye diagram is for the MIPI D-PHY:

Availability
The Synopsys DesignWare USB 2.0 PHY, USB 3.0 PHY, DDR4 multiPHY, PCI Express 2.0 PHY, and MIPI D-PHY for the TSMC 20SoC process are available now; just click to get more information about silicon-proven Synopsys DesignWare USB, DDR, PCI Express and MIPI PHY IP.

If you want to see an eye diagram that is an analog designer’s nightmare, this one is a good example:

Eric Esteve from IPNEST



What’s in your network processor?
by Don Dingee on 09-19-2013 at 8:00 pm

Recently, one of those very restrained press releases – in this case, Mentor Graphics and Imagination Technologies extending their partnership for MIPS software support – crossed my desk with about 10% of the story. The 90% of this story I want to focus on is why Mentor is putting energy into this partnership.
Continue reading “What’s in your network processor?”


Cutting Debug Time of an SoC
by Daniel Payne on 09-19-2013 at 2:26 pm

The amount of time spent debugging an SoC dwarfs the actual design time, with many engineering teams saying that debug and verification take about 7X the effort of the actual design work. So any automation that reduces the amount of time spent in debug and verification would directly impact the product schedule in a big way.

An example Mobile Applications Processor block diagram is shown below and there are a few dozen IP blocks used in this Samsung Exynos 5 Dual along with the popular ARM A15 core.

I’m guessing that a large company like Samsung is likely using IP from third parties along with their own internal IP re-use to get to market quickly.

Virtual models can be used during design, debug and verification phases in order to accelerate the simulation speeds. An approach from Carbon Design Systems called Swap & Play allows an engineer to start simulating a virtual prototype with the fastest functional models, and then swap to a cycle-accurate model at some hardware or software breakpoint. So I could simulate a mobile device booting the operating system using fast functional models, and then start debug or analysis using more detailed models at specific breakpoints:

This Swap & Play feature allows software driver developers to independently code for their IP blocks using the 100% accurate system model without waiting for a cycle-accurate system to boot. Another benefit is that after the OS is quickly booted you can do performance optimization profiling on your applications or benchmarks.
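Conceptually, the flow looks something like the sketch below. This is plain Python pseudostructure of the idea, not Carbon’s actual API; the class names, breakpoint address and state fields are all invented:

```python
class FastFunctionalCPU:
    """Instruction-accurate model: fast, no cycle timing."""
    def run_until(self, breakpoint_pc, state):
        state["pc"] = breakpoint_pc      # boots the OS quickly, timing ignored
        return state

class CycleAccurateCPU:
    """RTL-derived ("Carbonized") model: slow, but 100% cycle accurate."""
    def __init__(self, state):
        self.state = state               # continue from the swapped-in state
    def step_cycles(self, n):
        pass                             # detailed, cycle-by-cycle simulation from here on

# Boot the OS with the fast model, then swap at the interesting breakpoint
state = {"pc": 0, "registers": {}, "memory_image": "..."}
state = FastFunctionalCPU().run_until(breakpoint_pc=0xC0008000, state=state)
accurate = CycleAccurateCPU(state)       # debug/profiling continues cycle-accurately
accurate.step_cycles(1_000_000)
```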

Carbon Design and ARM developers have collaborated to make this Swap & Play technology work with the latest ARM processor, interconnect and peripheral IP blocks. For semiconductor IP blocks like fabrics and memory controllers that don’t already have a fast functional model, you have several choices:

  • Create a SystemC or other high-level model.
  • Use a cycle-accurate model, aka a Carbonized model. Carbon automatically inserts the adapters needed to go from a Fast Model based system to an accurate model, and since most peripherals aren’t big bottlenecks in the actual system, executing them accurately only when needed traditionally has little impact on virtual prototype performance.
  • Automatically create a fast functional model from a Carbonized model using the SoCDesigner Plus tool.
  • Use a memory block. Often you need a place to read and write values.
  • Use a traffic generator or consumer model: a source of traffic on the system bus, or a sink for it.

I see a lot of promise in the third approach because it creates a fast functional model automatically.

For fabric and interconnects the SoCDesigner Plus tool can reduce the logic to a simple memory map, and then create the fast functional model directly from the Carbonized model.

Since memory controllers have configuration registers and other logic which must be modeled even in the fast functional models, a different approach is needed. Here, SoCDesigner Plus creates a fast functional model which incorporates the Carbonized memory controller to handle configuration accesses while also providing a direct path to memory contents for fast accesses. Since the registers in the memory controller are infrequently changed, using a Carbonized model doesn’t slow down the overall system simulation much, and the vast majority of accesses go directly to memory.
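A rough sketch of that split is shown below; the address map, sizes and method names are hypothetical, and this is not the actual SoCDesigner Plus implementation, just the routing idea:

```python
class HybridMemoryControllerModel:
    """Fast functional wrapper that keeps the Carbonized model only for
    configuration-register accesses (illustrative address map)."""
    REG_BASE, REG_SIZE = 0x4000_0000, 0x1000

    def __init__(self, carbonized_model):
        self.regs = carbonized_model                 # accurate but slow
        self.memory = bytearray(16 * 1024 * 1024)    # direct, fast backing store

    def read(self, addr, size=4):
        if self.REG_BASE <= addr < self.REG_BASE + self.REG_SIZE:
            return self.regs.read(addr, size)        # infrequent: accuracy matters
        off = addr & (len(self.memory) - 1)
        return int.from_bytes(self.memory[off:off + size], "little")

    def write(self, addr, value, size=4):
        if self.REG_BASE <= addr < self.REG_BASE + self.REG_SIZE:
            self.regs.write(addr, value, size)       # route to the accurate model
        else:
            off = addr & (len(self.memory) - 1)
            self.memory[off:off + size] = value.to_bytes(size, "little")
```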

Summary
Using fast, un-timed virtual models is one technique to reduce the debug and verification time on SoC projects. If your SoC design has an ARM core, then consider using the virtual modeling approach from Carbon Design Systems. Bill Neifert blogged about this topic in more detail on July 16th.



A Brief History of Magillem
by Daniel Payne on 09-19-2013 at 1:17 pm

Founders

Cyril Spasevski is the President, CTO and founding engineer at Magillem, bringing with him a team of engineers who were all experts in an SoC platform builder tool. In 2006 Cyril and his team met a seasoned businesswoman and decided to form Magillem. Design teams were struggling with different tools at different stages of the flow, redoing the same configurations at ESL and RTL to assemble their virtual platforms.


Cyril Spasevski, President and CTO
Continue reading “A Brief History of Magillem”