
Managing Semiconductor IP
by Daniel Payne on 01-21-2015 at 5:00 pm

SemiWiki blogger Eric Esteve does an excellent job writing about all of the semiconductor IP available, and the popularity of IP only grows each year. Here’s a projection from IBS showing semiconductor IP revenues reaching $4.7B by 2020:

Analyst Gary Smith divides IP into three broad categories: Functional, Foundation and Application.


An example of a functional platform would be IP provided by ARM, an example of a foundation platform would be IP like TI’s OMAP, and an example of an application platform would be IP or software from Audi for their navigation and infotainment systems.

Related – Filling the Gap between Design Planning & Implementation

A modern SoC contains roughly 200 IP blocks, which means about 80% of a chip is re-used. Here’s the chart from Semico Research:

Another trend accompanying increased IP use is the rising cost of software at each new node. Data from IBS shows that at the 22 nm node, SoC costs are dominated by software development rather than hardware design or manufacturing.

Related – Smart Collaborative Design Reduces Business Risk

One approach to manage these increased product costs is to use a functional virtual prototype:


Functional Virtual Prototype (Dassault Systemes)

 

This approach enables early hardware-software co-development, shortening the product life cycle: you can actually verify a new design before detailed implementation begins. The SoC, along with all of the related software IP, can become a system virtual prototype, managed with specialized software provided by Dassault Systemes.

Related – Enterprise IP Management – A Whole New Gamut in Semiconductor Space

GSA Working Group

Industry experts in IP will meet for two panel discussions and a webcast on Thursday, January 22, 2015 from 9 AM to noon (PT) at Synopsys in Mountain View, or by phone at 1-719-352-2630:

 

  • IP Management
    • Warren Savage, CEO, IPextreme, Moderator
    • Ranjit Adhikary, Director of Marketing, ClioSoft
    • Shiv Sikand, VP Engineering, IC Manage
    • Vishal Moondhra, VP Applications, Methodics
    • Michael Munsey, Director ENOVIA Semiconductor Strategy, Dassault Systemes
    • Kands Manickam, Senior VP & GM, IPextreme
  • IP Business Models
    • Warren Savage, CEO, IPextreme, Moderator
    • John Koeter, VP Marketing, Synopsys
    • Brian Gardner, VP Business Development, True Circuits
    • Oliver Gunasekara, CEO, NGCodec
    • Frank Ferro, Senior Director Product Management, Rambus
    • Marty Kovacich, CFO, Sonics

Also Read

Filling the Gap between Design Planning & Implementation

Smart Collaborative Design Reduces Business Risk

Enterprise IP Management – A Whole New Gamut in Semiconductor Space


Analyzing Power Nets
by Paul McLellan on 01-21-2015 at 7:00 am

One of the big challenges in a modern SoC is doing an accurate analysis of the power nets. Different layers of metal have very different resistance characteristics (since they vary so much in width and height). Even vias can cause problems due to high resistance. Typically power is distributed globally on high-level metal layers, which have the lowest resistance, but eventually, of course, the power has to get all the way down to the transistors through the much higher resistance metal 1 and 2 and the associated vias. A full analysis requires accurate resistance data in order to perform IR drop analysis and electromigration (EM) analysis, and thus to assess the implications for reliability and timing.

Silicon Frontline’s P2P (it stands for point-to-point) performs full-chip transistor level IR drop and EM analysis for the power net design. It is focused on providing easy-to-setup and easy-to-use analysis that has the speed and unlimited capacity to handle the whole chip.

Power nets form very complex systems, reaching all parts of the chip. One of the guiding principles of P2P is progressive verification—find the gross errors first, in a straightforward way, and save the compute-intensive verification for the troublesome layout issues.


First, for a qualitative view of the power net, the user performs resistance analysis. The user provides the GDS and top-level pad(s), and P2P automatically calculates resistance across the complete net and provides graphical and textual output of the results, color coded from blue to red according to resistance. The user can see at a glance the absolute resistance to every point on each power net, as well as the resistance gradient, making problem identification easy.

For a more detailed quantitative view, the user performs IR drop/EM analysis. The user just specifies the voltage sources and current sinks and P2P does the rest, completing a full power net analysis in minutes. Static currents can be given at any level of hierarchy and refinement: per block (e.g. an IP), per cell (e.g. a cell library element), or per transistor (P2P automatically characterizes device current according to device width).
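To make this concrete, here is a minimal sketch of the kind of static IR-drop calculation involved, using nodal analysis on a toy one-dimensional rail. The rail topology and all values are illustrative assumptions; this is not P2P’s implementation.

```python
# Toy static IR-drop calculation on a 1-D power rail (hypothetical values).
import numpy as np

n = 5          # rail nodes between the pad and the far end
r_seg = 0.2    # resistance of each rail segment, in ohms (assumed)
i_sink = 0.01  # static current drawn at each node, in amps (assumed)
v_pad = 1.0    # supply voltage at the pad

# Build the nodal conductance matrix G; the pad is held at v_pad.
g = 1.0 / r_seg
G = np.zeros((n, n))
for k in range(n):
    G[k, k] += g              # segment from the previous node (or the pad)
    if k + 1 < n:
        G[k, k] += g          # segment to the next node
        G[k, k + 1] -= g
        G[k + 1, k] -= g

# Current vector: every node sinks i_sink; node 0 also sees the pad injection.
i = np.full(n, -i_sink)
i[0] += g * v_pad

v = np.linalg.solve(G, i)     # node voltages along the rail
print("IR drop per node (mV):", np.round((v_pad - v) * 1e3, 2))
```

The same formulation scales to a full-chip network; the point of a tool like P2P is solving it with full-chip capacity and accurately extracted resistances.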

There are several ways of making use of this capability. One is full-chip, with currents for each circuit block defined as needed (determined by model availability and stage of design refinement). An alternative method is box-based, where the various blocks on the SoC are analyzed one after the other, in isolation, before all blocks are analyzed together in context. Additionally, the user can annotate arbitrary currents obtained from a variety of sources: for example, the peak block current from a SPICE simulation for one block, and a maximum IP block current from the provider’s datasheet. In this way, the user can analyze different time points with the SoC in different modes of operation, where these modes may place very different demands on the power distribution networks.


P2P visualization allows display of resistance mapping, voltage distribution, and current density in metal interconnect and vias. It can also highlight excessive current and produce layer-by-layer resistance reports making it easy to zero-in on critical contributors to total resistance.

In summary, P2P provides:

  • easy to set up and highly configurable, with no perturbation to the existing flow
  • fast, easy-to-use analysis of power nets, with unlimited capacity
  • accurate resistance extraction for pad to pad, point to point, and multipoint to multipoint resistance, with layer by layer reporting for each power net
  • resistance mapping of interconnect
  • fast, accurate static IR drop analysis
  • current density analysis highlighting EM issues

P2P can be used at any stage of design and verification; by providing accessible data covering resistance, potential and current distribution, it guides users toward designing robust power nets that can reliably deliver the needed currents to all areas of the die.


Not All RTL Synthesis Approaches are the Same
by Daniel Payne on 01-20-2015 at 7:00 pm

My first experience with logic synthesis was at Silicon Compilers in the late 1980s, using a tool called Genesil. Process technology has since moved from 3 um down to 20 nm, so there are new challenges for RTL synthesis. Today you can find logic synthesis tools offered by the big three in EDA: Synopsys, Cadence and Mentor Graphics. Since RTL synthesis has been around for decades, you may be lulled into thinking that all approaches are about the same and that the market is mature and rather static. If that were really true, why did Synopsys recently have to re-write their tool from scratch to meet the challenges of capacity, speed and Quality of Results (QoR)?

Mentor Graphics acquired Oasys Design Systems about 13 months ago, and with that move filled a gap in their digital implementation tool flow by adding RTL synthesis. Engineers at Mentor authored an 8-page white paper to explain their approach and how it differs from anything else out there. In general, a modern synthesis tool must provide SoC designers with:

  • High capacity, 100 million+ gates or cells
  • Fast runtimes in hours, not days or weeks
  • Acceptable QoR
  • Physical awareness to decrease design closure times
  • Standard inputs: Verilog, SystemVerilog, VHDL

Related – Oasys Bakes a PIE

Old Approaches
A traditional logic synthesis approach translates RTL code into gates, then optimizes the gates to meet your design specifications. More modern approaches feed physical information, such as estimated routing capacitances, back into the optimization phase. Optimizing the design at the gate level is a low-level, very localized approach, and it can require long run times as the design size increases.

This approach can force users to break their design up into smaller pieces and have separate synthesis runs, which in turn will increase design closure times.

Without using a full-chip floorplan, a traditional synthesis tool will cause many iterations between front-end and back-end designers trying to reach design closure. You don’t even know where the congestion bottlenecks are with this approach.

New Approach
Back in 2004 when Oasys was founded, they knew there had to be a better way to approach RTL synthesis; the RealTime Designer product came to life in 2009, and Mentor acquired the company in December 2013. Along the way Oasys received funding from Intel Capital and Xilinx, certainly two very large customers with some of the most complex SoC devices. Here’s what makes the new approach different:

  • Includes full chip-level physical synthesis
  • Placement-first
  • Floorplanning
  • Optimization at RTL level
  • Identifies and resolves timing, routability and power issues earlier in the design cycle

Related – Speeding Design Closure at DAC

An immediate benefit of this approach is that you can run RTL synthesis on a design with tens of millions of gates in just a few hours, not days. With physical synthesis, the tool divides the RTL into partitions that are placed using physical library cells, accurately estimating the interconnect between cells (a common wirelength proxy is sketched after the figure below). Both placement and timing information get updated with every optimization transformation. Congestion maps even give an RTL designer early feedback about routing issues that may limit the physical implementation.


Routing Congestion Map
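As a rough illustration of placement-based interconnect estimation, the sketch below computes half-perimeter wirelength (HPWL), a common industry proxy for a net’s routed length. The white paper does not spell out the estimator RealTime Designer uses, so treat this as a generic sketch, not the tool’s actual method.

```python
# Half-perimeter wirelength (HPWL): bounding-box estimate of a net's length.
def hpwl(pins):
    """pins: list of (x, y) placement locations of the cells on one net."""
    xs, ys = zip(*pins)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

net = [(0.0, 0.0), (12.0, 3.0), (5.0, 9.0)]     # hypothetical placed pins
print(f"estimated wirelength: {hpwl(net)} um")  # -> 21.0 um
```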

The RealTime Designer tool will automatically create a floorplan based upon each high-level module and other design data. Modules from the RTL are then assigned to physical regions of the floorplan. All of this physical placement info creates accurate interconnect estimates. You can add in any custom blocks or other hard macros required for your design, along with RTL source code. Here’s a picture of the floorplanning, placement and optimization steps:

Related – Oasys Announces Floorplan Compiler

Because RealTime Designer produces results so quickly, you can now afford to do some explorations to trade off power, performance, area, congestion and DFT goals.


Design Exploration

Instead of separating logical and physical design, this approach lets you cross-probe between RTL source code and critical paths in the physical design after floorplanning:


Cross-probing between design views

Within RealTime Designer you’ll find both static and dynamic power analysis, plus support for:

  • Multiple Vt libraries
  • Advanced clock gating
  • Multi-Corner Multi-Mode (MCMM)
  • Power density driven placement
  • UPF and multi-VDD
  • Interactive and batch analysis

DFT engineers can use the scan insertion feature, which minimizes interconnect in the scan chains and creates a standard SCANDEF file for place-and-route or ATPG tools; a toy sketch of placement-aware ordering follows the figure below:


Left: design without scan chain ordering.
Right: design with scan chains ordered with physical placement.
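The general idea behind placement-aware scan stitching can be sketched as a greedy nearest-neighbor ordering of the flops; the snippet below is purely illustrative and is not Mentor’s actual algorithm.

```python
# Greedy nearest-neighbor scan-chain ordering by placement (illustrative).
import math

def order_scan_chain(flops):
    """flops: dict of flop name -> (x, y) placement. Returns a stitched order."""
    remaining = dict(flops)
    # Start from the flop closest to the origin (assumed scan-in corner).
    current = min(remaining, key=lambda f: math.hypot(*remaining[f]))
    order = [current]
    cx, cy = remaining.pop(current)
    while remaining:
        # Hop to the nearest unstitched flop to keep scan wiring short.
        nxt = min(remaining, key=lambda f: math.hypot(remaining[f][0] - cx,
                                                      remaining[f][1] - cy))
        order.append(nxt)
        cx, cy = remaining.pop(nxt)
    return order

print(order_scan_chain({"ff_a": (9, 1), "ff_b": (1, 1), "ff_c": (2, 3)}))
# -> ['ff_b', 'ff_c', 'ff_a']
```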

Conclusion
Engineers at Oasys, now part of Mentor Graphics, have developed a new approach to RTL synthesis that can handle 100M+ gate capacity and produce results up to 10X faster than older architectures, while meeting your PPA (Power, Performance, Area) specifications. Read the complete 8-page white paper here for more details.


Methodics Access Controls
by Paul McLellan on 01-20-2015 at 7:00 am

My PhD thesis is titled The Design of a Network Filing System. Yes, that was a research topic back then (and yes, we did call them filing systems not file-systems). One big chapter was on access controls. There are several problems with designing an access control system:

  • it needs to be possible to implement it efficiently
  • it needs to be flexible enough to provide the controls that the organization requires
  • it needs to be comprehensible to the people who have to set up the controls

These are hard goals to reconcile. When I worked at VLSI Technology I was responsible for all the data management infrastructure and I soon discovered that the elegant (to me!) access control systems that a software engineer might dream up were not really usable by design engineers. They only wanted very basic capability, namely that most stuff should be shareable and alterable, except they also wanted the capability to take a snapshot of the design and keep it in a form that could never be changed. I invented something I called a read-only-library (actually a large file containing all the smaller files that made up the snapshot). This was especially useful at tapeout to capture the precise design that was taped-out and ensure nobody could change it (since then it would not match the masks), and for standard cell libraries that were also “released” in specific versions. That was about as far as IP went in the early 1980s when a state-of-the-art chip was 10,000 gates.

Now we are in a different world. As a couple of the speakers at SEMI’s ISS this week stated, the semiconductor industry is becoming a dumbbell, with one end being very large companies with broad IP portfolios (think Broadcom, say, or Synopsys) and at the other tiny companies whose business is selling one product, either as IP or a chip, with nothing much in between. Those large companies with huge IP portfolios have blocks that are generic and others that are crown jewels. They need very different access controls, since some products (an LTE modem, say) should be restricted on a need-to-know basis to ensure that a random engineer leaving for the competition hasn’t had access (unless working on the product, obviously), while others (a standard cell library, perhaps) need to be widely available to every design engineer.

These days we have powerful source control systems such as git and subversion. But these are designed for software projects and don’t really map directly onto the capabilities that are required for managing the IP lifecycle or the design of semiconductors with myriad views, hierarchy and a variety of requirements that don’t map well onto software (you can see the netlist but not the layout). This is where Methodics comes in, providing the capabilities for managing the IP lifecycle in a layer between the basic underlying data management system and the designers themselves.

For example, engineers integrating an IP block (say that LTE-modem) need one level of access, whereas engineers that are designing that block need another level of access. In particular, the first group of engineers need read-only access to some subset of the views of the block, whereas the designers of the block need to be able to alter it and fix bugs in it and create releases of it.

At the IP level there is some hierarchy in the sense that large IP blocks will pull in smaller IP blocks. For example, an Ethernet controller is an integration of a MAC (digital logic) and a PHY (largely analog) probably designed by different groups. It needs to be possible to control access to the high level (designers can alter it, for example) without automatically granting the same access to the lower-level IP blocks (the integration engineers cannot change the MAC).


Methodics ProjectIC provides access control commands that deliver this level of flexibility through the pi perm command. It is possible to do things like:

  • add a new user to the project with read access to everything
  • selectively remove access to lower-level IP blocks
  • preview what permission changes a particular command would make without making the changes, to check that no unwanted changes will happen
  • construct complex access controls that are not wholly hierarchical (a contrived example, sketched below: allow access to the US, deny it to California in general, but permit it for Palo Alto)
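A minimal sketch of how such most-specific-rule-wins resolution could behave is shown below; it is illustrative only and does not use the actual pi perm syntax.

```python
# Contrived sketch: the most specific matching rule on a path wins.
def resolve(rules, path):
    """rules: {prefix_tuple: allowed}; path: tuple like ("US", "CA", "PaloAlto")."""
    decision, depth = False, -1          # deny by default
    for prefix, allowed in rules.items():
        if path[:len(prefix)] == prefix and len(prefix) > depth:
            decision, depth = allowed, len(prefix)
    return decision

rules = {
    ("US",): True,                       # allow access to the US
    ("US", "CA"): False,                 # deny it to California in general
    ("US", "CA", "PaloAlto"): True,      # but permit it for Palo Alto
}
for p in [("US", "TX"), ("US", "CA", "LA"), ("US", "CA", "PaloAlto")]:
    print(p, "->", "allow" if resolve(rules, p) else "deny")
```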

Read the white paper on IP permissions management here


More articles by Paul McLellan…


Aldec increasing the return on simulation
by Don Dingee on 01-19-2015 at 10:00 pm

Debate rages about which approach is better for SoC design: simulation, or emulation. Simulation proponents point to software saving the need for expensive hardware platforms. Emulation supporters stake their claims on accuracy and the incorporation of real-time I/O. A few years back, some creative types coined the term SEmulation, a hybrid utilizing both approaches. A quick search turns up an Altera white paper on that exact topic, circa 2007, and an even older reference of first usage of the term in EDA around the year 2000.

Funky names aside, the drawback with most early-stage approaches to any complex problem is they are proprietary. A particular simulator environment was lashed to a certain model of FPGA-based hardware prototyping platform, with a high degree of knowledge about the internals of each required to make it work. A handy idea, but potentially expensive, inflexible, and locked in.

2007 is right about the time the SCE-MI specification was emerging in its original form from Accellera. SCE-MI standardizes the co-emulation modeling interface. It also spends a lot of effort on minimizing the interaction, or synchronization, required between the simulator and the emulator platform. Simulators like events, whereas emulators prefer timed sequences.

SCE-MI establishes the idea of transactors, connecting an untimed testbench in the simulator to timed modules in the emulator. This allows the emulator to run with its faster timing intact. Transactors form a pontoon bridge of sorts. By adhering to the SCE-MI standard, a simulator and emulator are loosely coupled, allowing replacement of one or both sides by adding the proper transactors. Internal knowledge is reduced, flexibility is greatly increased, and lock-in is avoided.
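Conceptually, a transactor behaves something like the toy sketch below: one untimed transaction from the testbench side is expanded into a timed, cycle-by-cycle sequence on the emulator side, so the two sides only synchronize once per transaction. The bus protocol here is invented; this is not the SCE-MI API.

```python
# Toy transactor: untimed transactions in, per-cycle pin activity out.
def untimed_testbench():
    yield {"op": "write", "addr": 0x10, "data": 0xAB}
    yield {"op": "read", "addr": 0x10}

def transactor(txn):
    # Expand one transaction into cycle-accurate signal activity
    # on a hypothetical bus.
    cycles = [("drive_addr", txn["addr"])]
    if txn["op"] == "write":
        cycles.append(("drive_data", txn["data"]))
    cycles.append(("strobe", 1))
    return cycles

for txn in untimed_testbench():
    for cycle, (signal, value) in enumerate(transactor(txn)):
        print(f"cycle {cycle}: {signal} = {value:#x}")
```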

Aldec has taken a big step forward with the latest release of their hardware emulation solution software. HES-DVM brings a powerful simulation environment compliant with SCE-MI, and allows the Aldec HES-7 FPGA-based prototyping system, another third-party SCE-MI-compliant platform, or custom in-house FPGA hardware to provide the hardware acceleration.

Users of UVM desperately need hardware acceleration. Even with continual improvements in constrained-random solvers and other algorithms, HDL simulators are still compute-intensive beasts, and as design size increases, simulation execution time goes up dramatically. An FPGA-based system provides effective acceleration for an HDL simulator without the massive cost of a full-blown hardware emulator.

HES-DVM 2014.12 brings improvements in three main areas:

  • Significant improvements in the SCE-MI 2 compiler expand its ability to convert behavioral code into synthesizable RTL targeting an FPGA. For example, the compiler now supports SystemVerilog DPI-C imports and exports as an SCE-MI function-based interface. Support for plusargs has been added, allowing arguments to be passed within transactors and aiding run-time configurable parameterization. Turbo Mode compresses design clocks so that edge sequences are preserved but periods are shortened. Signal forcing can drive values onto any design net in the RTL. Optimization of constant arguments to DPI-C function calls reduces synchronization and speeds up emulation.
  • Scalability improvements allow jobs to be scheduled on load sharing facility (LSF) compute farms. This also applies to scaling acceleration clusters that use multiple HES-7 or similar platforms.
  • Support has been added for the Cadence “NCSim” simulator, part of Cadence Incisive Enterprise Simulator. HES-DVM generates an SCE-MI DPI emulation bridge compatible with NCSim, and NCSim can be selected when creating a new project or setting simulation options.

Embracing SCE-MI and adding hardware acceleration dramatically increases the “return on simulation” for ASIC developers. Aldec continues to open their environment, combining their tools, popular tools from other EDA vendors, and custom hardware platforms into a complete solution for RTL verification.



Analyze Substrate Noise in SoC Design?
by Pawan Fangaria on 01-19-2015 at 4:00 pm

Substrate noise analysis often takes place only when everything is already on the chip, but that stage comes near tape-out, which is too late to make major changes in architecture or placement, or to introduce noise protection circuitry for the victims. That was acceptable when there was very little analog content on the chip. But in today’s SoCs, where substantial analog and RF content (often in the form of specialized IP) is intermingled with large digital content, waiting until tape-out risks the schedule, because substrate noise can pose severe risks to those sensitive IP blocks. Multiple RF circuits may operate in close proximity to fast-switching circuits, specifically in wireless applications. If not controlled properly, substrate noise can severely affect the performance of SoCs.

Although analog and digital blocks have separate power and ground supply structures, they are etched on a common silicon substrate, thus allowing the generated noise to propagate through the substrate. There are isolation techniques to separate these two types of circuits, but how to determine which technique and how much control is appropriate in a particular situation, such that the SoC is neither overdesigned nor left susceptible to substrate noise that can limit its performance? There is a need to accurately model and predict the substrate coupled noise and add appropriate isolation structures to design a robust SoC.

Ansys’s Totem-SE supports different isolation structures in its analysis, including P+ guard-ring, N+ guard-ring, N-well wall, deep N-well wall, and deep N-well pocket. Totem-SE considers all substrate layers and the necessary technology parameters in constructing the substrate RC network, and models all pertinent noise injection elements such as standard cells, memories, IOs, and specific analog and custom circuits, thus accurately analyzing which isolation structures best fit a given scenario.

Experimental results with different isolation structures between a digital processor core and an analog block show close correlation between measurements from silicon (blue waveform) and the prediction from Totem-SE (pink waveform). The waveform indicates the worst noise amplitude starting from the digital circuit, passing through the different isolation structures, and entering the analog circuit. Totem-SE can also provide DvD (Dynamic Voltage Drop) maps for various substrate layers, helping designers determine optimal locations for isolation and protection structures in their designs.

Totem-SE can be used for full-chip analysis in which the noise injected by various digital and analog components is modeled and propagated through the on-die, package and substrate parasitic network. The noise waveforms from the full-chip analysis can be captured at specific locations of an analog block and used as PWL input to IP-level timing and functional simulations in SPICE, improving their accuracy. By using the true voltage noise signature, the impact of the coupled noise on the IP can be explored to determine whether additional layout changes or protection schemes are needed, thereby preventing silicon issues after tape-out.
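For illustration, turning a captured noise waveform into a SPICE PWL source is straightforward; the node name and sample values below are hypothetical.

```python
# Emit a SPICE PWL voltage source from (time, voltage) noise samples.
samples = [(0.0, 0.000), (1e-9, 0.012), (2e-9, -0.008), (3e-9, 0.003)]

pairs = " ".join(f"{t:.3e} {v:.4f}" for t, v in samples)
print(f"Vsubnoise sub_node 0 PWL({pairs})")
```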

In an SoC design, the noise generated by a digital circuit can be controlled by several means, including isolation/protection structures, decoupling capacitance, power grid robustness, the number of active blocks and their activity, and distance from active elements. Totem-SE is versatile enough to fit various customer-specific SoC design flows and to start substrate noise analysis as early as possible. Ansys provides a whole suite of tools for power, noise and reliability analysis and optimization of SoCs.

Hagay Guterman from CSR and Jerome Toublanc from Ansys will present a joint paper at DesignCon 2015 describing a proven design flow that starts substrate analysis very early in the design stage, even with very basic chip information. The models of noise generation and of noise propagation through the substrate can be developed in parallel as the relevant data becomes available.

Reserve your seat for the following DesignCon 2015 session to learn more:

Session Code: 2-TH4
Track Name: 02 Analog and Mixed-Signal Modeling and Simulation Challenges
Paper Title: Substrate Noise Full-chip Level Analysis Flow from Early Design Stages Till Tapeout
Date: January 29, Thursday
Time: 11:05 AM – 11:45 AM

More Articles by Pawan Fangaria...


FDSOI jump-start 2015 in Tokyo this week
by Eric Esteve on 01-19-2015 at 4:38 am

The news in May 2014 that Samsung had licensed FD-SOI technology from STMicroelectronics was really amazing: most of the industry was expecting this kind of agreement, but not with the #2 semiconductor company. Since May 2014 the news flow has been quite thin, and we can imagine that both companies have had a lot of work to do to turn the agreement into reality. Transferring a completely new process is certainly a heavy task (though I am not an expert), and building an efficient ASIC flow, including EDA tools and, perhaps just as important, the right IP offering, can take several quarters. It seems that Samsung will unveil their FD-SOI offering during the RF-SOI and FD-SOI Forum, organized by the SOI Consortium in Tokyo this Friday, January 23rd. If you aren’t traveling to Tokyo this week (I’m not), you will have to wait for the proceedings to be released; they will certainly be worth the search: RF-SOI and FD-SOI Forum

If you are not familiar with SOI, let me clarify the difference: RF-SOI is dedicated to pure analog (radio frequency) ICs, as silicon-on-insulator technology provides strong advantages for analog designs: “Devices formed on SOI substrates offer many advantages over their bulk counterparts, including absence of reverse body effect, absence of latch-up, soft-error immunity, and elimination of junction capacitance typically encountered in bulk silicon devices. SOI technology therefore enables higher speed performance, higher packing density, and reduced power consumption.” This sentence is extracted from this blog.

Samsung has licensed Fully Depleted (FD)-SOI, a technology dedicated to digital designs. The supported nodes are 28nm (already in production at ST) and 14nm, currently in development if not prototyping. We have covered the various applications supported by FD-SOI ASICs at ST; surprisingly, mobile was not the preferred application! 28nm FD-SOI can also target performance-hungry networking applications, and you can check this recent blog for mention of an ST design win for a communication infrastructure ASIC in 14nm FD-SOI. Nevertheless, Samsung’s foundry business is likely to target very high volume applications, and what has higher volume than mobile? FD-SOI can provide the Holy Grail for an application processor: ultra-low power combined with performance.

If you take a look at the FD-SOI part of the agenda, the foundry ecosystem is represented by ST and Samsung, and I am very eager to read Samsung’s presentation (and to report on it in SemiWiki)… be patient.

Any designer knows that technology availability is necessary, but not sufficient, to cope with time-to-market and integration requirements. A complete ecosystem has to include a strong IP (and, obviously, EDA) offering. The large and fast-growing IP vendors Synopsys and Cadence are presenting in Tokyo. You may be surprised not to see ARM (the undisputed #1 vendor), but don’t worry: one of the very first chips released by ST a couple of years ago was an application processor integrating an ARM9 core, and more recently an AP integrating ARM Cortex-A53 and Cortex-A57 64-bit processors on 28nm FD-SOI.

The presence of VeriSilicon on this agenda is very important, as the ASIC company is very strong in the Chinese market. When I met Mark Ma, who had traveled from China to IP-SoC in Grenoble last November, his first question was about FD-SOI. The technology is clearly becoming hot, generating real interest in the country…

With this SOI Forum taking place in Japan, having a company like Sony share its design experiences with FD-SOI is almost symbolic: FD-SOI penetration into consumer applications has started (if you agree to include mobile in consumer, as a smartphone or tablet is definitely a consumer-oriented product).

Next “rendez-vous”: the SOI Forum in San Francisco on February 27th. I was told that new names will be added to that agenda; we will disclose them as soon as they are official, but I am sure you can guess who is missing in Tokyo and should be present in California!
From Eric Esteve, IPNEST


Wireless Charging: Magnetic Induction or Magnetic Resonance?
by Majeed Ahmad on 01-18-2015 at 7:00 pm

Standard wars are no stranger to technology business. In fact, they are the norm. Take, for instance, the rock star technology of 2015—wireless charging. Magnetic induction or magnetic resonance: which standard will dominate the wireless power ecosystem?

That’s the crucial question as wireless charging continues to gain prominence in the technology industry while, at the same time, looking like an assortment of workable ideas, each with its own pros and cons. The stellar promise of wireless power comes down to a critical premise: the clarity that OEMs such as smartphone makers need to decide whether to incorporate wireless charging as a standard feature in their products.

Ask Integrated Device Technology Inc. (IDT), which offers wireless charging products for both magnetic induction and magnetic resonance technologies and is a board member of both the Wireless Power Consortium (WPC) and the Alliance for Wireless Power (A4WP) standards bodies. According to Arman Naghavi, Vice President and General Manager of the Analog and Power Division at IDT, “right now the wireless power technology is at step one if there are going to be ten steps.”

So it’s worthwhile to take a look at the wireless power standards maze and make sense of each technology’s merits amid this standards battle and the possible consolidation of these standards later this year.

Magnetic Induction: Device on Mat

The technology is built on two coils of wire: the coil at the charging station produces an oscillating magnetic field, which in turn induces an alternating current in the coil of the device being charged.


Image credit: IDT

Qi (pronounced “chee”) is driving the magnetic induction wireless charging standard and boasts more than 200 members. The Qi standard, developed by the WPC (which was established back in 2008), won early adoption but has since taken a back seat due to a number of compromises. First and foremost, it’s slightly more expensive to produce compared with methods that don’t require a coil.

Second, because Qi uses tightly coupled coils for high transfer efficiency, the arrangement is sensitive to coil misalignment, which in turn limits the distance between device and docking station.

Another wireless power standard using inductive charging has been developed by the Power Matters Alliance (PMA). The PMA standard, which is quite similar to Qi, counts Starbucks and McDonald’s among its early adopters. PMA has recently joined hands with A4WP, the standards body driving the adoption of wireless charging based on magnetic resonance technology.

Magnetic Resonance: Proximity

Magnetic resonance technology still uses a coil arrangement to create a usable magnetic field, but a loosely coupled one, and it tunes transmitter and receiver to precisely the same resonant frequency of oscillation. Compared with the closely coupled coils of magnetic induction, magnetic resonance increases transfer distance, but the looser coupling between the coils leads to less efficient power transfer.
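The trade-off can be quantified with the standard figure of merit for an inductive link, where the peak achievable efficiency depends on the coupling coefficient k and the coil quality factor Q. The numbers below are illustrative assumptions, not measurements of any product.

```python
# Peak link efficiency vs. coupling, using the standard (kQ)^2 figure of merit.
import math

def max_link_efficiency(k, q_tx, q_rx):
    fom = k * k * q_tx * q_rx                     # figure of merit
    return fom / (1 + math.sqrt(1 + fom)) ** 2

q = 100                                           # assumed coil quality factor
for k in (0.5, 0.1, 0.02):                        # tight -> loose coupling
    print(f"k = {k:<4}: efficiency = {max_link_efficiency(k, q, q):.1%}")
```

With Q = 100, tight coupling (k = 0.5) yields roughly 96% peak efficiency, while the loose coupling typical of larger distances (k = 0.02) drops it below 40%, which is exactly the distance-versus-efficiency trade the two camps are arguing about.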

The Rezence standard, spearheaded by A4WP, promises to charge multiple devices without having to worry about alignment and states 5 cm as a typical operating distance. The charging accessories and mobile devices based on the Rezence standard are expected to be available later this year.

Rezence block diagram (source: A4WP)

Consolidation Ahead

Chipmakers like Broadcom, IDT and MediaTek are launching board and coil designs that support both inductive and resonant coil systems. The partnership recently announced by A4WP and PMA is another harbinger of the coming consolidation in the wireless power standards domain, which should encourage device OEMs to commit to the highly promising but still embryonic wireless charging feature.

Another notable indication of imminent consolidation in wireless power standards is that industry heavyweights like Qualcomm and Samsung, once staunch supporters of A4WP, are now backing Qi as well. The fragmented world of wireless power simply reflects the technology’s infancy and the missing links that remain in the wireless charging landscape.

Bridge standards and dual-standard products could fill the void and bring more OEMs into the wireless power fold. Meanwhile, the two competing wireless charging technologies will most likely continue to collide and converge. Steve Goacher, Business Development Manager for TI’s Analog Wireless Power Group, believes both magnetic induction and magnetic resonance wireless charging standards will remain relevant: “Each technology has its pros and cons and I don’t see one dominating the other.”

Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


Tracing methods to multicore gladness
by Don Dingee on 01-18-2015 at 9:00 am

Multiple processor cores are now a given in SoCs. Grabbing IP blocks and laying them in a multicore design may be the easy part. While verification is extremely important, it is only the start – obtaining real-world performance depends on the combination of multicore hardware and actual application software. What should engineers look for in evaluating a multicore design?

A new white paper series from Mentor Embedded provides a perspective on this question. Compared to the single-core, or more accurately core-centric, approach, author Manfred Kreutzer suggests a multicore approach must address three issues to succeed.

The first is obvious: software must capitalize on concurrency and parallelism. Some scenarios may have the luxury of roughly equal numbers of threads and cores, but in most cases threads will outnumber cores. If tasks vastly outnumber cores, thread migration kicks in, with a high degree of time-slicing and likely a significant number of cache misses. Visualizing which threads run on which core provides clues into how well tasks are partitioned and assigned.

Resource utilization is the second issue. Even in simpler cases, threads are often non-symmetric; this results in some cores being near fully loaded, and some cores less loaded. There can be conflicts when I/O or interprocess communication enter the picture, throwing off an otherwise efficient thread of execution. This also hits at decisions such as core scaling; for instance, are eight cores necessarily better than four? The answer may not be so simple if more cores wait around more often, or fewer cores churn constantly at full power. Core asymmetry, such as ARM big.LITTLE or using GPU or DSP cores to accelerate tasks, may also be a consideration.

That suggests a third issue, which is a rude awakening for many designers: power consumption is highly software dependent. This is a result of a combination of factors, starting with core partitioning and DVFS, leading into caching and memory allocation, and to system issues such as waiting for I/O resources. In short, implementations must be power-aware – in both hardware and software. One of the capabilities needed is to trace thread execution and power consumption together, showing a correlation. Just as earlier generations of software tracing focused on source code constructs that were hogging execution time, newer tools can look for power hogs.

Doesn’t observing more variables mean more overhead? Most IP designs today are instrumented with performance counters, allowing sampling utilities to quickly grab snapshots. Sampling is best for a statistical overview of what is happening, not a detailed sequence of specific events. For more in-depth analysis, tracing performs consistent, time-stamped logging of system and user application events without blowing up overhead.

Tracing operates all the way from the hardware performance counters up to full application code, allowing not only a view of what is happening, but exactly why. Analysis based on tracing suggests how to improve the design. For instance, applying kernel tracing – even without any intent of debugging or modifying kernel code – can show how the system interacts with the kernel. User application space tracing can expose issues such as calls to pre-packaged libraries where no source is available.

The conclusion is that tracing scenarios in multicore designs need to consider mixed-domain data – a combination of elements captured from hardware and software, in kernel and application space. Rather than doing orthogonal analyses and trying to connect the dots, tools can provide correlated data from all these domains and illustrate cause and effect. A key here is support for the LTTng open source tracing framework for Linux.
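As a toy example of such correlation, the sketch below aligns invented power samples with invented trace events by timestamp to attribute a power spike to a code region. It illustrates the concept only and says nothing about Sourcery Analyzer’s internals.

```python
# Align hardware power samples with software trace events by timestamp.
import bisect

power = [(0.0, 1.1), (1.0, 1.2), (2.0, 3.9), (3.0, 3.8), (4.0, 1.3)]  # (s, W)
events = [(0.2, "idle loop"),
          (1.8, "codec_encode() enter"),
          (3.6, "codec_encode() exit")]

times = [t for t, _ in events]
for t, watts in power:
    if watts > 3.0:  # flag samples above an assumed power budget
        i = bisect.bisect_right(times, t) - 1
        region = events[i][1] if i >= 0 else "before first event"
        print(f"t={t}s: {watts} W during '{region}'")
```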

For the complete text of the white paper (one-time Mentor registration required), visit:

Development of Complex Multicore Systems: Tracing Challenges and Concepts (Part One)

Part Two of the white paper series goes deeper into a tracing cycle and use of Mentor Embedded Sourcery Analyzer tools to explore these tracing concepts. Pawan Fangaria has further analysis:


Johan Peeters on quick IP check through Cdiff
by Pawan Fangaria on 01-17-2015 at 8:00 am

On the face of it, a simple ‘diff’ utility needs no explanation; almost everyone in our community has used one. But imagine the CTO of a company investing his time in explaining how beneficial a specialized ‘Cdiff’ function can be in evaluating IP. Today’s SoC design world can’t live without IP, so it is worth looking at in detail. Let’s dig in to find out how Cdiff can help in quickly checking an IP.

Fractal Technologies CTO Johan Peeters says that Cdiff (you may call it Crossfire-diff) is the result of active discussions with the Crossfire user community, which is very explorative; that is natural, since the time-to-market window is very short and users have to find alternative ways of completing their designs with whatever a tool or technology provides. A smart tool provider notices what is being done repeatedly and automates it. It was observed that designers were very frequently inspecting new models to locate any differences from previous ones. Although Crossfire could be used to locate those differences in other ways, a push-button Cdiff option on the GUI is a much smarter way to improve designer productivity.

One can quickly check every new IP shipment for the requested changes, as well as for the absence of any unexpected or spurious changes, in order to qualify the IP for use. Of course, Crossfire can check the whole IP for quality anyway, but the Cdiff option can quickly perform the incoming acceptance test against a golden reference. The option can be set for a full diff between two .lib files, with a complete check of parameter values and a configurable tolerance limit.

One could ask why gvimdiff can’t simply be used. Well, it can’t: imagine the kind of result you would get by comparing two .lib files, where formatting, ordering and indentation differences can swamp the output. Cdiff, on the other hand, checks the .lib semantics as intended and flags meaningful, real issues such as a missing pin or cell, or an extra reset-to-Q arc on a flop. Cdiff is smart in the sense that it reuses gvimdiff functionality for reporting. The picture above shows an example where two .lib files are shown in gvimdiff with identical cell order and text formatting; it highlights only an extra arc in the second .lib file, with a message to locate it in the original file. Now imagine locating just this one arc difference among a large number of different process conditions and formatting differences using gvimdiff alone!
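To see why semantic comparison beats text comparison, the toy sketch below reduces two library descriptions to sets of facts, so ordering and formatting differences vanish and only real differences, like an extra arc, remain. This illustrates the principle; it is not Crossfire’s implementation.

```python
# Semantic diff of two toy libraries as sets of (cell, kind, item) facts.
def facts(lib):
    out = set()
    for cell, desc in lib.items():
        out |= {(cell, "pin", p) for p in desc["pins"]}
        out |= {(cell, "arc", a) for a in desc["arcs"]}
    return out

golden = {"DFF1": {"pins": ["D", "CK", "Q"], "arcs": [("CK", "Q")]}}
update = {"DFF1": {"pins": ["CK", "D", "Q"],              # reordered: no diff
                   "arcs": [("CK", "Q"), ("RN", "Q")]}}   # extra arc: flagged

for fact in sorted(facts(update) - facts(golden), key=repr):
    print("extra in update:", fact)
for fact in sorted(facts(golden) - facts(update), key=repr):
    print("missing in update:", fact)
```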

Another way to look at Cdiff, from a design perspective, is that it lets a designer converge on tape-out faster. Imagine that at tape-out time your IP supplier delivers a small incremental update to the IP. You can quickly check for the intended change, and if there are other changes that implicate other steps of the flow, such as GDS and characterization, you can ask your IP supplier about the reasons for them. The precious design time at the tape-out stage can then be spent in these discussions rather than in re-qualifying the whole IP through Crossfire, which happens anyway.

From a difference-from-golden perspective, Cdiff does everything Crossfire does, reporting differences in cells, pins, timing arcs, pin shapes, cell functionality, and so on. Crossfire, however, is much more versatile in assessing the quality of an IP, covering aspects such as pin routability and cell-delay monotonicity across temperature.

Read the white paper, where Johan answers intriguing queries from the designer community in detail. He also speaks about future capabilities that could be added to Cdiff, such as a ‘filter on differences-of-interest’ and leveraging the concept with repositories.

More Articles by Pawan Fangaria...