IP Vendor Nabs Top Talent from Semiconductor Industry
by Tom Simon on 02-07-2016 at 4:00 pm

The growth of mobile and IoT has helped increase the demand for One Time Programmable Non-Volatile Memory (OTP NVM) as a solution for on-chip storage. To continue to meet this demand and grow with it, industry leader Sidense has recently brought on board seasoned semiconductor executive Ken Wagner as VP of Engineering. He was most recently at PMC as VP of Engineering for Communications Products. Also at PMC, in his role as Distinguished Engineer, he led a number of initiatives, including low power and IP.

During his hectic first few weeks at Sidense I was able to speak with him about his background, his thoughts on the markets that Sidense addresses and the future. At PMC he was heavily involved as a consumer of chip level IP. He sees IP as a very interesting segment. It has evolved quite a bit over the years, not unlike EDA or the semiconductor segments. Initially there were many smaller players but IP, like the rest, has seen consolidation that has allowed larger players to emerge. He cites ARM as a good example of this phenomenon. As this has happened IP has become a much larger market.

Ken felt that Sidense has an excellent technology foundation and a strong customer base. This affords good opportunities for new product development. Ken was naturally a bit reluctant to get into specifics about their plans though.

In his opinion their OTP IP is world class and very secure. This is the kind of building block needed for some of the biggest and fastest growing markets, such as automotive, mobile and IoT. Automotive in particular is seeing new standards for security. OTP NVM is extremely useful in helping to fight hacking, a serious concern in the automotive market.

Another characteristic of the automotive market is that it is not always using the most advanced nodes. This is driven by reliability requirements that arise from the harsh environments found in automotive applications. In addition to these older, larger nodes, Sidense supports an extensive range of processes, including the most advanced FinFET nodes.

We also spoke about potential alternatives to OTP NVM. Sidense provides a solution that offers very high density. Their solution does not require any additional mask layers, unlike NAND flash. So Ken believes OTP NVM will always be the best choice for small byte count storage needs.

Even so, OTP array sizes can be quite large, making the technology suitable for applications like boot code storage. This offers the highest security for trusted boot. Sidense OTP NVM can even be configured to emulate multiple writes, allowing for in-field updates.
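
How do you get multiple writes out of write-once bits? A common generic technique (Sidense’s actual implementation is not public, so this is only an illustrative sketch) is to burn each “update” into the next unused slot and return the most recently burned slot on reads. A minimal Python model of that idea:

    # Minimal sketch of multi-write emulation on one-time-programmable bits.
    # Hypothetical model, not Sidense's implementation: each slot's valid
    # flag can go from 0 (unprogrammed) to 1 (programmed) exactly once, so
    # an "update" burns a new record and reads return the newest record.

    class EmulatedMTP:
        def __init__(self, num_slots):
            self.valid = [0] * num_slots       # one OTP "valid" bit per slot
            self.records = [None] * num_slots  # payload burned alongside it

        def write(self, value):
            for i, v in enumerate(self.valid):
                if v == 0:                     # first unprogrammed slot
                    self.records[i] = value
                    self.valid[i] = 1          # irreversible program step
                    return i
            raise RuntimeError("OTP array exhausted: no free slots left")

        def read(self):
            latest = None
            for i, v in enumerate(self.valid):
                if v:
                    latest = self.records[i]   # highest programmed slot wins
            return latest

    otp = EmulatedMTP(num_slots=4)
    otp.write(0xA5)         # initial field configuration
    otp.write(0x5A)         # in-field update burns a second slot
    print(hex(otp.read()))  # -> 0x5a, the most recent record

The cost of this scheme is that each update consumes another slot, which is why it suits occasional in-field updates rather than frequent rewrites.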

Ken pointed out that another critical Sidense competitive advantage is in write speeds. This saves time for configuring finished chips. He also mentioned that in addition to smaller memory arrays, Sidense has the most compact support logic and most flexible internal power supply options, eliminating the need for extra external power pins and additional power-net routing on chip.

In closing our conversation Ken said that he is excited for what lies ahead in his new position. Sidense is growing and there are many opportunities for additional growth ahead. For more information about Sidense’s OTP offering you can visit their website at www.sidense.com.


Synopsys’ New Circuit Simulation Environment Improves Productivity — for Free
by Pawan Fangaria on 02-07-2016 at 12:00 pm

When technology advances, complexities increase and data size becomes unmanageable. Fresh thinking and a new environment for automation are needed to provide the required increase in productivity. Specifically, in the case of circuit simulation of advanced-node analog designs, where precision is paramount and a large number of simulations must be performed over multiple corners under different modes of operation and testbenches, one cannot rely on manual management through legacy scripts. Of course a designer’s expertise is most important in the case of analog design, but circuit simulation of these designs requires an intuitive, intelligent and automated environment for analyzing the huge volume of simulation results to boost productivity.

Having worked on a custom design environment in my previous job, I have seen how native environments make simulators more powerful. It’s the environment that provides a complete solution and is the differentiator against point tools. An individual simulator can be integrated into design flows and managed through scripts, but that approach limits the scope for management and control of regressions; moreover, the full potential of the simulator can seldom be exploited.

A simulator integrated in a third-party design environment gets tied up in that environment, and there is often a lag in gaining access to its new features and capabilities; this problem was evident during my talk with Geoffrey Ying, Product Marketing Director for the AMS group at Synopsys. On February 3, Synopsys announced a brand new native environment that will be included at no additional cost with its circuit simulators. It appears to be a very powerful simulation environment; I personally liked some of the features that cater to key requirements of the analog design community today.


The Synopsys Simulation and Analysis Environment (SAE) will be integrated with HSPICE, FineSim SPICE and CustomSim in the 2016.03 release, with the latest advanced simulation features available in the GUI environment. The highest level of precision for 16/10nm FinFET, as well as performance up to the giga-scale level, can be obtained in the same environment by utilizing different simulators. The environment is easily customizable, and long simulation jobs can be distributed across a network.

A netlist read directly into the environment, intelligently parsed and understood in the right context, provides a unique capability for analog and mixed-signal simulation setup, where the netlist can be cross-probed with a visual representation of different simulation blocks under different directives. Testbenches, analysis types, design parameters, output measurement statements, and various other simulation settings can be set up with ease. TCL scripts generated from interactive sessions can be customized further for regression runs in batch mode.

The SAE can directly read in data from the design netlists, and analog/digital partitions can be easily visualized in the language-sensitive text editor or schematic of a mixed-signal design. Annotation for interface elements can be added in the schematic as well as the text view. There are interesting simulation management features to assist comprehensive sweeps, corners and Monte Carlo simulation. An intuitive color scheme is used to distinguish between min and max, or pass and fail, with ease. Multiple testbenches are supported, with a history of results preserved for each testbench. Active job monitoring is supported: a long-running job can be halted to let higher-priority jobs proceed, for better resource management.
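
SAE’s scripting interface isn’t something I can reproduce here, but the bookkeeping such an environment automates is easy to picture. Below is a generic Python sketch (hypothetical testbench and corner names, and a placeholder in place of the real simulator launch) of the corner-sweep regression housekeeping that would otherwise live in legacy scripts:

    # Generic sketch of corner/testbench regression bookkeeping of the kind
    # SAE automates; names and structure are hypothetical, not the SAE API.
    import itertools

    testbenches = ["tb_opamp_gain", "tb_opamp_psrr"]   # hypothetical names
    corners = ["tt_25c", "ff_125c", "ss_m40c"]
    supplies = [0.72, 0.80, 0.88]

    def run_sim(tb, corner, vdd):
        # Placeholder: a real flow would launch HSPICE/FineSim here and
        # parse the measurement output; this stub returns canned numbers.
        return {"gain_db": 62.0 if corner != "ss_m40c" else 54.0}

    results = []
    for tb, corner, vdd in itertools.product(testbenches, corners, supplies):
        meas = run_sim(tb, corner, vdd)
        passed = meas["gain_db"] >= 60.0       # spec limit for this sweep
        results.append((tb, corner, vdd, meas["gain_db"], passed))

    # Report failures first, the way a pass/fail-sorted results view would.
    for tb, corner, vdd, gain, passed in sorted(results, key=lambda r: r[4]):
        print("pass" if passed else "FAIL", tb, corner, vdd, gain)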

The SAE provides a powerful post-simulation analysis and debugging environment where results are visible, along with dynamic data filtering for pass/fail/all and their visual indicators, as soon as they become available after the simulation run. The current results can be easily compared with historical results available in the system. The SAE also provides tight integration with Custom WaveView for debugging with actual waveforms.


The Synopsys SAE provides a unique and powerful data mining and charting capability for root-cause and correlation analysis. Statistical and multi-parameter charts are presented that point toward the best set of desired parameters. The reports are web-based, with hyperlinks in the testbenches that can be used for quick access to their corresponding results. The results can be navigated to analyze detailed measurement data and also to view the waveforms saved during simulation; that’s a unique feature.

Synopsys disclosed in its press release that Samsung has deployed this new simulation environment in its System LSI business unit. Samsung’s System LSI BU was an early collaborator with Synopsys in conceptualizing this new environment, making this very much a customer-driven initiative.

The Synopsys SAE will be included with existing Synopsys simulators in the 2016.03 release in March of 2016 without any additional cost. This is a step in the right direction towards increased analog automation. Analog automation cannot be like digital; however, intelligent, assisted tools and environments like these can increase productivity and throughput significantly for analog designs.

I guess most of the analog design community is familiar with, and has worked with, Cadence ADE, which has been around for a long time. Now the Synopsys SAE is available for all analog and AMS designers using Synopsys simulators. Going forward, it is expected to interoperate with other simulators as well.

The Synopsys press release is HERE.

Here is the link to register for the SAE webinar: Improving Analog Verification Productivity Using Synopsys Simulation and Analysis Environment (SAE)

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


How 16nm and 14nm FinFETs Require New SPICE Simulators
by Daniel Payne on 02-07-2016 at 7:00 am

About 35 years ago the first commercial SPICE circuit simulators emerged, and they were quickly put to work helping circuit designers predict the timing and power of 6um NMOS designs. Back then we had to limit our circuit simulations to just hundreds of transistors and interconnect elements to fit into RAM and complete simulation runs overnight. Today we enjoy smart phones with 5.7″ displays built on FinFET nodes at 16nm and 14nm, but what kind of SPICE circuit simulator do circuit designers need to be using? Let’s start by looking at an idealized FinFET and note the 3D nature of the transistor, especially how tall it has become:


FinFET 3D structure, Source: Intel Corporation

Some of the new challenges with FinFET design are:

  • The number of device parasitics has increased
  • Layout rules are more complex and prohibitive
  • Device noise must be included for accurate analysis
  • Model evaluation is more compute intensive
  • SPICE simulation run times are increased

Many consumer and industrial applications are already using 16nm and 14nm FinFET technology for chips used as application processors, graphics processors, FPGAs and memory. The typical trade-offs occur in both FinFET and planar CMOS technologies: power versus speed versus area versus reliability. To reach your market window on time it is important to have EDA tools and a methodology that are up to the challenge.

Consider designing a PLL (Phase-Locked Loop) circuit: device noise must now be included in closed-loop PLL phase noise analysis, or else your results will be too inaccurate (~30dB difference):


PLL Analysis must include device noise

Device noise is now a first order effect for FinFET transistors, so it’s an added type of modeling required for noise sensitive designs. Adding device noise will slow down the simulation speed of a SPICE circuit simulator.

Going back to the initial example of a smart phone which is battery-powered, we know that circuit designers are minimizing power to extend the battery life by using power gating, adding read/write assist circuits, and using multiple operating voltages. All of this circuit complexity impacts on-chip variations that lead to a jump in the number of PVT (Process, Voltage, Temperature) corner simulations required:


Number of PVT corners increases at 16nm, 14nm
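
The corner explosion is simple combinatorics. Here is a back-of-the-envelope Python sketch; the category counts below are illustrative, not taken from the chart above:

    # Back-of-the-envelope PVT corner counting with illustrative categories.
    import itertools

    process = ["ss", "tt", "ff", "sf", "fs"]        # global process corners
    voltages = [0.72, 0.80, 0.88]                   # nominal +/- 10%
    temps = [-40, 25, 125]                          # degrees C
    modes = ["active", "sleep", "read_assist_on"]   # operating modes
    extraction = ["cworst", "cbest", "rcworst", "rcbest", "typical"]

    corners = list(itertools.product(process, voltages, temps,
                                     modes, extraction))
    print(len(corners))  # 5*3*3*3*5 = 675 simulations for ONE testbench

Every additional operating mode or extraction corner multiplies the total, which is why the corner count jumps so sharply at 16nm and 14nm.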

With FinFET transistors, the physical layout has more parasitic RC elements to extract and simulate in SPICE than with planar transistors, plus there are more coupling capacitors, which tend to bog down SPICE circuit simulators even more. LDE (Layout Dependent Effects) contribute to longer-running circuit simulations, and the following chart depicts how parasitic complexity increases as a function of smaller geometries:


Parasitic complexity versus process geometry

On the reliability front we know that FinFET transistors can degrade over time by a few effects:

  • Hot Carrier Injection (HCI)
  • Positive Bias Temperature Instability (PBTI)
  • Negative Bias Temperature Instability (NBTI)

With the thinner dielectrics used in FinFET designs, device aging actually increases the threshold voltages of P- and N-channel devices and degrades channel carrier mobility, both of which make the circuit perform more slowly and can even shorten its lifetime. These aging effects need to be simulated in SPICE to understand the reliability impact.
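
For a feel of why aging must be simulated, BTI threshold shifts are often summarized with a power-law model of the form ΔVth = A·exp(-Ea/kT)·tⁿ. The following is a toy Python sketch of that generic textbook form with made-up coefficients, not a foundry model:

    # Toy power-law BTI aging model: delta_Vth = A * exp(-Ea/kT) * t^n.
    # Generic textbook form with made-up coefficients, not a foundry model.
    import math

    def delta_vth_mV(t_years, temp_C, A=45.0, Ea_eV=0.10, n=0.16):
        k_eV = 8.617e-5                    # Boltzmann constant, eV/K
        T = temp_C + 273.15
        t_sec = t_years * 365 * 24 * 3600
        return A * math.exp(-Ea_eV / (k_eV * T)) * t_sec ** n

    for years in (1, 5, 10):
        shift = delta_vth_mV(years, temp_C=125)
        print(years, "yr:", round(shift, 1), "mV threshold shift")

The point of the exercise: the shift grows with both time and temperature, so a circuit that meets timing fresh out of the fab may fail years later, which is exactly what aging-aware SPICE analysis is meant to catch.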

Analog FastSPICE

The good news is that the SPICE circuit simulator of 35 years ago has been dramatically re-architected to address each of the issues raised so far when designing with FinFET transistors. Just two years ago Mentor Graphics acquired Berkeley Design Automation and its Analog FastSPICE circuit simulator to better serve the needs of designing with FinFETs. Planar transistors have been simulated with standardized models like BSIM4 from UC Berkeley; for FinFET transistors, however, a new model called BSIM-CMG had to be used to account for the 3D structure and new effects:


BSIM-CMG model, Source: University of California-Berkeley

This new BSIM-CMG model can run 2X slower than the previous BSIM4 models, but thanks to the new optimizations in the Analog FastSPICE (AFS) tool you can expect accurate results while using the same memory during simulation.

With AFS as your circuit simulator you can enjoy a modern architecture that delivers more than 120dB of dynamic range, produces results faster than other SPICE simulators, uses multithreading for the best sequential runs, handles more than 10M elements, and allows verification of full circuits including parasitics.

Read the complete White Paper online, or jump straight to AFS product details.

Related Blogs


The Next Wave of Semiconductor Companies!
by Daniel Nenni on 02-06-2016 at 7:00 am

As we all know, venture capital has all but disappeared for semiconductor companies. Do semiconductor startups still exist, and where do they come from? I ask these questions quite frequently, but bloggable answers are hard to come by. When I asked Mike Gianfagna of eSilicon during ISSCC, he reminded me of an old, yet newly relevant, source of emerging technology companies.

eSilicon has been working with a growing community of university researchers to address their multi-project wafer (MPW) service needs with their STAR online platform. It seems the innovation trail for semiconductor companies now starts with university research, not elevator pitches on Sand Hill Road.

Here are some interesting examples that I found:

Ambiq Micro was spun out from the University of Michigan
Ambiq Micro developed a patented Subthreshold Power Optimized Technology (SPOT™) platform that dramatically reduces the amount of power consumed by semiconductors. By applying SPOT, Ambiq produces the world’s lowest power real-time clock (RTC) and microcontroller (MCU), Apollo. Through the use of its pioneering ultra-low power technology, Ambiq is helping innovative companies around the world to develop differentiated solutions that reduce or eliminate the need for batteries, lower overall system power, and maximize industrial design flexibility.

Isocline was also spun out of the University of Michigan:
Isocline is giving senses and situational understanding to consumer products by dramatically improving their ability to interpret sensors, microphones, and cameras. Moore’s Law is under stress, and this has limited what new experiences chip companies can bring to consumer products. Isocline addresses this problem by developing a sensory processing method that uses analog techniques for signal processing and neural networks. What normally takes thousands of transistors can be done with dozens of transistors. Compared to existing chips, they get a 10-100x improvement across the board in performance, cost, and battery life.

Cubeworks was also spun out of the University of Michigan
CubeWorks was founded in 2013 to make the next-generation millimeter-scale computing available today. The company’s origins come from the Michigan Micro Mote (M3) initiative, a project from the University of Michigan seeking to push the frontiers of computing.

Seamless Devices was spun out of Columbia University:
Analog designers are faced with the challenges of designing higher-performance analog interfaces at lower supply voltages. Seamless Devices’ Switched-Mode Operational Amplifier (SMOA) provides a new class of feedback amplifiers, and addresses these issues through the application of patented switched-mode signal processing algorithms. Developed at Columbia University’s Integrated Systems laboratory, Seamless’ SMOA technology will help designers to achieve higher performance with the same power, or make the tradeoff to lower power while maintaining current performance levels, even as new process nodes continue to reduce supply voltages.

Ferric Semi was also spun out of Columbia University
Ferric is commercializing innovative DC-DC power converter chips and circuit IP based on patented thin-film power inductors for customers in both mobile and cloud computing. Ferric’s proprietary technology can be applied across a broad spectrum of power electronics ranging from full scale servers to the chip level.

Lion Semi was co-founded by Prof Le while he was at UC Berkeley
Lion is a fabless semiconductor startup designing power management ICs (PMICs) for mobile devices. Unlike today’s PMICs that require many large PCB inductors, they created a PMIC with zero PCB components and very small footprint. They are working on a patent-protected, revolutionary PMIC design that is unlike any solution available.

To me this is VERY encouraging. While the VC community is raising herds of unicorns for slaughter, semiconductor professionals are going about the business of bringing world-changing technology to the masses. In parallel with the university-born fabless semiconductor companies, I predict that the tens of thousands of semiconductor professionals negatively affected by the continuing industry consolidation will also be looking for MPWs in the not-so-distant future, absolutely.

More articles from Daniel Nenni


Let’s Reduce Wasted Energy in Server Farms
by Alex Lidow on 02-05-2016 at 4:00 pm

With the growth in streaming video and the promises of 50 billion IoT gadgets making our lives oh-so-much better, there is an alarming demand for online computational horsepower and bandwidth.

Why alarming? In 2014, data centers in the United States consumed approximately 100 billion kilowatt-hours (kWh) of energy. According to Sudeep Pasricha, an associate professor in the Department of Electrical and Computer Engineering at Colorado State University, “that’s almost twice the electricity needed to power the whole state of Colorado for a year.” Further, this growing and insatiable desire for digital content is actually polluting the environment: the massive data centers that house all this digital content on servers are now responsible for an astounding 2 percent of global greenhouse gas emissions, a share similar to today’s aviation industry.

Inefficient grid
To add insult to injury, the power needed to support this rapidly growing demand comes from an electrical grid that is wildly inefficient and is based on infrastructure that was created, in large part, more than a century ago. To put it simply, electricity goes through several conversion stages: first, from its origination at the power plant, then on to transmission through power stations before finally feeding the remaining energy through semiconductor chips to provide computer power to servers. And due to aging equipment, a significant amount of power is lost as it travels from the power plant to the computer chip that does all the actual computing work.

Just how significant is this waste? It turns out that the power grid supplies 150W of power to meet the demands of a digital chip that may need only 100W. Moreover, the amount of wasted energy is even greater, because every watt of power lost through power conversion is turned into heat, and that heat must be removed from the server farm by expensive and energy-intensive air conditioning. It takes about 1W of air conditioning to remove 1W of power losses, effectively doubling the inefficiency of this power conversion process. Not to mention the enormous amount of carbon dioxide emitted to power the air conditioning units that remove all that wasted energy.

In aggregate, the combined waste across the United States due to data center power conversion is enough to power over half of the state of Colorado.

Also Read: Submerging the Data Center


Limits of silicon
And if the inefficiencies and waste in the power grid aren’t enough, the power conversion process has been built around post-World War II silicon-based semiconductors, which have reached their theoretical power conversion performance limits. Consequently, these chips are responsible for creating additional power inefficiencies, with great financial and environmental costs.

However, new materials have emerged that can convert electricity more efficiently and at a lower cost. In short, superior crystal properties in these materials enable the elimination of the most wasteful final stages of conversion. It’s a dynamic similar to the evolution of air travel in the post WWII era. Initially, air travel across the country required at least one stop for refueling. When jet powered flight became commercially available, the increased fuel efficiency resulted in not only non-stop coast-to-coast travel, but also significantly reduced costs of the journey.

By eliminating the inefficiencies in this final stage in the server farm power architecture we can realize a direct saving of 7 billion kWh per year. This is doubled when air conditioning energy costs are added, bringing the total to about 14 percent of the total energy consumed by servers in the US alone. The cost savings are also significant. At the average cost of $0.12 per kWh, that’s a savings of $1.7 billion annually, which does not include the additional savings in system cost resulting from fewer power converters and air conditioners.
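
The arithmetic behind those figures is easy to verify; here is a quick Python check using the numbers quoted above:

    # Quick check of the savings arithmetic using the figures quoted above.
    us_datacenter_kwh = 100e9        # 2014 US data center consumption, kWh
    conversion_savings = 7e9         # final-stage conversion losses avoided
    total_savings = 2 * conversion_savings  # ~1W of cooling per 1W of loss

    share = total_savings / us_datacenter_kwh
    dollars = total_savings * 0.12   # average $/kWh quoted above

    print(f"{share:.0%} of US data center energy")  # -> 14%
    print(f"${dollars / 1e9:.1f}B per year")        # -> $1.7B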

While the need for computing power is only likely to increase in the upcoming years, technologies are appearing that will help reduce waste and drive subsequent environmental and financial savings that benefit future generations of information gluttons the world over.

We wrote a book about this subject. You can find it at: http://epc-co.com/epc/Products/Publications/DC-DCConverterHandbook.aspx



Qualcomm Goes Into the Data Center Thanks to Google
by Eric Esteve on 02-05-2016 at 12:00 pm

Server SoCs at the heart of the data center care little about power consumption, the opposite of application processors for smartphones. If you design a server multi-core SoC, you target the highest performance, in fact a combination of high frequency and the lowest possible latency, and try to pack as many CPU cores and as much embedded cache memory into a single chip as you can. The first limitation is die size: the chip should exhibit a yield compatible with semiconductor economics, and a power consumption compatible with the laws of physics (electromigration, voltage drop and thermal dissipation). Please note that I didn’t mention power efficiency, as I don’t think it’s really a concern for this type of SoC design.

You will appreciate how challenging the move is for Qualcomm to penetrate this data center market. In fact, Qualcomm desperately needs to find new market segments outside of mobile. Samsung and Apple, the top two leaders in high-end smartphones, are going vertical, integrating their internally designed APs. In the lower-end smartphone segment, Mediatek and Spreadtrum are now reaching double-digit market share. To escape from this squeeze, Qualcomm has to move into new applications. Qualcomm is good at designing application-specific processors; the company has built an impressive IP portfolio (ARM architecture license, DSP, GPU, Network-on-Chip, etc.) as well as experienced design teams, and is not afraid of the most advanced technology nodes like 16FF or even smaller.

Should Qualcomm attack a hypothetical IoT market characterized by sub-$5 processor prices, or the very dynamic data center market? I have made some market-size evaluations, grabbing data from IDC and Gartner. My evaluation is that the server SoC market weighed in at $15 billion in 2015 (99% captured by Intel); SoC shipments are in the 25 million per year range, leading to an ASP in the $500-600 range…

25 million units is roughly 100 times smaller than the AP market… but the ASP is 20x larger. Moreover, instead of a myriad of competitors who need to gain market share (at any price?), there is only one competitor… OK, it’s Intel.

If this news (Google will buy server SoCs from Qualcomm, if the expected performance is verified) is confirmed, and we should know by 2/11, it could ring the bell for a very interesting fight. On my right stands Intel, literally owning this market, and designing for the highest performance at any silicon area and power consumption expense. On my left stands Qualcomm, who was clever enough to kick competitors like Texas Instruments, Nvidia and STMicroelectronics out of a $30-40 billion AP market within five years (2005-2010). What makes this move so interesting is the fact that Qualcomm not only brings its own IP and ARM 64-bit CPUs, but also comes with a completely different design culture: design for power efficiency.

What happens when an SoC consumes as much power as it does in today’s data centers? This incredibly high power consumption impacts the cost of ownership at every stage, not just the electricity bill.

  • The package must have an exceptional theta-JA (junction-to-ambient thermal resistance) characteristic, making it more expensive than a standard package.
  • Power must be dissipated by means of a heat sink mounted on the package.
  • The rack itself has to be cooled.
  • The data center has to be cooled, and the amount of electricity spent to run the cooling system is higher than the electricity spent to power the data center chips themselves.

Last but not least, as with any IC, SoC performance degrades as the junction temperature goes up.
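
The package constraint above follows directly from the standard junction-temperature relation Tj = Ta + P × θJA. A quick Python illustration with representative numbers (not for any specific server SoC):

    # Junction temperature from the standard relation Tj = Ta + P * theta_JA.
    # Numbers are representative, not for any specific server SoC.
    def junction_temp(ambient_C, power_W, theta_ja_C_per_W):
        return ambient_C + power_W * theta_ja_C_per_W

    # A 100W SoC in a 45C rack needs a very low effective theta-JA
    # (package + heat sink + airflow) to stay under a ~105C junction limit:
    for theta in (1.0, 0.6, 0.3):
        tj = junction_temp(ambient_C=45, power_W=100, theta_ja_C_per_W=theta)
        print(f"theta_JA={theta} C/W -> Tj={tj:.0f} C")

Halving the SoC power directly relaxes the thermal resistance budget, which is where a design-for-power-efficiency culture pays off across the whole cost-of-ownership chain.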

I honestly don’t know by how much it would be possible to decrease server SoC power consumption. What I know for sure is that the Qualcomm design culture is to design for performance and power efficiency. Will they do better than Intel on server SoCs? Let’s say that Qualcomm is probably one of the very few companies able to take on this challenge…

From Eric Esteve from IPNEST


Also Read: Submerging the Data Center

More articles from Eric…


Supernovae and Safety
by Bernard Murphy on 02-05-2016 at 7:00 am

Whenever we push the bounds of reliability in any domain, we run into new potential sources of error. Perhaps not completely new, but rather concerns new to that domain. That’s the case for Single Event Upsets (SEUs) which are radiation-triggered bit-flips, and Single Event Transients (SETs) which are radiation-triggered pulses propagating in a circuit. These used to be important primarily for space-based electronics and devices operated close to nuclear reactors, but as circuit sizes shrink and expectations rise, they have also become a concern in safety-critical auto electronics.

SEUs and SETs can be triggered in multiple ways, through nuclear events and external electromagnetic events. Years ago, we worried about ionization caused by alpha-decay from isotopes in the lead in packaging (alpha particles have very short range, so any effect has to originate very close to the die). That source seems to be less of a concern now, either thanks to isotopic refinement or use of other materials. Aside from transients induced by lightning, the rest of the problem comes from cosmic rays; there is evidence that a significant percentage of these start in, or are accelerated through, supernovae, though the details are still in debate. The bulk of the flux incident on the earth is protons. These interact quickly in the atmosphere, in part converting to neutrons through electron capture or other mechanisms. Since neutrons have a low interaction cross section they can make it to ground level quite easily, where they can potentially disrupt electronics.

Neutrons have no charge so they can’t directly disrupt an electrical circuit, but they can collide elastically or inelastically with a nucleus. In an elastic collision, the target nucleus is knocked out of the lattice, ionizes in the disruption and can trigger an electrical event. In the inelastic case, the neutron can trigger fission, which then has electrical consequences. One significant source starts with ¹⁰B (boron, used in doping), which with the added neutron splits into ⁷Li (lithium) and an alpha particle. And then there are multiple inelastic reactions with silicon producing ionizing secondaries:

    ²⁸Si + n → ²⁸Si + n′ (inelastic scatter)
    ²⁸Si + n → ²⁷Al + d
    ²⁸Si + n → ²⁵Mg + α
    ²⁸Si + n → ²⁸Al + p
    ²⁸Si + n → ²⁷Al + p + n
    ²⁸Si + n → ²⁴Mg + n + α

Since these events are triggered in or near a transistor, electromagnetic impact is all but certain. Not that this happens very often. Matter looks largely empty to neutrons so a thin die is a negligible barrier (shielding neutron flux from reactors requires thick walls of lead). But the event rate isn’t zero. Xilinx estimated mean-time between failures due to SEU for one of their large Virtex devices to be over 600 years. There is nothing here unique to FPGAs, so put 100 devices together in a car and you could have a failure about every 6 years. Put a million of those cars on the road and you have a serious problem, especially given that average car lifetime these days is around 15 years. A surprising impact for something that started towards us from millions or billions of light-years away.

How do you fix this? You could make devices bigger so that the charge flux from a single event would be negligible, but that’s going in the wrong direction in modern device design. The alternative is redundancy with voting. Critical circuits have 3 copies; if one copy is disrupted, the probability of either of the other two being hit at the same time is minuscule (multiply the probabilities), so a vote on the outputs of the three copies has a very high probability of being correct, even in the presence of SEUs. Don Dingee has written recently and more extensively on this topic in SemiWiki. But you can’t use redundancy everywhere – the design would be huge. So you use redundancy only in the safety-critical sections. And there’s the rub, as Shakespeare might have said if he knew more about functional safety. When you start being selective, you can make mistakes. You need a backstop to catch those mistakes.
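
The voting scheme is simple to express in code; here is a minimal Python sketch of triple modular redundancy with a single modeled upset:

    # Minimal sketch of triple modular redundancy (TMR) with majority voting.
    import random

    def critical_logic(x):
        return x & 0x1               # stand-in for the protected circuit

    def tmr_eval(x, upset_copy=None):
        outputs = []
        for copy in range(3):
            out = critical_logic(x)
            if copy == upset_copy:   # model an SEU flipping this copy's output
                out ^= 1
            outputs.append(out)
        # Majority vote: correct as long as at most one copy is disrupted.
        return 1 if sum(outputs) >= 2 else 0

    x = 0b1011
    hit = random.choice([None, 0, 1, 2])  # at most one copy upset at a time
    assert tmr_eval(x, upset_copy=hit) == critical_logic(x)
    print("vote matches the good machine even with copy", hit, "upset")

The multiply-the-probabilities argument is visible here: defeating the vote requires two independent upsets in the same cycle, whose joint probability is the product of two already tiny numbers.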

The best way to do this, interestingly, is through fault simulation. Fault sim seemed to vanish from the design universe when DFT and ATPG took off, so it’s worth recapping how it works. You simulate the good machine (no faults), inject a fault, simulate the bad machine, and compare to find differences in behavior. Except in the SEU/SET case we’re no longer looking for manufacturing problems; we’re looking for “faults” which are bit flips or propagating pulses. And we’re no longer concerned with sorting good versus bad die. We’re looking at in-field problems and want to determine whether the redundancy logic adequately screens these out in safety-critical areas. This is a perfect application for fault simulation. But just as you don’t want to insert redundant logic everywhere, you don’t want to fault every single node. The safety verification team (separate from the design team) builds a “fault dictionary” listing all the nodes they want to test. Then the separation of design and verification (plus ISO 26262 process compliance/oversight) provides confidence that all safety-critical logic is indeed hardened against SEU/SET.
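
In miniature, that good-machine/bad-machine flow looks like the following Python sketch (a toy two-node netlist, not the actual tool flow): inject a bit flip at each dictionary node, rerun, and compare against the fault-free run.

    # Toy SEU fault simulation: run the good machine, inject a bit flip at
    # one node, rerun, and compare observable outputs.

    def simulate(stimulus, flip_node=None):
        a, b = stimulus
        n1 = a & b
        n2 = a ^ b
        if flip_node == "n1": n1 ^= 1   # inject the SEU "fault"
        if flip_node == "n2": n2 ^= 1
        return n1 | n2                  # observable output

    fault_dictionary = ["n1", "n2"]     # nodes the safety team listed
    stimuli = [(0, 0), (0, 1), (1, 0), (1, 1)]

    for node in fault_dictionary:
        escapes = [s for s in stimuli if simulate(s, node) != simulate(s)]
        if escapes:
            print(f"SEU at {node}: PROPAGATES for stimuli {escapes}")
        else:
            print(f"SEU at {node}: masked for all stimuli")

In a real safety flow the “escapes” list should be empty for every node protected by redundancy; any node where the flip still reaches an output is a hole in the hardening.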

Cadence has a comprehensive functional safety simulation solution in Incisive. They work directly with all players in the value chain from auto-makers through tier-1 and on down to component manufacturers, and are actively involved in further developing the safety standards. They told me that chapter 11 of ISO-26262 is in development and is expected to quantify several requirements that today are only described qualitatively. You can learn more about the Cadence safety simulation solution HERE.

More articles by Bernard…


Pathfinding to an Optimal Chip/Package/Board Implementation
by Tom Dillinger on 02-04-2016 at 4:00 pm

A new term has entered the vernacular of electronic design engineering: pathfinding. The complexity of the functionality to be integrated and the myriad of chip, package, and board technologies available make the implementation decision a daunting task. Pathfinding refers to the method by which the design space of technology options is reduced to a viable solution, evaluating system parameters and goals against different technical alternatives. The increasing momentum in (2.5D/3D) packaging technology has added significantly to the choices available for implementing and interconnecting the functional design, making pathfinding a prerequisite to implementation.

Pathfinding is much more intricate than simply partitioning functions between chips. Partitioning algorithms generally seek to minimize some objective by clustering system functions with “high affinity” together into distinct subsets for physical implementation. Conversely, pathfinding requires exploring the impact of implementation decisions across disparate physical domains — it encompasses a much larger space of chip/package/board design considerations:

  • chip macro floorplan, redistribution layer (RDL) routing, and I/O bump patterns
  • optimal package fan-out trace patterns
  • interposer (or TSV) via and RDL design, for multichip packaging options
  • board routing complexity

All these options have a direct bearing on product cost, to be sure. These implementation decisions must also ultimately satisfy signal integrity, power integrity, and thermal reliability requirements.

A major obstacle to enabling engineering teams to do pathfinding is that the physical and electrical data for chip, package, and board reside in different data formats and unique representations. Making implementation decisions across these design domains requires a single, consistent view, and the resulting project edits and revisions require appropriate communication of ECOs back to the original databases.

At the recent DesignCon 2016, I had an opportunity to meet John Park, Methodology Architect at Mentor Graphics, and learned how Mentor has addressed these difficult pathfinding issues in their recently announced Xpedition Package Integrator product. John emphasized that their approach was architected to utilize design data coming from various sources, and that the development team provided the interfaces necessary to incorporate the requisite information into Package Integrator.

In John’s words: “The approach we followed had to be EDA neutral.”

All system connectivity is maintained within Package Integrator while designers are pursuing their pathfinding optimizations, as highlighted in the figure below.


A key design data interface into Package Integrator provides a Virtual Die Model, an abstract of the detailed chip layout data suitable for pathfinding. For existing die, a simple abstract of area + pad array + pin name data will suffice. Yet, initial pathfinding will involve chip designs that are still fluid. In these cases, the Virtual Die Model includes the additional data necessary to evaluate chip-interposer-package implementation decisions. As illustrated in the figure below, chip floor plan data — e.g., macro placements, blockages — are required with the die bump pattern. Edits to the floor plan, chip RDL, bumps, and/or package traces are enabled in Package Integrator, to provide visual feedback as to routing congestion (and thus, cost).
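
The Virtual Die Model maps naturally onto a small data abstract. The following Python sketch is hypothetical (the field names are mine, not Mentor’s format), but it shows the kind of information such an abstract carries for both existing and in-flight die:

    # Hypothetical sketch of a Virtual Die Model abstract of the kind
    # described above; field names are illustrative, not Mentor's format.
    from dataclasses import dataclass, field

    @dataclass
    class Bump:
        name: str        # pin/net name, e.g. "DDR_DQ0"
        x_um: float
        y_um: float

    @dataclass
    class VirtualDieModel:
        die_name: str
        width_um: float
        height_um: float
        bumps: list = field(default_factory=list)
        # For in-flight designs, add the floorplan detail pathfinding needs:
        macros: list = field(default_factory=list)     # (name, x, y, w, h)
        blockages: list = field(default_factory=list)  # (x, y, w, h)

    # Existing die: area + pad array + pin names suffice.
    vdm = VirtualDieModel("soc_a", 8000.0, 8000.0)
    vdm.bumps.append(Bump("DDR_DQ0", 120.0, 150.0))
    # Fluid design: include macro placements for chip-package co-optimization.
    vdm.macros.append(("ddr_phy", 0.0, 0.0, 1500.0, 3000.0))
    print(vdm.die_name, len(vdm.bumps), "bumps,", len(vdm.macros), "macros")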


The figure below illustrates a specific example of using the Virtual Die Model to evaluate die and package implementation data in a single view. A die floor plan macro (in light blue) is depicted, with the corresponding bump pattern-to-BGA package ball placement and trace connectivity. Edits to the macro LEF placement are enabled, with any resulting proposed ECO’s maintained by the project management supervisor functionality within Package Integrator. These ECO’s can then be reviewed, approved and forwarded, or rejected.


These pathfinding edits must be rule-driven, to ensure a design requirement is not violated. For an example of a user-defined rule, John described the definition of a specific (+ve, -ve, IOVDD, GND) bump array configuration to be used for a differential signal — any edits to bump locations are verified to be consistent with the rules database. John also briefly highlighted examples of the pathfinding activity and rules checking associated with die-to-interposer via and bump assignment, as well.
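
Such a rule can be checked mechanically after every edit. A hedged Python sketch follows (my own hypothetical encoding of the differential-signal bump rule described above, not the actual Package Integrator rules database):

    # Hypothetical encoding of the differential-signal bump rule described
    # above; not the actual Package Integrator rules database.

    # Required roles in the bump array around a differential pair.
    REQUIRED_ROLES = {"+ve", "-ve", "IOVDD", "GND"}

    def check_diff_bump_array(bump_roles):
        """bump_roles: dict mapping (col, row) grid position -> role."""
        missing = REQUIRED_ROLES - set(bump_roles.values())
        if missing:
            return f"VIOLATION: pattern missing {sorted(missing)}"
        return "OK: bump array matches the differential-pair rule"

    edit = {(0, 0): "+ve", (1, 0): "-ve", (0, 1): "IOVDD", (1, 1): "GND"}
    print(check_diff_bump_array(edit))   # OK
    edit[(1, 1)] = "IOVDD"               # a bad manual bump edit
    print(check_diff_bump_array(edit))   # VIOLATION: missing ['GND']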

A pathfinding solution requires a view of all domains, as illustrated in the die/package/board example below.


The connectivity flightlines between die, packages, and board-level connectors illustrate how congestion can be alleviated by pathfinding decisions on pin assignment and placement. Specific pin-to-pin routes can be applied. If routed traces are available, an interface to EM model generation and signal integrity analysis is also provided in Package Integrator. Utilizing thermal models for the individual die, a computational fluid dynamics (CFD) thermal analysis of the full system is enabled as well.

The optimization of a product implementation across the physical boundaries of each die, (2.5D/3D) packages, and board requires a single view of the design data, with a corresponding connectivity model. This view must be agnostic to the source of the data, and must provide ECO project management. This complex co-design activity, aka pathfinding, must also support appropriate rules and constraints, verifying proposed edits within each domain.

Mentor Graphics has recently announced Xpedition Package Integrator to assist product architects and designers with the pathfinding task. They focused on developing data interfaces and model abstracts to be independent of the EDA source. They have successfully bridged the gap between the different physical domains, to enable key (and early) implementation optimizations.

-chipguy


Here is What the “Internet of Things” Will Do for Intelligent Transportation
by Raj Kosaraju on 02-04-2016 at 12:00 pm

The transportation sector is growing, and we can already see that a fleet of autonomous, shared vehicles – connected to the road infrastructure, to the Internet and to a broader network of public transit options – will create incredible value. The transport sector is trying its level best to improve the safety, reliability, and cost of transportation. And there is no doubt that, provided with better information and connectivity, it will do a lot more.


One thing is very clear: the more smart devices are used in transport, the fewer traffic, parking and vehicle issues are likely to occur. This is where the Internet of Things helps. It has not only taken the lead in the healthcare and automobile sectors, but has also had a big impact on transport. The Internet of Everything (IoE) promises to disrupt every aspect of our lives, and the transportation experience is no exception. IoT-enabled devices equipped with sensors are widely used in the transport sector.

However, the system as it exists today is downright scary! In 2015 DOT officials emphasized in reports that we need to spend $120 billion on highways and bridges between 2015 and 2020, while spending at all levels of government is just $83 billion; we need $43 billion for public transit, while it’s currently at a dismal $17 billion. Today, our road system scores a mere “D+” grade when compared to the rest of the world. (Beyond Traffic: The Blue Paper, Feb 2015)

1) Creating Rapid Strides
A whole new world is coming our way. Technology is allowing us to reimagine our future transportation system. It’s hard to be precise, but I think we’ll be cycling and walking more; in crowded urban areas we may see travellators – which we see in airports already – and more two-wheelers and scooters. It’s not difficult to predict how our transport infrastructure will look in 25 years’ time – it can take decades to construct a high-speed rail line or a motorway, so we know now what’s in store. Advances in connected automation, navigation, communication, robotics, and smart cities—coupled with a surge in transportation-related data—will dramatically change how we travel and deliver goods and services. Automation in the field of transportation is everywhere. We are going to see a LOT of multi-level roads and highways in the near future.

2) How IoT will affect traffic technology over the next 10-20 years
For the most part transportation will be a thing of the past, with most people working at home. Smart connectivity with existing networks and context-aware computation using network resources is an indispensable part of IoT. With the growing presence of WiFi and 4G-LTE wireless Internet access, the evolution towards ubiquitous information and communication networks is already evident. Moreover, individual vehicles, freight, and public transport can all be improved by online information and communication between drivers and a central information hub.

Intelligent transport systems for buses, trains, and passengers themselves will combine to make traveling easier and less stressful. One of the ways IoT can benefit the UK is through “Intelligent Transport Systems” (ITS). Ofcom is also worth mentioning here: its vision is a world where cars communicate with each other, making traveling from A to B “smoother and safer.” It is assumed these systems could be in place in the next 10-30 years. Many city centres in Europe are banning the private car, so there are now, and will be more, places that are free of traffic. Mobility – the speed and directness of travel – and the density of activities are the two determinants of a city’s accessibility and thus its economic vitality. Moving people faster and more directly in order to expand accessibility should be the primary mission of transport agencies.

3) To make Routes Safer and Transport more Reliable

Intelligent transport systems can make transport safer, more efficient and more sustainable by applying various information and communication technologies to all modes of passenger and freight transport. Moreover, the integration of existing technologies can create new services. ITS are key to supporting jobs and growth in the transport sector, but to be effective, the roll-out of ITS needs to be coherent and properly coordinated across the EU. Proponents also point to less air and noise pollution as a result, even talking about traffic-free zones in cities. Some companies say the technology could help cars take the shortest possible route, thereby reducing CO2 emissions. It could warn drivers about school zones, help them avoid busy areas with lots of children, and even cooperate with pedestrians’ mobile phones to alert drivers and help them navigate around hazards. Today, drivers themselves react to transport systems: warning signs on bridges, real-time information on digital alert boards.

4) Seeking Assistance from the Cloud

Transportation is an enormous issue for cities with regard to commerce, environmental impact, and quality of life. Within transportation, parking is an issue that affects everyone, from drivers to merchants to city governments. Both transportation and parking are quickly gaining traction as key priorities for cities as metropolitan area populations continue to increase and cities examine how to leverage technology to make their communities more liveable. There are already several apps that help drivers find parking spaces. But imagine a car that could identify an empty parking space as it passes by and then upload that information to the cloud. New and existing apps could then use the real-time data to improve alerts to drivers about open spaces nearby.

This new functionality could also help eliminate the time and resources we waste parking our cars. This is where vehicles and roadside units communicate through nodes in order to provide each other with information, such as safety warnings and traffic information. The information is then applied to solutions that road users can utilize, such as red-light warnings, automatic tolling, and routing and navigation information. The goal is to change the way people live through parking, a universal challenge that has seen little innovation in years. Several companies are helping cities utilize smart parking technology to realize the benefits of smarter transportation, including reduced traffic and emissions, more vibrant local economies, and better quality of life for citizens.

In closing,
Over the next few years, we see the Internet of Things becoming an integral part of our lives, whether through smart homes, smart cars, transportation or smart healthcare. It’s clear that the IoT will disrupt most industries. The transportation systems around which the modern world has been built are on the verge of a significant transformation. Intelligent transportation systems (ITS) are making driving and traffic management better and safer for everyone. New technology for on-road communications will dramatically change how vehicles operate and provide information and capabilities for better, real-time traffic management — if the necessary network infrastructure is in place.


Cadence Adds New Dimension to SoC Test Solution
by Pawan Fangaria on 02-04-2016 at 7:00 am

It takes lateral thinking to bring innovation to conventional solutions for age-old hard problems. While core logic design has evolved, adding multiple functions onto a chip now called an SoC, the structural composition of DFT (Design for Testability) has remained more or less the same, based on XOR compression, for a long time. Increasing SoC design sizes push up test logic, tester time and test resources, increasing test cost to the tune of billions of dollars. Alternatives that reduce such high test costs have long been awaited.

Typically, shorter scan chains require fewer clock cycles to shift test bits and thus reduce test time, but that leads to a far larger number of scan chains than scan pins. This requires decompression logic between the scan-in pins and the scan chains and, at the other end, compression logic between the scan chains and the scan-out pins; the connections between the scan chains and the compression/decompression logic also consume significant routing resources. Increasing the compression ratio (i.e. the number of scan chains divided by the number of scan pins) increases the compression/decompression logic, impacting die size and routing resources. Also, beyond a certain limit, a higher compression ratio no longer reduces test time and can adversely impact test coverage too.

Although a compression ratio of about 100x is seen as optimal with the current XOR compression architecture, it consumes a significant ~4% of total chip routing; that increases to 10% if the compression ratio is raised to 400x, which outweighs the saving in test cost. Do we have alternatives that achieve a higher compression ratio yet consume fewer routing resources and deliver higher test coverage?
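
To see why higher ratios are attractive at all, here is the first-order Python arithmetic (it ignores the pattern-count inflation and coverage loss that erode the gains at very high ratios):

    # First-order scan compression arithmetic; ignores the pattern-count
    # inflation and coverage loss that set in at very high ratios.
    import math

    flops = 2_000_000        # scan flops in the design (illustrative)
    scan_pins = 16           # scan-in channels available on the tester
    patterns = 10_000

    for ratio in (1, 100, 400):
        chains = scan_pins * ratio
        chain_len = math.ceil(flops / chains)   # shift cycles per pattern
        shift_cycles = chain_len * patterns
        print(f"{ratio:>4}x: {chains:>5} chains, chain length {chain_len:>6}, "
              f"~{shift_cycles / 1e6:.0f}M shift cycles")

Shift cycles, and hence tester time, fall roughly in proportion to the ratio, which is exactly why the routing cost of getting to 400x is the problem worth solving.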

Here comes Cadence with its new Modus™ Test Solution, with a physically-aware 2D Elastic Compression architecture integrated into the Cadence digital flow. This innovative solution has several patents pending.


The Modus Test Solution is integrated with Cadence’s Genus™ physical synthesis, Innovus™ P&R system, and Tempus™ timing signoff solution in a common environment, providing a seamless flow for digital designs that can achieve up to 2.6x less routing for compression logic and up to 3x shorter test time with the new 2D Elastic Compression technology, compared to conventional 1D XOR-based compression logic.

To learn more about this new dimension in test compression, I had a nice opportunity to talk with Paul Cunningham, VP of R&D at Cadence, responsible for the front-end digital design solution; earlier, Paul was co-founder and CEO of Azuro, which was acquired by Cadence in 2011.

In the 2D compression architecture, the compression logic forms a 2D grid across the chip. This allows the routing between the compression logic and the scan chains to be distributed evenly on the grid, requiring shorter wire lengths. With 2D compression, various types of designs including CPU, GPU, networking, DSP and automotive chips in the range of 1.3 to 2.5 million instances require the same wire length at a 400x compression ratio as they did at a 100x compression ratio with traditional 1D XOR compression.


Another innovation added to the compression technology is elasticity, achieved by embedding registers and feedback loops in the decompression logic. This allows care bits to be controlled sequentially across multiple scan cycles during ATPG (Automatic Test Pattern Generation), thus maintaining fault coverage at high levels. With 2D Elastic Compression, compression ratios beyond 400x can be achieved without loss of fault coverage, and test time can be reduced by up to 3x compared to traditional 1D XOR compression. The designs discussed above showed test time savings in the range of 1.6x to 3.6x, all with fault coverage above 99%, slightly higher than that achieved with 1D XOR compression.

Also, the Modus Test Solution allows automatic insertion of a single shared test access bus, enabling one MBIST controller to service multiple memories while keeping the CPU separate for higher performance.

The Cadence Modus Test Solution provides complete test features for scan, MBIST, logic BIST, ATPG, and self-diagnostics in a common environment with synthesis, implementation, and timing signoff, including debugging and scripting.

This new innovation in test solutions for SoCs is a solid opportunity for Cadence to improve its market share in the test automation business, currently dominated by Mentor’s TestKompress. There are good customer endorsements of this technology, which can be seen in Cadence’s press release on the Modus Test Solution HERE.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com