
Why is Jet-Lag Worse Flying East?
by Bernard Murphy on 08-18-2016 at 7:00 am

Anyone who travels long distances frequently is painfully familiar with this problem, but you may be wondering why I am mentioning it in this forum. The American Institute of Physics publishes Chaos, a journal on interdisciplinary problems in non-linear dynamics, which recently ran an article on just this topic.

There are cells in the brain's suprachiasmatic nucleus (SCN) which regulate the body's circadian rhythms, and they are therefore a candidate for the source of the problem. This rhythm is governed by oscillator cells which individually might oscillate at different frequencies, but which collectively tend to sync up to a period of around 24.5 hours in the absence of external influences.

University of Maryland researchers developed a non-linear mathematical model for a group of such oscillators, which they were able to reduce to a single equation to determine how the grouped oscillation responds to a sudden shift across multiple time-zones (which changes the daylight schedule, a principal driver of synchronization).

They discovered that it is significant that the natural period is slightly longer than a day. Remember this is a non-linear model, so small deviations can quickly amplify. In particular, they found that this 30-minute difference, combined with a west-to-east (but not east-to-west) jump across multiple time-zones, can significantly delay the oscillations re-synchronizing with the diurnal cycle at the destination.
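
To get a feel for the asymmetry, here is a minimal sketch, assuming a single effective oscillator with a 24.5-hour natural period entrained by a 24-hour forcing term. The coupling strength, tolerance and time-zone counts are invented constants for illustration; the authors' actual model reduces a whole population of coupled oscillators.

```python
import numpy as np

# Toy model (an illustration, not the paper's equations): one effective
# oscillator with a 24.5 h natural period, entrained by a 24 h light/dark
# forcing. A trip through N time zones is a sudden phase jump in the forcing.
T_BODY, T_DAY = 24.5, 24.0
omega, Omega = 2 * np.pi / T_BODY, 2 * np.pi / T_DAY
K = 0.05                                   # assumed coupling strength (rad/hour)

def days_to_resync(zones_east, dt=0.01, tol_hours=1.0):
    """Days until the body clock is within tol_hours of destination time."""
    phi0 = Omega * zones_east              # eastward: destination clock is ahead
    theta, t = 0.0, 0.0                    # entrained to origin before the flight
    while t < 24 * 60:                     # give up after 60 days
        theta += dt * (omega + K * np.sin(Omega * t + phi0 - theta))
        t += dt
        err = (theta - Omega * t - phi0 + np.pi) % (2 * np.pi) - np.pi
        if abs(err) < tol_hours * 2 * np.pi / T_DAY:
            return t / 24
    return float("inf")

for zones in (6, -6, 11, -11):             # positive = east, negative = west
    print(f"{zones:+3d} zones: ~{days_to_resync(zones):.1f} days to re-entrain")
```

Because the natural period is longer than a day, the toy clock re-entrains faster when it must delay (westward) than when it must advance (eastward), and the gap widens with the number of time zones crossed.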

So now you know. The reason you have terrible jet-lag is that your brain is chaotic. You can read more HERE.

More articles by Bernard…


Optimization and verification wins in IoT designs
by Don Dingee on 08-17-2016 at 4:00 pm

Designers tend to put tons of energy into pre-silicon verification of SoCs, with millions of dollars on the line if a piece of silicon fails due to a design flaw. Are programmable logic designers, particularly those working with an SoC such as the Xilinx Zynq, flirting with danger by not putting enough effort into verification? Continue reading “Optimization and verification wins in IoT designs”


Semiconductors negative in 2016, positive in 2017
by Bill Jewell on 08-17-2016 at 12:00 pm

Note: the table and text below have been revised from an earlier post to correct the numbers for STMicroelectronics.

Semiconductor companies posted a wide range of results in 2nd quarter 2016. Intel, Micron Technology and Renesas Electronics all had declines in revenue in 2Q 2016 versus 1Q 2016. Samsung Semiconductor, Qualcomm and SK Hynix had double digit revenue growth. MediaTek posted the strongest growth, up 33% (in Taiwan dollars) driven by smartphones. The 2Q 2016 weighted average revenue growth of the companies providing revenue guidance for 3Q 2016 was 2%. World Semiconductor Trade Statistics (WSTS) reported semiconductor market growth in 2Q 2016 was 1%. The companies below are the top semiconductor suppliers. Broadcom, which recently combined with Avago, should be in this group but will not release 2Q 2016 results until September 2.



The guidance provided by these companies for 3Q 2016 is mixed, but in general shows improved growth. Qualcomm's midpoint guidance was a 4% decline in revenue in 3Q 2016 compared to 2Q 2016, but the high end of its guidance was 2.6% growth. Renesas expects 3Q 2016 to be down 2% from 2Q 2016 in Japanese yen, but should show growth in U.S. dollars. Samsung Semiconductor and SK Hynix did not provide revenue guidance for 3Q 2016, but both companies expect solid demand growth in the quarter.

Intel, the largest semiconductor company, projects a 10% revenue increase in 3Q 2016 at the midpoint, with a 13.8% increase at the high end of guidance. Texas Instruments (TI), Micron Technology, and STMicroelectronics guided for revenue growth in the 5% to 6% range. However the high end of guidance for these three companies is in the 9% to 11% range. MediaTek expects 11.9% growth in Taiwan dollars. The weighted average in U.S. dollars of the companies providing guidance is 6% growth in 3Q 2016 from 2Q 2016.

Despite the strong outlook for 3Q 2016, the year 2016 semiconductor market will almost certainly decline from 2015. The first half 2016 semiconductor market was $157 billion, according to WSTS, down 6.4% from second half 2015 and down 5.8% from first half 2015. Second half 2016 would need to grow 13% from the first half to reach a 2016 market flat with 2015. Second half growth will probably be less than 10%, resulting in a 2016 decline. Recent forecasts for the 2016 semiconductor market range from a 3% decline (Gartner) to a 1% decline (IC Insights). Our forecast at Semiconductor Intelligence is a 2% decline compared to our May forecast of a 1% increase.
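
As a back-of-envelope check, the calculation can be reproduced from just the quoted WSTS figures (a sketch; the 10% second-half growth below is the stated upper bound, not a forecast):

```python
# Reproducing the arithmetic above from the quoted WSTS figures (in $B).
h1_2016 = 157.0
h2_2015 = h1_2016 / (1 - 0.064)     # 1H 2016 was down 6.4% from 2H 2015
h1_2015 = h1_2016 / (1 - 0.058)     # 1H 2016 was down 5.8% from 1H 2015
full_2015 = h1_2015 + h2_2015

flat_growth = (full_2015 - h1_2016) / h1_2016 - 1
print(f"2H16 growth needed for a flat 2016: {flat_growth:.1%}")   # ~13%

h2_2016 = h1_2016 * 1.10            # if 2H 2016 grows 10% over 1H 2016
print(f"2016 vs 2015 at +10% 2H growth: {(h1_2016 + h2_2016) / full_2015 - 1:.1%}")
```

Even at 10% half-over-half growth, 2016 lands roughly 1.4% below 2015, consistent with the forecast range cited above.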


The deterioration of the electronics and semiconductor markets in 2016 is reflected in the table below showing IDC’s forecasts from March 2016 and June 2016. The June 2016 IDC forecasts cut about 2 percentage points of growth (or added 2 percentage points of decline) for PCs, mobile phones and smart phones.


The expected healthy increase in the semiconductor market in the second half of 2016 will set up the 2017 market for growth. The current weakness in the global economy and electronics markets is expected to ease in 2017. The International Monetary Fund (IMF) July 2016 forecast calls for global GDP growth to improve from 3.1% in 2015 and 2016 to 3.4% in 2017. Recent forecasts for the 2017 semiconductor market are 2% from WSTS, 4.7% from Gartner and 6.1% from Mike Cowan. We at Semiconductor Intelligence have revised our 2017 forecast to 8.0% from our May forecast of 7.5%.


Design IP Growth Is Fueling 94% of EDA Expansion
by Eric Esteve on 08-17-2016 at 7:00 am

Last June the ESD Alliance (ESDA) released Q1 2016 results for EDA (CAE, PCB & MCM and IC Physical), Silicon IP (SIP) and Services. No surprise for SemiWiki readers: since 2013 the SIP category has been recognized as the largest, with $689 million in revenue for the quarter and a four-quarter moving average increasing by 11.6 percent. Total ESDA revenues increased by 4.3%, so I tried to dig more deeply into the various growth sources. SIP represents $688.7 million out of $1,962 million for the four categories, or 35% of the total. Let's do some simple computing:

The SIP growth contribution is 11.6% × 0.35 = 4.06 points of the total growth.

The total growth is 4.3%, so the SIP contribution is 4.06/4.30, or 94.4% of the four-quarter moving average increase of total ESDA revenues! Put another way, the EDA four-quarter moving average without SIP would have increased by only 0.24%. Not only is SIP the largest ESDA category, SIP is responsible for most of ESDA growth, fueling it at 94%.
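
Spelled out as code, this back-of-envelope computation looks like the sketch below (my restatement of the arithmetic, using the report figures quoted above):

```python
# SIP's contribution to total ESDA growth, per the figures quoted above.
total_rev, sip_rev = 1962.0, 688.7        # $M for the quarter
sip_growth, total_growth = 0.116, 0.043   # four-quarter moving averages

sip_weight = sip_rev / total_rev                     # ~0.35
sip_points = sip_growth * sip_weight                 # ~4.06 growth points
print(f"SIP share of total growth: {sip_points / total_growth:.1%}")  # ~94%
print(f"Growth without SIP:        {total_growth - sip_points:.2%}")  # ~0.24%
```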

Can we say that the future of EDA is IP? Certainly yes! But let's look at the consequences of this simple fact for Big 3 strategy, Design Automation Conference content and EDA analyst coverage…

The two most important trends in the semiconductor industry are consolidation (illustrated by recent M&A) and the huge increase in investment linked with SoC design for players who have to follow Moore's law, or at least design on the latest technology nodes, 16FF or below. As of today the transistor cost no longer declines when moving down (28nm to 16FF, or 16FF to 10FF) and overall NRE costs are exploding, but the race for integration is still rewarding. Samsung, Apple and Intel have to offer SoCs integrating more functions and more CPU cores to stay market leaders, and to provide more power-efficient ICs (not only faster and delivering more computing, but also lower power per MIPS). But the same R&D investment now supports far fewer SoC developments than before (fewer design starts).

When two semiconductor companies merge, the product portfolio will be optimized and redundancy eliminated, leading to fewer design starts as well. If you are a pure EDA vendor (not selling IP), you will sell fewer tools. True, the design tools supporting SoCs targeting advanced nodes are more complex and consequently more expensive. But if you look at the above curves for the "IC Physical" and "CAE" categories, revenue growth between 2007 (before the 2008 crisis) and 2016 has been very modest (less than 10% over the whole period). In other words, selling EDA tools doesn't bring you enough growth. That's why two of the Big 3, namely Cadence and Synopsys, have strongly developed their respective IP portfolios and are still investing, both by acquiring IP vendors and by steadily staffing up their IP groups.

I am sure you have a good question to ask: if semiconductor consolidation and increasing SoC development cost for advanced nodes both lead to fewer design starts, why should the SIP category grow as it did, moving from $280 million per quarter in 2007 to almost $700 million in 2015? It's a paradox!

Having analyzed this IP market since 2007, I know the answer is complex, and I will try to spell it out here. The first reason is the IP externalization trend. Since the mid-2000s chip makers have tended to focus on their core competencies and outsource the most complex functions, like SRAM, eFlash, CPU, GPU, DSP and SerDes, as well as the peripheral functions (interfaces like USB, PCIe, HDMI and SATA). This externalization "factor" is growing year after year, but only by 2% to 4% per year (a result from IPnest's analysis of the IP market since 2007).

There must be other reasons explaining a sustained 10% to 12% yearly growth since 2005!

SoC integration also has an important impact. When chip makers move from 40nm to 28nm, 28nm to 16FF and so on, they can develop chips integrating roughly 50% more transistors at every step. It has been shown that EDA tool improvement was not fast enough to support this higher integration capability, and that you can't extend the size of a design team indefinitely. IP reuse was part of the solution, and integrating more and more IP when developing on a smaller technology node is a matter of fact; this trend has been confirmed by analysts. When moving down by one technology node, the number of IP blocks per SoC grows by more than the design start count decreases, so the net result (number of IP × design start count) is higher. This is clearly another growth factor for the IP market.
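
A toy illustration of that net result, with deliberately invented numbers (not IPnest data):

```python
# Hypothetical node-to-node trend: design starts fall, but IP count per SoC
# rises faster, so total IP sockets (starts x IP per SoC) still grow.
nodes         = ["40nm", "28nm", "16FF"]
design_starts = [100, 80, 65]        # invented, declining
ip_per_soc    = [60, 90, 140]        # invented, rising faster

for node, starts, ip in zip(nodes, design_starts, ip_per_soc):
    print(f"{node}: {starts} starts x {ip} IP/SoC = {starts * ip:,} IP sockets")
```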

The last reason explaining IP market growth despite consolidation and rising development cost is quite subtle, but it shows up when reviewing the IP market by segment, especially the standards-based IP like USB, DDRn, PCIe and more. These IP products, controller and PHY, escape commoditization thanks to the constant release of new protocol versions (PCIe 4.0 after PCIe 3.0, for example). At every new release the product is more complex (the specification is more complicated) and faster. A PHY IP running at 16 Gbps sells at a much higher price than one running at 8 Gbps! Having reviewed this protocol-based IP market for almost 10 years, I can affirm that the license cost of a PCIe or USB IP product (PHY or controller) has grown constantly. If a PCIe 1.0 controller IP sold for $100K in 2006, the PCIe 3.0 version sells for at least 50% more, if not 100%.

I realize I have not mentioned another factor, very important for IP vendors with a royalty-based business model, like ARM or CEVA. Because chip cost tends to decrease year after year (for the same function), we don't realize how pervasive semiconductor products have become in every electronics segment. That simply means that hundreds of millions, if not billions, of new ICs integrating CPU, DSP and GPU IP are shipped every year. More products shipped in year N+1 simply means higher royalty-based revenues than in year N, and these revenues are accounted in the IP category.

Let's come to the important point. If the IP category is not only the largest part of the EDA market but also delivers most of the growth (94%!), why does the DAC committee seem to neglect this important segment? Just take a look at the DAC 2016 agenda and you will realize that the percentage of presentations focusing on IP is much lower than the IP market's weight in dollars. The good news is that Michael McNamara, VP of Adapt IP, will be the DAC 2017 chairman, so we can expect the next DAC agenda to be more "IP friendly"…

I just want to pass a message to analysts covering the EDA market: if it is confirmed that most EDA market growth is coming from the IP category (and I think it will be), it may be the right time to take a closer look at the IP market… ESL may be a very interesting concept, but it will not generate growth the way the IP market will!

Eric Esteve from IPNEST


Solido Saves Silicon with Six Sigma Simulation
by Tom Simon on 08-16-2016 at 4:00 pm

When pushing the boundaries of power and performance in leading-edge memory designs, yield is always an issue. The only way to ensure that memory chips will yield is through aggressive simulation, especially at process corners, to predict the effects of variation. In a recent video posted on the Solido website, John Barth of Invecas goes into detail about how they design and verify to maintain high yields. One of their challenges is to provide the lowest possible interface voltages while maintaining the internal voltages necessary for bit-cell operation. Their designs utilize dual power rails to achieve this, but this also creates a larger operational window for the design and complicates verification.

Their designs are primarily based on the following GLOBALFOUNDRIES processes: 22nm fully-depleted SOI, 14nm FinFET, and 7nm, which is in development. For a single bit cell in any of these processes, assuring high yield is fairly straightforward: if you want 99.865% reliability, in theory a 3-sigma analysis will suffice. However, a few problems arise for memory chip designers. For one, the distribution curves for semiconductor yields are not ideal bell curves; they have long tails that can skew ideal statistics. The even bigger problem is that chips like those John works on have 300 million bit cells, so even with a 1-in-300-million bit-cell failure rate, every chip is likely to fail.

To get orders of magnitude fewer failures, analysis is needed out to 6 sigma. At this level you can expect to see about 10 failures per ~10B cells. This is much more in the desirable range for a device with 300M instances of the cell in question.
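
The sigma-to-failure-rate conversion is easy to reproduce, assuming an ideal one-sided Gaussian tail (which, as noted above, real yield distributions only approximate):

```python
from scipy.stats import norm

n_cells = 300e6                      # bit cells per chip, per the article
for sigma in (3, 4, 5, 6):
    p_fail = norm.sf(sigma)          # one-sided tail probability
    print(f"{sigma} sigma: P(cell fails) = {p_fail:.2e}, "
          f"expected failing cells/chip = {p_fail * n_cells:.3g}")
```

At 3 sigma a 300M-cell chip expects hundreds of thousands of failing cells; at 6 sigma (about one failure per billion) the expectation drops to roughly 0.3 per chip.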

Monte Carlo simulation is the favored approach for ensuring yield across the range of variation expected for designs like the Invecas memory chips. With Monte Carlo simulation, huge numbers of simulations are run with varying process corners and variation parameters. The results provide a good look at performance under the bell curve. However, to get a better look at the troublesome long tail, truly enormous numbers of simulations are usually called for. Say you run between 100K and 1M simulations: as the illustration below shows, that is just getting into the tail, where the most important and interesting results sit.
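
A quick experiment shows why brute force falls short. Here a synthetic standard-normal "performance" metric stands in for a real simulation (a sketch, not a SPICE flow):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)   # 1M "simulations" of a normal metric
for sigma in (3, 4, 5, 6):
    hits = int(np.sum(samples > sigma))
    print(f"samples beyond {sigma} sigma: {hits:>5} of 1,000,000")
```

Expect roughly 1,350 hits at 3 sigma, around 30 at 4 sigma, and essentially zero at 5 and 6 sigma, which is exactly the region a high-sigma memory design cares about.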

Solido has a clever solution to this problem that gives designers access to simulation results well into the tail without having to run millions or billions of simulations. With Solido's High-Sigma Monte Carlo, the parameters for a large number of potential simulation runs are generated, and a subset of these is run based on a preselection criterion. The results of these simulations are used intelligently by the Solido software to further select and refine the specific samples that need to be run to populate the tail selectively. A feedback loop verifies that the predicted ordering is correct.

The net result is that the most interesting part of the distribution curve – the tail – is thoroughly explored without having to brute-force simulate all the samples. Going back to John's Invecas example, he generated 623M samples in order to get to 5.5 sigma, but only needed to actually simulate fewer than 13K of them to obtain useful results. Had he instead extrapolated from the median results, he would have been off by a significant margin.

The nice thing about Solido's approach is that it is self-verifying, in part because it is based on true Monte Carlo sampling. The solution is feasible on small and large designs, supporting up to thousands of active devices. Solido's High-Sigma Monte Carlo can be used with pretty much all of the available SPICE simulators, including fast-SPICE simulators; a partial list includes HSPICE, FineSim, APS, Spectre, BDA, Eldo and GoldenGate. Their recent growth and acceptance at many major semiconductor companies is testament to the value of this unique approach and solution.

Hearing a discussion of a real-life case is pretty interesting. If you want to see the entire talk by John Barth of Invecas, you can find it here on the Solido website.


Are Your Transistor Models Good Enough?
by Daniel Payne on 08-16-2016 at 12:00 pm

SoC designers can now capture their design ideas in high-level languages like C and SystemC, then synthesize those abstractions down into RTL code or gates. In the end, however, the physical IC is implemented using cell libraries made up of transistors. Circuit designers use simulation tools like SPICE on these transistor-level netlists to ensure that the specifications for timing, power and standby current are met across process, voltage and temperature (PVT) corners, accounting for process variation effects. This all raises the bigger question: "Are your transistor models good enough?" If your transistor models are not accurate enough, then SPICE simulation will give you inaccurate numbers, and your next chip may fail to be competitive or simply not work at all.

Fortunately we have an EDA industry where such critical questions about transistor models are being addressed. There are four major tasks that should be considered when discussing transistor model accuracy:

  • Device model extraction
  • QA of the models versus silicon
  • Cell circuit validation
  • Design oriented model characterization

In the past you may have used four separate tools for each task listed, maybe from different vendors or even your own custom scripts. Today it’s possible to use a single SPICE modeling platform called MeQLab provided by Platform Design Automation that spans all four tasks.

Related blog – A Brief History of Platform Design Automation

With the MeQLab tool there are five types of engineers that will benefit from using it to get their specific questions answered:

  • Foundry modeling engineers – need to generate new models: device, corner, statistical, mismatch. Circuit level verification also needs to be performed.
  • Foundry model QA engineers – have the task of verifying each SPICE model library then produce a model report for review.
  • Design house foundry engineers – will double check that the foundry SPICE models are ready for use by circuit designers.
  • Circuit design engineers – they need to optimize either the process or a specific IP block to get the right performance for a given process node.
  • Researchers – looking into device-level research topics requiring data analytics, model extraction, QA, or process and device optimization.

Device model extraction is the mathematical process of measuring silicon wafer characteristics (current, voltage, resistance, capacitance, charge), then creating industry-standard models used for IC devices: MOS, SOI, BJT, Diode, Resistor, Capacitor, Varactor or user-defined circuit. All of the latest FinFET models and BSIM6 are supported, so it fits into your existing EDA tool flow. Wafer Acceptance Test (WAT) data can be viewed graphically or numerically, corner models are automatically generated, Length of Diffusion (LOD) models generated, plus both HSPICE and Spectre models are generated. Here’s a screen shot of model parameters fitted to curves based on measurements:
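
At its core, that fitting step is parameter optimization: adjust model parameters until the simulated I-V curve matches measurement. Here is a minimal sketch with a textbook square-law MOSFET and invented data points (production extraction targets BSIM/FinFET models with far more parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

# Square-law saturation current: Id = 0.5 * k * (Vgs - Vth)^2 for Vgs > Vth
def id_sat(vgs, k, vth):
    return 0.5 * k * np.maximum(vgs - vth, 0.0) ** 2

vgs_meas = np.array([0.5, 0.7, 0.9, 1.1, 1.3])              # V (invented data)
id_meas = np.array([3.4, 19.0, 46.0, 85.0, 136.0]) * 1e-6   # A (invented data)

(k_fit, vth_fit), _ = curve_fit(id_sat, vgs_meas, id_meas, p0=(1e-4, 0.3))
print(f"extracted k = {k_fit:.3e} A/V^2, Vth = {vth_fit:.3f} V")
```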

Statistical modeling is a popular approach that lets circuit designers simulate their transistor-level IP to see how it will perform across process variations, although creating accurate statistical models can be quite time-consuming, even on modern CPUs. With the MeQLab approach, an internal SPICE simulation produces statistical models 100X faster than a traditional SPICE tool, because Platform DA uses its own machine-learning algorithm to achieve the simulation speeds needed to create statistical and mismatch models.


IDlin of NMOS vs. PMOS

With smaller process nodes we've learned to accept more corner models to account for new physical effects. To cover both global and local variations, you can start with a 5-corner model library and then have an 11-corner model library automatically generated. Having a Quality Assurance (QA) verification process ensures that your models will be free of data issues like:

  • Fitting errors
  • Scalability
  • Crossing data
  • Kinks
  • Out of bounds
  • Not smooth enough

Another time-saving feature is the ability to quickly compare two model libraries, say an HSPICE model library versus a Spectre one. Or perhaps you have versions 1.0 and 1.1 of the same model library and want to know exactly what has changed, so that you can better understand the impact on your existing IP blocks.

A final type of QA that comes with MeQLab is the ability to use your existing PDK (Process Design Kit) files and then automatically run comparison circuits between pre-layout and post-layout simulations using just a few mouse clicks, saving you valuable setup and analysis time from doing a manual comparison.

If your company designs for high voltage environments like automotive or industrial then it’s good to know that MeQLab supports the two most popular models, HiSIM_HV and the proprietary Level 66/101 HV models for use in commercial circuit simulators like HSPICE or Spectre.

Memory designers working on SRAM will save time by using the automation features to view butterfly curves, Static Noise Margin (SNM) and Write Noise Margin (WNM), plus get a statistical analysis of their SRAM cell performance:

If you need to perform noise characterization, then the MeQLab software can be connected to a specialized hardware testing platform called the NC300, giving you noise modeling and statistical/corner noise modeling. For VCO circuits you can even analyze the effects of 1/f noise.

Design or process optimization is an important technique for differentiating your silicon IP from competitors', so there's a neat optimizer feature in MeQLab where you can specify a circuit performance as your target, then optimize model parameters together with circuit or process parameters. Here's a snapshot of optimizing a delay time versus Vsupply across different temperatures:

Bright Power Semiconductor
Located in Shanghai is a virtual IDM called Bright Power Semiconductor that does AMS IC designs for LED lighting, motor drive products and home appliance products. Here’s what they have to say about using MeQLab:

With the unique features above and complete modeling offerings on a single platform, MeQLab has been quickly adopted by the industry's leading foundries, design companies and IDMs. It not only helps foundries generate SPICE models faster and with better quality, it also helps design companies improve margins and better protect circuit IP. Bright Power Semiconductor, a leading chip vendor for LED lighting products, has adopted MeQLab for optimizing foundry models and customizing models for its own process:

"Foundry models address a wide range of applications, so accuracy is balanced across device geometries and bias conditions. Our designers always need to optimize foundry models to better suit our design needs, but device modeling was a challenging task for us until we adopted MeQLab. It addresses all our modeling needs, including circuit validation, and with its built-in high-voltage device modeling knowledge the work has become much easier. The PDA team has also provided excellent support and training sessions so our designers could quickly get up to speed with device modeling. We are pleased that we chose MeQLab as our modeling platform."

Tom Zhang, Director of Process Development

Summary
Circuit designers, foundry modeling engineers, model QA engineers, design house foundry interface engineers and researchers can all benefit from using a tool like MeQLab to perform device model extraction, QA, cell circuit validation and design-oriented model characterization. Instead of piecing together dissimilar software and scripts, why not use something commercially available that was built for the purpose?


Real Artificial Neurons
by Bernard Murphy on 08-16-2016 at 7:00 am

Neural nets are a hot topic these days and encourage us to think of solutions to complex tasks like image recognition in terms of how the human brain handles that task. But our model today for this neuromorphic computing is several steps removed from how neurons actually work. We’re still using conventional digital computation at the heart of our models, albeit in a non-algorithmic way, training the system to encode weights and thresholds for feature recognition.

So other than in this abstract sense, we can't claim our neuron models are faithful representations. This may not be just semantic hair-splitting; the difference may matter for how energy-efficient and robust neuromorphic models can be. Researchers at IBM recently announced that they have constructed artificial neurons which more closely mimic the characteristics of real neurons and which should show improvements in both these areas.

To improve the energy component, they exploited a well-known trick: hardware is more energy-efficient than software. But these guys dug deep when thinking of hardware, all the way down to material physics, specifically phase-change materials. Use of phase-change effects has already been mentioned on this site in the context of fast memories developed by IBM. The researchers in this case are also from IBM, but they are applying the ability of these materials to switch between amorphous and crystalline states in a different way.

The base material in both cases is a germanium antimony telluride alloy, which is encouraging because IBM has already demonstrated enough process expertise with this material to build a 64k-bit memory. In the memory case the goal had been to build (3-state) bit cells, but for neurons they are using the phase transition in a more analog fashion. As input currents progressively flow through a device, it switches from the amorphous phase to the crystalline phase, in effect integrating those currents over time.

This transition, when it happens, is sudden and is coupled with a corresponding jump in conductance which can be observed electrically or optically, after which a neuron must be reset with a higher-voltage pulse. This is very similar to the way biological neurons integrate and fire. Energy per neuron update has been measured at ~5 picojoules, although the firing rate is disappointingly low at ~100Hz (presumably gated by the time to melt back to the amorphous phase).

These neurons have another interesting property: firing is stochastic (not completely predictable), because the crystallization and melting stages necessarily cycle through random configurations. This stochastic property is thought to be important in real neurons as a way to avoid getting stuck in local minima (think of simulated annealing, for example) and therefore to provide robustness in conclusions, especially in ensembles of neurons. However, little has been reported in this direction so far; the research report is quite recent (publication date May 2016).
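
For intuition, here is a generic stochastic integrate-and-fire sketch. It is purely illustrative: the IBM devices integrate charge in the phase configuration of a chalcogenide film, and the constants below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
V_TH, LEAK, JITTER = 1.0, 0.02, 0.08   # invented threshold/leak/noise levels

def spike_count(i_in, steps=2000):
    """Leaky integrator with a noisy firing threshold, reset on each spike."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += i_in - LEAK * v                     # integrate input, leak charge
        if v >= V_TH + rng.normal(0.0, JITTER):  # stochastic firing threshold
            spikes += 1
            v = 0.0                              # reset (cf. the melt-back pulse)
    return spikes

for i_in in (0.015, 0.02, 0.03):
    print(f"input {i_in}: {spike_count(i_in)} spikes in 2000 steps")
```

Each run produces a different spike train for the same input, yet the average firing rate still tracks the input level; that is the ensemble-robustness argument above.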

You can read a summary of the research HERE and the more detailed paper (paid access) HERE.

More articles by Bernard…


Semi execs look at IoT tradeoffs a bit differently
by Don Dingee on 08-15-2016 at 4:00 pm

What happens when you get a panel of four executives together with an industry-leading journalist to discuss tradeoffs in IoT designs? After the obligatory introductions, Ed Sperling took this group into questions on power, performance, and integration. Continue reading “Semi execs look at IoT tradeoffs a bit differently”


Rigid-Flex Cabling is Cool! (and requires unique EDA support)
by Tom Dillinger on 08-15-2016 at 10:00 am

The three F's of electronic product development are: form, fit, and function. Although the F/F/F assessment typically refers to the selection of the right component, it most definitely also refers to the selection of the proper cabling between assemblies. The requirements for cables are varied and demanding: the ability to provide optimal signal transmission impedance and shielding, flexibility, light weight, resistance to vibration, reliability and material stability (especially when subjected to environmental extremes and repetitive flex motion), ease of manufacturing assembly, and compatibility with the constraints of the product enclosure (e.g., suitable for difficult contours, minimal perturbation to airflow, etc.).

For over 60 years (longer than Moore's Law), flexible flat cables (FFCs) have been an attractive solution to the F/F/F demands for cabling. In its most basic implementation, the FFC is a composite of polyimide layers to which a set of metal wire connections has been adhesively bonded. A coverlay is bonded over the core layers to protect the metal; the adhesive coverlay can be tinted, giving the FFC its characteristic amber or green color. The ends of the wires are exposed and tinned for insertion into a connector, and a stiffener layer is typically added at the ends of the cable to improve the assembly process. FFCs have been a fundamental enabler for the computer and consumer electronics industries. (FFCs have been to the moon, as part of NASA's F/F/F strategy.)

Much as the IC industry has evolved to support an increasing number of signal interconnect layers, so too has the flexible cable technology evolved. Multiple polyimide and patterned metal layers in the composite are used to create complex interconnect topologies. Copper-plated through vias connect signals on different layers of the multi-sided flex cable. The multi-layer cable requires metal patterning/etching/via drilling/plating — these designs are often denoted as flexible printed circuits (FPC), due to their similarity to PCB manufacture.

Fast forward to the dramatic increase in the adoption of mobile and wearable products over the past decade. Product designs increasingly stress minimizing volume and weight, simplifying manufacturing assembly, increasing product reliability and (to be sure) reducing final cost, all critical product goals.

The FFC or FPC inserted into sub-assembly connectors no longer satisfies these requirements. An alternative implementation has emerged, and is now the preferred approach for a vast range of consumer, automotive, medical, and mil-aero applications — the rigid-flex assembly.

A rigid-flex design includes multiple PCB substrates for mounting components and connectors, and attachment to the product chassis/enclosure. The key differentiating feature for rigid-flex technology is the lamination of and electrical connectivity to a set of flex layers within the PCB substrate, as illustrated in the figure below. (Note that this example also illustrates the attachment of surface mount components to the flex cable, as well.)

I recently had the pleasure of chatting with David Wiens, Xpedition Product Manager at Mentor Graphics, about the tremendous growth in the rigid-flex market, and specifically about the unique EDA tool requirements associated with this technology. I have to confess that I didn't appreciate the difficulty in developing a rigid-flex assembly; David opened my eyes to the intricacies, and to how features in the recent Xpedition Enterprise release have accelerated rigid-flex design productivity and quality.

Dave highlighted, “Rigid-flex design requires managing the complexity of the assembly across both electrical and mechanical domains. Designers need to work seamlessly across both 2D and 3D domains, to complete the physical (2D) implementation while maintaining a 3D flex perspective.” The figure below is a screenshot example from Xpedition, showing the two visual representations side-by-side.

Another example of complexity management that Dave pointed out was the unique support required for maintaining multiple stack-up cross-sectional definitions for the various rigid board and standalone flex regions.



"The Xpedition design environment is integrated with our HyperLynx tool for signal and power integrity analysis. That collaboration is important, to ensure that the electrical characteristics of these complex stack-ups and material interfaces are extracted and simulated accurately," Dave emphasized.

An Xpedition Enterprise feature that is an absolute must for rigid-flex is the capability to define and exercise structural rules to apply to both the folded and unfolded assembly data. The rigid-flex manufacturer and design team will collaborate on rules for:

  • component placement and folded clearance
  • flex bend geometries, both for static bending and (especially) any repetitive dynamic flex motion
    • no changing wire direction or vias present in the "flex zone", to eliminate micro-cracking
  • smoothness of the bend
  • the requirements at the rigid-flex interface for a stiffener
    • no vias close to the stiffener

The 3D data representation enables accurate and thorough examination of these manufacturability rules. "Designers no longer have to build physical mock-ups of their assembly concept and try to visualize rule violations," Dave highlighted.

Another unique requirement of flex design that Xpedition supports is the data representation for vias and wire arcs.

Dave informed me, “Metal arcs require true curve data formatting — attempts to represent the arc as a set of vectors will increase the risk of micro-cracking. Similarly, vias and T-junctions require teardrop traces around the geometry. Solid geometries require cross-hatching (with specific rules for hatching orientation in a flex zone). The automated layout features of Xpedition Enterprise have been enhanced to handle these unique requirements.”

As the rigid-flex assembly is linked to the product enclosure, electrical-mechanical co-design is required. Xpedition interfaces with mechanical MCAD design tools, using the STEP/ProSTEP data interchange format. Mentor has expanded the (de facto standard) ODB++ manufacturing data release format to include the necessary design constructs.

The emergence of new applications emphasizing product volume, flexibility, manufacturability, and cost has been enabled by advances in rigid-flex technologies. The latest release of Mentor’s Xpedition Enterprise provides support to visualize and analyze the complexity of the 2D unfolded and 3D folded assembly. The collaboration with HyperLynx enables accurate SI/PI analysis. The rules coding and checking support enables designers to ensure the manufacturability of intricate topologies. The data interfaces ensure consistency and collaboration with mechanical product co-design.

Much as Moore's Law has described IC fabrication evolution over the past 50 years, flexible cable/assembly technology has tracked the advances in electronic products. It will be enlightening to continue to follow the development of flexible electronics.

If a rigid-flex design is appropriate for your product, I would encourage you to check out the following link, with information on Mentor Xpedition Enterprise.

-chipguy


Catching low-power simulation bugs earlier and faster
by Daniel Payne on 08-15-2016 at 7:00 am

I've owned and used many generations of cell phones, starting back in the 1980s with the Motorola DynaTAC, and the biggest usability factor has always been battery life: how many hours of standby time will this phone provide, and how many minutes of actual talk time before the battery needs to be recharged? My smart phone today is the Samsung Galaxy Note 4, and I can make it through an entire business day before a recharge, so that's real progress on usability and battery life.

Engineers love to model, simulate and predict the battery usage of consumer devices like our popular smart phones. At the other end of the power spectrum are electronic products like servers, where the cost to keep the racks cool contributes to the operating costs in a big way. What’s the best methodology to design and debug for low-power then?

Hardware designers have been using RTL coding for many years, so that’s a well understood methodology. UPF is being adopted more frequently now to define the power architecture, but how does that fit into an EDA tool flow exactly?

To learn more about catching low-power simulation bugs earlier and faster, there's a 60-minute webinar on August 31 from EDA vendor Synopsys. The specific EDA tool is called Verdi, and its power-aware features will be introduced, covering:

  • How visualization of the power architecture can help identify power strategy and connectivity issues upfront
  • How to use annotated power intent on source code, schematics and waveforms to rapidly root-cause power-related errors back to UPF/RTL
  • How to debug unexpected design behavior, such as Xs caused by incorrect power-up/down sequences

The two speakers from Synopsys at this webinar are Vaishnav Gorur and Archie Feng, members of the verification group.


Vaishnav Gorur is currently Staff Product Marketing Manager for debug products in the Verification Group at Synopsys. He has more than a decade of experience in the semiconductor and EDA industry, with roles spanning IC design, field applications, technical sales and marketing. Prior to joining Synopsys, Vaishnav worked at Silicon Graphics, MIPS Technologies and Real Intent. He has a Masters degree in Computer Engineering from the University of Wisconsin, Madison and is currently pursuing an M.B.A. at the University of California, Berkeley.

Archie Feng is currently a Corporate Applications Engineer for debug products in the Verification Group at Synopsys. He has more than 15 years of experience in IC design and the EDA industry. Prior to joining Synopsys, Archie was an ASIC designer at the Industrial Technology Research Institute of Taiwan and has held positions in software design, applications engineering and product marketing at SpringSoft. He has a Bachelors degree in Engineering Science from National Cheng-Kung University and a Masters in Computer Science from National Chung-Cheng University in Taiwan.

REGISTER ONLINE

This webinar is the second of four planned on verification; you may watch the archived first webinar in the series: