
What Makes A Designer’s Day? A Bottleneck Solved!

by Pawan Fangaria on 12-04-2013 at 3:00 pm

In an environment of SoCs with tough targets of multiple functions, the smallest size, the lowest power and the fastest performance, all to be achieved within a limited design-cycle window to meet rigid time-to-market requirements, any day spent without progress is very frustrating for a designer. This is especially true around tape-out, when the actual layout is ready and every day spent debugging and resolving issues feels like climbing a mountain peak. In semiconductor design, the real proof of a design arrives only with the actual layout, when parasitic elements such as resistances and capacitances often dominate performance. Fixing parasitic issues at that stage is a daunting and time-consuming task, because a fix in one place can cause side effects elsewhere in the design.


An efficient methodology, and tools to quickly analyse and fix these issues, can be a boon for designers. I’m pleased to see the webinar “Parasitic Debugging made easy” on Dec 11th, organized by Concept Engineering, which provides a great amount of detail about their tools and methodologies for quickly debugging parasitic effects in various contexts. I believe this can make a designer’s job a lot easier when analysing and fixing issues post layout.

From the postings on their website, I see several capabilities at the schematic and SPICE netlist level that can be used easily, efficiently and effectively to analyse, debug, re-arrange and visualize different parts of a design, and also to integrate the design with other designs that may have been developed using other industry-standard EDA tools.

Parasitic networks (can be in any format such as SPEF, DSPF or RSPF) can be analysed and SPICE netlists created for critical path simulation. In the SPICE circuit, identified parasitic structures can be turned ON and OFF for better understanding of CMOS function.
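To make the idea of turning parasitic structures on and off concrete, here is a small hypothetical Python sketch (this is not Concept Engineering’s tool; the netlist fragment and naming conventions are invented for illustration). It strips parasitic capacitors and shorts parasitic resistors in a DSPF-style SPICE fragment, producing an “ideal” netlist to compare against the extracted one:

```python
# Hypothetical sketch (not Concept Engineering's tool): strip parasitic R/C
# elements from a DSPF-style SPICE fragment to mimic turning parasitics off.

PARASITIC_NETLIST = """\
* extracted path with parasitics (invented example)
R1 net1 net1_2 12.5
C1 net1_2 0 0.8f
R2 net1_2 net2 9.1
C2 net2 0 1.1f
M1 out net2 vss vss nch W=1u L=0.1u
"""

def strip_parasitics(netlist):
    """Return the netlist with parasitic resistors shorted (0 ohms) and
    parasitic capacitors removed, for a quick 'parasitics off' comparison."""
    kept = []
    for line in netlist.splitlines():
        card = line.split()
        if not card or line.startswith('*'):
            kept.append(line)               # keep comments and blank lines
        elif card[0].startswith('C'):
            continue                        # drop parasitic capacitors
        elif card[0].startswith('R'):
            # short the resistor: same nodes, zero-ohm element
            kept.append(f"{card[0]} {card[1]} {card[2]} 0")
        else:
            kept.append(line)               # keep active devices as-is
    return "\n".join(kept)

ideal = strip_parasitics(PARASITIC_NETLIST)
print(ideal)
```

A real tool would of course do this on the circuit database rather than by text substitution, but the before/after comparison is the same idea.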

Important parts of the circuit can be navigated, extracted and saved as SPICE netlists that can be re-used as IP or for other external purposes. Schematics can be rendered on-the-fly from SPICE-level netlists to easily understand the functioning of the circuit.

The design environment supports ‘Drag & Drop’ of selected elements between all design views, including source code, parasitic window, logic cone and schematic. This provides finer and quicker cross-probing and debugging.

Other features that complete the design environment include automatic creation of symbols and schematics from SPICE netlists, ERC checking and debugging, export of schematics into Cadence Virtuoso, and SPICE support compatible with the many SPICE dialects and simulation tools across the semiconductor design industry.

Another important item to be discussed is how to extend the functionality of SpiceVision (Concept Engineering’s interactive visualization tool for analysing and debugging SPICE models and circuits) according to project needs, by interfacing with the open database through Tcl scripts.

A detailed notice about the program of the webinar is posted on the SemiWiki website here. It’s a free one-hour webinar on Dec 11th, and that hour is worth spending to gain valuable insight into how to avoid wasting time and instead use it productively to fix crucial parasitic issues post layout.

Register Here
Happy attending!!

More Articles by Pawan Fangaria…..



3D: Atlanta and Burlingame

by Paul McLellan on 12-04-2013 at 12:44 pm

Two conferences on 3D, one just over and one coming up next week. The one just over was hosted by Georgia Tech, the 3rd Annual Global Interposer Technology Workshop (GIT). I wasn’t there but my ex-colleague from VLSI Technology, Herb Reiter, was. Herb has become very much associated with all things 3D since he led the working group created by GSA to coordinate their activities. He has written up his “trip report” over on the EETimes website. The big conclusion: “In a mega-panel moderated by Matt Nowak, a 3D stacking expert at Qualcomm, more than a dozen experts discussed a few technical and many business challenges related to interposers. They concluded the technology is ready but we need lower costs.”

I wrote earlier in the year about Xilinx and Micron who are using interposer and 3D designs. For now these are limited to very high priced devices that can absorb these costs. But if More than Moore is going to be an alternative to the huge number of process steps required for 10nm and 7nm (I’m assuming EUV won’t work well enough, but your opinion may vary) then it has to be economic not just for the highest price chips.

Herb’s overall conclusion: “This and similar data points during the workshop confirmed my fears that in many companies, management as well as system and IC designers still see interposer technology as a topic only for the packaging experts and of no or very limited impact on their own work or their company’s future. I believe risky projects and new technologies such as 3D chip stacking have to be sold to high-level management with their benefits to the business justified extensively.”

Coming up next week is the 10th Annual 3D Architectures for Semiconductor Integration and Packaging conference at the Hyatt Regency in Burlingame on December 11th to 13th. I will be going to that one and there will be blog(s) about it to enjoy with your holiday eggnog.

The opening keynote is by Doug Yu of TSMC’s integrated interconnect and packaging division on Wafer Scale System Integration Technology. As you may know, TSMC has a 3D technology they call CoWoS (Chip on Wafer on Substrate) which I assume will be one of the topics he will cover.

The second-day keynote is by Kaivan Karimi of Freescale on Role of Advanced 3-D Packaging Architectures in Support of the Internet of Things (IoT) Edge/sensing Node devices. The IoT is for sure going to be cost-sensitive, so it will be interesting to hear his opinion on whether we are on track to have costs coming into line.

Herb’s report is here. Details on 3D ASIP are here.


More articles by Paul McLellan…


A Brief History of DSP…Not By Any of Us

by Paul McLellan on 12-04-2013 at 11:35 am

I came across an interesting article by Will Strauss which is pretty much the history of DSP in communication chips. Having lived through the early part of the history while I was at VLSI Technology I found it especially interesting.

At VLSI, our first GSM chipset (2G, i.e. a digital rather than analog air interface) was a 5-chip set. The DSP functionality was all hard-coded, since I don’t think there were any licensable DSP cores in that era with enough power. As usual when a new communications standard is first implemented (think LTE today, or gigabit Ethernet if that is your thing), it is right on the edge of what is implementable in the silicon technology of the day. Nobody wants a standard that is lower performance than necessary, since it will inevitably be superseded by a higher-performance standard as silicon performance improves. After all, we’d all love terabit downloads to our smartphones if we knew how to build them; it is not the standardization process that is the limiting factor, it is mostly just raw semiconductor technology performance (and power). I am not even sure the DSP functionality was coded in RTL; that was the era when synthesis was just going mainstream. It might still have been schematics.

This 5-chip chipset was reduced to two chips and then to one. But the tradeoffs between hardware and software were changing. It was attractive to use a programmable (in assembly) DSP core instead of writing RTL (and especially instead of drawing schematics!). VLSI licensed Pine and Oak from DSP Group, one of the ancestors of today’s CEVA. DSP Group had originally developed their own DSP cores primarily, I believe, for building tapeless telephone answering machines (remember them?). But they also licensed the cores to other companies, following the model that ARM had started to use. Later VLSI also licensed the higher performance Teak core for CDMA chipsets (which required about 100 MIPS of DSP performance, much more than GSM).


Today, nobody in their right mind would consider building a state-of-the-art LTE modem by sitting down with a text editor and writing RTL. The only approaches are to build a DSP (as Qualcomm and nVidia have done) or license one (as Samsung and Mediatek have done). CEVA are the market leaders in DSP in general, and DSP for communications SoCs in particular. Another change is that nobody in their right mind would think of programming a modern DSP in assembly. This is not your father’s DSP: it is a multiple-issue VLIW instruction set with dozens of MACs and other vector units all operating in parallel. There is no way to program something like that by hand. Creating the software toolchain for a DSP (compilers, debuggers, models etc.) is just as important as creating the netlist.

Will Strauss’s article covers the same process we went through at VLSI and brings it up to the present day. One big trend in smartphone SoCs is the move to integrate the modem (and thus the DSP) onto the application processor, which also makes it available for specialist non-modem functionality such as image processing. Apple is currently going against this trend: the Ax SoCs are designed to work with Qualcomm modems. But the Qualcomm Snapdragon chips have both functions integrated, as does Mediatek, and the low-end smartphone market largely uses these chips.

But to emphasize just how daunting it is to build a state-of-the-art modem like this from scratch, without licensing a whole subsystem from someone like CEVA, look at Intel. Even they, after their acquisition of Infineon Wireless, struggled to build their own and had to buy Fujitsu’s LTE modem group to get the job done, and as far as I know they still haven’t managed to build one that runs in their own 22nm FinFET process.

Once again the Will Strauss article is here. The CEVA website is here.


More articles by Paul McLellan…


SPICE Development Roadmap 2013!

by Daniel Nenni on 12-04-2013 at 11:00 am


The MOS-AK/GSA Modeling Working Group, a global compact modeling standardization forum, delivered its annual autumn compact modeling workshop on Sept. 20, 2013 as an integral part of the ESSDERC/ESSCIRC Conference in Bucharest (RO). The event received full sponsorship from leading industrial partners including Agilent Technologies, LFoundry and Microchip. More than 30 international academic researchers and modeling engineers attended two sessions to hear 12 technical compact modeling presentations and posters including the keynote by Larry Nagel.

The MOS-AK/GSA speakers discussed:

  • SPICE In The Twenty-First Century (Larry Nagel, Omega Enterprises Consulting),
  • NGSPICE: recent progresses and future plans (Paolo Nenzi, NGSPICE Development Team),
  • KCL and Linear/NonLinear Separation in NGSPICE (Francesco Lannutti, NGSPICE Development Team),
  • Modeling Junction Less FETs (Jean-Michel Sallese, EPFL),
  • HiSIM-Compact Modeling Framework (Hans Juergen Mattausch, Uni. Hiroshima),
  • The Correct Account of Nonzero Differential Conductance in the Saturation Regime in the MOSFET Compact Model (Valentin Turin, State University-ESPC),
  • State of the Art Modeling of Passive CMOS Components (Bernd Landgraf, Infineon Technologies),
  • Compact I-V Model of Amorphous Oxide TFTs (Benjamin Iniguez, URV),
  • Three-Dimensional Electro-Thermal Circuit Model of Power Super-Junction MOSFET (Aleš Chvála, Uni. Bratislava),
  • A Close Comparison of Silicon and Silicon Carbide Double Gate JFETs (Matthias Bucher, TUC),
  • Towards wide-frequency substrate model of advanced FDSOI MOSFET (Sergej Makovejev, UCL),
  • A Simple GNU Octave-Based Tool for Extraction of MOSFET Parameters (Daniel Tomaszewski, ITE).

The presentations will be open for downloads at <http://www.mos-ak.org/bucharest/>

The MOS-AK keynote speaker, Larry Nagel, delivered his “SPICE in the Twenty-First Century” talk, drawing a roadmap of future SPICE development directions. So how will SPICE evolve? Several proprietary SPICE versions already contain one or more of these improvements, but they have not yet migrated to open source versions.

1. SPICE will remain an Open Source tool: SPICE has lasted as long as it has in part because it is an open source tool, versions of which are available free of charge to all takers. An open source SPICE is a far better educational tool because the student can always take the tool apart to see how it works and how it can be improved. An open source SPICE is also a far better research tool because it can be modified to fit the needs of a new technology, even if it is not profitable for an EDA vendor to make the modification. For SPICE to continue to be successful, it must continue to be an open source tool.

2. SPICE will take advantage of new hardware (multicores, GPUs, cloud computing, even smartphones): One new development in parallel circuit simulation is the XYCE “SPICE compatible” circuit simulator, developed at Sandia National Laboratories with the express goal of exploiting parallel processing hardware. XYCE has been in development for fourteen years and will be released as an Open Source tool in the near future. Keep an eye on the website http://xyce.sandia.gov for more news.

3. SPICE will include an advanced version of ADMS to accommodate model development: It is pretty clear that the replacement for CMOS has not yet been invented. Hundreds of “new” device models will have to be added to SPICE to try out new technology ideas in the quest for a new technology. The ADMS program, originally developed by Laurent Lemaitre at Motorola, is an open source tool that was designed to compile SPICE models coded in a subset of Verilog-A into the C language routines required by most versions of SPICE. However, some newer device models, such as BSIMCMG and BSIM6, have been coded in a subset of Verilog-A that is not compatible with ADMS, so these models are not available to open source versions of SPICE. This problem has to be fixed.

4. SPICE will accept Verilog-A as input: In addition to accepting device models coded in Verilog-A, it would be highly desirable to have SPICE accept netlists that are compatible with Verilog-A. This compatibility will aid compact model development by allowing the full power of the Verilog-A language. Verilog-A compatibility will also allow noncritical portions of the circuit to be described at the behavioral level. Partitioning the circuit into behavioral (functional) blocks will aid parallelization.

5. SPICE will include RF Analysis: By the end of the 1980s, at around the 1µm technology node, CMOS transistor fT had extended well into the GHz region. As transistors became faster, it became possible to integrate RF circuits, and the wireless explosion began. This necessitated an entirely new line of algorithms and simulators. Each simulator had different algorithms that worked on some RF circuits but not others. The user interface and netlist description varied from program to program. With the exception of Qucs, none of the programs were Open Source. Because of this lack of open source RF simulation tools, RF simulators are only slowly working their way into educational institutions.

6. SPICE will include Variational Analysis: In the past, variational analysis has been added to SPICE almost as an afterthought. As an educational matter, students now need to learn variational analysis from the very start; it can no longer be treated as an afterthought! New technologies now have to be specified with random variations, and SPICE needs to have variational analysis integrated into its basic framework.
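As a rough illustration of what variational analysis does, here is a toy Monte Carlo sketch in Python. The alpha-power-style delay model, the nominal threshold voltage and the sigma are all invented for illustration; a real simulator would sample variation at the device-model level inside the solver rather than around a closed-form formula:

```python
import random
import statistics

# Toy Monte Carlo variational analysis: sample threshold-voltage variation
# and propagate it through a crude gate-delay model. All numbers assumed.
VDD, VTH_NOM, SIGMA_VTH = 1.0, 0.3, 0.02   # volts

def gate_delay(vth, k=1.0):
    # alpha-power-law-style delay, delay ~ k / (VDD - Vth)^2
    return k / (VDD - vth) ** 2

random.seed(42)   # reproducible sampling
samples = [gate_delay(random.gauss(VTH_NOM, SIGMA_VTH)) for _ in range(10_000)]
mu = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"nominal = {gate_delay(VTH_NOM):.3f}, mean = {mu:.3f}, sigma = {sigma:.3f}")
```

Note that the mean delay comes out slightly above the nominal delay: the delay is a convex function of Vth, so symmetric device variation produces an asymmetric (pessimistic) delay spread, which is exactly why variation has to be simulated rather than hand-waved.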

7. SPICE will include Thermal Analysis: Devices are being placed closer and closer to each other, and devices are being placed on substrates that are not good thermal conductors. MOSFET models already include thermal effects, but only self heating is included. Thermal coupling between devices is or will be significant in analog IC design, and Larry Nagel thinks this analysis feature will become more important as time goes on.
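The inter-device coupling described above can be sketched with a simple thermal-resistance matrix. This is a minimal linear model with invented values, not any simulator’s actual formulation: each device’s temperature rise is the sum of its own self-heating plus contributions from its neighbours’ dissipation through off-diagonal thermal resistances:

```python
# Minimal sketch of inter-device thermal coupling (all values invented):
# device temperatures follow T = T_amb + R_th @ P, where off-diagonal
# thermal resistances couple neighbouring devices on the die.

T_AMB = 25.0                     # ambient, deg C
P = [0.5, 0.2, 0.0]              # watts dissipated per device
R_TH = [                         # deg C per watt; symmetric coupling matrix
    [40.0, 10.0,  2.0],
    [10.0, 40.0, 10.0],
    [ 2.0, 10.0, 40.0],
]

def device_temperatures(r_th, p, t_amb=T_AMB):
    """Each device's temperature: ambient plus coupled heating terms."""
    return [t_amb + sum(r * pw for r, pw in zip(row, p)) for row in r_th]

temps = device_temperatures(R_TH, P)
print(temps)   # the idle third device still warms up via coupling
```

Even the third device, which dissipates nothing, heats above ambient purely through coupling, which is the effect a self-heating-only model misses.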

This list is hardly all-inclusive. After forty years of development, there are still dozens of new features and enhancements needed by analog circuit designers! Larry Nagel’s list represents the features he thinks are most pressing, but of course every analog designer will have his or her own list, which may or may not include these seven items. The future of SPICE is indeed bright, even after all these years.

The MOS-AK/GSA Modeling Working Group is coordinating several upcoming modeling events to focus on the Verilog-A compact model standardization as well as open source SPICE developments: a winter Q4/2013 MOS-AK/GSA meeting in Washington DC (http://www.mos-ak.org/washington_dc_2013/), and a spring Q2/2014 MOS-AK/GSA meeting in London (http://www.mos-ak.org); a special compact modeling session at the MIXDES’14 Conference in Lwow (https://www.mixdes.org); an autumn Q3/2014 MOS-AK/GSA workshop in Venice.

In the meantime please also visit <http://www.mos-ak.org/washington_dc_2013/> where we will continue the discussion of all compact/SPICE modeling topics.



Intel Comes Clean on 14nm Yield!

by Daniel Nenni on 12-04-2013 at 8:00 am

Hopefully this blog will result in a meaningful discussion on truth and transparency, and how Intel can do better in regard to both. Take a close look at the manufacturing slides presented by William Holt, Executive Vice President and General Manager, Intel Technology and Manufacturing Group. You can see the slide deck HERE. Slide number four has a yield graph, and that is where the truth finally comes out about the 14nm process move-in delay.


This slide tracks 14nm yield progress against 22nm and clearly shows significant problems that led to the delay as early as July 2013, which coincides nicely with my “Intel 14nm Delayed?” blog on August 1st:

One of the more interesting pieces of information I overheard at SEMICON West earlier this month was that Intel 14nm was delayed. This rumor came from the semiconductor equipment manufacturers and they would know. What I was told is that the Intel 14nm process has not left the OR development facility to be replicated in the OR and AZ fabs.

The Intel 22nm yield looks very good, which means FinFETs were not as much of a manufacturing challenge as the FUDsters had predicted. 14nm, on the other hand, requires double patterning, which is what I’m told Intel struggled with (DPT and FinFETs). The foundries tackled DPT at 20nm, which is now yielding better than 28nm at the same point in the ramping process. You will see production 20nm parts from the top TSMC customers in Q1 2014, absolutely. Samsung already has 14nm FinFET tape-outs with DPT in hand, which will be in silicon in 2H 2014 and is on par with Intel 14nm SoCs. TSMC is roughly one quarter behind, with 16nm tape-outs scheduled for Q1 2014.

Intel Corporation (INTC) Q2 2013 Earnings Conference Call July 17, 2013 5:00 PM ET:

“14 nanometer on-track to enter production by the end of the year” EVP and CFO Stacy J. Smith

“We are on track to start production on our 14 nanometer process technology in the back half of this year.” CEO Brian M. Krzanich

Given that yield was at a low point at the time of this conference call, I find these statements “non-transparent” to say the least. Brian Krzanich again denied that 14nm was delayed at the Intel Developer Forum (9/10/2013) but had this to say in the Q3 2013 conference call on October 15th:

While we are comfortable with where we are at with yields, from a timing standpoint, we are about a quarter behind our projections. As a result, we are now planning to begin production in the first quarter of next year.

I know, not a big deal: a three-month delay at these geometries is well within tolerance. The problem I have is transparency. Brian looked us in the eyes and said no 14nm delays at IDF, and one month later he changed his message. For a CEO with 30 years of manufacturing experience at Intel, I find this dishonest, and that is not the Intel culture of CEOs past. Read Andy Grove’s memoir “Swimming Across”, which I just did; it is all about truth and transparency, and that is the foundation of trust.

More Articles by Daniel Nenni…..


Cadence & ARM Optimize Complex SoC Performance

by Pawan Fangaria on 12-03-2013 at 3:00 pm

Nowadays an SoC can be highly complex, with hundreds of IPs performing various functions alongside multi-core CPUs. Managing the power, performance and area of the overall semiconductor design in the SoC becomes an extremely challenging task. Even if the IPs and various design blocks are highly optimized within themselves, the SoC as a whole may not provide optimum results if it is not architected well. The central idea in gaining the best performance is how well all these components are interconnected. So, how do you find that optimum arrangement in a large solution space with so many possibilities?


[A typical SoC configuration]

In the above SoC configuration there are ARM CoreLink advanced System IP components connected to a Cadence Databahn DDR controller. These advanced System IP components give designers many choices for interconnection, along with other important solutions such as cache coherency, enabling them to rapidly explore the design space and determine the best optimized configuration.

Cadence and ARM have developed an efficient solution for SoC integration in which ARM’s highly configurable IP components are used in Cadence’s Interconnect Workbench environment. ARM’s CoreLink NIC-400 can have many master and slave interfaces from the AMBA family, i.e. AHB, AXI, APB and AXI4, each of which can have configurable address space, width and clock speed. Then there are Quality of Service (QoS) and QoS Virtual Networks (QVN), which provide mechanisms for bandwidth and latency management. To manage routing congestion, Thin Links are at the user’s disposal for any special point-to-point connection.


[AMBA Designer – Configuration of a complex NIC-400 interconnect]

ARM provides CoreLink AMBA Designer for users to easily and interactively select implementation options for the best configuration. A design topology of appropriate size, with a switch matrix providing sufficient throughput and low latency, must be selected from among the candidates.

After quickly architecting the SoC, the immediate challenge is to verify how this complex arrangement will perform under different scenarios in a full SoC context. This is where Cadence Interconnect Workbench comes into the picture, providing many different types of “what if?” analysis capabilities.



[Top – Latency distribution, Bottom – One-click waveform debug of slow transactions]

Above is an example of the latency distribution for a group of simulations. It clearly shows that writes are quicker than reads. Slower transactions can be debugged further by right-clicking and launching Cadence’s SimVision tool.

Interconnect Workbench provides comprehensive capabilities for quickly capturing and analysing cycle-accurate performance of the architecture. AMBA Designer generates the RTL as well as IP-XACT XML file that matches the design. Interconnect Workbench reads the IP-XACT file and automatically generates a UVM testbench in either ‘e’ or ‘SystemVerilog’.


[Interconnect Workbench generated testbench]

The IP-XACT descriptions of System IP cores enable Interconnect Workbench to provide performance analysis for interconnect components as well as cycle-accurate models of DDR controllers. The figure above shows the testbench of an SoC core along with a DDR controller.

It’s imperative that with flexible tools to select appropriate architectures, clock schemes, memory and cache sizes, power domains and other required configurations, semiconductor design integration can be accelerated to meet the time-to-market challenge. A detailed description of ARM’s CoreLink, AMBA Designer and Cadence’s Interconnect Workbench is provided in a whitepaper at the Cadence website. It’s an interesting read to learn more about recent advances in System IP components and interconnect performance optimization in SoCs.

More Articles by Pawan Fangaria…..



Webinar: Parasitic Debugging made easy!

by Daniel Nenni on 12-03-2013 at 3:00 pm

We cordially invite you to attend this webinar and learn how to quickly debug post-layout designs. Concept Engineering is a privately held company based in Freiburg, Germany. It was founded in 1990 to develop and market innovative schematic generation and viewing technology for use with logic synthesis, verification, test automation and physical design tools. The company’s customers are primarily original equipment EDA tool manufacturers (OEMs), in-house CAD tool developers, semiconductor companies, IC/ASIC designers and FPGA designers.

Parasitic Debugging made easy!

Date: December 11, 2013
Time: 10:00 am – 11:00 am (PST)
Location: Online Webinar via WebEx

REGISTER NOW

What you will learn:

• Render schematics on the fly for Spice level netlists to understand function of circuit easily. Supported dialects include SPICE, HSPICE, Spectre, Calibre, CDL, DSPF, SPEF, Eldo, PSPICE and IBIS

• Extract, navigate and save fragments of circuits as Spice netlists with the ‘cone view’, for reuse as IP or external use in partial simulation

• Drag & drop selected components/nets between all design views (schematic, logic cone, Parasitic window and source code view) to cross probe and shorten debug time, especially during tape-out for full chip debug

• Automatically create digital logic symbols and schematics from pure SPICE netlists for easy design exploration

• Visualize and analyze parasitic networks (Post layout formats: DSPF, RSPF, SPEF) and create SPICE netlists for critical path simulation

• Instantly turn off/on parasitic structures in SPICE circuits for better comprehension of CMOS function

• Export schematics and schematic fragments into Cadence Virtuoso Schematic for further optimization and debugging

• ERC Checking: Verify/debug connectivity especially for multi fan-in and fan-out nets by identifying floating input and output nets, heavy connected nets, etc.

• Generate design statistics & reports: Instance & primitive counts

• Extend functionality of SpiceVision to match project needs by interfacing with the open database through Tcl scripts
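The ERC item above, finding floating input and output nets, is easy to picture with a small sketch. This hypothetical Python fragment (not Concept Engineering’s implementation; the netlist, element-terminal counts and global-net list are all invented for illustration) flags any net that is touched by only one terminal in the whole design:

```python
# Hypothetical ERC-style floating-net check on a tiny SPICE netlist:
# a net connected to exactly one terminal in the design is flagged.

from collections import Counter

NETLIST = """\
M1 out in vss vss nch
M2 out in vdd vdd pch
R1 out loadnet 1k
C1 dangling 0 1p
"""

def floating_nets(netlist, globals_=("0", "vdd", "vss")):
    counts = Counter()
    for line in netlist.splitlines():
        card = line.split()
        if not card or card[0].startswith('*'):
            continue
        # terminals per element type: MOSFET=4, resistor=2, capacitor=2
        n_terms = {'M': 4, 'R': 2, 'C': 2}.get(card[0][0].upper(), 0)
        counts.update(card[1:1 + n_terms])
    # a net seen once (and not a supply/ground) has nothing driving or
    # loading it on the other side, so report it as floating
    return sorted(n for n, c in counts.items()
                  if c == 1 and n not in globals_)

print(floating_nets(NETLIST))   # ['dangling', 'loadnet']
```

A production ERC engine works on the full connectivity database and knows real device models, but the principle of counting terminal connections per net is the same.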

Concept Engineering’s products deliver the fastest, highest quality automatic schematic generation and viewing technology. They integrate easily into all EDA and CAD tools and design flows and run on Windows and Unix computing platforms.

  • Nlview™ Widgets – a family of GUI building blocks for design visualization (Tcl/Tk, MFC, Qt, Perl/TK and Java) that can be easily customized and seamlessly integrated into other EDA tools.
  • T-engine™ – an advanced visualization engine for transistor-level structures.
  • RTLvision® PRO – a graphical debugger for SystemVerilog, Verilog and VHDL based designs.
  • GateVision® PRO – a standalone design analyzer that generates easy-to-read schematics from any Verilog or EDIF netlist. GateVision is a point tool that fits into most design flows.
  • SpiceVision® PRO – an interactive visualization tool that can be used to debug and analyze SPICE circuits and SPICE models.
  • SGvision™ PRO – a customizable mixed-mode debugger (SPICE and Verilog).
  • StarVision™ PRO – a customizable mixed-signal and RTL debugger (SPICE, VHDL, Verilog and SystemVerilog).

REGISTER NOW



And the 2013 Mobile Winner is … Micron?

by Ed McKernan on 12-03-2013 at 9:00 am

To the surprise of nearly all observers, and due to no extraordinary technological advancement, there is one true mobile winner of the past year: Micron, whose stock has soared in 2013 from $5 to $21. I know, you’re probably saying, “Micron, you can’t be serious.” Let’s run through the facts. For analysts who recall, it was 1995 when DRAM was basking in the sun under the then-gargantuan demands of Windows 95, and when PC makers fretted about finding enough memory to load their machines in time for the September 1995 launch from Redmond to the blaring Rolling Stones tune “Start Me Up.” Looking back, it was peak DRAM. Thereafter, for nearly 20 years, one processor vendor reigned profitably as its stock soared under the command of Andy Grove. We now may be witnessing an unbelievable trading-places scenario as smart mobile kicks into high volume and memory (DRAM and Flash) becomes worth significantly more than the apps processor. Who’s the real commodity silicon now?

Intel is going all out to win the mobile wars with Atom running in its 22nm process. The company has one unique feature that makes the coming months rather interesting for processor watchers. Intel is the only true IDM that ships its own silicon as far and wide as possible. An empty fab is a terrible thing to waste, and it is at this point that Intel is most dangerous. The equipment has been paid for, and the lights and air conditioning must remain on. Time to load the plant with Atom wafers and sell the parts for dimes above the cost of the sand, packaging material and assembly. Intel can go scorched earth if needed and take some ARM merchants with them to the single-digit pricing realm. No one ever survives unscathed in a price war with Intel.

I was trading emails with an analyst friend who has lived semiconductors since the early 1990s. It struck us how remarkable it is that we have flipped from the playing field of 20 years ago, when a dozen DRAM vendors fought it out to sit next to the Intel socket, to today, when a dozen mobile processor vendors fight to sit next to the Micron or Samsung DRAM socket. Pricing power goes to the few who can play the game. When Micron bought the then-bankrupt Elpida for a song at the height of the Japanese Yen valuation, and SK Hynix’s DRAM fab blew up, the tables turned for what has forever been considered a commodity but now prints real dollars.

In the wandering years from 1995 to now, when profitability was less important than cash flow, only Samsung and Micron were able to carry on to the finish. My prediction is that in the coming year, mobile processor commoditization continues. Rumors abound that Intel is offering $10 Atoms. Can a $5, 32-bit ARM processor be far behind? Meanwhile DRAM and NAND pricing look much more stable. Ken Fisher, the financial columnist for Forbes magazine, refers to the stock market as the great humiliator. Moore’s Law has through the years done a number on many semiconductor companies, but never so much as when a market as large as mobile is overwhelmed with a multitude of suppliers, each grasping at a minority share in the belief that it will be profitable. No, history again shows it is better to be alone, or no worse than second in a field of two, in order to retain significance with profits.

What next for Intel? Perhaps they should return to their roots. Their x86 franchise in the data center and the PC will offer plenty of profits to sustain them in the near future; with mobile, however, maybe it is time to size up the value that lies in the memory portion of the platform relative to the dwindling dollars in the application processor space. Right now memory looks like top dog by a wide margin in the mobile platform, and it is under less assault than Qualcomm’s baseband chip. It is interesting to recall that Intel put the DRAM, the EPROM and the microprocessor all into production in the first three years of the company, a remarkable feat. Returning to the abandoned DRAM and Flash (the successor to EPROM) could be a better play for the company, as Micron proved in 2013. Could it be that we are, for the first time in 20 years, on the verge of a Memory Dominant Semiconductor Market?




Simplified Assertion Adoption with SystemVerilog 2012

by Daniel Payne on 12-02-2013 at 7:01 pm

SystemVerilog's assertion constructs were improved and simplified in the 2012 version of the standard compared to the 2005 version. I recently viewed a webinar on SystemVerilog 2012 by consultant Srinivasan Venkataramanan, who works at CVC Pvt. Ltd. There has been a steep learning curve for assertions in the past, and hopefully you'll feel more comfortable after reviewing this blog or watching the full 61-minute recorded webinar. The webinar is hosted at Aldec, although the material applies to any IC design or verification engineer using SystemVerilog 2012 with assertions.
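For readers new to the topic, the kind of concurrent assertion the webinar deals with follows the pattern sketched below. This is an illustrative example only — the module, signal names, reset style, and the 1-to-3 cycle timing window are invented for this sketch, not taken from the webinar:

```systemverilog
// Illustrative concurrent assertion (hypothetical handshake protocol).
module handshake_check (input logic clk, rst_n, req, ack);

  // Every request must be acknowledged within 1 to 3 clock cycles.
  property p_req_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:3] ack;
  endproperty

  a_req_ack: assert property (p_req_ack)
    else $error("ack did not follow req within 3 cycles");

endmodule
```

Bound to a design via `bind` or instantiated in a testbench, an assertion like this documents intent and flags protocol violations during simulation.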


Srinivasan Venkataramanan, CVC Pvt. Ltd.
Continue reading “Simplified Assertion Adoption with SystemVerilog 2012”


Conquering errors in the hierarchy of FPGA IP

by Don Dingee on 12-02-2013 at 10:00 am

FPGA design today involves not only millions of gates on the target device, but thousands of source files with RTL and constraints, often generated by multiple designers or third party IP providers. With modules organized in some logical way describing the design, designers brace themselves for synthesis and a possible avalanche of errors.

Large FPGA designs are usually segregated into several types of IP – known-good legacy modules, third party IP, modules under development, and modules not yet started – arranged in some type of hierarchy. Synthesis tools may not pick up on the context, however.

Traditionally, an FPGA synthesis tool would take a whack at the entire design, combining and analyzing the thousands of source files and likely encountering errors. Rather than complete synthesis, the tool would stop and report the items needing resolution. Designers would go in and decipher the report (not necessarily organized along the lines of the design hierarchy), then kick off another synthesis run, which would presumably get farther before probably generating new errors. This iterative synthesize-find-fix process could easily take weeks before synthesis completed successfully for the entire design.

A better solution is “continue-on-error”, a capability found in Synopsys Synplify Premier and Synopsys Certify. In this approach the synthesis tool observes the hierarchy and works through the design on a module-by-module basis, attempting synthesis of every module. Many modules (presumably the known-good ones, and hopefully a good number of the third-party and newly developed ones) will complete successfully.

In a first-pass synthesis there may still be a multitude of errors, as Angela Sutton, staff product marketing manager for Synopsys, put it. The difference with continue-on-error synthesis is that most, if not all, of the errors in the total design are spotted and categorized against specific modules in the hierarchy. This allows modules to be taken out, worked on in isolation, and merged back in. Such a “divide-and-conquer” approach may not completely eliminate iterations, but it dramatically reduces the number of runs needed compared to the traditional approach, where a run that hits errors synthesizes only part of the design.

How is the hierarchy established? Synplify supports compile points, which specify user-defined RTL partitioning, or a hierarchical design interface through which partitions can be exported based on the problem areas. Files can be merged back into the total design either at the RTL level (top-down flow) or at the netlist level (bottom-up flow).
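As a rough illustration, a compile-point flow is driven from the project's constraint/Tcl files. The fragment below is a hypothetical sketch: the command follows Synplify's documented compile-point flow, but the module names are invented and exact options vary by tool version:

```tcl
# Hypothetical Synplify constraint-file fragment (module names invented).

# Lock a known-good third-party block so it is not re-synthesized:
define_compile_point {v:work.third_party_ip} -type {locked}

# Mark a module under active development as a soft compile point, so it
# can still be re-optimized in the context of the parent design:
define_compile_point {v:work.dsp_datapath} -type {soft}
```

With partitions defined this way, an error in `dsp_datapath` can be isolated and fixed without disturbing the locked IP.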

Continue-on-error synthesis and divide-and-conquer debug of hierarchically-organized modules are just two of the techniques described in a recent Synopsys white paper authored by Angela:

10 Ways to Effectively Debug Your FPGA Design

I asked Angela what her team is working on next on the list of continuing improvements to Synplify, and she mentioned two areas. One is what to do with black-box IP, where the synthesis and debug tools have limited visibility. Synopsys is working with IEEE P1735 IP encryption, allowing tools to interpret a standardized format including design constraints while maintaining the integrity of rights management. The second area is overall improvement in querying the design database and providing better reporting. She cited how gated-clock transforms are handled and reported as a strong example. Tcl scripting, either from Synopsys or user-written, can extend the reporting capability to suit particular needs.

The challenge for FPGA designs continues to be bigger designs in less time, and clobbering the tedious work of manually exploring errors with automated solutions that zoom in on problem areas quickly. Synopsys continues to drive the state of the art in FPGA synthesis.

