
TSMC OIP and the Insatiable Computing Trend!

by Daniel Nenni on 09-14-2017 at 12:00 am

This year’s OIP was much more lighthearted than I remember, which is understandable: TSMC is executing flawlessly, delivering new process technology every year. Last year’s opening speaker, David Keller, used the phrase “Celebrate the way we collaborate,” which served as the theme for the conference. This year David’s catchphrase was “Insatiable computing trend,” which again set the theme.

First up was Dr. Cliff Hou’s update on design enablement for TSMC’s advanced process nodes. Cliff again hit on the mobile, HPC, IoT and automotive markets with a focus on 55ULP, 40ULP, 28HPC+, 22ULP/ULL, 16FFC, and 12FFC. Speaking of 16FFC, TSMC’s Fab 16 in Nanjing, China is on track to start production in the second half of 2018, approximately two years after the groundbreaking. These will be the first FinFET wafers manufactured in China, another first for TSMC. China represents TSMC’s largest growth opportunity, so this is a very big deal.

Not surprisingly 10nm was missing from the presentations but as we all know Apple is shipping 10nm SoCs in the new iPads and iPhones. As you may have read, the new iPhone X supports the “Insatiable computing trend” but we can talk about that in more detail when the benchmarks and teardowns become available. Needless to say I will be one of the first ones on the block to own one.

Cliff made comparisons between 16nm and 7nm, giving 7nm a 33% performance or 58% power advantage. 7nm is now in risk production with a dozen different tape-outs confirmed for 2017, and you can bet most of those are SoCs, with a GPU and FPGA mixed in. 7nm HVM is on track for the first half of 2018, followed by N7+ (EUV) in 2019. N7+ today offers 1.2x density and a 10% performance or 20% power improvement. The key point here is that the migration effort from N7 to N7+ is minimal, meaning that TSMC 7nm will be a very “sticky” process. Being first to EUV production will be a serious badge of honor, so I expect N7+ will be ready for Apple in the first half of 2019.
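As a back-of-envelope illustration (my own arithmetic, not TSMC’s), the quoted node-to-node gains can be compounded to see what a two-step 16nm-to-N7+ migration buys, remembering that the performance and power figures are either/or alternatives, not both at once:

```python
# Compound the node-to-node gains quoted above. The "X% performance
# OR Y% power" figures are alternatives at iso-power / iso-performance.
n16_to_n7_perf = 1.33    # +33% performance at iso-power
n16_to_n7_power = 0.42   # -58% power at iso-performance
n7_to_n7plus_perf = 1.10
n7_to_n7plus_power = 0.80
n7plus_density = 1.2     # 1.2x logic density vs N7

# A design migrated 16nm -> N7 -> N7+ entirely for speed:
total_perf = n16_to_n7_perf * n7_to_n7plus_perf
# ...or entirely for battery life:
total_power = n16_to_n7_power * n7_to_n7plus_power

print(f"16nm -> N7+: {total_perf:.2f}x performance "
      f"or {total_power:.0%} of original power")
```

So a speed-focused migration nets roughly 1.46x, while a power-focused one lands at about a third of the original power, under the stated assumptions.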

Finally, Cliff updated us on TSMC’s packaging efforts: InFO_OS, InFO_PoP, CoWoS, and the new InFO_MS (integrated logic and memory). Packaging is now a key foundry advantage, so we will take a much more detailed look at the different options in the coming weeks as the presentations are made available.

As you all know I’m a big fan of Cliff’s (having known him for many years) and he has never led me astray so you can take what he says to the bank, absolutely.

The other keynotes were delivered by our three beloved EDA companies, who celebrated TSMC’s accomplishments over the last 30 years. I would give Aart de Geus the award for the most content without the use of slides. Aart offered a nice retrospective; since Synopsys is also 30 years old, the two companies really grew up together. Anirudh Devgan of Cadence talked about systems companies doing specialized chips to meet their need for insatiable computing. As I mentioned before, systems companies now dominate SemiWiki readership, so I found myself nodding my head quite a bit here. Wally Rhines gets the award for the funniest slide, illustrating the yield improvements TSMC logos have accomplished over the years:

All-in-all it was time very well spent. It was a good crowd, the food was great, and I gave away another 100 books and SemiWiki pens in an effort to stay relevant. There were more than 30 technical papers that we will cover as soon as they are made available and if you have specific questions hit me up in the comments section.

Also read: TSMC Design Enablement Update


Semiconductor Device Physics, Lab in a Box

by Daniel Payne on 09-13-2017 at 12:00 pm

One of my favorite classes in college was the lab, mostly because we actually got to use real electronics, measure something, and finally write it up in our lab notebooks. The issue today is that a college student taking Electrical Engineering probably doesn’t have much access to 10nm FinFET silicon for use in a lab class, and is probably using badly out-of-date materials and lab exercises. At the DAC exhibit in Austin held in June I was able to visit the booth of Platform Design Automation and see something new that they call the Semiconductor Education Kit (SEK).

Let’s say that you want to train someone about modern semiconductor devices so that they can plot waveforms of currents versus voltage (IV), or parasitics (CV) of modern devices like:

  • FinFET
  • 28nm planar CMOS
  • SOI
  • Bipolar
  • III/V devices
  • Passive devices
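
To make the IV idea concrete, here is a sketch of the kind of Id-Vgs sweep a student would capture. It uses the textbook square-law MOSFET model with made-up parameter values, nothing SEK-specific:

```python
# Id-Vgs sweep for a long-channel NMOS using the textbook square-law
# model. Vth and k values are illustrative assumptions.
def nmos_id(vgs, vds, vth=0.5, k=2e-4):
    """Drain current (A) from the square-law model."""
    if vgs <= vth:
        return 0.0                        # cutoff (sub-threshold ignored)
    vov = vgs - vth                       # overdrive voltage
    if vds < vov:
        return k * (vov - vds / 2) * vds  # linear (triode) region
    return 0.5 * k * vov * vov            # saturation region

sweep = [round(v * 0.1, 1) for v in range(0, 13)]   # Vgs = 0.0 .. 1.2 V
for vgs in sweep:
    i_d = nmos_id(vgs, vds=1.2)
    print(f"Vgs={vgs:>4} V  Id={i_d * 1e6:7.2f} uA")
```

With the SEK the same curve comes from a real measured device rather than an idealized equation, which is exactly the point of the kit.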

Using the SEK, this kind of semiconductor device training becomes practical. Here are four situations where a semiconductor device emulator could be used:

  1. Semiconductor device characterization lab.
  2. Device model teaching tool, giving hands-on experience with both device modeling and circuit analysis best practices.
  3. Semiconductor physics teaching: short-channel effects, stress effects, reliability, and variation.
  4. What-if analysis, where you can mock up new device architectures with standard SPICE modeling and interface software.

    So what’s included with the SEK? It combines a hardware jig with software.

    On the left panel of the DE101 is a device type selection for emulation, giving you lots of choices:
    Name       | Device Type | Description
    -----------|-------------|-------------------------------
    NMOS 180nm | MOSFET      | W=0.204um, L=0.15um
    NMOS 28nm  | MOSFET      | W=0.1um, L=0.03um
    PMOS 28nm  | MOSFET      | W=0.1um, L=0.04um
    LDMOS      | MOSFET      | W=5um, L=3um
    SOI FB     | MOSFET      | W=0.15um, L=0.13um
    SOI TB     | MOSFET      | W=0.15um, L=0.13um
    BJT NPN    | BIPOLAR     |
    BJT PNP    | BIPOLAR     |
    Diode      | DIODE       | Area=4e-10 m2
    Resistor   | RESISTOR    | W=0.15um, L=75um
    Capacitor  | CAPACITOR   | L=5um, W=5um
    Varactor   | VARACTOR    | L=0.5um, W=1um, Finger=40
    HEMT GaAs  | MOSFET      | W=20um, L=0.07um, nf=2
    HEMT GaN   | MOSFET      | W=20um, L=0.07um, nf=6
    FINFET     | MOSFET      | L=0.02um, nfin=3, tfin=0.014um

    In the upper right panel you can see the connections for your Device Under Test (DUT), displayed on the LED.

    The lower right panel is where the Test Setup is located, where you have Source Measure Units (SMUs), LCR connections, temperature, and time for reliability.

    Using the SEK in measurement mode shows real semiconductor device parameters as a function of the environmental choices:


    In modeling mode you can use the SEK with the MeQlab modeling software to show real model fitting, nominal and corner model extraction, explore process variability, or view noise and reliability characteristics:


    Teachers can use the EELab software to show students how device physics can be understood quickly when visualized as plots. Shown circled on this plot is the operating region between sub-threshold conduction and the linear region:
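A rough sketch of why that boundary shows up so clearly on a log plot, using a generic textbook model with illustrative constants (not EELab’s actual equations):

```python
import math

# Below Vth the current is exponential in Vgs (a straight line on a
# log plot); above it, roughly quadratic (much shallower). The kink
# between the two is the region circled in the EELab plot.
VTH, N, VT = 0.5, 1.5, 0.0258   # threshold, ideality, thermal voltage
I0, K = 1e-7, 2e-4              # illustrative current/gain constants

def log_id(vgs):
    """log10 of drain current vs gate voltage."""
    if vgs < VTH:   # sub-threshold: exponential conduction
        return math.log10(I0 * math.exp((vgs - VTH) / (N * VT)))
    # above threshold: square-law on top of the threshold current
    return math.log10(I0 + 0.5 * K * (vgs - VTH) ** 2)

# Sub-threshold slope in mV/decade: Vgs change for a 10x Id change
ss = N * VT * math.log(10) * 1000
print(f"sub-threshold slope ~ {ss:.0f} mV/decade")
```

Seeing that slope come straight out of a measured curve is the kind of intuition the plot-based teaching aims for.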


    The final mode of operation for the SEK is called demo mode, and this is where you can learn about analyzers, SMUs, LCR and noise characterization:


    I sure wish that the University of Minnesota had one of these setups back when I got my Electrical Engineering degree, because it would have brought me up to speed a bit quicker when I joined Intel and learned how to do DRAM design. This is a new product category, and I’ve never heard of another vendor offering anything quite like it.

    Read more about the SEK online.



    Webinar Preview: Alexa, can you help me build a better SoC?
    by Mitch Heins on 09-13-2017 at 7:00 am

    Nothing is pushing complexity in system-on-chip (SoC) designs like the drive (no pun intended) to make autonomous vehicles a widespread reality. Autonomous vehicle systems require heterogeneous architectures with reliable, efficient communications between CPU clusters, vision processing accelerators, storage and shared virtual memory, all while ensuring mission critical security. These systems cry out for automated tools and advanced machine learning to build correct-by-construction designs that are not prone to human error.

    Don’t miss this upcoming webinar hosted by NetSpeed Systems and Imagination Technologies where they will identify the system level challenges facing the autonomous vehicle SoC market and explore architectures and design strategies, including the use of machine learning and neural networks, for these complex SoCs. Special focus will be put on the interactions between the SoC interconnect fabric and the CPU sub-systems including how to deal with functional safety (FuSa) requirements in light of the ISO 26262 standard. NetSpeed and Imagination Technologies will also be previewing their solutions to these challenges.

    Speakers for the webinar will be John Bainbridge, Principal Applications Architect with NetSpeed Systems and Tim Mace, Senior Manager Business Development MIPS with Imagination Technologies.

    John Bainbridge holds a PhD in computer science from the University of Manchester, UK, and has more than 20 years’ experience in SoC interconnect architecture, design, modeling and implementation. Prior to joining NetSpeed, he worked on interconnect and CPU architectures and deployment at Qualcomm, Sonics, Silistix and the University of Manchester.

    Tim Mace held engineering, technical marketing and product marketing roles at ARM Ltd. from 2001 to 2014. Prior to ARM, Tim held engineering and consulting positions at a variety of companies. He holds a BS in Physics with Electronics from the University of Manchester Institute of Science and Technology and an MBA from The Open University.

    The webinar will be held on Thursday, September 28th, 2017, starting at 11:00 am Pacific Daylight Time. Please register in advance by using this link:

    REGISTER for Webinar

    About NetSpeed Systems:
    NetSpeed Systems provides scalable, coherent on-chip network IPs to SoC designers for a wide range of markets from mobile to high-performance computing and networking. NetSpeed’s on-chip network platform delivers significant time-to-market advantages through a system-level approach, a high level of user-driven automation and state-of-the-art algorithms. NetSpeed Systems was founded in 2011 and is led by seasoned executives from the semiconductor and networking industries. The company is funded by top-tier investors from Silicon Valley. It is based in San Jose, California and has additional research and development facilities in Asia. For more information, visit www.netspeedsystems.com.

    About Imagination Technologies:
    Imagination is a global technology leader whose products touch the lives of billions of people across the globe. The company’s broad range of silicon IP (intellectual property) includes the key processing blocks needed to create the SoCs (Systems on Chips) that power all mobile, consumer and embedded electronics. Its unique multimedia, processor and connectivity technologies enable its customers to get to market quickly with complete and highly differentiated SoC platforms. Imagination’s licensees include many of the world’s leading semiconductor manufacturers, network operators and OEMs/ODMs who are creating some of the world’s most iconic products. See: www.imgtec.com.


    Is there anything in VLSI layout other than “pushing polygons”?

    by Dan Clein on 09-12-2017 at 12:00 pm

    I have traveled a lot in the last 15 years, visiting customers as well as friends, and I was often invited to talk to the layout teams. The main purpose is always to encourage automation, so I developed a presentation covering market trends, technology trends, and the latest tool advancements. In many cases I present updates from DAC (Design Automation Conference), of which I am a big fan, having participated for more than 20 years. From memory to analog to digital place & route, in all cases the first question was:

    Is there anything else we can do other than just layout? What is my future? What can I do to grow faster? What did you do?

    Everybody is afraid that there is nothing else but schematic-to-layout or netlist-to-P&R, and that their life will be as monotonous as work on a production line. Nothing could be further from the truth.

    In the last three years I decided to add an answer to this question, and everywhere I talk to layout and design teams I reveal it in the last 20 minutes. This time I decided to share it with all of you, so you will be able to act before I come to visit your company and present. I will write a series of articles describing the tools in which I was involved as a layout designer, from internal to external. I will use approximate timeframes, as some of this happened 30 years ago, and will provide some context on the conditions that enabled such involvement. I will add the names of the people who actually did the work, or managed it, as in many cases I was only the instigator, advisor, or tester.

    There are a few conditions that favored my involvement in these developments:

  • The layout team always faces the challenge of being last in the chip design flow, so everything that is late or missed earlier in the flow falls on the layout team to “save the day”. If and when management analyzes the full flow, it becomes obvious that the biggest issue is not layout creation but the ECOs (Engineering Change Orders), the changes driven by post-layout simulations. Only somebody who looks at the full flow can understand that the sooner you bring layout (physical information) into circuit simulations, the faster and more predictable the flow gets. From floorplan to placement, from routing to final verification-clean layout, layout is always “somehow” in the hot seat. So if you want to change something, work on the FLOW.
  • I was lucky to start working at MSIL (Motorola Semiconductor Israel). Our CEO at the time, Zvi Soha, cared only about tapeout (which at that time meant a real tape, 19 inches in diameter, going out with the final chip GDSII). He wanted MSIL to be the best site within Motorola Semiconductor, so he enabled internal cooperation between teams and supported any new tool development that could support his plan, even if it sometimes went against corporate “policies”. Not many companies, even today, hold meetings between CAD, circuit design, layout design, etc. to enable flow automation as a whole rather than only a section of it. PMC-Sierra, with Norbert Diesing in Mixed-Signal CAD, had a “cooperation group” for support and roadmaps, but no resources to build new tools.
  • The hardware and software were all over the map. The front-end design was divided into two sections: the system and functional simulations were done on an IBM mainframe (water-cooled, in a climate-controlled room), and the circuit design was done on Daisy machines and software. The layout was done on Calma, a computer made by Data General (a mainframe) with four terminals, each with two monitors, one text and one graphics. We did not have a three-button mouse but a pen and an on-screen menu (taking up very valuable visual real estate). The attached pictures show a terminal of the S-140 machine, taken from the web.
  • The verification was done on the IBM using software called MASKAP, for which Motorola had the source code. You had to write a tape from Calma and load it onto the IBM to run verification, then write the results to tape and upload them back into Calma. We had a parser to display the DRC images (error polygons) on the Calma screen, plus a printed error file to identify what the error actually was. Very tedious work, I must say…
  • I was a new immigrant to Israel and had problems with Hebrew. All the technology was in English, so I focused on getting better at that, as it was a familiar language from my school days. The only way to move ahead was to add value to my knowledge, to be better at something that for others might not be of interest. Calma needed a workstation in the “cold room”, the one with the server, and coming from Romania I was the best acclimatized to use it. We worked in shifts, as we had four terminals and eight people, but if you used the cold-room terminal you could stay longer without bothering others. So I started to read all the Calma manuals; in three months I knew enough to be “dangerous”.

    The first step in my non-layout involvement came from an error. We taped out a chip verified and qualified “free of errors”, but it came back with a short. As most of the work in 1984 was in multiples of microns of polysilicon size, 45-degree polygons were not needed. At that time the technology had only one metal routing layer, and we did route metal at 45 degrees but not poly (which was used for gates and for routing). However, somebody used poly at 45 degrees and the verification deck did not cover that. On the screen we could see the error, a poly/metal 1 contact spaced “below accepted distance” to a 45-degree poly, but in silicon it was a real short. MSIL was doing development using only the verification decks provided by Austin, TX. After that date Zvi decided that we should have our own verification calibration done locally. A very new software expert from the CAD department, Karina (later married Ben-Tvi), was assigned and needed a layout person to build all the layout test cases for verification. Guess what: even though this was a simple task, a bit below an interesting layout challenge, I volunteered, thinking it would widen my horizons.

    This was my first experience outside “layout”, and working with Karina was a pleasure. We started daily meetings to figure out how to do this, as no documentation was available, and we were doing it kind of “under the table”. After about three months we had our own MSIL DRC deck. But good food brings more appetite for automation. By the end of 1985 we had additional checks, not related to manufacturing but to “good layout design practices”… Some of you may be aware that DRC and LVS actually have an underlying ERC running, especially these days with multiple voltage supplies. We developed our first SOFT CHECK by connecting one piece of diffusion to two different voltages (text-based) and so found a new verification that Motorola did not have. We proposed it to headquarters, and that is when they learned we had our own calibration in-house. As we had metal gates for input/output buffers (!) we added verification for those as well, since corporate delivered them “as is” and we were not allowed to modify them. After that, working with CAD became routine. The Calma language was called GPL, and we (all of the layout team) wrote small scripts to open the chip (it took one (1) hour to open the top level), to add text, to move screens for routing, and so on. None of us was really trained in software, but working with CAD proved to be a solid extension of our capabilities.
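A toy sketch of that soft-check idea, assuming the input is a simple list of (net, supply-label) pairs; the data layout and names are hypothetical, nothing like the original MASKAP deck:

```python
# Flag any diffusion net that carries text labels tying it to two
# different supplies -- the "soft check" described above, in miniature.
from collections import defaultdict

def soft_check(labels):
    """labels: list of (net_id, supply_text). Return nets with conflicts."""
    nets = defaultdict(set)
    for net, supply in labels:
        nets[net].add(supply)
    # A net labeled with more than one distinct supply is a violation.
    return {net: sorted(s) for net, s in nets.items() if len(s) > 1}

labels = [
    ("diff_12", "VDD"), ("diff_12", "VSS"),   # same diffusion, two rails
    ("diff_07", "VDD"), ("diff_07", "VDD"),   # consistent -- OK
]
print(soft_check(labels))
```

Only `diff_12` is reported; the same shape of check, scaled up, is what modern multi-supply ERC decks still do.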

    More about how a layout designer can have “spice” in their profession next time.

    Dan Clein
    CMOS IC Layout Concepts, Methodologies and Tools

    Also Read: Is there anything in VLSI layout other than pushing polygons? (2)



    Life Imitates Art
    by Bernard Murphy on 09-12-2017 at 7:00 am

    Neural nets, neuromorphic computing and other manifestations of artificial intelligence are popular topics these days. You might think of this as art (as in the art of computing) imitating life. What about the other direction – does life ever imitate art in this same sense? A professor at ASU’s Biodesign Institute thinks it can, in this case by building circuits using ribonucleic acid or RNA.

    The 5-cent story on RNA is that this is the intermediary in synthesizing protein, ultimately from DNA. DNA doesn’t create proteins directly; instead when a cell needs to produce a certain protein, it activates the protein’s gene – the portion of DNA that codes for that protein – and produces multiple copies of that piece of DNA in the form of RNA. These are then used to translate the genetic code into protein. Thus, RNA expands the quantity of a protein that can be made at one time from a gene.

    Researchers showed that they could develop a switch in synthesized RNA (called a toehold switch). As I understand it, this is an RNA sequence which is folded over on itself (as in the right picture above) with bases bonded to each other and thus disabled from synthesizing proteins. But when certain trigger RNA sequences are present and bind to the toehold switch, the folded-over structure breaks open and can synthesize proteins.

    At the simplest level the trigger can be a single complementary RNA sequence, which might be part of the cell’s natural RNA. You could consider this a basic logic gate. Starting from this elementary capability, the team went on to develop and validate more complex logic functions where two or more complementary RNA sequences occurring in the cell were required to trigger the hairpin (the folded-over structure) breaking open. Impressively, they were able to construct a 12-input logic function using AND, OR and NOT functions (specifically (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2)) as the trigger.
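The quoted 12-input trigger function is easy to sanity-check in software; here it is transcribed directly, with variable names mirroring the trigger labels (`a1s` standing in for the NOT-gated A1* sequence):

```python
# Evaluate the 12-input trigger function quoted above. Each argument
# is a boolean: is that trigger RNA sequence present in the cell?
def trigger(a1, a2, a1s, b1, b2, b2s, c1, c2, d1, d2, e1, e2):
    return ((a1 and a2 and not a1s)
            or (b1 and b2 and not b2s)
            or (c1 and c2)
            or (d1 and d2)
            or (e1 and e2))

# C1 and C2 present together: the hairpin opens.
print(trigger(*[False] * 6, True, True, *[False] * 4))   # True
# A1 and A2 present, but so is the inhibitor A1*: no trigger.
print(trigger(True, True, True, *[False] * 9))           # False
```

What the researchers did, in effect, is implement this expression in folded RNA rather than transistors.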

    OK – impressive, but so what? The so what is that this allows for intelligent detection of toxins and other signals, which may require complex logic conditions to accurately diagnose. This would then trigger the switch and synthesize corrective proteins, promising cell-level disease diagnosis and correction, perhaps even as far as cancer. An especially interesting aspect of this work is that viruses, from the common cold to Zika, use RNA to infect cells and are stubbornly resistant to conventional anti-infection methods. This approach could take the fight to viruses on their own turf.

    Perhaps not as immediately exciting to this audience as electronics-based health solutions, but with a little widened perspective, why limit your logic design aspirations to silicon? To learn more, read HERE and HERE.


    Fusing CMOS IC and MEMS Design for IoT Edge Devices

    by Mitch Heins on 09-11-2017 at 12:00 pm


    In my 34 years in IC and EDA, it never ceases to amaze me how ingenious designers can be with what they are given. Mentor, a Siemens business, has released a wonderful white paper that is proof of this yet again. The white paper steps through how one of their customers, MEMSIC, used the Tanner tool suite to develop a combined CMOS IC and MEMs accelerometer on a single die with no moving parts. This IC is truly intriguing and ingenious. It’s a mixture of digital, analog, and MEMs that uses heated gas molecules to sense 3-axis accelerations. More on this in a second…

    The other thing that I found interesting in this white paper is that it gives a quick, succinct overview of how one goes about merging these multiple domains together on a single die. Many people now routinely mix analog and digital on an IC, but adding the MEMs part is tricky. The secret sauce is to have a design flow that enables the designer to plug the foreign entity (the MEMs in this case) into a standard analog/mixed signal design flow. The second and third parts of the solution are to then have a way to efficiently do the layout and modeling of the MEMs components and then bring the model back into the overall system simulation with digital, analog and MEMs. The Mentor Tanner solution tool does all that.

    The first part is simple enough. Add a set of symbols to the schematic capture library and a set of pcells to the layout library that enable the capture and layout of both the electrical and the MEMs components. In addition, you also need models for all the components (electrical and mechanical). Electrical models already exist in your foundry PDK; the MEMs models must be created from scratch. In this flow, designers can either use analytical equations that can be directly simulated in SPICE or code the MEMs behavior in Verilog-A. OK, so far that’s straightforward, assuming you know the analytical equations for your MEMs components.

    The second part is doing the layout for the MEMs, and the third part is back-annotating a more accurate model of the MEMs for re-simulation. The second step is usually where designers mess up. The problem is that most MEMs designers are used to designing MEMs, not electronics. As such, they typically start with a 3D mesh model of their layout that they iterate through a series of finite element analysis (FEA) simulations to get the true MEMs behavior correct. All well and good, except that once they have the 3D structure working correctly, they are faced with creating a faithful 2D representation of that structure that can be encoded into mask layers for wafer processing. Invariably, steps are missed and the structure does not get manufactured as it was simulated. The bad news is that there isn’t really a good way to check the 2D layout against the 3D structure as modeled in the FEA tool. In short, a part may get manufactured two or three times before everything is correct!

    Mentor’s Tanner tool suite takes a different tack. It encourages designers to start with a 2D mask layout of the MEMs and then combines the 2D layout with a 3D solids modeler and a set of 3D fabrication process steps to automatically generate a 3D solid model for FEA simulations. Following this methodology, the 3D structure is iterated using 2D source masks and simulation of the auto-generated 3D models until the right solution is found. At any point in this process the 2D masks are always in sync with the 3D structure being simulated. This avoids the nasty surprise of things not working after fabrication because the masks did not create the right 3D structures. Mentor calls this their “mask-forward design flow” and it differentiates them from the rest of the design flows out there today.

    The third step in the process is that once the 3D FEA simulations are working well, the Mentor Tanner flow then makes use of a Compact Model Builder tool that employs reduced-order modeling techniques to create behavioral models from the 3D layout FEA simulation results. These behavioral models can then be used in the final system-level mixed signal simulations to re-verify that the part works as designed before doing the tape-out. This step sounds easy, but without the Compact Model Builder tools it can be extremely time consuming and error prone.

    Back to the ingenuity of designers. MEMSIC used this flow to design a 3-axis accelerometer without any moving parts. In the center of the 1mm-square sensor is a heater operating at 100°C above ambient temperature. Around the heater are symmetrically placed thermopiles for reporting temperature in different locations. A thermopile is a series of thermocouples, or temperature-sensing elements, connected in a series to boost voltage. The entire sensor is hermetically sealed in an air/gas cavity, outside of which is analog circuitry for amplification, control, analog-to-digital conversion and, in the 3-axis models, digital compensation/calibration circuitry.

    In the absence of motion, the thermal profile is balanced among the thermopiles. But any motion or acceleration modifies the convection pattern around the heater, such that the thermopiles in the direction of the acceleration become hotter than the others. The analog circuitry interprets the resulting signal changes from the thermopiles as motion and acceleration. Wow! It’s like a miniature storm sensor watching the hot gas moving from one side to another as the device is moved. Amazing.
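A toy numerical model of that sensing principle, with a made-up linear sensitivity constant (not a MEMSIC spec):

```python
# Acceleration skews the hot-gas convection, so the thermopile in the
# direction of motion reads hotter than its opposite. Model that as a
# linear differential temperature per g of acceleration.
SENS = 2.0   # degC of thermopile imbalance per g (hypothetical value)

def axis_signal(accel_g):
    """Differential thermopile temperature (degC) for one axis."""
    return SENS * accel_g

def accel_from_signal(dt_degc):
    """Invert the sensor: recover acceleration in g."""
    return dt_degc / SENS

# At rest the profile is balanced; under 1 g the pair is 2 degC apart.
print(axis_signal(0.0), axis_signal(1.0))
print(accel_from_signal(axis_signal(0.5)))
```

The real part's analog chain does this inversion (plus amplification, A/D conversion, and digital calibration) per axis.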

    All in all, the MEMSIC guys did a heck of a job with this IoT edge device and the Mentor Tanner guys made it relatively easy for them to do using their fused, analog/mixed signal/MEMs flow. As I said, it never ceases to amaze me as to how ingenious designers can be.

    For More details see also:
    White Paper: Fused CMOS IC and MEMs Design for IoT Edge Devices
    White Paper: MEMs JumpStart Series: Preparing for Finite Element Analysis


    QCOM vs Apple and Everyone Else!

    by Daniel Nenni on 09-11-2017 at 7:00 am

    Having worked with Qualcomm in many different capacities during my career I can tell you there are some amazing people in and around that company. I am always positive when people I know are considering working there and QCOM people who leave are an easy reference for other jobs. Unfortunately, I lost respect for the QCOM higher ups a few years back and am not surprised a bit by their current troubles.

    Just one of my personal experiences that I can talk about openly where QCOM fumbled: when writing the book “Mobile Unleashed” we did all the research independently, with detailed footnotes, and sent the first drafts of the respective chapters to ARM, Apple, QCOM, and Samsung. All of the chapters included embarrassing moments, which is the natural course of business. The responses were quite diverse. ARM accepted the draft without question and even contributed the foreword to the book. This was not surprising, since the British are known for their self-deprecating humor and ARM is very British. Apple proofed the text privately and offered factual corrections only. Samsung corporate didn’t respond. QCOM requested that we remove certain embarrassing sections and even hinted at legal repercussions. The most embarrassing one was their idiotic reaction to Apple’s first 64-bit SoC (“64-bit Apple A7 processor is a “marketing gimmick” says QUALCOMM exec“) and we left that in, of course.

    The root of the QCOM problem, in my opinion, was their early dominant market position and the resulting chip on their shoulder. Intel has the same issue (“Our transistors are the best in the world!”) which, in my opinion, has doomed them in markets other than their core business. Intel Custom Foundry is the most glaring example but I digress…

    Unfortunately for QCOM, there are other companies and governments with even bigger egos and much deeper pockets, and that is where QCOM’s problems began. In 2015 QCOM agreed to pay $975M to end the Chinese government’s antitrust investigation. At the time I viewed this as a serious bullet dodged, right up until Apple and partners joined the fray. Junko Yoshida wrote an interesting article that sums it up very nicely: Apple vs QCOM: Who Extorted Who? With this handy timeline graphic:

    And the latest ruling comes from a court in QCOM’s backyard (San Diego):

    Qualcomm Loses Bid to Force Apple Manufacturers to Pay Royalties. Judge’s ruling allows manufacturers to withhold payments to chip supplier as case continues

    Bottom line: QCOM gets a royalty on the entire smartphone rather than on just the chip(s) containing the IP, which is the industry standard. When I started in IP we would charge an upfront licensing fee with NRE but no royalty. That was followed by a hybrid upfront-fee/royalty model based on chip sales. Getting a percentage of the entire device is every IP company’s fantasy, but QCOM is the only case I know of where it became a reality. In fact, IP companies are now hard pressed to get chip royalties in addition to licensing fees, though some still do, of course (ARM).
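To see why the royalty base matters so much, here is some hypothetical round-number arithmetic (not actual QCOM or Apple terms, which are confidential and disputed):

```python
# Compare a royalty computed on the whole device against the
# industry-standard chip-based royalty. All numbers are illustrative.
device_price = 750.0   # whole smartphone
chip_price = 25.0      # modem chip alone
rate = 0.04            # 4% royalty rate, hypothetical

per_device = rate * device_price   # royalty on the entire handset
per_chip = rate * chip_price       # royalty on just the chip

print(f"whole-device royalty: ${per_device:.2f}/unit")
print(f"chip-based royalty:   ${per_chip:.2f}/unit "
      f"({per_device / per_chip:.0f}x difference)")
```

Even at the same nominal rate, the choice of base moves the per-unit payment by more than an order of magnitude, which is what the whole fight is about.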

    I have no idea if Apple directly participated in the antitrust actions against QCOM or if they pushed suppliers to halt royalty payments to QCOM. What I do know is that Apple has experience with legal action against partners (Apple v Samsung), where Samsung not only lost millions, it also lost its biggest customer. So my bet is on Apple to win this one, and it will cost QCOM dearly, absolutely.


    How to protect #IoT devices from software attacks!

    How to protect #IoT devices from software attacks!
    by Diya Soubra on 09-10-2017 at 4:15 pm

    #IoT devices are supposed to function properly in the field for many years without human intervention. Given that we know in advance that each #IoT node is going to be hacked in the future, it is essential that some trusted code be isolated from that hack to restore the #IoT application code to a known good state.
    Continue reading “How to protect #IoT devices from software attacks!”


    Embedded FPGA IP as a Post-Silicon Debugger

    Embedded FPGA IP as a Post-Silicon Debugger
    by Tom Dillinger on 09-08-2017 at 12:00 pm

    The hardware functionality of a complex SoC is difficult to verify. Embedded software developed for a complex, multi-core SoC is extremely difficult to verify. An RTOS may need to be ported and validated. Application software needs to be developed and optimized for performance. Sophisticated methodologies are employed to assess pre-tapeout functional verification coverage — yet, there will still potentially be defect escapes. During silicon bring-up, visibility into the SoC runtime behavior is required, to analyze the interface communications between SoC cores — the need for this signal visibility extends to diagnosis of errors in production hardware in the field, as well. To address this requirement, SoC’s often have added a debug bus and a trace bus to the microarchitecture.

    In this context, the connections to the debug bus consist of a debugger unit responsible for obtaining basic runtime control and evaluating/updating the state of the SoC, using a limited set of IP core signals that provide some degree of observability and controllability over each core’s internal execution state.

    The trace bus connections are defined to provide passive capture of key signal behavior during SoC execution, based upon a triggering event — say, an RTL assertion synthesized into logic. Ideally, tracing allows the defect corner case that escaped pre-silicon functional verification to be captured and analyzed. In perhaps the most direct implementation, the embedded software could itself be written to invoke a trace routine to write out values of specific control/data signals — yet, this may provide insufficient detail, and requires the application and RTOS software developers to add this feature to their coding tasks. And, this form of core-driven trace will be unlikely to capture interactions between cores — e.g., a data coherency issue between local core caches and shared system memory.

    The overhead of implementing an SoC debug bus and trace bus of sufficient scope may be problematic, in terms of design time, validation complexity, and (potentially) additional IP licensing costs — instead, many designs need a low-impact path to implementing SoC debug and trace features.

    I was recently chatting about this debug/trace requirement with Valy Ossman and Tony Kozaczuk at Flex Logix. The discussion was an enlightening one, to be sure. Valy said, “Our eFPGA tiles (and arrays of tiles) are implemented to provide a very large number of input/output signals. For example, the EFLX-2.5K tile (2,520 LUTs) offers 632 inputs and 632 outputs. A maximal 7x7 array of EFLX-2.5K tiles (123K LUTs) supports 4,424 inputs and 4,424 outputs. For many customer applications, there will likely be additional eFPGA signal pin and LUT logic resources available.”

    Tony added, “We would encourage SoC designers to consider adding debug and trace features as a secondary application, to merge with the primary application for which the eFPGA IP is being integrated.”

    A representative block diagram of an SoC including an eFPGA block with debug features is depicted below.

    “Doesn’t the debug and trace capabilities require judicious floorplanning and signal route distribution to an eFPGA block on the SoC?”, I inquired.

    Valy replied, “Yes, but it’s a simpler design task than developing a specific debug unit with the related debug bus and trace bus implementations. The high connectivity available with the eFPGA means designers are not as constrained in signal selection. A huge benefit is that the debug and trace logic is re-programmable. For example, triggering events for tracing are no longer based on a fixed logic design, but could adapt as needed during silicon bring-up, by updating the eFPGA configuration.”

    “Speaking of tracing, what about the corresponding memory storage?”, I asked.

    Valy explained, “There are several options available. The EFLX tile arrays are designed to tightly integrate small memory arrays.” (see the example below)

    “SoC memory outside the eFPGA IP could also be used. Perhaps the most straightforward implementation would be to utilize external memory — the rich multiplexing capabilities of the eFPGA could be used to direct the trace data through a limited number of SoC I/O’s,” Valy continued. (See the figure below, which illustrates a smaller EFLX-100 block serving as the debug unit and providing the multiplexing of trace data out a small set of general-purpose I/O’s.)

    I had not previously considered how to leverage signal connectivity and LUT resources that may be available for a secondary application within an eFPGA block (or multiple eFPGA blocks) on an SoC. The integration of debug and trace features is an excellent idea.

    Valy has written an application note, illustrating the use of eFPGA resources as part of a debug/trace strategy — that app note is available here.

    -chipguy