
Advantages when Designing with FD-SOI

by Daniel Nenni on 02-23-2015 at 7:00 pm

In total we have blogged 41 times about FD-SOI on SemiWiki, which has drawn an audience of 202,960 thus far. Of that traffic, 31.68% came directly to SemiWiki (newsletter), 30.13% came from search, 26.17% from social media (LinkedIn, Facebook, Twitter, Google+, Reddit, etc.), and 11.99% came from other referring sites. The most interesting number here is “search,” which means more than 60,000 visits were from people searching for FD-SOI related topics over the last two years. So, if there is a question in your mind as to when FD-SOI will come to the mainstream semiconductor market, the answer is very soon, absolutely.

Speaking of FD-SOI, there is a workshop this week during the ISSCC conference in San Francisco sponsored by STMicroelectronics. The theme of ISSCC this year is SILICON SYSTEMS — SMALL CHIPS for BIG DATA which fits nicely with FD-SOI because big data will require small ULTRA POWER EFFICIENT chips.

Please note that at 11:20am I will be moderating a panel on “Advantages and Opportunities when Designing with FD-SOI.” I’m working on questions for the panelists, so please let me know in the comments section if you have any. Lunch will be offered to all attendees. Registration is mandatory, free, and open to everyone. I hope to see you there: SOI Consortium FD-SOI and RF-SOI Forum

Friday, February 27th, 2015 in San Francisco, USA

Leading companies are joining the SOI Industry Consortium to organize a forum covering planar FD-SOI as well as RF-SOI technologies. The full-day workshop will be held at the Palace Hotel in San Francisco on Friday, February 27th, 2015, the same week as ISSCC. A broad range of technology and design leaders from across the industry, including Cadence, Ciena, GlobalFoundries, IBM, IMEC, Samsung, STMicroelectronics, Synopsys, and VeriSilicon, will present compelling solutions built on FD-SOI and RF-SOI technologies, including competitive comparisons and product results.

The day will be organized as follows:
8:00am – Registration
8:30am – Welcome Speech – SOI Consortium introduction
8:40am – FD-SOI Workshop – FD-SOI Foundry Offer
  • FD-SOI advantages for applications and ecosystem (by Philippe Magarshack, STMicroelectronics)
  • 28FD-SOI: Cost effective low power solution for long lived 28nm (by Kelvin Low, Samsung SSI)
  • [Title TBA] (by Jamie Schaeffer, GlobalFoundries)
9:40am – FD-SOI Workshop – FD-SOI IP Offer
  • Synopsys FD-SOI IP Solutions (by Mike McAweeney, Synopsys)
  • FD-SOI: Ecosystem and IP Design (by Amir Bar-Niv, Cadence)
10:20am – Break
10:35am – FD-SOI Workshop – FD-SOI Design Experience
  • [Title TBA] (by Naim Ben-Hamida, Ciena)
  • 28nm FD-SOI Design/IP Infrastructure (by Shirley Jin, VeriSilicon)
11:20am – FD-SOI Workshop – Panel Discussion
  • Advantages and Opportunities when Designing with FD-SOI (Moderator: Dan Nenni, SemiWiki)
12:30pm – FD-SOI Workshop – Innovation
  • Driving Profitable Innovation and Rapidly Growing Ecosystems with a Semiconductor Start-up Incubator (by Mike Noonen, Silicon Catalyst)
12:50pm – Morning Conclusion
1:00pm – Lunch
2:30pm – More than Moore Workshop
  • RFSOI: Redefining mobility and more in the front-end (by Mark Ireland, IBM Systems & Technology Group)
  • Towards a Highly-Integrated Front-End Module in RF-SOI using Electrical-Balance Duplexers (by Barend Van Liempd, IMEC / VUB)
  • RF SOI: from Material to ICs – an Innovative Characterization Approach (by Mostafa Emam, Incize)
  • ST H9SOI_FEM: 0.13µm RF-SOI Technology for Front End Module Integration (by Laura Formenti, STMicroelectronics)
4:30pm – Afternoon Conclusions and coming events announcements
5:00pm – Social Event: Cheese & Wine

Hotel location:
Palace Hotel
2 New Montgomery Street, San Francisco, California, 94105 (USA)


CEVA Showcasing Image Processor at MWC Barcelona

by Majeed Ahmad on 02-23-2015 at 1:00 pm

Cameras are becoming ubiquitous thanks to a new wave of applications spanning GoPro-style sports cameras, smart glasses for Internet eyewear, ADAS for car safety, and more. However, while these cameras boast an increasing number of megapixels to enhance image quality, what they increasingly need is more processing power to analyze what they see.

That’s because these smart cameras are meant to collect a lot of data and perform complex analysis tasks related to motion detection, object detection, gesture recognition, augmented reality, and so on. The solution to the processing demands of these advanced software algorithms, according to CEVA Inc., is offloading performance-intensive imaging and computer vision tasks from the device’s main CPU and GPU.

CEVA is showcasing its MM3101 computer vision and image processing platform at the Mobile World Congress (MWC) in Barcelona next week. The CEVA-MM3101 processor core can be used in system-on-chip (SoC) platforms to offload application processors in mobile devices, wearable electronics, connected cars, surveillance applications and home entertainment systems.

CEVA managers claim that the MM3101 platform is ideally suited for the extreme computational needs in sophisticated computer vision applications for its ability to offload the performance-intensive imaging tasks from the CPUs and GPUs to the DSP. That, in turn, dramatically reduces the power consumption of the overall system, a key value proposition in embedded vision functions like gesture recognition, emotion detection and augmented reality.


An outline of applications supported by CEVA-MM3101

The MM3101 IP platform accelerates imaging and vision applications through software libraries, software tools and the ability to offload the CPU through a dedicated framework. CEVA also allows algorithm developers to leverage the MM3101’s programmable architecture to implement their own proprietary software, so they can address unique use cases and differentiate their products.

Android Use Case

According to CEVA, its MM3101 image processor can be easily integrated into an SoC by using the simple interface between the host CPU and the CEVA-MM3101. Here, it presents the Android Multimedia Framework (AMF) as a use case. The AMF feature—which allows Android programmers to access a CEVA DSP core through the application processor on an Android device—can simplify the implementation of vision and imaging apps on Android devices.

The AMF feature allows SoC designers to leverage the performance of the vector DSP directly from the Android environment, offloading the CPU and abstracting away programming and system complexities for software developers. CEVA claims that, with the MM3101 image processing core implemented on an Android device, tasks run through AMF consume a fraction of the power required to run them on a CPU; the MM3101 can reduce power consumption by a factor of 50.

CEVA also presents facial recognition expert nViso as a case in point: leveraging the AMF abstraction layers, nViso was able to port its emotion detection algorithms onto the CEVA-MM3101 platform within a week. The CEVA-MM3101 platform has enabled an emotion detection application for nViso that uses 3D facial imaging technology to interpret human emotions and reactions to stimuli. It does this by tracking hundreds of micro-expressions and face movements to gain a more accurate, real-time understanding of the user’s emotions.


CEVA Android Multimedia Framework

CEVA’s Application Developer Kit (ADK) combines a library of computer vision algorithms with a framework for connecting to the DSP platform through the CPU. That lets application developers write C programs on the CPU that call functions on the DSP. The library contains algorithms needed in the vision applications like gesture recognition, facial tracking, and object detection.

Anatomy of Image Processor

In the CEVA-MM3101 computer vision and computational photography platform, the driving force is the vector processing (VP) engine, which performs filtering and the vector-type operations required for pixel processing. It’s based on a dedicated pixel-processing VLIW/SIMD architecture with a 10-stage pipeline and contains seven different units that can work in parallel, enabling flexible combinations of different instruction types.

The programmable engine can handle 32-byte operations in a single cycle and includes special instructions that can be configured to create proprietary filters for video and image processing. Another key feature is reduced data-transfer bandwidth between the DDR and the core. Here, CEVA uses patented techniques for data folding and on-the-fly processing to make better use of the internal memory structure.


CEVA-MM3101 image processor architecture

The vector processing engine can handle large amounts of data for burst-mode image pipeline requirements as well as HD video encoding and decoding without hurting the overall performance. Moreover, it offers optimized kernels for pre- and post-image processing to ensure that CEVA customers, partners and third-party developers can conveniently develop their own apps.

At the MWC in Barcelona next week, CEVA will showcase the updates on its MM3101 image processing platform, as well as the latest versions of its CEVA-TeakLite-4 for smartphones, CEVA-XC4500 catering to LTE wireless infrastructure applications and CEVA-Bluetooth. The company will demonstrate the latest developments in its DSP cores and IP connectivity platforms at the stand 6A50 in Hall 6.

A brief profile of the CEVA-MM3101 image processing platform can be seen here.

Image credit: CEVA Inc.

Majeed Ahmad is the author of the books Age of Mobile Data: The Wireless Journey To All Data 4G Networks and Essential 4G Guide: Learn 4G Wireless In One Day.


GlobalFoundries 2014: a Year of Change

by Paul McLellan on 02-23-2015 at 7:00 am

GlobalFoundries at the end of 2014 is a very different company from what it was at the beginning of the year.

At the start of 2014, GF was a company with:

  • a CEO in Ajit Manocha who was reputed to be just a safe pair of hands while the company found a new CEO
  • several 200mm fabs in Singapore (the old Chartered fabs) running mature processes, and one small 300mm one
  • a 300mm fab (fab 1) in Dresden, Germany, not capable of running leading-edge processes
  • a fab (fab 8) under construction in Malta, NY for 20nm, 14nm and beyond, scheduled to begin volume production late in the year

They were very late to 28nm, essentially conceding the entire market to TSMC during the first couple of years when the bulk of the money is made. Since everyone agrees that 28nm will be a long-lived process, late is a lot better than never, and the process now seems to be shipping in volume. However, rumors were that their 14nm process development was not going well and was, like 28nm, likely to be late assuming the process eventually yielded acceptably.

Then 2014 started and everything changed.

In January GF appointed a new CEO, Sanjay Jha. His background was at Qualcomm (where he rose to be COO) and Motorola, in the mobile business; since mobile is the largest semiconductor market ever seen, that experience could turn out to be an asset.

Fab 7 in Singapore was upgraded by merging it with the neighboring fab and upgrading everything to 300mm. This is still used for running non-leading-edge processes such as BCD and analog. However, with high-capacity 300mm the economics are much improved, especially for process steps that operate across the whole wafer at once (as opposed to lithography patterning which proceeds a die at a time). Other foundries have also been upgrading their non-leading edge fabs and processes since there is gold in them thar hills.

Next, in April, GF announced that they were licensing Samsung’s 14nm FinFET process and would be a true second source, in the sense that companies (in particular one whose name is a fruit) could go to either or both companies for production.

In October at ARM TechCon I attended a panel session where Kelvin Low of Samsung and Shubhankar Basu of GF presented, and the technology transfer seemed to be on track. Since then there have been rumors about Samsung slipping ramp-to-volume for 14nm but, if anything, that should just make the transfer easier, giving it an extra few months to run test wafers.

Then in September an even more significant deal was announced: GF will acquire IBM’s semiconductor division for a sum of…-$1.5B. That’s right, IBM will pay GF billions of dollars to take it off their hands. GF reckons that they can run the business profitably even though IBM could not, since foundry is their business and they can run other products in the fabs in a way that IBM was not flexible enough to do. Plus they have IBM as a captive customer for a minimum of 10 years.

The IBM deal comes with two fabs: the old non-leading-edge 200mm fab in Burlington, VT, running BiCMOS, RF and all sorts of other esoteric stuff, and the leading-edge 300mm one in East Fishkill, NY. It also comes with a lot of people. With IBM having a huge layoff, there are also rumors that a lot of extra people got stuffed into the major deals just before the layoff, namely the sale of the low-end server division to Lenovo and the semiconductor division to GF. It remains to be seen whether the economics work.

So at the end of 2014 GlobalFoundries had:

  • a new CEO, who (according to people I’ve talked to) has done a good job of making the company much more focused
  • upgraded fab 7 in Singapore to 300mm with a capacity of 50,000 wafers per month (wpm)
  • the old AMD fab 1 in Dresden with a capacity of 80,000 wpm
  • fab 8 in upstate New York with a peak capacity of 60,000 wpm
  • a 14nm process from Samsung
  • IBM’s semiconductor business and a huge capacity in R&D (of course technically the deal hasn’t closed yet so this statement is really jumping the gun)
  • SOI capacity, although I don’t really know if this is of any interest beyond IBM’s server business, which is dependent on it
  • a 300mm fab in East Fishkill although with a capacity of only 14,000 wpm
  • a 200mm fab in Essex Junction VT with a wide portfolio of specialized processes (capacity of about 40,000 200mm wpm)

Quite a transformation in 12 months!


Synchronizer Optimization 101

by Daniel Nenni on 02-22-2015 at 9:00 pm

A webinar presented last week introduced two free aids for evaluating synchronizer Mean Time Between Failures (MTBF). The first, MetaACE LTD, is used to characterize the intrinsic parameters needed to calculate MTBF (tau and Tw). This limited version of MetaACE supports up to 250 circuit nodes, which is enough for a typical C-only-extracted synchronizer netlist. You will also need the transistor model supplied by your foundry, but that is all that is required to get an approximate value for the MTBF at any process corner, supply voltage or junction temperature.

Calculating the MTBF of a synchronizer based on a fully extracted netlist requires the professional version of MetaACE. That tool is typically used before tapeout, but the limited version is useful at an earlier point in the design cycle to compare different synchronizer designs. It can also be used to optimize a design for synchronizer service, where clock-to-Q can be allowed to increase in order to minimize metastability resolving time.
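For readers unfamiliar with how tau and Tw feed into the result, the widely used two-parameter metastability model computes MTBF = e^(tr/τ) / (Tw · fc · fd), where tr is the resolution time allowed, fc the clock frequency and fd the data toggle rate. A quick sketch with purely illustrative numbers (these are invented for the example, not MetaACE output):

```python
import math

def synchronizer_mtbf(t_r, tau, t_w, f_clk, f_data):
    """Classic two-parameter synchronizer MTBF model:
    MTBF = exp(t_r / tau) / (T_w * f_clk * f_data)."""
    return math.exp(t_r / tau) / (t_w * f_clk * f_data)

# Illustrative numbers only: tau = 20 ps, Tw = 30 ps,
# 500 MHz clock, 100 MHz data, and one full clock period
# (2 ns) of metastability resolution time.
mtbf_s = synchronizer_mtbf(t_r=2e-9, tau=20e-12, t_w=30e-12,
                           f_clk=500e6, f_data=100e6)
years = mtbf_s / (365 * 24 * 3600)
print(f"MTBF ≈ {years:.2e} years")
```

Note how sensitive the result is to the exponent: letting tau degrade from 20 ps to 25 ps at a corner cuts the MTBF by a factor of e^20, which is exactly why characterizing tau and Tw at real corners matters.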

Another aid introduced during the webinar was a Public Synchronizer design that can serve as a benchmark to compare with a standard-cell flip-flop you may be planning to use. This Public Synchronizer is a straightforward master-slave circuit that includes a scan circuit for good testability. However, the layout of its regenerating transistors has been optimized for synchronizer service.

If you are contemplating solving a clock-domain-crossing issue with a synchronizer, it might be a good idea to take a look at the Blendics webinar to see if these free aids to design might be useful to you.

A UNIQUE APPROACH
In essence, we have focused our solution on how IP-Cores and other components communicate. Traditionally, IC components communicate synchronously via a global clock that controls each and every individual component and forces them to talk to each other in lock-step. So, we asked the question, how can we find a better way to support more robust communication requirements while not asking design teams to throw out existing IP or having to learn new approaches to designing IP?

OUR ANSWER
A globally asynchronous design methodology where we break the IC design into small, independently operating IP-Cores and then re-connect the cores to each other, allowing each to communicate on its own timescale.

THE “AHA” MOMENT
It was 2004, and three of our founders were attending an international symposium they organized on Clockless Computing (Coordinating Billions of Transistors), at Washington University in St. Louis, Missouri. In the program, leaders in asynchronous computing reviewed future design challenges imposed on IC densities according to Moore’s Law.
We saw that work on asynchronous computing done decades earlier toward the goal of arbitrarily-large, discrete-component computer systems would be relevant again, this time at the microscopic scale. These older clockless techniques could be blended with modern clocked methods to solve the anticipated complexity and reliability challenges and thereby achieve continued Moore’s Law scalability. Bingo! we said to ourselves.

THE FORMATION OF BLENDICS

So, after much discussion and excitement, we determined that we could make a real difference. We brought together an astonishing group of mega-talented people who have each had a significant hand in some of the world’s most impactful technological innovations over the last 50 years, and in 2007, launched Blendics.

Our name “Blendics” can be deconstructed as: Blended Integrated Circuit Systems.


Product Review: Google Chromecast

by Daniel Payne on 02-22-2015 at 1:00 pm

Our household members own both Apple and Android devices, so we wanted a way to share our photos or videos on the Samsung TV. The device we ended up buying is called Chromecast from Google, and it’s a small media streaming device that plugs into an HDMI port on our TV. We’ve had Chromecast for about six weeks now.


Continue reading “Product Review: Google Chromecast”


Simply the Highest Performing Cortex-M MCU

by Eric Esteve on 02-22-2015 at 11:30 am

If you target high-growth markets like wearables (sport watches, fitness bands, wearable medical), industrial (mPOS, telematics, etc.) or smart appliances, you expect a power-efficient MCU delivering a high DMIPS count. We are talking about systems requiring a low bill of materials (BoM), both in terms of cost and device count. Using an MCU (microcontroller) rather than an MPU (microprocessor) minimizes power consumption: a device like the SAM S70 runs in the 300 MHz range, not the gigahertz range, while delivering a 1500 CoreMark score. In fact, it’s the industry’s highest-performing Cortex-M MCU, but the device is still a microcontroller, offering multiple interface peripherals and the related control capabilities: 10/100 Ethernet MAC, an HS USB port (including the PHY), up to 8 UARTs, two SPI, three I2C, SDIOs, and even interfaces to Atmel WiFi and ZigBee companion ICs.

This brand new SAM S/E/V 70 32-bit MCU fills the gap between the 32-bit MPU families based on the ARM Cortex-A5 processor core, delivering up to 850 DMIPS, and the other 32-bit MCUs based on ARM Cortex-M cores. Why develop a new MCU instead of using one of these high-performance MPUs? Simplicity is the first reason: the MCU does not require an operating system (OS) like Linux; a simple RTOS or even a scheduler is enough. Using a powerful MCU helps match increasing application requirements, like:

  • Network Layers processing (gateway IoT)
  • Higher Data Transfer Rates
  • Better Audio and Image Processing to support standard evolution
  • Graphical User Interface
  • Last but not least: Security with AES-256, Integrity Check Monitor (SHA), TRNG and Memory Scrambling

Building an MCU architecture probably requires more human intelligence to fulfill all these needs in a smaller and cheaper piece of silicon than building an MPU! Just look at the SAM S70 block diagram:

The memory configuration is a good example. Close to the CPU, implementing 16 KB instruction and 16 KB data caches is well-known practice. On top of the caches, the MCU can access Tightly Coupled Memories (TCM) through a controller running at CPU speed, i.e. 300 MHz. These TCMs are part of (up to) 384 KB of SRAM, implemented in 16 KB blocks, and this SRAM can also be accessed through a 150 MHz bus matrix by most of the peripheral functions, either directly through DMA (HS USB or camera interface) or through a peripheral bridge.

The best MCU architecture should provide maximum flexibility: an MCU is not an ASSP but a general-purpose device, targeting a wide range of applications. The customer benefits from this flexibility when partitioning the SRAM into system RAM, instruction TCM and data TCM, as you can see below:

As you can see, raw CPU performance efficiency can be increased by a smart memory architecture. But in terms of embedded Flash memory, we come back to a basic rule: the more eFlash available on-chip, the easier and safer the programming. The SAM S70 (or E70) family offers 512 KB, 1 MB or 2 MB of eFlash… and this is a strong differentiator from the direct competitor, which offers only up to 1 MB of eFlash. Nothing magic here, as the SAM S70 is processed on 65nm while the competition is lagging on 90nm. Targeting a more advanced node is good for embedding more Flash; it’s also good for CPU performance (300 MHz is obviously better than 200 MHz), and it’s finally very positive in terms of power consumption.

In fact, Atmel has built a four-mode strategy to minimize overall power consumption:

  • Backup mode (VDDIO only) with low power regulators for SRAM retention
  • Wait mode: all clocks and functions are stopped except some peripherals can be configured to wake up the system and Flash can be put in deep power down mode
  • Sleep mode: the processor is stopped while all other functions can be kept running
  • Active mode

If you think about IoT, the SAM S70 is well suited to IoT gateway applications, but this is only one of the many potential usages of a device able to support wearable (medical or sports), industrial or automotive designs (in the automotive case it will be the SAM V70 MCU, which offers EMAC and dual CAN capability on top of the S70).

Product line presentation on Atmel portal: SAM

or:
http://www.atmel.com/products/microcontrollers/arm/sam-s.aspx

From Eric Esteve from IPNEST


Analyzing Power Nets Early and Often, a New White Paper

by Paul McLellan on 02-22-2015 at 7:00 am

One of the big challenges in designing ICs today is designing a robust power net capable of delivering necessary current levels to all areas of the die. Getting it wrong can, of course, lead to circuit failures that range from non-functional silicon, through intermittent performance and functional problems, to early EM-driven failures. Designers carefully perform accurate power net analysis before tapeout. However, finding problems this late in the design cycle can result in schedule slips if anything more than a trivial fix is required.

Large SoCs have complex and widely distributed power nets, but since most of them are constructed by automated place and route, they tend to have fewer late issues. They are also less amenable to early analysis, since every time the design is re-placed pretty much everything changes. Furthermore, with 10 or more layers of metal, some of which are very low resistance, the problem is just not so acute.

But analog/mixed-signal ICs, memories and image sensors have many fewer layers of metal, and sometimes these are narrower (by design necessity) and of lower quality (higher resistance) materials. In addition, often these designs use complex non-orthogonal routing of power nets, which can complicate extraction and analysis for some verification tools. Obviously, eventually the power has to get down to the transistors and as a result power often has to be distributed at least partially on low levels of metal. But these low levels of metal are narrower and so resistance is more of an issue.

This is where Silicon Frontline’s P2P (which stands for “point to point”) comes in. It allows for extremely fast analysis of power nets very early in the design. It can even start to give preliminary analysis before the layout is complete. It does an accurate calculation of the resistance between any two points or groups of points (hence the name) with various resistance-map displays that allow the designer to quickly zoom into the issues where the resistance is very high (just look for the bright red regions in a sea of blue).

The tool is very easy to configure, very fast and has essentially unlimited capacity. Where the tool really shines is on analog/mixed-signal, memories, image sensors and other designs where the power nets, because of their complexity and all-angle shapes, often require manual intervention. The resistance mapping mode of P2P can be used on incomplete layouts, or during layout development in the architecture and partitioning stage of design. And then, when the design is complete and P2P resistance mapping has been used to ensure that all power nets are low resistance and any simple problems have been fixed, the designer can perform detailed IR drop and electromigration (EM) analysis with a good candidate design. If all resistances are low then IR drop will be low (or lower) by definition and typically EM is less of an issue too, since low resistance metal tends to be wider.

A new white paper is now available that covers P2P in detail, including an example of its use to track down some errors in the design and take a power network from a resistance of 30 ohms, way too high, down to just over 3 ohms in a matter of minutes.
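To see why 30 ohms is "way too high," a back-of-the-envelope Ohm's-law check helps; the rail voltage and branch current below are illustrative assumptions of mine, not figures from the white paper:

```python
def ir_drop_mv(current_a, resistance_ohm):
    """Ohm's law: voltage drop across a power-net path, in mV."""
    return current_a * resistance_ohm * 1000

supply_mv = 1200          # assumed 1.2 V rail
load_ma = 10              # assumed 10 mA branch current
for r in (30.0, 3.3):     # roughly the before/after resistances cited
    drop = ir_drop_mv(load_ma / 1000, r)
    print(f"{r:5.1f} ohm -> {drop:6.1f} mV drop "
          f"({100 * drop / supply_mv:.1f}% of the rail)")
```

At these assumed numbers, 30 ohms burns a quarter of the supply rail in IR drop, while the repaired net loses under 3%, which is why driving the resistance down first makes the later IR-drop and EM analysis so much more likely to pass.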

The white paper can be downloaded here.


IoT Sensor Node Designs Call for Highly Integrated Flows

by Tom Simon on 02-21-2015 at 7:00 pm

Applications for IoT sensors are becoming more sophisticated, especially for industrial usage. Building optimal sensors for different applications requires multi-domain design, optimization and verification flows. The sensor devices are usually MEMS, and as such have electrical properties that need to be tailored to the analog circuitry they are connected to. Many MEMS devices are not completely passive: they often have drive systems to keep them in their most linear range of operation. For example, an accelerometer will have two comb capacitors: one for sensing, the other to control the proof mass.

Cadence, Coventor and ARM recently held a webinar that showed how many important considerations in designing an industrial IoT sensor node can be addressed. The full session is available here.

In these designs the analog circuitry needs to be designed and optimized at the same time as the MEMS structures. Chris Welham, Worldwide Applications Engineering Manager at Coventor, points out in the webinar that Coventor offers its MEMS+ product as a vehicle for building 3D designs of MEMS elements in conjunction with circuit design tools. The key to making this effective is that after the MEMS designer creates a device, they can export it to Cadence, where it is represented as a parametric simulation model, symbol and PCell. The parameters exposed to the circuit designer are specified when the MEMS+ model is generated. This means that the circuit designer can alter specific parameters of the MEMS device easily and independently. In the webinar Cadence showed how Virtuoso ADE GXL can be used to concurrently optimize the circuit and MEMS parameters to meet the system design spec. The PCell produced by MEMS+ generates the necessary layout for mask generation.

IoT sensors need to be compact and rugged, and must meet battery-life constraints. These needs often drive the specific packaging configuration for the various SoCs and MEMS chips in the unit. Designers can utilize BGA, bond wires and TSVs in an assortment of configurations that can include stacked die with a silicon interposer. In the webinar Ian Dennison, Solutions Group Director at Cadence, shows examples of each of the 3D-IC alternatives and highlights design and verification aspects of each.

For designs with bond wires, stacked die present special challenges. Manufacturing and coupling-noise considerations play a major role in wire placement and shape. Cadence SiP allows wire profiles to be defined and then viewed in 3D. The webinar showed several examples where wire profiles need to be configured to provide adequate clearances and avoid hazards like overhanging shelves or neighboring wires.

TSVs offer many advantages over bond wires, but working with them adds complexity to the chip design process. On the plus side, TSVs reduce overall system cost; on-chip, they save routing resources that would otherwise be needed to get signals to the chip boundary, and they lower parasitic capacitance and inductance. However, the chip floorplan must account for their location. In the webinar Cadence discussed how Encounter and Virtuoso let designers work with TSVs.

Tim Menasveta, CPU Product Manager at ARM, went last but covered the critical question of how creating a sensor hub in the IoT sensor device can help it meet its many design requirements. Without a hub, all the raw sensors would be transmitting to the aggregation point continuously, which wastes power and bandwidth. With a local processor, the IoT sensor node can instead decide when and what data should be sent. Sensor fusion is also extremely important. Many of us are familiar with the necessity of combining the raw inputs from a gyroscope and accelerometer to obtain accurate real-world results, and temperature is an important input for most sensor interpretation. Sensor fusion is also useful for dealing with vibration or the effects of nearby iron objects when calibrating a compass.
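The gyroscope/accelerometer fusion mentioned above is often introduced via a complementary filter: integrate the gyro for smooth short-term tracking, and lean on the accelerometer-derived tilt to cancel the gyro's long-term drift. A generic sketch, not tied to any ARM or Cadence library; all the sensor readings here are invented:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (good short-term) with the
    accelerometer-derived angle (noisy, but drift-free long-term)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated scenario: device held still at a 10-degree tilt; the gyro
# reads a constant 0.5 deg/s drift bias, the accelerometer reads 10 deg.
angle = 0.0
for _ in range(500):                       # 5 s at a 100 Hz sample rate
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=10.0, dt=0.01)
print(f"estimated tilt: {angle:.1f} deg")  # settles near the true 10 deg
```

A sensor-hub MCU running a loop like this locally can report only the fused angle (or just meaningful changes to it) to the aggregation point, which is exactly the power-and-bandwidth argument made above.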

The new Cortex-M7 boasts improved DSP and floating-point units compared to its predecessor, the Cortex-M4. The M7 is ideal for bare-metal code, while larger Cortex-A-class processors are more suitable for higher-level OSes. There is also an optional double-precision floating-point unit available for the M7. To facilitate development of designs using the Cortex-M7, Cadence and ARM have collaborated on an implementation reference methodology built on TSMC’s 40LP process. This design uses physical IP from the ARM Physical IP Division. It is a low-power design with support for power gating.

The webinar pulled together a wide range of technology, all of which is necessary for putting together leading-edge IoT sensor-based designs. For a more in-depth review of the technology, I suggest following the link above and viewing the session.


The PTAB Inter Partes Review process: Danger, Will Robinson

by Scott Griffith on 02-21-2015 at 7:00 am

Companies with significant investment in their patent portfolios have recently faced a harsh reality: their intellectual property has become a collection of papers with large targets on them. Taking aim is the US Patent and Trademark Office’s Patent Trial and Appeal Board (PTAB), and recent figures on dismissed claims show that the Board’s aim is not only true, but often deadly.

Since the September 2012 onset of the Leahy-Smith America Invents Act (AIA), the PTAB has earned a troubling reputation of being a “death squad” for patents that come under its review.

In the article “Inter Partes Review Initial Filings of Paramount Importance: What Is Clear After Two Years of Inter Partes Review under the AIA”, Michael McNamara and Patrick Driscoll of Mintz Levin extracted some jarring statistics from the first 24 months of IPR proceedings. Here is what the raw data of PTAB actions as of 9/4/14 showed. Of 11,046 claims in 348 petitions presented:

· 5,045 claims were challenged; 6,001 were not challenged
· 3,344 claims (66% of those challenged) were instituted, from 237 petitions
· 1,701 claims (34% of those challenged) were not instituted

Of the 3,344 claims instituted:

· 999 claims were found unpatentable
· 606 claims were cancelled or disclaimed during the proceedings
· 1,739 claims were found patentable

On the surface, the results would appear to represent a 48% chance of a given claim failing to survive once the review is instituted. However, as McNamara and Driscoll point out, this is somewhat misleading, and the results for patent owners may be even grimmer. Taken on a per-petition basis, 66 proceedings reached decisions on patentability. That is a small number for statistical analysis, but the results should still give patent owners and their attorneys pause. Of the 66 cases:

· 6 cases resulted in all claims found patentable (9%)
· 10 cases resulted in a mix: some claims patentable, some unpatentable (15%)
· 50 cases resulted in all claims found unpatentable (76%)

The PTAB has actively tried to combat the perception that it has an agenda to find claims unpatentable. However, the unavoidable conclusion is that few patents survive completely unscathed once a review is instituted.

In the months following the period that McNamara and Driscoll studied, the pace of filings has increased. In fact, December 22, 2014 saw an all-time high of 28 petitions filed in a single day. The ongoing statistics to February 15, 2015 show the continuing patterns, from 20,206 claims in 617 petitions:

· 9,048 claims were challenged; 11,158 were not challenged
· 6,114 claims (68% of those challenged) were instituted, from 425 petitions
· 2,934 claims (32% of those challenged) were not instituted

Of the 6,114 claims instituted:

· 2,176 claims were found unpatentable
· 893 claims were cancelled or disclaimed during the proceedings
· 3,045 claims were found patentable

Source: USPTO

At the time of writing, results on a per-petition basis from 9/4/2014 to 2/15/2015 have not yet been compiled. However, the chance of a given claim failing to survive remains on the order of 50 percent. The initial evidence suggests that the trend remains essentially unchanged at best, and could even be worsening. In the final analysis, the chances of a given patent surviving unscathed will very likely continue to be on the order of 10 percent at best. Poor odds, indeed.
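The claim-level loss rates above follow directly from the quoted USPTO figures. As a quick sanity check (a sketch, using only the numbers from the bullet lists above), this Python snippet reproduces the roughly 48% and 50% figures:

```python
# Claim-survival arithmetic from the USPTO figures quoted above.
periods = {
    "through 9/4/2014":  {"instituted": 3344, "unpatentable": 999,  "cancelled": 606},
    "through 2/15/2015": {"instituted": 6114, "unpatentable": 2176, "cancelled": 893},
}

for label, p in periods.items():
    lost = p["unpatentable"] + p["cancelled"]   # claims that failed to survive
    print(f"{label}: {lost}/{p['instituted']} instituted claims lost "
          f"({lost / p['instituted']:.0%})")
```

Running this prints a loss rate of 48% for the first period and 50% for the second, matching the article's figures.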

The news is of particular importance in computer technology and, specifically, the semiconductor industry. As of February 5th, 2015, the vast majority of AIA petitions for fiscal year 2015 — 68.3%, or 432 petitions — addressed TCs 2100, 2400, 2600, 2800, or electrical, electronic, and computer areas.

Furthermore, we at Micro Methods have noted an increased number of petitions filed by major semiconductor manufacturers against Non-Practicing Entities (commonly known as “patent trolls”, thanks to Intel) since roughly October of 2014. It would appear that some of the major players in our industry have begun to see that the IPR process might well be precisely the blunt instrument needed to respond to a lawsuit filed against them by a patent troll. This can be very effective as a strategic move, as quite often the initial court case can be stayed pending the results of any PTAB action.

This trend certainly deserves further study. As McNamara and Driscoll conclude, “…a petitioner’s best strategy is to ensure that a review is instituted, while a patent owner’s best defense against an IPR is to make certain that one is not instituted in the first place.”

The message has become clear to those subject matter experts who support Inter Partes review efforts. As Cyrus Morton and David Prange of Robins Kaplan LLP point out in their article “Surviving Inter Partes Review: Good Experts Are Key”, experts must redouble their efforts to provide unassailable detail and basis for any opinions, and be prepared to comprehensively rebut each and every argument of the opposing side, whether retained by the petitioner or the owner. The stakes are high, and the PTAB has been very consistent in its willingness to find less-than-perfect expert testimony unpersuasive, often with disastrous results for the patent owner.

Scott Griffith is a Member of Micro Methods LLC, a group of Subject Matter Experts committed to excellence and unerring accuracy in providing semiconductor focused Intellectual Property services for our multinational client base.


SPIE Advanced Lithography Preview

SPIE Advanced Lithography Preview
by Scotten Jones on 02-20-2015 at 1:00 pm

Next week is the SPIE Advanced Lithography Conference in San Jose, the premier conference for advanced lithography used to produce state-of-the-art semiconductors. Last year I blogged after the conference about some of the key points I heard at the conference and this year I plan to do the same.

Last year’s blog is available HERE:

One of the things that really struck me last year was how pessimistic the general mood was about EUV and how optimistic the people I spoke to were about the extendibility of multi-patterning. In the last year it seems to me that EUV has picked up some momentum, so I am very interested to see what the general tone is about EUV and multi-patterning this time.

I have been going through the program for the conference looking for sessions I want to attend.

Monday morning and early afternoon have some interesting sessions on EUV and multi-patterning that look like they will address the issues I mentioned above. There are also some interesting etch papers in the afternoon sessions.

Tuesday morning will see EUV sources addressed. The output of EUV sources is a key gating item for high-volume usage of EUV, so this will be an important session. Nikon will also discuss their non-EUV roadmap; Nikon is no longer working on EUV and is instead focused on 450mm ArFi. There are also interesting sessions on directed self-assembly (DSA) and negative tone develop (NTD). I am hearing that DSA has been used to make DRAMs and may be close to at least partial implementation, so it will be interesting to see what is presented. NTD is also a technique seeing growing usage for ArFi layers with dark-field masks. Tuesday afternoon will feature more interesting sessions on EUV and DSA.

Wednesday morning includes more interesting DSA and etch presentations as well as multi-patterning presentations. Wednesday afternoon features a couple of interesting papers on multi-beam e-beam lithography.

Thursday wraps the conference up with additional papers on EUV systems and processes for sub-10nm resolution.

Stay tuned for my post conference blog on what I see and hear at the conference.

About SPIE: SPIE, the international society for optics and photonics, was founded in 1955 to advance light-based technologies. Serving more than 256,000 constituents from approximately 155 countries, the not-for-profit society advances emerging technologies through interdisciplinary information exchange, continuing education, publications, patent precedent, and career and professional growth. SPIE annually organizes and sponsors approximately 25 major technical forums, exhibitions, and education programs in North America, Europe, Asia, and the South Pacific. SPIE provided $3.4 million in support of education and outreach programs in 2014.