
Samsung’s Luck with Gear S2
by Pawan Fangaria on 10-18-2015 at 4:00 pm

The wave of new and improved smartwatches started this year. Apple launched its first smartwatch in April. Several other players, including Motorola, LG, ASUS, and Huawei, have launched new smartwatches this year. Sony launched its latest ‘SmartWatch 3’ in October last year. Rising consumer electronics brands like Xiaomi and traditional electronics brand Casio are also planning to launch their own smartwatches. And now prominent names from the Swiss watch market, including Tag Heuer and Swatch, will be launching smartwatches with the so-called “Swiss quotient” very soon.

Apple, with unofficial estimates of 3 to 5 million Apple Watch units sold so far, has certainly brought momentum to the smartwatch market and now holds a leadership position second only to Fitbit in the overall wearable segment. Apple’s arch rival Samsung unveiled its latest smartwatch, the Gear S2, last month at IFA 2015. I assume Samsung has by now released more smartwatch models than any other company in the wearable segment. In 1999 Samsung launched the first commercial smartwatch, the SPH-WP10, which was actually a watch phone. It has been followed by several models since. Although there were improvements, the initial smartwatch models were so bulky that they gave the impression of a phone hanging from your wrist. Lately, with continued improvement in its smartwatch models, Samsung earned the lead in the smartwatch business with its Gear series; it sold ~1.2 million Gear watches in 2014. However, that leadership was short-lived after the arrival of the Apple Watch. Although the Apple Watch’s sales figures are impressive, the product definitely needs further improvement. Samsung has filled some of those gaps in the Gear S2 while maintaining points of parity with the Apple Watch. It is also quite different from Samsung’s earlier Gear watches. How the Gear S2 fares in the market remains to be seen, but an initial impression from its features says it has an edge over the Apple Watch.


With a round face and metal band, it has a traditional look. It is a stand-alone watch that performs phone functions without needing a smartphone to pair with. It has an on-screen QWERTY keyboard for writing and sending text messages. There is 3G connectivity, and you can make phone calls and send e-mails. The 1.2” circular screen has a display resolution of 360 x 360 and a pixel density of 302 pixels per inch. Other key features of the Gear S2 include smart car keys, smart home control, mobile payment, and so on.

Like the Apple Watch, the Gear S2 has several apps, including weather information, heart-rate monitoring, sports tracking, stopwatch, time zones, and so on. The app icons are rounded and elegant, and they go well with the round face. The apps run natively on the watch without needing your smartphone, eliminating any lag due to communication with your phone. This is a downside of the Apple Watch, where apps need to communicate with your iPhone.


The circular face is very easy to navigate. You can rotate the bezel to control the touch screen: the display can be switched to notifications or apps by simply and smoothly turning the bezel to the left or right instead of swiping on that small screen. The Gear S2 comes preloaded with 26 third-party watch faces, including ones made by CNN, Bloomberg, and others.

The Gear S2 runs Samsung’s Tizen OS, customized for the watch. The device is compatible with any Android device running KitKat 4.4 or a higher version of the OS. It has a dual-core Exynos processor running at a 1 GHz clock, and comes with 4GB of storage and 512MB of RAM. The device is equipped with a barometer, an accelerometer, a gyroscope, and sensors for ambient light, heart-rate monitoring, and so on. A single battery charge lasts 2-3 days.

The Gear S2 series has three models – the Gear S2 Classic with a traditional look, the Gear S2 3G for techies, and the basic Gear S2. The models have been tested in various conditions: wet climates, dusty environments, submersion 1 meter deep in water for 30 minutes, and so on.

LG’s Urbane Luxe, Motorola’s 2nd-generation Moto 360, and the Huawei Watch are other similar models, also unveiled during IFA 2015. Sometime later I will talk about them as well. While competition has risen beyond the Apple Watch for Samsung, I must say the Gear S2 has good potential to bring back Samsung’s fortunes in the smartwatch business. Smartwatches hold a major portion of the wearable market segment. According to IDC, the wearable segment is expected to grow beyond 150 million units by the end of 2019, and more than 80% of that will be wrist-wear.


Extendible Processor Architectures for IoT Applications
by Tom Dillinger on 10-17-2015 at 7:00 am

The Internet of Things has become a ubiquitous term, referring to a broad (and somewhat ill-defined) set of electronic products and potential applications – e.g., wearables, household appliances and controllers, medical applications, retail applications (signage, RFID), industrial automation, machine-to-machine communication, automotive control and communication, agricultural monitoring, etc.
Continue reading “Extendible Processor Architectures for IoT Applications”


IoT chipsets and enterprise emulation tools
by Don Dingee on 10-16-2015 at 12:00 pm

When most people talk about the IoT, it is usually all about wearables-this and low-power-that – because everyone is chasing the next huge consumer post-mobile device market. Mobile devices have provided the model. The smartphone is the on-ramp to the IoT for most consumers, with Bluetooth, Wi-Fi, and LTE, and maybe a dozen or so sensors in a personal cluster at a time.

That model represents just the edge. IoT applications, especially ones designed for the industrial IoT, have two more tiers. In the middle is the multi-protocol gateway, capable of handling streams of incoming data from tens, hundreds, or even thousands of sensors under a wide variety of protocols. These gateways usually smash everything into an IP-compatible blend, in real-time, for further analysis.

Behind both the smartphone and the multi-protocol gateway is some kind of infrastructure, the third IoT tier. Much of the “cloud” application infrastructure is designed for human interaction through a web services protocol, often using a RESTful programming model. A short delay, no more than a second or two, is an acceptable response for most applications.

IoT infrastructure is usually real-time, where a second to respond might as well be never, and unpredictable latency in an IoT network can be an incredibly bad thing. To handle real-time operations, many IoT architectures are moving to a hybrid cloud model, with a converged modular server running multiple cores to handle incoming data and analytics.

Smaller chips are gradually improving for the edge, but the problem of optimizing bigger chips for the IoT gateway and infrastructure tiers looms very large.

Mentor Graphics dives into the deeper end of the IoT pool with a brand-new white paper discussing how large-scale Veloce emulation systems can help in the design and verification of IoT chipsets. They come at the problem from an evolved mobile SoC or network processor (NPU) point of view, which is a valid point of reference. Their discussion of protocols centers on IP-level interconnect, not IoT frameworks such as AllJoyn, MQTT, Thread, and others – I’d like to see them extend their concept to that next level, perhaps with an IoT ecosystem partner.


Nonetheless, Mentor touches on some important issues in IoT chipset design and verification. I’d like to mention two more of their five main points – power consumption and software.

As Apple is discovering with its A9 chip from dual sources, system-level power management is becoming a very big deal. Mentor claims Veloce can run the billions of cycles needed to expose power issues, starting early in the design and continuing to full-up verification. Veloce also integrates with third-party tools such as ANSYS for even more advanced power analysis. Power management at remote gateways, or even on a per-core basis in a converged modular server or Spark node in the infrastructure, may prove particularly tricky.

Software for the IoT is another issue entirely. Most co-verification efforts focus on Linux or Android as the platform. IoT operating systems and software may be completely different, running compact RTOS platforms at the edge, OSGi on gateways, and an advanced language such as Lua or Rust on the infrastructure to support data management and analytics. Without getting to the real operating system or language in use for an IoT chip, issues may go entirely unnoticed. Veloce is virtualized, allowing applications to run over Veloce OS, which abstracts emulator and debug details from the application. Again, I’d like to see more IoT software specifics from Mentor in the Veloce environment, but their idea of virtualization is headed in the right direction.

The white paper concludes with a discussion of how Veloce is an emulation data center, rather than just a piece of engineering lab equipment. An Enterprise Server allows job management and scheduling over LSF, relieving a major weakness of earlier hardware emulator implementations. This allows multiple projects to be configured on a single Veloce emulator of sufficient capacity.

To download and discuss the Mentor paper, head here:

SoC Verification for the Internet of Things

I think it is great that we are starting to explore these ideas. While the building blocks of the IoT are very similar to those used in mobile chipsets, there are going to be implementation nuances that need addressing in optimized IoT designs. Simulators will not get us where we need to go in terms of understanding the behavior of multi-protocol gateway and infrastructure chipsets that must be highly reliable under any real-time traffic scenario. Hardware emulation makes a stronger case for IoT chipset verification, if we can see some actual software and protocols applied.

More articles from Don…


Is the IP market expected to decline by 2020?
by Eric Esteve on 10-16-2015 at 7:00 am

To answer this question, I will share results about interface IP, more precisely the Top 5. The Top 5 protocols – USB, PCIe, Ethernet, DDRn, and MIPI – are part of the interface IP market, and each of them has been characterized by a very strong growth rate. If you compute the actual numbers for 2010 to 2014, the result is a compound annual growth rate (CAGR) of 14%.
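For readers who want to sanity-check that number, a CAGR over 2010-2014 spans four annual steps, so CAGR = (value_2014 / value_2010)^(1/4) – 1. Here is a minimal Python sketch of that calculation; the revenue figures are placeholders chosen for illustration, not the survey’s actual numbers.

```python
# CAGR sketch: the revenue figures below are placeholders, not the actual
# interface-IP survey numbers; they are chosen to illustrate a ~14% CAGR.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` annual steps."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

revenue_2010 = 100.0                         # placeholder, $M
revenue_2014 = revenue_2010 * (1.14 ** 4)    # what a 14% CAGR implies by 2014

print(f"CAGR 2010-2014: {cagr(revenue_2010, revenue_2014, 4):.1%}")  # -> 14.0%
```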
Continue reading “Is the IP market expected to decline by 2020?”


Learning about 3D Integration of ICs and Systems
by Daniel Payne on 10-15-2015 at 4:00 pm

We blog a lot about Moore’s Law, and even “More than Moore,” where 3D integration of ICs and systems is used to get lower product costs. One big challenge with 3D integration of ICs is that most EDA software was really intended only for 2D or 2.5D structures. Over the past several years, new EDA tools have been developed that specifically address the 3D nature of new IC design and packaging. If you’re interested in 3D ICs, then consider attending a webinar from Silvaco on Thursday, October 22nd to get some new insight into two specific aspects: partitioning and DRC for 3D systems and ICs. If you’re an IC engineer or manager looking for better ways to implement and analyze 3D designs, then here’s a summary of what to expect:

  • Entry barrier: Difficulties facing 3D IC and heterogeneous designers
  • EDA concept: Rationale behind the novel 3D space partitioning software and its integration with the 3D DRC tool
  • Design targets: Concept of weighted, formulated design penalties (see the sketch after this list)
  • Design conflicts: Co-optimization of multi-physics criteria
  • Speed: High abstraction level for very fast simulations of complex orthogonal design criteria
  • Manufacturability: How DRC violations in 3D space can be easily visualized and corrected
  • Case studies – 3D partitioning: How to obtain an optimal partitioning and placement of the blocks composing a 3D system in a very short time
  • Case studies – 3D DRC: How to apply DRC checks in 3D space to feed subsequent full routing and final verification using traditional functional simulators
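
To make the notion of weighted design penalties a bit more concrete, here is a toy Python sketch of how a 3D partitioner might score candidate assignments of blocks to tiers. The penalty terms, weights, and block data are invented for illustration; this is not Silvaco’s actual formulation.

```python
# Toy cost model for 3D partitioning: each candidate assignment of blocks to
# tiers is scored as a weighted sum of penalties (area imbalance, inter-tier
# connections, power density). Weights and data are illustrative only.
from itertools import product

blocks = {            # name: (area in mm^2, power in mW)
    "cpu":  (4.0, 300.0),
    "gpu":  (6.0, 450.0),
    "sram": (3.0, 80.0),
    "io":   (2.0, 120.0),
}
nets = [("cpu", "sram"), ("cpu", "gpu"), ("gpu", "sram"), ("io", "cpu")]
weights = {"area_imbalance": 1.0, "inter_tier_nets": 2.0, "power_density": 0.01}

def cost(assign):
    tier_area, tier_power = [0.0, 0.0], [0.0, 0.0]
    for blk, tier in assign.items():
        tier_area[tier] += blocks[blk][0]
        tier_power[tier] += blocks[blk][1]
    crossings = sum(1 for a, b in nets if assign[a] != assign[b])
    return (weights["area_imbalance"] * abs(tier_area[0] - tier_area[1])
            + weights["inter_tier_nets"] * crossings
            + weights["power_density"] * max(tier_power))

# Exhaustive enumeration of 2-tier assignments is fine at this toy scale; a real
# partitioner would search a much larger space with more criteria.
best = min((dict(zip(blocks, combo)) for combo in product((0, 1), repeat=len(blocks))),
           key=cost)
print(best, round(cost(best), 2))
```

A real tool would add thermal, timing, and manufacturability penalties, but the weighted-sum structure is the same basic idea.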

Let me start by showing you a 3D design flow prototype. In the upper-left corner is our top-level schematic of a SiP (System in Package) design. At the top is the block and package library, so the challenge is to figure out how to place and partition each block in a 3D design efficiently while meeting cost goals. Inside the gold box is where all of the partitioning iterations are performed automatically, while giving you feedback about the cost of building each 3D instance. In the lower left is the 3D DRC function, where you would find out whether your selected 3D design passes all of the Design Rule Checks for manufacturing.

(Figure: 3D DRC violations)

This webinar is presented by Stefano Pettazzi, who earned his MS degree in EE from the University of Pavia, Italy. Stefano has been with Silvaco since 2012 and has 15 years of experience with both EDA and microelectronics companies. The webinar runs from 4PM to 5PM (BST) and registration is required. If you live in a different timezone, it’s still OK to register for this webinar and then receive an email link to the archived video so that you can view it at a more convenient time.


Our Own Cadence Amongst the Best Multinational Workplaces!
by Daniel Nenni on 10-15-2015 at 12:00 pm

There were some very happy faces around MemCon this week for a variety of reasons. Paul McLellan was smiling because he now works full time for Cadence and has the best medical benefits ever and of course I was smiling because there was free food!
Continue reading “Our Own Cadence Amongst the Best Multinational Workplaces!”


Wafer-Level Chip-Scale Packaging Technology Challenges and Solutions
by Tom Dillinger on 10-15-2015 at 7:00 am

At the recent TSMC OIP symposium, Bill Acito from Cadence and Chin-her Chien from TSMC provided an insightful presentation on their recent collaboration to support TSMC’s Integrated Fan-Out (InFO) packaging solution. The chip and package implementation environments remain quite separate, and the issues uncovered in bridging that gap were subtle – the approaches that Cadence described to tackle these issues are another example of the productive alliance between TSMC and its EDA partners.

WLCSP Background
Wafer-level chip-scale packaging was introduced in the late 1990s and has evolved to provide an extremely high-volume, low-cost solution.

Wafer fabrication processing is used to add solder bumps to the die top surface at a pitch compatible with direct printed circuit board assembly – no additional substrate or interposer is used. A top-level thick metal redistribution layer is used to connect from pads at the die periphery to bump locations. The common terminology for this pattern is a “fan-in design”, as the RDL connections are directed internally from pads to the bump array.


[Ref: “WLCSP”, Freescale Application Note AN3846]

WLCSP surface-mount assembly is now a well-established technology – yet, the fragility of the tested-good silicon die during the subsequent dicing, (wafer-level or tape reel) pick, place, and PCB assembly steps remains a concern.

To protect the die, a backside epoxy can be applied prior to dicing. To further enhance post-assembly attach strength and reliability, an underfill resin with an appropriate coefficient of thermal expansion is injected.

A unique process can be used to provide further protection of the backside and also the four sides of the die prior to assembly – an “encapsulated” WLCSP. This process involves separating and re-placing the die on a (300mm) wafer, which will be used as a temporary carrier.

The development of this encapsulation process has also enabled a new WLCSP offering, namely a “fan-out” pad-to-bump topology.

Chip technology scaling has enabled tighter pad pitch and higher I/O counts, which necessitate a “fan-out” design style to match the less aggressively-scaled PCB pad technology. TSMC’s new InFO design enables a greater diversity of bump patterns. Indeed, it offers flexibility comparable to conventional (non-WLCSP) packaging.

Briefly, the fan-out technology starts by adding an adhesive layer to the wafer carrier. Die are (extremely accurately!) placed on this layer at a precise separation, face-down to protect the active top die surface. A molding compound is applied across the die backsides, then cured. The adhesive layer and original wafer are detached, resulting in a “reconstituted” wafer of fully-encapsulated die embedded in the compound:

(Source: TSMC. Molding between die highlighted in blue. WLCSP fan-out wiring to bumps extends outside the die area.)

This new structure can then be subjected to “conventional” wafer fabrication steps to complete the package:

  • addition of dielectric and metal layer(s) to the die top surface
  • patterning of metals and (through-molding) vias
  • addition of Under Bump Metal, or UBM (optional, if the final RDL layer can be used instead)
  • final dielectric and bump patterning
  • dicing, with molding in place on all four sides and back
  • back-grind to thin the final package

As illustrated in the figure, multi-chip and multi-layer wiring options are supported.

InFO design and verification Cadence tool flow
Cadence described the tool enhancements developed to support the InFO package. The key issues arose from the “chip-like” wafer fabrication processing applied to the reconstituted wafer.

For InFO physical design, TSMC provides design rules in the familiar verification infrastructure for chip design, using a tool such as Cadence’s PVS. As an example, there are metal fill and metal density requirements associated with the fan-out metal layer(s) that are akin to existing chip design rules, a natural for PVS. (After the final package is backside-thinned, warpage is a major concern, requiring close attention to metal densities.)
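
As an illustration of the kind of rule involved, here is a small Python sketch of a windowed metal-density check of my own devising (not PVS, and not TSMC’s actual rule values): the layer is swept with a fixed window and each window’s metal coverage is tested against minimum and maximum density bounds.

```python
# Toy windowed metal-density check. Shapes are axis-aligned, non-overlapping
# rectangles (x1, y1, x2, y2) in um; window size and density bounds are
# illustrative, not TSMC's actual InFO rule values.
def window_density(shapes, wx, wy, win):
    covered = 0.0
    for x1, y1, x2, y2 in shapes:
        ox = max(0.0, min(x2, wx + win) - max(x1, wx))   # x-overlap with window
        oy = max(0.0, min(y2, wy + win) - max(y1, wy))   # y-overlap with window
        covered += ox * oy
    return covered / (win * win)

def density_violations(shapes, die_w, die_h, win=100.0, dmin=0.2, dmax=0.8):
    """Yield (x, y, density) for windows outside the [dmin, dmax] range."""
    wy = 0.0
    while wy < die_h:
        wx = 0.0
        while wx < die_w:
            d = window_density(shapes, wx, wy, win)
            if not dmin <= d <= dmax:
                yield (wx, wy, round(d, 3))
            wx += win
        wy += win

rdl = [(0, 0, 150, 40), (0, 60, 150, 100)]      # two wide fan-out RDL traces
print(list(density_violations(rdl, die_w=200, die_h=200)))
```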

Yet, InFO design is undertaken by package designers familiar with tools such as Cadence’s Allegro Package Designer or SiP Layout, not Virtuoso. As a result, the typical data representation associated with package design (Gerber 274X) needs to be replaced with GDS-II.

Continuous arcs/circles and any-angle routing need to be “vectorized” in GDS-II streamout from the packaging tools, in such a manner to be acceptable to the DRC runset – e.g., “no tiny DRC’s after vectorization”.
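
The chord-error tolerance is the key to avoiding those tiny DRC artifacts. Below is a toy sketch (my own, not the Cadence/TSMC implementation) that approximates a circular arc with straight segments whose sagitta error stays below a tolerance much smaller than the manufacturing grid.

```python
# Toy arc vectorization for GDS-II-style streamout: split the arc into segments
# so that the chord (sagitta) error r*(1 - cos(step/2)) stays below `tol`.
import math

def vectorize_arc(cx, cy, r, a0, a1, tol=0.005):
    """Return points approximating the arc; tol is the max sagitta error in um."""
    max_step = 2.0 * math.acos(max(-1.0, 1.0 - tol / r))   # largest allowed sweep
    n = max(1, math.ceil(abs(a1 - a0) / max_step))          # number of segments
    return [(cx + r * math.cos(a0 + (a1 - a0) * i / n),
             cy + r * math.sin(a0 + (a1 - a0) * i / n))
            for i in range(n + 1)]

# Quarter circle of radius 50 um, kept within 5 nm of the true arc.
points = vectorize_arc(0.0, 0.0, 50.0, 0.0, math.pi / 2)
print(len(points), "vertices")
```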

Viewing of PVS-generated DRC errors needs to be integrated into the package design tool environment.

Additionally, algorithms are needed to perforate wide metal into meshes. Routing algorithms for InFO were enhanced. Fan-out bump placements (“ballout”) should be optimized during co-design, both for density and to minimize the number of RDL layers required from the chip pinout.

For electrical analysis of the final design, integration of the InFO data with extraction, signal integrity, and power integrity tools (such as Cadence’s Sigrity) is required.

Cadence will be releasing an InFO design kit in partnership with TSMC, integrated with their APD and SiP products, to enable package designers to work seamlessly with (“chip design-like”) InFO WLCSP data. The bridging of these two traditionally separate domains is pretty exciting stuff.

-chipguy


A Connectivity Verification Idea
by Bernard Murphy on 10-14-2015 at 4:00 pm


A Wirble

In case you hadn’t noticed, I like to write from time to time about EDA product ideas. I assume these are somewhat original, but given the maxim “there’s nothing new under the sun…”, I may well be wrong. In any event, I like to share these ideas if only to demonstrate that innovation in EDA is not stalled because we’ve run out of big, exciting things to do. I’ll grant there are plenty of challenges on the business side, but I’m hopeful that sooner or later someone will prove that profitable innovation outside the box is still possible.

We start with a proven problem: verification, the monster eating SoC design resources and schedules. High coverage, the benchmark of verification completeness as it used to be understood, has become a distant memory, replaced by concepts such as “test all reasonable software use-cases” and “test until you run out of time”. Part of the problem is that the directed-random approach so effective at the IP level does not scale to the SoC, leaving you with inevitably bounded case-based testing and an abiding suspicion that gremlins may still lurk in use-modes you haven’t tested.

And that motivates interest in static verification of integration-level logic as a way to get to at least one component of coverage completeness. Conventional wisdom builds a giant spreadsheet of expected connections between IPs, qualified by configuration settings and simple temporal conditions. A checker verifies that the SoC-level RTL is consistent with this spreadsheet, using graph-tracing and closely-bounded formal analysis, without needing to drill down inside IP-level logic. Because these are static checks, a complete check of all connections offers a coverage-like sense of confidence for this verification objective.
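
As a concrete (and heavily simplified) illustration of that spreadsheet-plus-checker flow, here is a Python sketch in which each expected connection names a source pin and a destination pin, and the checker traces integration-level wiring to confirm the path exists. The pin names and the fan-out model are invented for illustration; a real checker works on the RTL with configuration and temporal qualifiers.

```python
# Toy spreadsheet-style static connectivity check. `fanout` models the
# integration-level wiring (through buffers and muxes locked to one
# configuration); `expected` plays the role of the spreadsheet rows.
from collections import deque

fanout = {
    "pll0.clk_out": ["clk_mux.in0"],
    "clk_mux.in0":  ["clk_mux.out"],          # mux select assumed tied for this config
    "clk_mux.out":  ["cpu.clk", "usb.clk"],
}

expected = [
    ("pll0.clk_out", "cpu.clk"),
    ("pll0.clk_out", "ddr.clk"),              # deliberately missing, so it gets flagged
]

def reaches(src, dst):
    """Breadth-first trace from src through the wiring graph looking for dst."""
    seen, todo = set(), deque([src])
    while todo:
        net = todo.popleft()
        if net == dst:
            return True
        if net not in seen:
            seen.add(net)
            todo.extend(fanout.get(net, []))
    return False

for src, dst in expected:
    print(f"{src} -> {dst}: {'OK' if reaches(src, dst) else 'MISSING'}")
```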

Nobody believes this is a great solution. It’s a huge amount of work and it’s massively tedious to develop the spreadsheet. And it is just as likely to be error-prone as the RTL, and it will probably repeat systematic interpretation errors in the RTL, since it will be developed by the RTL design team or a closely related team.

So here’s the product idea. Spreadsheet approaches fall short because they are too atomic. They are a kind of machine-code representation of the integration when what we want is a higher-level, human-readable view: under such-and-such configuration settings, this IP gets a clock of this frequency or this reset pin gets a warm reset from this reset generator pin with a stall of 10 cycles before being released or … So I propose a tool that could first reverse-engineer this architectural intent from the implemented RTL. I call it a “What I Really Built Logic Extractor” or Wirble (the name is at least memorable and I’m no longer responsible for real product names).

A Wirble abstracts architecture one plane at a time: clock, reset, interrupt, bus, test, I/O, and so on – just at the integration level, not down into IP. It may need a few hints here and there (for example, this is a PLL clock output pin), but whatever is required will be orders of magnitude simpler than what is required for a spreadsheet. It then builds, per plane, a table or graph, a timing diagram or a very abstracted logic description showing what resources are delivered from what sources, under what conditions, to what consumers. This might be a static representation or a dynamic view where you can click through configuration options. For clocks you have to understand multipliers and dividers and muxes and gating; for resets you want to understand reset domains and what activates them (at a high level) and with what stalls. And so on.
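
To give a flavor of what the clock plane of such an abstraction might look like, here is a toy Python sketch that walks from each leaf clock pin back through dividers and muxes to a source, accumulating the effective division ratio per configuration. The netlist model, pin names, and frequencies are invented; a real Wirble would extract all of this from the RTL.

```python
# Toy clock-plane extraction: each entry is a source, a divider, or a mux, and
# leaf clocks are resolved recursively under one configuration. All names and
# numbers are invented for illustration.
clock_tree = {
    "pll0.out": ("source", 1200.0),                       # MHz; hinted as a PLL output
    "div2.out": ("div", "pll0.out", 2),
    "cpu.clk":  ("div", "pll0.out", 1),
    "usb.clk":  ("mux", {"0": "div2.out", "1": "pll0.out"}),
}

def resolve(pin, config):
    entry = clock_tree[pin]
    if entry[0] == "source":
        return entry[1]
    if entry[0] == "div":
        _, upstream, ratio = entry
        return resolve(upstream, config) / ratio
    _, choices = entry                                     # mux: pick input per config
    return resolve(choices[config[pin]], config)

config = {"usb.clk": "0"}                                  # usb clock mux select = 0
for leaf in ("cpu.clk", "usb.clk"):
    print(f"{leaf}: {resolve(leaf, config):.0f} MHz under config {config}")
```

A per-plane summary in roughly this form is what a designer or architect would scan to spot anything that does not match the intent.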

All of this should (if well implemented) be much easier for a designer or architect to scan and immediately spot implementation problems – wherever what you really built doesn’t match what you should have built. Perhaps you may even conclude there was a bug in the spec, leading you to a corrected or perhaps even improved implementation.

I’m not claiming this will be easy. This is a tough (but I think tractable) problem with high potential value. It will require graph-tracing and formal (with cleverness in black-boxing, so formal doesn’t get lost). It will require a lot of special-case handling to cover different architectures (this is nothing new – synthesis has massive amounts of special-casing to handle different RTL structures). And it will require a clever and very intuitive representation of abstracted functionality which a designer can scan quickly for potential errors. On the plus side, it isn’t necessary to handle every plane from the outset. A Wirble that could do a good job on the clock and reset planes would provide immediate value in verification; support for more planes could be added over time.

Once the SoC team has a signed-off Wirble output, they now have a human-readable regression standard against which subsequent Wirble runs could compare as the design progresses, reporting not simply that connection XYZ is different in the latest RTL drop, but also that the USB external clock is disabled in configuration ABC where before it was enabled.

One last thought. Perhaps instead of abstracting Wirble output from an SoC RTL, you could do a reverse-Wirble and generate that plane of integration logic from a hand-created Wirble spec. That means you now have an integration which is correct by construction. And you can still Wirble-verify that what was generated was what you expected. Lest you think I am now completely in fantasy-land, I know of at least one design company that is already doing something very similar.

That’s it – build a Wirble if you want to build something out of the ordinary. But please don’t ask me detailed market or implementation questions. I gave you the concept, the rest is up to you.

More articles by Bernard…


EUV sees further delays?
by Robert Maire on 10-14-2015 at 12:00 pm

Headwinds which will likely continue into 2016…
ASML reported revenues of 1.55B Euros with EPS of 0.75 Euros, more or less in line with expectations. Orders were the weak spot, falling to 904M Euros versus the previous quarter’s (Q2) orders of 1.523B Euros. The company guided Q4 revenues to be down about 10% to 1.4B Euros, below current flattish expectations. The company largely blamed foundry weakness as the primary culprit, but we are concerned that it will also see memory slowing, much as other equipment companies have reported.

EUV stumbles & delays yet again…..Intel likely part of the pushout
The company now expects to ship only 4 EUV systems in 2015 versus the prior expectation of 7 systems. We had predicted this push-out yesterday after hearing Intel’s comments on its call, which all but named ASML as the equipment company in question.

The key issue here is why?

ASML said that customers were being cautious and that was the reason for the delay. However, Intel said that the tool(s) needed to be reconfigured for higher output (i.e., they either weren’t making spec or didn’t make the required milestones).

Intel’s CEO, BK, said: "So you have to remember, there is this lag, and that’s why, as we looked at the tool, actually, we are making an adjustment on the efficiency of that tool, basically the number of units per tool out. In order to get more capacity, when that tool is really required".

It sounds like he is saying that the units per tool (read that as wafer throughput) are too low and need to be higher. Interestingly, he also added "when that tool is really required", which would indicate it’s not yet required, so Intel can push it out and wait for the latest and greatest version of the tool when it’s ready for prime time. That sounds a lot like EUV may not be required for 10nm (as Intel has stated it has a non-EUV process flow for 10nm). This is not the first time that Intel has laid the blame on ASML, as we have previously pointed out.

Immersion tools slipping?
We would point out that the number of immersion tools shipping has been slipping over the last year. While upgrades and service increases may point to customers finding other ways to get more out of existing tools, it could be that customers don’t want to invest more in new, expensive immersion tools while waiting for next-generation EUV. It’s likely we are seeing more equipment reuse in the near term.

EUV system up time still an issue…
ASML said that at "certain select customers" EUV had uptime of 70%, which sounds a lot like uptime averaged across the whole customer base would likely be well below 50%. So even if you can do 1,000 wafers per day, the uptime cuts that in half to 500 wafers per day (or less), suggesting we still have a long way to go in improvements.

Memory risk…
So far, ASML has not seen as much of a drop-off in memory spend as others in the industry have reported. Memory went from 38% of orders to 52% of orders in the current quarter. Given concerns about memory pricing and the overall stability of the memory market, we think this high exposure adds risk. Orders also fell from 41 systems in Q2 to 32 systems in Q3, with foundry falling from 38% to 23% of orders and IDM (likely Intel) falling sharply from 39% to 10%.

The stock…
As expected, the stock had a negative reaction. We think there may be more downside as investors digest the EUV issues and memory risks. The company did not say a lot about 2016, which makes us a bit nervous about the outlook for next year if memory does indeed slow. We think there could be potential downside to $80ish.

Robert Maire
Semiconductor Advisors LLC