
"Night Gathers, and Now My Watch Begins"

"Night Gathers, and Now My Watch Begins"
by Paul McLellan on 09-08-2015 at 7:00 am

What is going on in the watch world? And I don't mean Game of Thrones' Night's Watch.

Lots, actually. Whether it will amount to a lot remains to be seen. I still think the usefulness-versus-price tradeoff isn't there yet. Apple has sold 3.5M Apple Watches (or something close), which for anyone else would count as a runaway success but for Apple counts as lackluster. Even in Silicon Valley you don't see Apple Watches on everyone's wrists, so I expect they are rare in middle America. Apple has a big event later this week on the 9th, and it is possible that a new watch will be announced. But everyone else is rushing out new watches too, since the IFA consumer electronics show is this week in Berlin. Here are some of them.

Caveat lector: I haven't used any of these watches except the Pebble; I'm just doing the SemiWiki thing of following the industry so you don't have to.

The Samsung Gear S2: This gets great reviews since it is light, the weight of a normal watch. It displays the time constantly, just like one of those old mechanical things that I suppose are dumb watches. It has a rotating bezel, which is how you control it, and an integrated 3G radio (so you don't need to pair it with a phone, Samsung or otherwise). You can order an Uber from your watch, for example.

Huawei Smartwatch: Available for pre-order and shipping September 17th, with prices from $349 to $799 (there are 6 models). Another watch that looks like a watch, with real hands. I've not seen any report by anyone who has used one; everyone seems to be just reporting on the press releases. Huawei is usually strongest in the Chinese market, but I find it hard to believe that the sort of people who have made Xiaomi #1 there are going to be buying a $799 watch at several times the price of their phone. A few technical specs: a 300mAh battery that promises to deliver up to two days of battery life, 4GB of internal memory, 512MB of RAM, Android Wear 1.3, a built-in microphone, a six-axis motion sensor and a heart rate sensor.

LG Smartwatch: LG doesn't even call it a smartwatch; they call it a "smartpiece". A few specs: Android Wear running on a 1.2GHz quad-core Snapdragon 400 processor with 512MB of RAM, a 1.3-inch circular P-OLED display with a 320 x 320 resolution, and a 410mAh battery. But if you really want to be ostentatious, it is available as the 23-karat gold Urbane Luxe, which will be priced at $1,200.

Sony Smartwatch: Only available in Japan for now. One reason is that its NFC payment interface only supports Japan. It may eventually be available here, but the one shown at IFA won't even be available in Japan until March 2016. The intelligence is in the bracelet, not just the watch itself. Definitely at the vanguard of smartwatches that are trying to look like non-smartwatches, which, judging by most of the pictures here, seems to be the current approach.

The latest version of one of the first smartwatches, the Kickstarter-funded Pebble: unlike most watches, only buttons control the Pebble (no touch-screen). Five-day battery life. Cheaper and more utilitarian than most of the other watches here, which are a cross between expensive jewellery and cool electronics.

Android Wear now works with the iPhone. I've not tried it, but according to this review, don't bother: "The ability to pair an Android watch with an Apple phone is conceptually interesting but functionally, it's a lose-lose proposition. Android Wear watches can't do most of the things they can do when paired with Android phones, and your iPhone can't be extended through an Android watch the way it can with the Apple Watch. It's an experiment that may yield results one day, but that day isn't today."

I still think watches are awaiting a killer app. If they could continuously monitor blood pressure, say, then I think they would fly off the shelves. But just moving notifications from your phone to your wrist and even being able to take calls doesn’t seem enough. I own a Pebble but I don’t bother to wear it. Given that I already paid for it, finding it useful enough to wear is a pretty low bar to clear.

“And now his watch has ended.”


M&A Frenzy in the Chip Industry, the Growth of GaN, and Why It Matters

by Alex Lidow on 09-07-2015 at 12:00 pm

If expanding industries typically indicate vibrancy, a race to acquire and consolidate is generally reflective of the opposite: a period of slowed growth in mature, once high-flying categories. And while many industries experience a period of stardom followed by a sharp decline, we should be extremely worried when such declines occur in industries that are fundamentally central to our socio-economic vitality.

Enter the semiconductor industry, where in the past 24 months, there have been at least 10 significant mergers and acquisitions, including big name brands such as Avago's acquisition of Broadcom, Intel's purchase of Altera, and Infineon's acquisition of International Rectifier.

Further, since the year 2000, the semiconductor industry as a whole has grown at a mere 5% annually, as compared with 22% in the 1980s (see figure 1). The semiconductor “go-go years” of the 80s have been replaced by the more sedate, incremental growth rate of a mature industry – with fewer and fewer bright lights from product innovation. But does a shrinking semiconductor industry indicate bigger problems for the technology industry as a whole?

End of Moore’s Law

It is not a coincidence that this race to consolidate has coincided with the end of Moore's Law as we know it. Moore's Law depends upon the continuous reduction in the size of a transistor to maintain positive momentum in both cost and performance. Today, the realization of this bold prediction made 50 years ago has become an either/or proposition: either deliver better performance or a lower price.

The problem began to rear its ugly head about twenty years ago: as the size of transistors continued to shrink, the cost to produce them got bigger. Other costs, such as designing, packaging and testing, have also escalated, and the overall bill to develop an advanced silicon-based device – now in the tens to hundreds of millions of dollars – has become unaffordable to all but the well-funded, established companies.

Why It Matters

New chips are the fuel for the semiconductor industry's growth, and, as the costs escalate, the number of new semiconductors (and their innovation contribution) decreases. But it's not just the chip companies that stand to lose from this new reality. Industry luminaries predict that the sputtering of Moore's Law will likely hinder the innovation and advancement to which we have all become accustomed, putting in peril many of the devices and applications that businesses and consumers covet – virtual reality glasses, wireless power, autonomous vehicles, 5G mobile communications and advanced medical devices, to name a few.

The de-coupling of cost and performance is due to three underlying trends for silicon chips: (1) slower growth in end markets for semiconductors in general, (2) the rising cost to develop a new chip, and (3) the growing capital investment needed to build factories to produce each new generation of product.

Knowing this, it becomes clear why there has been so much consolidation in the chip industry lately: for many of the large semiconductor players, it’s simply less risky to buy existing revenues than to invest in new products and factories to develop and introduce new products. The total market size of the industry “pie” is relatively fixed, so organic growth from technological innovation is high-risk, in addition to being very costly.

The original wellspring of innovation – the venture-funded startup in the chip space – has all but disappeared, as there is little venture money available given the poor cost-to-risk ratio for new product development and the less vibrant growth prospects.

Unfortunately, this is not good news for consumers as startups are often the source of the technology industry’s greatest innovations. Given these metrics, advancements in the semiconductor industry will continue slowly, putting us farther and farther behind on the promise of Moore’s Law.

A Glimmer of Light – the Growth of GaN

Some exceptions can be found, however. Alternatives to silicon, such as gallium nitride (GaN) and silicon carbide (SiC), offer the potential for a refreshing return to the financial metrics the semiconductor industry enjoyed in the 1970s and 1980s. GaN, in particular, is entering a period of rapid growth. This growth is coming both from the replacement of lower-performing silicon devices and from emerging applications that are enabled by GaN's superior performance. These applications – from those that make our lives more convenient to those that have life-altering impacts – are critical for the technological advancement to which we have become accustomed.

Given the worrisome events of the last two years, I say it's time for the semiconductor industry and the venture community to come together and rally around innovation – not consolidation. With that, we will be able to fuel product development and propel advancement at the speed Gordon Moore predicted five decades ago.


Semiconductor Usage Revolves Around Asia

by Pawan Fangaria on 09-07-2015 at 7:00 am

I just read Daniel Nenni's blog titled "Is Silicon Valley Gridlock a Good Sign for Semiconductors?" Dan, there is no definitive answer to this, at least in terms of semiconductors. Let me call it Semiconductor Gridlock in Silicon Valley. Yes, it's good, because Silicon Valley promotes research, brings up innovative technology and products, demonstrates their use to the world by consuming them in Silicon Valley and the USA, and then scales production to make them affordable for the masses. Of course, as production scales, multiple players across the world join the game to further innovate, scale, adopt and make that technology or those products ubiquitous across the world. And no, semiconductors cannot remain gridlocked in Silicon Valley forever. The smartphone market is saturated in the USA; now it's time for other regions to manufacture smartphones and consume them themselves. The scenario is similar for other semiconductor products. My conviction comes from a recent chart on IC usage across the world, published by IC Insights.


This clearly shows the Asia-Pac region leading the pack with 58.9% of total IC sales across the world. Looking at the specific categories, the Asia-Pac region has the largest share in communications: 24.2% out of a total of 38.3% of sales in that category. The communications segment is followed by the computer segment, with 22.4% of sales in the Asia-Pac region out of a total of 35.6% in that category – understandably so. Europe leads in the auto segment by a very narrow margin over Asia-Pac; that margin is expected to vanish going forward, once the Chinese economy improves. The purely semiconductor-driven categories, i.e. computer and communications, together account for a combined 73.9% of total IC sales, and sales in these categories are heavily dominated by the Asia-Pac region this year.

Although the chart shows it region-wise, in my analysis I am looking at it according to human population in the different regions. Out of more than 7 billion people around the world, the Americas have ~4.45%, Europe has slightly more than 10%, Japan has the least at ~1.75%, and Asia-Pac has the largest share at ~55% of the world population.

Now connect this with semiconductor sales: Asia-Pac has the largest share of semiconductor sales at 58.9% and Japan the smallest at 7.8%. Interestingly, the Americas, with 4.45% of the population, contribute much more (22.9%) to semiconductor sales than Europe (10.4%), which has more than 10% of the population. This can be attributed to America being the initiator and consumption leader in the first place, and also to the current financial crisis in Europe. However, the key point I see here is that as things become affordable, they start moving to more populous regions that can afford them. Isn't that why Xiaomi is entering the PC/notebook segment, even though that segment is expected to decline for the next two years?
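A quick back-of-the-envelope calculation makes the comparison concrete. The sketch below uses the percentage figures quoted above to compute each region's ratio of IC-sales share to population share; the figures come from the article, while the ratio itself is just my own rough "consumption intensity" measure, not anything IC Insights publishes:

```python
# Each region's share of world IC sales divided by its share of world
# population (both in percent, from the article). A ratio above 1.0
# means the region buys more ICs than its population share alone
# would predict.

regions = {
    # region: (share of IC sales %, share of population %)
    "Americas": (22.9, 4.45),
    "Europe":   (10.4, 10.0),
    "Japan":    (7.8,  1.75),
    "Asia-Pac": (58.9, 55.0),
}

for name, (sales, pop) in regions.items():
    print(f"{name:8s} sales/population ratio: {sales / pop:.2f}")
```

The ratios make the article's point numerically: the Americas and Japan consume far above their population weight, Europe sits at roughly parity, and Asia-Pac's dominance in absolute sales is driven mostly by its sheer population.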

The 30+ years of semiconductors have given enough know-how, affordability, and consumption appetite to many regions around the world – even Africa is starting to consume. Today, Asia-Pac appears self-sufficient in manufacturing and specifically dominant in consumption. Within Asia-Pac, China has more than 19% of the world's population and India has ~17.5%. While China already has strong manufacturing, India has to catch up. India's "Make in India" program is encouraging foreign companies to start manufacturing in India, i.e. where the consumption is. There has been good progress – last month Intel opened its Maker Lab in India to provide innovation infrastructure around IoT (Internet of Things) to start-ups, and several smartphone companies, including Samsung, Xiaomi, Lava, Karbonn, and others, have set up manufacturing bases in India.

In my view, the Asia-Pac region will continue to dominate semiconductor sales for a couple of years, until a significant new breakthrough happens elsewhere in the world.

The IC Insights chart can be found here.
Also read Daniel Nenni's blog here.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Improve SoC Front-end Design Productivity

by Khan Kibria on 09-06-2015 at 4:00 pm

I have been involved in SoC development for a long time. During this period I have tried to learn what impacts productivity and, subsequently, market opportunity. Over the last year or so at SoCScape I have been involved in designing solutions to improve them. I have decided to post some of my thoughts from this experience and effort in a series of blogs. I hope you participate and enlighten me further on this topic by providing your comments below.

Integrated circuits have been packing in more functionality, thanks to tighter geometries, to reduce total system cost. However, the design cost for large chips is on the rise, and there are multiple contributing elements to consider. I am not talking about the cost of simulating or synthesizing complex functionality here; those are addressed by the multitude of constantly improving EDA products and methodologies. I am addressing the cost of preparing the RTL before hand-off.

The following three issues stand out the most:

Complexity
Cost due to the inherent complexity of the functionality. It takes time to put all the pieces together in an SoC, and delivering the complete RTL to the simulation and synthesis teams in a consistent manner is non-trivial.

Lack of uniformity
The building blocks, commonly termed intellectual property (IP) blocks, may come from different sources and are typically reused across multiple chips. The style, naming conventions, etc. differ depending on where they come from. The lack of uniformity in situations like this can increase resource needs and have a negative impact on cost.

Change in requirements
A product's requirements typically change a few times through its life cycle, driven by market demand. Adjusting to these changes incurs cost, since time to market often has a significant impact on product revenue.

A solution that controls cost by improving productivity in these areas is my primary interest. After all, experience suggests that SoCs are not becoming simpler or smaller! We need to make chips to make money, without letting the market opportunity slide away. A great SoC delivered late to market does not yield its full revenue potential.

Complexity
I would like to give a brief historical perspective. Remember when hardware description languages (HDLs) were introduced? When HDLs were in their infancy, many designers were reluctant to adopt them because of the absence of robust synthesis tools. Although digital design at the gate level was tedious, chips were simpler, and it was still fun to craft new functions with gates. But then complexity made gate-level design more mechanical, and the creative fun was not at the gate level any more. I started coding chips in Verilog and VHDL. Both have their great features and their pains, but it became fun to code at the RTL level: I could cater to my creative side, and I also got an immediate productivity boost.

History is repeating itself! I am encountering many proponents of high-level synthesis (HLS) these days. This is a good thing. With HLS, designing logic that implements complex high-level functions will not be as tedious.

So how does the complexity of today impact the designers?

Lots of signals to connect
I mean a lot! They traverse up and down through the hierarchy, and hooking them up individually is error-prone and time-consuming. This is where protocol-based connections can help. So what is a protocol-based connection?

They behave much like cables: they contain a collection of functionally related wires, and when you plug a cable into a connector, all the wires get connected at the same time – there is no need to connect them individually. Moreover, you cannot take one type of cable and connect it to another type of connector. (Of course, I am purposely ignoring the case where someone tries to use pure brute force!)

It is possible to use an HDL to apply such a paradigm: a VHDL record aggregates a collection of signals, and a SystemVerilog struct provides similar facilities. HLS methodologies support this paradigm as well.
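As a rough, language-agnostic sketch of this cable-and-connector paradigm (in Python rather than an HDL; the bundle and signal names are invented for illustration), a typed bundle connects all of its member signals in one step and rejects a mismatched connector:

```python
from dataclasses import dataclass, fields

# A "cable": a typed bundle of functionally related signals.
# The bundle and signal names below are invented for illustration.
@dataclass
class AxiLiteBundle:
    awaddr: int = 0
    wdata: int = 0
    bresp: int = 0

@dataclass
class InterruptBundle:
    irq: int = 0

def connect(port_type, cable):
    """Plug a cable into a connector: every member signal hooks up
    at once, and a cable of the wrong type is rejected."""
    if not isinstance(cable, port_type):
        raise TypeError(f"cannot plug {type(cable).__name__} "
                        f"into a {port_type.__name__} connector")
    # One operation wires every signal in the bundle.
    return {f.name: getattr(cable, f.name) for f in fields(cable)}

wires = connect(AxiLiteBundle, AxiLiteBundle(awaddr=0x1000, wdata=42))
print(sorted(wires))   # all three signals connected in one step

try:
    connect(AxiLiteBundle, InterruptBundle())
except TypeError as e:
    print("rejected:", e)
```

The point of the sketch is the type check: the "brute force" mistake of wiring the wrong bus to a port becomes impossible by construction, which is exactly what records, structs, and interfaces buy you in an HDL.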

However, any given SoC or chip design may contain a number of IPs, and one could be designed using a different HDL than the next. The front-end designer has to deal with this mixed bag, and that is where things start becoming tedious. Fortunately, this is where automation comes to the rescue. To take advantage of the automation, an IP needs to provide meta-data describing all of its protocol-based connections. An EDA tool can then use this information to connect everything on behalf of the designer. This type of automation presents a huge opportunity to improve productivity and reduce manual error.

The IP-XACT (IEEE 1685) standard provides an XML format to describe the meta-data. There are a growing number of EDA vendors providing support for IP-XACT. As a result, an increasing number of IP vendors are also delivering IP-XACT files with their IPs.

The IP-XACT format is quite comprehensive and elaborate; it addresses a broad spectrum of needs above and beyond protocol-based connections. The XML format is intended to facilitate machine reading – manually producing IP-XACT files is not practical.
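For flavor, here is a heavily simplified sketch of what the meta-data for one protocol-based connection looks like in IP-XACT. This is illustrative only, not a complete or schema-valid file; the component and bus VLNV (vendor/library/name/version) values are invented:

```xml
<ipxact:component
    xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>ip</ipxact:library>
  <ipxact:name>uart_ip</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:busInterfaces>
    <!-- One bus interface: the whole protocol hooks up as a unit -->
    <ipxact:busInterface>
      <ipxact:name>apb_slave</ipxact:name>
      <ipxact:busType vendor="example.com" library="buses"
                      name="APB" version="1.0"/>
    </ipxact:busInterface>
  </ipxact:busInterfaces>
</ipxact:component>
```

An integration tool that reads this can match the `apb_slave` interface against a compatible master interface elsewhere in the design and generate all the individual wire connections automatically.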

As a result, a particular EDA tool may support one or more additional mechanisms, such as a human-readable meta-data description language, a simpler meta-data format, APIs, or a graphical user interface. These additional mechanisms allow the user to provide the meta-data for an IP that lacks a preexisting IP-XACT file. A designer should evaluate an EDA tool based on the simplicity of these mechanisms; simplicity is a powerful contributor to productivity.

Creation of glue logic
Designers need to provide glue logic that realizes a multitude of functions while they are bolting the IPs into their SoC or chip, and they spend quite a bit of time on it. This is what makes the SoC act as a single cohesive entity. Typically these are small pieces of logic, but they can be large as well, and the complexity of SoC design increases their number. Automation can improve productivity here too, by introducing a division of labor.

An EDA tool for this purpose should provide facilities for the designer to dynamically introduce glue-logic blobs inside different areas of an SoC as needed. These blobs should not contain any actual logic at first; they simply provide placeholders so that the designer can focus on integrating the rest of the IPs. The tool should permit one to come back to these blobs and add logic later. This enables a divide-and-conquer strategy: you can employ multiple designers to work on the task in parallel, some defining and creating the blobs while others fill the blobs with actual logic.

Chip specific customized IP
There are quite a few pieces in an SoC that have to be customized on a per-SoC basis. Clock/power management is a great example; it is very complex in nature. Other pieces, such as the padring and design-for-test (DFT) logic, also fall under the customized-IP category.

These pieces are challenging, since changes in the SoC design impact them quite a bit. Additionally, their roles continue to evolve over time in terms of how they handle their designated tasks.

This is actually an interesting topic, which deserves its own space. I will go over this topic in my next post. Stay tuned! I appreciate you providing feedback, which allows me to refine my thoughts. Cheers!


Resolution Enhancement Technology – the key to Moore’s Law

by Tom Dillinger on 09-06-2015 at 10:00 am

The ability to extend photolithography utilizing 193nm immersion (193i) light sources to current process nodes is truly the key technical achievement that has enabled Moore's Law to continue. The interplay between the exposure equipment, the materials – especially, resists and related coatings – and the fundamental principles of optics is complex, and also fascinating.

In the early deep submicron process nodes, “optical proximity corrections” (OPC) were made to the original mask data, to enhance the fidelity of the photolithography process – e.g., “serifs” added at shape corners to reduce line end pullback. At the 22nm process node and beyond, a detailed analysis of the entire optical system is required. The printing of the target design data requires optimization of the source illumination pattern together with the corresponding mask data – aka, “Source Mask Optimization” (SMO).

Neal Lafferty, Director of SMO Development at Mentor Graphics, was gracious enough to educate me on the recent advancements in this field. As part of the breadth of Mentor’s Calibre product family, pxSMO, pxOPC, and OPCVerify are the tools used for source and mask data generation, and final verification of “printability”. These tools are key to both initial process development and production fabrication.

During development, process integration engineers run SMO/OPC experiments on wafer test runs, to guide development of the layout design rules. The rules reflect the tradeoff between the goals of high circuit density and manufacturing yield, due to variations in both process and photolithography steps. The Design Rule Manual component of the production Process Design Kit (PDK) release is the culmination of these early experiments. Subsequent customer tapeout layer data are also analyzed using these tools, to create the production masks and corresponding illumination source patterns.

Neal indicated that the current 193i exposure equipment provides highly-programmable “pixelated” light sources, utilizing an independently-addressable multi-mirror array. The ability to propagate (plane wave) illumination with different angles provides many degrees of freedom for the pxSMO transform algorithms. The light source pattern is optimized to compensate for the diffractive elements in the overall system – e.g., the mask chrome pattern, the (thick glass) mask itself, the photoresist and coating materials, and any nonlinearities in the optical path. Light sources have definitely come a long way, from in-axis uniform intensity, to off-axis quadrupole illumination, to the current state-of-the art with programmable pixelated patterns.

To determine the quality of the SMO solution, the Mentor tools utilize “gauges” and “clips” in the layout data. A measurement gauge is a feature of the layer data where the critical dimension (CD) is a key process control parameter – e.g., a device gate length, a metal line width, a contact or a cut mask opening. The gauges are defined by the process integration engineer.

From the layout data, specific clips are selected, typically ~1-2um on a side. In practice, a design mask layer might utilize ~10-20 clips for SMO optimization. A clip might contain up to a few hundred gauges. Clips may be defined manually (painstakingly!), or auto-selected by a new feature incorporated into the Calibre toolset. Additionally, the tools provide settings that toggle whether specific gauges are to be omitted from consideration during the SMO data generation phase, to save computation time.

After the light source pattern and mask data are generated, a subsequent OPCVerify analysis will evaluate all gauges for their printability. (OPCVerify also cleans up the generated mask data, removing vertices that are immaterial to the printed image.)

The simulated exposure for each gauge after SMO is analyzed, and a quantitative calculation of image quality across all gauges is made. The process integration engineer provides the set of metrics for each gauge and the relative weight associated with each measure – e.g., the light intensity (dose), the slope of the intensity at a line edge (contrast), and the depth of focus. The engineer can run multiple SMO experiments with different weighting factors on these metrics and compare the results to select the preferred source pattern and mask data.
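As a toy illustration of this kind of weighted scoring (my own sketch, not Calibre's actual cost function; the metric names, target values and weights are invented):

```python
# Toy weighted quality score for SMO gauges. Each gauge reports a few
# simulated image metrics; the process engineer supplies per-metric
# targets and weights. All names and numbers below are invented.

def gauge_score(metrics, targets, weights):
    """Weighted sum of normalized deviations from target; lower is better."""
    return sum(weights[m] * abs(metrics[m] - targets[m]) / targets[m]
               for m in weights)

targets = {"dose": 1.00, "contrast": 2.5, "depth_of_focus": 120.0}
weights = {"dose": 1.0, "contrast": 2.0, "depth_of_focus": 0.5}

gauges = {
    "gate_cd_h":  {"dose": 0.98, "contrast": 2.4, "depth_of_focus": 110.0},
    "metal_cd_v": {"dose": 1.03, "contrast": 2.6, "depth_of_focus": 125.0},
}

total = sum(gauge_score(g, targets, weights) for g in gauges.values())
worst = max(gauges, key=lambda n: gauge_score(gauges[n], targets, weights))
print(f"total cost = {total:.3f}, worst gauge = {worst}")
```

Re-running the same scoring with different weights – say, emphasizing depth of focus over contrast – is the kind of experiment that lets the engineer compare candidate source/mask solutions, as described above.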

Neal highlighted that this layer clip + gauge analysis methodology is applicable to any layout style, whether a highly periodic array or a very asymmetric (logic-like) layout – although more gauges may be appropriate for the latter case.

The figure briefly illustrates an example of SMO results, for a layout with an array of mask data openings. The rectangle in the very center of the clip is the typical drawn layout shape. The green lines in the center of the rectangle are the gauges (in both directions), across which the simulated image result will be analyzed. The detailed, multi-edge shape around the drawn shape is the final mask data, with the corresponding illumination source pattern. The orange oval is the nominal, simulated image for the opening.

Parenthetically, designers at current process nodes are now required to exercise a lithography process checking (LPC) step prior to tapeout, using a tool such as Mentor’s LFD and an encrypted runset from the foundry’s design kit. During process integration development, the wafer run experiments on the film stacks, resists, masks, and illumination sources will require iterations with pxSMO/pxOPC and OPCVerify optimizations. The production OPCVerify step serves as the basis for the LPC runset released in the design kit, for the required pre-tapeout lithographic analysis.

Neal briefly mentioned that EUV lithography presents new challenges to SMO, due to the unique transition to reflective optics, and the unique pattern of the EUV source. He described EUV illumination as “more like points of light… discrete samples across the field… yet, the EUV mask shadowing effects are understood, and of course, the laws of optics haven’t changed.” 🙂

Indeed, there is a long history and a wealth of experience at Mentor with RET, and SMO technology in particular. Mentor will continue to provide both the fundamental computational lithography technology and the ease-of-use features to allow foundries, IDM’s, and equipment/materials providers to manage the complexity and challenges of lithography at new process nodes.

A technical article that provides additional detail on pxSMO/pxOPC/OPCVerify is available from Mentor’s web site, at this link.

-chipguy


Computer Vision in Mobile SoCs and the Making of Third Processor after CPU and GPU

by Majeed Ahmad on 09-05-2015 at 12:00 pm

Qualcomm's teaser of its upcoming Snapdragon 820 system-on-chip (SoC) was supposed to make up for issues like overheating and the bad press that haunted its predecessor, the Snapdragon 810. Instead, the San Diego, California–based semiconductor giant chose to show off its GPU and image-processing muscle. In particular, its Spectra image processor paints a rosy picture of next-generation camera applications like 3D vision, augmented reality, virtual reality and deep learning.

Qualcomm reiterated its strategic focus on computer vision technology when it released more details of the Hexagon 680 DSP inside the Snapdragon 820 at the Hot Chips conference in Cupertino, California on August 24, 2015. The Hexagon 680 is the next iteration of the DSP technology Qualcomm uses to offload multimedia tasks from the CPU cores in Snapdragon chips.


Machine vision is creating new opportunities for smartphones

Qualcomm's new DSP technology boasts a heavy-duty vector engine – which it calls Hexagon Vector eXtensions, or HVX – for compute-intensive workloads in computational photography, computer vision, virtual reality and photo-realistic graphics on mobile devices. Moreover, it expands the single-instruction-multiple-data (SIMD) width from 64 bits to 1,024 bits in order to carry out image processing with wider vector capability.

Qualcomm is using three DSPs in the Snapdragon 820: one for image processing, one for the wireless modem and one for always-on sensor listening. However, it's the Hexagon 680 DSP-based Spectra image signal processing unit that is drawing the most headlines. It's centered around the premise of enhanced computer vision, which now aims to take smartphones and tablets to an entirely new level of imaging experience.

Why DSP Matters in Computer Vision

Vision applications have largely relied on CPUs, GPUs, FPGAs and DSPs, but for mobile devices like smartphones and tablets, programmable DSP solutions are becoming a strategic choice because they consume less power and die area on vision SoCs. The leverage comes from the fact that DSP instruction sets are focused on single-core performance and are tailored for specific applications like audio or image processing.

Moreover, the ISAs of DSP cores are often built around very long instruction words (VLIW), which use multiple execution units in parallel to carry out a single instruction – significantly boosting optimization for specific applications like image and video processing. In addition, DSPs offer support for critical features such as histograms, look-up tables (LUTs) and sliding-window filters at the ISA level.
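As a rough illustration of two of those primitives (in plain Python rather than DSP intrinsics; a wide-SIMD DSP would apply the same operations to many pixels per instruction):

```python
# Two image-processing primitives that DSPs accelerate at the ISA
# level, sketched on a 1-D row of 8-bit pixel values.

def apply_lut(pixels, lut):
    """Look-up table: remap each pixel by a single table index
    (e.g. gamma correction or contrast stretching)."""
    return [lut[p] for p in pixels]

def sliding_window_mean(pixels, width=3):
    """Sliding-window filter: replace each pixel with the mean of the
    window centered on it (window clamped at the edges)."""
    out = []
    for i in range(len(pixels)):
        lo = max(0, i - width // 2)
        hi = min(len(pixels), i + width // 2 + 1)
        window = pixels[lo:hi]
        out.append(sum(window) // len(window))
    return out

invert = [255 - v for v in range(256)]   # a trivial 256-entry LUT
row = [0, 64, 128, 192, 255]

print(apply_lut(row, invert))        # [255, 191, 127, 63, 0]
print(sliding_window_mean(row))      # [32, 64, 128, 191, 223]
```

On a CPU these are ordinary loops; on a vision DSP the LUT load and the windowed accumulate map to dedicated instructions, which is where the power and performance advantage comes from.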


DSP is now the third main processor in mobile SoCs along with CPU and GPU

Qualcomm's tightening focus on next-generation camera applications with the launch of the Hexagon 680 is a stark reminder that DSP engines are going to be the workhorses of computer vision and other imaging-centric apps in smartphones, tablets and wearable devices. It's a sea change in mobile SoC design, and a harbinger of the camera envy that consumers will most likely see in smartphones and tablets coming to market in 2016.

And the DSP-centric image- and video-processing pitch is coming from a chipmaker with a reputation for being a step ahead in the mobile semiconductor market. Qualcomm has set the benchmark for dual-camera and dual-sensor applications in smartphones, and it's a testament that in the mobile SoC recipe, the image processor is now positioned as the third most important processor, after the CPU and GPU.

The Other Vision DSP

Another company that has been advocating DSP-based solutions for computer vision on mobile devices is CEVA Inc. The supplier of DSP cores recently launched the XM4 vision processor – a low-power DSP and memory-subsystem IP core designed from the ground up to meet the heavy computing needs of image processing and computer vision applications on mobile devices.

The CEVA-XM4 is the company’s fourth-generation imaging and vision processor IP that boasts a mix of scalar and vector engines, VLIW, and SIMD functions for heavy-duty signal processing workloads. It also features a power scaling unit (PSU) that allows SoC designers to scale power according to application requirements and thus minimize the overall power consumption.


XM4 is designed for mobile and embedded vision systems

The CEVA-XM4 is a vision-optimized DSP engine that offloads the compute-intensive imaging algorithms from CPUs and GPUs so that designers of mobile devices can employ advanced algorithms and avoid compromises on image quality and battery life. The vision algorithms that XM4 processor supports include real-time 3D depth map generation and point cloud processing for 3D scanning, object and image recognition, and deep learning technologies like convolutional neural networks (CNN).
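As a flavor of one of those algorithms, here is a minimal sketch of the per-pixel math behind stereo depth-map generation from a dual camera: the classic pinhole relation depth = f * B / disparity. The focal length and baseline below are made-up illustrative values, not XM4 or any phone's specifics.

```python
# Minimal sketch of stereo depth-map math, the kind of per-pixel workload
# a vision DSP is meant to offload. FOCAL_PX and BASELINE_M are invented
# values for a hypothetical dual-camera module.

FOCAL_PX = 800.0      # focal length expressed in pixels (assumed)
BASELINE_M = 0.04     # distance between the two lenses, in meters (assumed)

def depth_from_disparity(disparity_px):
    """Pinhole-stereo relation: depth = f * B / disparity."""
    if disparity_px <= 0:
        return float('inf')      # no stereo match: treat as infinitely far
    return FOCAL_PX * BASELINE_M / disparity_px

# A dual-camera phone evaluates this (plus the block matching that finds
# the disparity in the first place) for every pixel of every frame.
for d in (32, 16, 8):
    print(f"disparity {d:2d} px -> depth {depth_from_disparity(d):.2f} m")
```

The division itself is trivial; the expensive part the DSP accelerates is the block-matching search that produces the disparity for millions of pixels per second.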

These vision algorithms will create a matrix of possibilities for the smartphones of 2016, for which two cameras on the back and one on the front are going to be the norm. These phones, with mega sensors and high-resolution screens, will enable a new breed of features encompassing 3D vision, computational photography, visual perception and analytics. And that’s a lot of work for the CPU and GPU on a smartphone’s application processor.

A DSP running at half the clock speed of a CPU can achieve similar results in terms of image processing. Likewise, using a GPU as a compute engine in vision processing applications can yield lower performance due to strict memory constraints. So expect more mobile SoC makers to line up a vision processor next to the CPU and GPU and make the best of the new era of computer vision on smartphones and other mobile devices.

Also read:

Snapdragon 820 SoC Finds Qualcomm at Crossroads

New CEVA-XM4 Vision IP Does Point Clouds and More

CEVA-XM4 White Paper


Smartwatch – A Tough Puzzle to Crack

Smartwatch – A Tough Puzzle to Crack
by Pawan Fangaria on 09-05-2015 at 7:00 am

Discounting the initial electronic or digital watch wave of the 1970s, which died more suddenly than expected, the recent Apple Watch event was the third attempt to invade the big watch market; the first was in 1999 and the second in 2012-13, led by Pebble. Although it’s stated that Apple has sold about 3.6 million Smartwatches so far, it has not been able to create the buzz that was expected when the Apple Watch was launched. In my view, even most of the 3.6 million watches sold could be due to the impulsive and conspicuous buying behavior of consumers, simply because it was Apple’s new product! What’s still missing? What’s going wrong?

Let me go back to the basic principles of the marketing mix, the 4Ps: Product, Place, Price, and Promotion. In the case of the Smartwatch, the very first principle, the ‘Product’ itself, has not yet been defined in any real sense. What does this product offer: time, phone, messages, data, health monitoring, or a mix of these things but very little on its own? If timekeeping and health monitoring are all the Smartwatch provides on its own, then who should buy it? Does that justify the price? Okay, the price can be debated. But is the ‘Place’ defined with enough clarity? Who should buy a Smartwatch, where, and under what conditions? I will say more about the ‘Place’ a little later, when I draw attention to the next level of marketing strategy involving Segmentation, Targeting, and Positioning. Before that, let’s talk about ‘Promotion’. The Smartwatch is late; its ‘time’ has already been stolen by the Smartphone. Hence, the Smartwatch has to find other independent and smart strategies to promote itself, not by remaining in the shadow of the very device that stole it. Moreover, a Smartphone cannot be viewed as a complementary product for a Smartwatch, the way gas is for a car, or butter and cheese for bread.

Apply the same 4P principles to the Smartphone and you will find that it’s a very well defined product (more than an isolated product, with multiple versatile functionalities). The other aspects of the 4Ps fall in line very amicably with this product. Now apply these principles to a traditional mechanical watch. Again, you will find a very well defined product which runs forever without much intervention, shows you the time at the drop of a hat, and makes a personal statement according to the watch you are wearing; it’s a perfect companion for those who wear a watch. So, where did we go wrong in defining the Smartwatch?

Okay, let’s come back to the marketing strategy. Which segment is targeted for the Smartwatch? If it’s not the watch wearers, then is it worth the effort and ROI? Why does Baselworld get worried every time tech companies make a thrust at developing a market for the Smartwatch? The Smartwatch cannot win back Smartphone holders, but it can try winning watch wearers. Of course, if you have a good, well defined Smartwatch product, then you can also attract non-wearers of watches to wear one. So, clearly the implicit target is the traditional watch wearer. However, the strategy has not been formulated with that target in mind; the product has not been defined for that audience. What has happened is a half-cooked story: a hurried jump to the conclusion that smart engineering with data, messages, apps, and some phone functions, topped with a fashion statement, will attract the general consumer (watch wearer and non-wearer alike) from the crowd. The market segment and exact target have not been thought through, and the product has not been defined accordingly. That, in my personal opinion, is where the problem lies. I’m open to other views from the audience.


A non-wearer of watches is used to taking her Smartphone out of her pocket anyway, to check several things, several times a day. So she wouldn’t mind checking the Smartphone for the time and the other things a Smartwatch also provides. A percentage of fitness enthusiasts would definitely like the Smartwatch, mainly for those pulse readings for heart rate and other body-function monitoring, which the Smartwatch provides by remaining attached to their bodies. So that’s an exclusive function the Smartwatch provides. Is that the reason Fitbit remains at the top in wearable market share? In marketing terms, the Fitbit does define a health monitoring product per se. However, Smartwatch vendors do not wish to cater only to the health monitoring or fitness segment. Otherwise why would Apple bring out the Apple Watch Edition? Is it not for the watch-wearer?

Definitely, the watch-wearer segment is the one which makes sense and should be the primary target for the Smartwatch. Although Baselworld is worried, they understand watch-wearer psychology very well, and they also know that the kind of technology provided by Smartwatches won’t win over the consumers addicted to exotic, fashionable, mechanical watches; hence the natural boasting by a few traditional and exotic watch vendors. Therefore, it’s important for the Smartwatch marketer to enter into the consumer psychology and embrace it. Once the watch-wearer segment is won over with an appropriate and smart product, it will be easier to bring non-wearers into the fold. So, what else, other than the fitness functions, should a Smartwatch provide? This should be carefully analyzed before jumping into the engineering and fashion aspects. I beg to differ from an article (link given at the end) on Smartwatch design at the AnandTech website, where the author says consumers as well as vendors are confused about the Smartwatch. In my view, the consumers are well aware; it is the vendors who are confused. The vendors are not offering something valuable in their Smartwatches that the consumers are unaware of; the consumers are already getting those features elsewhere.

Before defining the engineering aspects, the product itself must be defined according to consumer preferences. In my last article (link given at the end), I already talked about the class of engineering done by Apple in its Smartwatch. But that’s not enough. Here are some features I think are a must for a Smartwatch. The list is not complete; many more things can be added.

Battery Life – My reasonable estimate is that a Smartwatch should work on a single charge for at least 15 days, if not a month. It may be difficult, but that’s the reality vendors have to face to make a traditional watch-wearer willingly accept a Smartwatch. It would be even better if some kind of auto-recharging based on solar or piezoelectric principles could be added.

Independence – The Smartwatch must have its own identity and shouldn’t need any other complementary device for it to work properly. A longer battery life is also related to independence in a subtle way.

Always-on – While wearing the Smartwatch on one wrist, if I have to use the other hand to turn the display on, it defeats the purpose. Okay, smart engineering can help here: turn it on with a twist of the wrist! Design an appropriate sensor.

Form-Factor – The Smartwatch must fit well on the wrist and should be of reasonable weight. The Samsung Gear S2, with its round 1.2” face, is a good idea. More smart engineering has to be done to realize the other aspects of a good form factor. I’m not sure whether chips at the 10nm process node or below can help here, if you want to cram a lot of functionality into the Smartwatch.

Material – If a Smartwatch is expected to cling to my wrist for most of the day, the casing and the band should be of good, safe, and durable material which I would feel comfortable with. The inner material will, of course, be dictated by the engineering.

Smart use-model – The Smartwatch shouldn’t need my other hand to operate it most of the time. Maybe a speech-recognition-based command interface with a suitable screen resolution can help. Of course, all provisions for using the watch screen as a touch keyboard should be there for a person who wants to use it that way when needed.

Security – Of course, security is essential for any electronic device nowadays, from a data perspective. In the case of the Smartwatch, it should also have built-in traceability for the device itself.

Exclusive features – The Smartwatch should have some exclusive and unique features. We talked about fitness and health tracking. Health parameters such as heart rate must be measured accurately, and the measurements must be consistent during normal usage of the device. Smart engineering is needed here to place the right sensors, the right analytics, and the right measurements in all circumstances. Other features, such as mobile payment by just waving your wrist, weather display, and home-key and car-key systems built into the Smartwatch, can prove worthy. There can be many more.

Software – It’s the age of apps in the digital world. There should be well-identified watch-specific apps for weather, sports, time zones, GPS, flight information, calendar, and so on. We see the Android Wear, watchOS, and Tizen operating systems on Smartwatches promoted by Google, Apple, and Samsung respectively. Two things here: the software needs to be versatile, and there must be interoperability, because a Smartwatch may need to connect with many devices.

Fashion statement – Yes, it’s a great motivating factor. In the traditional watch segment, the essential function of timekeeping is insignificant compared to the overwhelming fashion statement a designer watch makes. But that formula cannot simply be copied for the Smartwatch. The Smartwatch has to first establish its own identity by delivering the essentials. The Apple Watch seems to have done well as a fashion statement, but I guess it still has to deliver more on the essentials.

Effectively, a Smartwatch should stand apart on its own, independent of the Smartphone and the traditional watch, in all respects. Only then can it gain acceptance in the mass market. Companies like Pebble, Samsung, Fitbit, Apple, Garmin, and others have tried their best, but that’s not enough. They have to work towards making the Smartwatch a superior piece in its own space compared to any other device in the electronic world. After the Apple Watch it’s the Samsung Gear S2; progressive, but let’s see where it goes!

AnandTech article is HERE
Also read: Apple Watch – A Great New Design, Needs More

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Is Silicon Valley Gridlock a Good Sign for Semiconductors?

Is Silicon Valley Gridlock a Good Sign for Semiconductors?
by Daniel Nenni on 09-04-2015 at 4:00 pm

Let’s face it, the semiconductor world has always revolved around Silicon Valley and it always will. After commuting to Silicon Valley from the East Bay (45 miles each way) for the past 30+ years I’m acutely aware of the traffic patterns and how they relate to the economy. With the advent of smartphones I can work or be entertained just about anywhere (waiting in line at Starbucks for example) except of course while driving which is one of the many reasons why I absolutely hate traffic!

Traffic really started getting bad in Silicon Valley during the late 1990s due to the dot-com bubble. Back then semiconductors were all about computers, and computers were powering the internet explosion. After the bubble burst there was a brief traffic hiatus, but the housing boom again brought gridlock to Northern California. In 2008 the housing bubble burst, leaving Silicon Valley with the highest unemployment rate I have ever seen. Traffic was also the lightest I have experienced, which was a welcome change. I no longer had to leave home before the crack of dawn and return home well after dinner. Nor did I have to suffer road-raging imbeciles. I drive a little tiny car, so modern-day SUVs look like weapons of mass destruction. My monster-truck-driving brother calls my car a speed bump!

An interesting side note, I have seen some very aggressive drivers heading to Starbucks but when they are in line waiting for their caffeine fix they are polite as can be! Go figure…

If you drive down the main arteries of Silicon Valley you will see thousands of new or soon-to-be-built apartments and monstrous office buildings on the rise. As Apple finishes its 2.8-million-square-foot Space Ship Campus in Cupertino, which will house more than 12,000 employees, it is also leasing hundreds of thousands of square feet in Santa Clara and San Jose. Facebook and Google are also expanding their Silicon Valley footprint, and hundreds of baby unicorns are taking up residence. The unicorns I’m referring to are the companies that have billion-dollar valuations based on fundraising rather than revenue. From what I read there are now more than 100 full-grown unicorns, so yes, there is another bubble coming, absolutely.

Fortunately or unfortunately most of the bubble headcount in Silicon Valley seems to be software related versus semiconductor professionals. Are our jobs safe in the next bubble? I would say yes, much more so than the last two bubbles. During the dot-com bubble there were tens of millions of people intermittently connected to the internet. Today there are more than three billion people “always-on” the internet out of a total population of more than seven billion (less than half). In fact, last week Mark Zuckerberg announced that a record one billion people logged onto Facebook in one day. Yet out of the more than seven billion people on this planet only two billion have smartphones (less than one third) so we have plenty of semiconductor growth yet to come. Sound reasonable?


Smartphone Penetration in Bali & Gili Islands

Smartphone Penetration in Bali & Gili Islands
by Eric Esteve on 09-04-2015 at 12:00 pm

Whether you travel for business or stay for a holiday on one of the paradise islands of the Lesser Sunda Islands, like Bali or Gili Meno, you always learn something about mobile phone penetration. If you stay in one of the luxury resorts, populated by rich Western or Chinese visitors, you probably don’t learn more than you would walking through an international airport, or staying at home in Europe or the USA. As far as I am concerned, I was lucky enough to stay in a friend’s house in Bali. The house sat in a typical suburb between Denpasar (100% Indonesian) and Kuta (100% touristic). That means that when you walk 100 meters outside the house, you can buy fruit in a grocery like the local people do, and get an idea of their way of life…

The first time I went to Bali was 15 years ago, and I was surprised to see that in every shop, from the smallest one lost in the countryside to those in the touristic areas, the vendor was always using a pocket calculator (even for the most obvious calculation). I must say that things have changed a lot: the same people now tend to use their cell phone instead of the calculator! I have seen many people in the street with a cell phone, more often the robust Nokia type that we were using in Europe in the early 2000s than the latest Apple or Samsung (or even a $100 smartphone from the likes of Wiko or Huawei).

I said “with” on purpose, as they don’t use them that much. Why? As is often the case, cost is the reason. When I asked to use somebody’s cell phone in a hotel to call a friend, I was told OK, but I should pay 7,000 Rupiah per minute. That’s only about 50 US cents, but keep in mind that as a tourist you can get an excellent main course for 30,000 Rupiah… or a 4-minute call! It’s probably way too expensive when the average monthly salary is 2 million Rupiah, or $140… that’s my marketing message to the operators!

I couldn’t get a picture of the guy fishing and driving the boat with a cell phone in hand, for obvious reasons: one hand was for fishing and the other for the boat. But I can tell you that the phone was inside the orange hermetic bag! And, by the way, we didn’t catch any fish, not even a small mackerel or white tuna… The picture was taken around Gili Meno, a minuscule island close to Lombok, and the fisherman also runs a small resort with his two brothers. It was interesting to see how useful a cell phone can be during an electricity shutdown: it’s the only way to listen to music! From the waiter to the boss, everyone on the staff had their own phone, even if they really don’t use them very often.

One more detail about Gili Meno: this island is the country of marine turtles, and I discovered that snorkeling with these ladies, sometimes for 10 or 12 minutes in a row, can be a real ecstasy (and I have been snorkeling for about 40 years). Gili Meno’s inhabitants used to be fishermen, but they clearly enjoy a better life working in tourism (there is no mass tourism on such a small island). Their behavior with the turtles demonstrates that they are pretty clever and have understood their market-segment positioning: when the turtles lay their eggs, for example a few meters from the resort, the islanders capture the baby turtles. The goal is to nurture the babies (see below, when they change the water) until they are strong enough to escape more easily from predators (it takes a few months), and then to release them into the sea during a ceremony. The turtles help the island’s people bring in tourists (and money), so the people help the turtles survive!

I agree that we are drifting a bit from the smartphone market survey, but I found this story a good marketing case. Want some more insight about smartphones in Indonesia? Let’s talk about the traffic in Bali. About 95% of the vehicles are scooters, and that’s good news, because the remaining 5% of cars and trucks already create huge traffic jams. As usual in Asia, you can see a complete four-person family riding one scooter, but let’s look at the simple case: two (upper-class) teenagers riding a scooter, the boy driving, the girl behind. As far as I have seen, it’s very, very fashionable for the girl to consult messages on her smartphone while the boy is driving… and to answer those emails.

The most important thing being to act just as if she were at home, sitting in a chair!

There is absolutely no reason for posting this last picture… but I like it!

Eric Esteve from IPNEST


SEMATECH, Silvaco and SRAM

SEMATECH, Silvaco and SRAM
by admin on 09-04-2015 at 7:00 am

SEMATECH has been around for over 20 years, starting in Austin. Today it is in upstate New York, which increasingly seems to be the area for semiconductor research, with IBM (still doing research, although it sold its semiconductor business to GlobalFoundries), GlobalFoundries’ own Fab 8, and the College of Nanoscale Science and Engineering (CNSE).

For a couple of years, Silvaco has been working with SEMATECH to create an environment where advanced CMOS processes and devices can be created and optimized entirely within 2D/3D modeling, to help reduce the burdensome cost of manufacturing real wafers. Once a simulation methodology has been created, it can be used for design of experiments (DoE), where design variables are changed and the outcomes examined in detail.

This collaboration is driven by the challenge of getting maximum performance from advanced designs as CMOS scaling continues to 10nm and beyond. One area of increased interest in simulation is that standard rule-based resistance and capacitance extraction may be insufficient for designs in advanced FinFET technologies. FinFETs introduce even more complexity due to their three-dimensional nature, including coupling within a single device, between devices, and between the devices and the local interconnect. Physical 3D field-solver-based extraction at the cell level ensures that the designer takes into account coupling effects between the FinFET device and middle-of-line interconnects.

Silvaco approaches this problem via a multi-step process, all using Silvaco’s tools:

  • A 6T SRAM layout is created in the Silvaco Expert layout editor as a test vehicle.
  • The 3D structure is created via Victory Process, Silvaco’s 3D process simulator, capable of fast geometric structure building as well as detailed physical process modeling. Victory Process is layout-based, meaning the 6T SRAM layout is used as an input to help define the physical structure.
  • The resulting 3D structure is passed to Silvaco’s Clever, a 3D physics-based field solver. The active devices are identified and the parasitic Rs and Cs are extracted with high accuracy, creating a new SRAM netlist which includes the added parasitics.
  • The output netlist is fed into SmartSPICE, where it can be paired with compact models for the active transistors and simulated in SPICE.
  • This produces results at a level that design engineers can analyze.
  • Rinse and repeat via DoE in Silvaco’s Virtual Wafer Fab to understand and optimize the design.

The above diagram shows the layout of the 10nm bitcell. Below is a 3D visualization of the TCAD model of how the bitcell would be built up in the modeled 10nm process.

Using a design-of-experiments feedback approach allows study of how design choices impact cell circuit performance, and enables optimization to understand and minimize the impact of parasitic RCs. Furthermore, due to the direct link between the 3D structure and RC-extracted SPICE simulation, it is possible to analyze the impact of structural variation, due to process or layout, on performance, to better understand margining requirements.
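The rinse-and-repeat step amounts to a parameter sweep. The Python sketch below is purely illustrative: the variable names, ranges, and simulate() stub are invented stand-ins for the Victory Process -> Clever -> SmartSPICE run that Silvaco's Virtual Wafer Fab actually automates.

```python
# Hypothetical sketch of the design-of-experiments (DoE) loop described
# above. Variable names, ranges, and the simulate() stub are invented for
# illustration; the real flow runs Victory Process, Clever, and SmartSPICE
# inside Silvaco's Virtual Wafer Fab.
from itertools import product

fin_heights_nm = [30, 35, 40]          # structural variables to sweep
gate_lengths_nm = [10, 12]

def simulate(fin_h, gate_l):
    """Stand-in for the build/extract/SPICE step; returns a fake delay (ps)."""
    return 5.0 + 0.05 * gate_l - 0.02 * fin_h

# Evaluate every corner of the experiment grid.
results = {}
for fin_h, gate_l in product(fin_heights_nm, gate_lengths_nm):
    results[(fin_h, gate_l)] = simulate(fin_h, gate_l)

best = min(results, key=results.get)   # pick the fastest corner
print("best corner (fin height, gate length):", best)
```

Each simulate() call in the real flow is a full structure build plus field-solver extraction plus SPICE run, which is exactly why doing the sweep in simulation is so much cheaper than fabricating wafer splits.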

In addition to the RC extraction investigation, Silvaco has also collaborated with SEMATECH in other related areas:

  • thermo-mechanical simulation of through-silicon vias (TSVs)
  • detailed 3D TCAD device simulation, modeling the electrical and stress performance of 14nm FinFETs

For full details see Silvaco at TSMC’s OIP on September 17th.